From 79efedf3cd5c1903ffadc3b507aee9bf16f59b8f Mon Sep 17 00:00:00 2001
From: "David B. Kinder"
Date: Mon, 5 Aug 2024 14:41:04 -0700
Subject: [PATCH] publish latest

Signed-off-by: David B. Kinder
---
 404.html | 29 +-
 latest/404.html | 29 +-
 latest/README.html | 231 ++
 latest/README.md | 7 -
 latest/_images/opea_architecture.png | Bin 0 -> 65495 bytes
 latest/_images/opea_workflow.png | Bin 0 -> 73074 bytes
 latest/_sources/README.md.txt | 48 +
 latest/_sources/codeowner.md.txt | 50 +
 .../_sources/community/CODE_OF_CONDUCT.md.txt | 130 ++
 latest/_sources/community/CONTRIBUTING.md.txt | 123 ++
 latest/_sources/community/SECURITY.md.txt | 9 +
 .../community/pull_request_template.md.txt | 25 +
 latest/_sources/community/rfc_template.md.txt | 44 +
 ...g_MicroService_to_implement_ChatQnA.md.txt | 226 ++++
 .../24-05-16-OPEA-001-Overall-Design.md.txt | 93 ++
 .../24-05-24-OPEA-001-Code-Structure.md.txt | 68 ++
 latest/_sources/community/rfcs/README.md.txt | 7 +
 latest/_sources/faq.md.txt | 84 ++
 latest/_sources/framework.md.txt | 857 +++++++++++++
 .../gmc_install/gmc_install.md.txt | 119 ++
 .../k8s_install/k8s_instal_aws_eks.md.txt | 74 ++
 .../k8s_install/k8s_install_kubeadm.md.txt | 411 +++++++
 .../k8s_install/k8s_install_kubespray.md.txt | 277 +++++
 latest/_sources/index.rst.txt | 3 +-
 .../release_notes/release_notes.rst.txt | 14 +
 latest/_sources/release_notes/v0.6.md.txt | 28 +
 latest/_sources/release_notes/v0.7.md.txt | 125 ++
 latest/_sources/release_notes/v0.8.md.txt | 322 +++++
 latest/_sources/roadmap/2024-2025.md.txt | 130 ++
 latest/_sources/roadmap/CICD.md.txt | 29 +
 latest/codeowner.html | 281 +++++
 latest/community/CODE_OF_CONDUCT.html | 299 +++++
 latest/community/CONTRIBUTING.html | 342 ++++++
 latest/community/SECURITY.html | 190 +++
 latest/community/pull_request_template.html | 208 ++++
 latest/community/rfc_template.html | 226 ++++
 ...ing_MicroService_to_implement_ChatQnA.html | 402 ++++++
 .../24-05-16-OPEA-001-Overall-Design.html | 268 ++++
 .../24-05-24-OPEA-001-Code-Structure.html | 244 ++++
 latest/community/rfcs/README.html | 186 +++
 latest/faq.html | 280 +++++
 latest/framework.html | 1075 +++++++++++++++++
 latest/genindex.html | 31 +-
 latest/glossary.html | 29 +-
 .../installation/gmc_install/gmc_install.html | 271 +++++
 .../k8s_install/k8s_instal_aws_eks.html | 248 ++++
 .../k8s_install/k8s_install_kubeadm.html | 580 +++++++++
 .../k8s_install/k8s_install_kubespray.html | 450 +++++++
 latest/index.html | 40 +-
 latest/objects.inv | Bin 345 -> 1015 bytes
 latest/release_notes/release_notes.html | 195 +++
 latest/release_notes/v0.6.html | 225 ++++
 latest/release_notes/v0.7.html | 349 ++++++
 latest/release_notes/v0.8.html | 546 +++++++++
 latest/roadmap/2024-2025.html | 377 ++++++
 latest/roadmap/CICD.html | 223 ++++
 latest/search.html | 31 +-
 latest/searchindex.js | 2 +-
 58 files changed, 11165 insertions(+), 25 deletions(-)
 create mode 100644 latest/README.html
 delete mode 100644 latest/README.md
 create mode 100644 latest/_images/opea_architecture.png
 create mode 100644 latest/_images/opea_workflow.png
 create mode 100644 latest/_sources/README.md.txt
 create mode 100644 latest/_sources/codeowner.md.txt
 create mode 100644 latest/_sources/community/CODE_OF_CONDUCT.md.txt
 create mode 100644 latest/_sources/community/CONTRIBUTING.md.txt
 create mode 100644 latest/_sources/community/SECURITY.md.txt
 create mode 100644 latest/_sources/community/pull_request_template.md.txt
 create mode 100644 latest/_sources/community/rfc_template.md.txt
 create mode 100644 latest/_sources/community/rfcs/24-05-16-GenAIExamples-001-Using_MicroService_to_implement_ChatQnA.md.txt
 create mode 100644 latest/_sources/community/rfcs/24-05-16-OPEA-001-Overall-Design.md.txt
 create mode 100644 latest/_sources/community/rfcs/24-05-24-OPEA-001-Code-Structure.md.txt
 create mode 100644 latest/_sources/community/rfcs/README.md.txt
 create mode 100644 latest/_sources/faq.md.txt
 create mode 100644 latest/_sources/framework.md.txt
 create mode 100644 latest/_sources/guide/installation/gmc_install/gmc_install.md.txt
 create mode 100644 latest/_sources/guide/installation/k8s_install/k8s_instal_aws_eks.md.txt
 create mode 100644 latest/_sources/guide/installation/k8s_install/k8s_install_kubeadm.md.txt
 create mode 100644 latest/_sources/guide/installation/k8s_install/k8s_install_kubespray.md.txt
 create mode 100644 latest/_sources/release_notes/release_notes.rst.txt
 create mode 100644 latest/_sources/release_notes/v0.6.md.txt
 create mode 100644 latest/_sources/release_notes/v0.7.md.txt
 create mode 100644 latest/_sources/release_notes/v0.8.md.txt
 create mode 100644 latest/_sources/roadmap/2024-2025.md.txt
 create mode 100644 latest/_sources/roadmap/CICD.md.txt
 create mode 100644 latest/codeowner.html
 create mode 100644 latest/community/CODE_OF_CONDUCT.html
 create mode 100644 latest/community/CONTRIBUTING.html
 create mode 100644 latest/community/SECURITY.html
 create mode 100644 latest/community/pull_request_template.html
 create mode 100644 latest/community/rfc_template.html
 create mode 100644 latest/community/rfcs/24-05-16-GenAIExamples-001-Using_MicroService_to_implement_ChatQnA.html
 create mode 100644 latest/community/rfcs/24-05-16-OPEA-001-Overall-Design.html
 create mode 100644 latest/community/rfcs/24-05-24-OPEA-001-Code-Structure.html
 create mode 100644 latest/community/rfcs/README.html
 create mode 100644 latest/faq.html
 create mode 100644 latest/framework.html
 create mode 100644 latest/guide/installation/gmc_install/gmc_install.html
 create mode 100644 latest/guide/installation/k8s_install/k8s_instal_aws_eks.html
 create mode 100644 latest/guide/installation/k8s_install/k8s_install_kubeadm.html
 create mode 100644 latest/guide/installation/k8s_install/k8s_install_kubespray.html
 create mode 100644 latest/release_notes/release_notes.html
 create mode 100644 latest/release_notes/v0.6.html
 create mode 100644 latest/release_notes/v0.7.html
 create mode 100644 latest/release_notes/v0.8.html
 create mode 100644 latest/roadmap/2024-2025.html
 create mode 100644 latest/roadmap/CICD.html

diff --git a/404.html b/404.html
index 9771b973c..57d7fe519 100644
--- a/404.html
+++ b/404.html
@@ -80,6 +80,31 @@
@@ -143,8 +168,8 @@

 © Copyright 2024-2024 OPEA™, a Series of LF Projects, LLC.
-
-Published on Aug 02, 2024.
+
+Published on Aug 05, 2024.

diff --git a/latest/404.html b/latest/404.html
index 31e00cc1d..dd3521b10 100644
--- a/latest/404.html
+++ b/latest/404.html
@@ -79,6 +79,31 @@
@@ -142,8 +167,8 @@

 © Copyright 2024-2024 OPEA™, a Series of LF Projects, LLC.
-
-Published on Aug 02, 2024.
+
+Published on Aug 05, 2024.

diff --git a/latest/README.html b/latest/README.html
new file mode 100644
index 000000000..39391953f
--- /dev/null
+++ b/latest/README.html
@@ -0,0 +1,231 @@

OPEA Project — OPEA™ 0.8 documentation

OPEA Project

Mission: Create an open platform project that enables the creation of open, multi-provider, robust, and composable GenAI solutions that harness the best innovation across the ecosystem.

OPEA sits within the Linux Foundation AI & Data Organization:

The OPEA platform includes:

  • Detailed framework of composable building blocks for state-of-the-art generative AI systems including LLMs, data stores, and prompt engines
  • Architectural blueprints of retrieval-augmented generative AI component stack structure, and end-to-end workflows
  • A four-step assessment for grading generative AI systems around performance, features, trustworthiness, and enterprise-grade readiness

Check out the LF AI & Data Press Release and Intel’s blog post.

Technical Steering Committee

Member companies at launch:

  • Anyscale
  • Cloudera
  • Datastax
  • Domino Data Lab
  • Hugging Face
  • Intel
  • KX
  • MariaDB Foundation
  • MinIO
  • Qdrant
  • Red Hat
  • SAS
  • VMware by Broadcom
  • Yellowbrick Data
  • Zilliz
© Copyright 2024-2024 OPEA™, a Series of LF Projects, LLC.

Published on Aug 05, 2024.
\ No newline at end of file
diff --git a/latest/README.md b/latest/README.md
deleted file mode 100644
index bb88323a9..000000000
--- a/latest/README.md
+++ /dev/null
@@ -1,7 +0,0 @@
-# opea-project.github.io
-
-This is the OPEA Project Documentation Publishing site for GitHub Pages.
-Content changes are not made directly in this repo. Instead, edit content in
-the opea-project/docs and other repos, re-generate the HTML with Sphinx (make
-html), and with the right permissions, push the updated content here for
-publishing (make publish).
diff --git a/latest/_images/opea_architecture.png b/latest/_images/opea_architecture.png
new file mode 100644
index 0000000000000000000000000000000000000000..2b13a374c09856a4741e4e8f55ead86712557ab7
GIT binary patch
literal 65495
zcb?W`9>n=G<=^5RdT)>LzudiNTu140cZL&MB`P;PnK&Z-DLs+)skUPPNAy7v4G{{AFqOqwiQ=ba? z@^$Y}nV!>}dUF&gphNR0C$Jfocp1# zMN|{U;+z|IJi!dpkaB%NR^!tLPLnU(7nhQy7zk4NF&`=|U@%wpINX37MGbEnl7DS$ zzo^Jae3+U=fX?U7fM0a(K0IMqP#|F&jOEkhlfP-%PwIk z9?rfkGj2<~VTZ#4N~X`LnlE%7N7|W?S%FK#I3?2Yh>(b ze{Rc%@Vfc&r9EMbn#Su~JA|$4I()m`dJI`TTW%b&d)Y3xv?$qGo zM>nM$Pjw^r(1-9DKN~OKGpz?m%rIg+#Tb0vW0He;*p+gHC+%c{BpK&{NQU=rv#FnT zE328Q`gVrBpGRFLM@nOac4sCK$X6RpP0}I-dsIXJh+>+-9UZ%o2!>VQ(VveZ6B?bR zfO2W#G5}@lwy2>%x7S5+86dLc;Z|FnvJ}HE2QK;C7=7zJddcoF+UvdGvUUUjfe#l; zt4N1S1lbuQ_l_S10lZa(%ZnyQ^dmjs*y+v%$DK_@gH;pk9#PXb6!j73Ip{D0BJ|KF z<0sN}d4}bh8Ie@Y!!^oq-hx2YFsN`7mrc#Q-#YG>c&0s0lNsz#)K7eeZ}J_AZAF$Z z^wCU1_ZyE|kuAz##p4n$X)eBojs!m>9GOolAAF6h;BxO%lzw@{q;Hdy3^mpC;7Rl2i ze_>&v(_P42RMivXh?~L)Ht~SiKvhTKY6Ic%`+-L82gCi#@*+J(PVmKBO-pxfu2MYM z1V0^}{oc97No0$?c6InlU&{F9r-K^fdoR2k$CFf)Kdv1G&^`u=$RI3WdFL+ zt`FHvJ1j(1%pX(?g#-yjZo-aDPd)n6f-!oWG#k+-@cF! zZ0o)62b3(;ot(g*F+eaBd~Y%o`xHK{?)R*z+KMTvi7H{*Ev>AZ zL=)ZNSvETP^DgIJ)Ef|Qe|sCdXgS|X$b9>=J`i0!;?3```x*IBgFmr$J~tF~y}Z72 z>d}5q;_pAyu=k$^h=B=rult#*-aqtY;{lGnTZ=K?=&_&G+p!nxZnz%u6z-oH$z-n{ zWanRNK4T>E4=Rz4jKS_*w{GmQ`JW&<7{!fH%3H_9JLVtk`R_yg_mIeG%4HKz2`Ebv zOqNZ))$xN(nDe@6<)&SvG-TIou2y_Kp|!fQ2z1dk^c)5o&1+!VlGBu-8nLy< zJ&^F%Q1T8E)SzJ(Z`Ys`=7QHAnP4HX=29=Ij-MLPlryt(+MSbR@Nw}z{!yy#SPpFC z*(IwNpw!KuCm36D!KJUF@isGvAi*ouxQG|9$QbB`BNd;E?t*VNZ0N%kT-A+%ZQNip zd1nWx3EKwC;*bOXdy5ujklL$rT{_6n4IH`O1lL@K=RDYkNzGn;(zWy`4Sw=#PeGS9 z!$cvLL2ugjT>hKJb?lb538Au`1)WpjP76@FFb^PP{nf@2jFG@UCMFQ{Jc@M~X^YIQ zu@|EU0mQVF@<*z_#TL-PBL(lCjzPV?b1c^l5@a;jLg3XQG~Ib{IiZjxAHtDnv3Xod zofe_~WqE$6pvxIpeSM{CW8rHpelXN|oGx$FElVQh^dP2TsJ0IAAwJL2KRSNsr8_2C zr6%f^XOQ+8(NKNvIqAD;;d>++C}oD8`T3-s%|tS(UM`}FJSXLtG}a2o^wjrOAp2h* zPb(NY6zs~dq{)ZXaL3c~kLzF88IqV=K+gY_tfPUOt=cH{>bn+4&c9<;1_%~t!*gLw z=rMCXM|pICpc~5l=A;{1&H8o&Ep7e}odWxw=-#81#+YJ@dl3FE!k4OzZmbmzZ59)s)rW2xk{^wI#-C5z`@$7c>~h*EPm^#G7}bo|UjF*oPrI1gO%PFT zx93hJD1P*Zc;PQ;y#xHb6V#k*f(VGg_^M)dudBMCoJ>S5xkm>>a8KT-JNh_Fc&^Le z>QzIf;dbntumm|}yz7WRF|) 
zGkjl~V*CUwWKA)k=FPuoNVTOoh7e07#!BdF|+ZI|b6VsE?P8MM~1U)le(9%&$LL z!hED0Wmk>*dk6H)c0Wt)H)H36m*}(9C+cD89DI%fCB_j|6BE=x^N8iA$i8!86LoO% zE|X2!=xZ(WK^<;~FH{@})kVAcxzK+?`0~ks9XERW3!>woC?5gC^V8#1QcLsWNz5zJ zeXeN;n{ISaik{SA)U_7PK8!sN(6)t=E`yQr;$@kl&{;!C=?*6fc5)%0;w!ZX!mE8d z(C0Al58G zvXV{OBg7|CnlfY%N@feryMV@xJ=io*Am57tM$1BDFK_yO(1dnEWbMk~a~qb&|F5K{|4}x)9}C^E;gFP`;CeoN^>RBg zYCXl4d+Yk;tY_T|S=^4_a_sK4WjPv%%^RM_ucej^AA|lk9@1dy4-^Ko{mh?hGF!l! z;CT`cu;H@QvfzgQkA6;bGq-Wg6`!XRo$;P}bH$Tp=b+K~JOs6E@q{w1x2*Egr{~WC zWlKceK58{?3YEzWoUn5U;j<7$nRu>$M0L}&wAjZ}ZyT(eDDS|xmJY$-TeL^hu0RXg z22!i!9d&{J!3OJSO85F_H9C)IEQv3g;KJ}JW=b^DkY#O2ZLB2c%X}4L15<;uZ}Oe@ z)^~`n$CY}fK((;37kl>ov_$XOI8UKdf4t(Wcw8wq{UXNGmCwvCQ|!uw{}i|#;d+jb zWAKmPsBW4nTt1L7{)K)*nfU#qgL-@e1>5faPnS(Z&wpVs`s$@#vDE8YSY2Ix7Oi2m zeh{1m*AXgiUNU1$S!*NfFnwI(*|1>K7;FGnKwhw82&f=N>~gFBxS101Lh$DM{S`wT z<+-09Sj+_8jzh+=N zsPH-|y`B!(4h4T|qX^(#q%>i%fcpX}rLDCgWwJZd9s7Qs&88GY$=zVU7PCi*&;i3S zlkPsMn?cxYW<%JF+rqviG_@qyG2!BWMo`AZsYmBWyQnEzg$`+J7QAP0H`s9v%L)9_ zW^!h*BIyf0dY*rb+-U*qC(XBIv!s#Y_47rS1_(t7>CtK>?{#` z5Nuf`2f7T-EaAuk?o!s3OTA$W+v}bwBIaW(!`pAs|B8m)ACjddM#4~Twg~{)RP~e| zQK@&wz!Sq?k&$8}b?6cI>C#C%MDG5guP!u(BNrmer2E>Tf80TIM z^lx?V$ij|1*oq`X@p)P=M$(2m3K&E0PiY3Xjq`CDBtUCTM#28suz=E}YpoqM=_exn zJUklh1Bp_!j$2(39oeiCvN~h<9h?D+xNM+gGtFV8FRD?)k8J0i%!QU#K&vxrey@6) zbfE6mFf;)!WiBiZ#ACVbl-;a_xWs|R(wA!fF8E;xt&KWokpGa z(ms@Zrs{HHmq)l`%lMeGSeQhjSeV@CJ==)o7lmEEiR)oW48vh8ZH;eHa`elds24zR zy8v1WbvcO3`nN#NiU2*TTg_-<14^bw$?=qCdx*Xsf+IZ{YMo_DG-K zGStF5sZ#(ktTHnHSU-UkfC0I^_S;O2ja0aOO!?L1N|Vq{DV8Rk1n2&!#1b$ zJ}wL#X+hYCG{_p~YEJF)TLwvPEj&P%P&ho&e4DcWddIvb-}O1QY)eb}aaW?IDIz(g zO;eAB*^PP$Nh5Wh{15h7CjFp5^iXus?}Fr%ngJ0a?fr?o5`1$=ip@L42{xj2g~edu zdN3V*AdW+e-tNt7u4+57El&iXH$odr0Ae|#9*k8mY(Do4MBb-}(*`QD_DiD#reEbx zKFx(`J1yk?LT3|vk)Bj=;7rnk9dZlTg2DHqWRpt^W3KRS^UWF#7#p$ng&e%&?{j1z z28;HMIL11SNpmV>PY!N{Eg4&Ax*PL01{;*`^jpU-b`%9@Vbz{VGrBl!ju@;`Oag%Z zkd7o`)cjh-_YcHjr*(2?D0D0glCsg2#y0#>x|6Ps9otgG;%^Xx*B4{4Td*)TYQb|_ zLWECNR0m~~k>sYW0=&|PM4iEoNrPr( 
zQDCUdGfn=EAz6o`5I`X~Jk>%uTzVxjeiDOn;iV(rvgh^yh{Bd!Ff6ShW}_4g?KUhJ z_?x-8@hPXU?)d%ew%h>k#^Hcnz+SJqM}Zsf$78+DX7F8iLt__va|wpXw-%)a?3?GodMzugd6sc}J0BtcO-K+v(gBMxbN=S!YTgL#=et*U; znW0iC3g92_`~4=i28haG;|T|?AfW+mFdc(=X*I@@0Re(#cQpC>xr;EGqJgnw+eYa* z_ej_diGymUBQWDDNnD=b+i3Z|2zIhbma1WNJBT?iU=){eIzC=Vn#>#_S=wnb zO`Z9oAHY!kLBz$07A#D=B~U18U$jZ5nB%QYV`m`?5fSmQ`}sfRj}W>Ogqjzd0Q3;{ z&=xY`aJcH7p8_Ky9t)9CQ+LTTrSVc>sJ;1-KEdO|P?S~pUoVmX1e<_V{P3S(|F}=m zL0RH;F$|UFAO4GoAVrkRQUjq_qgLRWwAeize-!Kh7ABTL%&8xz6~tka6~;>kJKaX# zpo|t6ii{6No3cZ0vgiwKiP$AhZ;>=e91LtTP{A4+C0{Gz`mMzkJ@pfN+?K8+-f9Uv zfYPM{kbDl>i$-T2ApG)(NNmT#Pp+vDy;j0WwMLGH7JUbYZbnG`Xpr4FikdX z;y3k!6q`}f)%}riLy~NyA+aby7oo5=&u)5)ySBAci00qGati|x>0`5HgW+?ku-xVd zjBZ@IykSAzX;k*SXU+9o`2q)KDHwn2MNkNv|3S>3F89V1iejKzGwljb;;>fJZeR+` z0MGrO)=nBRFNShDreEK2mLiiN7o*v>bt62KA^80N~_3vf>Qe^(yApb8Iq;FGgEg@pzQR#pH z-uzkDv&XDL^Zy%OWAix&qgZ2E>m*^y7hzEr%gQyKfE;o*1Osn+z487F zC1Ug;c#V0CS55HN4D*LhvGgJ5P*>%!@@00?T^8qrESCsnm^wqOBxTIyuu@LL&PY~I zaV;u@)>dEt!<=69ebDd|k)&=mL z!Jl4eJ%F06W4H$h;sMry1D=9Q60{Ut!?#T)jiRy02u9=~#>x_TNc_I)QcA9b=B%ub zq)6%D4RnuYT=)v=0iH>>Htk7=MaWCsHZW4{gb%W;u-?nAv+snm1o3^p(4pW=bE$ev zQRd{^ms6T?V8!R4k!`s@Ph|xke%y3yej#dF@@37;M9OgpHBpCH%}&CyjIcbb;p@G) zT0Lt}fKHCxrz4mxkD|OkH}VSBIc#HRU{TY2jNMJ3C8dKeOqSXL%tsCZV;8J`smlG6 zZn>M^el>TPbF|gcXlS;suoAXiO4Ev$UoaxOM0R9&O@q*yVCp$Y{w*b3^~^z*_u^zE z5RC0a0wK);_$xX5d>)aGbq}|UwbE=5qVwhXs$o9sAymET4zAkb;NK;M-h#c$W+k7A zdqQmJ_E+NM;bft*vVw4YjgNQY@Jr0-Lr2F&Czs)V+fs)FtV2(2lxH`q8P{q|wvTLk zXqtZ87WDx9a|NrT)2XX5WGqLsG@n6dY&F8BMknnStae-4V1e)Jh9|-+F;cZo;cDsd z`@c>OCQ^6j@N{-c^9hIGpjjW{u%R`r&+7W3rX}?6lm}DwA&os!Xsp_wa>>$snJ33%AqRv9ZeYFYwquQ`jS=@`>gm$16a(`i`8}vh1+20MC!stl18%QufcQM; zZG*R}#LZ+kz&LG7IrsntV*x^T_6=@8K}jGzhXZDPHaxFtH_#K+n8i+xl6IYqc0xKj z_+frcx~J;C_Ee0BfZGy79YqEv7HtLbNkT#71(|w1L?~?5>p~<9CnoIL+R#vl0uR*r zOgGTgJbk7t7CZEVhc3pLEx(&H2?`dAesKaqt(QN!|42@jV#4%(eFe9L)*G;lrT_uF z*$?lBXirb-NT0mduBmMgiRxk)yRRIFL>=2^D0I+CdKevpz4oK#zjzxXexh!?XnC~! 
zB=VF4WcX=>3<>*y-BU`RK5@Y0q8@R@=4tKBR#r_jU=jMRipKZ=bsdQ=@C6WYNivR1 zPq5N9&pm~(!s|gqjadxlf9z9702anEfbZ`Y5Zsc(Tpl}t2O6{F=j-0bCJLIDK7^D) z>XpufkazxYPYnIWGZ|EJY<;rbC7)^U=3x+wQ(YZBh3WjRm13Oc+~YaiIp4g8H+Of6j>(lOHC<_(GW zh46dk@_W7tJFeak*o;uh6nt+Q=R!uy+OIxzKLmeTH6!phcC@Vn0gdME*-(_E?+S+i zK-=DAd)@${df?DmoIFPoMGq&d-w!zwl-w@Z_v=(=4bV)4&5AYWdjy12)9T2uAXY?* zoS)|qsHo6NQ#Sz{J4Tw^XW5z`MEup4?!mqn{Th+X2*>?-8-|L|vf#j~NrI$_w3QPs zkfQ53RaT;YW|`}6GSi9@9kfKv!LEi|*|&$l2c)rkO6XV30I3hL&GqL!$Yy^#Ob8-b zKgt>Feu#hvuLqWWsk{A9qFWLy8u+DrSCWTdB7DG4mfKFUK%c!pp6`R5%;ISBRXXUz zzO5}jLI)+#pSJyR)+5|wDC#2kj~)IFr3pLzmQ`@IgMqaQD`AD`Du8xiEOTapb{H{1 z4;!JfP-tVCrye&D4NgbD4bld9bt#XpkqqO!9-kjp*me(g^2w0cKIywD3fZ)hfGyAMZbZt#e%6%X>fc z*7iB;{5Y9QmaNsDDt-W+pe_s9i^|n4K;$>4-2h5nbpy*VL4hV*+!%!~o zk(w`>7vy6agN2gmWq55Td%d1o7^-LPt^e@AZ!)9kIpSM=NU>apg%J69ZcBgPeUN#9 zepAdb$V)Ma4uqe_PbNT;QeYsz7!wc6M0EBheQ&Cl@OHYLSn)Kb6XI!1v5?T*Q6ack zF(rk1pWIw8kKIxL9tkiB%es zapJN=0#rgDXT#KA3^n38d@pUaD|wP64PGBZ9BEO0-u1_L5H=>Pl(T^M951~XlF8f= z=@mHiM_pQKguQOf(nPL#u*Ik1m1VAra7{Q64{Ov}`inUdxnY zx4}4$%Zki_9eaf?YNnf}r3U%X84j~DtH*sKS_yW0 zv#Ju6@VJ^v)UP#42sZQBQ!(B(!k3v@M_F^>!7vf zbJotZ-ki$$^{cUX#hE*lKft>%O@CSNBCLS3F<)i<-fhF1^vq662lWjf4_9h%HRf;n iKfF{FClMG6%s<-qr#uZhztX<8n-O5HSE76R>VE(@1>w#B literal 0 HcmV?d00001 diff --git a/latest/_images/opea_workflow.png b/latest/_images/opea_workflow.png new file mode 100644 index 0000000000000000000000000000000000000000..8cd1106db1ed4c434eaa997d9496b30be78b9635 GIT binary patch literal 73074 zcmZ6y1yox>+bvuQ6nA%*;_gx$iqk@I4btN7?(Pz#Xps^ixVr|oLU1b{yf~Cg-}k%! 
zz4u@1WM!RMnMr0cv!DI!XP;;dH3bY*Qq(tZ-e4#x%4)rN^A`B#4ct%UcdsoUDon;- z-{4%e6r|r&Pm&+L-XK^@sY<9kK&}L=lbRiX8*qz+^}Og;LV$tA4;-P zpS_G>c}RxZ`_C^*D-DO;JGZte0*4L(_SvT(0o7TnvjGDB(@sRBm=Sr+5BS(jG1Q+l zm&gEgzGt1cU2*N4J6D0<`{m+kSXeYWHav~e|F-do3;QABn)U+5R3~I>!KXImK57Z4>u_;#=rYxxQR;+8OVd9x$SWumkWxfQ&qh{P z8>6EH2%_$uBS+xlx_P=w{p?UvkAyafh;mbtJ;IYwP?~F+forhfLutM2*4`b@DPgll zgzwJ&rF6TE4nO3Mm|0v4m(Ya&gGH&JRsY!t^RvAE);?#)+sGt8v#uIO9NUcmk~0{F zoQW+r?G;&=%VbbFnUVOj1}26KKM!hn1Jzon#fnzl;bb)>bG1NFf_XtRno;MoMn#UB zg0|Mpm4{R*t`#?5-#XF{baB{z5^I`WyTxuUO$F`=|3wswBPlC!a&cKDUMKu4w=)$* zNO?Qn^Qq&q9riRX}&KzgK+Jb5-jn@ z;}(3COZ;3!`6uRqv088tN4W&sy|JLMmV$Om=z$63>BtjA=|OpXT&8n&ER#|poCFBK zP(3WG9qPd^L}|r-KQ#KeF|*b$AMG8qp+$t;Ny zG9I`f^k%cF8$xm+cS3u3RD2_eh!e6!v&vfW%k04<_ss3I2?x9NQg|{ZM>j=zZ0{|u z4Ex2%^{wKl#H-M2DXMdUg~A0^sI3_x-HGAap4VLyknQ6Szm21ZBYVTjsbfFy6*s4fE9dS9P(hi^d9nw#Ru7 zG0jY6VCuY+2|P1)YfdwT?B$K%R?FV@8~YN`&V4j(vt?2%6S$H#lxQOwUe&_!9HrE; zu0giU0~$|n><5jCnSfTRBMGlJRxD$moiM7A@cE~6%HpR}E%@~7#A-(b!tCh~Sk)P%J`*CI{ED0w zUeFjFZzUniIO=KF9kB8-v+lsBkqhlB8e_r$FS6sEDrEFaLz6U;ElV>HX&e zW|XjtT=#NvVmW1J@)Wenu!V_9N*s}FTTOk<_;t9(9wp0z<8ShMUrDk>9>eU7#W@%) zd7LGWD#tW*G(!{C14gpMZkqTD8XCwa*8J^^V|u%c-P~fQm-G{7FIn8`uQCl+Lwkx> zE|$~;`RaV!5xEuFt1VYDrFfvRZKzEXmmlXB=8Jx`6Rp|X_jxO_QGQVOW%ZGT$G-Nc zw>XuOF(MEZ&?)Og+5PdO-2N1R%NkzZNw_|YzUeKD)M>;Xg{3Z~r_5QxLo)E3SI%O6 zrX=$S?U^e z+#@`2VCg$5R{Qpqte2tRzXdxN^-6{*X_|*IXm=mxG(JS8Cg0?QPSp+&n@tYNdvs6s z-lZis%H`5=MW$i262j9M^W#UO>NtNdaNyA2WN%Qrql=Adj>Ro?h5yaG1n)s7(&@+j zvN+I?=S#kt)tJwg-ydWik%;LWcS1Bo2>Kvt8LN%TpP^Aznnisif_SL8JF46>NY)UJ zw;xdjbC@t4@%td=)fg;n168st6l!l8QS#){a&Wx4Gqq^CvQP!B>qpO2Ucq*QpeUrw zXZ8c5O1TOp_KAY&ZXz{Tc|$VznCy_prB|a7ofPdpFT%S_Q4MsPJilF8nrg8>_Q-89 z97?dqb|zBm85lvPq&Ymgji9C^=ab`!Y)7s2TL>s|X9>@5=MLap=WUSD9ztYml)#$# z0FP<^fM2R}GrdxWd3>J=iJw$W^YEx0XXzk=O_#^>{lW(f!Fr z3ZEc+JO19l%-@Mq0Aw6~H8`P|`796ScwJ}_o`HfJy z?A2f>4TE7?0wS|U1;zV8pcI$1#pYrLNlc_0Dk)7>yzHaO89F%q7@njrdXZ1a3dA|f zGAA7f7d-{SDSBa+`G?zfDk3pxSDW9OFntmTu5;2 zBvLIe6tU{+%jKERt|#NFqyk(lwIKR>Tc 
z_xpeY<}QRK9I-i(upyq4*ZSKXDz=CW$6%y*yB_YI(Y4veQ4u?|C%lrhIQ#Ax<7Loz zL%Pd4jQpN_dA)*IYlQJ{o*4cACc-#MqUM}JeH!XSNNV>*dSV_+P48KW+vCxlC4^9C zNR+Ufe_J#@)8u;A4s-A-(|@_H)9>VJY#eq@7X`CZ}GyeyajZ>3iG%szYu+MH8$hJ*ex_TC0Y zWox$CkUq5K_p#B(A+Gnqtz|gVw4f7VKczCPZao_xda{HV`dp^DHXGB^^0`5AKJy0w zZJ0My`pX7-A+S+e)51e&-198x@B+=|aG%2Y^GITQ&{|uYiefQW&m_TM>wCrWE@LFi zs7tZnteD!m^<{jo{lrF7odIu)=5;mQb_HoMzL=XjFJ02(-tkEc{^w#?R=sJr0+nc$ zT0!}}fX_1CfMDE^A;%fV;5L@E+3Argw5T32(ui(aqCi?XhIcQky|}si-9kGdb+t|m z4FT)adC1>BdnfU45rG}u#4GO2($-^y-j~*j8*#@+%`; zpNnu|d7WrV*Ze{)=M3X|jfg8v2c}~S_Cmdrg{VM$mvC}trdzY#1Z0m0O=;RsA7)a5 zTh9*YYQ2NlspJD|IVJQh;!}!{#~+>$NVR2%NkK!wtt&X%EYjQFt|0Nep2?wpMWM!b zd;7Qru$rOuYu5q`n|S}db~BF~o`=C*p+8G@^z$rBi!Tmd9QomBU-Xk*oH-Z1n;)NJ z=CCse0l;J;u-q<>-WBNXCfrgI`stpl6Ju-m5&)xNA4pr?0k`iN7z5wSphCUGT}kSE zmzubls)?eoGqv@FzVBcfxo9=RJe)~XAVp=HCYb)p;j8lS+jhKzt}f3}4J41#W*KwV zXB?b?{wOOWj*UqC!q6?|&mV>9T?n#MrjS#X1a*;~2^_c59tgAglq11%i+B*YGH^hq z=ukVu&(9y-2$6bhP8s>YXM>B%d)6jGh2bcXKbwj7>OE;&tI@s_x;jypXm#Fj{hqBI z8|Jy`%8bYznkDeC+1jX4_^ffmE%jvnie$if?$J<8)6yl9hD=$rpMu!@Fp^Ee!C&boo$C^-uN&AhmhG$WViJw>|BtE9JIBX z%FG|*Fchn7`%prc&LL-JDZ+6h^~&H;)@1rrFB@t|hrs~*x%yPd-9vXK4oXq#oTjG{ zE1eWJ7FISL5K;Fs8l|YFU0q`w0dzR}b)3sNTygcyTsWus1SYF)HnHhr^_t4>Hkz~Q zp4#U&Xmt%%eWw7jYE49oIMqXbH(JfS55UW&zlR@nZ3*t~;Of&i2wMomdIj5rN!~z> z9$mm8JUAJJ&7LB816tX|^a<5jonjJtV79{1bRxbG#d;!bzwCW*G9Ow|@!qX59w&DO zRp~3SM7FamJ1`w8Xd6470^GC=(N4Gs0Sx6EqI{}BP|S7#`!Be_mfMN;eAzZBvtt5C zL=)R{OrNcc$m{C+E?@p4?$3!P!R#Xln9K`^b<7HAkRn!^%*%f8V`kY6ZEG_!pJ`Fg zEfS+GILK>xkT&d-_LmjF*EP{Xk59vrYe_tg?IYQ{vl(pr%{r%?xE)n^=yHinTag+8 zxL8>K(MVEd>F86j;P|FPN9#n~V%}9_Fmn*1iFc13uW&&>1;amF;biGV#Ffb6FE*ix zw|k#m_I?)tlnv3n=SPY^>FhffyWVXrH$8T5UMQ;m0S_;J!dz)1FG)pdQ$Xat%Wr2A zs?t?*V8^21A}uz1H8%Zg__t1HV3&PuIlT8FilXAAv`nPpoqxmUJvtf*6b1|m>8X=m zKqt3 zc0Htoq2VF(vH(KuqKYO|d!y%rZKx><(LOZ5O=CO_W(l-;;z}cO0>?wexXw7nK?6%g>>z z1vIot{#_)rZeYkrW}?70vU}Inl6FzcxZ9!9IdOq(p=rE!lX+|+9#OL-uyOlKM~~5m zrzx+rND;=v6V~ehp9MUNj6ZQsOuP-im zx;8E>ejkrI-U06(7BlL1S{+ufY!!LaaM?;A4&Sm0VJ&> 
zFH+|cJH0&iTOS^977$MV$+d+QLQX;=WfmZD>Ego2Bdn)aUO{?XJCV%#`8<9+OkFlY zen^xL1BhtRALUeye?u+w1EcWk32A21yvG2~3_YcqBF}Jtm($hQ;!#G+A^Bg|^8~pL z{uDu4mCunbj?%C4%4#EkkchB|np2&wU#`CCVL3_4kO#h~ek)K|na(vj+R9@F3RcMd zT|~)fV2`Zq*d6OeEoLEx$DXeug6%P84}L!P`C`KK)AiVCCT@%l;-^akDbY9;drisv z1v!9&gFnjIIH{ENP+~5bs+s7!u^GbU@Hlh0?wQ$H+RSq9nK@YGHKpJ_#-M55{dn-q ze53;|Aty}8nEq+3>+zBZwC{%F_C}7L>J;Gw;pM;la4)63)#vHwZO2ZYd6$2{lbgt* z(375la=hjs{Yf7?cU5GouGS4Kf^n|??SlV0v(h`bSO=Y`hNy(tV2wIl0n8E+y;UV@Y^wRzLZTE=2y6NW!K z%$&PP#_?xxAVcs~z5JR{mG=5Elo-QZ_W!ZEE-A>kf|8~P+m~lF+v_T)>19+T%obX1 zc6ec6g<@Q5+&~%|KA?>D#)Yr$YBUK45F0OVfKB0x=e%ydf0r_Cxd7&(>IENI-9l5BkYggwG!kz7KUM`pD@IWUvof_TvM8bo9i3Uoz-V12gASIax zaW72^Zj;XA#ZrvDZmS!Uz!_@LF_0Rz?hwxDv9wd7PA*Vo7@D|u8rB)eAJW%vdW8S> zwq8KDY4|n#W?`qOpzcoo6FL2gQ}hgz-ubSnJ9D>^PPqRnud1-5^~Ofa2#C72UD%Et zRIW8pC#u|==(SR2q7VPxQzUmLWxex`4cLsI5gD44<4uBey?0MT}grKo;^iJln)956wt9za^cLi?y zqC=l~D0VIvZ;x;OX|RGh@6*nhGsjO#V`EBPTbFmI#%j9e->??zJ#@Qn@s$|4Y3PCY zqj{09IC>|~P~RYg_n6rowYZTJtjD|GcljAXE7b0MurywgvpW3v?_gH@*;`VyZLoPe zO=^V9o%!xtSxcQFu1T^LL5|2BO;$ET`?&KKUFhL>*mK2>Zi5+<^e5$4r8rTTOlJqw zjMwzF#;iyyw8)F|fDITQE3wW6Y$uBT^0wA*NmPjHOGwBFuP*lsc2WP7>4}y_*icxn z+k{%nJws7Ri)l#y)|fXJ120h|)@i@B&&~8YD7F(R(KI}n|BsbjaJXw9U&=Ts%uoQH z@b5{~M~>cJ8IP>1dp032rVo#w$P7n5Z9S3DfyA6B|4O0gcKgt+Ki={BJ=LNaa^BT# z91Dl(TyOS>_!Co2N!ck$PExw`VfqJ>^EVG;?l8*|qW3V-6Oswq^&n9N#54;~hCT{g zgP^Wy^rH0F_vftP0R<|Qr zPwJb+wo=xi+G=J~B3L|ogV!A6h}N^hf7$Ao{mO0>I@^m<|HGc7rIC;gA_j3UV=54d z5}&Nc{V5Vj1BCZWTT^y$azk>Pi7ul5PV?%b5fZF19-L88Kg$7y!5%%I!pK!GWA zPb-?aaV9)7b-ueQ2$`Q5g3DKtr6;6iV@OxpW7N%6dc{q+K3vo+!0Dh$YOIrpL&Uzx zJu;Giv02{{EKeY1DPaWWAOr=P-ejbnD;>+&dA#?qGkg=N`~~d1?vOi19yE~q&p`U1M&xfW3M8(7wR||z=X@;*s>y^lqJ@0q2D&XDXX|c7Z zkz9*^K@YOfsA}f2x=*kT#ivYxK={cv@&T~2ObyfxA5>|l#zc+Bf3fx~_u>1O@u_1>3MPM zzi>&q&Z~6j0yM#Quj$laX8w&JG3qHGx7~@-b&qX8 zE3Z?xbpaPiO+;$HfcL3yXNd^tE`-w?c8eLIM*IIp{*@HHtk|o_>sqT{EFder<}GUJ z{toAb6*O?)WQH_kD7)M z7Wn`d1Vb1+Hi2nm;~0^Qf&p1&=49xu9+d|OYg_!4Cd7Q~HO3S7>oN^bYV=V9uQX#_ z5hYA;&NU9L`2M^)M%%i646bXp(FwYRV%F7YsXuKl|H=x4V&aj|RW 
zx5n30ts&49ck6&|i}&x}%V^6D>W(UF7;4bR*+Vi32ndTCxTx-* z{;Eo>TT0P65-o00Mb|29tkPGIJ6grBw_=*Z?a8qaR~5wFVa0 znN1%#a3ff@uljuK+CnH4rY}^XZD+_KZ{eM0{n&5FZ>2RgFESH+i(Ui$dwQ~76glK> zEn0du8Z4=c@WlxtuG(=qMZ3w&1vK0tNba&l`+kaWyiCfy6Ym;*swA1OEKTnLzU=$r zvKe7^h~&=s8q9Zzz>lV4uHa`*ol01>YT zI*aKy#f+q%lUmq#NC}{e1y|vZyR!F#4SBY@-ls+X5AVJ#J9S!`9rz8fEG+%`@!26x zsnij`=$B>p)x1MyE=fjB%>$-&=$IayG}O{n)9}9Qmmj-E5QnXM=7k-$F7xUTLH% ztwmp!iDVQ-mp>G?hLndUm?%(Le7q7LBA!;%CU9} zEO39!3LCw6NY=4AVe88&`=+tB@S@RVlk$q^o=m{v{tqT%)NLT(VLZY>>yaS521a60 zi(DKnhN@I5v1+HwbvxXA?5M^9F#NKbd_vh;;G0kE{29Up_@dw=BUP!QWI3UJ&Zol0 zbUB4QcD5BQU6L`cB|sfNhJL%NGk2M45@pKx%Gh;EtDsi!b7r~;nKEPc5Y0Epsu4PS zr+LTz=g^4B$Qrq;oD7w(>iiwBKip^iSlrf-cV5F^Hq{)gj%rkU_PlwWY4`LuLfh`< z?);kBN$HD^aJPL89r@eh(fN znL;-Yx$cx*ouXtCt@OZDh>=pM8A^i(!Hzv+cH@bEpRdG>M5#rW5+sd}aWnQyV8n*_ z3*+8oe&W5Wm3gEQJ$|1S!N*Vi&Nm<=9+u_slv#X}62i|kk~HYw17>Ve4d-US{bU!h zW%)y~Cc1^zULdwbON@rU+-QijDdW*Cz7Uub{IXiT&{yLYo12KrjW}*d948Ttn#^nV z+i;x?AZ%dZ-9nZ6CX+S9@IwM$nBDombJ}EZ?I(c6I_OZHtX%;=ku?0NQ{-VH+4?@7 z3kSM{b+beGP&2l)BQtDMAjMzeuqNk=x>7fkMK1zM4Z4IO zvf~g6ufMGuJNuG>q=9$)OvtQ<0~gmXu_j`-EXGe)XXgjE0a2D#f3d5llvuz!SBS_e z>V21q61_iYlfmhJx2n@VyJ+~=$pE#1=ge^qB0}qe)NpmrTwsgw{$g5l?WM!Dz}=g-c^c;aYnctsGUthwb2!u3Ov2EUZI9Mv}KW0!sanG%Rp_N_r+k!5qv z>&b4|Qv@{0(}Z?fHGuiT)pZ?tlx$;6S7Nf9me#`PuePj26idB7r#{%6B^1tR`uVk` z(O+TqG0Y_@?eQdbt)z^vNohSj`at+JBBS?2)%eSgO^eN|qF|e{-6X-|vm`E`Q$_z*xM4}59PXesxJ~9JUFh$kq=1=Lyf!C-yoPlHt3HA!s z@CFOLvYNj!|J2*N9>QEg8q(16j-ee<=~tG9EM%B{ZgXV1d}rDq!z-MSP=kj^Xq#*$ra;f{SU2aH(<-f%r&LcTENhk7#v@Qcli@9I^n+@m4w7G{`!~&WyVg=g0pQ07*=nEu*m5b<9_*xl+**@ugF&V zO0t5Ks$`s@%iAQ@suRb$O92*gCr2vENfJx$Shp(G;^AGU!`ZeCTRM9|qMGuhQucHV z^>jwYn?MG*4^Y?ek{-sZkOKXznoQ&!d>g=Q?YxBEM%;GU_GQV+N0xy4hgML4l>tJ? 
zTTt!(CO5rjoeLw_j|}f5uj3 z--(REh6ABxT0zqxs+V_p@MJT(F38ElT|LIB3?mcbnDemy%RFTbJ$KYgT3*=l{d=uX zRmntE@PU!;igAsR%raCS^7UIp4T(zoUq2({Y{w)Dz~cAl*j90iI|$tREIQ;)8v!~> zI3&)Ljxc8#zbaI?0P9+}GKRjfcTRh-_n=wPAl@(8TPHpUoG*VZ#+$DO!$%BrJ^qA# z59L)0x7rSK#oRu6=hVAMX6p$HHnMro5l+AJ*@*!+EAAar>6Vis&T3~T*eaGLZf(}- zM;+eE&7q8B3uT3&Jd;q<3gq)gD=TY+j-^a&|5W%9?7^OiuXKbY6w*GnZ!kqc!}4QB z$Q)RMyDf806759oYNT6QLwdRfRlV=}?OT$l;?i)fb5ZJgf}Sebfy*6zDjbj@UuzoC^ToII8gkpZ1}7on&qd z*Pn6pulqk1il3=_oyL>cm=`%b-@j}L){KYzh{s^F$147*9$sA8l}NNeMX(Ux^h{)& z-HyT@-+H%6q`srO4fW5d8>Q}eS}8NlS&iqZiJ$hlaK%pnsEzn%j_-KzZsiRG6w(UD z;gj{NoY~{=QA$HBc>#v{@b$$x(?CmvH9^TB&lW@{>?fq)LxLYoP-@DTofRR$gJ^_$5WUy2+LHU*sl4(LH( z%m@O~4QDdNRM@b0s!A|GUe2g5uwlZvPZF`|rU`1dg`z+tXvZglw%?;+x9AyRnK%}U0_ZU!4|`drj^AQ ztDL7+SH7F(?eC~MB#DO2r)b$02^T>0&ZwQ9f_ui_vRo8XYzpk zi2`dIG_1rE-R9x9&hg2LIi@eE82_QcsVX(A4o~yc@ajOoI*%mp=l~UaUGf|o;Q|As zi9BWWKXIBV$qFmbBAu|`^@KIB87NWi~Rox7qC{yBj@vW}XPp^R-P!zJ) zK}=ke7cD7vh5q;A!58e2__kZ-({f6p__1XFS-v+RuOLinZ4;XV=1&pJ&WUVm<;5`^ zvxbyJ=1=(IV?q`#&QeGoW0l4BJ$Pc_Vq6<@`%L+d2ESs8Fb-A%;8J;uU=P~(=#IZf zA+>rq$TjDUR%2&?>ijH4p4U$9KEd%}4u@WdJ$xy{^PKY}96Mdt>^>yWVy-l?*7sNe zEZT;+CLF=FaU~&QFjdG3Krz1i?Vd$4J}@+899&kz&0((}bY9*cG>ZYeVck419qKx& zw^#Sd6@=g`TDdiF8h@`RHW`0J;AEE!Do5pxL;td6a^()TI>z)nKNZPMnCdA7CLn*) z+1w^!vT>A1yd$KhpQbwga<_H<6;JFIq7Pc?BBSbTMP3Mq22J944VIZVPNLjp{Mjn1 z;S_M38}>C}8^r83C#K-@?qT3Xl~6O_+UdN#IP~Q-t?s2PaX0E~@_dQbBg-`JFIB%- zJRx(21z~!eiHD7U_bN%t53TB#2Q8nYNGvrH{N>)$t#xF;T~5I!F%MQHV{?8#g6$~r zg1f%afub4DJyJ5ZJ7CYXMt|k!fE6%=aQU^~?&*opdz2n+N^g(4*&F5vq`#UhPQ%zX zj6mkiYe#^rQvDeCK9Xb28H-pc_}!oFNvn;2jHgE80LeqOrRFfrQHLk`5%PLwEpF;J z^9L6)eG7&Hg>Meo6$-|9h;!#;V#*krz;V#xiu0E=}jac1UsboH2R?FU~JR4sPRfI(dUc!kZ*n@=r9 zVDm?@*d+gYr)5`3Y9B6oUY=R9GyO-})}MrJ%Lbd+#);2JDqyb~&JGLLEkeEEgH;vK z;|S(o3>HHdkHMQNiKwriX_lr$p zRcg(*zJccQ&>o|%M!v=8I{lxM3jBsp1^=-DV_l@f__5$4@olC|iue9j%HbgW)u%F! 
zEN+Mxab=it8)`KPORnWsjWLiV_c1%y+WDi7nIxY2cR5O07(Il(6z8^eCy{QFxH}?o zzZJEV-=3MWMJ5^u6l-L0% z5D%Cf0kvsEolu>$$n$EOlm^yd*;cAg>zqZ*l=dSxW}#W+Vb$LDSVbKHlkA?}tOx|4 z^R9WCHCdC8+CncpRovp5N75Ac5IbzQg75Shp8UkPt?Mbo`cq$hQygrD-LE*73axX{ zJ+j%^5qA*spQR9Oel$)W=O5#sX(-1p6^#c2o;^U4*im3yZgVY}i@n!WNvMtjB8RBk zZiF=^aMt_ngnzo}Iy}CLJ3|kt&H{VmVv1`w#oHVSecUdICMNt5LZI8~%js9lR;%eL zk#Wct>fb=tkpK&;{lPv91Uw)dY{&A~*Zu+^W`K-CEY{n>htm<2DHT=r(N_|C)BH{! z>@wJO*o;t{VVl0Qwt25iIYQ*`D>YSX0r-^MZufu6<3dTLE728iDI$HZ+=7m5iWj`d z5R@l>Z%b@IxGspAQ)dRn$Fx}bA#w2FDc#pJLG>zeN|ed`D>uH`x7Um0T+G&4PsM>9TQ@9G!QyqrVpmhlcMB@(t4 zY*#4kg`<13i!CfW%6#Uy`ASY?C=3*SxudV!649|uzikv=IH<99@AB{}G5AfX%t)2Z z+*Cp#OH8xm6afB#8F08t)0tH)6Q8` zx_%kRT(^s8=lxlLqhy}0)XvVpOs}zJyo~h`C(j76W=sxw!ruItkG8D<1&#zG%JMPq z4(gCskv2X*#AR4;4*OY+Gk+cTBTlu(o^r6vzDLE~ z3OsI$B;fYes;Q}@=u$fq zrml8Xvo?r@q~rkfGgRF`DLb!k1dT{xf76=`xz^XF)8wO%!* zh_LSAN=;@iF`nx@r=NH*@Z1sevLpFU-W~3(3D22KS|d}zw=Se%F@_Xjf8O0!0Z;nB z0^VFRm?*Gud~gHv_f}1Fj60=qGwX-KhyMhheug=q(q3RfV=td!f{gRW7HiC2-W zHV1DRC{!#4sKU`|^NqJxg@BS9zC4Td{jV;*0b_!rL(-cWUoF0n)80d2uE)-_@t&o)>Fpbx$ycmOZ!1Of?D!(|R8sXJA;JggYEJ4e5KJ^w^&4?mu>vDV^fY zP7lNqoKxlT9;R9cmdW@XpZny&(nbER`ls=5=W4ZGoE5Y8fn_D`We!r~QE`-pX{|0S zi+FNS_GlTiC*j{X_Sjia)ev?D<=Lok$v`XIAKl-Qvcl6o;wNEAc)egcroYxDccmT+ zuj>Q1&+ndPrQNk)0|N)F+Q8>BVX3%WF* zS&yqYtzL<#2u}7hrq(l$s0aPMyoa{Dkozog7I6$9%0EmZ{}B$8tKZu>|AD@Bn;Ni3 z_w>+Ruh|=5Laa@F*ulOxc}HwuzdAD+P$na;Eo9a~Nfme_49cZ@b-RKiukaVHOoGdr|00==uE6+vi#A`+0Z(oZyL9oL43JZegT z*Cb;dgC6KfNsu}|2U+u4D%3l;a~0DI4?8lWU&N%VwPI6Z!vWbj^Ma$50M+`mtlBDK z2Eha>V&1WP(E*z#urEiF9z^iVk-(&zc+(GT-#zkzPAdTK45W@LGEGnXC^I>Xr2SX| zt-8sLBdB)LNpirxhfH@iWlZ$5R^%I9B+8j=LzRJl8UD8JqDLqg)Na*u_u;+VI*#Yx zO9b46I_@OC{Ke0o3LsLqc_v5NVMz`Ium=3RMvyS%p{if^#*uvZd$tMWl&d~2Q{mY! 
zNbols5o*Wle!S`ihuJ?ic0awIjOQ)eM*NNYU6WdyLoLvZM0gYD?foqAsT$8c1??!I z)fbpaSIDzDKL>1Ir?x&M8^?p^G`*ZpK-`>QAjURODHIh)wvOYb%7~Yj`b@hUZ=iq3 zEpSLKRLLR0_4jq>tsyf01Hh%3$= zwo_tsVmxl7QH}63%kEO1iz7W8wcv|?3_;?XQ}|*Vk|gBL38v+j&}?58pjl+Wb=BGC zqvRb;z+FGpcAocl749WpMvlHw>-?TX<0tY#+8e8Q<|J4ks?SQZ=}>q6pt^)WD4>NT z=ldhC#0E+4!og6A;P;_Qoet*$B-@wLa8(ZO5z#ihDe((9mlBY!eoUfqz+2bV7>xr7 z`0fB#4pLcjnBLtNvA?&TwQ>qbyc-WlRyiVc3lqdIw~K*hdC?RpF(OWm0n3NOMj!u4 z#CzPY#MI7;siEn|rh(|J15lxwtRdOYhjM3w8!4ZaD|XM z-o3=n4NU77Fh)rP(Rf|>^3;rZ>v*)-=(bR$j?qUdTXCn3qY`oet3YCzN>;b-W8&|E zs+!ksUJ5^=Ewjr{5K zSB$T4&Q^gJSZ9xY3i(ap|8ZF%4F9Sj{zsbXKKM&}wI+Z!`S*Nz5W`8&-em3A?vwF^ zr$La$HHA`#T&fAjeqJEoy^HtP%>bLP{zTm`$CBrLXq=nj%|x_Z_1L9o#=YbgM7h6m z!n*^%#pL1>J)JB{_S6Rk5(K{NaGsy!OGE~(8iE4%gvZxNrkdvCT5BYN1DCE2F+mSG zgbUvjq;(`id{0Eq10b8djM{dYl8PQM3Vm-w2gL^pjF*yZ7a=Y zD*}Xg2K4m9TbseD%-dBtMLgtUHlgD~#=j9;P0KE%21+b@u3F&hzLDa%-7#~>F4V@r zw`>C9=PLa4{H}{V%;7`sB_St!-aK?E! z^so0jlpm!O)^NZp+@AGe$|w&IR4mSXc&+h78hk8d>q58u3W{LFEM~c0R12Zzd#8GQ zcLn*%2y7$K-PVhEhQWI~da>Wg&-bx`2s7&ps8dy@3f)RyOCDHEZksQ5P+!7|AXN}* zMy4Zflb<@8#?Kp0$cV0x-(8VZfGthVUgvl9Kk-FhogR!>_FFSkgnyOjRS&&9dohmA zcO#j*tfm7ne_2T-JHy4WnP9|I$)_*Mn^TZbNOgA@gw7#Wd|bOu?)TkX1irl{CI0Iw zxKgo!ylrM7qPX|nmmb*P;-PL;@bG>7Vp39!KFcJZh7s6wL2;Kau6+mnfmXlT#yxj^ z%zJDnC~-%P?K$$t(|**oT0W9e+VFTNxG#k$!8;K}wM z`29W@AF&$=PiTfWt`2Ug;njOH^;%}!J~hn`^PDtNwo;?kz<>A=M~DC)E||9a?gNmF zPr!sYjz&PTE55LU8h!Wt@C`aqnCV@c^VBIw=`Pw)pJv zsW`+NCCd6`0GFQ&g^i?JyZhDpJOs}PT!wh9S7TRl%$qK?9*Iu6P@a8zGFKPqsuzuX zmF#KL(Ti&9srXO)%R49L8)7sTmLq=~lk`2}pPy(`%a(i~%{e=y3R4lg=qgoy%|VcE zl)Uy||Is~!4;jBhOp)mBM6;;>lyg79#*_SR-q0qH(QZbOgd~D_eCYtP-ZG?TVPnH_ z*gg+$NO3u&Aqa`e=Kk{}^Nhyk(ubj=S4}pGmjt7&rkIREG!a#Jen)SPa5KU*)~df^&zMLqgdC<=qq1iX@DG0i#Sz*RP_@VdXJA)a`%FNqKrAJ#Fk*L$*3M^ z@)xF_RED2aCZ9CR%rK|9vyx3SR)2R|TR~CUbAQ#l%e^%iQcXq|&_IOxJdFZOiJj}Q zHubRazHpe%tJoQprnHqOJGr2Zj`;!7XLA4KcB~|!`TtEj5AaT!bU$j2k<6^^w|&Rm zy0y@^27=(G2&YCUdg#h6KAQpya{Xwc_|^|$?S;#x&kNS%o4*o^7@fY%{@<*(C(;<0S9+0uwe 
z-`cCOUTd>X2O%$)w++PP5X$}V@lz{xb`?AUGMY%x^d;^eX0n+drhOHJ(}d0xZD}g_ zmW?(9i{SfjZd_mOtj6?#SswKdC%BBU{-EjxLi;+OKbV5loRcUbm4s&Z5$Rmwhf%N$ zmYXUeEX<~rTDu8z(A=EN2OY+h^%JGzcp69t$1COK$NCAle56ZdZFl17f@8uXJNv4} zAXp4)CBkKt#5>86Ze9ClXMv@jPqs2;j*S&hZ`(&T`q~Bm9H~`aB5*GE$i?tgKq_E_ zGnX_1t$-0uS~K;YoyzN;3ji+6sEr}LMeGOcm@-{?0+^@z7^&B+nyEOLdSKzbNt4J7 zv8IU_C-mL$f{jJwSoZPYA$;@)QgfyaE2{&e2CMbTjMwd<#Jc6~^j_u3VmPdp;jO6P zsjQ1F2`-0dFJuJ|(dd^7rLBf0Nksi=q_f0(!Wp=BF2@t{T+A5h&Js*-J)*4xpSN3b zK%4Y0gu+R^{BP9s0ex9aXYF{w(UOdj%V^6^Ij|H}Sg^*+&kW4EWR+s`_!hh{%&2_n!! z3w~Q7ZZV7Sz#}D)4=4s8fBtHW+_W4-fo9#~Rf&m8$6;E+jBqb-W$GLjZ8Iq>bgCNz zLRQb6`TO?o`YC9fjQdgGL*UXd*J>Ne{R|Ne7WeRqV`o@8@5a#bt}^yvz`N0o4`%sz z6f0G(XB?SV3y`AmdT3ieF2ShB?+6l$<}xG>le(b7i2$Lzrm!_7#%%3EXtOF361n{* zDM9o(n!))@->oyxIZqvr7}&cRj(n0lWZgq1Ovkdvwx38B?0{sL1&E-i59@ahY`|zmUqq?${PaNlNCEt>k8nWt!MNE7|T~+-3^+AnSzl6pWwK8M)Wx+o?#eZ;6kQQjy zI(=A1jUD=aomr$2v8|G*SoG>`0-H(j@99`#bbX+*D}&nOP)W0U0XlSoNceEO zyw8rG+w2CPFmg1{EJ8Rv4vWT(u#R=AJ}B!j%Tp%#{x90zIxMQL4Ifoey1R!?X$7QH zYG?!`r4^Cx?vfmGR9Zj;B_)TBp+lr$VCYWiIvai8@B4n|k8_>tT-WhGyk_sc_FB(+ z;(qRRZ~l*H#UqG}IC1M~bZ)hF3{LHbh2a)!1)Y<&R$&%!oN})jF=20g^H@Xti;-yi z7g(Fhb7r2tLbfD3wU)q{K0oqK_H5a^PI zC}-QI@0s@oE94){Xcp)^;Eq!8KVv zQwN2FJBPNR4HT`v?0HV8tDh?=2>!15-mz<;;5ax>5fp_)|2-8#qIR<`OO@1btwSba z@iBw;D}Z^4^mifCpGfl)4LO9$2VxC62n71rQF0c^|30`Db{8RLaLucHO1UNI(7H0K zSW+Vr_&W&qgyg4;${815VP@-JXK%Y3-PeqFfE)qY-W?81f{ zd3<3zJZsZs3Q2v9Dg6blB30v;uT~pM+qUh5MZ&ids2A_KaB5FdOQPRaaen!V3$zW6 zc`NvX8skB>ltkGMRhXAX*9l2J1cvTAft~aMcB4;ab&N@2g6~55Rsgz=Tco&Ipu9ku zMQO+*uxiaK0PU(e{cN!@%~OW>b?*|F(mW~gK0|Tz2n#fv3u?q<@dMvJKP;7(%aV3j zX2Ob#SpRK~xN@U!?r{ObAFmr{8BL$k6S|bx^BI5sc_DR_j`M_ga;JGznJLA4Io~bB zDq&Qp2)5?;@PD^cV*;2K=`2jtztva!I&L{OEjBhN3gVa@*|>Vq5-=}wcAf!eD;8*j zx59QSP91M1J)rBM$_z_}*d!>A7Y&FjvyMMCqY-krjBtFDagt_; z{-84HJ@)0RYGADC(ZjkEF^HSe@w&N4W-Y~w22I_Bkp52(o>U*d+GojSr=^y9YbYS%ed(YQTxfW9 znL|z4)!ofY4E`PF%!2%yL!jHmUlOe2$pLw&EKxdGru7{WPL7$Pyk6#Z?Vj)a=rSW# 
zk|=XblOxnD_Iv5kF-2hd^}(;0Zia!{R7W?Nerq}fC=z@o7TTWM8xAcPKT9)*3)^=9%W{OKG509eE?XrBu%?!ceOj;D5?Yv)S z6Gg;*Y~v~<$>U&AhP_C7XXE>E^qCRj(vZ`s=N!MEtAyZ#E=HYJ4v#i2JYZR?hd1UDdjhpW2bTqRB}@(=%1G-Do+#qq}Lr} zT^*lrh@xg&RjJv!@T;Fddx$-ZUsmH zJW-Y=;5$I;115v@E#S`r792d zJfIN=E#8qjqvnw{>8gH(CD*6CiihkozW#@XrgIt7v)-BRGAGMW_wxnEhowV&ta-SvRX7l@ka2rg5b*8ZUPFELP&569DQOc$d>69yi z%SIv^qbK5p-cZ^?WG_1b8K(=R)eD`(LWh|B#3yXz6w z6xC7I$;c_6BAJs%e%m5MxShl3Nf!jJX>u*E^C%9Z7af0f;TyMagWgxrD!8RF2nt~- zq)q0i2|i%HJZ(2*Ou=L)^iyx|Z6!jh-j?z%4sC>6%;>RQ;J+@8Px4oCEEf!MP~(iw zn=7}8C#Zh+byf~oVfHx3>bzPHaFC$>d8}_TrF3XnO}RTj9cpWn#uUpP>A{<7oRU;0VtI= zy>I2)#^m8zfW=8Jv;Ge?P?{D;`ucIp41)ZoM@iV$o0JCZ8Xdyr*fB?D?!~A!C?Kl~ z$JdE{qwn;KfbTvUa$)g7IU%{75yU=8LBSBa1TQJ4cILBIDT*Gi>^qQT#%6QL~f`6z6H?z|Uy7YCcYa61ZJhneDUk zoaL4yzIMXhOi58&t|{R2GIFzm_l~_nMEoavX4@xWuKMJUfkP?}9#;>*&&xa?)`22& zf~af_E6P%Czmsye=E|c+a@j^G&jxE-8z#(h$Y9`a7dO9LkO|u17vV=p zcp5!2LV(R7NBPO5J7O}TSD0oku%KP4GQ40;{`xsS9u*!~#oCzPPC_z}BT9LFxs^w< z4~o6oWzuociG{@1?rpky@PZG`%)Ns^9OZ|w;LO8~9c8wy(pMunjBMT5U22S+%Qe{9 zOh0K@Qdtv7kXSSC)`t7W2k=A#nRlatEe{dP%u%%Zc{V|x>eXCpYO=q6MI~UxE`xKD zpyYb9RD<0uMqk;OKO9PT#vadKcu#`m$Ssan_|CmWvH=HW?%VqMZo`|195it)7C*11 zu1P13^o-NzQnf2-AQE40X0(SSKMi`?Qm9r>=`lmy;EhLtU|pN&R#d$8e4JY{x@hDUJGY{K2-U$Nktw ztz`utS(_bOJ3eBInfJ_qLNW(!Pl^MCwy!COpEC3xu-W0!bmYiJED(}M zB>$#<;))~FieL>>4*&Wae_8hjmiRS1w290sjYvahv%dV+B)#;s*BU*0lJ-r$%oq50%9mzhbc@_W9s=btrmpd;)w{a< zkBS{Iy$fBfx0`)JRyV-M@o zD~grkL`9}HpDt5YgPXFugS>QiB6{geOihDgGW_*STc z2p>Q2_U5$Na}{dicz$Rl-Op;UW`_B{V{Mi&Vrgh-Kq8Sr&9~QkXMLtO!2&je=vJ^X z`&_pt?67>u?WK~sI(&Po^5EnA6{=yiowU4sSXb4>9i#p4Bml(URb+0DAET z`(HXfK#DNwvZ}1ABBm0wYbg!5e(QVCN!H!njgOF3eOe($ z?d3c=Ha2nU(J8XajrQcfOB2fE;*8uQBl{K_`q1}e%I2&`-9Px;iEV)hvsWR|I>zfX;V`W4h{|_SK8A5d6Nh%QZgzOw_8rvqaqP08@?nPxI;OC+i9#ROPa} zJCe?dfPlbr>s!sksVDsWG8Ps~V^&1pd?efA(^Lp03sitc?9Q*sfLnhvvEh-C5wuND zPzN?1Uh(aR$Oz!bKOhOY7!+J%UteF3`7PL&*8{=doxdZ^z{-kNX+6-k6femgW_kCBpRWAkJnJ9xXfDiglPZ`V9J}!r}1CEa!7LoTrGhtx?(8Wl6-(&2BfQh z?jA15Y;%2ae72z;5IrP$OYP<5waGkB?dRuLR#8Ei$#-*gG{nfp7P-sG6lo~+N3ar% 
z1%W{JkrAN9%lX?T-gF;7zeew#U#;>G)B3fvojW63nk=`#o6YwL_SE>Rj9GvK8;y*Q z$GZV`AXZ4-%=9#7W5eE4Sf!?s(I;MF0ubc&#CPMiJbH}aTED-q@&*kgvA)n6(sup9 zNcjBKuF#48?)o}Ds$VmO@0ZV16d>?*Ih?jV_z_m97hsrpf5bTX-XY^{XY1WKGyN~# zAPM61z8evQ<1adDE~kw($0a8d!+?$)8#g!U>c@{C!$|uIGrg79Uw5E|7v{L5FAn}Q zSaDdI&>r#c^}KlVfe%j|U0sO{7|wm|>`Lm#pPtce_=>4nKqM`$@B+?I zK4oMSY2!)HqWm3v!~^bdK!@pUP4YI%!EehjSn3dVw)oz(&TbG*4P<)L%@~jVEzRe2 zHv@7hiXo%@_rbIrp73m^AnKtw8qr@Wa0Bsk;j{^!H478MhfWUv{DkuC-mWio-_|3@ z+RXe-^?tW9!fZW7&*Z$B@k6}^Y)qZH( zbsFHd2)batJ3l7&M-)J22$+|CwH<|^{cI$#m69w8JH*5PD3_+Yf6?Kvw6<;o_Gkny zO>jm=#(r;mU)S0@J3GgPmix#z;+H2oM6}vHh3Dc>-x$@VPZKnFJXG_j0V}eMZXO<3 zOV)%B`JVk}JsX@ie&z>bdwYAgwYOWe?OO?T4GiEQ5D2|BFL558(&Evj6)~9nub)2& zmGiY4ZG!3Sfir1O^(08;>zU4Xlae1q2f(EngjJ|`?uQpJcn4QsT9$RtaFA(4)U2hZ5pn4Fwk z%H6%1H*-m$-z(7MD+a?gm5>-PJ-0eKsJ}`XI)W=-TwYqN^roP{D$>z+x9}8ivV-C7J$%x1<4MyLGoRGslYVCk z%l}?JK0e-7CJXCKe2=Vs?A*G{l{oQ!wN^}xm^%r^?QpzY>Pxk>0xM~3%m7|`7dW*4 zXW3fOG}i$5j=xSZu8+ZCYFT|3seXcN?C_eBGw*3PZuGaE!Rb9ncTWPELA3AuGx z0Yve;2h;rHY(#4<4^zR2>F!RaE9XPje7HyFA3R7EB2O@sz+jf7I0hOq@H_-lV_MzUl5U zDl8F_u@R;}1qB3%EE80a(IlLqg%8HY$IX|zV)dJR>l`t}+1W8bzWetD?Ir_1e4sQ# z&y#;oiV+`24VsQD?E~@j7)qZ@?HCy;@ne`D{@uBwxSX7vMzxO8p2~P+ima@xIWLli zT-cXO0iP25gpz^+BR*!f(WhoO_iii5($R>zgFSseS65eS)RdO0PYO2QI?vU+wOMf1 zwH%VF)w3r3$3gBaH1_cr*N3*WNcE@l$=0GJC1qxE&&u}jgy`w&Vx|qA?#=UbGpB+5 z-+YoBh^Ps69$9vN=712JoSDJ;mU=}flBQ~61JkSf{+&De{rj*TqK8hG|Be~~+1T0P zsxonQTK)$0b#%WugFk&c{t6yPQ&v_cu_u8q|cauiSxVX4B@{h11ykl(7DXt!wkWb??`i@SZV8t;9Yb(-#XJzze$zY4-u?z`=$nhQp&qi7SLEf}#G+D9hb1|A}8f z@b!+wQ%z1wW5pKj$U?Y?*1a@t+V0xVT=iAVySec{&IBU$zfLwn|1r=i24XEcwsA)U z?c$P!^8xIo=XF%C*O}mVywW~2a~?J}e1oSL!Rge0vZRixFpp`Zc&|J|;=Z?q=d-=_ z_Prm=S{G?RF1QKgRa=P1Fj{GR57a+V{b0CF^g>uz7=6ihFiR2?FL?tb`&p<@ZrGHA z=?YN)_?pF>Fub;=*CPzC-@Nf-j~7272eOo3KqyGZ{3j$+BFxX*K{xvz4V#cM%)eVyZG;! 
z`(n6SE+Z)|`*Q#=icltJ!zvyLx{{n5K3f@>x{OWRJgRL^{y8}%2FkRqFZX+ve|Yb) zJl!k*O(Q_&ByhW(=61WZ%{G|*`al{q)sREfRf%HlrgQg%#m-Sb38%FI&Iyh4sMo8juH> z(@3{`=8(WSdzGn^xbe1TIAJuok`H)>WM}WGs;+(nQcl~?ccfK2b%0*`o-$5T^BDeVv9a}v1W>)eD0S~T&C13$Usw`3(=YZ@5Rxey1VuAT>9UG(Mxam$$_^Eo-n?YS1&UjJH})PZGWx}(v&Ah?t;-VY)y zrcw&hvhO}IJNLB!@bIMO=H@cXC5qjL`o=EO#mC+6#K)6@f>_d9HlP0e6h_%yhzpp@ z7C-6+*Z{DEj8-yoM{+r%wb;yrxP;IWs`e)1-QYCeWGj$z3JgERW+v(Msej{_)WbqN z)VcDjc>zw=sOVo8^Z#Fh8jAmYCkS)1eefbEisgTG c;C7C=&8mJ_6sD}$|2ObK8k-+2JmL`aZ~3By2><{9 literal 0 HcmV?d00001 diff --git a/latest/_sources/README.md.txt b/latest/_sources/README.md.txt new file mode 100644 index 000000000..5c02a118c --- /dev/null +++ b/latest/_sources/README.md.txt @@ -0,0 +1,48 @@ +# OPEA Project + +**Mission**: Create an open platform project that enables the creation of open, multi-provider, robust, and composable GenAI solutions that harness the best innovation across the ecosystem. + +OPEA sites within the Linux Foundation AI & Data Organization: + +* Website: [https://opea.dev](https://opea.dev) +* X/Twitter: [https://twitter.com/opeadev](https://twitter.com/opeadev) +* Linkedin: [https://www.linkedin.com/company/opeadev](https://www.linkedin.com/company/opeadev) +* Github: [https://github.com/opea-project](https://github.com/opea-project) +* Contact us at: [info@opea.dev](mailto:info@opea.dev) + +The OPEA platform includes: + +- Detailed framework of composable building blocks for state-of-the-art generative AI systems including LLMs, data stores, and prompt engines +- Architectural blueprints of retrieval-augmented generative AI component stack structure, and end-to-end workflows +- A four-step assessment for grading generative AI systems around performance, features, trustworthiness, and enterprise-grade readiness + +Check out the [LF AI & Data Press Release](https://lfaidata.foundation/blog/2024/04/16/lf-ai-data-foundation-launches-open-platform-for-enterprise-ai-opea-for-groundbreaking-enterprise-ai-collaboration/) and 
[Intel's blog post](https://www.intel.com/content/www/us/en/developer/articles/news/introducing-the-open-platform-for-enterprise-ai.html).
+
+Technical Steering Committee
+- [Ke Ding](https://www.linkedin.com/in/dingke/), Senior Principal AI Engineer, Intel
+- [Malini Bhandaru](https://www.linkedin.com/in/malinibhandaru/), Senior Principal Engineer, Intel (Chair)
+- [Amr Abdelhalem](https://www.linkedin.com/in/amrhalem/), SVP, Head of Cloud Platforms, Fidelity
+- [Robert Hafner](https://www.linkedin.com/in/roberthafner/), Senior Principal Architect, Comcast
+- Steve Grubb, Senior Principal Engineer, Red Hat
+- [Nathan Cartwright](https://www.linkedin.com/in/nathan-cartwright-2008228/), Chief Architect - AI, CDW
+- [Logan Markewich](https://www.linkedin.com/in/logan-markewich/), Founding Software Developer, LlamaIndex
+- [Justin Cormack](https://www.linkedin.com/in/justincormack/), CTO, Docker
+- [Melissa Mckay](https://www.linkedin.com/in/melissajmckay/), Head of Developer Relations, JFrog
+
+Member companies at launch:
+
+* Anyscale
+* Cloudera
+* Datastax
+* Domino Data Lab
+* Hugging Face
+* Intel
+* KX
+* MariaDB Foundation
+* MinIO
+* Qdrant
+* Red Hat
+* SAS
+* VMware by Broadcom
+* Yellowbrick Data
+* Zilliz
diff --git a/latest/_sources/codeowner.md.txt b/latest/_sources/codeowner.md.txt
new file mode 100644
index 000000000..5a34fba41
--- /dev/null
+++ b/latest/_sources/codeowner.md.txt
@@ -0,0 +1,50 @@
+# OPEA Project Code Owners
+
+These tables list the GitHub IDs of code owners. For a PR review, please contact the corresponding owner.
+- [GenAIExamples](#genaiexamples)
+- [GenAIComps](#genaicomps)
+- [GenAIEval](#genaieval)
+- [GenAIInfra](#genaiinfra)
+- [CICD](#cicd)
+
+## GenAIExamples
+
+| examples | owner |
+|:-------------:|:-----------:|
+| AudioQnA | Spycsh |
+| ChatQnA | lvliang-intel |
+| CodeGen | lvliang-intel |
+| CodeTrans | Spycsh |
+| DocSum | Spycsh |
+| SearchQnA | letonghan |
+| Language Translation | letonghan |
+| VisualQnA | lvliang-intel |
+| Others | lvliang-intel |
+
+## GenAIComps
+
+| comps | owner |
+|:-------------:|:-----------:|
+| asr | Spycsh |
+| cores | lvliang-intel |
+| dataprep | XinyuYe-Intel |
+| embedding | XuhuiRen |
+| guardrails | letonghan |
+| llms | lvliang-intel |
+| reranks | XuhuiRen |
+| retrievers | XuhuiRen |
+| tts | Spycsh |
+
+## GenAIEval
+
+lvliang-intel, changwangss, lkk12014402
+
+## GenAIInfra
+
+mkbhanda, irisdingbj, jfding, ftian1, yongfengdu
+
+## CICD
+
+chensuyue, daisy-ycguo, ashahba, preethivenkatesh
+
+
diff --git a/latest/_sources/community/CODE_OF_CONDUCT.md.txt b/latest/_sources/community/CODE_OF_CONDUCT.md.txt
new file mode 100644
index 000000000..9cf59d4cb
--- /dev/null
+++ b/latest/_sources/community/CODE_OF_CONDUCT.md.txt
@@ -0,0 +1,130 @@
+# Contributor Covenant Code of Conduct
+
+## Our Pledge
+
+We as members, contributors, and leaders pledge to make participation in our
+community a harassment-free experience for everyone, regardless of age, body
+size, visible or invisible disability, ethnicity, sex characteristics, gender
+identity and expression, level of experience, education, socio-economic status,
+nationality, personal appearance, race, caste, color, religion, or sexual
+identity and orientation.
+
+We pledge to act and interact in ways that contribute to an open, welcoming,
+diverse, inclusive, and healthy community.
+ +## Our Standards + +Examples of behavior that contributes to a positive environment for our +community include: + +- Demonstrating empathy and kindness toward other people +- Being respectful of differing opinions, viewpoints, and experiences +- Giving and gracefully accepting constructive feedback +- Accepting responsibility and apologizing to those affected by our mistakes, + and learning from the experience +- Focusing on what is best not just for us as individuals, but for the overall + community + +Examples of unacceptable behavior include: + +- The use of sexualized language or imagery, and sexual attention or advances of + any kind +- Trolling, insulting or derogatory comments, and personal or political attacks +- Public or private harassment +- Publishing others' private information, such as a physical or email address, + without their explicit permission +- Other conduct which could reasonably be considered inappropriate in a + professional setting + +## Enforcement Responsibilities + +Community leaders are responsible for clarifying and enforcing our standards of +acceptable behavior and will take appropriate and fair corrective action in +response to any behavior that they deem inappropriate, threatening, offensive, +or harmful. + +Community leaders have the right and responsibility to remove, edit, or reject +comments, commits, code, wiki edits, issues, and other contributions that are +not aligned to this Code of Conduct, and will communicate reasons for moderation +decisions when appropriate. + +## Scope + +This Code of Conduct applies within all community spaces, and also applies when +an individual is officially representing the community in public spaces. +Examples of representing our community include using an official e-mail address, +posting via an official social media account, or acting as an appointed +representative at an online or offline event. 
+ +## Enforcement + +Instances of abusive, harassing, or otherwise unacceptable behavior may be +reported to the community leaders. +All complaints will be reviewed and investigated promptly and fairly. + +All community leaders are obligated to respect the privacy and security of the +reporter of any incident. + +## Enforcement Guidelines + +Community leaders will follow these Community Impact Guidelines in determining +the consequences for any action they deem in violation of this Code of Conduct: + +### 1. Correction + +**Community Impact**: Use of inappropriate language or other behavior deemed +unprofessional or unwelcome in the community. + +**Consequence**: A private, written warning from community leaders, providing +clarity around the nature of the violation and an explanation of why the +behavior was inappropriate. A public apology may be requested. + +### 2. Warning + +**Community Impact**: A violation through a single incident or series of +actions. + +**Consequence**: A warning with consequences for continued behavior. No +interaction with the people involved, including unsolicited interaction with +those enforcing the Code of Conduct, for a specified period of time. This +includes avoiding interactions in community spaces as well as external channels +like social media. Violating these terms may lead to a temporary or permanent +ban. + +### 3. Temporary Ban + +**Community Impact**: A serious violation of community standards, including +sustained inappropriate behavior. + +**Consequence**: A temporary ban from any sort of interaction or public +communication with the community for a specified period of time. No public or +private interaction with the people involved, including unsolicited interaction +with those enforcing the Code of Conduct, is allowed during this period. +Violating these terms may lead to a permanent ban. + +### 4. 
Permanent Ban
+
+**Community Impact**: Demonstrating a pattern of violation of community
+standards, including sustained inappropriate behavior, harassment of an
+individual, or aggression toward or disparagement of classes of individuals.
+
+**Consequence**: A permanent ban from any sort of public interaction within the
+community.
+
+## Attribution
+
+This Code of Conduct is adapted from the [Contributor Covenant][homepage],
+version 2.1, available at
+[https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1].
+
+Community Impact Guidelines were inspired by
+[Mozilla's code of conduct enforcement ladder][Mozilla CoC].
+
+For answers to common questions about this code of conduct, see the FAQ at
+[https://www.contributor-covenant.org/faq][FAQ]. Translations are available at
+[https://www.contributor-covenant.org/translations][translations].
+
+[homepage]: https://www.contributor-covenant.org
+[v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html
+[Mozilla CoC]: https://github.com/mozilla/diversity
+[FAQ]: https://www.contributor-covenant.org/faq
diff --git a/latest/_sources/community/CONTRIBUTING.md.txt b/latest/_sources/community/CONTRIBUTING.md.txt
new file mode 100644
index 000000000..0d64ac25b
--- /dev/null
+++ b/latest/_sources/community/CONTRIBUTING.md.txt
@@ -0,0 +1,123 @@
+# Contribution Guidelines
+
+Thanks for considering contributing to the OPEA project. The contribution process is similar to that of other open source projects on GitHub, involving a fair amount of open discussion in issues and feature requests among the maintainers, contributors, and users.
+
+
+## Table of Contents
+
+
+
+- [All The Ways to Contribute](#all-the-ways-to-contribute)
+  - [Community Discussions](#community-discussions)
+  - [Documentation](#documentation)
+  - [Reporting Issues](#reporting-issues)
+  - [Proposing New Features](#proposing-new-features)
+  - [Submitting Pull Requests](#submitting-pull-requests)
+    - [Create Pull Request](#create-pull-request)
+    - [Pull Request Checklist](#pull-request-checklist)
+    - [Pull Request Template](#pull-request-template)
+    - [Pull Request Acceptance Criteria](#pull-request-acceptance-criteria)
+    - [Pull Request Status Checks Overview](#pull-request-status-checks-overview)
+    - [Pull Request Review](#pull-request-review)
+- [Support](#support)
+- [Contributor Covenant Code of Conduct](#contributor-covenant-code-of-conduct)
+
+
+
+## All The Ways To Contribute
+
+### Community Discussions
+
+Developers are encouraged to participate in discussions by opening an issue in one of the GitHub repos at https://github.com/opea-project. Alternatively, they can send an email to [info@opea.dev](mailto:info@opea.dev) or subscribe to [X/Twitter](https://twitter.com/opeadev) and the [LinkedIn page](https://www.linkedin.com/company/opeadev/posts/?feedView=all) to get the latest updates about the OPEA project.
+
+### Documentation
+
+The quality of the OPEA project's documentation can have a huge impact on its success. We rely on OPEA maintainers and contributors to build clear, detailed, and up-to-date documentation for users.
+
+### Reporting Issues
+
+If an OPEA user runs into unexpected behavior, the proper way to report it is on the `Issues` page of the corresponding GitHub project. Please first check that a similar issue does not already exist on the issue list. Please follow the Bug Report template and supply as much information as you can, along with any additional insights you might have. It's helpful if the issue submitter can narrow down the problematic behavior to a minimal reproducible test case.
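One way to produce such a minimal reproducible test case is to start from the full request that misbehaved and strip it down to the fewest fields that still trigger the problem, then paste the reduced payload into the issue. A rough sketch of the idea — the payload fields and values below are illustrative only, not part of any specific OPEA API:

```python
import json

# Hypothetical request payload that triggered the unexpected behavior.
full_payload = {
    "model": "example-model",
    "messages": [{"role": "user", "content": "What is OPEA?"}],
    "temperature": 0.7,
    "top_p": 0.95,
    "stream": True,
}

# Strip everything not needed to reproduce the failure, so maintainers
# can replay it with a single request against the service endpoint.
minimal_payload = {key: full_payload[key] for key in ("model", "messages")}

print(json.dumps(minimal_payload, indent=2))
```

If the reduced payload no longer reproduces the problem, add fields back one at a time until it does; the last field added is usually the interesting one.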
+### Proposing New Features
+
+OPEA communities use the RFC (request for comments) process for collaborating on substantial changes to OPEA projects. The RFC process allows the contributors to collaborate during the design process, providing clarity and validation before jumping to implementation.
+
+*When is the RFC process needed?*
+
+The RFC process is necessary for changes that have a substantial impact on end users, workflow, or user-facing APIs. It generally includes:
+
+- Changes to core workflow.
+- Changes with significant architectural implications.
+- Changes that modify or introduce user-facing interfaces.
+
+It is not necessary for changes like:
+
+- Bug fixes and optimizations with no semantic change.
+- Small features that don't involve workflow or interface changes and only impact a narrow use case.
+
+#### Step-by-Step Guidelines
+
+- Follow the [RFC Template](./rfc_template.md) to propose your idea.
+- Submit the proposal to the `Issues` page of the corresponding OPEA GitHub repository.
+- Reach out to your RFC's assignee if you need any help with the RFC process.
+- Amend your proposal in response to reviewers' feedback.
+
+### Submitting Pull Requests
+
+#### Create Pull Request
+
+If you have improvements to OPEA projects, send your pull requests to each project for review.
+If you are new to GitHub, view the pull request [How To](https://help.github.com/articles/using-pull-requests/).
+
+##### Step-by-Step Guidelines
+
+- Star this repository using the button `Star` in the top right corner.
+- Fork the corresponding OPEA repository using the button `Fork` in the top right corner.
+- Clone your forked repository to your PC by running `git clone "url to your repo"`
+- Create a new branch for your modifications by running `git checkout -b new-branch`
+- Add your files with `git add -A`, commit `git commit -s -m "This is my commit message"`, and push `git push origin new-branch`.
+- Create a `pull request` for the project you want to contribute to.
+
+#### Pull Request Template
+
+See [PR template](./pull_request_template.md)
+
+#### Pull Request Acceptance Criteria
+
+- At least two approvals from reviewers
+
+- All detected status checks pass
+
+- All conversations resolved
+
+- Third-party dependency licenses compatible
+
+#### Pull Request Status Checks Overview
+
+The OPEA projects use GitHub Actions for CI testing.
+
+| Test Name | Test Scope | Test Pass Criteria |
+|--------------------|-------------------------------------------|--------------------|
+| DCO | Use `git commit -s` to sign off | PASS |
+| Code Format Scan | pre-commit.ci [Bot] | PASS |
+| Code Security Scan | Bandit/Hadolint/Dependabot/CodeQL/Trellix | PASS |
+| Unit Test | Unit test under test folder | PASS |
+| End to End Test | End to end test workflow | PASS |
+
+- [Developer Certificate of Origin (DCO)](https://en.wikipedia.org/wiki/Developer_Certificate_of_Origin): the PR must agree to the terms of the Developer Certificate of Origin by signing off each of its commits with `-s`, e.g. `git commit -s -m 'This is my commit message'`.
+- Unit Test: the PR must pass all unit tests with no coverage regression.
+- End to End Test: the PR must pass all end to end tests.
+  - If the PR introduces a new microservice for `GenAIComps`, the PR must include new end to end tests. The test script name should match the folder name so the test is automatically triggered by the test structure; for example, if the new service is `GenAIComps/comps/dataprep/redis/langchain`, then the test script name should be `GenAIComps/tests/test_dataprep_redis_langchain.sh`.
+  - If the PR introduces a new example for `GenAIExamples`, the PR must include new example end to end tests.
The test script name should match the example name so the test is automatically triggered by the test structure; for example, if the example is `GenAIExamples/ChatQnA`, then the test script names should be `ChatQnA/tests/test_chatqna_on_gaudi.sh` and `ChatQnA/tests/test_chatqna_on_xeon.sh`.
+
+#### Pull Request Review
+You can add reviewers from [the code owners list](../codeowner.md) to your PR.
+
+## Support
+
+- Feel free to reach out to [OPEA maintainers](mailto:info@opea.dev) for support.
+- Submit your questions, feature requests, and bug reports to the GitHub issues page.
+
+## Contributor Covenant Code of Conduct
+
+This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the [Contributor Covenant Code of Conduct](./CODE_OF_CONDUCT.md).
diff --git a/latest/_sources/community/SECURITY.md.txt b/latest/_sources/community/SECURITY.md.txt
new file mode 100644
index 000000000..89e187047
--- /dev/null
+++ b/latest/_sources/community/SECURITY.md.txt
@@ -0,0 +1,9 @@
+# Reporting a Vulnerability
+
+Report any security vulnerabilities in this project by following these [Linux Foundation security guidelines](https://www.linuxfoundation.org/security).
+
+## Script Usage Notice
+
+SCRIPT USAGE NOTICE: By downloading and using any script file included with the associated software package (such as files with .bat, .cmd, or .JS extensions, Dockerfiles, or any other type of file that, when executed, automatically downloads and/or installs files onto your system)
+(the “Script File”), it is your obligation to review the Script File to understand what files (e.g., other software, AI models, AI Datasets) the Script File will download to your system (“Downloaded Files”).
+Furthermore, by downloading and using the Downloaded Files, even if they are installed through a silent install, you agree to any and all terms and conditions associated with such files, including but not limited to, license terms, notices, or disclaimers.
diff --git a/latest/_sources/community/pull_request_template.md.txt b/latest/_sources/community/pull_request_template.md.txt
new file mode 100644
index 000000000..9de5e89f9
--- /dev/null
+++ b/latest/_sources/community/pull_request_template.md.txt
@@ -0,0 +1,25 @@
+# OPEA Pull Request Template
+
+## Description
+
+A summary of the proposed changes, as well as the relevant motivation and context.
+
+## Issues
+List the issue or RFC link this PR is working on. If there is no such link, please mark it as `n/a`.
+
+## Type of change
+
+List the type of change as shown below. Please delete options that are not relevant.
+
+- [ ] Bug fix (non-breaking change which fixes an issue)
+- [ ] New feature (non-breaking change which adds new functionality)
+- [ ] Breaking change (fix or feature that would break existing design and interface)
+
+## Dependencies
+
+List any newly introduced third-party dependencies, if they exist.
+
+## Tests
+
+Describe the tests that you ran to verify your changes. Please list the relevant details for your test configuration and step-by-step reproduction instructions.
+
diff --git a/latest/_sources/community/rfc_template.md.txt b/latest/_sources/community/rfc_template.md.txt
new file mode 100644
index 000000000..6d24ee015
--- /dev/null
+++ b/latest/_sources/community/rfc_template.md.txt
@@ -0,0 +1,44 @@
+# RFC Template
+
+Replace the "RFC Template" heading with your RFC Title, followed by
+a short description of the feature you want to contribute
+
+## RFC Content
+
+### Author
+
+List all contributors of this RFC.
+
+### Status
+
+Change the PR status to `Under Review` | `Rejected` | `Accepted`.
+
+### Objective
+
+What problem will this solve? What are the goals and non-goals of this RFC?
+
+### Motivation
+
+Why is this problem valuable to solve? Does any related work exist?
+
+### Design Proposal
+
+This is the heart of the document, used to elaborate the design philosophy and detailed proposal.
+
+### Alternatives Considered
+
+List other alternatives, if any, and the corresponding pros/cons of each proposal.
+
+### Compatibility
+
+List possible incompatible interface or workflow changes, if any.
+
+### Miscellaneous
+
+List other information users and developers may care about, such as:
+
+- Performance Impact, such as speed, memory, accuracy.
+- Engineering Impact, such as binary size, startup time, build time, test times.
+- Security Impact, such as code vulnerability.
+- TODO List or staging plan.
+
diff --git a/latest/_sources/community/rfcs/24-05-16-GenAIExamples-001-Using_MicroService_to_implement_ChatQnA.md.txt b/latest/_sources/community/rfcs/24-05-16-GenAIExamples-001-Using_MicroService_to_implement_ChatQnA.md.txt
new file mode 100644
index 000000000..f25a89a59
--- /dev/null
+++ b/latest/_sources/community/rfcs/24-05-16-GenAIExamples-001-Using_MicroService_to_implement_ChatQnA.md.txt
@@ -0,0 +1,226 @@
+# 24-05-16 GenAIExamples-001 Using MicroService to Implement ChatQnA
+
+## Author
+[lvliang-intel](https://github.com/lvliang-intel), [ftian1](https://github.com/ftian1), [hshen14](https://github.com/hshen14), [Spycsh](https://github.com/Spycsh), [letonghan](https://github.com/letonghan)
+
+## Status
+Under Review
+
+## Objective
+This RFC aims to introduce the OPEA microservice design and demonstrate its application to Retrieval-Augmented Generation (RAG). The objective is to address the challenge of designing a flexible architecture for Enterprise AI applications by adopting a microservice approach. This approach facilitates easier deployment, enabling one or multiple microservices to form a megaservice. Each megaservice interfaces with a gateway, allowing users to access services through endpoints exposed by the gateway.
The architecture is general, and RAG is the first example we apply it to.
+
+
+## Motivation
+In designing Enterprise AI applications, leveraging a microservices architecture offers significant advantages, particularly in handling large volumes of user requests. By breaking down the system into modular microservices, each dedicated to a specific function, we can achieve substantial performance improvements through the ability to scale out individual components. This scalability ensures that the system can efficiently manage high demand, distributing the load across multiple instances of each microservice as needed.
+
+The microservices architecture contrasts sharply with monolithic approaches, such as the tightly coupled module structure found in LangChain. In such monolithic designs, all modules are interdependent, posing significant deployment challenges and limiting scalability. Any change or scaling requirement in one module necessitates redeploying the entire system, leading to potential downtime and increased complexity.
+
+
+## Design Proposal
+
+### Microservice
+
+Microservices are akin to building blocks, offering the fundamental services for constructing RAG (Retrieval-Augmented Generation) applications. Each microservice is designed to perform a specific function or task within the application architecture. By breaking down the system into smaller, self-contained services, microservices promote modularity, flexibility, and scalability. This modular approach allows developers to independently develop, deploy, and scale individual components of the application, making it easier to maintain and evolve over time. Additionally, microservices facilitate fault isolation, as issues in one service are less likely to impact the entire system.
+
+### Megaservice
+
+A megaservice is a higher-level architectural construct composed of one or more microservices, providing the capability to assemble end-to-end applications.
Unlike individual microservices, which focus on specific tasks or functions, a megaservice orchestrates multiple microservices to deliver a comprehensive solution. Megaservices encapsulate complex business logic and workflow orchestration, coordinating the interactions between various microservices to fulfill specific application requirements. This approach enables the creation of modular yet integrated applications, where each microservice contributes to the overall functionality of the megaservice.
+
+### Gateway
+
+The Gateway serves as the interface for users to access the megaservice, providing customized access based on user requirements. It acts as the entry point for incoming requests, routing them to the appropriate microservices within the megaservice architecture. Gateways support API definition, API versioning, rate limiting, and request transformation, allowing for fine-grained control over how users interact with the underlying microservices. By abstracting the complexity of the underlying infrastructure, gateways provide a seamless and user-friendly experience for interacting with the megaservice.
+
+
+### Proposal
+The proposed architecture for the ChatQnA application involves the creation of two megaservices. The first megaservice functions as the core pipeline, comprising four microservices: embedding, retriever, reranking, and LLM. This megaservice exposes a ChatQnAGateway, allowing users to query the system via the `/v1/chatqna` endpoint. The second megaservice manages user data storage in VectorStore and is composed of a single microservice, dataprep. This megaservice provides a DataPrepGateway, enabling user access through the `/v1/dataprep` endpoint.
+
+The Gateway class facilitates the registration of additional endpoints, enhancing the system's flexibility and extensibility. The `/v1/dataprep` endpoint is responsible for handling user documents to be stored in VectorStore under a predefined database name.
The first megaservice will then query the data from this predefined database.
+
+![architecture](https://i.imgur.com/YdsXy46.png)
+
+
+#### Example Python Code for Constructing Services
+
+Users can use the `ServiceOrchestrator` class to build the microservice pipeline and add a gateway for each megaservice.
+
+```python
+class ChatQnAService:
+    def __init__(self, rag_port=8888, data_port=9999):
+        self.rag_port = rag_port
+        self.data_port = data_port
+        self.rag_service = ServiceOrchestrator()
+        self.data_service = ServiceOrchestrator()
+
+    def construct_rag_service(self):
+        embedding = MicroService(
+            name="embedding",
+            host=SERVICE_HOST_IP,
+            port=6000,
+            endpoint="/v1/embeddings",
+            use_remote_service=True,
+            service_type=ServiceType.EMBEDDING,
+        )
+        retriever = MicroService(
+            name="retriever",
+            host=SERVICE_HOST_IP,
+            port=7000,
+            endpoint="/v1/retrieval",
+            use_remote_service=True,
+            service_type=ServiceType.RETRIEVER,
+        )
+        rerank = MicroService(
+            name="rerank",
+            host=SERVICE_HOST_IP,
+            port=8000,
+            endpoint="/v1/reranking",
+            use_remote_service=True,
+            service_type=ServiceType.RERANK,
+        )
+        llm = MicroService(
+            name="llm",
+            host=SERVICE_HOST_IP,
+            port=9000,
+            endpoint="/v1/chat/completions",
+            use_remote_service=True,
+            service_type=ServiceType.LLM,
+        )
+        self.rag_service.add(embedding).add(retriever).add(rerank).add(llm)
+        self.rag_service.flow_to(embedding, retriever)
+        self.rag_service.flow_to(retriever, rerank)
+        self.rag_service.flow_to(rerank, llm)
+        self.rag_gateway = ChatQnAGateway(megaservice=self.rag_service, host="0.0.0.0", port=self.rag_port)
+
+    def construct_data_service(self):
+        dataprep = MicroService(
+            name="dataprep",
+            host=SERVICE_HOST_IP,
+            port=5000,
+            endpoint="/v1/dataprep",
+            use_remote_service=True,
+            service_type=ServiceType.DATAPREP,
+        )
+        self.data_service.add(dataprep)
+        self.data_gateway = DataPrepGateway(megaservice=self.data_service, host="0.0.0.0", port=self.data_port)
+
+    def start_service(self):
+        self.construct_rag_service()
+        self.construct_data_service()
+        self.rag_gateway.start()
+        self.data_gateway.start()
+
+if __name__ == "__main__":
+    chatqna = ChatQnAService()
+    chatqna.start_service()
+```
+
+#### Constructing Services with YAML
+
+Below is an example of how to define microservices and megaservices using YAML for the ChatQnA application. This configuration outlines the endpoints for each microservice and specifies the workflow for the megaservices.
+
+```yaml
+opea_micro_services:
+  dataprep:
+    endpoint: http://localhost:5000/v1/dataprep
+  embedding:
+    endpoint: http://localhost:6000/v1/embeddings
+  retrieval:
+    endpoint: http://localhost:7000/v1/retrieval
+  reranking:
+    endpoint: http://localhost:8000/v1/reranking
+  llm:
+    endpoint: http://localhost:9000/v1/chat/completions
+
+opea_mega_service:
+  mega_flow:
+    - embedding >> retrieval >> reranking >> llm
+  dataprep:
+    mega_flow:
+      - dataprep
+```
+
+```yaml
+opea_micro_services:
+  dataprep:
+    endpoint: http://localhost:5000/v1/dataprep
+
+opea_mega_service:
+  mega_flow:
+    - dataprep
+```
+
+The following Python code demonstrates how to use the YAML configurations to initialize the microservices and megaservices, and set up the gateways for user interaction.
+
+```python
+from comps import ServiceOrchestratorWithYaml
+from comps import ChatQnAGateway, DataPrepGateway
+data_service = ServiceOrchestratorWithYaml(yaml_file_path="dataprep.yaml")
+rag_service = ServiceOrchestratorWithYaml(yaml_file_path="rag.yaml")
+rag_gateway = ChatQnAGateway(rag_service, port=8888)
+data_gateway = DataPrepGateway(data_service, port=9999)
+# Start gateways
+rag_gateway.start()
+data_gateway.start()
+```
+
+#### Example Code for Customizing Gateway
+
+The Gateway class provides a customizable interface for accessing the megaservice. It handles requests and responses, allowing users to interact with the megaservice.
The class defines methods for adding custom routes, stopping the service, and listing available services and parameters. Users can extend this class to implement specific handling for requests and responses according to their requirements. + +```python +class Gateway: + def __init__( + self, + megaservice, + host="0.0.0.0", + port=8888, + endpoint=str(MegaServiceEndpoint.CHAT_QNA), + input_datatype=ChatCompletionRequest, + output_datatype=ChatCompletionResponse, + ): + ... + self.service = MicroService( + service_role=ServiceRoleType.MEGASERVICE, + service_type=ServiceType.GATEWAY, + ... + ) + self.define_default_routes() + + def define_default_routes(self): + self.service.app.router.add_api_route(self.endpoint, self.handle_request, methods=["POST"]) + self.service.app.router.add_api_route(str(MegaServiceEndpoint.LIST_SERVICE), self.list_service, methods=["GET"]) + self.service.app.router.add_api_route( + str(MegaServiceEndpoint.LIST_PARAMETERS), self.list_parameter, methods=["GET"] + ) + + def add_route(self, endpoint, handler, methods=["POST"]): + self.service.app.router.add_api_route(endpoint, handler, methods=methods) + + def start(self): + self.service.start() + + def stop(self): + self.service.stop() + + async def handle_request(self, request: Request): + raise NotImplementedError("Subclasses must implement this method") + + def list_service(self): + raise NotImplementedError("Subclasses must implement this method") + + def list_parameter(self): + raise NotImplementedError("Subclasses must implement this method") + + ... +``` + +## Alternatives Considered +An alternative approach could be to design a monolithic application for RAG instead of a microservice architecture. However, this approach may lack the flexibility and scalability offered by microservices. Pros of the proposed microservice architecture include easier deployment, independent scaling of components, and improved fault isolation.
Cons may include increased complexity in managing multiple services. + +## Compatibility +Potential incompatible interface or workflow changes may include adjustments needed for existing clients to interact with the new microservice architecture. However, careful planning and communication can mitigate any disruptions. + +## Miscs +Performance Impact: The microservice architecture may impact performance metrics, depending on factors such as network latency. But for large-scale user access, scaling out microservices can enhance responsiveness, thereby significantly improving performance compared to monolithic designs. + +By adopting this microservice architecture for RAG, we aim to enhance the flexibility, scalability, and maintainability of the Enterprise AI application deployment, ultimately improving the user experience and facilitating future development and enhancements. + diff --git a/latest/_sources/community/rfcs/24-05-16-OPEA-001-Overall-Design.md.txt b/latest/_sources/community/rfcs/24-05-16-OPEA-001-Overall-Design.md.txt new file mode 100644 index 000000000..7dff63064 --- /dev/null +++ b/latest/_sources/community/rfcs/24-05-16-OPEA-001-Overall-Design.md.txt @@ -0,0 +1,93 @@ +# 24-05-16 OPEA-001 Overall Design + +## Author + +[ftian1](https://github.com/ftian1), [lvliang-intel](https://github.com/lvliang-intel), [hshen14](https://github.com/hshen14) + +## Status + +Under Review + +## Objective + +Have a stable, extensible, secure, and easy-to-use orchestration framework design for OPEA users to quickly build their own GenAI applications. + +The requirements include, but are not limited to: + +1. orchestration planner + + offer config-based definition or low-code capabilities for constructing complex LLM applications. + +2. component registry + + allow users to register new services for building complex GenAI applications + +3. monitoring + + allow users to trace the workflow, including logging, execution status, execution time, and so on. + +4.
scalability + + easily scale within Kubernetes or other deployment technologies in on-premises and cloud environments. + +## Motivation + +This RFC presents the OPEA overall design philosophy, including the overall architecture, workflow, and component design, for community discussion. + +## Design Proposal + +The proposed overall architecture is + +![OPEA Architecture](opea_architecture.png "OPEA Architecture") + +1. GenAIComps + + The suite of microservices, leveraging a service composer to assemble a mega-service tailored for real-world Enterprise AI applications. + +2. GenAIExamples + + The collective list of Generative AI (GenAI) and Retrieval-Augmented Generation (RAG) examples, demonstrating the whole orchestration pipeline. + +3. GenAIInfra + + The containerization and cloud native suite for OPEA, including artifacts to deploy GenAIExamples in a cloud native way, which can be used by enterprise users to deploy to their own cloud. + +4. GenAIEval + + The evaluation, benchmark, and scorecard suite for OPEA, targeting performance on throughput and latency, accuracy on popular evaluation harnesses, safety, and hallucination. + +The proposed OPEA workflow is + +![OPEA Workflow](opea_workflow.png "OPEA Workflow") + +1. Microservice + + Microservices are akin to building blocks, offering the fundamental services for constructing RAG (Retrieval-Augmented Generation) applications. Each microservice is designed to perform a specific function or task within the application architecture. By breaking down the system into smaller, self-contained services, microservices promote modularity, flexibility, and scalability. This modular approach allows developers to independently develop, deploy, and scale individual components of the application, making it easier to maintain and evolve over time. Additionally, microservices facilitate fault isolation, as issues in one service are less likely to impact the entire system. + +2.
Megaservice + + A megaservice is a higher-level architectural construct composed of one or more microservices, providing the capability to assemble end-to-end applications. Unlike individual microservices, which focus on specific tasks or functions, a megaservice orchestrates multiple microservices to deliver a comprehensive solution. Megaservices encapsulate complex business logic and workflow orchestration, coordinating the interactions between various microservices to fulfill specific application requirements. This approach enables the creation of modular yet integrated applications, where each microservice contributes to the overall functionality of the megaservice. + +3. Gateway + + The Gateway serves as the interface for users to access the megaservice, providing customized access based on user requirements. It acts as the entry point for incoming requests, routing them to the appropriate microservices within the megaservice architecture. Gateways support API definition, API versioning, rate limiting, and request transformation, allowing for fine-grained control over how users interact with the underlying microservices. By abstracting the complexity of the underlying infrastructure, gateways provide a seamless and user-friendly experience for interacting with the megaservice. 
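The three-tier relationship described above — microservices as building blocks, a megaservice that orchestrates them, and a gateway as the user-facing entry point — can be sketched in plain Python. This is an illustrative model only; the class and method names below are invented for the sketch and are not the actual GenAIComps API.

```python
# Minimal conceptual model of the OPEA tiers (illustrative, not OPEA code):
# microservices perform one task each, a megaservice chains them into a flow,
# and a gateway routes user requests to the megaservice.

class MicroService:
    """A self-contained unit that performs one specific task."""
    def __init__(self, name, fn):
        self.name = name
        self.fn = fn

    def __call__(self, payload):
        return self.fn(payload)


class MegaService:
    """Orchestrates microservices in a declared order."""
    def __init__(self):
        self.flow = []

    def add(self, service):
        self.flow.append(service)
        return self  # allow chaining: mega.add(a).add(b)

    def run(self, payload):
        # Each microservice's output feeds the next one in the flow.
        for service in self.flow:
            payload = service(payload)
        return payload


class Gateway:
    """Entry point that forwards incoming requests to a megaservice."""
    def __init__(self, megaservice):
        self.megaservice = megaservice

    def handle_request(self, payload):
        return self.megaservice.run(payload)


# Wire a toy RAG-like flow: retrieve context, then generate an answer.
retrieve = MicroService("retriever", lambda q: {"query": q, "context": "doc-1"})
generate = MicroService("llm", lambda d: f"answer({d['query']}|{d['context']})")

mega = MegaService()
mega.add(retrieve).add(generate)
gateway = Gateway(mega)
print(gateway.handle_request("what is OPEA?"))  # -> answer(what is OPEA?|doc-1)
```

The chaining mirrors the `add(...)`/`flow_to(...)` pattern shown in the earlier ChatQnA RFC: the megaservice owns the flow, while the gateway only exposes it.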
+ +## Alternatives Considered + +n/a + +## Compatibility + +n/a + +## Miscs + +- TODO List: + + - [ ] Micro Service specification + - [ ] Mega Service specification + - [ ] static cloud resource allocator vs dynamic cloud resource allocator + - [ ] open telemetry support + - [ ] authentication and trusted env + + diff --git a/latest/_sources/community/rfcs/24-05-24-OPEA-001-Code-Structure.md.txt b/latest/_sources/community/rfcs/24-05-24-OPEA-001-Code-Structure.md.txt new file mode 100644 index 000000000..8d7cafff4 --- /dev/null +++ b/latest/_sources/community/rfcs/24-05-24-OPEA-001-Code-Structure.md.txt @@ -0,0 +1,68 @@ +# 24-05-24 OPEA-001 Code Structure + +## Author + +[ftian1](https://github.com/ftian1), [lvliang-intel](https://github.com/lvliang-intel), [hshen14](https://github.com/hshen14) + +## Status + +Under Review + +## Objective + +Define clear criteria and rules for adding new code to OPEA projects. + +## Motivation + +The OPEA project consists of several repos, including GenAIExamples, GenAIInfra, GenAIComps, and so on. We need a clear definition of where the new code for a given feature should go, to keep a consistent and well-organized code structure.
+ + +## Design Proposal + +The proposed code structure of GenAIInfra is: + +``` +GenAIInfra/ +├── kubernetes-addon/ # the folder implementing additional operational capabilities to Kubernetes applications +├── microservices-connector/ # the folder containing the implementation of microservice connector on Kubernetes +└── scripts/ +``` + +The proposed code structure of GenAIExamples is: + +``` +GenAIExamples/ +└── ChatQnA/ + ├── kubernetes/ + │ ├── manifests + │ └── microservices-connector + ├── docker/ + │ ├── docker_compose.yaml + │ ├── dockerfile + │ └── chatqna.py + ├── chatqna.yaml # The MegaService Yaml + └── README.md +``` + +The proposed code structure of GenAIComps is: + +``` +GenAIComps/ +└── comps/ + └── llms/ + ├── text-generation/ + │ ├── tgi-gaudi/ + │ │ ├── dockerfile + │ │ └── llm.py + │ ├── tgi-xeon/ + │ │ ├── dockerfile + │ │ └── llm.py + │ ├── vllm-gaudi + │ ├── ray + │ └── langchain + └── text-summarization/ +``` + +## Miscs + +n/a diff --git a/latest/_sources/community/rfcs/README.md.txt b/latest/_sources/community/rfcs/README.md.txt new file mode 100644 index 000000000..527628351 --- /dev/null +++ b/latest/_sources/community/rfcs/README.md.txt @@ -0,0 +1,7 @@ +# RFC Archive + +This folder is used to archive all RFCs contributed by the OPEA community. Users can either contribute an RFC directly to this folder or submit it to an OPEA repository's `Issues` page with the `[RFC]: xxx` pattern in the title. The latter will be automatically stored here by an archive tool. + +The file naming convention follows this rule: yy-mm-dd-[OPEA Project Name]-[index]-title.md + +For example, 24-04-29-GenAIExamples-001-Using_MicroService_to_implement_ChatQnA.md diff --git a/latest/_sources/faq.md.txt b/latest/_sources/faq.md.txt new file mode 100644 index 000000000..131aa41d1 --- /dev/null +++ b/latest/_sources/faq.md.txt @@ -0,0 +1,84 @@ +# OPEA Frequently Asked Questions + +## What is OPEA’s mission?
+OPEA’s mission is to offer a validated enterprise-grade GenAI (Generative Artificial Intelligence) RAG reference implementation. This will simplify GenAI development and deployment, thereby accelerating time-to-market. + +## What is OPEA? +The project currently consists of a technical conceptual framework that enables GenAI implementations to meet enterprise-grade requirements. The project offers a set of reference implementations for a wide range of enterprise use cases that can be used out-of-the-box. The project additionally offers a set of validation and compliance tools to ensure the reference implementations meet the needs outlined in the conceptual framework. This enables new reference implementations to be contributed and validated in an open manner. Partnering with the LF AI & Data Foundation places OPEA in the perfect spot for multi-partner development, evolution, and expansion. + +## What problems are faced by GenAI deployments within the enterprise? +Enterprises face a myriad of challenges in the development and deployment of GenAI. New models, algorithms, and fine-tuning techniques, methods for detecting and resolving bias, and approaches to deploying large solutions at scale all continue to evolve at a rapid pace. One of the biggest challenges enterprises come up against is a lack of standardized software tools and technologies from which to choose. Additionally, enterprises want the flexibility to innovate rapidly and extend functionality to meet their business needs while ensuring the solution is secure and trustworthy. The lack of a framework that encompasses both proprietary and open solutions impedes enterprises from charting their destiny. This results in an enormous investment of time and money, impacting time-to-market advantage. OPEA answers the need for a multi-provider, ecosystem-supported framework that enables the evaluation, selection, customization, and trusted deployment of solutions that businesses can rely on. + +## Why now?
+The major adoption and deployment cycle of robust, secure, enterprise-grade GenAI solutions across all industries is at its early stages. Enterprise-grade solutions will require collaboration in the open ecosystem. The time is now for the ecosystem to come together and accelerate GenAI deployments across enterprises by offering a standardized set of tools and technologies while supporting three key tenets – openness, security, and scalability. This will require the ecosystem to work together to build reference implementations that are performant, trustworthy, and enterprise-grade ready. + +## How does it compare to other options for deploying GenAI solutions within the enterprise? +There is no alternative that brings the entire ecosystem together in a vendor-neutral manner and delivers on the promise of openness, security, and scalability. This is our primary motivation for creating the OPEA project. + +## Will OPEA reference implementations work with proprietary components? +Like any other open-source project, the community will determine which components are needed by the broader ecosystem. Enterprises can always extend the OPEA project with other multi-vendor proprietary solutions to achieve their business goals. + +## What does the OPEA acronym stand for? +Open Platform for Enterprise AI + +## How do I pronounce OPEA? +It is said ‘OH-PEA-AY' + +## What companies and open-source projects are part of OPEA? +AnyScale +Cloudera +DataStax +Domino Data Lab +HuggingFace +Intel +KX +MariaDB Foundation +MinIO +Qdrant +Red Hat +SAS +VMware by Broadcom +Yellowbrick Data +Zilliz + +## What is Intel contributing? +OPEA is to be defined jointly by several community partners, with a call for broad ecosystem contribution, under the well-established LF AI & Data Foundation. As a starting point, Intel has contributed a Technical Conceptual Framework that shows how to construct and optimize curated GenAI pipelines built for secure, turnkey enterprise deployment.
At launch, Intel contributed several reference implementations on Intel hardware across Intel® Xeon® 5, Intel® Xeon® 6, and Intel® Gaudi® 2, which you can see in a GitHub repo here. Over time we intend to add to that contribution, including a software infrastructure stack to enable fully containerized AI workload deployments, as well as potentially implementations of those containerized workloads. + +## When you say Technical Conceptual Framework, what components are included? +The models and modules can be part of an OPEA repository, or be published in a stable open repository (e.g., Hugging Face) and cleared for use by an OPEA assessment. These include: + +* GenAI models – Large Language Models (LLMs), Large Vision Models (LVMs), multimodal models, etc. +* Ingest/Data Processing +* Embedding Models/Services +* Indexing/Vector/Graph data stores +* Retrieval/Ranking +* Prompt Engines +* Guardrails +* Memory systems + +## What are the different ways partners can contribute to OPEA? +There are different ways partners can contribute to this project: + +* Join the project and contribute assets in terms of use cases, code, test harness, etc. +* Provide technical leadership +* Drive community engagement and evangelism +* Offer program management for various projects +* Become a maintainer, committer, and adopter +* Define and offer use cases for various industry verticals that shape the OPEA project +* Build the infrastructure to support OPEA projects + +## Where can partners see the latest draft of the Conceptual Framework spec? +A version of the spec is available in the docs repo in this project. + +## Is there a cost for joining? +There is no cost for anyone to join and contribute. + +## Do I need to be a Linux Foundation member to join? +Anyone can join and contribute. You don’t need to be a Linux Foundation member. + +## Where can I report a bug? +Bug and vulnerability reports can be sent to info@opea.dev.
+ + + + diff --git a/latest/_sources/framework.md.txt b/latest/_sources/framework.md.txt new file mode 100644 index 000000000..a5f1583d4 --- /dev/null +++ b/latest/_sources/framework.md.txt @@ -0,0 +1,857 @@ +# Open Platform for Enterprise AI (OPEA) Framework Draft Proposal + +Rev 0.5 April 15, 2024 + +Initial draft by Intel. Contacts for content – Ke Ding (ke.ding@intel.com), Gadi Singer +(gadi.singer@intel.com) + +Feedback welcome at info@opea.dev + +## 1. Summary + +OPEA (Open Platform for Enterprise AI) is a framework that enables the creation and evaluation of +open, multi-provider, robust, and composable GenAI solutions that harness the best innovation across +the ecosystem. + +OPEA is an ecosystem-wide program within the Linux Foundation Data & AI framework that aims to +accelerate enterprise adoption of GenAI end-to-end solutions and realize business value. OPEA will +simplify the implementation of enterprise-grade composite GenAI solutions, including Retrieval-Augmented +Generation (RAG). The platform is designed to facilitate efficient integration of secure, +performant, and cost-effective GenAI workflows into business systems and manage their deployment. + +This platform’s definition will include an architectural blueprint, a comprehensive set of components for +GenAI systems, and a suite of specifications* for both individual components and entire systems. It will +also include tools for building, tuning, and evaluating end-to-end GenAI workflows. These definitions will +address key aspects such as performance, feature set, trustworthiness (security and transparency), and +readiness for enterprise-grade applications. The specifications will also include a set of reference flows +and demos that can be easily reproduced and adopted.
+ + + +Figure 1-1: OPEA’s Core Values + +_Disclaimer – The term ‘specification’ is used throughout this draft whitepaper and appendix as a broad +working term, referring generally to a detailed description of systems and their components. However, it +is important to note that this term might be replaced or updated based on more precise characterization +and applying the Linux Foundation licensing considerations._ + + +Figure 1-2 OPEA – proposed Construction and Evaluation Framework for AI Solutions + +We are now in an era where AI algorithms and models, initially developed in research +environments and later introduced into consumer-focused settings, are transitioning to widespread +enterprise deployment. This transition provides an opportunity for partners to leverage decades of +insights into enterprise-scale computing, security, trustworthiness, and datacenter integration, among +other areas, to accelerate AI adoption and unlock its potential value. + +## 2. Introduction + +Recently, the practices for developing AI solutions have undergone significant transformation. Instead of +considering an AI model (e.g., a GenAI LLM) as the complete solution, these models are now being +integrated into more comprehensive end-to-end AI solutions. These solutions consist of multiple +components, including retrieval subsystems with embedding agents, a Vector Database for efficient +storage and retrieval, and prompt engines, among others. This shift has led to the emergence of +Composition Frameworks (such as LangChain or Haystack), which are used to assemble these +components into end-to-end GenAI flows, like RAG solutions, for the development and deployment of AI +solutions. + +The ecosystem offers a range of composition frameworks: some are open-source (e.g., LangChain and +LlamaIndex), while others are closed-source and come bundled with professional services (e.g., +ScaleAI). Additionally, some are offered by cloud service providers (e.g.
AWS) or hardware/software +providers (e.g., NVIDIA). However, as of Q2 2024 these represent individual perspectives and offerings +for the intricate task of building an end-to-end AI solution. + +### 2.1 Key capabilities + +OPEA will offer key capabilities in both the Construction and Evaluation of end-to-end composite GenAI +solutions that are built with retrieval augmentation. As a construction platform, OPEA will enable +creation of RAG-enabled AI solutions directly or through the use of compositional tools such as +LangChain and Haystack. As an evaluation framework, OPEA will provide the means to assess and grade +end-to-end composite GenAI solutions on aspects derived from four domains – performance, features, +trustworthiness, and enterprise-readiness. + +#### 2.1.1 Construction of GenAI solutions, including retrieval augmentation + +Composing an end-to-end AI solution (including retrieval augmentation) can be done by combining +models and modules from multiple providers. + +OPEA will offer or refer to a set of building blocks – models and modules – that can be called in a flow to +achieve an AI task or service. The models and modules can be part of the OPEA repository, published in a +stable open repository (e.g., Hugging Face), or proprietary / closed source and cleared for use by an +OPEA assessment. + +* GenAI models – Large Language Models (LLMs), Large Vision Models (LVMs), multimodal models, etc. +* Other modules - AI system components (other than LLM/LVM models) including the Ingest/Data Processing module, Embedding Models/Services, Vector Databases (aka Indexing or Graph data stores), Prompt Engines, Memory systems, etc. + +Each module for the system will be characterized by its expected functionality and attributes. Those +will be evaluated for every particular implementation choice (see the following evaluation section). There +will be multiple options offered from various providers for each module and model, to allow for choice +and diversity.
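The idea of modules characterized by expected functionality and attributes, with multiple provider options per module type, can be sketched as a simple component registry. All class names, fields, and providers below are hypothetical illustrations, not part of the OPEA specification.

```python
# Illustrative sketch (not OPEA code): each building block is described by its
# module type, provider, expected functionality, and measurable attributes.
# Several provider implementations can be registered per module type,
# supporting the "choice and diversity" goal described in the text.
from dataclasses import dataclass, field


@dataclass
class ModuleSpec:
    module_type: str   # e.g. "embedding", "vector_db", "llm"
    provider: str      # who supplies this implementation
    functionality: str # expected behavior, in plain language
    attributes: dict = field(default_factory=dict)  # e.g. latency, context size


class ComponentRegistry:
    """Keeps every registered implementation option per module type."""
    def __init__(self):
        self._modules = {}

    def register(self, spec):
        self._modules.setdefault(spec.module_type, []).append(spec)

    def options(self, module_type):
        # All interchangeable implementations for one slot in a flow.
        return self._modules.get(module_type, [])


registry = ComponentRegistry()
registry.register(ModuleSpec("llm", "provider-a", "text generation",
                             {"context_size": 8192}))
registry.register(ModuleSpec("llm", "provider-b", "text generation",
                             {"context_size": 32768}))

# Two interchangeable LLM options are now available for flow composition.
print([s.provider for s in registry.options("llm")])
```

A flow builder could then select among `options("llm")` by attribute (for example, required context size), which is one way the evaluation results described in the next section could feed back into construction.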
+ +This platform consists of a set of compositional capabilities that allow for building custom agents, +customizing AI assistants, and creating a full end-to-end GenAI flow that includes retrieval augmentation +as well as other functionality when needed. The platform will also include or reference tools for fine- +tuning as well as optimization (like quantization assists) to support creation of performant, robust +solutions that can run locally on target enterprise compute environments. Similar to the building blocks, the +composition capabilities could be part of the OPEA repository, published in a stable open repository (e.g., +Hugging Face), or offered by the ecosystem (like LangChain, LlamaIndex, and Haystack). + +An important part of the compositional offering will be a set of validated reference flows that are ready +for downloading and recreation in the users’ environment. Among the provided reference +flows, there will be domain-independent flows (like a RAG flow for language-based Q&A, or a +multimodal flow to interact with one’s images and videos) that were tuned for different HW providers +and settings. There will also be domain-specific flows like a financial services end-to-end flow or a nutrition +adviser, which are sometimes called microservices. + +A common visual language is used to depict the components of each reference flow +being provided. + +#### 2.1.2 Evaluation of GenAI solutions, including retrieval augmentation + +OPEA will provide means and services to fully evaluate and grade components and end-to-end GenAI +solutions across four domains – performance, functionality, trustworthiness, and enterprise-readiness. +The evaluation can be done on a flow created within OPEA, or created elsewhere but requesting to be +assessed through the platform. + +Some of the evaluation tools will be part of the OPEA repository, while others will be references to +selected benchmarks offered by the ecosystem.
+ +OPEA will offer tests for self-evaluation that can be done by the users. Furthermore, it will have the +engineering setup and staffing to provide evaluations per request. + +The OPEA evaluations can be viewed at the following levels: + +* Assessment – Detailed tests or benchmarks done for particular modules or attributes of the +end-to-end flow. Assessments will be elaborate and specific, checking for the functionality and +characteristics specified for that module or flow. +* Grading - Aggregation of the individual assessments into a grade per each of the four domains – +Performance, Features, Trustworthiness, and Enterprise-readiness. The aggregate grade per +domain could be L1 Entry Level; L2 Market Level; or L3 Advanced Level. +* Certification – It has not yet been decided if certification will be offered as part of OPEA. +However, the draft proposal for consideration is to allow for an OPEA Certification that will be +determined by ensuring a minimum of Level 2 grading is achieved on all four domains. + + +Figure 2-1 Key capabilities provided by OPEA + +Appendix A of this document is an early draft of the proposed specification and sample reference flows. + +## 3. Framework Components, Architecture and Flow + +The OPEA definition (see Appendix A) includes characterization of components of State-of-the-Art (SotA) +composite systems including retrieval-augmentation and their architecture as a flow and SW stack. + +There are six sections in Appendix A, which will provide a starting point for a more detailed and +elaborate joint OPEA definition effort: + +* A1: System Components - List of ingredients that comprise a composed system, along with their +key characteristics. Some systems that will be evaluated may only include a subset of these +components. +* A2: SW architecture - Diagram providing the layering of components in a SW stack +* A3: System flows – Diagram[s] illustrating the flow of end-to-end operation through the relevant +components.
+* A4: Select specifications at system and component level +* A5: Grading – Grading of systems being evaluated based on performance, features, +trustworthiness, and enterprise-grade readiness. +* A6: Reference Flows – List of reference flows that demonstrate key use-cases and allow for +downloading and replication for a faster path to create an instantiation of the flow. + +Assumptions for the development of OPEA sections include: + +* OPEA is a blueprint for composition frameworks and is not set to compete with the popular +frameworks. It is set to help assess the pros and cons of various solutions and allow for +improved interoperability of components. +* In production, it is likely that many customers will deploy their own proprietary pipelines. +* This framework blueprint is complementary and is intended to encourage interoperability of +system components as well as the addition of specialized value such as HW-aware optimizations, +access to innovative features, and a variety of assistants and microservices. +* It is flexible and allows models and other components to be easily plugged in and replaced. The ability to +exchange components is an important factor in the fast progression of the field. +* It provides an environment to experiment with solution variations - e.g., what is the impact on end-to-end +system performance when replacing a generic re-ranking component with a particular +provider’s re-ranking component? + +It should be noted that the final shaping of the framework components, architecture, and flows will be +jointly defined by a technical committee as the full OPEA definition and governance structure is +established. It is also expected that there will be a regular cadence of updates to the spec to reflect the +rapidly shifting State-of-the-Art in the space. + +## 4.
Assessing GenAI components and flows + +One of the important benefits to the ecosystem from the development and broad use of OPEA is a +structured set of evaluations that can provide trusted feedback on GenAI flows – whether composed +within OPEA, or composed elsewhere but with the visibility and access that allow for evaluation. +Evaluations can be done by assessing individual components or complete end-to-end GenAI solutions. +Evaluations in the OPEA context refer to assessment of individual aspects of a solution – like its latency +or accuracy per a defined suite of tests. Assessments are covered in this section. Grading is an aggregation +of assessments and is covered in the next section. + +Components and entire end-to-end flows will be evaluated in four domains – performance, features, +trustworthiness, and enterprise-readiness. + +Performance can be evaluated at the component level - e.g., Vector Database latency over a given large, +indexed dataset, or latency and throughput of an LLM model. Moreover, performance needs to be +evaluated for end-to-end solutions that perform defined tasks. The term ‘performance’ refers to aspects +of speed (e.g., latency) and capacity (e.g., memory or context size) as well as accuracy of results. + +OPEA can utilize existing evaluation specs like those used by SotA RAG systems and other standard +benchmarks wherever possible (e.g., MMLU). As for functionality, there are benchmarks and datasets +available to evaluate particular target functionality such as multi-lingual (like FLORES) or code +generation (e.g., HumanEval). + +For evaluating trustworthiness/hallucination safety, the spec will leverage existing benchmarks such +as the RGB benchmark and TruthfulQA where possible.
+ +Some assessment of enterprise readiness would include aspects of scalability (how large a dataset the +system can handle, size of the vector store, size and type of models), infrastructure readiness (cloud vs. bare +metal), and ease of software deployment (any post-OPEA steps required for broad deployment). One of +the measures that could be assessed in this category is the overall Cost/TCO of a full end-to-end GenAI flow. + +Where reliable benchmarks or tests for aspects of composite GenAI solutions are not freely available, +efforts will be made to ensure the creation of such. As many of the current (early 2024) benchmarks +focus on performance and features, there will be an effort to complement those as needed for +assessing trustworthiness and enterprise-readiness. + +The development of assessments should use learnings from similar evaluations when available. For +example, referring to RAG evaluation as reported by Cohere’s Nils Reimers. See more details here: +* Human preference +* Average accuracy of an E2E +* Multi-lingual +* Long-context “Needles in Haystack” +* Domain specific + +Assessment development will start with a focus on primary use-cases for RAG flows, such as +Open Q&A. It will allow for comparison with common industrial evaluations (see Cohere, GPT-4). + + +## 5. Grading Structure + +The OPEA evaluation structure refers to specific tests and benchmarks as ‘assessments’ – see the previous +section for details. ‘Grading’ is the part of OPEA evaluation that aggregates multiple individual +assessments into one of three levels in each of the four evaluation domains – performance, features, +trustworthiness, and enterprise readiness. + +The following draft of a grading system is for illustration and discussion purposes only. A grading +system should be defined and deployed based on discussions in the technical review body and any other +governance mechanism that will be defined for OPEA.
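In the same illustrative spirit, the aggregation of individual assessments into per-domain L1/L2/L3 grades, plus the draft "at least L2 in all four domains" certification rule mentioned later in this document, can be sketched as follows. The numeric thresholds and scores below are invented placeholders; actual bands would be set by the technical review body.

```python
# Hypothetical sketch of the grading idea only (not a proposed scoring scheme):
# per-domain assessment scores are averaged and mapped to L1/L2/L3 bands.
DOMAINS = ["performance", "features", "trustworthiness", "enterprise_readiness"]


def grade(score):
    """Map an aggregate 0-100 score to a level; bands are illustrative."""
    if score >= 80:
        return "L3"
    if score >= 50:
        return "L2"
    return "L1"


def grade_flow(assessments):
    """assessments: {domain: [individual scores]} -> {domain: level}"""
    return {d: grade(sum(s) / len(s)) for d, s in assessments.items()}


def opea_certifiable(grades):
    """Draft rule from the text: at least L2 in all four domains."""
    return all(grades.get(d, "L1") in ("L2", "L3") for d in DOMAINS)


results = grade_flow({
    "performance": [70, 60],          # avg 65  -> L2
    "features": [90, 85],             # avg 87.5 -> L3
    "trustworthiness": [55],          # avg 55  -> L2
    "enterprise_readiness": [40],     # avg 40  -> L1
})
print(results)
print(opea_certifiable(results))  # False: one domain is below L2
```

Because bands are relative to the market at a point in time, the thresholds in `grade` would need periodic revision, matching the document's note that L1/L2/L3 goal posts must move as the state of the art advances.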
+ +To ensure that compositional systems are addressing the range of care-abouts for enterprise +deployment, the grading system has four categories: +* Performance – Focused on overall system performance and perf/TCO +* Features – Mandatory and optional capabilities of system components +* Trustworthiness – Ability to guarantee quality, security, and robustness. This will take into +account relevant government or other policies. +* Enterprise Readiness – Ability to be used in production in enterprise environments. + +The Performance and Features capabilities are well understood by the communities and industry today, +while Trustworthiness and Enterprise Readiness are still in their early stages of assessment and +evaluation when it comes to GenAI solutions. Nevertheless, all domains are essential to ensure +performant, secure, privacy-aware, robust solutions ready for broad deployment. + +The grading system is not intended to add any particular tests or benchmarks. All individual tests are to +be part of the assessments. Rather, the grading system’s goal is to provide an overall rating as to the +performance, functionality, trustworthiness, and enterprise readiness of a GenAI flow over a multitude +of individual assessments. It is expected to provide an abstracted and simplified view of the GenAI flow +under evaluation. It will attempt to address two basic questions – what is the level of capabilities of a +flow relative to other flows evaluated at that time, and does it meet certain necessary requirements +(such as for security and enterprise readiness) for robust deployment of GenAI solutions at scale. A +grading system establishes a mechanism to evaluate different constructed AI solutions (such as +particular RAG flows) in the context of the OPEA framework. + +For each category, the assessments will be set with 3 levels: +* L1 – Entry Level – Limited capabilities.
The solution might be seen as less advanced or performant relative to other solutions assessed for similar tasks. It might encounter issues in deployment (if there are deficiencies in trustworthiness or enterprise readiness).
* L2 – Market – Meets market needs. The solution represents the mid-range of systems being reviewed and assessed. It can be safely deployed in production enterprise environments and is expected to meet prevalent standards on security and transparency.
* L3 – Advanced – Exceeds average market needs. The solution represents the top range of components or end-to-end GenAI flows being reviewed and assessed at the time. It meets or exceeds all security, privacy, transparency, and deployment-at-scale requirements.

The grading system can be used by GenAI users to ensure that the solution being evaluated meets ecosystem expectations in a field that is moving exceptionally fast. It can highlight exceptional solutions or point out areas of concern. The structured approach across the four domains ensures that the combined learnings of the ecosystem at any given time are reflected in the feedback to prospective users of a particular GenAI solution. Naturally, the goal posts of what is defined as L1/L2/L3 need to be updated on a regular basis as the industry pushes the GenAI state of the art forward.


Figure 5-1 Overall view of the grading system across four domains

The grading system can play a different role for the providers of models, building blocks (modules), and complete end-to-end GenAI solutions. Providers can get structured and impartial feedback on the strengths and weaknesses of their offering compared with the rest of the market. An articulation of all key areas for enterprise deployment is expected to guide providers towards a more robust and complete delivery and continuous improvement for broad enterprise deployment.
It also serves to highlight outstanding solutions, providing them tailwinds as they present and differentiate their offerings.

If and when certification becomes part of the framework (discussion and decisions to be made at a later stage), it is assumed that a system needs to be at least at Level 2 in every aspect to be "OPEA Certified". Such certification can increase the confidence of both providers and users that the GenAI solution being evaluated is competitive and ready for broad deployment, stopping short of promising a guarantee of any sort.

The assessment test suites and associated grading will allow ISVs and industry solution adopters to self-test, evaluate, and grade themselves on the various metrics. The test suite will be composed of applicable tests/benchmarks currently available in the community; where no standard benchmarks exist, new tests will be developed. For each of these metrics there will be a grading mechanism to map particular score ranges to L1, L2, or L3 at that time. These ranges will be updated periodically to reflect advancements in the field.

Figure 5-2 illustrates some of the aspects to be evaluated in the four domains. Yellow-highlighted examples show the minimal assessments needed for each of the domains. Blue-highlighted examples show the next level of assessments, indicating higher-level capabilities of the RAG pipeline. The next and highest levels of assessments are indicated by text with no color.


Figure 5-2 Capabilities and Testing Phases

## 6. Reference flows

Reference flows are end-to-end instantiations of use cases within the OPEA framework. They represent a specific selection of interoperable components to create an effective implementation of a GenAI solution. Reference flow documentation and links need to include the comprehensive information necessary for users of the framework to recreate and execute the flow, reproducing the results reported for the flow.
The reference flow documentation will provide links to the required components (which may come from multiple providers) and the necessary scripts and other software required to run them.

Several flows will exclusively focus on open models and other components, providing full transparency when necessary. Other flows may include proprietary components that can be called/activated within those flows. However, the components referred to in a reference flow must be accessible to OPEA users, whether they are open source or proprietary, free to use or fee-based.

Reference flows serve several primary objectives:
* Demonstrate representative instantiations: Within the OPEA framework, reference flows showcase specific uses and tasks. Given the framework's inherent flexibility, various combinations of components are possible. Reference flows demonstrate how specific paths and combinations can be effectively implemented within the framework.
* Highlight the framework's potential: By offering optimized reference flows that excel in performance, features, trustworthiness, and enterprise readiness, users can gain insight into what can be achieved. These flows serve as valuable learning tools for users' AI deployment goals and planning.
* Facilitate easy deployment: Reference flows are designed to be accessible and easy to instantiate, letting users replicate a functional flow within their environment with minimal effort and make subsequent modifications as needed.
* Encourage innovation and experimentation: Allow users in the ecosystem to experiment and innovate with a broad set of flows and maximize the value for their end-to-end use cases.

OPEA will deploy and evolve a visualization language to capture the blueprint flows (e.g., a base flow for RAG chat/Q&A) as well as to document the choices made for every reference flow.
The visualization has a legend (see Figure 6-1) that illustrates the key choices in the reference flow, such as the sequence of functions or containerization (see Figure 6-2), as well as the implementation choices for particular models and modules (see Appendix A, section A6).


Figure 6-1 Legend for Blueprint and Reference Flows


Figure 6-2 Example of blueprint RAG flow


The Reference Flows section of the specification (section A6 in Appendix A) provides an initial catalog of reference flows, demonstrating common tasks and diverse combinations of hardware and AI components. As this collection of reference flows is extended, it will cover a diverse set of solution providers and variations of HW (Intel, NVIDIA, and others) as well as AI models, modules, and constructions.


## Appendix A – Draft OPEA Specifications

**Rev 0.1 April 15, 2024**

The draft specifications are intended for illustration and discussion purposes. The appendix has six sections:

* A1: System Components – List of ingredients that comprise a composed system, along with their key characteristics.
* A2: SW Architecture – Diagram providing the layering of components in a SW stack.
* A3: System Flows – Diagram[s] illustrating the flow of end-to-end operation through the relevant components.
* A4: Select specifications at system and component level.
* A5: Grading – Grading of systems being evaluated based on performance, features, trustworthiness, and enterprise-grade readiness.
* A6: Reference Flows – List of reference flows that demonstrate key use cases and allow for downloading and replication for a faster path to creating an instantiation of the flow.

This is an early draft of the OPEA framework specification. It provides an initial view of the content and is expected to be substantially expanded in future revisions.

_Disclaimer – The term 'specification' is used throughout this draft whitepaper and appendix as a broad working term, referring generally to a detailed description of systems and their components. However, it is important to note that this term might be replaced or updated based on more precise characterization and applying the Linux Foundation licensing considerations._

### A1: System Components

| Components | Description | OSS Examples | Proprietary Examples |
| ---------- | ----------- | ------------ | -------------------- |
| Agent framework | Orchestration software for building and deploying workflows that combine information retrieval components with LLMs, for building AI agents with contextualized information | LangChain, LlamaIndex, Haystack, Semantic Kernel | |
| Ingest/Data Processing | Software components used to enhance the data that is indexed for retrieval, for example: processing, cleaning, normalization, information extraction, chunking, tokenization, metadata enhancement | NLTK, spaCy, HF Tokenizers, tiktoken, SparkNLP | |
| Embedding models/service | Models or services that convert text chunks into embedding vectors to be stored in a vector database | HF Transformers, S-BERT | HF TEI, OpenAI, Cohere, GCP, Azure embedding APIs, JinaAI |
| Indexing/Vector store | Software for indexing information (sparse/vector) and retrieving it given a query | Elasticsearch, Qdrant, Milvus, ChromaDB, Weaviate, FAISS, Vespa, HNSWLib, SVS, PLAID | Pinecone, Redis |
| Retrieval/Ranking | A SW component that can re-evaluate the relevancy order of existing contexts | S-BERT, HF Transformers, Bi/Cross-encoders, ColBERT | Cohere |
| Prompt engine | A component that creates task-specific prompts given queries and contexts, and tracks user sessions (maintaining history/memory) | LangChain Hub | |
| Memory | Conversation history in memory and/or a persistent database | LangChain Memory module, vLLM (automatic prefix caching) | |
| LLM engine/service | LLM inference engine that generates text responses based on given prompts and retrieved contexts | vLLM, Ray, TensorRT-LLM | HF TGI, Deci Infery |
| LLM Models | Open-source and closed-source models | Llama2-7B/13B, Falcon 40B, Mixtral-7b, Gemma, etc. | Llama2-70B, OpenAI, Cohere, Gemini, etc. |
| Guardrails | A software component for enforcing compliance, filtering, and safe responses | LLM Guard | Purple Llama, OpenAI safety control, NeMo Guardrails |
| Evaluation | Methods to evaluate compliance, performance, accuracy, and error rate of the LLM response | Recall, MAP, MTEB, MTBench, MMLU, TriviaQA, TruthfulQA, etc. | |


Figure A1.1 List of key components.

### A2: SW Architecture

The OPEA software architecture supports model selection and data integration across popular user-facing frameworks. It leverages popular agent frameworks (aka orchestration frameworks or AI construction platforms) for developer productivity and availability of platform optimizations.
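
The composition captured in Figure A1.1 (embedding model, vector store, prompt engine, LLM engine) can be made concrete with a deliberately tiny, self-contained sketch. Everything below is a stand-in: bag-of-words counts replace a real embedding model, an in-memory list replaces a vector database, and the prompt would be handed to an LLM serving engine. None of this is OPEA code or any real framework's API:

```python
# Toy sketch of the RAG composition in Figure A1.1:
# embed -> index -> retrieve -> build prompt -> (generate).
# All components are illustrative stand-ins for the real services.
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Stand-in for an embedding model: bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class VectorStore:
    # Stand-in for the indexing/vector store component.
    def __init__(self) -> None:
        self.docs: list[tuple[Counter, str]] = []

    def add(self, text: str) -> None:
        self.docs.append((embed(text), text))

    def retrieve(self, query: str, k: int = 1) -> list[str]:
        q = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[0]), reverse=True)
        return [text for _, text in ranked[:k]]

def build_prompt(query: str, contexts: list[str]) -> str:
    # Stand-in for the prompt engine: combine retrieved context and query.
    return "Context:\n" + "\n".join(contexts) + f"\nQuestion: {query}\nAnswer:"

store = VectorStore()
store.add("OPEA composes GenAI components into end-to-end flows.")
store.add("Vector databases store embeddings for retrieval.")
query = "What does OPEA compose?"
prompt = build_prompt(query, store.retrieve(query))
# In a real flow, `prompt` would now be sent to an LLM engine/service.
```

A real instantiation would swap each stand-in for a component from Figure A1.1 (for example, an embedding service for `embed`, a vector database for `VectorStore`, and an inference server for the final generation step) without changing the overall shape of the pipeline.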

Tuning of the solutions leverages platform optimizations via popular domain frameworks, such as the Hugging Face ecosystem, to reduce developer complexity and provide flexibility across platforms.


Figure A2.1 – OPEA solution stack.


### A3: System Flows

Figure A3.1 – Main OPEA system RAG flow.

### A4: Select Specifications

Evaluating a composite generative AI system requires a view of end-to-end capabilities as well as an assessment of individual components.

#### A4.1 End-to-end assessment

Following are some examples of assessments addressing the four domains: performance, features, trustworthiness, and enterprise readiness.

##### Performance
 * Overall System Performance
   * Latency (first-token latency, average token latency, streaming vs. non-streaming output)
   * Throughput
 * Given a fixed combination of the various components of RAG (a specific vendor instance for each component), measure overall system performance.
 * For a specific task/domain, list the combination that would give the best system performance.
 * Q&A evaluation (accuracy)
   * Task: Open Q&A
   * Databases: NQ, TriviaQA, and HotpotQA
   * Metric: Average Accuracy
   * Indexing: KILT Wikipedia

##### Features / Functionality

 * Functional
   * Features – multimodal, multi-LLM, multiple embedding model choices, multiple embedding DBs, context length
   * Context Relevance (context precision/recall)
   * Groundedness/faithfulness
   * Answer Relevance
 * Multi-step reasoning
   * Task: 3-shot multi-hop REACT agents
   * Databases: Wikipedia (HotPotQA), Internet (Bamboogle)
   * Metric: Accuracy
   * Test sets: Reflexion, Ofir Press
 * Multi-lingual
   * Task: Semantic Search
   * Search Quality
   * Metric: nDCG@10
   * 18 languages
   * Benchmark: MIRACL
 * Multi-lingual
   * Tasks: Multilingual MMLU, Machine Translation
   * Metric: Accuracy, BLEU
   * French, Spanish, Italian, German, Portuguese, Japanese, Korean, Arabic, and Chinese
   * Benchmarks: FLORES, MMLU
 * Conversational agent and function calling
   * Task: conversational tool-use and single-turn function-calling capabilities
   * Benchmark 1: Microsoft's ToolTalk
   * Benchmark 2: Berkeley's Function Calling Leaderboard (BFCL)
   * Tool-use metric: soft success rate
   * Function calls: function pass rate
 * Human reference on enterprise RAG use cases
   * Domains: Customer support, Workplace support (Tech), Workplace Assistant (Media), Tech FAQ
   * Metric: Win ratio vs. Mixtral

##### Enterprise readiness

Enterprise readiness assessment involves assessing the following:
1. Scalability
2. Production deployability
3. Updatability
4.
Observability/Debuggability

Scalability is associated with the ability of a RAG system to scale the size/dimensions of different components, with example metrics such as:
* Vector DB size
* Dimensionality of the retriever (the value of K in top-K documents)
* Maximum context length supported by the generator
* Parameter size of the generator models
* Embedding dimension size

Production deployability readiness includes capabilities such as:
* Efficient inference serving
* Integrations with different enterprise systems such as Slack/Workday/SAP/databases
* Enterprise-grade RAS capabilities
* Service Level Agreements (SLAs) on factuality, verifiability, and performance enforceability

Updatability includes capabilities for:
* Rolling upgrade
* Online upgrade
* Component-level upgrade

Observability/Debuggability includes capabilities for:
* Error detection and attribution to a component
* Early detection of component degradation
* Trace generation to debug failures (functional and performance)
* Traceability of each intermediate step (prompts for chained LLMs)

Examples of observability include Databricks Inference Tables, Phoenix Open Inference Traces, and LangSmith observability/monitoring features.


#### A4.2 Individual Components Assessment

Evaluation of individual components (modules) will include:
* Data preprocessing pipeline
* Embedding – quality/storage/processing time
* Chunker, retriever, and re-ranker
* Generator LLM – quality/latency/context length/reasoning ability/function calling/tool usage
* Auto evaluation vs. manual evaluation
* Observability
* Guardrails
* Prompting
* Output generation – structured/grammar/output types (JSON/text)

Below is an early example of the next level of articulation of metrics expected for each major component.

Component Name: Retriever
* Metric: Normalized Discounted Cumulative Gain@10 with BEIR benchmark datasets or other QA datasets
* Metric: Context Recall@k
* Metric: Context Precision@k
* Metric: Hit Rate

Component Name: LLM/Generation
* Metric: Faithfulness – how factually correct the generated answer is (computed as a RAGAS metric between 0 and 1)
* Metric: Answer Relevance – how relevant the generated answer is to the query (computed as a RAGAS metric between 0 and 1)


### A5: Grading

To ensure that compositional systems address the range of care-abouts for enterprise deployment, the grading system has four categories:
* Performance – Focused on overall system performance and perf/TCO
* Features – Mandatory and optional capabilities of system components
* Trustworthiness – Ability to guarantee quality, security, and robustness.
* Enterprise Readiness – Ability to be used in production in enterprise environments.

For each category, the assessments will be set at three levels:
* L1 – Entry Level – Limited capabilities. The solution is acceptable for a PoC, but not for production.
* L2 – Market – Meets market needs. Can be deployed in production.
* L3 – Advanced – Exceeds market needs.

Part of the recommendation is to have a certification process (if and when certification becomes part of the framework). It is assumed that a system needs to be at least at Level 2 in every aspect to be "OPEA Certified".

#### A5.1 Performance Grading

Performance grading is based on running a set of vertical-specific end-to-end use cases on the full system and capturing the relevant metrics during the run.

 * E2E/System View
   * Vendors have flexibility to innovate/differentiate their implementations within the black box
 * Running a fixed set of use cases
   * Covering different vertical scenarios
   * Minimum level of accuracy and reliability
 * Input datasets for benchmarks
   * Open/publicly available
   * Automatic generation
 * Scale factors
   * Supports different input magnitude sizes
 * Metrics
   * First-token latency, overall latency, throughput, cost, consistency
   * Formula to aggregate metrics into a final score
   * Vertical-specific metrics

Performance

The performance grade is based on a set of 'black box' end-to-end RAG benchmarks based on real use cases. Each solution submitted to OPEA will be measured against these benchmarks. Performance measurements will include latency, throughput, scalability, accuracy, and consistency.

* Level 1 – Baseline benchmark complete.
* Level 2 – Meets the performance levels expected for the bulk of GenAI solutions performing similar benchmarks/tasks.
* Level 3 – Exceeds the performance of most solutions being evaluated at that time; a top-tier solution for the tasks evaluated.

Figure A5.1 – Performance Grading


#### A5.2 Features Grading

Feature grading consists of running functional tests to exercise system capabilities in a number of different domains. Each domain will have its own score.

 * Interoperability/API
   * Functional tests for each interface
   * Different granularity levels for components
 * Open interfaces for 3rd-party data sources
   * Should enable multiple types of data sources
 * Platform capabilities and AI methods
   * Ingest, inference, fine-tuning
   * GenAI and reinforcement learning
 * User experience
   * Ease of use
   * Management tools – single pane, inter-vendor
   * GUI requirements
   * Developer tools
 * Deployment models
   * Orchestration
   * K8s, hypervisor
 * Compliance
   * Potential certification (if and when it becomes part of the framework) based on functional testing

Features

Features are evaluated for interoperability, platform capabilities, user experience (ease of use), AI methods being applied, and specialized functionality.
* Level 1 – Single model and access to few data sources; limited data ingest; basic or no development tools; basic UI; bare metal, manual install.
* Level 2 – Multiple models and access to diverse enterprise data sources; full data ingest; basic fine-tuning; flexible pipelining of modules in the flow; basic agent controls.
* Level 3 – Natively supports multimodal models and data sources; advanced development tools with SotA fine-tuning and optimization capabilities; leading specialized features.

Figure A5.2 – Feature Grading


#### A5.3 Trustworthiness Grading

Trustworthiness and responsible AI are evolving in an operational sense; see NIST trustworthy and responsible AI and the EU AI Act. While these efforts evolve, we propose in the interim grading solution trustworthiness along the axes of security, reliability, transparency, and confidence:

 * Transparency
   * Open-source models and code. This provides visibility into the actual code running, with the ability to verify versions and signed binaries.
   * Open standards, reusing existing standards.
   * Data sets used in model training, which allows analysis of data distribution and any biases therein.
For instance, if a cancer detection model was trained on populations that are very diverse, ethnically (genome) or in environment (exposure to carcinogens), it carries a risk of applicability when used for individuals that are not representative of the training set.
   * Citing the sources/documents used in generating responses, protecting against hallucinations. This is one of the chief benefits of RAG.
   * Meeting regulatory requirements such as ISO 27001, HIPAA, and FedRAMP as appropriate.
 * Security
   * Role-based access control: segmented access per user role, regardless of same-model use. This could be a pre- or post-processing step that filters out data based on user access to different information. For instance, executive leadership may have access to company revenues, financials, and customer lists, versus an engineer.
   * Solutions that run at the minimum necessary process privilege, to prevent exploits from escalating privileges should the application be hacked.
   * Running in trusted execution environments, that is, hardware-supported confidential computing environments that protect data in use, providing confidentiality and integrity from privileged and other processes running on the same infrastructure. Particularly valuable in the cloud.
   * Attesting the binaries in use, be they models or software.
   * Audit logs that indicate when and what updates were applied, either to models or other software, including security patches.
   * Ensuring that results, intermediate and final, are persisted only on encrypted storage and shared with end users through secure transport.
 * Reliability
   * Provides the same answer, all else remaining the same, when prompts are similar, differing only in their use of synonyms.
   * Returns correct answers, per tests.
 * Confidence
   * In question-answering scenarios, awareness of the quality and currency of the data used in RAG, provided along with the response, helps an end user determine how confident they can be in a response.
   * Cites sources for responses. Metadata can also be used to indicate how up to date the input information is.
   * With respect to diagnosis/classification tasks, such as cancer detection, the divergence of the test subject from the training dataset is an indicator of applicability risk and of confidence in the response (alluded to in data transparency above).

Trustworthiness

Evaluating transparency, privacy protection, and security aspects:
* Level 1 – Documentation of the aspects called for in the trustworthiness domain.
* Level 2 – Supports role-based access controls: information being accessed/retrieved is available based on approval for the user (even if all users access the same model).
* Level 3 – Supports security features (e.g., running in Confidential Computing / Trusted Execution Environments); supports attestation of the models being run; full open-source transparency on the pre-training dataset, weights, and fine-tuning data/recipes.

Figure A5.3 – Trustworthiness Grading


#### A5.4 Enterprise-Ready Grading

Grading enterprise readiness consists of evaluating the ability of the overall solution to be deployed in production in an enterprise environment. The following criteria will be taken into account:

 * Ability to have on-prem and cloud deployments
   * At least two types of solution instances (on-premises installation, cloud, hybrid option)
   * Cloud/edge-native readiness (refer to CNCF processes/guidelines)
 * Security-ready for enterprise
   * Multi-level access control and response (including the ability to integrate with internal tools)
   * Data & Model Protection (e.g.
including GDPR)
   * Lifecycle management, including security updates, bug fixes, etc.
   * Solutions packaged as containerized applications that do not run as root or have more capabilities than necessary, following OWASP container best practices.
   * Ensure by-products/interim results, if saved to disk, are encrypted first.
 * Quality assurance
   * Accuracy and uncertainty metrics for domain-specific enterprise tasks
   * Documentation
 * High availability
   * Replication and data/instance protection
   * Resiliency – time to relaunch an instance when burned down to zero
   * Provides support and instrumentation for enterprise 24/7 support
 * Licensing model and SW distribution
 * Scalable from small to large customers
 * Ability to customize for specific enterprise needs

Enterprise Readiness

Must first meet minimums across performance, features, and trustworthiness.
* Level 1 – Reference design and deployment guide.
* Level 2 – Output ready for enterprise deployment (no post-OPEA steps needed); containerized, with K8s support; generally robust (but not guaranteed) for production deployment at scale.
* Level 3 – Generates sophisticated monitoring and instrumentation for the enterprise deployment environment; high resiliency, meeting a fast time to relaunch an instance; allows for L2 plus a 24/7 support mode out of the box.

Figure A5.4 – Enterprise-Ready Grading


### A6: Reference Flows

This section includes descriptions of reference flows that will be available for loading and reproducing with minimal effort.

Reference flows serve four primary objectives:
* Demonstrate representative instantiations: Within the OPEA framework, reference flows showcase specific uses and tasks. Given the framework's inherent flexibility, various combinations of components are possible. Reference flows demonstrate how specific paths and combinations can be effectively implemented within the framework.
* Highlight the framework's potential: By offering optimized reference flows that excel in performance, features, trustworthiness, and enterprise readiness, users can gain insight into what can be achieved. These flows serve as valuable learning tools for users' AI deployment goals and planning.
* Facilitate easy deployment: Reference flows are designed to be accessible and easy to instantiate, letting users replicate a functional flow within their environment with minimal effort and make subsequent modifications as needed.
* Encourage innovation and experimentation: Allow users in the ecosystem to experiment and innovate with a broad set of flows and maximize the value for their end-to-end use cases.

The current examples of reference flows are provided for illustration purposes. The set of reference flows is expected to grow to cover various combinations of HW and SW/AI components from multiple providers.

The reference flow descriptions need to provide high clarity as to how they can be recreated and their results reproduced in an OPEA user's setting. All reference flows will have a visualization that clarifies which components are instantiated and how they are connected in the flow. The graphics legend described in Figure A6.1 will be used for all reference flow depictions.


Figure A6.1 - Reference Design Flows Visualization - legend


#### A6.1 – Xeon + Gaudi2 LLM RAG flow for Chat QnA

A reference flow that illustrates an LLM enterprise RAG flow that runs on Xeon (GNR) with a vector database and an embedding model, and with a Gaudi2 serving backend for LLM model inference.

The reference flow demonstrates a RAG application that provides an AI assistant experience with the capability of retrieving information from an external source to enhance the context that is provided to an LLM.
The AI assistant is provided with access to an external knowledge base consisting of text and PDF documents and web pages available via direct URL download. The flow enables users to interact with LLMs and query information that is unknown to the LLMs, such as proprietary data sources.

The reference flow consists of the following process: a data store is used by a retriever module to retrieve relevant information given a query from the user. The query and external data are stored in an encoded vector format that allows for enhanced semantic search. The retriever module encodes the query and provides the prompt processor with the retrieved context and the query, to create an enhanced prompt for the LLM. The LLM receives the enhanced prompt and generates a grounded, correct response for the user.

The flow contains the following components:
* A data ingest flow that uses an embedding model serving platform (TEI) and an embedding model (BGE-base) for encoding text and queries into semantic representations (vectors), which are stored in an index (Redis vector database), both running on Intel Gen6 Xeon GNR for storing and retrieving data.
* An LLM inference serving flow utilizing TGI-Gaudi for LLM model serving on the Gaudi2 platform, which is used for generating answers from prompts that combine relevant documents retrieved from the Redis vector database and the user query.
* An orchestration framework based on LangChain that initializes a pipeline with the components above and orchestrates the data processing from the user (query), text encoding, retrieval, prompt generation, and LLM inference.

A complete reference implementation of this flow is available in the ChatQnA example in Intel's GenAI examples repository.


Figure A6-1.2 Xeon + Gaudi2 LLM RAG flow for Chat QnA

A demo user interface is shown below, which also illustrates the difference with and without RAG.
+ + + +Figure A6-1.3 Xeon + Gaudi2 LLM RAG flow for Chat QnA – demo screen + + +#### A6.2 - Multimodal Chat Over Images and Videos + +This reference flow demonstrates a multimodal RAG pipeline which utilizes Intel Labs’ BridgeTower +vision-language model for indexing and LLaVA for inference, both running on Intel Gaudi AI accelerators. +The use case for this reference flow is enabling an AI chat assistant to retrieve and comprehend +multimodal context documents such as images and videos. For example, a user may wish to ask an AI +assistant questions which require reasoning over images and videos stored on their PC. This solution +enables such capabilities by retrieving images and video frames relevant to a user’s query and providing +them as extra context to a Large Vision-Language Model (LVLM), which then answers the user’s +question. + +Specifically, this reference solution takes images and video files as input. The inputs are encoded in a +joint multimodal embedding space by BridgeTower, which is an open-source vision-language +transformer. Detailed instructions and documentation for this model are available via Hugging Face. The +multimodal embeddings are then indexed and stored in a Redis vector database. + +At inference time, a user’s query is embedded by BridgeTower and used to retrieve the most relevant +images & videos from the vector database. The retrieved contexts are then appended to the user’s +query and passed to LLaVA to generate an answer. Detailed instructions and documentation for the +LLaVA model are available via Hugging Face. + +This reference flow requires Intel Gaudi AI Accelerators for the embedding model and for generating +responses with the LVLM. All other components of the reference flow can be executed on CPU. A +complete end-to-end open-source implementation of this reference flow is available via Multimodal +Cognitive AI. 
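
The retrieval step described above, embedding the query and the media into one joint space and ranking by similarity, can be sketched as follows. The file names and the tiny three-dimensional vectors are toy stand-ins for BridgeTower embeddings; a real flow would index full-dimensional vectors in a vector database such as Redis:

```python
# Illustrative sketch of retrieval in a joint multimodal embedding space.
# Fixed toy vectors stand in for BridgeTower image/video-frame embeddings;
# the media names are hypothetical examples, not real files.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy "embeddings" of indexed media items in the shared query/media space.
index = {
    "vacation.mp4:frame_120": [0.9, 0.1, 0.0],
    "whiteboard.jpg":         [0.1, 0.9, 0.2],
    "demo.mp4:frame_8":       [0.0, 0.2, 0.9],
}

def retrieve(query_embedding, k=2):
    # Rank all indexed media by cosine similarity to the query embedding.
    ranked = sorted(index, key=lambda m: cosine(query_embedding, index[m]), reverse=True)
    return ranked[:k]

# A text query embedded by the same model lands near related media; the
# retrieved items are then appended to the user's query and passed to an
# LVLM such as LLaVA to generate the final answer.
hits = retrieve([0.8, 0.2, 0.1])
```

The key property this sketch relies on is that BridgeTower places text and visual inputs in one embedding space, so a plain nearest-neighbor search over media embeddings works directly on a text query's embedding.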


Figure A6-2.1 Multimodal Chat Over Images and Videos Reference Flow

Below is an illustration of a user interface constructed for this reference flow, which was showcased at Intel Vision:

Figure A6.2.2 Multimodal Chat Over Images and Videos – demo screen


#### A6.3 – Optimized Text and Multimodal RAG pipeline

The reference flow below demonstrates an optimized text and multimodal RAG pipeline which can be leveraged by enterprise customers on Intel Xeon processors.

This flow demonstrates a RAG inference flow on unstructured data and images with 4th and 5th Gen Intel Xeon processors using Haystack. It is based on fastRAG for optimized retrieval.

The first step is to create an index in the vector database (Qdrant in this case). For unstructured text data, sentence-transformers is used. For images, BridgeTower is used to encode the inputs.

Once the vector database is set up, the next step is to deploy the inference chat. The LLMs used for inference are Llama-2-7b-chat-hf and Llama-2-13b-chat-hf, and the LMM used is LLaVA.

The diagram below shows the end-to-end flow for this optimized text and multimodal chat with RAG.


Figure A6-3.1 Optimized Text and Multimodal RAG pipeline Reference Flow

Below is a visual snapshot of the chat implemented using this flow. It shows how the RAG-enabled chatbot in Figure A6-3.3 improves the response for a Super Bowl query over the non-RAG implementation in Figure A6-3.2.
+ + +Figure A6-3.2: Non-RAG chatbot: Super Bowl Query + +Figure A6-3.3: RAG enabled chatbot - Super Bowl query + + + + diff --git a/latest/_sources/guide/installation/gmc_install/gmc_install.md.txt b/latest/_sources/guide/installation/gmc_install/gmc_install.md.txt new file mode 100644 index 000000000..234620316 --- /dev/null +++ b/latest/_sources/guide/installation/gmc_install/gmc_install.md.txt @@ -0,0 +1,119 @@ +# GenAI-microservices-connector(GMC) Installation + +This document introduces the GenAI Microservices Connector (GMC) and its installation. It then uses the ChatQnA pipeline as a use case to demonstrate GMC's functionalities. + +## GenAI-microservices-connector(GMC) + +GMC can be used to compose and adjust GenAI pipelines dynamically on Kubernetes. It can leverage the microservices provided by GenAIComps and external services to compose GenAI pipelines. External services might be running in a public cloud or on-prem: just provide a URL and access details such as an API key, and ensure there is network connectivity. GMC also allows users to adjust the pipeline on the fly, like switching to a different Large Language Model (LLM) or adding new functions into the chain (like adding guardrails). GMC supports different types of steps in the pipeline, such as sequential, parallel, and conditional. For more information, see: https://github.com/opea-project/GenAIInfra/tree/main/microservices-connector + +## Install GMC + +**Prerequisites** + +- For the ChatQnA example, ensure you have a running Kubernetes cluster with at least 16 CPUs, 32GB of memory, and 100GB of disk space.
To install a Kubernetes cluster, refer to +["Kubernetes installation"](../k8s_install/) + +**Download the GMC GitHub repository** + +```sh +git clone https://github.com/opea-project/GenAIInfra.git && cd GenAIInfra/microservices-connector +``` + +**Build and push your image to the location specified by `CTR_IMG`:** + +```sh +make docker.build docker.push CTR_IMG=<some-registry>/gmcmanager:<tag> +``` + +**NOTE:** This image will be published in the personal registry you specified, +and the working environment must have access to pull the image from it. +Make sure you have the proper permissions to the registry if the above commands don't work. + +**Install GMC CRD** + +```sh +kubectl apply -f config/crd/bases/gmc.opea.io_gmconnectors.yaml +``` + +**Get related manifests for GenAI Components** + +```sh +mkdir -p $(pwd)/config/manifests +cp $(dirname $(pwd))/manifests/ChatQnA/*.yaml -p $(pwd)/config/manifests/ +``` + +**Copy GMC router manifest** + +```sh +cp $(pwd)/config/gmcrouter/gmc-router.yaml -p $(pwd)/config/manifests/ +``` + +**Create Namespace for gmcmanager deployment** + +```sh +export SYSTEM_NAMESPACE=system +kubectl create namespace $SYSTEM_NAMESPACE +``` + +**NOTE:** Please use the exact same `SYSTEM_NAMESPACE` value when deploying gmc-manager.yaml and gmc-manager-rbac.yaml. + +**Create ConfigMap for GMC to hold GenAI Components and GMC Router manifests** + +```sh +kubectl create configmap gmcyaml -n $SYSTEM_NAMESPACE --from-file $(pwd)/config/manifests +``` + +**NOTE:** The configmap name `gmcyaml` is defined in the gmcmanager deployment spec. Please modify it accordingly if you want to use a different name for the configmap.
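Before creating the ConfigMap, it helps to confirm that the manifests were actually staged, since an empty directory would produce an empty ConfigMap and GMC would have nothing to deploy. A minimal sketch (the file names below are dummies created in a temporary directory purely for illustration; in a real run, point `MANIFEST_DIR` at `$(pwd)/config/manifests`):

```sh
# Sanity-check the staged manifests before creating the ConfigMap.
# Dummy files in a temp dir stand in for the real ChatQnA manifests here.
MANIFEST_DIR=$(mktemp -d)
touch "$MANIFEST_DIR/example-component.yaml" "$MANIFEST_DIR/gmc-router.yaml"

# Count the staged YAML manifests.
count=$(find "$MANIFEST_DIR" -name '*.yaml' | wc -l)
if [ "$count" -eq 0 ]; then
  echo "no manifests staged; the ConfigMap would be empty" >&2
else
  echo "$count manifest(s) staged"
fi
```

If the count is zero, revisit the copy steps above before running `kubectl create configmap`.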
+ +**Install GMC manager** + +```sh +kubectl apply -f $(pwd)/config/rbac/gmc-manager-rbac.yaml +kubectl apply -f $(pwd)/config/manager/gmc-manager.yaml +``` + +**Check the installation result** + +```sh +kubectl get pods -n system +NAME READY STATUS RESTARTS AGE +gmc-controller-78f9c748cb-ltcdv 1/1 Running 0 3m +``` + +## Use GMC to compose a chatQnA Pipeline +A sample for chatQnA can be found at config/samples/chatQnA_xeon.yaml + +**Deploy chatQnA GMC custom resource** + +```sh +kubectl create ns chatqa +kubectl apply -f $(pwd)/config/samples/chatQnA_xeon.yaml +``` + +**GMC will reconcile chatQnA custom resource and get all related components/services ready** + +```sh +kubectl get service -n chatqa +``` + +**Check GMC chatQnA custom resource to get access URL for the pipeline** + +```bash +$kubectl get gmconnectors.gmc.opea.io -n chatqa +NAME URL READY AGE +chatqa http://router-service.chatqa.svc.cluster.local:8080 8/0/8 3m +``` + +**Deploy one client pod for testing the chatQnA application** + +```bash +kubectl create deployment client-test -n chatqa --image=python:3.8.13 -- sleep infinity +``` + +**Access the pipeline using the above URL from the client pod** + +```bash +export CLIENT_POD=$(kubectl get pod -n chatqa -l app=client-test -o jsonpath={.items..metadata.name}) +export accessUrl=$(kubectl get gmc -n chatqa -o jsonpath="{.items[?(@.metadata.name=='chatqa')].status.accessUrl}") +kubectl exec "$CLIENT_POD" -n chatqa -- curl $accessUrl -X POST -d '{"text":"What is the revenue of Nike in 2023?","parameters":{"max_new_tokens":17, "do_sample": true}}' -H 'Content-Type: application/json' +``` diff --git a/latest/_sources/guide/installation/k8s_install/k8s_instal_aws_eks.md.txt b/latest/_sources/guide/installation/k8s_install/k8s_instal_aws_eks.md.txt new file mode 100644 index 000000000..df430fa01 --- /dev/null +++ b/latest/_sources/guide/installation/k8s_install/k8s_instal_aws_eks.md.txt @@ -0,0 +1,74 @@ +# Kubernetes Installation using AWS EKS Cluster + +In 
this document, we'll install Kubernetes v1.30 using [AWS EKS Cluster](https://docs.aws.amazon.com/eks/latest/userguide/clusters.html). + + +There are two ways to create a new Kubernetes cluster with nodes in AWS EKS: +- ["eksctl"](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html) +- ["AWS Management Console and AWS CLI"](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html). + +In this document, we'll introduce the "AWS Management Console and AWS CLI" method. + +## Prerequisites + +Before starting this tutorial, you must install and configure the following tools and resources that you need to create and manage an Amazon EKS cluster. + +- AWS CLI – A command line tool for working with AWS services, including Amazon EKS. For more information, see ["Installing, updating, and uninstalling the AWS CLI"](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) in the AWS Command Line Interface User Guide. After installing the AWS CLI, we recommend that you also configure it. For more information, see ["Quick configuration with aws configure"](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html#cli-configure-quickstart-config) in the AWS Command Line Interface User Guide. + +- kubectl – A command line tool for working with Kubernetes clusters. For more information, see ["Installing or updating kubectl"](https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html). + +- Required IAM permissions – The IAM security principal that you're using must have permissions to work with Amazon EKS IAM roles, service linked roles, AWS CloudFormation, a VPC, and related resources. 
For more information, see ["Actions, resources, and condition keys for Amazon Elastic Kubernetes Service"](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonelastickubernetesservice.html) and ["Using service-linked roles"](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html) in the IAM User Guide. You must complete all steps in this guide as the same user. To check the current user, run the following command: + + ``` + aws sts get-caller-identity + ``` + +## Create AWS EKS Cluster in AWS Console + +You can refer to the YouTube video that demonstrates the steps to create an EKS cluster in the AWS console: +https://www.youtube.com/watch?v=KxxgF-DAGWc + +Alternatively, you can refer to the AWS documentation directly: ["AWS Management Console and AWS CLI"](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html) + +## Uploading images to an AWS Private Registry + +There are several reasons why your images might not be uploaded to a public image repository like Docker Hub. +You can upload your image to an AWS private registry using the following steps: + +1. Create a new ECR repository (if not already created): + +An Amazon ECR private repository contains your Docker images, Open Container Initiative (OCI) images, and OCI compatible artifacts. More information about Amazon ECR private repositories: https://docs.aws.amazon.com/AmazonECR/latest/userguide/Repositories.html + +``` +aws ecr create-repository --repository-name my-app-repo --region <region> +``` + +Replace my-app-repo with your desired repository name and <region> with your AWS region (e.g., us-west-1). + +2. Authenticate Docker to Your ECR Registry: + +``` +aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <aws_account_id>.dkr.ecr.<region>.amazonaws.com +``` + +Replace <region> with your AWS region and <aws_account_id> with your AWS account ID. + +3. Build Your Docker Image: + +``` +docker build -t my-app:<tag> . +``` + +4.
Tag your Docker image so that it can be pushed to your ECR repository: + +``` +docker tag my-app:<tag> <aws_account_id>.dkr.ecr.<region>.amazonaws.com/my-app-repo:<tag> +``` + +Replace <aws_account_id> with your AWS account ID, <region> with your AWS region, and my-app-repo with your repository name. + +5. Push your Docker image to the ECR repository with this command: + +``` +docker push <aws_account_id>.dkr.ecr.<region>.amazonaws.com/my-app-repo:latest +``` diff --git a/latest/_sources/guide/installation/k8s_install/k8s_install_kubeadm.md.txt b/latest/_sources/guide/installation/k8s_install/k8s_install_kubeadm.md.txt new file mode 100644 index 000000000..23f6cb948 --- /dev/null +++ b/latest/_sources/guide/installation/k8s_install/k8s_install_kubeadm.md.txt @@ -0,0 +1,411 @@ +# Kubernetes installation demo using kubeadm + +In this demo, we'll install Kubernetes v1.29 using the official [kubeadm](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/) on a 2-node cluster. + +## Node configuration + +| hostname | ip address | Operating System | +| ---------- | ------------------ | ---------------- | +| k8s-master | 192.168.121.35/24 | Ubuntu 22.04 | +| k8s-worker | 192.168.121.133/24 | Ubuntu 22.04 | + +These 2 nodes need the following proxy to access the internet: + +- http_proxy="http://proxy.fake-proxy.com:911" +- https_proxy="http://proxy.fake-proxy.com:912" + +We assume these 2 nodes have been configured with the corresponding proxy so the internet is accessible both from the bash terminal and from the apt repository. + +## Step 0. Clean up the environment + +If you have previously installed Kubernetes or any other container runtime (e.g., docker, containerd) on either of the above 2 nodes, please make sure you have cleaned those up first. + +If any previous Kubernetes was installed on any of these nodes by `kubeadm`, please follow the listed steps to [tear down the Kubernetes](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#tear-down) first.
+ +If any previous Kubernetes was installed on any of these nodes by `kubespray`, please refer to the kubespray doc to [clean up the Kubernetes](https://kubespray.io/#/?id=quick-start) first. + +Once Kubernetes is torn down or cleaned up, please run the following commands on all the nodes to remove the relevant packages: + +```bash +sudo apt-get purge docker docker-engine docker.io containerd runc containerd.io kubeadm kubectl kubelet +sudo rm -r /etc/cni /etc/kubernetes /var/lib/kubelet /var/run/kubernetes /etc/containerd /etc/systemd/system/containerd.service.d /etc/default/kubelet +``` + +## Step 1. Install relevant components + +Run the following on all the nodes: + +1. Export proxy settings in bash + +```bash +export http_proxy="http://proxy.fake-proxy.com:911" +export https_proxy="http://proxy.fake-proxy.com:912" +# Please make sure you've added all the nodes' IP addresses into the no_proxy environment variable +export no_proxy="localhost,127.0.0.1,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,192.168.121.35,192.168.121.133" +``` + +2. Configure system settings + +```bash +# Disable swap +sudo swapoff -a +sudo sed -i "s/^\(.* swap \)/#\1/g" /etc/fstab +# load kernel module for containerd +cat < 7m31s v1.29.6 +``` + +## Step 3 (optional) Reset Kubernetes cluster + +In some cases, you may want to reset the Kubernetes cluster, e.g., if some commands after `kubeadm init` fail and you want to reinstall Kubernetes. Please check [tear down the Kubernetes](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#tear-down) for details.
+ +Below is an example of how to reset the Kubernetes cluster we just created: + +On node k8s-master, run the following command: + +```bash +# drain node k8s-worker1 +kubectl drain k8s-worker1 --delete-emptydir-data --force --ignore-daemonsets +``` + +On node k8s-worker1, run the following command: + +```bash +sudo kubeadm reset +# manually reset iptables/ipvs if necessary +``` + +On node k8s-master, delete node k8s-worker1: + +```bash +kubectl delete node k8s-worker1 +``` + +On node k8s-master, clean up the master node: + +```bash +sudo kubeadm reset +# manually reset iptables/ipvs if necessary +``` + +## NOTES + +1. By default, normal workloads won't be scheduled to nodes with the `control-plane` K8S role (i.e., K8S master nodes). If you want K8S to schedule normal workloads to those nodes, please run the following commands on the K8S master node: + +```bash +kubectl taint nodes --all node-role.kubernetes.io/control-plane- +kubectl label nodes --all node.kubernetes.io/exclude-from-external-load-balancers- +``` + +2. Verifying K8S CNI + If you see any issues with inter-node pod-to-pod communication, please use the following steps to verify that the K8S CNI is working correctly: + +```bash +# Create the K8S manifest file for our debug pods +cat < +debug-ddfd698ff-z5qpv 1/1 Running 0 91s 10.244.235.199 k8s-master +``` + +Make sure pod `debug-ddfd698ff-z5qpv` on node k8s-master can ping the IP address of the other pod `debug-ddfd698ff-7gsdc` on node k8s-worker1 to verify that east-west traffic is working in K8S. + +``` +vagrant@k8s-master:~$ kubectl exec debug-ddfd698ff-z5qpv -- ping -c 1 10.244.194.66 +PING 10.244.194.66 (10.244.194.66) 56(84) bytes of data.
+64 bytes from 10.244.194.66: icmp_seq=1 ttl=62 time=1.76 ms + +--- 10.244.194.66 ping statistics --- +1 packets transmitted, 1 received, 0% packet loss, time 0ms +rtt min/avg/max/mdev = 1.755/1.755/1.755/0.000 ms +``` + +Make sure pod `debug-ddfd698ff-z5qpv` on node k8s-master can ping the IP address of the other node `k8s-worker1` to verify that north-south traffic is working in K8S. + +``` +vagrant@k8s-master:~$ kubectl exec debug-ddfd698ff-z5qpv -- ping -c 1 192.168.121.133 +PING 192.168.121.133 (192.168.121.133) 56(84) bytes of data. +64 bytes from 192.168.121.133: icmp_seq=1 ttl=63 time=1.34 ms + +--- 192.168.121.133 ping statistics --- +1 packets transmitted, 1 received, 0% packet loss, time 0ms +rtt min/avg/max/mdev = 1.339/1.339/1.339/0.000 ms +``` + +Delete the debug pods after use: + +```bash +kubectl delete -f debug.yaml +``` diff --git a/latest/_sources/guide/installation/k8s_install/k8s_install_kubespray.md.txt b/latest/_sources/guide/installation/k8s_install/k8s_install_kubespray.md.txt new file mode 100644 index 000000000..4ee04c2e7 --- /dev/null +++ b/latest/_sources/guide/installation/k8s_install/k8s_install_kubespray.md.txt @@ -0,0 +1,277 @@ +# Kubernetes installation using Kubespray + +In this document, we'll install Kubernetes v1.29 using [Kubespray](https://github.com/kubernetes-sigs/kubespray) on a 2-node cluster. + +There are several ways to use Kubespray to deploy a Kubernetes cluster. In this document, we choose to use the Ansible way. For other ways to use Kubespray, refer to [Kubespray's documentation](https://github.com/kubernetes-sigs/kubespray). + +## Node preparation + +| hostname | ip address | Operating System | +| ---------- | ------------------ | ---------------- | +| k8s-master | 192.168.121.35/24 | Ubuntu 22.04 | +| k8s-worker | 192.168.121.133/24 | Ubuntu 22.04 | + + We assume these two machines are used for a 2-node Kubernetes cluster. They have direct internet access both from the bash terminal and from the apt repository.
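The guide assumes the two nodes can reach each other by hostname. If your environment has no DNS for these names, one option is to add hosts entries on each machine; a small sketch that generates `/etc/hosts`-style lines from the node table above (the addresses are this demo's hypothetical IPs):

```sh
# Generate /etc/hosts-style entries from the node table above.
# Append the output to /etc/hosts on each node (requires sudo).
NODES="k8s-master=192.168.121.35 k8s-worker=192.168.121.133"
HOSTS_ENTRIES=""
for n in $NODES; do
  host="${n%%=*}"   # part before '='
  ip="${n#*=}"      # part after '='
  HOSTS_ENTRIES="${HOSTS_ENTRIES}${ip} ${host}
"
done
printf '%s' "$HOSTS_ENTRIES"
```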
+ + If you have previously installed Kubernetes or any other container runtime (e.g., docker, containerd) on either of the above 2 nodes, please make sure you have cleaned those up first. Refer to [Kubernetes installation demo using kubeadm](./k8s_install_kubeadm.md) to clean up the environment. + +## Prerequisites + + We assume that there is a third machine as your operating machine. You can log in to this machine and execute the Ansible commands. Either of the above two K8s nodes can also be used as the operating machine. Unless otherwise specified, all the following operations are performed on the operating machine. + +Please make sure that the operating machine can log in to both K8s nodes via SSH without a password prompt. There are different ways to configure SSH login without a password prompt. A simple way is to copy the public key of the operating machine to the K8s nodes. For example: + +``` +# generate key pair in the operation machine +ssh-keygen -t rsa -b 4096 +# manually copy the public key to the K8s master and worker nodes +cat ~/.ssh/id_rsa.pub | ssh username@k8s-master "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys" +cat ~/.ssh/id_rsa.pub | ssh username@k8s-worker "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys" +``` + +## Step 1. Set up Kubespray and Ansible + +Python3 (version >= 3.10) is required in this step. If you don't have it, go to the [Python website](https://docs.python.org/3/using/index.html) for the installation guide. + +Set up a Python virtual environment and install Ansible and the other Kubespray dependencies; you can simply run the following commands. You can also go to the [Kubespray Ansible installation guide](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible/ansible.md#installing-ansible) for details. To get the Kubespray code, please check out the [latest release version](https://github.com/kubernetes-sigs/kubespray/releases) tag of Kubespray. Here we use Kubespray v2.25.0 as an example.
+ + +``` +git clone https://github.com/kubernetes-sigs/kubespray.git +VENVDIR=kubespray-venv +KUBESPRAYDIR=kubespray +python3 -m venv $VENVDIR +source $VENVDIR/bin/activate +cd $KUBESPRAYDIR +# Check out the latest release version tag of kubespray. +git checkout v2.25.0 +pip install -U -r requirements.txt +``` + +## Step 2. Build your own inventory + +An Ansible inventory defines the hosts and groups of hosts on which Ansible tasks are to be executed. You can copy a sample inventory with the following command: + +``` +cp -r inventory/sample inventory/mycluster +``` + +Edit your inventory file `inventory/mycluster/inventory.ini` to configure the node names and IP addresses. The inventory file used in this demo is as follows: +``` +[all] +k8s-master ansible_host=192.168.121.35 +k8s-worker ansible_host=192.168.121.133 + +[kube_control_plane] +k8s-master + +[etcd] +k8s-master + +[kube_node] +k8s-master +k8s-worker + +[calico_rr] + +[k8s_cluster:children] +kube_control_plane +kube_node +calico_rr +``` +## Step 3. Define Kubernetes configuration + +Kubespray gives you the ability to customize the Kubernetes installation, for example to define: +- the network plugin +- the container manager +- kube_apiserver_port +- kube_pods_subnet +- all K8s add-on configurations, or even deploy the cluster on a hyperscaler like AWS or GCP. +All of those settings are stored in group vars defined in `inventory/mycluster/group_vars` + +For K8s settings, look in `inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml` + +**_NOTE:_** If you hit issues at `TASK [kubernetes/control-plane : Kubeadm | Initialize first master]` during K8s deployment, change the port on which the API Server will be listening from 6443 to 8080. By default, Kubespray configures kube_control_plane hosts with insecure access to kube-apiserver via port 8080. Refer to [kubespray getting-started](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting_started/getting-started.md) + +``` +# The port the API Server will be listening on.
+kube_apiserver_ip: "{{ kube_service_addresses | ansible.utils.ipaddr('net') | ansible.utils.ipaddr(1) | ansible.utils.ipaddr('address') }}" +kube_apiserver_port: 8080 # (http) +``` + +## Step 4. Deploy Kubernetes + +You can clean up an old Kubernetes cluster with the Ansible playbook using the following command: +``` +# Clean up old Kubernetes cluster with Ansible Playbook - run the playbook as root +# The option `--become` is required, as for example cleaning up SSL keys in /etc/, +# uninstalling old packages and interacting with various systemd daemons. +# Without --become the playbook will fail to run! +# And be mindful that it will remove the current Kubernetes cluster (if it's running)! +ansible-playbook -i inventory/mycluster/inventory.ini --become --become-user=root -e override_system_hostname=false reset.yml +``` + +Then you can deploy Kubernetes with the Ansible playbook using the following command: + +``` +# Deploy Kubespray with Ansible Playbook - run the playbook as root +# The option `--become` is required, as for example writing SSL keys in /etc/, +# installing packages and interacting with various systemd daemons. +# Without --become the playbook will fail to run! +ansible-playbook -i inventory/mycluster/inventory.ini --become --become-user=root -e override_system_hostname=false cluster.yml +``` + +The Ansible playbooks will take several minutes to finish. After the playbook is done, you can check the output. If `failed=0` appears in the recap, the playbook execution finished successfully. + +## Step 5.
Create kubectl configuration + +If you want to use the Kubernetes command line tool `kubectl` on the **k8s-master** node, please log in to the node **k8s-master** and run the following commands: + +``` +mkdir -p $HOME/.kube +sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config +sudo chown $(id -u):$(id -g) $HOME/.kube/config +``` + +If you want to access this Kubernetes cluster from other machines, you can install kubectl with `sudo apt-get install -y kubectl`, copy over the configuration from the k8s-master node, and set its ownership as above. + +Then run the following command to check the status of your Kubernetes cluster: +``` +$ kubectl get node +NAME STATUS ROLES AGE VERSION +k8s-master Ready control-plane 31m v1.29.5 +k8s-worker Ready <none> 7m31s v1.29.5 +$ kubectl get pods -A +NAMESPACE NAME READY STATUS RESTARTS AGE +kube-system calico-kube-controllers-68485cbf9c-vwqqj 1/1 Running 0 23m +kube-system calico-node-fxr6v 1/1 Running 0 24m +kube-system calico-node-v95sp 1/1 Running 0 23m +kube-system coredns-69db55dd76-ctld7 1/1 Running 0 23m +kube-system coredns-69db55dd76-ztwfg 1/1 Running 0 23m +kube-system dns-autoscaler-6d5984c657-xbwtc 1/1 Running 0 23m +kube-system kube-apiserver-satg-opea-0 1/1 Running 0 24m +kube-system kube-controller-manager-satg-opea-0 1/1 Running 0 24m +kube-system kube-proxy-8zmhk 1/1 Running 0 23m +kube-system kube-proxy-hbq78 1/1 Running 0 23m +kube-system kube-scheduler-satg-opea-0 1/1 Running 0 24m +kube-system nginx-proxy-satg-opea-3 1/1 Running 0 23m +kube-system nodelocaldns-kbcnv 1/1 Running 0 23m +kube-system nodelocaldns-wvktt 1/1 Running 0 24m +``` +Congratulations! Your two-node K8s cluster is now ready to use. + +## Quick reference + +### How to deploy a single node Kubernetes? + +Deploying a single-node K8s cluster is very similar to setting up a multi-node (>=2) K8s cluster. + +Follow the previous [Step 1. Set up Kubespray and Ansible](#step-1-set-up-kubespray-and-ansible) to set up the environment. + +And then in [Step 2.
Build your own inventory](#step-2-build-your-own-inventory), you can create a single-node Ansible inventory by copying the single-node inventory sample as follows: + +``` +cp -r inventory/local inventory/mycluster +``` + +Edit your single-node inventory `inventory/mycluster/hosts.ini` to replace the node name `node1` with your real node name (for example `k8s-master`) using the following command: + +``` +sed -i "s/node1/k8s-master/g" inventory/mycluster/hosts.ini +``` + +Then your single-node inventory will look like the one below: + +``` +k8s-master ansible_connection=local local_release_dir={{ansible_env.HOME}}/releases + +[kube_control_plane] +k8s-master + +[etcd] +k8s-master + +[kube_node] +k8s-master + +[k8s_cluster:children] +kube_node +kube_control_plane +``` + +And then follow [Step 4. Deploy Kubernetes](#step-4-deploy-kubernetes); please pay attention to the **inventory name** while executing the Ansible playbook, which is `inventory/mycluster/hosts.ini` in a single-node deployment. When the playbook is executed successfully, you will have a single-node K8s cluster ready. + +And then follow [Step 5. Create kubectl configuration](#step-5-create-kubectl-configuration) to set up `kubectl`. You can check the status with `kubectl get nodes`. + +### How to scale Kubernetes cluster to add more nodes? + +Assume you already have a two-node K8s cluster and you want to scale it to three nodes. The third node's information is: + +| hostname | ip address | Operating System | +| ---------- | ------------------ | ---------------- | +| third-node | 192.168.121.134/24 | Ubuntu 22.04 | + +Make sure the third node has internet access and can be logged in to via `SSH` without a password prompt from your operating machine.
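Passwordless SSH to the new node can be verified non-interactively before running Ansible. A minimal sketch: `BatchMode=yes` makes `ssh` fail instead of prompting for a password, and `username@third-node` is this guide's hypothetical target:

```sh
# Prints "ok" when passwordless SSH works, "no passwordless ssh" otherwise.
check_passwordless_ssh() {
  # BatchMode=yes disables password prompts, so a missing key fails fast.
  if ssh -o BatchMode=yes -o ConnectTimeout=5 "$1" true 2>/dev/null; then
    echo "ok"
  else
    echo "no passwordless ssh"
  fi
}

# Example usage:
# check_passwordless_ssh username@third-node
```

If the check fails, repeat the public-key copy shown in the Prerequisites section for the new node.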
+ +Edit your Ansible inventory file to add the third node's information to the `[all]` and `[kube_node]` sections as follows: +``` +[all] +k8s-master ansible_host=192.168.121.35 +k8s-worker ansible_host=192.168.121.133 +third-node ansible_host=192.168.121.134 + +[kube_control_plane] +k8s-master + +[etcd] +k8s-master + +[kube_node] +k8s-master +k8s-worker +third-node + +[calico_rr] + +[k8s_cluster:children] +kube_control_plane +kube_node +calico_rr +``` + +Then you can deploy Kubernetes to the third node with the Ansible playbook using the following command: + +``` +# Deploy Kubespray with Ansible Playbook - run the playbook as root +# The option `--become` is required, as for example writing SSL keys in /etc/, +# installing packages and interacting with various systemd daemons. +# Without --become the playbook will fail to run! +ansible-playbook -i inventory/mycluster/inventory.ini --limit third-node --become --become-user=root scale.yml -b -v +``` +When the playbook is executed successfully, you can check whether the third node is ready with the following command: +``` +kubectl get nodes +``` + +For more information, you can visit the [Kubespray document](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/operations/nodes.md#addingreplacing-a-worker-node) on adding/removing Kubernetes nodes. + +### How to config proxy? + +If your nodes need a proxy to access the internet, you will need extra configuration when deploying K8s. + +We assume your proxy is as below: +``` +- http_proxy="http://proxy.fake-proxy.com:911" +- https_proxy="http://proxy.fake-proxy.com:912" +``` + +You can change parameters in `inventory/mycluster/group_vars/all/all.yml` to set `http_proxy`, `https_proxy`, and `additional_no_proxy` as follows. Please make sure you've added all the nodes' IP addresses into the `additional_no_proxy` parameter. In this example, we use `192.168.121.0/24` to represent all nodes' IP addresses.
+ +``` +## Set these proxy values in order to update package manager and docker daemon to use proxies and custom CA for https_proxy if needed +http_proxy: "http://proxy.fake-proxy.com:911" +https_proxy: "http://proxy.fake-proxy.com:912" + +## If you need exclude all cluster nodes from proxy and other resources, add other resources here. +additional_no_proxy: "localhost,127.0.0.1,192.168.121.0/24" +``` diff --git a/latest/_sources/index.rst.txt b/latest/_sources/index.rst.txt index fb98c1e2a..e5ca70895 100644 --- a/latest/_sources/index.rst.txt +++ b/latest/_sources/index.rst.txt @@ -5,7 +5,7 @@ OPEA Project Documentation Welcome to the OPEA Project (|version|) documentation published |today|. OPEA streamlines implementation of enterprise-grade Generative AI by efficiently -integrating secure, performant, and cost-effective Generative AI workflows into business value. +integrating secure, performant, and cost-effective Generative AI workflows to business value. Source code for the OPEA Project is maintained in the `OPEA Project GitHub repo`_. @@ -15,5 +15,6 @@ Source code for the OPEA Project is maintained in the :hidden: Documentation Home + release_notes/release_notes .. _OPEA Project GitHub repo: https://github.com/opea-project diff --git a/latest/_sources/release_notes/release_notes.rst.txt b/latest/_sources/release_notes/release_notes.rst.txt new file mode 100644 index 000000000..a54266f83 --- /dev/null +++ b/latest/_sources/release_notes/release_notes.rst.txt @@ -0,0 +1,14 @@ +.. _release_notes: + +Release Notes +############# + +.. comment Maintain current release notes in the master branch/latest docs + +.. 
toctree:: + :maxdepth: 1 + :glob: + :reversed: + + v* + diff --git a/latest/_sources/release_notes/v0.6.md.txt b/latest/_sources/release_notes/v0.6.md.txt new file mode 100644 index 000000000..390d78abd --- /dev/null +++ b/latest/_sources/release_notes/v0.6.md.txt @@ -0,0 +1,28 @@ +# OPEA Release Notes v0.6 + +## OPEA Highlight + +* Add 4 MegaService examples: CodeGen, ChatQnA, CodeTrans, and DocSum; you can deploy them on Kubernetes +* Enable 10 microservices for LLM, RAG, security, etc. +* Support text generation, code generation, and end-to-end evaluation + +## GenAIExamples + +* Build 4 reference solutions for classic GenAI applications, like code generation, chat Q&A, code translation, and document summarization, through the orchestration interface in GenAIComps. +* Support seamless deployment on Intel Xeon and Gaudi platforms through Kubernetes and Docker Compose. + +## GenAIComps + +* Activate a suite of microservices including ASR, LLMs, Rerank, Embedding, Guardrails, TTS, Telemetry, DataPrep, Retrieval, and VectorDB. ASR functionality is fully operational on Xeon architecture, pending readiness on Gaudi. Retrieval capabilities are functional on LangChain, awaiting readiness on LlamaIndex. VectorDB functionality is supported on Redis, Chroma, and Qdrant, with readiness pending on SVS. +* Added support for 14 file formats in the data preparation microservices and enabled safeguarding of conversations in guardrails. +* Added Ray on Gaudi support for the LLM Service.
+ +## GenAIEvals + +* Add evaluation of models on text-generation tasks (lm-evaluation-harness) and coding tasks (bigcode-evaluation-harness) +* Add end-to-end evaluation with microservices + +## GenAIInfra + +* Add Helm Charts redis-vector-db, TEI, TGI, and CodeGen for deploying GenAIExamples on Kubernetes +* Add Manifests for deploying GenAIExamples CodeGen, ChatQnA, and DocSum on Kubernetes and on Docker Compose diff --git a/latest/_sources/release_notes/v0.7.md.txt b/latest/_sources/release_notes/v0.7.md.txt new file mode 100644 index 000000000..35790dbe2 --- /dev/null +++ b/latest/_sources/release_notes/v0.7.md.txt @@ -0,0 +1,125 @@ +# OPEA Release Notes v0.7 + +## OPEA Highlights + +- Add 3 MegaService examples: Translation, SearchQnA, and AudioQnA +- Add 4 MicroServices; LLM supports LlamaIndex, vLLM, and RayServe +- Enable Dataprep: extract info from tables, images, etc. +- Add Helm Chart and GenAI Microservice Connector (GMC) tests + +## GenAIExamples + +- ChatQnA + - ChatQnA supports Qwen2([422b4b](https://github.com/opea-project/GenAIExamples/commit/422b4bc56b4e5500538b3d75209320d0a415483b)) + - Add no_proxy in docker compose yaml for micro services([99eb6a](https://github.com/opea-project/GenAIExamples/commit/99eb6a6a7eab4a6d24cbb47d4a541ff4aef41b57), [240587](https://github.com/opea-project/GenAIExamples/commit/240587932b04adeaf740d70229dd27ebd42d5dcd)) + - Fix DataPrep image build in ChatQnA([2fb070](https://github.com/opea-project/GenAIExamples/commit/2fb070dbfd9352d56a7be13606318aa583852a0f)) + - Add Nvidia GPU support for ChatQnA([e80e56](https://github.com/opea-project/GenAIExamples/commit/e80e567817439af1b70b56ff4a60fa58c24e2439)) + - Update ChatQnA docker_compose.yaml to fix downloads failing([e948a7](https://github.com/opea-project/GenAIExamples/commit/e948a7f81b2b68e62b09ad66be35414bf04babd5), [f2a943](https://github.com/opea-project/GenAIExamples/commit/f2a94377aa5e9850a7590c31fd8613f65fdef83c)) + - Chat QNA React UI with conversation
history([b994bc](https://github.com/opea-project/GenAIExamples/commit/b994bc87318f245a07e099b395fa49ca3f36baba)) + - Adapt Chinese characters([2f4723](https://github.com/opea-project/GenAIExamples/commit/2f472315fdd4934b4f50b6120a0d583000d7751c)) + +- Other examples + - Refactor Translation Example([409c723](https://github.com/opea-project/GenAIExamples/commit/409c72350e84867ca1ea555c327fe13d00afd926)) + - Add AudioQnA with GenAIComps([b4d8e1](https://github.com/opea-project/GenAIExamples/commit/b4d8e1a19b7cb141dd509c40711d74be26c282ce)) + - Add SearchQnA with GenAIComps([6b76a9](https://github.com/opea-project/GenAIExamples/commit/6b76a93eb70738459d3fd553c44d6e7c120a51b3)) + - Add env for searchqna([d9b62a](https://github.com/opea-project/GenAIExamples/commit/d9b62a5a62d5c192ed34f598f3769378b7f594a1)) + - Supports ASR on HPU([2a4860](https://github.com/opea-project/GenAIExamples/commit/2a48601227557833cae721ad12418060b50dd62e)) + - Fix DocSum Gaudi building instructions([29de55](https://github.com/opea-project/GenAIExamples/commit/29de55da3ca0978123644ccfccdc53da20fc0791)) + - Add image build job in docker compose e2e gaudi test in CI([4fecd4](https://github.com/opea-project/GenAIExamples/commit/4fecd6a850d9b4cc0c4cd88d9987b5ef890c1aa2)) + +- CI + - Add docker build job in manifest e2e workflow([c5f309](https://github.com/opea-project/GenAIExamples/commit/c5f3095ea5c0016e4e9a2568ff063a5da4f6ef48)) + - Create reuse workflow for get-test-matrix in CI([961abb](https://github.com/opea-project/GenAIExamples/commit/961abb3c05c2bfb02e1cbae12ec7a67c3c0dfc8f)) + - Enable new CI runner and improve manifest e2e test scripts([26d6ea](https://github.com/opea-project/GenAIExamples/commit/26d6ea4724aeaef9fc258d79226ed15e3c325d76)) + - Enable building latest megaservice image on push event in CI([a0b94b](https://github.com/opea-project/GenAIExamples/commit/a0b94b540180ddba7892573b2d9ce8b0eb16b403)) + - Fix the image build 
refer([01eed8](https://github.com/opea-project/GenAIExamples/commit/01eed84db13656a000edd8e47f1e24dbbe2b067a)) + - Add build docker image option for test scripts([e32a51](https://github.com/opea-project/GenAIExamples/commit/e32a51451c38c35ee4bf27e58cb47f824821ce8d)) + - Add e2e test of chatqna([afcb3a](https://github.com/opea-project/GenAIExamples/commit/afcb3a)), codetrans([295b818](https://github.com/opea-project/GenAIExamples/commit/295b818)), codegen([960cf38](https://github.com/opea-project/GenAIExamples/commit/960cf38)), docsum([2e62ecc](https://github.com/opea-project/GenAIExamples/commit/2e62ecc))) + +## GenAIComps + +- Cores + - Add aio orchestrator to boost concurrent serving([db3b4f](https://github.com/opea-project/GenAIComps/commit/db3b4f13fa8fc258236d4cc504f1a083d5fd95df)) + - Add microservice level perf statistics([597b3c](https://github.com/opea-project/GenAIComps/commit/597b3ca7d243ff74ce108ded6255e73df01d2486), [ba1d11](https://github.com/opea-project/GenAIComps/commit/ba1d11d93299f2b1d5e53f747aed73cff0384dda)) + - Add Gateway for Translation([1b654d](https://github.com/opea-project/GenAIComps/commit/1b654de29d260043d8a5811a265013d5f5b4b6e1)) + +- LLM + - Support Qwen2 in LLM Microservice([3f5cde](https://github.com/opea-project/GenAIComps/commit/3f5cdea67d3789be72aafc70364fd1e0cbe6cfaf)) + - Fix the vLLM docker compose issues([3d134d](https://github.com/opea-project/GenAIComps/commit/3d134d260b8968eb9ca18162b2f0d86aa15a85b3)) + - Enable vLLM Gaudi support for LLM service based on officially habana vllm release([0dedc2](https://github.com/opea-project/GenAIComps/commit/0dedc28af38019e92eaf595935907de82c6a1cf5)) + - Openvino support in vllm([7dbad0](https://github.com/opea-project/GenAIComps/commit/7dbad0706d820f3c6ff8e8b4dd0ee40b7c389ff4)) + - Support Ollama microservice([a00e36](https://github.com/opea-project/GenAIComps/commit/a00e3641f25a7b515f427f1fbbcc893d85d97f85)) + - Support vLLM XFT LLM 
microservice([2a6a29](https://github.com/opea-project/GenAIComps/commit/2a6a29fda4ff13af5488912974b431390ed2ebc2), [309c2d](https://github.com/opea-project/GenAIComps/commit/309c2da5e18ce75b3ecc3ff3f2d71d51477ad4d1), [fe5f39](https://github.com/opea-project/GenAIComps/commit/fe5f39452b7fbca7e512611cef8c1a90c08feae8)) + - Add e2e test for llm summarization tgi([e8ebd9](https://github.com/opea-project/GenAIComps/commit/e8ebd948ee3518860838b50ca59d999d4f028d7c)) + +- DataPrep + - Support Dataprep([f7443f](https://github.com/opea-project/GenAIComps/commit/f7443f)), embedding([f37ce2](https://github.com/opea-project/GenAIComps/commit/f37ce2)) microservice with Llama Index + - Fix dataprep microservice path issue([e20acc](https://github.com/opea-project/GenAIComps/commit/e20acc)) + - Add milvus microservice([e85033](https://github.com/opea-project/GenAIComps/commit/e85033)) + - Add Ray version for multi file process([40c1aa](https://github.com/opea-project/GenAIComps/commit/40c1aa)) + - Fix dataprep timeout issue([61ead4](https://github.com/opea-project/GenAIComps/commit/61ead4)) + - Add e2e test for dataprep redis langchain([6b7bec](https://github.com/opea-project/GenAIComps/commit/6b7bec)) + - Supported image summarization with LVM in dataprep microservice([86412c](https://github.com/opea-project/GenAIComps/commit/86412c)) + - Enable conditional splitting for html files([e1dad1](https://github.com/opea-project/GenAIComps/commit/e1dad1)) + - Added support for pyspark in dataprep microservice([a5eb14](https://github.com/opea-project/GenAIComps/commit/a5eb14)) + - DataPrep extract info from table in the docs([953e78](https://github.com/opea-project/GenAIComps/commit/953e78)) + - Added support for extracting info from image in the docs([e23745](https://github.com/opea-project/GenAIComps/commit/e23745)) + +- Other Components + - Add PGvector support in Vectorstores([1b7001](https://github.com/opea-project/GenAIComps/commit/1b7001)) and 
Retriever([75eff6](https://github.com/opea-project/GenAIComps/commit/75eff6)), Dataprep([9de3c7](https://github.com/opea-project/GenAIComps/commit/9de3c7)) + - Add Mosec embedding([f76685](https://github.com/opea-project/GenAIComps/commit/f76685)) and reranking([a58ca4](https://github.com/opea-project/GenAIComps/commit/a58ca4)) + - Add knowledge graph components([4c0afd](https://github.com/opea-project/GenAIComps/commit/4c0afd)) + - Add LVMs LLaVA component([bd385b](https://github.com/opea-project/GenAIComps/commit/bd385b)) + - Add asr/tts components for xeon and hpu([cef6ea](https://github.com/opea-project/GenAIComps/commit/cef6ea)) + - Add WebSearch Retriever Microservice([900178](https://github.com/opea-project/GenAIComps/commit/900178)) + - Add initial pii detection microservice([e38041](https://github.com/opea-project/GenAIComps/commit/e38041)) + - Pinecone support for dataprep and retrieval microservice([8b6486](https://github.com/opea-project/GenAIComps/commit/8b6486)) + - Support prometheus metrics for opea microservices([758914](https://github.com/opea-project/GenAIComps/commit/758914)), ([900178](https://github.com/opea-project/GenAIComps/commit/900178)) + - Add no_proxy env for micro services([df0c11](https://github.com/opea-project/GenAIComps/commit/df0c11)) + - Enable RAGAS([8a670e](https://github.com/opea-project/GenAIComps/commit/8a670e)) + - Fix RAG performance issues([70c23d](https://github.com/opea-project/GenAIComps/commit/70c23d)) + - Support rerank and retrieval of RAG OPT([b51675](https://github.com/opea-project/GenAIComps/commit/b51675)) + - Reranking using an optimized bi-encoder([574847](https://github.com/opea-project/GenAIComps/commit/574847)) + - Use parameter for retriever([358dbd](https://github.com/opea-project/GenAIComps/commit/358dbd)), reranker([dfdd08](https://github.com/opea-project/GenAIComps/commit/dfdd08)) + +- CI + - CI optimization to support multiple test for single kind of 
service([38f646](https://github.com/opea-project/GenAIComps/commit/38f646)) + - Update CI to support dataprep_redis path level change([5c0773](https://github.com/opea-project/GenAIComps/commit/5c0773)) + - Enable python coverage([cd91cf](https://github.com/opea-project/GenAIComps/commit/cd91cf)) + - Add codecov([da2689](https://github.com/opea-project/GenAIComps/commit/da2689)) + - Enable microservice docker images auto build and push([16c5fd](https://github.com/opea-project/GenAIComps/commit/16c5fd)) + +## GenAIEvals + +- Enable autorag to automatically generate the evaluation dataset and evaluate the RAG system([b24bff](https://github.com/opea-project/GenAIEval/commit/b24bff)) +- Support document summarization evaluation with microservice([3ec544](https://github.com/opea-project/GenAIEval/commit/3ec544)) +- Add RAGASMetric([7406bf](https://github.com/opea-project/GenAIEval/commit/7406bf)) +- Update install bkc([26ddcc](https://github.com/opea-project/GenAIEval/commit/26ddcc)) + +## GenAIInfra + +- GMC + - Enable gmc e2e for manifests changes and some minor fix ([758432](https://github.com/opea-project/GenAIInfra/commit/758432)) + - GMC: make "namespace" field of each resource in the CR optional ([7073ac](https://github.com/opea-project/GenAIInfra/commit/7073ac)) + - ChatQnA demo yaml files integration between GMC and Oneclick ([020899](https://github.com/opea-project/GenAIInfra/commit/020899)) + - Add gmc e2e ([595185](https://github.com/opea-project/GenAIInfra/commit/595185)) + - Add docker build and push target for GMC ([04d7f2](https://github.com/opea-project/GenAIInfra/commit/04d7f2)) + - GMC: overwrite config map template before GMC resources are deployed ([ce9190](https://github.com/opea-project/GenAIInfra/commit/ce9190)) + - GMC: replace the service and deployment name if GMC has defined ([eec845](https://github.com/opea-project/GenAIInfra/commit/eec845)) + - Add gmc guide ([6bb8a3](https://github.com/opea-project/GenAIInfra/commit/6bb8a3)) + - GMC: adopt 
separate e2e for gaudi and xeon ([c5075b](https://github.com/opea-project/GenAIInfra/commit/c5075b)) + - Update readme and user guide for GMC ([2d17c9](https://github.com/opea-project/GenAIInfra/commit/2d17c9)) + - GMC: add Codetrans example ([aed70d](https://github.com/opea-project/GenAIInfra/commit/aed70d)) + - Enable GMC e2e on Gaudi ([d204a7](https://github.com/opea-project/GenAIInfra/commit/d204a7)) + +- HelmChart + - Helm chart: Add default minimal pod security ([8fcf0a](https://github.com/opea-project/GenAIInfra/commit/8fcf0a)) + - Support e2e test for chatqna helm chart ([2f317d](https://github.com/opea-project/GenAIInfra/commit/2f317d)) + - Add helm charts for deploy ChatQnA ([20dce6](https://github.com/opea-project/GenAIInfra/commit/20dce6)) + - Reorg of helm charts ([d332c2](https://github.com/opea-project/GenAIInfra/commit/d332c2)) + +- Others + - Add DocSum llm service manifests ([9ab8de](https://github.com/opea-project/GenAIInfra/commit/9ab8de)) + - Enable golang e2e test in CI ([bc9aba](https://github.com/opea-project/GenAIInfra/commit/bc9aba)) + - Add e2e test for docsum example ([89aa5a](https://github.com/opea-project/GenAIInfra/commit/89aa5a)) + - Add docsum example on both xeon and gaudi node ([c88817](https://github.com/opea-project/GenAIInfra/commit/c88817)) diff --git a/latest/_sources/release_notes/v0.8.md.txt b/latest/_sources/release_notes/v0.8.md.txt new file mode 100644 index 000000000..16c0a2d6c --- /dev/null +++ b/latest/_sources/release_notes/v0.8.md.txt @@ -0,0 +1,322 @@ +# OPEA Release Notes v0.8 +## What’s New in OPEA v0.8 + +- Broaden functionality + - Support frequently asked questions (FAQs) generation GenAI example + - Expand the support of LLMs such as Llama3.1 and Qwen2 and support LVMs such as llava + - Enable end-to-end performance and accuracy benchmarking + - Support the experimental Agent microservice + - Support LLM serving on Ray + +- Multi-platform support + - Release the Docker images of GenAI components under OPEA 
Docker Hub and support deployment with Docker + - Support cloud-native deployment through Kubernetes manifests and the GenAI Microservices Connector (GMC) + - Enable experimental authentication and authorization support using JWT tokens + - Validate ChatQnA on multiple platforms such as Xeon, Gaudi, AIPC, Nvidia, and AWS + +- OPEA Docker Hub: https://hub.docker.com/u/opea + +## Details + 
GenAIExamples + +- ChatQnA + - Add ChatQnA instructions for AIPC([26d4ff](https://github.com/opea-project/GenAIExamples/commit/26d4ff11ffd323091d80efdd3f65e4c330b68840)) + - Adapt Vllm response format ([034541](https://github.com/opea-project/GenAIExamples/commit/034541404e23ce3927c170237817e98f9323af26)) + - Update tgi version([5f52a1](https://github.com/opea-project/GenAIExamples/commit/5f52a10ffef342ef7ab84e9cf7107903d1e578e4)) + - Update README.md([f9312b](https://github.com/opea-project/GenAIExamples/commit/f9312b37137ac087534d5536c767b465bac1b93b)) + - Udpate ChatQnA docker compose for Dataprep Update([335362](https://github.com/opea-project/GenAIExamples/commit/335362ab1191b1bcaa2c3bef06fb559bdd3d3f3f)) + - [Doc] Add valid micro-service details([e878dc](https://github.com/opea-project/GenAIExamples/commit/e878dc131171068d4d48686ed3909363403c6818)) + - Updates for running ChatQnA + Conversational UI on Gaudi([89ddec](https://github.com/opea-project/GenAIExamples/commit/89ddec9b2d473b6c0b427e264e0ed07e5d0045f5)) + - Fix win PC issues([ba6541](https://github.com/opea-project/GenAIExamples/commit/ba65415b78d237d180cf9f3654d72b106b7b8a2e)) + - [Doc]Add ChatQnA Flow Chart([97da49](https://github.com/opea-project/GenAIExamples/commit/97da49f61e9ae4aff6780b1ae52c7f66550f3608)) + - Add guardrails in the ChatQnA pipeline([955159](https://github.com/opea-project/GenAIExamples/commit/9551594164980fea59667f6679c84ba5cadf6410)) + - Fix a minor bug for chatqna in docker-compose([b46ae8](https://github.com/opea-project/GenAIExamples/commit/b46ae8bdcc1abfe04563cffc004a87d2884e111b)) + - Support vLLM/vLLM-on-Ray/Ray Serve for ChatQnA([631d84](https://github.com/opea-project/GenAIExamples/commit/631d841119ee6d3247551ef713ea40041c77d6b6)) + - Added ChatQnA example using Qdrant retriever([c74564](https://github.com/opea-project/GenAIExamples/commit/c745641ba103d9f88af01f871f31384f16d02360)) + - Update TEI version v1.5 for better 
performance([f4b4ac](https://github.com/opea-project/GenAIExamples/commit/f4b4ac0d3a762805fe2e1f1a09c8311cadc2114d)) + - Update ChatQnA upload feature([598484](https://github.com/opea-project/GenAIExamples/commit/5984848bb065917f60324c9a35ce98a1503ef1c1)) + - Add auto truncate for embedding and rerank([8b6094](https://github.com/opea-project/GenAIExamples/commit/8b60948c7b9ab96c4d12dd361b329ff72b2e0e0b)) + +- Deployment + - Add Kubernetes manifest files for deploying DocSum([831463](https://github.com/opea-project/GenAIExamples/commit/83146320aa14fbea5fcd795a7b5203be43e32a14)) + - Update Kubernetes manifest files for CodeGen([2f9397](https://github.com/opea-project/GenAIExamples/commit/2f9397e012b7f3443d97f9cca786df5aa6d72437)) + - Add Kubernetes manifest files for deploying CodeTrans([c9548d](https://github.com/opea-project/GenAIExamples/commit/c9548d7921f73ac34b0867969de8ba7fe0c21453)) + - Updated READMEs for kubernetes example pipelines([c37d9c](https://github.com/opea-project/GenAIExamples/commit/c37d9c82b0df8a7a84462bdede93f0425470e4e0)) + - Update all examples yaml files of GMC in GenAIExample([290a74](https://github.com/opea-project/GenAIExamples/commit/290a74fae918da596dbb2d17ab87f828fef95e0d)) + - Doc: fix minor issue in GMC doc([d99461](https://github.com/opea-project/GenAIExamples/commit/d9946180a2372652136bd46a21aab308cda31d7e)) + - README for installing 4 worklods using helm chart([6e797f](https://github.com/opea-project/GenAIExamples/commit/6e797fae8923b520147419b87a193ccfb0d1de11)) + - Update Kubernetes manifest files for deploying ChatQnA([665c46](https://github.com/opea-project/GenAIExamples/commit/665c46ffae23b3dc3b4c6c7d6b7693886e913294)) + - Add new example of SearchQnA for GenAIExample([21b7d1](https://github.com/opea-project/GenAIExamples/commit/21b7d11098ca22accf2cd530a051403b95c5b4ba)) + - Add new example of Translation for GenAIExample([d0b028](https://github.com/opea-project/GenAIExamples/commit/d0b028d1997e1842d9cab48585a7f0b55de9b14b)) + 
+- Other examples + - Update reranking microservice dockerfile path ([d7a5b7](https://github.com/opea-project/GenAIExamples/commit/d7a5b751d92b7714a8c3308c64f4a8b473710383)) + - Update tgi-gaudi version([3505bd](https://github.com/opea-project/GenAIExamples/commit/3505bd25a4f3494028cde45694f304dba665310b)) + - Refine README of Examples([f73267](https://github.com/opea-project/GenAIExamples/commit/f732674b1ef28e5c2589d3b8e0124ebedaf5d502)) + - Update READMEs([8ad7f3](https://github.com/opea-project/GenAIExamples/commit/8ad7f36fe2007160ba68b0e100f4471c46669afa)) + - [CodeGen] Add codegen flowchart([377dd2](https://github.com/opea-project/GenAIExamples/commit/377dd2fa9eac012b6927abee3ef5f6339549a4eb)) + - Update audioqna image name([615f0d](https://github.com/opea-project/GenAIExamples/commit/615f0d25470624534c541161c6e647f78b448af1)) + - Add auto-truncate to gaudi tei ([8d4209](https://github.com/opea-project/GenAIExamples/commit/8d4209a01541d078e41174ef13c5f5f9686be282)) + - Update visualQnA chinese version([497895](https://github.com/opea-project/GenAIExamples/commit/49789595e5f6f00e96426b2dc5034d0a68c0aea1)) + - Fix Typo for Translation Example([95c13d](https://github.com/opea-project/GenAIExamples/commit/95c13d9558acb85343f2d39fc9ef1d68aacfbb56)) + - FAQGen Megaservice([8c4a25](https://github.com/opea-project/GenAIExamples/commit/8c4a2534c1313a4a20948190489dedcf3c302eea)) + - Code-gen-react-ui([1b48e5](https://github.com/opea-project/GenAIExamples/commit/1b48e54a3d2e5ede8c3268c30766fa5182d3486c)) + - Added doc sum react-ui([edf0d1](https://github.com/opea-project/GenAIExamples/commit/edf0d14c95c9869b416d07c9af80ace2bc3691cb)) + +- CI/UT + - Frontend failed with unknown timeout issue ([7ebe78](https://github.com/opea-project/GenAIExamples/commit/7ebe781ccb0d0396872c3aa9c195118ca07fc0b3)) + - Adding Chatqna Benchmark Test([11a56e](https://github.com/opea-project/GenAIExamples/commit/11a56e09ef86e88b29662130eba1913d40cb8aba)) + - Expand tgi connect 
timeout([ee0dcb](https://github.com/opea-project/GenAIExamples/commit/ee0dcb3d37ab64c89962fb41fc8b4d4916b05002)) + - Optimize gmc manifest e2e tests([15fc6f](https://github.com/opea-project/GenAIExamples/commit/15fc6f971154f19822ac8d9b168141a381c93114)) + - Add docker compose yaml print for test([bb4230](https://github.com/opea-project/GenAIExamples/commit/bb42307af952a8ca8c80dec329d84e1fe94943f3)) + - Refactor translation ci test ([b7975e](https://github.com/opea-project/GenAIExamples/commit/b7975e79d8c75899961e5946d8ad0356065f20c5)) + - Refactor searchqna ci test([ecf333](https://github.com/opea-project/GenAIExamples/commit/ecf33388359a9bc20ff63676f169cc4d8129b1e7)) + - Translate UT for UI([284d85](https://github.com/opea-project/GenAIExamples/commit/284d855bf410e5194c84523450397f0eb70ad0ee)) + - Enhancement the codetrans e2e test([450efc](https://github.com/opea-project/GenAIExamples/commit/450efcc139f26268b31a456db3f17024a37f896f)) + - Allow gmc e2e workflow to get secrets([f45f50](https://github.com/opea-project/GenAIExamples/commit/f45f508847823f3f6a1831d1a402932294b2a287)) + - Add checkout ref in gmc e2e workflow([62ae64](https://github.com/opea-project/GenAIExamples/commit/62ae64f13c8127cd7afd7d58d06c6cf9c51fafbf)) + - SearchQnA UT([268d58](https://github.com/opea-project/GenAIExamples/commit/268d58d4a971d7d8340e72caf90a4fc14650612d)) +
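Benchmark tests like the ChatQnA one added above typically reduce per-request latencies to percentile summaries (mean/p50/p90/p99). A stdlib-only sketch of that reduction — illustrative, not the actual benchmark harness; the function name and sample data are made up:

```python
import statistics

def latency_summary(latencies_ms: list[float]) -> dict[str, float]:
    """Summarize request latencies as mean / p50 / p90 / p99 (milliseconds)."""
    # quantiles(n=100) returns the 99 cut points between percentiles
    qs = statistics.quantiles(latencies_ms, n=100, method="inclusive")
    return {
        "mean": statistics.fmean(latencies_ms),
        "p50": qs[49],  # 50th percentile (median)
        "p90": qs[89],
        "p99": qs[98],
    }

# synthetic measurements standing in for a load-test run
lat = [20.0 + i for i in range(100)]  # 20..119 ms
print(latency_summary(lat))
```

Tail percentiles (p90/p99) matter more than the mean for serving pipelines, since a single slow microservice hop dominates end-to-end latency.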
+ +
GenAIComps + +- Cores + - Support https for microservice([2d6772](https://github.com/opea-project/GenAIComps/commit/2d6772456fb24cd344fc25e3eb4591d1a42eda71)) + - Enlarge megaservice request timeout for supporting high concurrency([876ca5](https://github.com/opea-project/GenAIComps/commit/876ca5080b47bfb9ea484f916561f2c68e3d37a0)) + - Add dynamic DAG([f2995a](https://github.com/opea-project/GenAIComps/commit/f2995ab5f55c8917b865a405fb9ffe99b70ff86d)) + +- LLM + - Optional vllm microservice container build([963755](https://github.com/opea-project/GenAIComps/commit/9637553da6da07988df5d9007d9a736fe0ca4c47)) + - Refine vllm instruction([6e2c28](https://github.com/opea-project/GenAIComps/commit/6e2c28b17850964e5c07d5f418211722a9b09212)) + - Introduce 'entrypoint.sh' for some Containers([9ecc5c](https://github.com/opea-project/GenAIComps/commit/9ecc5c3b02bae88e148bfecafdd24be995d6b4c3)) + - Support llamaindex for retrieval microservice and remove langchain([61795f](https://github.com/opea-project/GenAIComps/commit/61795fd46a5c3047a3f08517b73cad52100396c8)) + - Update tgi with text-generation-inference:2.1.0([f23694](https://github.com/opea-project/GenAIComps/commit/f236949f62e26695ff0f6e7d4fbce8441fb2d8e4)) + - Fix requirements([f4b029](https://github.com/opea-project/GenAIComps/commit/f4b029805a310ce5bd4b0f03a9439ede149cb3ab)) + - Add vLLM on Ray microservice([ec3b2e](https://github.com/opea-project/GenAIComps/commit/ec3b2e841f23d1ee5dc4d89a57d34e51cf5a5909)) + - Update code/readme/UT for Ray Serve and VLLM([dd939c](https://github.com/opea-project/GenAIComps/commit/dd939c554add6a86577e50fc46ac93a7429ab6d9)) + - Allow the Ollama microservice to be configurable with different models([2458e2](https://github.com/opea-project/GenAIComps/commit/2458e2f1ec7f7e383429a54047814347e18c363d)) + - LLM performance optimization and code refine([6e31df](https://github.com/opea-project/GenAIComps/commit/6e31df2f0503eb075472ef5cd9cfc0f81112d804)) + +- DataPrep + - Support get/delete 
file in Dataprep Microservice([5d0842](https://github.com/opea-project/GenAIComps/commit/5d08426c82f999d8a5b58fda042fa610473b0c9c)) + - Dataprep | PGVector : Added support for new changes in utils.py([54eb7a](https://github.com/opea-project/GenAIComps/commit/54eb7aba5b5a46f6bf9602254e1b331b58109c24)) + - Enhance the dataprep microservice by adding separators([ef97c2](https://github.com/opea-project/GenAIComps/commit/ef97c24792bd5711b5e5a000eafcd7fabcfc914b)) + - Freeze python-bidi==0.4.2 for dataprep/redis([b4012f](https://github.com/opea-project/GenAIComps/commit/b4012f610960514b6351dc94bdc346675e57b356)) + - Support delete data for Redis vector db([967fdd](https://github.com/opea-project/GenAIComps/commit/967fdd2f27fe1e7c99c6e6c28161c8f0f3bf2436)) + +- Other Components + - Remove ingest in Retriever MS([d25d2c](https://github.com/opea-project/GenAIComps/commit/d25d2c4ec3146bcba26b8db3fc7fe4adeafff748)) + - Qdrant retriever microservice([9b658f](https://github.com/opea-project/GenAIComps/commit/9b658f4f8b83575c9acc8c9f4f24db2c0a5bf52f)) + - Update milvus service for dataprep and retriever([d7cdab](https://github.com/opea-project/GenAIComps/commit/d7cdab96744a0a1c914b9acd9a2515a29c1ed997)) + - Architecture specific args for a few containers([1dd7d4](https://github.com/opea-project/GenAIComps/commit/1dd7d41b4daaa8cb567b50143c5cd4b5119d6f4b)) + - Update driver compatible image([1d4664](https://github.com/opea-project/GenAIComps/commit/1d4664bc20793e41e83d4cb10869f0072e7506f3)) + - Fix Llama-Guard-2 issue([6b091c](https://github.com/opea-project/GenAIComps/commit/6b091c657228fcbc14824cd672ecbae4e4d487b6)) + - Embeddings: adaptive detect embedding model arguments in mosec([f164f0](https://github.com/opea-project/GenAIComps/commit/f164f0d7768c7f2463e11679785b9c7d7e93a19c)) + - Architecture specific args for langchain guardrails([5e232a](https://github.com/opea-project/GenAIComps/commit/5e232a9ac2adc8296e6503f6f7b26cc3a5ea5602)) + - Fix requirements install issue for 
reranks/fastrag([94e807](https://github.com/opea-project/GenAIComps/commit/94e807bbf15a9677209f8d28d0cc3251adfc75cc)) + - Update to remove warnings when building Dockerfiles([3e5dd0](https://github.com/opea-project/GenAIComps/commit/3e5dd0151699880f579ffddaa76293ede06cad2a)) + - Initiate Agent component([c3f6b2](https://github.com/opea-project/GenAIComps/commit/c3f6b2ebb75f6e6995e8b39adebe73051810856f)) + - Add FAQGen gateway in core to support FAQGen Example([9c90eb](https://github.com/opea-project/GenAIComps/commit/9c90ebf573621e894fa368848a79550701a338a6)) + - Prompt registry([f5a548](https://github.com/opea-project/GenAIComps/commit/f5a5489b0a42d01259f39b9016ea68429d2271e9)) + - Chat History microservice for chat data persistence([30d95b](https://github.com/opea-project/GenAIComps/commit/30d95b73dd20e1800e684bf7417a97b4e4cdc4df)) + - Align asr output and llm input without using orchestrator([64e042](https://github.com/opea-project/GenAIComps/commit/64e042146f4a7ea40e70a7fc5431d7f32e8ee02c)) + - Doc: add missing in README.md codeblock([2792e2](https://github.com/opea-project/GenAIComps/commit/2792e28334760d94908aa521be1bedcec8848ad3)) + - Prompt registry([f5a548](https://github.com/opea-project/GenAIComps/commit/f5a5489b0a42d01259f39b9016ea68429d2271e9)) + - Chat History microservice for chat data persistence([30d95b](https://github.com/opea-project/GenAIComps/commit/30d95b73dd20e1800e684bf7417a97b4e4cdc4df)) + - Align asr output and llm input without using orchestrator([64e042](https://github.com/opea-project/GenAIComps/commit/64e042146f4a7ea40e70a7fc5431d7f32e8ee02c)) + +- CI/UT + - Fix duplicate ci test([33f37c](https://github.com/opea-project/GenAIComps/commit/33f37cebd4bba515b21203f94af2616faade2baa)) + - Build and push new docker images into registry([80da5a](https://github.com/opea-project/GenAIComps/commit/80da5a86abafeceaf196bacc17e3922dd3173be8)) + - Update image build for 
gaudi([fe3d22](https://github.com/opea-project/GenAIComps/commit/fe3d22acabdee2fbf72ced0fae3832e7ca1fa3e4)) + - Add guardrails ut([556030](https://github.com/opea-project/GenAIComps/commit/55603000eba4823678b3e79623186fa591a2f06f)) +
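The core changes above (the aio orchestrator and dynamic DAG) wire microservices together as a directed acyclic graph, so independent stages can be scheduled concurrently while dependencies are respected. The dependency ordering can be sketched with the stdlib `graphlib`; the stage names below are illustrative of a ChatQnA-style pipeline, not OPEA's actual service-graph API:

```python
from graphlib import TopologicalSorter

# Each service maps to the set of services it depends on:
# embed the query, retrieve, rerank, then generate.
graph = {
    "retriever": {"embedding"},
    "reranker": {"retriever"},
    "llm": {"reranker"},
}

order = list(TopologicalSorter(graph).static_order())
print(order)  # embedding first, llm last
```

In a real orchestrator, `TopologicalSorter.prepare()` / `get_ready()` would let stages with no pending predecessors run concurrently instead of strictly in sequence.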
+ +
GenAIEvals + +- Update lm-eval to 0.4.3([89c825](https://github.com/opea-project/GenAIEval/commit/89c8255f3f41a545ace25c61db3160cbece3047f)) +- Add toxicity/bias/hallucination metrics([48015a](https://github.com/opea-project/GenAIEval/commit/48015a1cb0c200aa1e7929367acd68d971ae544c)) +- Support stress benchmark test([59cb27](https://github.com/opea-project/GenAIEval/commit/59cb275ca870bc1ff4514a1e3b8c67ca9e48c71e)) +- Add rag related metrics([83ad9c](https://github.com/opea-project/GenAIEval/commit/83ad9c1eddde42b11be82b745f4d217af3acccfa)) +- Added CRUD Chinese benchmark example([9cc6ca](https://github.com/opea-project/GenAIEval/commit/9cc6ca611e4d00e2e6f4d441cb171896c8ab0f23)) +- Add MultiHop English benchmark accuracy([8aa1e6](https://github.com/opea-project/GenAIEval/commit/8aa1e6ed81f8209db03f653f0579215d36d24af3)) +
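Evaluation metrics like the RAG-related ones above compare generated answers against references; one common building block is token-level F1 overlap. A stdlib sketch of that metric — illustrative only, not GenAIEval's implementation:

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1 between a generated answer and a reference answer."""
    pred = prediction.lower().split()
    ref = reference.lower().split()
    common = Counter(pred) & Counter(ref)  # multiset intersection
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

print(token_f1("OPEA runs on Gaudi", "OPEA runs on Xeon and Gaudi"))
```

Metrics like RAGAS layer LLM-judged faithfulness and context relevance on top of simple lexical overlap like this.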
+ +
GenAIInfra + +- GMC + - Enable image build on push for gmc([f8a295](https://github.com/opea-project/GenAIInfra/commit/f8a2954a3b1557190bdf1e90271f4a110ff91fb3)) + - Revise workflow to support gmc running in kind([a2dc96](https://github.com/opea-project/GenAIInfra/commit/a2dc9610664025ab8447da2d9baa83226c483296)) + - Enable GMC system installation on push([af2d0f](https://github.com/opea-project/GenAIInfra/commit/af2d0f522c726b8c892e6c8c7b1f984737ec5c10)) + - Enhance the switch mode for GMC router service required([f96b0e](https://github.com/opea-project/GenAIInfra/commit/f96b0e537ff2afcfcab184aa167c07df5955045f)) + - Optimize GMC e2e scripts([27a062](https://github.com/opea-project/GenAIInfra/commit/27a0627b41402b718ec15e29d13475a1505eb726)) + - Optimize app namesapces and fix some typos in gmc e2e test([9c97fa](https://github.com/opea-project/GenAIInfra/commit/9c97fad977450ceeae0b2c4c1bf52593ea298707)) + - Add GMC into README([b25c0b](https://github.com/opea-project/GenAIInfra/commit/b25c0bb01e29b1cc02cd1c6c0604fc03d793e786)) + - Gmc: add authN & authZ support on fake JWT token([3756cf](https://github.com/opea-project/GenAIInfra/commit/3756cf8bc0d7494562db61f8913ea51a663ce7db)) + - GMC: adopt new common/menifests([b18531](https://github.com/opea-project/GenAIInfra/commit/b185311a4ea6a799968b752d0955368a0ec9653a)) + - Add new example of searchQnA on both xeon and gaudi([883c8d](https://github.com/opea-project/GenAIInfra/commit/883c8da01508239354c0ba1320a57d0e64a1dec2)) + - Support switch mode in GMC for MI6 team([d11aeb](https://github.com/opea-project/GenAIInfra/commit/d11aebb028313c12fe4f25d9f617b061c0dda57f)) + - Add translation example into GMC([6235a9](https://github.com/opea-project/GenAIInfra/commit/6235a9ff561f1378b10dc19a80d9fde1cc77fbc5)) + - Gmc: add authN & authZ support on keycloak([3d139b](https://github.com/opea-project/GenAIInfra/commit/3d139b53f83d44eab985e902fc8699f87a21413b)) + - GMC: Support new 
component([4c5a51](https://github.com/opea-project/GenAIInfra/commit/4c5a51a0e536b7ff58ff0112cdc8310395e5d391)) + - GMC: update README([d57b94](https://github.com/opea-project/GenAIInfra/commit/d57b94b19c5c432bc3154bb11d2b7edcde3603a1)) + +- HelmChart + - Helm chart: change default global.modelUseHostPath value([8ffc3b](https://github.com/opea-project/GenAIInfra/commit/8ffc3bc258c816aa01a83059ef908d7a0d0d6ee4)) + - Helm chart: Add readOnlyRootFilesystem to securityContext([9367a9](https://github.com/opea-project/GenAIInfra/commit/9367a9ce96c9e89098408e0c9078368571c38ef2)) + - Update chatqna with additional dependencies([009c96](https://github.com/opea-project/GenAIInfra/commit/009c960a9cdb28a9a8fb22f15b470a97e53a1bdf)) + - Update codegen with additional dependencies([d41dd2](https://github.com/opea-project/GenAIInfra/commit/d41dd27b49b733e76b2e41cc6a25bc2b2ab942eb)) + - Make endpoints configurable by user([486023](https://github.com/opea-project/GenAIInfra/commit/4860235e1774982ed5b827cbb36b4b3b8639f9fb)) + - Add data prep component([384931](https://github.com/opea-project/GenAIInfra/commit/384931799641c5e0faa89b080426b95ea55d1263)) + - The microservice port number is not configurable([fbaa6a](https://github.com/opea-project/GenAIInfra/commit/fbaa6aba1cf7d6167ffdcb465a57da05bce26b3e)) + - Add MAX_INPUT_TOKENS to tgi([2fcbb0](https://github.com/opea-project/GenAIInfra/commit/2fcbb0d563d04ac8e21df14ecd2c9c05db72c1af)) + - Add script to generate yaml files from helm-charts([6bfe31](https://github.com/opea-project/GenAIInfra/commit/6bfe31528f6be24e5922dfcc6aea0ad18fd61869)) + - Helm: support adding extra env from external configmap([7dabdf](https://github.com/opea-project/GenAIInfra/commit/7dabdf0b378f710e41fadf1fd4ef47b69bee2326)) + - Helm: expose dataprep configurable items into value file([83fc1a](https://github.com/opea-project/GenAIInfra/commit/83fc1a0b6af09ea64466e61d742d09b03eea82c5)) + - Helm: upgrade version to 
0.8.0([b3cbde](https://github.com/opea-project/GenAIInfra/commit/b3cbde027932f530eed13393df3beae2d8e2febb)) + - Add whisper and asr components([9def61](https://github.com/opea-project/GenAIInfra/commit/9def61adc506ec61faeed1769ebaed0e3ef9ee95)) + - Add tts and speecht5 components helm chart([9d1465](https://github.com/opea-project/GenAIInfra/commit/9d146529a2f000f169308358a3d724861078d320)) + - Update the script to generate comp manifest([ab53e9](https://github.com/opea-project/GenAIInfra/commit/ab53e952965fc670694ee2ae91b76d0e34cc8bae)) + - Helm: remove unused Probes([c1cff5](https://github.com/opea-project/GenAIInfra/commit/c1cff5fe3c93262b600641694929349f59b86405)) + - Helm: Add tei-gaudi support([a456bf](https://github.com/opea-project/GenAIInfra/commit/a456bfb393f9428c17441ba3da1b1ad99a65d213)) + - Helm redis-vector-db: Add missings in value file([9e15ef](https://github.com/opea-project/GenAIInfra/commit/9e15ef1c523592e58f4e1f8e2a5d0029997c13a6)) + - Helm: Use empty string instead of null in value files([6151ac](https://github.com/opea-project/GenAIInfra/commit/6151ac7ccc53cd41e2e3ca43a5c6a7369eceaa1b)) + - Add component k8s manifest files([68483c](https://github.com/opea-project/GenAIInfra/commit/68483c5dbb0365fbad3b34792313d511e7ef898d)) + - Add helm test for chart redis-vector-db([236381](https://github.com/opea-project/GenAIInfra/commit/23638193f2819b513dbc8fb1c055cfa45b809e5a)) + - Add helm test for chart tgi([9b5def](https://github.com/opea-project/GenAIInfra/commit/9b5def0c26ae97a4c8a6e52a42c44917e9d79352)) + - Add helm test for chart tei([f5c7fa](https://github.com/opea-project/GenAIInfra/commit/f5c7fafd1bbea8f64663283e5131d8334fe4aec5)) + - Add helm test for chart teirerank([00532a](https://github.com/opea-project/GenAIInfra/commit/00532a51b8e1dff47e89a144814ac92627d8b01f)) + - Helm test: Make curl fail if http_status > 400 returned([92c4b5](https://github.com/opea-project/GenAIInfra/commit/92c4b5e21209caaeb288adad076e59acefaf411a)) + - Add helm test 
for chart embedding-usvc([a98561](https://github.com/opea-project/GenAIInfra/commit/a98561f9c817fa52a99742ee1ab1ac267a650d2f)) + - Add helm test for chart llm-uservice([f4f3ea](https://github.com/opea-project/GenAIInfra/commit/f4f3ea0e58bd09cbd45cb7267c989fa665171d21)) + - Add helm test for chart reranking-usvc([397208](https://github.com/opea-project/GenAIInfra/commit/397208985ba90ff71ec4eeaa0d3ca8f4187c6218)) + - Add helm test for chart retriever-usvc([6db408](https://github.com/opea-project/GenAIInfra/commit/6db408ab719846fe370c557ca1cc88d4cbe0fc18)) + - Helm: Support automatically install dependency charts([dc90a5](https://github.com/opea-project/GenAIInfra/commit/dc90a59803fb1e7730af96b0df09ef8d0a3950ce)) + - Helm: support remove helm dependency([fbdb1d](https://github.com/opea-project/GenAIInfra/commit/fbdb1da9bb40b810eb6615685883445c1c952f29)) + - Helm: upgrade tgi chart([c3a1c1](https://github.com/opea-project/GenAIInfra/commit/c3a1c1a093f0f523ab92a8d714cb03730a8c3d3f)) + - Helm/manifest: update tei config for tei-gaudi([88b3c1](https://github.com/opea-project/GenAIInfra/commit/88b3c108e5b5e3bfb6d9346ce2863b69f70cc2f1)) + - Add CodeTrans helm chart([5b05f9](https://github.com/opea-project/GenAIInfra/commit/5b05f9572879b0d9b939f0fbd2cd1eddc07fdb05)) + - Helm: Update chatqna to latest([7ff03b](https://github.com/opea-project/GenAIInfra/commit/7ff03b5593434b5571e683d52c8a22ab6764a461)) + - Add DocSum helm chart([b56116](https://github.com/opea-project/GenAIInfra/commit/b5611662df4109fd17dcf769c1684a5e01317f56)) + - Add docsum support for helm test([f6354b](https://github.com/opea-project/GenAIInfra/commit/f6354b96f6ec3ac4968b4f9f1eb029762fe5e1c0)) + - Helm: Update codegen to latest([419e5b](https://github.com/opea-project/GenAIInfra/commit/419e5bfc857095bbcea56747e3f4feefc6d81311)) + - Fix codegen helm chart readme([b4b28e](https://github.com/opea-project/GenAIInfra/commit/b4b28e98929c37dc44baaa3fd969e598b3c13836)) + - Disable runAsRoot for speecht5 and 
whisper([aeef78](https://github.com/opea-project/GenAIInfra/commit/aeef78254ce2a85779b6ff13fb14fcdd5bb0af52)) + - Use upstream tei-gaudi image([e4d3ff](https://github.com/opea-project/GenAIInfra/commit/e4d3ff6c13f210872dfc4ddc788fa735eac2b44b)) + +- Others + - Enhancement the e2e test for GenAIInfra for fixing some bugs([602af5](https://github.com/opea-project/GenAIInfra/commit/602af53742900630a34a4eed9f37980483aa21b3)) + - Fix bugs for router on handling response from pipeline microservices([ef47f9](https://github.com/opea-project/GenAIInfra/commit/ef47f9db525c16b54d493549b8372946988fce2a)) + - Improve the examples of codegen and codetrans e2e test([07494c](https://github.com/opea-project/GenAIInfra/commit/07494c0e6ba09030cc8ea464ef783c983b9d5cf7)) + - Remove the dependencies of common microservices([f6dd87](https://github.com/opea-project/GenAIInfra/commit/f6dd87baf8d569db519e69661ae0d2cdd466fa69)) + - Add scripts for KubeRay and Ray Cluster([7d3d13](https://github.com/opea-project/GenAIInfra/commit/7d3d13f51f2cfed7be1e92f13f12ef2ff478e1f7)) + - Enable CI for common components([9e27a0](https://github.com/opea-project/GenAIInfra/commit/9e27a0d424cb3eacbf2cde636426e644ae739212)) + - Disable common component test([e1cd50](https://github.com/opea-project/GenAIInfra/commit/e1cd50269eebc010bd5f5043a1b4bc8c62a53231)) + - CI for common: avoid false error in helm test result([876b7a](https://github.com/opea-project/GenAIInfra/commit/876b7a4142e2e1e7a25f25ac279f043c844f1687)) + - Add the init input for pipeline to keep the parameter information([e25a1f](https://github.com/opea-project/GenAIInfra/commit/e25a1f86e85c452243aacf90a67e47777caf4703)) + - Adjust CI gaudi version([d75d8f](https://github.com/opea-project/GenAIInfra/commit/d75d8f2e1c356ca26fa09a2e9911de3aff87aa27)) + - Fix CHART_MOUNT and HFTOKEN for CI([10b908](https://github.com/opea-project/GenAIInfra/commit/10b908abf3b728c9652302efcb071bdc7f8e6426)) + - Change tgi tag because gaudi driver is upgraded to 1.16.1 
([6796ef](https://github.com/opea-project/GenAIInfra/commit/6796ef2560645c59cdf7b09af9a2d8aa0cb0d5a5)) + - Update README for new manifests([ec32bf](https://github.com/opea-project/GenAIInfra/commit/ec32bf04459fdbb4c8f99ebd1bac3216ad2e5283)) + - Support multiple router service in one namespace([0ac732](https://github.com/opea-project/GenAIInfra/commit/0ac73213b501fb5949a5ac8bf7f52d5a4acef580)) + - Improve workflow trigger conditions to be more precise([ab5c8d](https://github.com/opea-project/GenAIInfra/commit/ab5c8d8c07d8f8353315b7ebaf1eb745bf7b28e5)) + - Remove unnecessary component DocSumGaudi which would cause error([9b973a](https://github.com/opea-project/GenAIInfra/commit/9b973aceb25c307f2c7692c9364ebac9040b9a5d)) + - Remove chart_test scripts and add script to dump pod status([88caf0](https://github.com/opea-project/GenAIInfra/commit/88caf0df947866ffe609cf60805282970f887429)) +
+ +## Thanks to these contributors +We would like to thank everyone who contributed to the OPEA project. Here are the contributors: + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/latest/_sources/roadmap/2024-2025.md.txt b/latest/_sources/roadmap/2024-2025.md.txt new file mode 100644 index 000000000..bc60accc1 --- /dev/null +++ b/latest/_sources/roadmap/2024-2025.md.txt @@ -0,0 +1,130 @@ +# OPEA 2024 - 2025 Roadmap + +## Milestone 1 (May, Done) + +- [x] Components contribution + - [x] ASR + - [x] Data Prep + - [x] Embedding + - [x] Guardrails + - [x] LLM(Gaudi TGI) + - [x] RAG Rerank + - [x] RAG Retrieval + - [x] TTS + - [x] RAG VectorDB + - [x] Open Telemetry support + +- [x] UseCases/Examples + - [x] ChatQnA + - [x] CodeGen + - [x] CodeTrans + +- [x] Cloud Native + - [x] OneClickOPEA on ChatQnA + - [x] OneClickOPEA on CodeGen + +- [x] Evaluation & Others + - [x] CICD & Validation + - [x] lm-eval-harness + - [x] bigcode-eval-harness + - [x] End2End evaluation on GenAIComps & GenAIExamples + +## Milestone 2 (June) + +- [ ] Components contribution + - [ ] LLM on Xeon by vLLM + Ray, Ollama + - [ ] OVMS + - [ ] Prompting + - [ ] User Feedback Management + - [ ] MI6 Mega Components(MI6 RAG Service) + +- [ ] UseCases/Examples + - [ ] DocSum + - [ ] SearchQnA + - [ ] FAQGen + - [ ] End-to-end RAG example using OPEA on Xeon and cloud + +- [ ] Cloud Native + - [ ] OneClickOPEA on 2 or more examples + +- [ ] Evaluation & Others + - [ ] CICD & Validation + - [ ] End2End evaluation on GenAIComps & GenAIExamples + - [ ] RAG evaluation + +## Milestone 3 (July) + +- [ ] Components contribution + - [ ] LLM on Gaudi by vLLM + Ray + - [ ] LVM on Gaudi by vLLM + Ray + - [ ] VectorDB(svs) + - [ ] Telemetry + +- [ ] UseCases/Examples + - [ ] VisualQnA + - [ ] Windows Desktop App for AIPC + +- [ ] Cloud Native + - [ ] OpenShift enablement + - [ ] GenAI Microservice Connector + - [ ] OneClickOPEA on 3 or more examples + +- [ ] Evaluation & Others + - [ ] CICD & Validation 
+ - [ ] End2End evaluation on GenAIComps & GenAIExamples + +## Milestone 4 (Aug) + +- [ ] Components contribution + - [ ] Documentation + - [ ] Automation test script + +- [ ] UseCases/Examples + - [ ] Documentation + - [ ] Automation test script + +- [ ] Cloud Native + - [ ] K8s Resource Management + - [ ] Documentation + - [ ] AutoScaler Analysis + +- [ ] Evaluation & Others + - [ ] CICD & Validation + - [ ] End2End evaluation on GenAIComps & GenAIExamples + +## Milestone 5 (from Sep to Dec) + +- [ ] Components contribution + - [ ] More micro service components for image and video + - [ ] Fine-tuning support + - [ ] Knowledge graph support + - [ ] OPEA Playground support + +- [ ] UseCases/Examples + - [ ] More use cases like language translation and AudioQnA + +- [ ] Cloud Native + - [ ] Docker Containerization through Docker Composer + - [ ] Static tuning on Resource management for deployment + +- [ ] Evaluation & Others + - [ ] CICD & Validation + - [ ] End2End evaluation + + +## Milestone 6 (2025) + +- [ ] Components contribution + - [ ] More micro service components per community request + - [ ] AI Agent support + +- [ ] UseCases/Examples + - [ ] No code solution (GenAIStudio) + - [ ] OPEA Model Hub + +- [ ] Cloud Native + - [ ] Dynamic tuning on Resource management through K8s + +- [ ] Evaluation & Others + - [ ] CICD & Validation + - [ ] End2End evaluation diff --git a/latest/_sources/roadmap/CICD.md.txt b/latest/_sources/roadmap/CICD.md.txt new file mode 100644 index 000000000..2b79eb166 --- /dev/null +++ b/latest/_sources/roadmap/CICD.md.txt @@ -0,0 +1,29 @@ +# OPEA CI/CD Roadmap + +## Milestone 1 (May, Done) +- Format scan for GenAIExamples/GenAIComps/GenAIInfra/GenAIEval +- Security scan for GenAIExamples/GenAIComps/GenAIInfra/GenAIEval +- Unit test for GenAIComps/GenAIInfra/GenAIEval +- E2E test for GenAIExamples/GenAIComps/GenAIInfra milestone1 related scope + +## Milestone 2 (June) +- CI infrastructure optimization +- k8s multi-node cluster on 2 Xeon 
node for CI +- k8s multi-node cluster on 2 Gaudi node for CI +- Set up image repository for CI +- UT coverage measurement +- Cross-projects impact monitor +- E2E test for GenAIExamples/GenAIComps/GenAIInfra milestone2 related scope +- RAG benchmark with GenAIEval + +## Milestone 3 (July) +- Enhance code coverage +- E2E test for GenAIExamples/GenAIComps/GenAIInfra milestone3 related scope +- GMC test for k8s +- k8s scalability test + +## Milestone 4 (Aug) +- Enhance code coverage +- E2E test for GenAIExamples/GenAIComps/GenAIInfra milestone4 related scope +- Enhance k8s scalability test +- Auto CD workflow setup diff --git a/latest/codeowner.html b/latest/codeowner.html new file mode 100644 index 000000000..6f98788af --- /dev/null +++ b/latest/codeowner.html @@ -0,0 +1,281 @@ + + + + + + + OPEA Project Code Owners — OPEA™ 0.8 documentation + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+ + + + +
+ +
+

OPEA Project Code Owners

+

These tables list the GitHub IDs of code owners. For a PR review, please contact the corresponding owner.

+ +
+

GenAIExamples

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

examples

owner

AudioQnA

Spycsh

ChatQnA

lvliang-intel

CodeGen

lvliang-intel

CodeTrans

Spycsh

DocSum

Spycsh

SearchQnA

letonghan

Language Translation

letonghan

VisualQnA

lvliang-intel

Others

lvliang-intel

+
+
+

GenAIComps

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

comps

owner

asr

Spycsh

cores

lvliang-intel

dataprep

XinyuYe-Intel

embedding

XuhuiRen

guardrails

letonghan

llms

lvliang-intel

reranks

XuhuiRen

retrievers

XuhuiRen

tts

Spycsh

+
+
+

GenAIEval

+

lvliang-intel, changwangss, lkk12014402

+
+
+

GenAIInfra

+

mkbhanda, irisdingbj, jfding, ftian1, yongfengdu

+
+
+

CICD

+

chensuyue, daisy-ycguo, ashahba, preethivenkatesh

+
+
+ + +
+
+ +
+ +
+ +
+

© Copyright 2024-2024 OPEA™, a Series of LF Projects, LLC. + + +Published on Aug 05, 2024. + +

+ + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/latest/community/CODE_OF_CONDUCT.html b/latest/community/CODE_OF_CONDUCT.html new file mode 100644 index 000000000..046d5a1f9 --- /dev/null +++ b/latest/community/CODE_OF_CONDUCT.html @@ -0,0 +1,299 @@ + + + + + + + Contributor Covenant Code of Conduct — OPEA™ 0.8 documentation + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+ + + + +
+ +
+

Contributor Covenant Code of Conduct

+
+

Our Pledge

+

We as members, contributors, and leaders pledge to make participation in our +community a harassment-free experience for everyone, regardless of age, body +size, visible or invisible disability, ethnicity, sex characteristics, gender +identity and expression, level of experience, education, socio-economic status, +nationality, personal appearance, race, caste, color, religion, or sexual +identity and orientation.

+

We pledge to act and interact in ways that contribute to an open, welcoming, +diverse, inclusive, and healthy community.

+
+
+

Our Standards

+

Examples of behavior that contributes to a positive environment for our +community include:

+
    +
  • Demonstrating empathy and kindness toward other people

  • +
  • Being respectful of differing opinions, viewpoints, and experiences

  • +
  • Giving and gracefully accepting constructive feedback

  • +
  • Accepting responsibility and apologizing to those affected by our mistakes, +and learning from the experience

  • +
  • Focusing on what is best not just for us as individuals, but for the overall +community

  • +
+

Examples of unacceptable behavior include:

+
    +
  • The use of sexualized language or imagery, and sexual attention or advances of +any kind

  • +
  • Trolling, insulting or derogatory comments, and personal or political attacks

  • +
  • Public or private harassment

  • +
  • Publishing others’ private information, such as a physical or email address, +without their explicit permission

  • +
  • Other conduct which could reasonably be considered inappropriate in a +professional setting

  • +
+
+
+

Enforcement Responsibilities

+

Community leaders are responsible for clarifying and enforcing our standards of +acceptable behavior and will take appropriate and fair corrective action in +response to any behavior that they deem inappropriate, threatening, offensive, +or harmful.

+

Community leaders have the right and responsibility to remove, edit, or reject +comments, commits, code, wiki edits, issues, and other contributions that are +not aligned to this Code of Conduct, and will communicate reasons for moderation +decisions when appropriate.

+
+
+

Scope

+

This Code of Conduct applies within all community spaces, and also applies when +an individual is officially representing the community in public spaces. +Examples of representing our community include using an official e-mail address, +posting via an official social media account, or acting as an appointed +representative at an online or offline event.

+
+
+

Enforcement

+

Instances of abusive, harassing, or otherwise unacceptable behavior may be +reported to the community leaders. +All complaints will be reviewed and investigated promptly and fairly.

+

All community leaders are obligated to respect the privacy and security of the +reporter of any incident.

+
+
+

Enforcement Guidelines

+

Community leaders will follow these Community Impact Guidelines in determining +the consequences for any action they deem in violation of this Code of Conduct:

+
+

1. Correction

+

Community Impact: Use of inappropriate language or other behavior deemed +unprofessional or unwelcome in the community.

+

Consequence: A private, written warning from community leaders, providing +clarity around the nature of the violation and an explanation of why the +behavior was inappropriate. A public apology may be requested.

+
+
+

2. Warning

+

Community Impact: A violation through a single incident or series of +actions.

+

Consequence: A warning with consequences for continued behavior. No +interaction with the people involved, including unsolicited interaction with +those enforcing the Code of Conduct, for a specified period of time. This +includes avoiding interactions in community spaces as well as external channels +like social media. Violating these terms may lead to a temporary or permanent +ban.

+
+
+

3. Temporary Ban

+

Community Impact: A serious violation of community standards, including +sustained inappropriate behavior.

+

Consequence: A temporary ban from any sort of interaction or public +communication with the community for a specified period of time. No public or +private interaction with the people involved, including unsolicited interaction +with those enforcing the Code of Conduct, is allowed during this period. +Violating these terms may lead to a permanent ban.

+
+
+

4. Permanent Ban

+

Community Impact: Demonstrating a pattern of violation of community +standards, including sustained inappropriate behavior, harassment of an +individual, or aggression toward or disparagement of classes of individuals.

+

Consequence: A permanent ban from any sort of public interaction within the +community.

+
+
+
+

Attribution

+

This Code of Conduct is adapted from the Contributor Covenant, +version 2.1, available at +https://www.contributor-covenant.org/version/2/1/code_of_conduct.html.

+

Community Impact Guidelines were inspired by +Mozilla’s code of conduct enforcement ladder.

+

For answers to common questions about this code of conduct, see the FAQ at +https://www.contributor-covenant.org/faq. Translations are available at +https://www.contributor-covenant.org/translations.

+
+
+ + +
+
+ +
+ +
+ +
+

© Copyright 2024-2024 OPEA™, a Series of LF Projects, LLC. + + +Published on Aug 05, 2024. + +

+ + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/latest/community/CONTRIBUTING.html b/latest/community/CONTRIBUTING.html new file mode 100644 index 000000000..b56329e72 --- /dev/null +++ b/latest/community/CONTRIBUTING.html @@ -0,0 +1,342 @@ + + + + + + + Contribution Guidelines — OPEA™ 0.8 documentation + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+ + + + +
+ +
+

Contribution Guidelines

+

Thanks for considering contributing to the OPEA project. The contribution process is similar to that of other open source projects on GitHub, involving a fair amount of open discussion in issues and feature requests among the maintainers, contributors, and users.

+
+

Table of Contents

+ + + +
+
+

All The Ways To Contribute

+
+

Community Discussions

+

Developers are encouraged to participate in discussions by opening an issue in one of the GitHub repos at https://github.com/opea-project. Alternatively, they can send an email to info@opea.dev or subscribe to X/Twitter and LinkedIn Page to get the latest updates about the OPEA project.

+
+
+

Documentation

+

The quality of OPEA project’s documentation can have a huge impact on its success. We rely on OPEA maintainers and contributors to build clear, detailed, and up-to-date documentation for users.

+
+
+

Reporting Issues

+

If an OPEA user runs into unexpected behavior, the proper way to report it is on the Issues page of the corresponding GitHub project. Please first ensure that no similar issue already exists on the issue list. Follow the Bug Report template and supply as much information as you can, along with any additional insights you might have. It’s helpful if the issue submitter can narrow down the problematic behavior to a minimal reproducible test case.

+
+
+

Proposing New Features

+

OPEA communities use the RFC (request for comments) process for collaborating on substantial changes to OPEA projects. The RFC process allows the contributors to collaborate during the design process, providing clarity and validation before jumping to implementation.

+

When is the RFC process needed?

+

The RFC process is necessary for changes that have a substantial impact on end users, the workflow, or user-facing APIs. It generally includes:

+
    +
  • Changes to core workflow.

  • +
  • Changes with significant architectural implications.

  • +
  • Changes which modify or introduce user-facing interfaces.

  • +
+

It is not necessary for changes like:

+
    +
  • Bug fixes and optimizations with no semantic change.

  • +
  • Small features that don’t involve workflow or interface changes and only impact a narrow use case.

  • +
+
+

Step-by-Step guidelines

+
    +
  • Follow the RFC Template to propose your idea.

  • +
  • Submit the proposal to the Issues page of the corresponding OPEA github repository.

  • +
  • Reach out to your RFC’s assignee if you need any help with the RFC process.

  • +
  • Amend your proposal in response to reviewer’s feedback.

  • +
+
+
+
+

Submitting Pull Requests

+
+

Create Pull Request

+

If you have improvements to OPEA projects, send your pull requests to each project for review. +If you are new to GitHub, view the pull request How To.

+
+
Step-by-Step guidelines
+
    +
  • Star this repository using the button Star in the top right corner.

  • +
  • Fork the corresponding OPEA repository using the button Fork in the top right corner.

  • +
  • Clone your forked repository to your PC by running git clone "url to your repo"

  • +
  • Create a new branch for your modifications by running git checkout -b new-branch

  • +
  • Add your files with git add -A, commit with git commit -s -m "This is my commit message", and push with git push origin new-branch.

  • +
  • Create a pull request for the project you want to contribute to.

  • +
+
+
+
+

Pull Request Template

+

See PR template

+
+
+

Pull Request Acceptance Criteria

+
    +
  • At least two approvals from reviewers

  • +
  • All detected status checks pass

  • +
  • All conversations resolved

  • +
  • Third-party dependency license compatible

  • +
+
+
+

Pull Request Status Checks Overview

+

The OPEA projects use GitHub Actions for CI testing.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Test Name

Test Scope

Test Pass Criteria

DCO

Use git commit -s to sign off

PASS

Code Format Scan

pre-commit.ci [Bot]

PASS

Code Security Scan

Bandit/Hadolint/Dependabot/CodeQL/Trellix

PASS

Unit Test

Unit test under test folder

PASS

End to End Test

End to end test workflow

PASS

+
    +
  • Developer Certificate of Origin (DCO): the PR must agree to the terms of the Developer Certificate of Origin by signing off each commit with -s, e.g. git commit -s -m 'This is my commit message'.

  • +
  • Unit Test: the PR must pass all unit tests without coverage regression.

  • +
  • End to End Test: the PR must pass all end to end tests.

    +
      +
    • If the PR introduces a new microservice for GenAIComps, the PR must include new end to end tests. The test script name should match the folder name so the test will be automatically triggered by the test structure; for example, if the new service is GenAIComps/comps/dataprep/redis/langchain, then the test script name should be GenAIComps/tests/test_dataprep_redis_langchain.sh.

    • +
    • If the PR introduces a new example for GenAIExamples, the PR must include new example end to end tests. The test script name should match the example name so the test will be automatically triggered by the test structure; for example, if the example is GenAIExamples/ChatQnA, then the test script names should be ChatQnA/tests/test_chatqna_on_gaudi.sh and ChatQnA/tests/test_chatqna_on_xeon.sh.

    • +
    +
  • +
+
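The test-script naming convention above is mechanical enough to sketch as a small helper. The snippet below is illustrative only, not part of the OPEA tooling; the function name and behavior are assumptions for demonstration:

```python
def ci_test_script_name(service_path: str) -> str:
    """Illustrative helper (not part of OPEA tooling): derive the expected
    CI test script path from a GenAIComps service folder name.

    e.g. "GenAIComps/comps/dataprep/redis/langchain"
      -> "GenAIComps/tests/test_dataprep_redis_langchain.sh"
    """
    parts = service_path.strip("/").split("/")
    repo = parts[0]              # e.g. "GenAIComps"
    component_parts = parts[2:]  # everything under the "comps/" folder
    return f"{repo}/tests/test_{'_'.join(component_parts)}.sh"

print(ci_test_script_name("GenAIComps/comps/dataprep/redis/langchain"))
# -> GenAIComps/tests/test_dataprep_redis_langchain.sh
```

Following this mapping means a new microservice's tests are picked up by CI without any extra workflow configuration.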
+
+

Pull Request Review

+

You can add reviewers from the code owners list to your PR.

+
+
+
+
+

Support

+
    +
  • Feel free to reach out to the OPEA maintainers at info@opea.dev for support.

  • +
  • Submit your questions, feature requests, and bug reports to the GitHub issues page.

  • +
+
+
+

Contributor Covenant Code of Conduct

+

This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the Contributor Covenant Code of Conduct.

+
+
+ + +
+
+ +
+ +
+ +
+

© Copyright 2024-2024 OPEA™, a Series of LF Projects, LLC. + + +Published on Aug 05, 2024. + +

+ + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/latest/community/SECURITY.html b/latest/community/SECURITY.html new file mode 100644 index 000000000..53acf3b56 --- /dev/null +++ b/latest/community/SECURITY.html @@ -0,0 +1,190 @@ + + + + + + + Reporting a Vulnerability — OPEA™ 0.8 documentation + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+ + + + +
+ +
+

Reporting a Vulnerability

+

Report any security vulnerabilities in this project by following these Linux Foundation security guidelines.

+
+

Script Usage Notice

+

SCRIPT USAGE NOTICE: By downloading and using any script file included with the associated software package (such as files with .bat, .cmd, or .JS extensions, Dockerfiles, or any other type of file that, when executed, automatically downloads and/or installs files onto your system) +(the “Script File”), it is your obligation to review the Script File to understand what files (e.g., other software, AI models, AI Datasets) the Script File will download to your system (“Downloaded Files”). +Furthermore, by downloading and using the Downloaded Files, even if they are installed through a silent install, you agree to any and all terms and conditions associated with such files, including but not limited to, license terms, notices, or disclaimers.

+
+
+ + +
+
+ +
+ +
+ +
+

© Copyright 2024-2024 OPEA™, a Series of LF Projects, LLC. + + +Published on Aug 05, 2024. + +

+ + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/latest/community/pull_request_template.html b/latest/community/pull_request_template.html new file mode 100644 index 000000000..60bd5d052 --- /dev/null +++ b/latest/community/pull_request_template.html @@ -0,0 +1,208 @@ + + + + + + + OPEA Pull Request Template — OPEA™ 0.8 documentation + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+ + + + +
+ +
+

OPEA Pull Request Template

+
+

Description

+

A summary of the proposed changes, as well as the relevant motivation and context.

+
+
+

Issues

+

List the issue or RFC link this PR is working on. If there is no such link, please mark it as n/a.

+
+
+

Type of change

+

Indicate the type of change, as listed below. Please delete options that are not relevant.

+
    +
  • [ ] Bug fix (non-breaking change which fixes an issue)

  • +
  • [ ] New feature (non-breaking change which adds new functionality)

  • +
  • [ ] Breaking change (fix or feature that would break existing design and interface)

  • +
+
+
+

Dependencies

+

List any newly introduced third-party dependencies, if they exist.

+
+
+

Tests

+

Describe the tests that you ran to verify your changes. Please list the relevant details of your test configuration and step-by-step reproduction instructions.

+
+
+ + +
+
+ +
+ +
+ +
+

© Copyright 2024-2024 OPEA™, a Series of LF Projects, LLC. + + +Published on Aug 05, 2024. + +

+ + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/latest/community/rfc_template.html b/latest/community/rfc_template.html new file mode 100644 index 000000000..375716c73 --- /dev/null +++ b/latest/community/rfc_template.html @@ -0,0 +1,226 @@ + + + + + + + RFC Template — OPEA™ 0.8 documentation + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+ + + + +
+ +
+

RFC Template

+

Replace the “RFC Template” heading with your RFC title, followed by +a short description of the feature you want to contribute.

+
+

RFC Content

+
+

Author

+

List all contributors of this RFC.

+
+
+

Status

+

Change the PR status to Under Review | Rejected | Accepted.

+
+
+

Objective

+

What problem will this solve? What are the goals and non-goals of this RFC?

+
+
+

Motivation

+

Why is this problem valuable to solve? Does related work already exist?

+
+
+

Design Proposal

+

This is the heart of the document, used to elaborate the design philosophy and the detailed proposal.

+
+
+

Alternatives Considered

+

List other alternatives, if any, and the corresponding pros/cons of each proposal.

+
+
+

Compatibility

+

List possible incompatible interface or workflow changes, if any exist.

+
+
+

Miscellaneous

+

List other information user and developer may care about, such as:

+
    +
  • Performance Impact, such as speed, memory, accuracy.

  • +
  • Engineering Impact, such as binary size, startup time, build time, test times.

  • +
  • Security Impact, such as code vulnerability.

  • +
  • TODO List or staging plan.

  • +
+
+
+
+ + +
+
+ +
+ +
+ +
+

© Copyright 2024-2024 OPEA™, a Series of LF Projects, LLC. + + +Published on Aug 05, 2024. + +

+ + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/latest/community/rfcs/24-05-16-GenAIExamples-001-Using_MicroService_to_implement_ChatQnA.html b/latest/community/rfcs/24-05-16-GenAIExamples-001-Using_MicroService_to_implement_ChatQnA.html new file mode 100644 index 000000000..2b3038ecb --- /dev/null +++ b/latest/community/rfcs/24-05-16-GenAIExamples-001-Using_MicroService_to_implement_ChatQnA.html @@ -0,0 +1,402 @@ + + + + + + + 24-05-16 GenAIExamples-001 Using MicroService to Implement ChatQnA — OPEA™ 0.8 documentation + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    + + + + +
  • Latest »
  • + +
  • 24-05-16 GenAIExamples-001 Using MicroService to Implement ChatQnA
  • + +
  • + View page source +
  • +
+
+
+
+ + + + +
+ +
+

24-05-16 GenAIExamples-001 Using MicroService to Implement ChatQnA

+
+

Author

+

lvliang-intel, ftian1, hshen14, Spycsh, letonghan

+
+
+

Status

+

Under Review

+
+
+

Objective

+

This RFC aims to introduce the OPEA microservice design and demonstrate its application to Retrieval-Augmented Generation (RAG). The objective is to address the challenge of designing a flexible architecture for Enterprise AI applications by adopting a microservice approach. This approach facilitates easier deployment, enabling one or multiple microservices to form a megaservice. Each megaservice interfaces with a gateway, allowing users to access services through endpoints exposed by the gateway. The architecture is general, and RAG is the first example to which we apply it.

+
+
+

Motivation

+

In designing the Enterprise AI applications, leveraging a microservices architecture offers significant advantages, particularly in handling large volumes of user requests. By breaking down the system into modular microservices, each dedicated to a specific function, we can achieve substantial performance improvements through the ability to scale out individual components. This scalability ensures that the system can efficiently manage high demand, distributing the load across multiple instances of each microservice as needed.

+

The microservices architecture contrasts sharply with monolithic approaches, such as the tightly coupled module structure found in LangChain. In such monolithic designs, all modules are interdependent, posing significant deployment challenges and limiting scalability. Any change or scaling requirement in one module necessitates redeploying the entire system, leading to potential downtime and increased complexity.

+
+
+

Design Proposal

+
+

Microservice

+

Microservices are akin to building blocks, offering the fundamental services for constructing RAG (Retrieval-Augmented Generation) applications. Each microservice is designed to perform a specific function or task within the application architecture. By breaking down the system into smaller, self-contained services, microservices promote modularity, flexibility, and scalability. This modular approach allows developers to independently develop, deploy, and scale individual components of the application, making it easier to maintain and evolve over time. Additionally, microservices facilitate fault isolation, as issues in one service are less likely to impact the entire system.

+
+
+

Megaservice

+

A megaservice is a higher-level architectural construct composed of one or more microservices, providing the capability to assemble end-to-end applications. Unlike individual microservices, which focus on specific tasks or functions, a megaservice orchestrates multiple microservices to deliver a comprehensive solution. Megaservices encapsulate complex business logic and workflow orchestration, coordinating the interactions between various microservices to fulfill specific application requirements. This approach enables the creation of modular yet integrated applications, where each microservice contributes to the overall functionality of the megaservice.

+
+
+

Gateway

+

The Gateway serves as the interface for users to access the megaservice, providing customized access based on user requirements. It acts as the entry point for incoming requests, routing them to the appropriate microservices within the megaservice architecture. Gateways support API definition, API versioning, rate limiting, and request transformation, allowing for fine-grained control over how users interact with the underlying microservices. By abstracting the complexity of the underlying infrastructure, gateways provide a seamless and user-friendly experience for interacting with the megaservice.

+
+
+

Proposal

+

The proposed architecture for the ChatQnA application involves the creation of two megaservices. The first megaservice functions as the core pipeline, comprising four microservices: embedding, retriever, reranking, and LLM. This megaservice exposes a ChatQnAGateway, allowing users to query the system via the /v1/chatqna endpoint. The second megaservice manages user data storage in VectorStore and is composed of a single microservice, dataprep. This megaservice provides a DataprepGateway, enabling user access through the /v1/dataprep endpoint.
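For illustration, client payloads for these two endpoints might look like the following (a sketch only: the payload field names, and the assumption that /v1/chatqna follows an OpenAI-style chat schema, are not defined by this RFC):

```python
import json

# Hypothetical request payloads; the field names are assumptions,
# not taken from the OPEA API definition.
chatqna_payload = {
    "messages": [{"role": "user", "content": "What is OPEA?"}],
}
dataprep_payload = {
    "path": "docs/overview.pdf",  # document to ingest into VectorStore
}

# With both gateways running, a client would POST these payloads, e.g.:
#   requests.post("http://localhost:8888/v1/chatqna", json=chatqna_payload)
#   requests.post("http://localhost:9999/v1/dataprep", json=dataprep_payload)
print(json.dumps(chatqna_payload))
```

The ports (8888 and 9999) match the gateway defaults used in the example code later in this proposal.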

+

The Gateway class facilitates the registration of additional endpoints, enhancing the system’s flexibility and extensibility. The /v1/dataprep endpoint is responsible for handling user documents to be stored in VectorStore under a predefined database name. The first megaservice will then query the data from this predefined database.

+

architecture

+
+

Example Python Code for Constructing Services

+

Users can use the ServiceOrchestrator class to build the microservice pipeline and add a gateway for each megaservice.

+
class ChatQnAService:
+    def __init__(self, rag_port=8888, data_port=9999):
+        self.rag_port = rag_port
+        self.data_port = data_port
+        self.rag_service = ServiceOrchestrator()
+        self.data_service = ServiceOrchestrator()
+
+    def construct_rag_service(self):
+        embedding = MicroService(
+            name="embedding",
+            host=SERVICE_HOST_IP,
+            port=6000,
+            endpoint="/v1/embeddings",
+            use_remote_service=True,
+            service_type=ServiceType.EMBEDDING,
+        )
+        retriever = MicroService(
+            name="retriever",
+            host=SERVICE_HOST_IP,
+            port=7000,
+            endpoint="/v1/retrieval",
+            use_remote_service=True,
+            service_type=ServiceType.RETRIEVER,
+        )
+        rerank = MicroService(
+            name="rerank",
+            host=SERVICE_HOST_IP,
+            port=8000,
+            endpoint="/v1/reranking",
+            use_remote_service=True,
+            service_type=ServiceType.RERANK,
+        )
+        llm = MicroService(
+            name="llm",
+            host=SERVICE_HOST_IP,
+            port=9000,
+            endpoint="/v1/chat/completions",
+            use_remote_service=True,
+            service_type=ServiceType.LLM,
+        )
+        self.rag_service.add(embedding).add(retriever).add(rerank).add(llm)
+        self.rag_service.flow_to(embedding, retriever)
+        self.rag_service.flow_to(retriever, rerank)
+        self.rag_service.flow_to(rerank, llm)
+        self.rag_gateway = ChatQnAGateway(megaservice=self.rag_service, host="0.0.0.0", port=self.rag_port)
+
+    def construct_data_service(self):
+        dataprep = MicroService(
+            name="dataprep",
+            host=SERVICE_HOST_IP,
+            port=5000,
+            endpoint="/v1/dataprep",
+            use_remote_service=True,
+            service_type=ServiceType.DATAPREP,
+        )
+        self.data_service.add(dataprep)
+        self.data_gateway = DataPrepGateway(megaservice=self.data_service, host="0.0.0.0", port=self.data_port)
+
+    def start_service(self):
+        self.construct_rag_service()
+        self.construct_data_service()
+        self.rag_gateway.start()
+        self.data_gateway.start()
+
+if __name__ == "__main__":
+    chatqna = ChatQnAService()
+    chatqna.start_service()
+
+
+
+
+

Constructing Services with yaml

+

Below are example YAML configurations for the ChatQnA application, corresponding to the rag.yaml and dataprep.yaml files loaded by the Python code that follows. Each configuration lists the endpoint for each microservice and specifies the workflow for the megaservice.

+
opea_micro_services:
+  embedding:
+    endpoint: http://localhost:6000/v1/embeddings
+  retrieval:
+    endpoint: http://localhost:7000/v1/retrieval
+  reranking:
+    endpoint: http://localhost:8000/v1/reranking
+  llm:
+    endpoint: http://localhost:9000/v1/chat/completions
+
+opea_mega_service:
+  mega_flow:
+    - embedding >> retrieval >> reranking >> llm
+
+
+
opea_micro_services:
+  dataprep:
+    endpoint: http://localhost:5000/v1/dataprep
+
+opea_mega_service:
+  mega_flow:
+    - dataprep
+
+
+

The following Python code demonstrates how to use the YAML configurations to initialize the microservices and megaservices, and set up the gateways for user interaction.

+
from comps import ServiceOrchestratorWithYaml
+from comps import ChatQnAGateway, DataPrepGateway
+data_service = ServiceOrchestratorWithYaml(yaml_file_path="dataprep.yaml")
+rag_service = ServiceOrchestratorWithYaml(yaml_file_path="rag.yaml")
+rag_gateway = ChatQnAGateway(rag_service, port=8888)
+data_gateway = DataPrepGateway(data_service, port=9999)
+# Start gateways
+rag_gateway.start()
+data_gateway.start()
+
+
+
+
+

Example Code for Customizing Gateway

+

The Gateway class provides a customizable interface for accessing the megaservice. It handles requests and responses, allowing users to interact with the megaservice. The class defines methods for adding custom routes, stopping the service, and listing available services and parameters. Users can extend this class to implement specific handling for requests and responses according to their requirements.

+
class Gateway:
+    def __init__(
+        self,
+        megaservice,
+        host="0.0.0.0",
+        port=8888,
+        endpoint=str(MegaServiceEndpoint.CHAT_QNA),
+        input_datatype=ChatCompletionRequest,
+        output_datatype=ChatCompletionResponse,
+    ):
+        ...
+        self.service = MicroService(
+            service_role=ServiceRoleType.MEGASERVICE,
+            service_type=ServiceType.GATEWAY,
+            ...
+        )
+        self.define_default_routes()
+
+    def define_default_routes(self):
+        self.service.app.router.add_api_route(self.endpoint, self.handle_request, methods=["POST"])
+        self.service.app.router.add_api_route(str(MegaServiceEndpoint.LIST_SERVICE), self.list_service, methods=["GET"])
+        self.service.app.router.add_api_route(
+            str(MegaServiceEndpoint.LIST_PARAMETERS), self.list_parameter, methods=["GET"]
+        )
+
+    def add_route(self, endpoint, handler, methods=["POST"]):
+        self.service.app.router.add_api_route(endpoint, handler, methods=methods)
+
+    def start(self):
+        self.service.start()
+
+    def stop(self):
+        self.service.stop()
+
+    async def handle_request(self, request: Request):
+        raise NotImplementedError("Subclasses must implement this method")
+
+    def list_service(self):
+        raise NotImplementedError("Subclasses must implement this method")
+
+    def list_parameter(self):
+        raise NotImplementedError("Subclasses must implement this method")
+
+    ...
+
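As a sketch of how this extension point can be used (the MyQnAGateway class, its /v1/health route, and the minimal stand-in Gateway are all illustrative assumptions, not part of the proposal):

```python
class Gateway:  # minimal stand-in for the Gateway class sketched above
    def __init__(self, megaservice, host="0.0.0.0", port=8888, endpoint="/v1/chatqna"):
        self.megaservice = megaservice
        self.host, self.port, self.endpoint = host, port, endpoint
        self.routes = {}  # stand-in for the underlying application router

    def add_route(self, endpoint, handler, methods=["POST"]):
        self.routes[endpoint] = (handler, methods)


class MyQnAGateway(Gateway):
    """Hypothetical subclass registering a custom route beyond the defaults."""

    def __init__(self, megaservice, host="0.0.0.0", port=8888):
        super().__init__(megaservice, host, port, endpoint="/v1/my_qna")
        # Wire in an extra endpoint on top of the default routes.
        self.add_route("/v1/health", self.health, methods=["GET"])

    def health(self):
        return {"status": "ok"}


gw = MyQnAGateway(megaservice=None)
print(sorted(gw.routes))  # -> ['/v1/health']
```

A real subclass would additionally override handle_request to forward the parsed request through the megaservice pipeline.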
+
+
+
+
+
+

Alternatives Considered

+

An alternative approach could be to design a monolithic application for RAG instead of a microservice architecture. However, this approach may lack the flexibility and scalability offered by microservices. Pros of the proposed microservice architecture include easier deployment, independent scaling of components, and improved fault isolation. Cons may include increased complexity in managing multiple services.

+
+
+

Compatibility

+

Potential incompatible interface or workflow changes may include adjustments needed for existing clients to interact with the new microservice architecture. However, careful planning and communication can mitigate any disruptions.

+
+
+

Miscs

+

Performance Impact: The microservice architecture may affect performance metrics, depending on factors such as network latency. However, for large-scale user access, scaling out microservices can enhance responsiveness, significantly improving performance compared to monolithic designs.

+

By adopting this microservice architecture for RAG, we aim to enhance the flexibility, scalability, and maintainability of the Enterprise AI application deployment, ultimately improving the user experience and facilitating future development and enhancements.

+
+
+ + +
+
+ +
+ +
+ +
+

© Copyright 2024-2024 OPEA™, a Series of LF Projects, LLC. + + +Published on Aug 05, 2024. + +

+ + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/latest/community/rfcs/24-05-16-OPEA-001-Overall-Design.html b/latest/community/rfcs/24-05-16-OPEA-001-Overall-Design.html new file mode 100644 index 000000000..13b1efa47 --- /dev/null +++ b/latest/community/rfcs/24-05-16-OPEA-001-Overall-Design.html @@ -0,0 +1,268 @@ + + + + + + + 24-05-16 OPEA-001 Overall Design — OPEA™ 0.8 documentation + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+ + + + +
+ +
+

24-05-16 OPEA-001 Overall Design

+
+

Author

+

ftian1, lvliang-intel, hshen14

+
+
+

Status

+

Under Review

+
+
+

Objective

+

Provide a stable, extensible, secure, and easy-to-use orchestration framework design that lets OPEA users quickly build their own GenAI applications.

+

The requirements include, but are not limited to:

+
    +
  1. orchestration planner

    +

    offer a config-based definition or low-code approach for constructing complex LLM applications.

    +
  2. +
  3. component registry

    +

    allow users to register new services for building complex GenAI applications

    +
  4. +
  5. monitoring

    +

    allow users to trace the workflow, including logging, execution status, execution time, and so on.

    +
  6. +
  7. scalability

    +

    easily scale with Kubernetes (K8s) or other deployment technologies in on-premises and cloud environments.

    +
  8. +
+
+
+

Motivation

+

This RFC presents the OPEA overall design philosophy, including the overall architecture, workflow, and component design, for community discussion.

+
+
+

Design Proposal

+

The proposed overall architecture is

+

OPEA Architecture

+
    +
  1. GenAIComps

    +

    The suite of microservices, leveraging a service composer to assemble a mega-service tailored for real-world Enterprise AI applications.

    +
  2. +
  3. GenAIExamples

    +

    The collective list of Generative AI (GenAI) and Retrieval-Augmented Generation (RAG) examples, targeting demonstration of the whole orchestration pipeline.

    +
  4. +
  5. GenAIInfra

    +

    The containerization and cloud native suite for OPEA, including artifacts to deploy GenAIExamples in a cloud native way, which can be used by enterprise users to deploy to their own cloud.

    +
  6. +
  7. GenAIEval

    +

    The evaluation, benchmark, and scorecard suite for OPEA, targeting performance (throughput and latency), accuracy on popular evaluation harnesses, safety, and hallucination.

    +
  8. +
+

The proposed OPEA workflow is

+

OPEA Workflow

+
    +
  1. Microservice

    +

    Microservices are akin to building blocks, offering the fundamental services for constructing RAG (Retrieval-Augmented Generation) applications. Each microservice is designed to perform a specific function or task within the application architecture. By breaking down the system into smaller, self-contained services, microservices promote modularity, flexibility, and scalability. This modular approach allows developers to independently develop, deploy, and scale individual components of the application, making it easier to maintain and evolve over time. Additionally, microservices facilitate fault isolation, as issues in one service are less likely to impact the entire system.

    +
  2. +
  3. Megaservice

    +

    A megaservice is a higher-level architectural construct composed of one or more microservices, providing the capability to assemble end-to-end applications. Unlike individual microservices, which focus on specific tasks or functions, a megaservice orchestrates multiple microservices to deliver a comprehensive solution. Megaservices encapsulate complex business logic and workflow orchestration, coordinating the interactions between various microservices to fulfill specific application requirements. This approach enables the creation of modular yet integrated applications, where each microservice contributes to the overall functionality of the megaservice.

    +
  4. +
  5. Gateway

    +

    The Gateway serves as the interface for users to access the megaservice, providing customized access based on user requirements. It acts as the entry point for incoming requests, routing them to the appropriate microservices within the megaservice architecture. Gateways support API definition, API versioning, rate limiting, and request transformation, allowing for fine-grained control over how users interact with the underlying microservices. By abstracting the complexity of the underlying infrastructure, gateways provide a seamless and user-friendly experience for interacting with the megaservice.

    +
  6. +
+
+
+

Alternatives Considered

+

n/a

+
+
+

Compatibility

+

n/a

+
+
+

Miscs

+
    +
  • TODO List:

    +
      +
    • [ ] Micro Service specification

    • +
    • [ ] Mega Service specification

    • +
    • [ ] static cloud resource allocator vs dynamic cloud resource allocator

    • +
    • [ ] open telemetry support

    • +
    • [ ] authentication and trusted env

    • +
    +
  • +
+
+
+ + +
+
+ +
+ +
+ +
+

© Copyright 2024-2024 OPEA™, a Series of LF Projects, LLC. + + +Published on Aug 05, 2024. + +

+ + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/latest/community/rfcs/24-05-24-OPEA-001-Code-Structure.html b/latest/community/rfcs/24-05-24-OPEA-001-Code-Structure.html new file mode 100644 index 000000000..40ce74c20 --- /dev/null +++ b/latest/community/rfcs/24-05-24-OPEA-001-Code-Structure.html @@ -0,0 +1,244 @@ + + + + + + + 24-05-24 OPEA-001 Code Structure — OPEA™ 0.8 documentation + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+ + + + +
+ +
+

24-05-24 OPEA-001 Code Structure

+
+

Author

+

ftian1, lvliang-intel, hshen14

+
+
+

Status

+

Under Review

+
+
+

Objective

+

Define clear criteria and rules for adding new code to OPEA projects.

+
+
+

Motivation

+

The OPEA project consists of several repos, including GenAIExamples, GenAIInfra, GenAIComps, and so on. We need a clear definition of where the new code for a given feature should go, so we maintain a consistent and well-organized code structure.

+
+
+

Design Proposal

+

The proposed code structure of GenAIInfra is:

+
GenAIInfra/
+├── kubernetes-addon/        # the folder implementing additional operational capabilities to Kubernetes applications
+├── microservices-connector/ # the folder containing the implementation of microservice connector on Kubernetes
+└── scripts/
+
+
+

The proposed code structure of GenAIExamples is:

+
GenAIExamples/
+└── ChatQnA/
+    ├── kubernetes/
+    │   ├── manifests
+    │   └── microservices-connector
+    ├── docker/
+    │   ├── docker_compose.yaml
+    │   ├── dockerfile
+    │   └── chatqna.py
+    ├── chatqna.yaml    # The MegaService Yaml
+    └── README.md
+
+
+

The proposed code structure of GenAIComps is:

+
GenAIComps/
+└── comps/
+    └── llms/
+        ├── text-generation/
+        │   ├── tgi-gaudi/
+        │   │   ├── dockerfile
+        │   │   └── llm.py
+        │   ├── tgi-xeon/
+        │   │   ├── dockerfile
+        │   │   └── llm.py
+        │   ├── vllm-gaudi
+        │   ├── ray
+        │   └── langchain
+        └── text-summarization/
+
+
+
+
+

Miscs

+

n/a

+
+
+ + +
+
+ +
+ +
+ +
+

© Copyright 2024-2024 OPEA™, a Series of LF Projects, LLC. + + +Published on Aug 05, 2024. + +

+ + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/latest/community/rfcs/README.html b/latest/community/rfcs/README.html new file mode 100644 index 000000000..16cb2f18a --- /dev/null +++ b/latest/community/rfcs/README.html @@ -0,0 +1,186 @@ + + + + + + + RFC Archive — OPEA™ 0.8 documentation + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+ + + + +
+ +
+

RFC Archive

+

This folder archives all RFCs contributed by the OPEA community. Users either contribute an RFC directly to this folder or submit it to an OPEA repository’s Issues page with the [RFC]: xxx string pattern in the title. The latter is automatically stored here by an archive tool.

+

The file naming convention follows this rule: yy-mm-dd-[OPEA Project Name]-[index]-title.md

+

For example, 24-04-29-GenAIExamples-001-Using_MicroService_to_implement_ChatQnA.md
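The naming convention can be checked mechanically; in the sketch below, the regex is an assumption derived from the stated rule, not taken from the archive tool itself:

```python
import re

# yy-mm-dd-[OPEA Project Name]-[index]-title.md
RFC_NAME = re.compile(r"^\d{2}-\d{2}-\d{2}-[A-Za-z]+-\d{3}-\S+\.md$")

# Names from this archive match the pattern:
assert RFC_NAME.match("24-05-16-OPEA-001-Overall-Design.md")
assert RFC_NAME.match(
    "24-04-29-GenAIExamples-001-Using_MicroService_to_implement_ChatQnA.md"
)
# A name missing the date/project/index prefix does not:
assert not RFC_NAME.match("overall-design.md")
```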

+
+ + +
+
+ +
+ +
+ +
+

© Copyright 2024-2024 OPEA™, a Series of LF Projects, LLC. + + +Published on Aug 05, 2024. + +

+ + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/latest/faq.html b/latest/faq.html new file mode 100644 index 000000000..d641b0cfe --- /dev/null +++ b/latest/faq.html @@ -0,0 +1,280 @@ + + + + + + + OPEA Frequently Asked Questions — OPEA™ 0.8 documentation + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+ + + + +
+ +
+

OPEA Frequently Asked Questions

+
+

What is OPEA’s mission?

+

OPEA’s mission is to offer a validated enterprise-grade GenAI (Generative Artificial Intelligence) RAG reference implementation. This will simplify GenAI development and deployment, thereby accelerating time-to-market.

+
+
+

What is OPEA?

+

The project currently consists of a technical conceptual framework that enables GenAI implementations to meet enterprise-grade requirements. The project offers a set of reference implementations for a wide range of enterprise use cases that can be used out-of-the-box. The project additionally offers a set of validation and compliance tools to ensure the reference implementations meet the needs outlined in the conceptual framework. This enables new reference implementations to be contributed and validated in an open manner. Partnering with the LF AI & Data Foundation places OPEA in the perfect spot for multi-partner development, evolution, and expansion.

+
+
+

What problems are faced by GenAI deployments within the enterprise?

+

Enterprises face a myriad of challenges in the development and deployment of GenAI. New models, algorithms, fine-tuning techniques, approaches for detecting and resolving bias, and methods for deploying large solutions at scale all continue to evolve at a rapid pace. One of the biggest challenges enterprises come up against is the lack of standardized software tools and technologies from which to choose. Additionally, enterprises want the flexibility to innovate rapidly and extend functionality to meet their business needs while ensuring the solution is secure and trustworthy. The lack of a framework that encompasses both proprietary and open solutions impedes enterprises from charting their destiny, resulting in an enormous investment of time and money that impacts time-to-market advantage. OPEA answers the need for a multi-provider, ecosystem-supported framework that enables the evaluation, selection, customization, and trusted deployment of solutions that businesses can rely on.

+
+
+

Why now?

+

The major adoption and deployment cycle of robust, secure, enterprise-grade GenAI solutions across all industries is in its early stages. Enterprise-grade solutions will require collaboration in the open ecosystem. The time is now for the ecosystem to come together and accelerate GenAI deployments across enterprises by offering a standardized set of tools and technologies while supporting three key tenets: openness, security, and scalability. This will require the ecosystem to work together to build reference implementations that are performant, trustworthy, and enterprise-grade ready.

+
+
+

How does it compare to other options for deploying Gen AI solutions within the enterprise?

+

There is no alternative that brings the entire ecosystem together in a vendor-neutral manner and delivers on the promise of openness, security, and scalability. This is our primary motivation for creating the OPEA project.

+
+
+

Will OPEA reference implementations work with proprietary components?

+

Like any other open-source project, the community will determine which components are needed by the broader ecosystem. Enterprises can always extend the OPEA project with other multi-vendor proprietary solutions to achieve their business goals.

+
+
+

What does OPEA acronym stand for?

+

Open Platform for Enterprise AI

+
+
+

How do I pronounce OPEA?

+

It is said ‘OH-PEA-AY’

+
+
+

What companies and open-source projects are part of OPEA?

+

AnyScale, Cloudera, DataStax, Domino Data Lab, HuggingFace, Intel, KX, MariaDB Foundation, MinIO, Qdrant, Red Hat, SAS, VMware by Broadcom, Yellowbrick Data, Zilliz

+
+
+

What is Intel contributing?

+

OPEA is to be defined jointly by several community partners, with a call for broad ecosystem contribution, under the well-established LF AI & Data Foundation. As a starting point, Intel has contributed a Technical Conceptual Framework that shows how to construct and optimize curated GenAI pipelines built for secure, turnkey enterprise deployment. At launch, Intel contributed several reference implementations on Intel hardware across Intel® Xeon® 5, Intel® Xeon® 6 and Intel® Gaudi® 2, which you can see in a GitHub repo here. Over time we intend to add to that contribution including a software infrastructure stack to enable fully containerized AI workload deployments as well as potentially implementations of those containerized workloads.

+
+
+

When you say Technical Conceptual Framework, what components are included?

+

The models and modules can be part of an OPEA repository, or be published in a stable unobstructed repository (e.g., Hugging Face) and cleared for use by an OPEA assessment. These include:

+

GenAI models – Large Language Models (LLMs), Large Vision Models (LVMs), multimodal models, etc.

+
    +
  • Ingest/Data Processing

  • +
  • Embedding Models/Services

  • +
  • Indexing/Vector/Graph data stores

  • +
  • Retrieval/Ranking

  • +
  • Prompt Engines

  • +
  • Guardrails

  • +
  • Memory systems

  • +
+
+
+

What are the different ways partners can contribute to OPEA?

+

There are different ways partners can contribute to this project:

+
    +
  • Join the project and contribute assets in terms of use cases, code, test harness, etc.

  • +
  • Provide technical leadership

  • +
  • Drive community engagement and evangelism

  • +
  • Offer program management for various projects

  • +
  • Become a maintainer, committer, and adopter

  • +
  • Define and offer use cases for various industry verticals that shape OPEA project

  • +
  • Build the infrastructure to support OPEA projects

  • +
+
+
+

Where can partners see the latest draft of the Conceptual Framework spec?

+

A version of the spec is available in the docs repo in this project

+
+
+

Is there a cost for joining?

+

There is no cost for anyone to join and contribute.

+
+
+

Do I need to be Linux Foundation member to join?

+

Anyone can join and contribute. You don’t need to be a Linux Foundation member.

+
+
+

Where can I report a bug?

+

Vulnerability reports can be sent to info@opea.dev.

+
+
+ + +
+
+ +
+ +
+ +
+

© Copyright 2024-2024 OPEA™, a Series of LF Projects, LLC. + + +Published on Aug 05, 2024. + +

+ + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/latest/framework.html b/latest/framework.html new file mode 100644 index 000000000..9be0d5c51 --- /dev/null +++ b/latest/framework.html @@ -0,0 +1,1075 @@ + + + + + + + Open Platform for Enterprise AI (OPEA) Framework Draft Proposal — OPEA™ 0.8 documentation + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    + + + + +
  • Latest »
  • + +
  • Open Platform for Enterprise AI (OPEA) Framework Draft Proposal
  • + +
  • + View page source +
  • +
+
+
+
+ + + + +
+ +
+

Open Platform for Enterprise AI (OPEA) Framework Draft Proposal

+

Rev 0.5 April 15, 2024

+

Initial draft by Intel. Contacts for content – Ke Ding (ke.ding@intel.com), Gadi Singer (gadi.singer@intel.com)

+

Feedback welcome at info@opea.dev

+
+

1. Summary

+

OPEA (Open Platform for Enterprise AI) is a framework that enables the creation and evaluation of +open, multi-provider, robust and composable GenAI solutions that harness the best innovation across +the ecosystem.

+

OPEA is an ecosystem-wide program within the Linux Foundation Data & AI framework that aims to +accelerate enterprise adoption of GenAI end-to-end solutions and realize business value. OPEA will +simplify the implementation of enterprise-grade composite GenAI solutions, including Retrieval +Augmented Generative AI (RAG). The platform is designed to facilitate efficient integration of secure, +performant, and cost-effective GenAI workflows into business systems and manage its deployments.

+

This platform’s definition will include an architectural blueprint, a comprehensive set of components for +GenAI systems, and a suite of specifications* for both individual components and entire systems. It will +also include tools for building, tuning, and evaluating end-to-end GenAI workflows. These definitions will +address key aspects such as performance, feature set, trustworthiness (security and transparency), and +readiness for enterprise-grade applications. The specifications will also include a set of reference flows +and demos that can be easily reproduced and adopted.

+

Figure 1-1: OPEA’s Core Values

+

Disclaimer – The term ‘specification’ is used throughout this draft whitepaper and appendix as a broad +working term, referring generally to a detailed description of systems and their components. However, it +is important to note that this term might be replaced or updated based on more precise characterization +and applying the Linux Foundation licensing considerations.

+

Figure 1-2 OPEA – proposed Construction and Evaluation Framework for AI Solutions

+

We are now in an era where AI algorithms and models, that were initially developed in research +environments and later introduced into consumer-focused settings, are now transitioning to widespread +enterprise deployment. This transition provides an opportunity for partners to leverage decades of +insights into enterprise-scale computing, security, trustworthiness, and datacenter integration, among +other areas, to accelerate AI adoption and unlock its potential value.

+
+
+

2. Introduction

+

Recently, the practices for developing AI solutions have undergone significant transformation. Instead of +considering AI model (e.g., a GenAI LLM) as the complete solution, these models are now being +integrated into more comprehensive end-to-end AI solutions. These solutions consist of multiple +components, including retrieval subsystems with embedding agents, a Vector Database for efficient +storage and retrieval, and prompt engines, among others. This shift has led to the emergence of +Composition Frameworks (such as LangChain or Haystack), which are used to assemble these +components into end-to-end GenAI flows, like RAG solutions, for the development and deployment of AI +solutions.

+

The ecosystem offers a range of composition frameworks: some are open source (e.g., LangChain and LlamaIndex), while others are closed source and come bundled with professional services (e.g., ScaleAI). Additionally, some are offered by cloud service providers (e.g., AWS) or hardware/software providers (e.g., NVIDIA). However, as of Q2 2024 these represent individual perspectives and offerings for the intricate task of building an end-to-end AI solution.

+
+

2.1 Key capabilities

+

OPEA will offer key capabilities in both the Construction and Evaluation of end-to-end composite GenAI +solutions, that are built with retrieval augmentation. As a construction platform, OPEA will enable +creation of RAG-enabled AI solutions directly or through the use of compositional tools such as +LangChain and Haystack. As an evaluation framework, OPEA will provide the means to assess and grade +end-to-end composite GenAI solutions on aspects derived from four domains – performance, features, +trustworthiness and Enterprise-readiness.

+
+

2.1.1 Construction of GenAI solutions, including retrieval augmentation

+

Composing an end-to-end AI solution (including retrieval augmentation) can be done by combining +models and modules from multiple providers.

+

OPEA will offer or refer to a set of building blocks – models and modules – that can be called in a flow to achieve an AI task or service. The models and modules can be part of the OPEA repository, published in a stable open repository (e.g., Hugging Face), or proprietary/closed source and cleared for use by an OPEA assessment.

+
    +
  • GenAI models – Large Language Models (LLMs), Large Vision Models (LVMs), multimodal models, etc.

  • +
  • Other modules - AI system components (other than LLM/LVM models) including Ingest/Data Processing module, Embedding Models/Services, Vector Databases (aka Indexing or Graph data stores), Prompt Engines, Memory systems, etc.

  • +
+

Each module for the system will be characterized with its expected functionality and attributes. Those +will be evaluated for every particular implementation choice (see following evaluation section). There +will be multiple options offered from various providers for each module and model, to allow for choice +and diversity.

+

This platform consists of a set of compositional capabilities that allow for building custom agents, +customizing AI assistants, and creating a full end-to-end GenAI flow that includes retrieval augmentation +as well as other functionality when needed. The platform will also include or reference tools for fine- +tuning as well as optimization (like quantization assists) to support creation of performant, robust +solutions that can run locally on target enterprise compute environments. Similar to building blocks, the +composition capabilities could be part of OPEA repository, or published in stable open repository (e.g., +Hugging Face) or offered by the ecosystem (like LangChain, LlamaIndex and Haystack).

+

An important part of the compositional offering will be a set of validated reference flows that are ready +for downloading and recreation in the users’ environment. In the multitude of provided ready reference +flows, there will be domain-independent flows (like a RAG flow for language-based Q&A, or a +multimodal flow to interact with one’s images and videos) that were tuned for different HW providers +and settings. There will also be domain-specific flows like financial service end-to-end flow or nutrition +adviser, which are sometimes called microservices.

+

A common visual language is used to depict the components of each reference flow being provided.

+
+
+

2.1.2 Evaluation of GenAI solutions, including retrieval augmentation:

+

OPEA will provide means and services to fully evaluate and grade components and end-to-end GenAI +solutions across four domains – performance, functionality, trustworthiness and enterprise-readiness. +The evaluation can be done on a flow created within OPEA, or created elsewhere but requesting to be +assessed through the platform.

+

Some of the evaluation tools will be part of the OPEA repository, while others will be references to +selected benchmarks offered by the ecosystem.

+

OPEA will offer tests for self-evaluation that can be done by the users. Furthermore, it will have the +engineering setup and staffing to provide evaluations per request.

+

The OPEA evaluations can be viewed at the following levels:

  • Assessment – Detailed tests or benchmarks performed on particular modules or attributes of the end-to-end flow. Assessments will be elaborate and specific, checking the functionality and characteristics specified for that module or flow.

  • Grading – Aggregation of the individual assessments into a grade for each of the four domains – Performance, Features, Trustworthiness, and Enterprise-readiness. The aggregate grade per domain can be L1 Entry Level, L2 Market Level, or L3 Advanced Level.

  • Certification – It has not yet been decided whether certification will be offered as part of OPEA. However, the draft proposal under consideration is to allow for an OPEA Certification determined by ensuring that a minimum of Level 2 grading is achieved in all four domains.
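The aggregation from individual assessments to per-domain grades could be sketched as follows. This is a minimal illustration only: the normalized 0-1 scores and the 0.5/0.8 cut-offs are assumptions for the example, not values defined by the OPEA specification.

```python
# Illustrative sketch: map normalized assessment scores (0.0-1.0) to a
# per-domain grade. The 0.5/0.8 thresholds are assumed, not spec-defined.

DOMAINS = ["performance", "features", "trustworthiness", "enterprise_readiness"]

def grade_domain(scores):
    """Aggregate individual assessment scores into one grade for a domain."""
    avg = sum(scores) / len(scores)
    if avg >= 0.8:
        return "L3"  # Advanced Level
    if avg >= 0.5:
        return "L2"  # Market Level
    return "L1"      # Entry Level

def grade_flow(assessments):
    """assessments: dict mapping each domain to its list of assessment scores."""
    return {domain: grade_domain(assessments[domain]) for domain in DOMAINS}

def certifiable(grades):
    """Draft certification rule from the text: at least L2 in all four domains."""
    return all(grades[d] in ("L2", "L3") for d in DOMAINS)

grades = grade_flow({
    "performance": [0.9, 0.8],
    "features": [0.6, 0.55],
    "trustworthiness": [0.8, 0.9],
    "enterprise_readiness": [0.5, 0.6],
})
print(grades)
print(certifiable(grades))
```

A real grading scheme would weight assessments differently per domain; a plain average is used here only to show the shape of the aggregation.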

Figure 2-1 Key capabilities provided by OPEA


Appendix A of this document is an early draft of the proposed specification and sample reference flows.


3. Framework Components, Architecture and Flow

The OPEA definition (see Appendix A) includes characterization of components of State-of-the-Art (SotA) composite systems, including retrieval-augmentation, and their architecture as a flow and SW stack.

There are six sections in Appendix A that provide a starting point for a more detailed and elaborate joint OPEA definition effort:
  • A1: System Components – List of ingredients that comprise a composed system, along with their key characteristics. Some systems that will be evaluated may include only a subset of these components.

  • A2: SW Architecture – Diagram providing the layering of components in a SW stack.

  • A3: System Flows – Diagram(s) illustrating the flow of end-to-end operation through the relevant components.

  • A4: Select specifications at system and component level.

  • A5: Grading – Grading of systems being evaluated based on performance, features, trustworthiness, and enterprise-grade readiness.

  • A6: Reference Flows – List of reference flows that demonstrate key use cases and allow for downloading and replication for a faster path to creating an instantiation of the flow.

Assumptions for the development of OPEA sections include:

  • OPEA is a blueprint for composition frameworks and is not set to compete with the popular frameworks. It is intended to help assess the pros and cons of various solutions and allow for improved interoperability of components.

  • In production, many customers will likely deploy their own proprietary pipelines.

  • This framework blueprint is complementary and is intended to encourage interoperability of system components as well as the addition of specialized value such as HW-aware optimizations, access to innovative features, and a variety of assistants and microservices.

  • The framework is flexible and allows models and other components to be easily plugged in and replaced. The ability to exchange components is an important factor in the fast progression of the field.

  • It provides an environment to experiment with solution variations – e.g., what is the impact on end-to-end system performance when replacing a generic re-ranking component with a particular provider's re-ranking component?

It should be noted that the final shaping of the framework components, architecture, and flows will be jointly defined by a technical committee as the full OPEA definition and governance structure is established. A regular cadence of updates to the spec is also expected, reflecting the rapidly shifting State-of-the-Art in the space.

4. Assessing GenAI components and flows

One of the important benefits to the ecosystem from the development and broad use of OPEA is a structured set of evaluations that can provide trusted feedback on GenAI flows – whether composed within OPEA, or composed elsewhere with the visibility and access that allow for evaluation. Evaluations can be done by assessing individual components or complete end-to-end GenAI solutions. Evaluations in the OPEA context refer to assessments of individual aspects of a solution – such as its latency or accuracy per a defined suite of tests. Assessments are covered in this section. Grading is an aggregation of assessments and is covered in the next section.

Components and entire end-to-end flows will be evaluated in four domains – performance, features, trustworthiness, and enterprise-readiness.

Performance can be evaluated at the component level – e.g., Vector Database latency over a given large, indexed dataset, or latency and throughput of an LLM model. Moreover, performance needs to be evaluated for end-to-end solutions that perform defined tasks. The term 'performance' refers to aspects of speed (e.g., latency) and capacity (e.g., memory or context size) as well as accuracy of results.

OPEA can utilize existing evaluation specs, like those used by SotA RAG systems, and other standard benchmarks wherever possible (e.g., MMLU). As for functionality, there are benchmarks and datasets available to evaluate particular target functionality such as multi-lingual (like FLORES) or code generation (e.g., Human-Eval).

For evaluating trustworthiness/hallucination safety, the spec will leverage existing benchmarks such as the RGB benchmark and TruthfulQA where possible.

Some assessments of enterprise readiness would include aspects of scalability (how large a dataset the system can handle, size of the vector store, size and type of models), infrastructure readiness (cloud vs. bare metal), and ease of software deployment (any post-OPEA steps required for broad deployment). One of the measures that could be assessed in this category is the overall Cost/TCO of a full end-to-end GenAI flow.

Where reliable benchmarks or tests for aspects of composite GenAI solutions are not freely available, efforts will be made to ensure their creation. As many of the current (early 2024) benchmarks focus on performance and features, there will be an effort to complement those as needed for assessing trustworthiness and enterprise-readiness.

The development of assessments should use learnings from similar evaluations when available – for example, the RAG evaluation reported by Cohere's Nils Reimers, which covers:
  • Human preference

  • Average accuracy of an E2E flow

  • Multi-lingual

  • Long-context "Needle in a Haystack"

  • Domain specific

Assessment development will start with a focus on primary use cases for the RAG flow, such as Open Q&A. This will allow for comparison with common industrial evaluations (see Cohere, GPT-4).

5. Grading Structure

The OPEA evaluation structure refers to specific tests and benchmarks as 'assessments' – see the previous section for details. 'Grading' is the part of OPEA evaluation that aggregates multiple individual assessments into one of three levels in each of the four evaluation domains – performance, features, trustworthiness, and enterprise readiness.

The following draft of a grading system is for illustration and discussion purposes only. A grading system should be defined and deployed based on discussions in the technical review body and any other governance mechanism that will be defined for OPEA.

To ensure that compositional systems are addressing the range of care-abouts for enterprise deployment, the grading system has four categories:
  • Performance – Focused on overall system performance and perf/TCO.

  • Features – Mandatory and optional capabilities of system components.

  • Trustworthiness – Ability to guarantee quality, security, and robustness, taking into account relevant government or other policies.

  • Enterprise Readiness – Ability to be used in production in enterprise environments.

The Performance and Features capabilities are well understood by the communities and industry today, while Trustworthiness and Enterprise Readiness are still at an early stage of assessment and evaluation when it comes to GenAI solutions. Nevertheless, all domains are essential to ensure performant, secure, privacy-aware, robust solutions ready for broad deployment.

The grading system is not intended to add any particular tests or benchmarks; all individual tests are part of the assessments. Rather, the goal of the grading system is to provide an overall rating of the performance, functionality, trustworthiness, and enterprise readiness of a GenAI flow across a multitude of individual assessments. It is expected to provide an abstracted and simplified view of the GenAI flow under evaluation. It will attempt to address two basic questions: what is the level of capabilities of a flow relative to other flows evaluated at that time, and does the flow meet certain necessary requirements (such as for security and enterprise readiness) for robust deployment of GenAI solutions at scale. A grading system establishes a mechanism to evaluate different constructed AI solutions (such as particular RAG flows) in the context of the OPEA framework.

For each category, the assessments will be set with three levels:

  • L1 – Entry Level – Limited capabilities. The solution might be seen as less advanced or performant relative to other solutions assessed for similar tasks. It might encounter issues in deployment if there are deficiencies in trustworthiness or enterprise readiness.

  • L2 – Market – Meets market needs. The solution represents the mid-range of systems being reviewed and assessed. It can be safely deployed in production enterprise environments and is expected to meet prevalent standards on security and transparency.

  • L3 – Advanced – Exceeds average market needs. The solution represents the top range of components or end-to-end GenAI flows being reviewed and assessed at the time. It meets or exceeds all security, privacy, transparency, and deployment-at-scale requirements.

The grading system can be used by GenAI users to ensure that the solution being evaluated meets ecosystem expectations in a field that is moving exceptionally fast. It can highlight exceptional solutions or point out areas of concern. The structured approach across the four domains ensures that the combined learnings of the ecosystem at any given time are reflected in the feedback to prospective users of a particular GenAI solution. Naturally, the goalposts of what is defined as L1/L2/L3 need to be updated on a regular basis as the industry pushes the GenAI State-of-the-Art forward.

Figure 5-1 Overall view of the grading system across four domains

The grading system can play a different role for the providers of models, building blocks (modules), and complete end-to-end GenAI solutions. Providers can get structured and impartial feedback on the strengths and weaknesses of their offering compared with the rest of the market. An articulation of all key areas for enterprise deployment is expected to guide providers toward a more robust and complete delivery and continuous improvement for broad enterprise deployment. It also serves to highlight outstanding solutions, providing them tailwinds as they present and differentiate their offering.

If and when certification becomes part of the framework (discussion and decisions to be made at a later stage), it is assumed that a system needs to be at least at Level 2 in every aspect to be "OPEA Certified". Such certification can increase the confidence of both providers and users that the GenAI solution being evaluated is competitive and ready for broad deployment – stopping short of promising a guarantee of any sort.

The assessment test suites and associated grading will allow ISVs and industry solution adopters to self-test, evaluate, and grade themselves on the various metrics. The test suite will comprise applicable tests/benchmarks currently available in the community; where no standard benchmarks exist, new tests will be developed. For each of these metrics there will be a grading mechanism that maps particular score ranges to L1, L2, or L3 at that time. These ranges will be updated periodically to reflect advancements in the field.

Figure 5-2 illustrates some of the aspects to be evaluated in the four domains. Yellow-highlighted examples show the minimal assessments needed for each of the domains. Blue-highlighted examples show the next level of assessments, indicating higher-level capabilities of the RAG pipeline. The highest levels of assessment are indicated by uncolored text.

Figure 5-2 Capabilities and Testing Phases


6. Reference flows

Reference flows are end-to-end instantiations of use cases within the OPEA framework. They represent a specific selection of interoperable components to create an effective implementation of a GenAI solution. Reference flow documentation and links need to include all the information necessary for users of the framework to recreate and execute the flow, reproducing the results reported for it. The reference flow documentation will provide links to the required components (which may come from multiple providers) and the necessary scripts and other software required to run them.

Several flows will exclusively focus on open models and other components, providing full transparency when necessary. Other flows may include proprietary components that can be called/activated within those flows. However, the components referred to in a reference flow must be accessible to OPEA users – whether they are open source or proprietary, free to use or fee-based.

Reference Flows serve several primary objectives:

  • Demonstrate representative instantiations: Within the OPEA framework, reference flows showcase specific uses and tasks. Given the framework's inherent flexibility, various combinations of components are possible. Reference flows demonstrate how specific paths and combinations can be effectively implemented within the framework.

  • Highlight the framework's potential: By offering optimized reference flows that excel in performance, features, trustworthiness, and enterprise readiness, users can gain insight into what can be achieved. The experience serves as a valuable learning tool toward achieving their AI deployment goals and planning.

  • Facilitate easy deployment: Reference flows are designed to be accessible and easy to instantiate with relatively low effort, allowing users to replicate a functional flow within their environment and make subsequent modifications as needed.

  • Encourage innovation and experimentation: Allow users in the ecosystem to experiment and innovate with a broad set of flows and maximize the value of their end-to-end use cases.

OPEA will deploy and evolve a visualization language to capture the blueprint flows (e.g., a base flow for RAG chat/Q&A) as well as to document the choices made for every reference flow. The visualization has a legend (see Figure 6-1) that illustrates the key choices in the reference flow, such as the sequence of functions or containerization (see Figure 6-2), as well as the implementation choices for particular models and modules (see Appendix A, section A6).

Figure 6-1 Legend for Blueprint and Reference Flows


Figure 6-2 Example of blueprint RAG flow

The Reference Flows section of the specification (Section A6 in Appendix A) provides an initial catalog of reference flows, demonstrating common tasks and diverse combinations of hardware and AI components. As this collection of reference flows is extended, there will be a diverse set of solution providers and variations of HW (Intel, NVIDIA, and others) as well as AI models, modules, and constructions.

Appendix A – Draft OPEA Specifications


Rev 0.1 April 15, 2024

The draft specifications are intended for illustration and discussion purposes. The appendix has six sections:
  • A1: System Components – List of ingredients that comprise a composed system, along with their key characteristics.

  • A2: SW Architecture – Diagram providing the layering of components in a SW stack.

  • A3: System Flows – Diagram(s) illustrating the flow of end-to-end operation through the relevant components.

  • A4: Select specifications at system and component level.

  • A5: Grading – Grading of systems being evaluated based on performance, features, trustworthiness, and enterprise-grade readiness.

  • A6: Reference Flows – List of reference flows that demonstrate key use cases and allow for downloading and replication for a faster path to creating an instantiation of the flow.

This is an early draft of the OPEA framework specification. It provides an initial view of the content and is expected to be substantially expanded in future revisions.

Disclaimer – The term 'specification' is used throughout this draft whitepaper and appendix as a broad working term, referring generally to a detailed description of systems and their components. However, it is important to note that this term might be replaced or updated based on more precise characterization and application of Linux Foundation licensing considerations.

A1: System Components

Components | Description | OSS Examples | Proprietary Examples
--- | --- | --- | ---
Agent framework | Orchestration software for building and deploying workflows combining information retrieval components with LLMs for building AI agents with contextualized information | Langchain, LlamaIndex, Haystack, Semantic Kernel |
Ingest/Data Processing | Software components used to enhance the data that is indexed for retrieval – for example: processing, cleaning, normalization, information extraction, chunking, tokenization, metadata enhancement | NLTK, spaCy, HF Tokenizers, tiktoken, SparkNLP |
Embedding models/service | Models or services that convert text chunks into embedding vectors to be stored in a vector database | HF Transformers, S-BERT | HF TEI, OpenAI, Cohere, GCP, Azure embedding APIs, JinaAI
Indexing/Vector store | Software for indexing information (sparse/vector) and for retrieving it given a query | Elasticsearch, Qdrant, Milvus, ChromaDB, Weaviate, FAISS, Vespa, HNSWLib, SVS, PLAID | Pinecone, Redis
Retrieval/Ranking | A SW component that can re-evaluate the relevancy order of existing contexts | S-BERT, HF Transformers, Bi/Cross-encoders, ColBERT | Cohere
Prompt engine | A component that creates task-specific prompts given queries and contexts, and tracks user sessions (maintains history/memory) | Langchain hub |
Memory | Conversation history in memory and/or a persistent database | Langchain Memory module, vLLM (automatic prefix caching) |
LLM engine/service | LLM inference engine that generates text responses based on given prompts and retrieved contexts | vLLM, Ray, TensorRT-LLM | HF TGI, Deci Infery
LLM Models | Open-source and closed-source models | LLama2-7B/13B, Falcon 40B, Mixtral-7b, Gemma, etc. | LLama2-70B, OpenAI, Cohere, Gemini, etc.
Guardrails | A software component for enforcing compliance, filtering, and safe responses | LLM Guard | Purple Llama, OpenAI safety control, NEMO-Guardrails
Evaluation | Methods to evaluate compliance, performance, accuracy, and error rate of the LLM response | Recall, MAP, MTEB, MTBench, MMLU, TriviaQA, TruthfulQA… |

Figure A1.1 List of key components.


A2: SW Architecture

The OPEA SW architecture supports model selection and data integration across popular user-facing frameworks. It leverages popular agent frameworks (aka orchestration frameworks or AI construction platforms) for developer productivity and availability of platform optimization.

Tuning of the solutions leverages platform optimizations via popular domain frameworks, such as the Hugging Face ecosystem, to reduce developer complexity and provide flexibility across platforms.

Figure A2.1 – OPEA solution stack.


A3: System Flows


Figure A3.1 – Main OPEA system RAG flow.


A4: Select Specifications


Evaluating a composite generative AI system requires a view of end-to-end capabilities as well as assessment of individual components.


A4.1 End-to-end assessment


Following are some examples of assessments addressing the four domains - performance, features, trustworthiness and enterprise readiness.

Performance
  • Overall System Performance
    • Latency (first-token latency, average token latency, streaming vs. non-streaming output)
    • Throughput
    • Overall system performance given a fixed combination of RAG components (a specific vendor instance for each component)
    • For a specific task/domain, the combination of components that gives the best system performance
  • Q&A evaluation (accuracy)
    • Task: Open Q&A
    • Databases: NQ, TriviaQA, and HotpotQA
    • Metric: Average Accuracy
    • Indexing: KILT Wikipedia
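The latency metrics above can be captured with a simple harness around a streaming generation endpoint. A minimal sketch follows; the `generate_stream` stub is a hypothetical stand-in for a real model server, and the token stream is simulated.

```python
import time

def generate_stream(prompt):
    """Hypothetical stand-in for a streaming LLM endpoint; yields tokens."""
    for token in ["The", " answer", " is", " 42", "."]:
        time.sleep(0.001)  # simulate per-token generation work
        yield token

def measure_latency(prompt):
    """Capture first-token latency, average token latency, and throughput."""
    start = time.perf_counter()
    token_times = []
    for _ in generate_stream(prompt):
        token_times.append(time.perf_counter())
    total = token_times[-1] - start
    return {
        "first_token_latency_s": token_times[0] - start,
        "avg_token_latency_s": total / len(token_times),
        "throughput_tok_per_s": len(token_times) / total,
    }

print(measure_latency("What is the answer?"))
```

In an end-to-end assessment, the same harness would wrap the full RAG pipeline rather than a single generator, so retrieval time is included in the first-token latency.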
Features / Functionality
  • Functional
    • Features – multimodal, multi-LLM, multiple embedding model choices, multiple embedding DBs, context length
    • Context Relevance (context precision/recall)
    • Groundedness/faithfulness
    • Answer Relevance
  • Multi-step reasoning
    • Task: 3-shot multi-hop ReAct agents
    • Databases: Wikipedia (HotpotQA), Internet (Bamboogle)
    • Metric: Accuracy
    • Test sets: Reflexion, Ofir Press
  • Multi-lingual
    • Task: Semantic search
    • Search quality
    • Metric: nDCG@10
    • 18 languages
    • Benchmark: MIRACL
  • Multi-lingual
    • Tasks: Multilingual MMLU, machine translation
    • Metrics: Accuracy, BLEU
    • French, Spanish, Italian, German, Portuguese, Japanese, Korean, Arabic, and Chinese
    • Benchmarks: FLORES, MMLU
  • Conversational agent and function calling
    • Task: conversational tool-use and single-turn function-calling capabilities
    • Benchmark 1: Microsoft's ToolTalk
    • Benchmark 2: Berkeley's Function Calling Leaderboard (BFCL)
    • Tool-use metric: Soft success rate
    • Function calls: Function pass rate
  • Human preference on enterprise RAG use cases
    • Domains: Customer support, workplace support (tech), workplace assistant (media), tech FAQ
    • Metric: Win ratio vs. Mixtral
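The nDCG@10 search-quality metric referenced above has a standard definition that can be computed directly. A minimal sketch, assuming graded relevance judgments are available for the retriever's ranked output (the toy relevance values are illustrative):

```python
import math

def dcg(relevances):
    """Discounted cumulative gain for a ranked list of graded relevances."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg_at_k(ranked_relevances, k=10):
    """nDCG@k: DCG of the system ranking divided by DCG of the ideal ranking."""
    ideal = sorted(ranked_relevances, reverse=True)
    ideal_dcg = dcg(ideal[:k])
    if ideal_dcg == 0:
        return 0.0
    return dcg(ranked_relevances[:k]) / ideal_dcg

# Graded relevance of the documents a retriever returned, in rank order:
print(ndcg_at_k([3, 2, 0, 1], k=10))
```

A perfectly ordered ranking scores 1.0; putting relevant documents lower in the list discounts their contribution logarithmically, which is why nDCG rewards ranking quality and not just recall.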
Enterprise readiness
Enterprise readiness assessment involves assessing the following:
  1. Scalability
  2. Production deployability
  3. Updatability
  4. Observability/Debuggability

Scalability is associated with the ability of the RAG system to scale the size/dimensions of different components, as in the following example metrics:

  • Vector DB size
  • Dimensionality of the retriever (the value of K in top-K documents)
  • Maximum context length supported by the generator
  • Parameter size of the generator models
  • Embedding dimension size

Production deployability readiness includes various capabilities such as:

  • Efficient inference serving
  • Integrations with different enterprise systems such as Slack, Workday, SAP, and databases
  • Enterprise-grade RAS capabilities
  • Service Level Agreements (SLAs) on factuality, verifiability, and performance enforceability

Updatability includes capability for:

  • Rolling upgrade
  • Online upgrade
  • Component-level upgrade

Observability/Debuggability includes capability for:

  • Error detection and attribution to a component
  • Early detection of component degradation
  • Trace generation to debug failures (functional and performance)
  • Traceability of each intermediate step (prompts for chained LLMs)
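One minimal way to get the per-step traceability described above is to wrap each pipeline component so that its timing and outcome are recorded. The sketch below is illustrative only: the component names are hypothetical, and a real system would ship trace records to a collector rather than an in-memory list.

```python
import functools
import time

TRACE = []  # in-memory trace log; a real system would export to a collector

def traced(step_name):
    """Decorator recording timing and errors for one pipeline step."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            status = "error"  # assume failure; attribute it to this component
            start = time.perf_counter()
            try:
                result = fn(*args, **kwargs)
                status = "ok"
                return result
            finally:
                TRACE.append({
                    "step": step_name,
                    "status": status,
                    "duration_s": time.perf_counter() - start,
                })
        return inner
    return wrap

@traced("retriever")
def retrieve(query):
    return ["doc1", "doc2"]  # hypothetical retrieval result

@traced("generator")
def generate(query, contexts):
    return f"answer to {query!r} using {len(contexts)} contexts"

print(generate("What is OPEA?", retrieve("What is OPEA?")))
print(TRACE)  # one entry per traced step, with timing and status
```

Because every intermediate step appends a record, failures can be attributed to a specific component and latency regressions detected per step.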

Examples of observability include Databricks Inference Tables, Phoenix Open Inference Traces, or Langsmith observability/monitoring features.

A4.2 Individual Components Assessment

Evaluation of individual components (modules) will include:
  • Data preprocessing pipeline
  • Embedding – quality/storage/processing time
  • Chunker, retriever & re-ranker
  • Generator LLM – quality/latency/context length/reasoning ability/function calling/tool usage
  • Auto evaluation vs. manual evaluation
  • Observability
  • Guardrails
  • Prompting
  • Output generation – structured/grammar/output types (JSON/text)

An early example of the next-level articulation of metrics expected for each major component:

Component Name: Retriever

  • Metric: Normalized Discounted Cumulative Gain@10 with BEIR benchmark datasets or other QA datasets
  • Metric: Context Recall@k
  • Metric: Context Precision@k
  • Metric: Hit Rate
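The context recall, context precision, and hit-rate metrics above have straightforward definitions. A minimal sketch, assuming binary relevance labels per document (the document IDs are illustrative):

```python
def context_precision_at_k(retrieved, relevant, k):
    """Fraction of the top-k retrieved contexts that are relevant."""
    return sum(1 for doc in retrieved[:k] if doc in relevant) / k

def context_recall_at_k(retrieved, relevant, k):
    """Fraction of all relevant contexts that appear in the top-k."""
    top_k = retrieved[:k]
    return sum(1 for doc in relevant if doc in top_k) / len(relevant)

def hit_rate_at_k(queries, k):
    """Share of queries with at least one relevant context in the top-k."""
    hits = sum(1 for retrieved, relevant in queries
               if any(doc in relevant for doc in retrieved[:k]))
    return hits / len(queries)

retrieved = ["d3", "d1", "d7", "d2"]  # retriever output, in rank order
relevant = {"d1", "d2"}               # ground-truth relevant contexts
print(context_precision_at_k(retrieved, relevant, k=4))  # 0.5
print(context_recall_at_k(retrieved, relevant, k=2))     # 0.5
```

These definitions evaluate the retriever in isolation; nDCG@10 additionally weights the rank at which relevant contexts appear.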

Component Name: LLM/Generation

  • Metric: Faithfulness – how factually correct the generated answer is (computed as a ragas metric between 0 and 1)
  • Metric: Answer Relevance – how relevant the generated answer is to the query (computed as a ragas metric between 0 and 1)
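As a rough illustration of the answer-relevance idea only (not the actual ragas implementation, which relies on an LLM and embedding model), one can score lexical overlap between query and answer with cosine similarity over term-count vectors:

```python
import math
from collections import Counter

def cosine_similarity(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def answer_relevance(query, answer):
    """Crude 0-1 relevance proxy via term overlap. A real metric would
    embed both texts with an embedding model instead of counting words."""
    return cosine_similarity(Counter(query.lower().split()),
                             Counter(answer.lower().split()))

print(answer_relevance("what is the capital of France",
                       "the capital of France is Paris"))
```

The point of the sketch is the shape of the metric (a bounded 0-1 score comparing answer to query); production evaluation should use the ragas library or an equivalent embedding-based measure.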

A5: Grading

To ensure that compositional systems are addressing the range of care-abouts for enterprise deployment, the grading system has four categories:
  • Performance – Focused on overall system performance and perf/TCO
  • Features – Mandatory and optional capabilities of system components
  • Trustworthiness – Ability to guarantee quality, security, and robustness
  • Enterprise Ready – Ability to be used in production in enterprise environments

For each category, the assessments will be set with three levels:

  • L1 – Entry Level – Limited capabilities. Solution acceptable for a PoC, but not for production.
  • L2 – Market – Meets market needs. Can be deployed in production.
  • L3 – Advanced – Exceeds market needs.

Part of the recommendation is to have a certification process (if and when certification becomes part of the framework). It is assumed that a system needs to be at least at Level 2 in every aspect to be "OPEA Certified".

A5.1 Performance Grading

Performance grading is based on running a set of vertical-specific end-to-end use cases on the full system and capturing the relevant metrics during the run.
  • E2E/System view
    • Vendors have flexibility to innovate/differentiate their implementations within the black box
  • Running a fixed set of use cases
    • Covering different vertical scenarios
    • Minimum level of accuracy and reliability
  • Input datasets for benchmark
    • Open/publicly available
    • Automatic generation
  • Scale factors
    • Supports different input magnitude sizes
  • Metrics
    • First-token latency, overall latency, throughput, cost, consistency
    • Formula to aggregate metrics into a final score
    • Vertical-specific metrics

Performance

Performance grade is based on a set of 'black box' end-to-end RAG benchmarks based on real use cases. Each solution submitted to the OpenRag alliance will be measured against it. Performance measurements will include latency, throughput, scalability, accuracy, and consistency.

  • Level 1 – Baseline benchmark complete
  • Level 2 – Meets performance levels expected for the bulk of GenAI solutions performing similar benchmarks/tasks
  • Level 3 – Exceeds the performance of most solutions being evaluated at that time; top-tier solutions for the tasks evaluated

Figure A5.1 – Performance Grading


A5.2 Features Grading

Feature grading consists of running functional tests to test system capabilities in a number of different domains. Each domain will have its own score.
  • Interoperability/API
    • Functional tests for each interface
    • Different granularity levels for components
    • Open interfaces for 3rd-party data sources
    • Should enable multiple types of data sources
  • Platform capabilities and AI methods
    • Ingest, inference, fine-tuning
    • GenAI and reinforcement learning
  • User experience
    • Ease of use
    • Management tools – single pane, inter-vendor
    • GUI requirements
    • Developer tools
  • Deployment models
    • Orchestration
    • K8s, hypervisor
  • Compliance
    • Potential certification (if and when it becomes part of the framework) based on functional testing

Features


Features are evaluated for interoperability, platform capabilities, user experience (ease of use), AI methods being applied, and specialized functionality.

  • Level 1 – Single model and access to few data sources; limited data ingest; basic or no development tools; basic UI; bare metal, manual install.
  • Level 2 – Multiple models and access to diverse enterprise data sources; full data ingest; basic fine-tuning; flexible pipelining of modules in the flow; basic agent controls.
  • Level 3 – Natively supports multimodal models and data sources; advanced development tools with SotA fine-tuning and optimization capabilities; leading specialized features.

Figure A5.2 – Feature Grading


A5.3 Trustworthiness Grading

Trustworthiness and responsible AI are still evolving in an operational sense – see NIST's trustworthy and responsible AI work and the EU AI Act. While these efforts evolve, for the interim we propose grading solution trustworthiness along the axes of security, reliability, transparency, and confidence:
  • Transparency

    +
      +
    • Open Source Models and Code. This provides visibility into the actual code running, being able to verify versions and signed binaries.

    • +
    • Open standards, reusing existing standards.

    • +
    • Data sets used in model training, which allows analysis of data distribution and any biases therein. For instance, if a cancer detection model was trained on populations that are very diverse - ethnically (genome), or environments (exposure to carcinogens), it carries with a risk of applicability when used for individuals that are not representative of the training set.

    • +
    • Citing sources/documents used in generating responses, protecting from hallucinations. One of the chief benefits of RAG.

    • +
    • Meeting regulatory requirements such as ISO27001, HIPAA, and FedRAMP as appropriate.

    • +
    +
  • +
  • Security:

    +
      +
    • Role-based access control, segmented access per user-role regardless of same model use. This could be a pre or post processing step that filters out data based on user access to different information. For instance, executive leadership may have access to company revenues, financials and customer lists versus an engineer.

    • +
    • Solutions that run at the minimum necessary process privilege to prevent exploits form escalation of privileges should the application be hacked.

    • +
    • Running in trusted execution environments, that is hardware supported confidential compute environments that protect data in use – providing confidentiality and integrity from privileged and other processes running on the same infrastructure. Valuable particularly in the cloud.

    • +
    • Attesting binaries in use, be it models or software.

    • +
    • Audit logs that indicate when and what updates were applied either to models or other software, including security patches.

    • +
    • Ensuring that results, intermediate and final are persisted only on encrypted storage and shared with end users through secure transport.

    • +
    +
  • +
  • Reliability

    +
      +
    • Provide the same answer, all else remaining the same, when prompts are similar, differing in their use of synonyms.

    • +
    • Returns correct answers, per tests.

    • +
    • Confidence

    • +
    • In question answering scenarios, awareness of the quality and how current/up-to-date data used in RAG and providing that information along with the response helps an end user in determining how confident they can be with a response.

    • +
    • Cites sources for responses. Meta data can also be used to indicate how up-to-date the input information is.

    • +
    • With respect to diagnosis/classification tasks, such as cancer detection, the divergence of the test subject from the training dataset is an indicator of applicability risk, confidence in the response (alluded to in data transparency above).

    • +
    +
  • +
+
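The role-based access control criterion above can be sketched as a post-retrieval filter. This is an illustrative stand-alone example, not part of the OPEA framework; the role names and document labels are hypothetical.

```python
# Hypothetical post-retrieval RBAC filter: each retrieved document carries an
# access label, and a response is assembled only from documents the user's
# role is cleared to see, even though all roles query the same model.
ROLE_CLEARANCE = {
    "executive": {"public", "financials", "customer_lists"},
    "engineer": {"public"},
}

def filter_context(role, documents):
    """Drop retrieved documents the given role may not access."""
    allowed = ROLE_CLEARANCE.get(role, set())
    return [doc for doc in documents if doc["label"] in allowed]

docs = [
    {"text": "Q3 revenue was $12M.", "label": "financials"},
    {"text": "The product manual.", "label": "public"},
]
print([d["text"] for d in filter_context("engineer", docs)])   # only public docs
print([d["text"] for d in filter_context("executive", docs)])  # both documents
```

The same shape works as a pre-processing step by filtering which documents are eligible for retrieval in the first place.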

Trustworthiness

+

Evaluating transparency, privacy protection and security aspects

  • Level 1 – Documentation of the aspects called for in the trustworthiness domain

  • Level 2 – Supports role-based access controls: the information being accessed/retrieved is available based on approval for the user (even if all users access the same model)

  • Level 3 – Supports security features (e.g., running in Confidential Computing / Trusted Execution Environments); supports attestation of the models being run; full open-source transparency on the pre-training dataset, weights, and fine-tuning data/recipes

Figure A5.3 – Trustworthiness Grading

+
+
+

A5.4 Enterprise-Ready Grading

+

Grading enterprise-readiness consists of evaluating the ability of the overall solution to be deployed in production in an enterprise environment. The following criteria will be taken into account:

  • Ability to have on-prem and cloud deployments

    • At least two types of solution instances (on-premises installation, cloud, hybrid option)

    • Cloud/edge-native readiness (refer to CNCF process/guidelines)

  • Security-ready for enterprise

    • Multi-level access control and response (including the ability to integrate with internal tools)

    • Data and model protection (e.g., including GDPR)

    • Lifecycle management, including security updates, bug fixes, etc.

    • Solutions packaged as containerized applications that do not run as root or have more capabilities than necessary, following OWASP container best practices.

    • Ensure that by-products/interim results, if saved to disk, are encrypted first.

  • Quality assurance

    • Accuracy and uncertainty metrics for domain-specific enterprise tasks

    • Documentation

  • High availability

    • Replication and data/instance protection

    • Resiliency: time to relaunch an instance when burned down to zero.

    • Provides support and instrumentation for enterprise 24/7 support

  • Licensing model and SW distribution

    • Scalable from small to large customers

    • Ability to customize for specific enterprise needs

Enterprise Readiness: must first meet minimums across performance, features, and trustworthiness

  • Level 1 – Reference design and deployment guide

  • Level 2 – Output ready for enterprise deployment (no post-OPEA steps needed); containerized, with Kubernetes (K8s) support; generally robust (but not guaranteed) for production deployment at scale

  • Level 3 – Generates sophisticated monitoring and instrumentation for the enterprise deployment environment; high resiliency, meeting a fast time to relaunch an instance; allows for Level 2 plus a 24/7 support mode out of the box
+

Figure A5.4 – Enterprise-Ready Grading

+
+
+
+

A6: Reference Flows

+

This section includes descriptions of reference flows that will be available for loading and reproducing with minimal effort.

+

Reference flows serve four primary objectives:

  • Demonstrate representative instantiations: Within the OPEA framework, reference flows showcase specific uses and tasks. Given the framework's inherent flexibility, many combinations of components are possible. Reference flows demonstrate how specific paths and combinations can be effectively implemented within the framework.

  • Highlight the framework's potential: By offering optimized reference flows that excel in performance, features, trustworthiness, and enterprise readiness, users can gain insight into what can be achieved. These serve as valuable learning tools for users' AI deployment goals and planning.

  • Facilitate easy deployment: Reference flows are designed to be accessible and easy to instantiate with relatively low effort, allowing users to replicate a functional flow within their environment and then modify it as needed.

  • Encourage innovation and experimentation: Allow users in the ecosystem to experiment and innovate with a broad set of flows and maximize the value for their end-to-end use cases.
+

Current examples of reference flows are provided for illustration purposes. The set of reference flows is expected to grow and cover various combinations of HW and SW/AI components from multiple providers.

+

The reference flow descriptions need to make clear what can be recreated, and how, so that results can be reproduced in an OPEA user's setting. All reference flows have a visualization that clarifies which components are being instantiated and how they are connected in the flow. The graphics legend described in Figure A6.1 will be used for all reference flow depictions.

+

Figure A6.1 - Reference Design Flows Visualization - legend

+
+

A6.1 – Xeon + Gaudi2 LLM RAG flow for Chat QnA

+

A reference flow that illustrates an enterprise LLM RAG flow running on Xeon (GNR) with a vector database and an embedding model, and with a Gaudi2 serving backend for LLM model inference.

+

The reference flow demonstrates a RAG application that provides an AI assistant experience with the capability of retrieving information from an external source to enhance the context provided to an LLM. The AI assistant is given access to an external knowledge base consisting of text and PDF documents and web pages available via direct URL download. The flow enables users to interact with LLMs and query information that is unknown to the LLMs or, for example, comes from proprietary data sources.

+

The reference flow consists of the following detailed process: a data store is used by a retriever module to fetch relevant information given a query from the user. The query and external data are stored in an encoded vector format that allows for enhanced semantic search. The retriever module encodes the query and provides the prompt processor with the retrieved context and the query to create an enhanced prompt for the LLM. The LLM receives the enhanced prompt and generates a grounded and correct response to the user.
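The retrieve-augment-generate process above can be sketched minimally in Python. The encoder, vector store, and retrieval metric here are trivial stand-ins (not TEI, Redis, or TGI); only the data flow matches the description.

```python
# Minimal retrieve-augment-generate sketch. A real deployment would use an
# embedding model, a vector database, and an LLM serving backend; each
# component here is a toy stand-in that only illustrates the data flow.

def embed(text):
    """Toy 'encoder': a set of lowercase words, standing in for a vector."""
    return set(text.lower().split())

def similarity(a, b):
    """Jaccard overlap, standing in for cosine similarity on embeddings."""
    return len(a & b) / len(a | b) if a | b else 0.0

documents = [
    "Nike revenue in 2023 was 51.2 billion dollars.",
    "The Gaudi2 accelerator targets deep learning workloads.",
]
index = [(embed(d), d) for d in documents]  # the 'vector database'

def retrieve(query, k=1):
    q = embed(query)
    ranked = sorted(index, key=lambda item: similarity(q, item[0]), reverse=True)
    return [doc for _, doc in ranked[:k]]

def build_prompt(query, context):
    return f"Answer using only this context:\n{context}\nQuestion: {query}"

query = "What was Nike's revenue in 2023?"
prompt = build_prompt(query, "\n".join(retrieve(query)))
print(prompt)  # the enhanced prompt that would be sent to the LLM
```

Because the prompt is grounded in retrieved context, the generator can answer questions whose facts never appeared in its training data.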

+

The flow contains the following components:

  • A data ingest flow that uses an embedding model serving platform (TEI) and an embedding model (BGE-base) for encoding text and queries into semantic representations (vectors), which are stored in an index (Redis vector database); both run on Intel Gen6 Xeon (GNR) for storing and retrieving data.

  • An LLM inference serving flow utilizing TGI-Gaudi for LLM model serving on the Gaudi2 platform, which is used for generating answers from prompts that combine relevant documents retrieved from the Redis vector database with the user query.

  • An orchestration framework based on LangChain that initializes a pipeline with the components above and orchestrates the data processing from the user (query), through text encoding, retrieval, and prompt generation, to LLM inference.

A complete reference implementation of this flow is available in the ChatQnA example in Intel's GenAI examples repository.

+

Figure A6-1.2 Xeon + Gaudi2 LLM RAG flow for Chat QnA

+

A demo user interface is shown below; it also shows the difference between responses with and without RAG.

+

Figure A6-1.3 Xeon + Gaudi2 LLM RAG flow for Chat QnA – demo screen

+
+
+

A6.2 - Multimodal Chat Over Images and Videos

+

This reference flow demonstrates a multimodal RAG pipeline which utilizes Intel Labs' BridgeTower vision-language model for indexing and LLaVA for inference, both running on Intel Gaudi AI accelerators. The use case for this reference flow is enabling an AI chat assistant to retrieve and comprehend multimodal context documents such as images and videos. For example, a user may wish to ask an AI assistant questions which require reasoning over images and videos stored on their PC. This solution enables such capabilities by retrieving images and video frames relevant to a user's query and providing them as extra context to a Large Vision-Language Model (LVLM), which then answers the user's question.

+

Specifically, this reference solution takes images and video files as input. The inputs are encoded in a joint multimodal embedding space by BridgeTower, which is an open-source vision-language transformer. Detailed instructions and documentation for this model are available via Hugging Face. The multimodal embeddings are then indexed and stored in a Redis vector database.

+

At inference time, a user's query is embedded by BridgeTower and used to retrieve the most relevant images and videos from the vector database. The retrieved contexts are then appended to the user's query and passed to LLaVA to generate an answer. Detailed instructions and documentation for the LLaVA model are available via Hugging Face.
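The retrieval step over a joint embedding space can be sketched as a top-k cosine-similarity search. The vectors and media identifiers below are made up; a real system would compute the embeddings with the vision-language model.

```python
import math

# Toy joint embedding space: text queries, images, and video frames share the
# same vector space, as with BridgeTower. Vectors here are hypothetical.
media_index = [
    {"id": "vacation.jpg", "type": "image", "vec": [0.9, 0.1, 0.0]},
    {"id": "lecture.mp4#t=42", "type": "video_frame", "vec": [0.1, 0.9, 0.2]},
    {"id": "receipt.png", "type": "image", "vec": [0.0, 0.2, 0.9]},
]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def retrieve(query_vec, k=2):
    """Top-k media items closest to the query in the joint embedding space."""
    ranked = sorted(media_index, key=lambda m: cosine(query_vec, m["vec"]), reverse=True)
    return ranked[:k]

# Pretend the user's question embedded to this vector:
hits = retrieve([0.8, 0.2, 0.1])
print([h["id"] for h in hits])  # these items would be passed to the LVLM as context
```

The retrieved items, whatever their modality, are then handed to the LVLM together with the query.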

+

This reference flow requires Intel Gaudi AI Accelerators for the embedding model and for generating responses with the LVLM. All other components of the reference flow can be executed on CPU. A complete end-to-end open-source implementation of this reference flow is available via Multimodal Cognitive AI.

+

Figure A6-2.1 Multimodal Chat Over Images and Videos Reference Flow

+

Below is an illustration of a user interface constructed for this reference flow, which was showcased at Intel Vision:

+

Figure A6.2.2 Multimodal Chat Over Images and Videos – demo screen

+
+
+

A6.3 – Optimized Text and Multimodal RAG pipeline

+

The reference flow below demonstrates an optimized text and multimodal RAG pipeline which can be leveraged by enterprise customers on Intel Xeon processors.

+

This flow demonstrates a RAG inference flow on unstructured data and images with 4th and 5th Gen Intel Xeon processors using Haystack. It is based on fastRAG for optimized retrieval.

+

The first step is to create an index for the vector database (Qdrant in this case). For unstructured text data, sentence-transformers is used; for images, BridgeTower is used to encode the inputs.

+

Once the vector database is set up, the next step is to deploy the inference chat. The LLMs used for inference are Llama-2-7b-chat-hf and Llama-2-13b-chat-hf, and the LMM is LLaVA.

+

The diagram below shows the end-to-end flow for this optimized text and multimodal chat with RAG.

+

Figure A6-3.1 Optimized Text and Multimodal RAG pipeline Reference Flow

+

Below is a visual snapshot of the chat implemented using this flow. It shows how the RAG-enabled chatbot in Figure A6-3.3 improves the response for a Super Bowl query over the non-RAG implementation in Figure A6-3.2.

+

Figure A6-3.2: Non-RAG chatbot: Super Bowl Query

+

Figure A6-3.3: RAG enabled chatbot - Super Bowl query

+
+
+
+
+ + +
+
+ +
+ +
+ +
+

© Copyright 2024 OPEA™, a Series of LF Projects, LLC. Published on Aug 05, 2024.

+ + + +
+
+
+
+
GenAI-microservices-connector(GMC) Installation

+

This document will introduce the GenAI Microservices Connector (GMC) and its installation. It will then use the ChatQnA pipeline as a use case to demonstrate GMC’s functionalities.

+
+

GenAI-microservices-connector(GMC)

+

GMC can be used to compose and adjust GenAI pipelines dynamically on Kubernetes. It can leverage the microservices provided by GenAIComps and external services to compose GenAI pipelines. External services might be running in a public cloud or on-prem: just provide a URL and access details such as an API key, and ensure there is network connectivity. GMC also allows users to adjust the pipeline on the fly, such as switching to a different Large Language Model (LLM) or adding new functions into the chain (like adding guardrails). GMC supports different types of steps in the pipeline: sequential, parallel, and conditional. For more information: https://github.com/opea-project/GenAIInfra/tree/main/microservices-connector
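The sequential and conditional step semantics can be sketched in a few lines of Python. This is only an illustration of the step types named above, not GMC's actual controller logic or CRD schema; the microservice stand-ins are hypothetical.

```python
# Toy orchestrator illustrating GMC-style step types: each step is a callable
# transforming a request dict; pipelines run steps in sequence or pick a
# branch based on a condition.
def sequential(*steps):
    def run(data):
        for step in steps:
            data = step(data)
        return data
    return run

def conditional(predicate, if_true, if_false):
    def run(data):
        return if_true(data) if predicate(data) else if_false(data)
    return run

# Hypothetical microservice stand-ins:
embed = lambda d: {**d, "vector": [len(d["text"])]}
retrieve = lambda d: {**d, "context": "retrieved docs"}
guardrail = lambda d: {**d, "text": d["text"].replace("secret", "[redacted]")}
llm = lambda d: {**d, "answer": f"answer using {d.get('context', 'no context')}"}

pipeline = sequential(
    guardrail,
    conditional(lambda d: d.get("use_rag", False),
                sequential(embed, retrieve),  # RAG branch
                lambda d: d),                 # pass-through branch
    llm,
)
print(pipeline({"text": "what is the revenue?", "use_rag": True})["answer"])
```

Swapping a step, e.g. pointing `llm` at a different model endpoint, changes the pipeline without touching the other steps, which is the on-the-fly adjustment GMC provides declaratively on Kubernetes.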

+
+
+

Install GMC

+

Prerequisites

+
    +
  • For the ChatQnA example, ensure you have a running Kubernetes cluster with at least 16 CPUs, 32GB of memory, and 100GB of disk space. To install a Kubernetes cluster refer to: "Kubernetes installation"

  • +
+

Download the GMC github repository

+
git clone https://github.com/opea-project/GenAIInfra.git && cd GenAIInfra/microservices-connector
+
+
+

Build and push your image to the location specified by CTR_IMG:

+
make docker.build docker.push CTR_IMG=<some-registry>/gmcmanager:<tag>
+
+
+

NOTE: This image will be published to the personal registry you specified, and your working environment must have access to pull images from it. Make sure you have the proper permissions on the registry if the above commands don't work.

+

Install GMC CRD

+
kubectl apply -f config/crd/bases/gmc.opea.io_gmconnectors.yaml
+
+
+

Get related manifests for GenAI Components

+
mkdir -p $(pwd)/config/manifests
+cp $(dirname $(pwd))/manifests/ChatQnA/*.yaml -p $(pwd)/config/manifests/
+
+
+

Copy GMC router manifest

+
cp $(pwd)/config/gmcrouter/gmc-router.yaml -p $(pwd)/config/manifests/
+
+
+

Create Namespace for gmcmanager deployment

+
export SYSTEM_NAMESPACE=system
+kubectl create namespace $SYSTEM_NAMESPACE
+
+
+

NOTE: Use the exact same SYSTEM_NAMESPACE value when deploying gmc-manager.yaml and gmc-manager-rbac.yaml.

+

Create ConfigMap for GMC to hold GenAI Components and GMC Router manifests

+
kubectl create configmap gmcyaml -n $SYSTEM_NAMESPACE --from-file $(pwd)/config/manifests
+
+
+

NOTE: The configmap name gmcyaml is defined in the gmcmanager deployment spec. Modify it accordingly if you want to use a different name for the configmap.

+

Install GMC manager

+
kubectl apply -f $(pwd)/config/rbac/gmc-manager-rbac.yaml
+kubectl apply -f $(pwd)/config/manager/gmc-manager.yaml
+
+
+

Check the installation result

+
kubectl get pods -n system
+NAME                              READY   STATUS    RESTARTS   AGE
+gmc-controller-78f9c748cb-ltcdv   1/1     Running   0          3m
+
+
+
+
+

Use GMC to compose a chatQnA Pipeline

+

A sample for chatQnA can be found at config/samples/chatQnA_xeon.yaml

+

Deploy chatQnA GMC custom resource

+
kubectl create ns chatqa
+kubectl apply -f $(pwd)/config/samples/chatQnA_xeon.yaml
+
+
+

GMC will reconcile the chatQnA custom resource and get all related components/services ready

+
kubectl get service -n chatqa
+
+
+

Check GMC chatQnA custom resource to get access URL for the pipeline

+
$ kubectl get gmconnectors.gmc.opea.io -n chatqa
+NAME     URL                                                      READY     AGE
+chatqa   http://router-service.chatqa.svc.cluster.local:8080      8/0/8     3m
+
+
+

Deploy one client pod for testing the chatQnA application

+
kubectl create deployment client-test -n chatqa --image=python:3.8.13 -- sleep infinity
+
+
+

Access the pipeline using the above URL from the client pod

+
export CLIENT_POD=$(kubectl get pod -n chatqa -l app=client-test -o jsonpath={.items..metadata.name})
+export accessUrl=$(kubectl get gmc -n chatqa -o jsonpath="{.items[?(@.metadata.name=='chatqa')].status.accessUrl}")
+kubectl exec "$CLIENT_POD" -n chatqa -- curl $accessUrl  -X POST  -d '{"text":"What is the revenue of Nike in 2023?","parameters":{"max_new_tokens":17, "do_sample": true}}' -H 'Content-Type: application/json'
+
+
+
+

Kubernetes Installation using AWS EKS Cluster

+

In this document, we'll install Kubernetes v1.30 using an AWS EKS cluster.

+

There are two ways to create a new Kubernetes cluster with nodes in AWS EKS:

+ +

In this document, we’ll introduce the “AWS Management Console and AWS CLI” method.

+
+

Prerequisites

+

Before starting this tutorial, you must install and configure the following tools and resources that you need to create and manage an Amazon EKS cluster.

+ +
+
+

Create AWS EKS Cluster in AWS Console

+

You can refer to the YouTube video that demonstrates the steps to create an EKS cluster in the AWS console: +https://www.youtube.com/watch?v=KxxgF-DAGWc

+

Alternatively, you can refer to the AWS documentation directly: “AWS Management Console and AWS CLI”

+
+
+

Uploading images to an AWS Private Registry

+

There are several reasons why you might not want to upload your images to a public image repository like Docker Hub. You can upload your image to an AWS private registry using the following steps:

+
    +
  1. Create a new ECR repository (if not already created):

  2. +
+

An Amazon ECR private repository contains your Docker images, Open Container Initiative (OCI) images, and OCI compatible artifacts. More information about Amazon ECR private repository: https://docs.aws.amazon.com/AmazonECR/latest/userguide/Repositories.html

+
aws ecr create-repository --repository-name my-app-repo --region <region> 
+
+
+

Replace my-app-repo with your desired repository name and <region> with your AWS region (e.g., us-west-1).

+
    +
  2. Authenticate Docker to Your ECR Registry:

  2. +
+
aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <account_id>.dkr.ecr.<region>.amazonaws.com 
+
+
+

Replace <region> with your AWS region and <account_id> with your AWS account ID.

+
    +
  3. Build Your Docker Image:

  2. +
+
docker build -t my-app:<tag> .
+
+
+
    +
  4. Tag your Docker image so that it can be pushed to your ECR repository:

  2. +
+
docker tag my-app:<tag> <account_id>.dkr.ecr.<region>.amazonaws.com/my-app-repo:<tag>
+
+
+

Replace <account_id> with your AWS account ID, <region> with your AWS region, and my-app-repo with your repository name.

+
    +
  5. Push your Docker image to the ECR repository with this command:

  2. +
+
docker push <account_id>.dkr.ecr.<region>.amazonaws.com/my-app-repo:latest
+
+
+
+
+ + +
+
+ +
+ +
+ +
+


Kubernetes installation demo using kubeadm

+

In this demo, we'll install Kubernetes v1.29 using the official kubeadm on a 2-node cluster.

+
+

Node configuration

hostname      ip address            Operating System
k8s-master    192.168.121.35/24     Ubuntu 22.04
k8s-worker    192.168.121.133/24    Ubuntu 22.04

These 2 nodes need the following proxy settings to access the internet:

  • http_proxy="http://proxy.fake-proxy.com:911"

  • https_proxy="http://proxy.fake-proxy.com:912"

We assume these 2 nodes have been set up correctly with the corresponding proxy, so that we can access the internet both in a bash terminal and via the apt repository.

+
+
+

Step 0. Clean up the environment

+

If you have previously installed Kubernetes or any other container runtime (e.g., Docker, containerd) on either of the above 2 nodes, please make sure you have cleaned those up first.

+

If there is any previous Kubernetes installed on any of these nodes by kubeadm, please refer to the listed steps to tear down the Kubernetes first.

+

If there is any previous Kubernetes installed on any of these nodes by kubespray, please refer to kubespray doc to clean up the Kubernetes first.

+

Once Kubernetes is torn down or cleaned up, please run the following command on all the nodes to remove the relevant packages:

+
sudo apt-get purge docker docker-engine docker.io containerd runc containerd.io kubeadm kubectl kubelet
+sudo rm -r /etc/cni /etc/kubernetes /var/lib/kubelet /var/run/kubernetes /etc/containerd /etc/systemd/system/containerd.service.d /etc/default/kubelet
+
+
+
+
+

Step 1. Install relevant components

+

Run the following on all the nodes:

+
    +
  1. Export proxy settings in bash

  2. +
+
export http_proxy="http://proxy.fake-proxy.com:911"
+export https_proxy="http://proxy.fake-proxy.com:912"
+# Please make sure you've added all the node's ip addresses into the no_proxy environment variable
+export no_proxy="localhost,127.0.0.1,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,192.168.121.35,192.168.121.133"
+
+
+
    +
  2. Configure system settings

  2. +
+
# Disable swap
+sudo swapoff -a
+sudo sed -i "s/^\(.* swap \)/#\1/g" /etc/fstab
+# load kernel module for containerd
+cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
+overlay
+br_netfilter
+EOF
+sudo modprobe overlay
+sudo modprobe br_netfilter
+# Enable IPv4 packet forwarding
+cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
+net.ipv4.ip_forward = 1
+net.bridge.bridge-nf-call-iptables = 1
+net.bridge.bridge-nf-call-ip6tables = 1
+EOF
+sudo sysctl --system
+
+
+
    +
  3. Install containerd CRI and relevant components

  2. +
+
# You may change the component version if necessary
+CONTAINERD_VER="1.7.18"
+RUNC_VER="1.1.12"
+CNI_VER="1.5.0"
+NERDCTL_VER="1.7.6"
+BUILDKIT_VER="0.13.2"
+
+#Install Runc
+wget https://github.com/opencontainers/runc/releases/download/v${RUNC_VER}/runc.amd64
+sudo install -m 755 runc.amd64 /usr/local/sbin/runc
+rm -f runc.amd64
+
+#Install CNI
+sudo mkdir -p /opt/cni/bin
+wget -c https://github.com/containernetworking/plugins/releases/download/v${CNI_VER}/cni-plugins-linux-amd64-v${CNI_VER}.tgz -qO - | sudo tar xvz -C /opt/cni/bin
+
+#Install Containerd
+wget -c https://github.com/containerd/containerd/releases/download/v${CONTAINERD_VER}/containerd-${CONTAINERD_VER}-linux-amd64.tar.gz -qO - | sudo tar xvz -C /usr/local
+sudo mkdir -p /usr/local/lib/systemd/system/containerd.service.d
+sudo -E wget https://raw.githubusercontent.com/containerd/containerd/main/containerd.service -qO /usr/local/lib/systemd/system/containerd.service
+cat <<EOF | sudo tee /usr/local/lib/systemd/system/containerd.service.d/http-proxy.conf
+[Service]
+Environment="HTTP_PROXY=${http_proxy}"
+Environment="HTTPS_PROXY=${https_proxy}"
+Environment="NO_PROXY=${no_proxy}"
+EOF
+sudo mkdir -p /etc/containerd
+sudo rm -f /etc/containerd/config.toml
+containerd config default | sudo tee /etc/containerd/config.toml
+sudo sed -i "s/SystemdCgroup = false/SystemdCgroup = true/g" /etc/containerd/config.toml
+sudo systemctl daemon-reload
+sudo systemctl enable --now containerd
+sudo systemctl restart containerd
+
+#Install nerdctl
+wget -c https://github.com/containerd/nerdctl/releases/download/v${NERDCTL_VER}/nerdctl-${NERDCTL_VER}-linux-amd64.tar.gz -qO - | sudo tar xvz -C /usr/local/bin
+
+#You may skip buildkit installation if you don't need to build container images.
+#Install buildkit
+wget -c https://github.com/moby/buildkit/releases/download/v${BUILDKIT_VER}/buildkit-v${BUILDKIT_VER}.linux-amd64.tar.gz -qO - | sudo tar xvz -C /usr/local
+sudo mkdir -p /etc/buildkit
+cat <<EOF | sudo tee /etc/buildkit/buildkitd.toml
+[worker.oci]
+  enabled = false
+[worker.containerd]
+  enabled = true
+  # namespace should be "k8s.io" for Kubernetes (including Rancher Desktop)
+  namespace = "default"
+EOF
+sudo mkdir -p /usr/local/lib/systemd/system/buildkit.service.d
+cat <<EOF | sudo tee /usr/local/lib/systemd/system/buildkit.service.d/http-proxy.conf
+[Service]
+Environment="HTTP_PROXY=${http_proxy}"
+Environment="HTTPS_PROXY=${https_proxy}"
+Environment="NO_PROXY=${no_proxy}"
+EOF
+sudo -E wget https://raw.githubusercontent.com/moby/buildkit/v${BUILDKIT_VER}/examples/systemd/system/buildkit.service -qO /usr/local/lib/systemd/system/buildkit.service
+sudo -E wget https://raw.githubusercontent.com/moby/buildkit/v${BUILDKIT_VER}/examples/systemd/system/buildkit.socket -qO /usr/local/lib/systemd/system/buildkit.socket
+sudo systemctl daemon-reload
+sudo systemctl enable --now buildkit
+sudo systemctl restart buildkit
+
+
+
    +
  4. Install kubeadm and related components

  2. +
+
# You may change the component version if necessary
+K8S_VER="1.29"
+
+#Install kubeadm/kubectl/kubelet
+sudo apt-get update
+sudo apt-get install -y apt-transport-https ca-certificates curl gpg
+sudo mkdir -p -m 755 /etc/apt/keyrings
+curl -fsSL https://pkgs.k8s.io/core:/stable:/v${K8S_VER}/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg --yes
+echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v${K8S_VER}/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list
+sudo apt-get update
+sudo apt-get install -y kubelet kubeadm kubectl
+
+
+
+
+

Step 2. Create the k8s cluster

+
    +
  1. (optional) Install helm v3: on node k8s-master, run the following commands:

  2. +
+
#You may skip helm v3 installation if you don't plan to use helm
+curl https://baltocdn.com/helm/signing.asc | sudo apt-key add -
+echo "deb https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
+sudo apt-get update
+sudo apt-get install -y helm
+
+
+
    +
  2. Initialize the Kubernetes control-plane node: on node k8s-master, run the following commands:

  2. +
+
POD_CIDR="10.244.0.0/16"
+sudo -E kubeadm init --pod-network-cidr "${POD_CIDR}"
+
+
+

Once it succeeds, you'll see kubeadm output such as the following. Please record the kubeadm join command line for later use.

+
Your Kubernetes control-plane has initialized successfully!
+
+To start using your cluster, you need to run the following as a regular user:
+
+ mkdir -p $HOME/.kube
+ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
+ sudo chown $(id -u):$(id -g) $HOME/.kube/config
+
+Alternatively, if you are the root user, you can run:
+
+ export KUBECONFIG=/etc/kubernetes/admin.conf
+
+You should now deploy a pod network to the cluster.
+Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
+ https://kubernetes.io/docs/concepts/cluster-administration/addons/
+
+Then you can join any number of worker nodes by running the following on each as root:
+
+kubeadm join 192.168.121.35:6443 --token 26tg15.km2ru94h9ht9h6ou \
+       --discovery-token-ca-cert-hash sha256:123f3f8ebaf62f8dfc4542360e5103842408a6cdf630af159e2abc260201ba99
+
+
+
    +
  3. Create kubectl configuration for a regular user: on node k8s-master, run the following commands:

  2. +
+
mkdir -p $HOME/.kube
+sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
+sudo chown $(id -u):$(id -g) $HOME/.kube/config
+# install bash-completion for kubectl
+sudo apt-get install -y bash-completion
+kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl
+
+
+
    +
  4. Install the Kubernetes CNI Calico: on node k8s-master, run the following commands:

  2. +
+
# Please set correct NODE_CIDR based on your node ip address.
+# In this example, because both nodes are in 192.168.121.0/24 subnet,
+# we set NODE_CIDR accordingly.
+NODE_CIDR="192.168.121.0/24"
+# You may change the component version if necessary
+CALICO_VER="3.28.0"
+kubectl create -f "https://raw.githubusercontent.com/projectcalico/calico/v${CALICO_VER}/manifests/tigera-operator.yaml"
+sleep 10
+cat <<EOF | kubectl create -f -
+# This section includes base Calico installation configuration.
+# For more information, see: https://docs.tigera.io/calico/latest/reference/installation/api#operator.tigera.io/v1.Installation
+apiVersion: operator.tigera.io/v1
+kind: Installation
+metadata:
+  name: default
+spec:
+  # Configures Calico networking.
+  calicoNetwork:
+    ipPools:
+    - name: default-ipv4-ippool
+      blockSize: 26
+      cidr: ${POD_CIDR}
+      encapsulation: VXLANCrossSubnet
+      natOutgoing: Enabled
+      nodeSelector: all()
+    nodeAddressAutodetectionV4:
+      cidrs: ["${NODE_CIDR}"]
+---
+# This section configures the Calico API server.
+# For more information, see: https://docs.tigera.io/calico/latest/reference/installation/api#operator.tigera.io/v1.APIServer
+apiVersion: operator.tigera.io/v1
+kind: APIServer
+metadata:
+  name: default
+spec: {}
+EOF
+
+
+
    +
  5. Join Kubernetes worker nodes: on node k8s-worker, run the following commands:

  2. +
+
# run the kubeadm join command recorded from the kubeadm init output
+sudo kubeadm join 192.168.121.35:6443 --token 26tg15.km2ru94h9ht9h6ou --discovery-token-ca-cert-hash sha256:123f3f8ebaf62f8dfc4542360e5103842408a6cdf630af159e2abc260201ba99
+
+
+
    +
  6. On the Kubernetes master node, verify that all nodes have joined successfully:

  2. +
+

Run the command kubectl get pod -A to make sure all pods are in 'Running' status. If any pod is not yet 'Running', retry the command; it can take several minutes for all the pods to become ready.

+

Possible output of pod status could be something like

+
vagrant@k8s-master:~$ kubectl get pod -A
+NAMESPACE          NAME                                       READY   STATUS    RESTARTS   AGE
+calico-apiserver   calico-apiserver-59c8dc5bff-ff9vs          1/1     Running   0          3m15s
+calico-apiserver   calico-apiserver-59c8dc5bff-zblxr          1/1     Running   0          3m15s
+calico-system      calico-kube-controllers-596b8f9f7d-68nnp   1/1     Running   0          5m19s
+calico-system      calico-node-gcng6                          1/1     Running   0          5m20s
+calico-system      calico-node-xlwsb                          1/1     Running   0          2m7s
+calico-system      calico-typha-65f5745579-l29v8              1/1     Running   0          5m20s
+calico-system      csi-node-driver-q5gmm                      2/2     Running   0          2m7s
+calico-system      csi-node-driver-xrhw5                      2/2     Running   0          5m19s
+kube-system        coredns-76f75df574-5z57n                   1/1     Running   0          25m
+kube-system        coredns-76f75df574-88pkk                   1/1     Running   0          25m
+kube-system        etcd-k8s-master                            1/1     Running   0          25m
+kube-system        kube-apiserver-k8s-master                  1/1     Running   0          25m
+kube-system        kube-controller-manager-k8s-master         1/1     Running   0          25m
+kube-system        kube-proxy-jbd6r                           1/1     Running   0          2m7s
+kube-system        kube-proxy-lrgb6                           1/1     Running   0          25m
+kube-system        kube-scheduler-k8s-master                  1/1     Running   0          25m
+tigera-operator    tigera-operator-76c4974c85-lx79h           1/1     Running   0          10m
+
+
+

Run the command kubectl get node to make sure all nodes are in ‘Ready’ status. The output should look something like:

+
vagrant@k8s-master:~$ kubectl get node
+NAME          STATUS   ROLES           AGE     VERSION
+k8s-master    Ready    control-plane   31m     v1.29.6
+k8s-worker1   Ready    <none>          7m31s   v1.29.6
+
+
+
+
+

Step 3 (optional) Reset Kubernetes cluster

+

In some cases, you may want to reset the Kubernetes cluster, for example if commands after kubeadm init fail and you want to reinstall Kubernetes. See the Kubernetes documentation on tearing down a cluster for details.

+

Below is an example of how to reset the Kubernetes cluster we just created:

+

On node k8s-master, run the following command:

+
# drain node k8s-worker1
+kubectl drain k8s-worker1 --delete-emptydir-data --force --ignore-daemonsets
+
+
+

On node k8s-worker1, run the following command:

+
sudo kubeadm reset
+# manually reset iptables/ipvs if necessary
+
+
+

On node k8s-master, delete node k8s-worker1:

+
kubectl delete node k8s-worker1
+
+
+

On node k8s-master, clean up the master node:

+
sudo kubeadm reset
+# manually reset iptables/ipvs if necessary
+
+
+
+
+
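The "manually reset iptables/ipvs" comments above refer to rules that kubeadm reset does not remove. A typical cleanup looks like the following; be aware it flushes ALL iptables rules on the node, and ipvsadm is a separate package that is only relevant if kube-proxy ran in IPVS mode:

```shell
# Flush all iptables rules and delete non-default chains left behind by kube-proxy/CNI
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
# If kube-proxy ran in IPVS mode, clear the IPVS tables as well
sudo ipvsadm --clear
```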

NOTES

+
    +
  1. By default, normal workloads won’t be scheduled on nodes with the control-plane role (i.e., the K8s master node). If you want K8s to schedule normal workloads on those nodes, run the following commands on the K8s master node:

  2. +
+
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
+kubectl label nodes --all node.kubernetes.io/exclude-from-external-load-balancers-
+
+
+
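To confirm the control-plane taint was removed, you can inspect the node's taints (the node name k8s-master below matches this demo setup):

```shell
# Shows "Taints: <none>" once the control-plane taint is gone
kubectl describe node k8s-master | grep Taints
```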
    +
  1. Verifying the K8s CNI: if you see any issues with inter-node pod-to-pod communication, use the following steps to verify that the K8s CNI is working correctly:

  2. +
+
# Create the K8S manifest file for our debug pods
+cat <<EOF | tee debug.yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  labels:
+    run: debug
+  name: debug
+spec:
+  replicas: 2
+  selector:
+    matchLabels:
+      run: debug
+  template:
+    metadata:
+      labels:
+        run: debug
+    spec:
+      affinity:
+        podAntiAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+          - labelSelector:
+              matchExpressions:
+              - key: run
+                operator: In
+                values:
+                - debug
+            topologyKey: kubernetes.io/hostname
+      containers:
+      - image: nicolaka/netshoot:latest
+        name: debug
+        command: [ "sleep", "infinity" ]
+EOF
+# Create the debug pod
+kubectl apply -f debug.yaml
+
+
+

Wait until both debug pods are in ‘Running’ status:

+
vagrant@k8s-master:~$ kubectl get pod -owide
+NAME                    READY   STATUS    RESTARTS   AGE   IP               NODE          NOMINATED NODE   READINESS GATES
+debug-ddfd698ff-7gsdc   1/1     Running   0          91s   10.244.194.66    k8s-worker1   <none>           <none>
+debug-ddfd698ff-z5qpv   1/1     Running   0          91s   10.244.235.199   k8s-master    <none>           <none>
+
+
+

Make sure pod debug-ddfd698ff-z5qpv on node k8s-master can ping the IP address of the other pod, debug-ddfd698ff-7gsdc, on node k8s-worker1, to verify that east-west traffic is working in K8s.

+
vagrant@k8s-master:~$ kubectl exec debug-ddfd698ff-z5qpv -- ping -c 1 10.244.194.66
+PING 10.244.194.66 (10.244.194.66) 56(84) bytes of data.
+64 bytes from 10.244.194.66: icmp_seq=1 ttl=62 time=1.76 ms
+
+--- 10.244.194.66 ping statistics ---
+1 packets transmitted, 1 received, 0% packet loss, time 0ms
+rtt min/avg/max/mdev = 1.755/1.755/1.755/0.000 ms
+
+
+

Make sure pod debug-ddfd698ff-z5qpv on node k8s-master can ping the IP address of the other node, k8s-worker1, to verify that north-south traffic is working in K8s.

+
vagrant@k8s-master:~$ kubectl exec debug-ddfd698ff-z5qpv -- ping -c 1 192.168.121.133
+PING 192.168.121.133 (192.168.121.133) 56(84) bytes of data.
+64 bytes from 192.168.121.133: icmp_seq=1 ttl=63 time=1.34 ms
+
+--- 192.168.121.133 ping statistics ---
+1 packets transmitted, 1 received, 0% packet loss, time 0ms
+rtt min/avg/max/mdev = 1.339/1.339/1.339/0.000 ms
+
+
+

Delete debug pods after use:

+
kubectl delete -f debug.yaml
+
+
+
+

© Copyright 2024-2024 OPEA™, a Series of LF Projects, LLC. + + +Published on Aug 05, 2024. + +

+ + + + \ No newline at end of file diff --git a/latest/guide/installation/k8s_install/k8s_install_kubespray.html b/latest/guide/installation/k8s_install/k8s_install_kubespray.html new file mode 100644 index 000000000..597ef2dfe --- /dev/null +++ b/latest/guide/installation/k8s_install/k8s_install_kubespray.html @@ -0,0 +1,450 @@ + + + + + + + Kubernetes installation using Kubespray — OPEA™ 0.8 documentation + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+ + + + +
+ +
+

Kubernetes installation using Kubespray

+

In this document, we’ll install Kubernetes v1.29 using Kubespray on a 2-node cluster.

+

There are several ways to use Kubespray to deploy a Kubernetes cluster. In this document, we choose the Ansible way. For other ways to use Kubespray, refer to Kubespray’s documentation.

+
+

Node preparation

hostname     ip address           Operating System
k8s-master   192.168.121.35/24    Ubuntu 22.04
k8s-worker   192.168.121.133/24   Ubuntu 22.04

+

We assume these two machines are used for the two-node Kubernetes cluster and that both have direct internet access, from the shell and for the apt repositories.

+

If you have previously installed Kubernetes, or any other container runtime (e.g., Docker or containerd), on either of the above nodes, make sure you clean those up first. Refer to Kubernetes installation demo using kubeadm to clean up the environment.

+
+
+

Prerequisites

+

We assume a third machine is used as your operating machine: you log in to it and run the Ansible commands from there. Either of the two K8s nodes above can also serve as the operating machine. Unless otherwise specified, all of the following operations are performed on the operating machine.

+

Please make sure that the operating machine can log in to both K8s nodes via SSH without a password prompt. There are different ways to configure passwordless SSH login. A simple way is to copy the public key of the operating machine to the K8s nodes. For example:

+
# generate key pair in the operation machine
+ssh-keygen -t rsa -b 4096
+# manually copy the public key to the K8s master and worker nodes
+cat ~/.ssh/id_rsa.pub | ssh username@k8s-master "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"
+cat ~/.ssh/id_rsa.pub | ssh username@k8s-worker "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"
+
+
+
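As an alternative to the manual cat-over-ssh above, ssh-copy-id does the same thing in one step. The sketch below generates a key non-interactively into a throwaway path just to illustrate the flags; the path and hostnames are illustrative:

```shell
# Generate a key pair with no passphrase, without prompting (throwaway path for the demo)
rm -f /tmp/demo_id_rsa /tmp/demo_id_rsa.pub
ssh-keygen -t rsa -b 4096 -N "" -f /tmp/demo_id_rsa -q
ls /tmp/demo_id_rsa.pub
# Then copy it to each node (asks for the node's password once):
#   ssh-copy-id -i /tmp/demo_id_rsa.pub username@k8s-master
#   ssh-copy-id -i /tmp/demo_id_rsa.pub username@k8s-worker
```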
+
+

Step 1. Set up Kubespray and Ansible

+

Python 3 (version >= 3.10) is required in this step. If you don’t have it, see the Python website for an installation guide.

+

Set up a Python virtual environment and install Ansible and the other Kubespray dependencies by running the following commands; see the Kubespray Ansible installation guide for details. To get the Kubespray code, check out the latest release tag of Kubespray. Here we use Kubespray v2.25.0 as an example.

+
git clone https://github.com/kubernetes-sigs/kubespray.git
+VENVDIR=kubespray-venv
+KUBESPRAYDIR=kubespray
+python3 -m venv $VENVDIR
+source $VENVDIR/bin/activate
+cd $KUBESPRAYDIR
+# Check out the latest release version tag of kubespray.
+git checkout v2.25.0
+pip install -U -r requirements.txt
+
+
+
+
+

Step 2. Build your own inventory

+

An Ansible inventory defines the hosts, and groups of hosts, on which Ansible tasks are executed. You can copy a sample inventory with the following command:

+
cp -r inventory/sample inventory/mycluster
+
+
+

Edit your inventory file inventory/mycluster/inventory.ini to configure the node names and IP addresses. The inventory file used in this demo is as follows:

+
[all]
+k8s-master ansible_host=192.168.121.35
+k8s-worker ansible_host=192.168.121.133
+
+[kube_control_plane]
+k8s-master
+
+[etcd]
+k8s-master
+
+[kube_node]
+k8s-master
+k8s-worker
+
+[calico_rr]
+
+[k8s_cluster:children]
+kube_control_plane
+kube_node
+calico_rr
+
+
+
+
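Before running the full playbook, it's worth verifying that Ansible can reach every host in the inventory. This assumes the passwordless SSH setup from the Prerequisites; add -u <username> if the remote user differs from your local one:

```shell
# Each host should reply with "ping": "pong"
ansible -i inventory/mycluster/inventory.ini all -m ping
```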
+

Step 3. Define Kubernetes configuration

+

Kubespray gives you the ability to customize the Kubernetes installation, for example to define:

+
    +
  • network plugin

  • +
  • container manager

  • +
  • kube_apiserver_port

  • +
  • kube_pods_subnet

  • +
  • all K8s add-on configurations, or even deploying the cluster on a hyperscaler like AWS or GCP. All of those settings are stored in group vars defined in inventory/mycluster/group_vars

  • +
+

For K8s settings, look in inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml

+

NOTE: If you hit issues at TASK [kubernetes/control-plane : Kubeadm | Initialize first master] during K8s deployment, change the port on which the API server will be listening from 6443 to 8080. By default, Kubespray configures kube_control_plane hosts with insecure access to kube-apiserver via port 8080. Refer to the Kubespray getting-started guide.

+
# The port the API Server will be listening on.
+kube_apiserver_ip: "{{ kube_service_addresses | ansible.utils.ipaddr('net') | ansible.utils.ipaddr(1) | ansible.utils.ipaddr('address') }}"
+kube_apiserver_port: 8080  # (http)
+
+
+
+
+

Step 4. Deploy Kubernetes

+

You can clean up an old Kubernetes cluster with the Ansible playbook using the following command:

+
# Clean up old Kubernetes cluster with Ansible Playbook - run the playbook as root
+# The option `--become` is required, as for example cleaning up SSL keys in /etc/,
+# uninstalling old packages and interacting with various systemd daemons.
+# Without --become the playbook will fail to run!
+# And be mindful that it will remove the current Kubernetes cluster (if it's running)!
+ansible-playbook -i inventory/mycluster/inventory.ini  --become --become-user=root -e override_system_hostname=false reset.yml
+
+
+

Then you can deploy Kubernetes with the Ansible playbook using the following command:

+
# Deploy Kubespray with Ansible Playbook - run the playbook as root
+# The option `--become` is required, as for example writing SSL keys in /etc/,
+# installing packages and interacting with various systemd daemons.
+# Without --become the playbook will fail to run!
+ansible-playbook -i inventory/mycluster/inventory.ini  --become --become-user=root -e override_system_hostname=false cluster.yml
+
+
+

The Ansible playbooks take several minutes to finish. After a playbook is done, check its output: if every host reports failed=0 in the PLAY RECAP, the playbook execution succeeded.
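A quick way to check the recap programmatically is to grep the PLAY RECAP lines for a non-zero failed count. The recap line below is a hypothetical stand-in for real playbook output:

```shell
# Hypothetical PLAY RECAP line from a successful run (real output has one line per host)
recap='k8s-master : ok=520 changed=120 unreachable=0 failed=0 skipped=250'
if echo "$recap" | grep -qE 'failed=[1-9]'; then
  echo "playbook failed"
else
  echo "playbook succeeded"
fi
```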

+
+
+

Step 5. Create kubectl configuration

+

If you want to use the Kubernetes command line tool kubectl on the k8s-master node, log in to k8s-master and run the following commands:

+
mkdir -p $HOME/.kube
+sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
+sudo chown $(id -u):$(id -g) $HOME/.kube/config
+
+
+

If you want to access this Kubernetes cluster from other machines, you can install kubectl with sudo apt-get install -y kubectl, copy over the configuration from the k8s-master node, and set its ownership as above.

+

Then run the following commands to check the status of your Kubernetes cluster:

+
$ kubectl get node
+NAME          STATUS   ROLES           AGE     VERSION
+k8s-master    Ready    control-plane   31m     v1.29.5
+k8s-worker   Ready    <none>          7m31s   v1.29.5
+$ kubectl get pods -A
+NAMESPACE                    NAME                                       READY   STATUS    RESTARTS   AGE
+kube-system                  calico-kube-controllers-68485cbf9c-vwqqj   1/1     Running   0          23m
+kube-system                  calico-node-fxr6v                          1/1     Running   0          24m
+kube-system                  calico-node-v95sp                          1/1     Running   0          23m
+kube-system                  coredns-69db55dd76-ctld7                   1/1     Running   0          23m
+kube-system                  coredns-69db55dd76-ztwfg                   1/1     Running   0          23m
+kube-system                  dns-autoscaler-6d5984c657-xbwtc            1/1     Running   0          23m
+kube-system                  kube-apiserver-satg-opea-0                 1/1     Running   0          24m
+kube-system                  kube-controller-manager-satg-opea-0        1/1     Running   0          24m
+kube-system                  kube-proxy-8zmhk                           1/1     Running   0          23m
+kube-system                  kube-proxy-hbq78                           1/1     Running   0          23m
+kube-system                  kube-scheduler-satg-opea-0                 1/1     Running   0          24m
+kube-system                  nginx-proxy-satg-opea-3                    1/1     Running   0          23m
+kube-system                  nodelocaldns-kbcnv                         1/1     Running   0          23m
+kube-system                  nodelocaldns-wvktt                         1/1     Running   0          24m
+
+
+

Congratulations! Your two-node K8s cluster is ready to use.

+
+
+

Quick reference

+
+

How to deploy a single node Kubernetes?

+

Deploying a single-node K8s cluster is very similar to setting up a multi-node (>=2) K8s cluster.

+

Follow the previous Step 1. Set up Kubespray and Ansible to set up the environment.

+

Then, in Step 2. Build your own inventory, create a single-node Ansible inventory by copying the single-node inventory sample as follows:

+
cp -r inventory/local inventory/mycluster
+
+
+

Edit your single-node inventory inventory/mycluster/hosts.ini to replace the node name node1 with your real node name (for example k8s-master) using the following command:

+
sed -i "s/node1/k8s-master/g" inventory/mycluster/hosts.ini
+
+
+

Then your single-node inventory will look like the following:

+
k8s-master ansible_connection=local local_release_dir={{ansible_env.HOME}}/releases
+
+[kube_control_plane]
+k8s-master
+
+[etcd]
+k8s-master
+
+[kube_node]
+k8s-master
+
+[k8s_cluster:children]
+kube_node
+kube_control_plane
+
+
+

Then follow Step 4. Deploy Kubernetes, paying attention to the inventory name while executing the Ansible playbook: it is inventory/mycluster/hosts.ini in a single-node deployment. When the playbook finishes successfully, your one-node K8s cluster is ready.

+

Then follow Step 5. Create kubectl configuration to set up kubectl. You can check the status with kubectl get nodes.

+
+
+

How to scale Kubernetes cluster to add more nodes?

+

Assume you already have a two-node K8s cluster and want to scale it to three nodes. The third node’s information is:

hostname     ip address           Operating System
third-node   192.168.121.134/24   Ubuntu 22.04

+

Make sure the third node has internet access and can be logged in to via SSH from your operating machine without a password prompt.

+

Edit your Ansible inventory file to add the third node’s information to the [all] and [kube_node] sections as follows:

+
[all]
+k8s-master ansible_host=192.168.121.35
+k8s-worker ansible_host=192.168.121.133
+third-node ansible_host=192.168.121.134
+
+[kube_control_plane]
+k8s-master
+
+[etcd]
+k8s-master
+
+[kube_node]
+k8s-master
+k8s-worker
+third-node
+
+[calico_rr]
+
+[k8s_cluster:children]
+kube_control_plane
+kube_node
+calico_rr
+
+
+

Then you can deploy Kubernetes to the third node with the Ansible playbook using the following command:

+
# Deploy Kubespray with Ansible Playbook - run the playbook as root
+# The option `--become` is required, as for example writing SSL keys in /etc/,
+# installing packages and interacting with various systemd daemons.
+# Without --become the playbook will fail to run!
+ansible-playbook -i inventory/mycluster/inventory.ini --limit third-node --become --become-user=root scale.yml -b -v
+
+
+

When the playbook finishes successfully, you can check whether the third node is ready with the following command:

+
kubectl get nodes
+
+
+

For more information, see the Kubespray documentation on adding and removing Kubernetes nodes.

+
+
+

How to configure a proxy?

+

If your nodes need a proxy to access the internet, you will need extra configuration when deploying K8s.

+

We assume your proxy is as below:

+
- http_proxy="http://proxy.fake-proxy.com:911"
+- https_proxy="http://proxy.fake-proxy.com:912"
+
+
+

You can change parameters in inventory/mycluster/group_vars/all/all.yml to set http_proxy, https_proxy, and additional_no_proxy as follows. Make sure you add all the nodes’ IP addresses to the additional_no_proxy parameter; in this example, we use 192.168.121.0/24 to cover all nodes’ IP addresses.

+
## Set these proxy values in order to update package manager and docker daemon to use proxies and custom CA for https_proxy if needed
+http_proxy: "http://proxy.fake-proxy.com:911"
+https_proxy: "http://proxy.fake-proxy.com:912"
+
+## If you need exclude all cluster nodes from proxy and other resources, add other resources here.
+additional_no_proxy: "localhost,127.0.0.1,192.168.121.0/24"
+
+
+
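One easy mistake is a node IP that falls outside additional_no_proxy, which sends intra-cluster traffic through the proxy. A quick offline sanity check, using the values from this example and assuming python3 is available on the operating machine:

```shell
# Check that each node IP is covered by the no_proxy CIDR; both should print "covered"
python3 - <<'EOF'
import ipaddress

no_proxy_cidr = "192.168.121.0/24"              # from additional_no_proxy above
node_ips = ["192.168.121.35", "192.168.121.133"]  # the two cluster nodes

net = ipaddress.ip_network(no_proxy_cidr)
for ip in node_ips:
    covered = ipaddress.ip_address(ip) in net
    print(ip, "covered" if covered else "NOT covered")
EOF
```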
+
+


+ + + + \ No newline at end of file diff --git a/latest/index.html b/latest/index.html index 8cb849188..b416b6644 100644 --- a/latest/index.html +++ b/latest/index.html @@ -25,7 +25,8 @@ - + + @@ -79,6 +80,31 @@ @@ -123,9 +149,9 @@

OPEA Project Documentation

-

Welcome to the OPEA Project (latest) documentation published Aug 02, 2024. +

Welcome to the OPEA Project (latest) documentation published Aug 05, 2024. OPEA streamlines implementation of enterprise-grade Generative AI by efficiently -integrating secure, performant, and cost-effective Generative AI workflows into business value.

+integrating secure, performant, and cost-effective Generative AI workflows to business value.

Source code for the OPEA Project is maintained in the OPEA Project GitHub repo.

@@ -136,15 +162,17 @@
-