{
  "version": 3,
  "sources": ["ssg:https://framerusercontent.com/modules/E67YbaqbTjreEHUuE447/ZqV6MbUZ9lFEwKWLEMI1/roDaneUSn-6.js"],
"sourcesContent": ["import{jsx as e,jsxs as t}from\"react/jsx-runtime\";import{ComponentPresetsConsumer as n,Link as a}from\"framer\";import{motion as r}from\"framer-motion\";import*as o from\"react\";import{Youtube as i}from\"https://framerusercontent.com/modules/NEd4VmDdsxM3StIUbddO/1de6WpgIbCrKkRcPfQcW/YouTube.js\";import s from\"https://framerusercontent.com/modules/pVk4QsoHxASnVtUBp6jr/QVzZltTawVJTjmjAWG3C/CodeBlock.js\";export const richText=/*#__PURE__*/t(o.Fragment,{children:[/*#__PURE__*/t(\"p\",{children:[\"How do you choose a suitable embedding model for your RAG application? A popular starting point for selecting a text embedding model is the \",/*#__PURE__*/e(a,{href:\"https://huggingface.co/spaces/mteb/leaderboard\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"Hugging Face MTEB (Massive Text Embedding Benchmark) leaderboard\"})}),\", as shown below:\"]}),/*#__PURE__*/e(\"img\",{alt:\"\",className:\"framer-image\",height:\"666\",src:\"https://framerusercontent.com/images/BrYn3VtrueYmEvgwaFr0YrBk4c.png\",srcSet:\"https://framerusercontent.com/images/BrYn3VtrueYmEvgwaFr0YrBk4c.png?scale-down-to=512 512w,https://framerusercontent.com/images/BrYn3VtrueYmEvgwaFr0YrBk4c.png?scale-down-to=1024 1024w,https://framerusercontent.com/images/BrYn3VtrueYmEvgwaFr0YrBk4c.png?scale-down-to=2048 2048w,https://framerusercontent.com/images/BrYn3VtrueYmEvgwaFr0YrBk4c.png 2528w\",style:{aspectRatio:\"2528 / 1332\"},width:\"1264\"}),/*#__PURE__*/e(\"p\",{children:\"However, if you\u2019re new to embeddings, this leaderboard, with all its various tabs, filters, and metrics, may appear somewhat intimidating. If that\u2019s the case, you're in the right place! This post will help you navigate the embedding models and the leaderboard with ease. 
You\u2019ll learn:\\xa0\"}),/*#__PURE__*/t(\"ul\",{children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"When to use a Bi-Encoder and a Cross-Encoder\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"What happens during Bi-Encoder pre-training\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"How embedding models are benchmarked\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"How to select a baseline embedding model for your use case\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"How to improve upon baseline embeddings\"})})]}),/*#__PURE__*/e(\"p\",{children:\"Let\u2019s start by unpacking how we calculate similarity between two pieces of text using encoder transformer models.\\xa0\"}),/*#__PURE__*/e(\"h2\",{children:\"Bi-Encoders vs Cross-Encoders\"}),/*#__PURE__*/e(\"p\",{children:\"When it comes to calculating similarity between sentence or document pairs, there are two primary approaches: Bi-Encoder transformer models and Cross-Encoder transformer models. You may have noticed the checkbox filters for both of these in the MTEB leaderboard. 
The models used to generate vector representations of your data for the RAG\u2019s knowledge base are Bi-Encoders. However, Cross-Encoders also have a role to play in RAG systems, so let's dive into the differences between them.\"}),/*#__PURE__*/t(\"p\",{children:[\"Bi-Encoders produce a vector representation for a given sentence or document chunk, which is usually a single vector of a fixed dimension. Note that there\u2019s an exception to this - \",/*#__PURE__*/e(a,{href:\"https://jina.ai/news/what-is-colbert-and-late-interaction-and-why-they-matter-in-search/\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"ColBERT\"})}),\", but we won\u2019t be covering it in this blog post. In most cases, whether the input text is a single sentence, such as a user question, or a full paragraph, like a document excerpt, as long as the input fits within the embedding model's maximum sequence length, the output will be a fixed-dimension vector. Here\u2019s how it works: the pre-trained encoder model (usually BERT) converts the text into tokens, for each of which it has learned a vector representation during pre-training. 
It then applies a pooling step to aggregate individual token representations into a single vector representation.\\xa0\"]}),/*#__PURE__*/e(\"p\",{children:\"Common types of pooling are:\"}),/*#__PURE__*/t(\"ul\",{children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"CLS pooling: the vector representation of the special [CLS] token (designed to model the representation for the sentence that follows it) becomes the representation for the whole sequence\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Mean pooling: the average of token vector representations is returned as the representation for the whole sequence\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Max pooling: the element-wise maximum across token vector representations becomes the representation for the whole sequence\"})})]}),/*#__PURE__*/e(\"p\",{children:\"The goal is to compress the granular token-level representations into a single fixed-length representation that encapsulates the meaning of the entire input sequence.\"}),/*#__PURE__*/t(\"p\",{children:[\"The \",/*#__PURE__*/e(\"code\",{children:\"bi\"}),\" in Bi-Encoder stems from the fact that documents and user queries are processed separately by two independent instances of the same encoder model. 
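The pooling strategies listed above can be sketched in a few lines. This is an illustrative toy example (random vectors stand in for BERT token outputs; the `pool` helper is hypothetical, not part of any library):

```python
import numpy as np

def pool(token_embeddings: np.ndarray, strategy: str = "mean") -> np.ndarray:
    """Collapse per-token vectors (seq_len x dim) into one sequence vector.

    'cls'  -> vector of the first ([CLS]) token
    'mean' -> element-wise average over tokens
    'max'  -> element-wise maximum over tokens
    """
    if strategy == "cls":
        return token_embeddings[0]
    if strategy == "mean":
        return token_embeddings.mean(axis=0)
    if strategy == "max":
        return token_embeddings.max(axis=0)
    raise ValueError(f"unknown pooling strategy: {strategy}")

# Toy stand-in for encoder output: 4 tokens, each a 6-dimensional vector.
tokens = np.random.default_rng(0).normal(size=(4, 6))
sentence_vec = pool(tokens, "mean")  # a fixed 6-dim vector, regardless of token count
```

Whatever the strategy, the output dimension no longer depends on the input length, which is exactly what makes the result storable and comparable.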
The produced vector representations can then be compared using cosine similarity:\\xa0\"]}),/*#__PURE__*/e(\"img\",{alt:\"\",className:\"framer-image\",height:\"650\",src:\"https://framerusercontent.com/images/evmHDHBBC1pUVHit7uqforbbgho.png\",srcSet:\"https://framerusercontent.com/images/evmHDHBBC1pUVHit7uqforbbgho.png?scale-down-to=512 512w,https://framerusercontent.com/images/evmHDHBBC1pUVHit7uqforbbgho.png?scale-down-to=1024 1024w,https://framerusercontent.com/images/evmHDHBBC1pUVHit7uqforbbgho.png 1455w\",style:{aspectRatio:\"1455 / 1301\"},width:\"727\"}),/*#__PURE__*/e(\"p\",{children:\"In this setup, document and query vector representations are computed using the same embedding model, but in complete isolation from each other. The model never sees the documents and user queries simultaneously. This is important because it enables us to generate document embeddings at any point in time and store them in a vector store. At inference time, only the user query embedding needs to be computed to run the similarity search and find documents with similar vector representations.\"}),/*#__PURE__*/e(\"p\",{children:\"What about Cross-Encoder models? How do they work? A Cross-Encoder takes in both text pieces (e.g. a user query and a document) simultaneously. 
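The Bi-Encoder comparison just described boils down to cosine similarity between independently computed vectors. A minimal sketch, with hypothetical hand-made embeddings standing in for real model output and an in-memory dict standing in for a vector store:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine of the angle between two vectors: 1.0 = same direction.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Document embeddings are pre-computed once and stored (here: a plain dict;
# in practice a vector store with an ANN index).
doc_embeddings = {
    "doc_a": np.array([0.9, 0.1, 0.0]),
    "doc_b": np.array([0.1, 0.9, 0.2]),
}

# Only the query embedding is computed at inference time.
query_embedding = np.array([0.8, 0.2, 0.1])

# Rank stored documents by similarity to the query.
ranked = sorted(
    doc_embeddings,
    key=lambda d: cosine_similarity(query_embedding, doc_embeddings[d]),
    reverse=True,
)
```

The point of the separation: the expensive document side runs offline, and only the cheap query side runs per request.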
It does not produce a vector representation for each of them, but instead outputs a value between 0 and 1 indicating the similarity of the input pair.\\xa0\"}),/*#__PURE__*/e(\"img\",{alt:\"\",className:\"framer-image\",height:\"394\",src:\"https://framerusercontent.com/images/vW3PuCKYLjuXgvmk79siTMsnwiE.png\",srcSet:\"https://framerusercontent.com/images/vW3PuCKYLjuXgvmk79siTMsnwiE.png?scale-down-to=512 512w,https://framerusercontent.com/images/vW3PuCKYLjuXgvmk79siTMsnwiE.png?scale-down-to=1024 1024w,https://framerusercontent.com/images/vW3PuCKYLjuXgvmk79siTMsnwiE.png 1164w\",style:{aspectRatio:\"1164 / 789\"},width:\"582\"}),/*#__PURE__*/t(\"p\",{children:[\"Because Cross-Encoders have access to both the user query and the document at the same time, they \",/*#__PURE__*/e(a,{href:\"https://arxiv.org/abs/1908.10084\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"have been shown\"})}),\" to achieve better performance at identifying similar documents compared to Bi-Encoders. However, this comes at the cost of computational efficiency. With Bi-Encoders, you can pre-compute embeddings for millions of documents ahead of time and leverage fast Approximate Nearest Neighbors (ANN) algorithms provided by vector stores to find similar vector representations.\"]}),/*#__PURE__*/e(\"p\",{children:\"In contrast, Cross-Encoders would require calculating scores for each query and each of the millions of documents, which is not feasible at inference time. Nevertheless, their strength in capturing nuanced relationships between the query and candidate documents makes them an excellent choice as a reranker. Once you've retrieved a batch of candidates, you\u2019re working with a small set of documents, and you can use a reranker to re-assess the similarity to the original query. 
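The reranking stage described above can be sketched as follows. Here a toy token-overlap scorer stands in for a real Cross-Encoder (that is the part you would swap out for an actual model); the structural point is that only the handful of retrieved candidates ever get scored, never the full corpus:

```python
def score_pair(query: str, document: str) -> float:
    # Toy stand-in for a Cross-Encoder: fraction of query tokens found in the
    # document. A real Cross-Encoder would jointly encode the pair instead.
    q_tokens = set(query.lower().split())
    d_tokens = set(document.lower().split())
    return len(q_tokens & d_tokens) / max(len(q_tokens), 1)

def rerank(query: str, candidates: list[str], top_k: int = 3) -> list[str]:
    # Score only the retrieved candidates -- a small, fixed-size set.
    return sorted(candidates, key=lambda doc: score_pair(query, doc), reverse=True)[:top_k]

# Candidates as they came back from the (cheap) first-stage retrieval.
candidates = [
    "pooling strategies for BERT embeddings",
    "how to fine-tune an embedding model",
    "fine-tune a model on query document pairs",
]
top = rerank("fine-tune embedding model", candidates, top_k=2)
```

Because `rerank` touches only the candidate list, its cost stays constant as the knowledge base grows.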
Using a Cross-Encoder at this stage is computationally manageable and allows you to leverage its strengths and improve the accuracy of the retrieval.\"}),/*#__PURE__*/e(\"p\",{children:\"For the initial embedding of documents, however, a Bi-Encoder is necessary. Let's take a closer look at how these models are trained and benchmarked to understand how to select the best one for your specific use case.\"}),/*#__PURE__*/e(\"h2\",{children:\"How Bi-Encoder models are pre-trained and benchmarked\"}),/*#__PURE__*/e(\"h3\",{children:\"Pre-training an embedding model\"}),/*#__PURE__*/e(\"p\",{children:\"Although training steps may vary slightly from one model to another, and not all model publishers have shared their training details, the pre-training steps for Bi-Encoder embedding models generally follow a similar pattern.\"}),/*#__PURE__*/t(\"p\",{children:[\"The process begins with a pre-trained general-purpose encoder-style model, such as a small, ~100M-parameter pre-trained \",/*#__PURE__*/e(a,{href:\"https://arxiv.org/abs/1810.04805\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"BERT\"})}),\". Despite the availability of larger, more advanced generative LLMs, this smaller BERT model remains a solid backbone for text embedding models today.\"]}),/*#__PURE__*/e(\"p\",{children:\"To fine-tune the pre-trained BERT for information retrieval, a dataset is assembled consisting of text pairs (question/answer, query/document) with a contrastive learning objective that reflects the downstream use of text embeddings. The text pairs can be positive (e.g. a question and an answer to it) or negative (a question and unrelated text). The goal is for the model to learn to bring the embeddings of positive pairs closer together in vector space while pushing the embeddings of negative pairs apart.\\xa0\"}),/*#__PURE__*/t(\"p\",{children:[\"The training process often has more than one step. 
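The contrastive objective just described (pull positive pairs together, push negatives apart) can be sketched as an InfoNCE-style loss with in-batch negatives. This is an illustrative numpy version under that assumption, not any particular publisher's actual training code:

```python
import numpy as np

def info_nce_loss(query_vecs: np.ndarray, doc_vecs: np.ndarray, temperature: float = 0.05) -> float:
    """Toy InfoNCE-style contrastive loss.

    query_vecs[i] and doc_vecs[i] form a positive pair; every other document
    in the batch serves as an in-batch negative for query i.
    """
    # L2-normalize so dot products are cosine similarities.
    q = query_vecs / np.linalg.norm(query_vecs, axis=1, keepdims=True)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    sims = q @ d.T / temperature  # (batch, batch) similarity matrix
    # Cross-entropy with the diagonal (the true pairs) as the labels:
    # minimizing this pulls positives together and pushes negatives apart.
    log_probs = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))
```

When each query already points at its positive document, the loss is near zero; when positives are mismatched, it grows, which is the gradient signal the encoder trains against.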
Initially, the model is trained on a large corpus of text pairs in a weakly supervised manner. However, to achieve SOTA results on the leaderboards, a second round of contrastive training is often employed. In this second round, the model is further trained on a smaller dataset with high-quality data and particularly challenging examples from academic datasets like \",/*#__PURE__*/e(a,{href:\"https://huggingface.co/datasets/microsoft/ms_marco\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"MSMARCO\"})}),\", \",/*#__PURE__*/e(a,{href:\"https://huggingface.co/datasets/hotpotqa/hotpot_qa\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"HotpotQA\"})}),\", and \",/*#__PURE__*/e(a,{href:\"https://huggingface.co/datasets/google-research-datasets/natural_questions\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"NQ\"})}),\". This training recipe delivers strong general-purpose embedding models that top the MTEB leaderboard. Given the importance of the MTEB leaderboard in guiding the choice of embedding model, let's take a closer look at what the MTEB benchmark is.\"]}),/*#__PURE__*/e(\"h3\",{children:\"Embedding model benchmarks\"}),/*#__PURE__*/t(\"p\",{children:[\"Evaluating the quality of embedding models within retrieval systems in general, and not within the context of a specific use case, can be challenging. 
Decades of academic research have led to the development of the \",/*#__PURE__*/e(a,{href:\"https://arxiv.org/pdf/2210.07316\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"Massive Text Embedding Benchmark (MTEB)\"})}),\", which has become a standard benchmark for text embedding models.\"]}),/*#__PURE__*/t(\"p\",{children:[\"MTEB spans 8 embedding tasks: bitext mining, classification, clustering, pair classification, reranking, retrieval, STS (semantic textual similarity), and summarization. At the time of writing, it covers a total of 181 datasets spanning multiple domains, text lengths, and languages. This is the most comprehensive public benchmark of text embeddings to date. For the purpose of evaluating embedding models for RAG, we focus on the \",/*#__PURE__*/e(\"strong\",{children:\"Retrieval\"}),\" tab of the MTEB leaderboard.\"]}),/*#__PURE__*/e(\"p\",{children:\"A widely used metric for evaluating a model\u2019s retrieval performance is Normalized Discounted Cumulative Gain @ 10 (NDCG@10). There are other metrics, of course, such as Precision@K, Recall@K, and more, but NDCG is a standard benchmarking metric that assesses the quality of an ordered list of results or predictions. NDCG@10 specifically evaluates the top 10 retrieval results. The calculation of NDCG considers both the relevance of each result and its position in the list. The metric ranges from 0 to 1, where 1 indicates a perfect match with the ideal order, and lower values indicate a lower-quality ranking of results. In the leaderboard, you can find out how well different embedding models perform at information retrieval (measured with NDCG@10) under the Retrieval tab.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"You can check the average scores across datasets for the embedding models, but depending on your specific RAG application, you may also want to refine your selection. 
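NDCG@10 as described can be computed directly. A small pure-Python sketch (for simplicity, the ideal ordering here is taken over the returned results only; MTEB normalizes against the full set of judged documents):

```python
import math

def dcg_at_k(relevances: list[float], k: int = 10) -> float:
    # Each result's relevance is discounted by its rank: position i (0-based)
    # contributes rel / log2(i + 2).
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances: list[float], k: int = 10) -> float:
    """relevances: graded relevance of the results, in the order returned."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# Perfect ordering scores 1.0; pushing a highly relevant result down drops it.
perfect = ndcg_at_k([3, 2, 1, 0])
swapped = ndcg_at_k([1, 2, 3, 0])
```

This is why NDCG@10 rewards not just finding relevant documents but placing the most relevant ones first.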
For instance, you may want to narrow down your choices to a specific language (e.g., English, Chinese, French, Polish) or domain (e.g., law).\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"Even within a language, there\u2019s room for nuance. We won\u2019t cover all of the datasets here, but let\u2019s take a brief look at the datasets used to measure retrieval performance for embedding models in the English language:\"}),/*#__PURE__*/t(\"ul\",{children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"ArguAna\"}),\": pairs of arguments and counterarguments scraped from an online debate portal.\\xa0\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"ClimateFEVER\"}),\": a dataset for verification of climate change-related claims\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"CQADupstackRetrieval\"}),\": a community question-answering dataset with data from 12 different StackExchange subforums: Android, English, Gaming, Gis, Mathematica, Physics, Programmers, Stats, Tex, Unix, Webmasters and Wordpress.\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"DBPedia\"}),\": a set of heterogeneous entity-bearing queries containing named entities, IR-style keywords, and natural language 
queries.\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"FEVER\"}),\": a dataset built for fact verification systems on real-world misinformation\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"FiQA2018\"}),\": opinion-based question-answer pairs for financial data, built by crawling StackExchange posts under the Investment topic from 2009-2017\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"HotpotQA\"}),\": multi-hop questions which require reasoning over multiple paragraphs to find the correct answer. 
Answers are sourced from Wikipedia.\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"MSMARCO\"}),\": real Bing search queries paired with human-annotated relevant passages\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"NFCorpus\"}),\": contains natural language queries harvested from NutritionFacts and annotated medical documents from PubMed\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"NQ\"}),\": Google search queries and documents with paragraphs and answer spans within Wikipedia articles.\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"QuoraRetrieval\"}),\": duplicate question pairs from Quora\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"SCIDOCS\"}),\": scientific papers and their direct citations\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"SciFact\"}),\": a dataset for fact-checking scientific 
claims\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"Touche2020\"}),\": a conversational arguments dataset\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"TRECCOVID\"}),\": a biomedical dataset of queries and scientific articles related to the COVID-19 pandemic\"]})})]}),/*#__PURE__*/e(\"p\",{children:\"A model with the highest average score across all of these datasets will give you a well-rounded general-purpose model, but if your data is mostly financial, for example, you may care more about the NDCG@10 score on the FiQA2018 dataset than, say, TRECCOVID or ClimateFEVER.\\xa0\"}),/*#__PURE__*/e(\"h2\",{children:\"Choosing your baseline embedding model\"}),/*#__PURE__*/e(\"p\",{children:\"Now that you know how embedding models work, how they are trained and how they are academically benchmarked, let\u2019s list the practical tips for choosing an appropriate embedding model:\\xa0\"}),/*#__PURE__*/t(\"ul\",{children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Look for a Bi-Encoder on the MTEB leaderboard. For a RAG application, check out the \u201CRetrieval\u201D task, and explore the NDCG@10 score for datasets in your language and domain. 
A higher NDCG@10 score indicates better performance.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Start with a small embedding model. First, the model size directly impacts latency. Second, a smaller model will let you build a quick baseline upon which you can iterate. Finally, a larger model is not necessarily better for your particular use case. It may be overfit to the training data and academic datasets to score high on the leaderboard, but that may not translate into the best performance on your custom data.\\xa0\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/t(\"p\",{children:[\"Choose an embedding model with a max tokens value that is reasonable for your use case. The max tokens value indicates the maximum size of document chunks you can embed with the model. In most cases, you won\u2019t need to embed large documents as a whole; in fact, for precise retrieval, smaller chunks are typically preferable. 
Learn about the importance of chunk sizes in our \",/*#__PURE__*/e(a,{href:\"https://unstructured.io/blog/chunking-for-rag-best-practices\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"recent blog post\"})}),\".\\xa0\"]})})]}),/*#__PURE__*/e(\"ul\",{children:/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Finally, don\u2019t forget to check the license of the embedding model of your choice, so that you are allowed to use it commercially, if that is your intent.\\xa0\"})})}),/*#__PURE__*/e(\"p\",{children:\"Once you have selected your embedding model, you can easily add it to your Unstructured data preprocessing pipeline with only a couple of lines of code:\\xa0\"}),/*#__PURE__*/e(\"div\",{className:\"framer-text-module\",style:{height:\"auto\",width:\"100%\"},children:/*#__PURE__*/e(n,{componentIdentifier:\"module:pVk4QsoHxASnVtUBp6jr/QVzZltTawVJTjmjAWG3C/CodeBlock.js:default\",children:t=>/*#__PURE__*/e(s,{...t,code:'embedder_config=EmbedderConfig(\\n\\xa0\\xa0\\xa0embedding_provider=\"langchain-huggingface\",\\n\\xa0\\xa0\\xa0embedding_model_name=os.getenv(\"EMBEDDING_MODEL_NAME\"),\\n),',language:\"Python\"})})}),/*#__PURE__*/e(\"p\",{children:\"Even though in this blog post we have used the Hugging Face leaderboard as an example for comparing embedding models, it doesn\u2019t mean that a model in your unstructured data ETL necessarily has to come from the Hugging Face Hub.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"In fact, Unstructured supports multiple embedding model providers, and you can choose whichever you prefer: OpenAI, HuggingFace, AWS Bedrock, Vertex AI, Voyage AI, or OctoAI.\"}),/*#__PURE__*/t(\"p\",{children:[\"You can learn more about embedding configuration with Unstructured in 
\",/*#__PURE__*/e(a,{href:\"https://docs.unstructured.io/api-reference/ingest/ingest-configuration/embedding-configuration\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"our documentation\"})}),\", and find ETL examples in our \",/*#__PURE__*/e(a,{href:\"https://docs.unstructured.io/examplecode/notebooks\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"notebook collection\"})}),\".\\xa0\"]}),/*#__PURE__*/e(\"p\",{children:\"Once you have picked your embedding model, don\u2019t rely solely on the MTEB benchmark! Evaluate the embedding model on a subset of your own data.\"}),/*#__PURE__*/e(\"h2\",{children:\"Improving the retrieval performance\"}),/*#__PURE__*/e(\"p\",{children:\"Selecting an appropriate baseline embedding model and integrating it into your preprocessing pipeline is a great first step. From then on, you can continue optimizing the retrieval performance of your RAG system. Here are some of the knobs you can turn:\"}),/*#__PURE__*/t(\"ul\",{children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/t(\"p\",{children:[\"Optimize chunk sizes. While smaller chunks generally enhance precision, there is no one-size-fits-all approach. Experiment with different chunk sizes to find the sweet spot for your use case. 
Learn more about \",/*#__PURE__*/e(a,{href:\"https://unstructured.io/blog/chunking-for-rag-best-practices\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"chunking best practices\"})}),\".\\xa0\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Hybrid search. If your data contains acronyms, domain-specific words, product names or codes that the embedding model was never exposed to, adding keyword search in addition to similarity search will improve retrieval in these situations without adding computational overhead.\\xa0\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/t(\"p\",{children:[\"Utilize metadata. Documents do not exist in isolation. For every document type, Unstructured extracts \",/*#__PURE__*/e(a,{href:\"https://docs.unstructured.io/api-reference/api-services/document-elements\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"metadata\"})}),\" that you can use to filter the results.\\xa0\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Add a reranker step: as mentioned earlier, Cross-Encoders take in both a user query and a document and are better at identifying related pairs than a Bi-Encoder combined with similarity search. 
They are inefficient to run over the whole knowledge base, but they can improve the results if you first retrieve candidate documents with hybrid search and then apply a Cross-Encoder as a reranker.\\xa0\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Fine-tune the embedding model. One of the things that you can do to improve the retrieval performance of your RAG system is fine-tuning your embedding model on your data to teach it to match the unique types of questions your users ask to the document passages that contain the answers. You won\u2019t need millions of real-world user query examples to start the fine-tuning experiments; you may be able to get by with just a few thousand.\\xa0\"})})]}),/*#__PURE__*/e(\"h2\",{children:\"Conclusion\"}),/*#__PURE__*/e(\"p\",{children:\"Selecting the right baseline embedding model for a RAG system can be a daunting task, but with the right tools and knowledge, it can be made much easier. 
By understanding the differences between Bi-Encoders and Cross-Encoders, and how they are trained and evaluated, you can make informed decisions about which model to use for your specific use case.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"The Massive Text Embedding Benchmark (MTEB) leaderboard provides a valuable resource for evaluating the performance of different embedding models, and by considering factors such as language, domain, and task-specific performance, you can refine your selection to find the best model for your needs.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"Once you choose the right embedding model, you can integrate it into your Unstructured ETL pipeline with just a few lines of code, no matter where your embedding model is hosted.\\xa0\"}),/*#__PURE__*/t(\"p\",{children:[\"Get your\",/*#__PURE__*/e(a,{href:\"https://unstructured.io/api-key-hosted\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\" Serverless API\"})}),\" key today and build your RAG with confidence! We cannot wait to see what you build with Unstructured Serverless. If you have any questions, we\u2019d love to hear from you! 
Please feel free to reach out to our team at \",/*#__PURE__*/e(a,{href:\"mailto:hello@unstructured.io\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!0,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"hello@unstructured.io\"})}),\" or join our \",/*#__PURE__*/e(a,{href:\"https://short.unstructured.io/pzw05l7\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"\\xa0Slack community\"})}),\".\"]})]});export const richText1=/*#__PURE__*/t(o.Fragment,{children:[/*#__PURE__*/t(\"p\",{children:[\"Teachers \",/*#__PURE__*/e(a,{href:\"https://theconversation.com/teachers-dont-have-enough-time-to-prepare-well-for-class-we-have-a-solution-175633\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"don\u2019t have enough time\"})}),\" to prepare well for class, so finding innovative solutions to streamline their tasks is vital to maintaining education quality in schools. \",/*#__PURE__*/e(a,{href:\"https://www.alayna.ai\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!0,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"Alayna AI\"})}),\"'s mission is to leverage AI to allow teachers to focus more on teaching and less on administrative tasks. To achieve this, Alayna partnered with \",/*#__PURE__*/e(a,{href:\"https://unstructured.io/\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"Unstructured\"})}),\" to enhance their product offerings and bring multimodal RAG features to their platform. 
In this case study, we explore how Alayna launched their latest feature, an AI Slides and Lesson generator, using \",/*#__PURE__*/e(a,{href:\"https://unstructured.io/api-key-hosted\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"Unstructured's Serverless API\"})}),\" as the key ingestion and preprocessing solution.\"]}),/*#__PURE__*/e(\"img\",{alt:\"\",className:\"framer-image\",height:\"422\",src:\"https://framerusercontent.com/images/DeFEkvdBKpnDbDSTLeVymz01M4.png\",srcSet:\"https://framerusercontent.com/images/DeFEkvdBKpnDbDSTLeVymz01M4.png?scale-down-to=512 512w,https://framerusercontent.com/images/DeFEkvdBKpnDbDSTLeVymz01M4.png?scale-down-to=1024 1024w,https://framerusercontent.com/images/DeFEkvdBKpnDbDSTLeVymz01M4.png 1600w\",style:{aspectRatio:\"1600 / 845\"},width:\"800\"}),/*#__PURE__*/t(\"p\",{children:[\"^Example slide deck from Alayna\u2019s recent TikTok \",/*#__PURE__*/e(a,{href:\"https://www.tiktok.com/@joinalayna/video/7397514292754533662\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"demo\"})})]}),/*#__PURE__*/e(\"p\",{children:\"Alayna leverages AI to improve educational content creation and delivery. Their flagship product, the AI Slides and Lesson Generator, enables educators to create high-quality, engaging lessons tailored to individual learning styles and objectives. This product allows seamless conversion of PDFs into engaging slideshow presentations, with image and text extraction via Unstructured. 
They also offer an AI Copilot for educators to assist with everyday teaching tasks.\\xa0\"}),/*#__PURE__*/e(\"h4\",{children:/*#__PURE__*/e(\"strong\",{children:\"Multimodal RAG powered by Unstructured\"})}),/*#__PURE__*/t(\"p\",{children:[\"One of the standout features of the Unstructured Serverless API is its ability to handle multimodal data, including text, images, and tables. This was a game-changer for Alayna. As \",/*#__PURE__*/e(a,{href:\"https://www.linkedin.com/in/prabir-vora/\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"Prabir Vora\"})}),\", Alayna\u2019s co-founder, highlighted, \"]}),/*#__PURE__*/e(\"blockquote\",{children:/*#__PURE__*/e(\"p\",{children:\"\u201CWhen I first learned about Unstructured, I was blown away by the ability to extract not just text, but also images and tables. This is crucial for educational content, where visual aids play a significant role in learning.\u201D\"})}),/*#__PURE__*/t(\"p\",{children:[\"Alayna used the Unstructured Serverless API to partition PDFs and other document formats into their constituent elements. This allowed them to extract meaningful data from textbooks, including diagrams and tables, which were then processed by Alayna's own Large Language Model (LLM) chains built using \",/*#__PURE__*/e(a,{href:\"https://www.langchain.com/\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"LangChain\"})}),\". 
The extracted data was summarized and stored in a vector database, making it accessible for generating new presentations.\"]}),/*#__PURE__*/e(\"h4\",{children:/*#__PURE__*/e(\"strong\",{children:\"Enhanced Lesson Creation\"})}),/*#__PURE__*/e(\"p\",{children:\"Unstructured provides the unique ability to create engaging lessons from rich media types. For example, teachers can now upload PDFs of textbooks, and the platform will automatically generate a slideshow complete with relevant images and tables. This feature, which Alayna launched on July 8, received overwhelmingly positive feedback from educators. As Prabir noted, \"}),/*#__PURE__*/e(\"blockquote\",{children:/*#__PURE__*/e(\"p\",{children:\"\u201CThe fact that they can now upload their PDFs and have them converted into a slideshow is a game-changer.\u201D\"})}),/*#__PURE__*/e(\"p\",{children:\"A recent demo of the Textbook to Slides feature illustrates how a PDF unit on photosynthesis is transformed into a detailed and visually appealing lesson presentation:\"}),/*#__PURE__*/e(\"div\",{className:\"framer-text-module\",style:{\"--aspect-ratio\":\"560 / 315\",aspectRatio:\"560 / 315\",height:\"auto\",width:\"100%\"},children:/*#__PURE__*/e(n,{componentIdentifier:\"module:NEd4VmDdsxM3StIUbddO/1de6WpgIbCrKkRcPfQcW/YouTube.js:Youtube\",children:t=>/*#__PURE__*/e(i,{...t,play:\"Off\",shouldMute:!0,thumbnail:\"Medium Quality\",url:\"https://youtu.be/NKhUuWA4dlE\"})})}),/*#__PURE__*/t(\"p\",{children:[\"This not only saves educators time, but also ensures that the lessons are more engaging for students and appeal to all types of learners. 
Research has shown that including video, text, audio, and interactive content in course materials \",/*#__PURE__*/e(a,{href:\"https://www.wevideo.com/blog/video-pedagogy-how-top-schools-boost-engagement-deep-learning-and-retention\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"increases information retention rates\"})}),\" by up to 60%.\"]}),/*#__PURE__*/e(\"h4\",{children:/*#__PURE__*/e(\"strong\",{children:\"Scalability and Serverless Architecture\"})}),/*#__PURE__*/t(\"p\",{children:[\"Alayna chose \",/*#__PURE__*/e(a,{href:\"https://unstructured.io/api-key-hosted\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"Unstructured\u2019s Serverless API\"})}),\" over open-source solutions for its scalability and efficiency. The serverless architecture allowed them to process multiple pages of documents simultaneously, significantly speeding up the workflow,\"]}),/*#__PURE__*/e(\"blockquote\",{children:/*#__PURE__*/e(\"p\",{children:\"\u201CWe could focus on scaling what we are really building rather than having to focus on this part of our functionality,\u201D Prabir explained. \"})}),/*#__PURE__*/e(\"p\",{children:\"This flexibility was crucial as Alayna grew and the number of requests per second increased.\"}),/*#__PURE__*/e(\"h3\",{children:/*#__PURE__*/e(\"strong\",{children:\"Conclusion\"})}),/*#__PURE__*/t(\"p\",{children:[\"Together with Unstructured, Alayna delivers a critical tool for educators: directly embedding images and tables from source materials into presentations. 
This recent \",/*#__PURE__*/e(a,{href:\"https://www.tiktok.com/@missjackson_in3rd/video/7394514160526052639?_r=1&_t=8oGPj1u4QmY\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!0,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"TikTok\"})}),\" showcasing the platform's features has garnered significant attention, highlighting the growing interest in AI-driven educational solutions.\"]}),/*#__PURE__*/t(\"p\",{children:[\"Unstructured is proud to support Alayna in their mission to revolutionize education, providing ETL that makes such innovative solutions possible. We look forward to seeing how they continue to transform the educational landscape. If you want to try out the Unstructured Serverless API for yourself, sign up for a 2-week free trial \",/*#__PURE__*/e(a,{href:\"https://unstructured.io/api-key-hosted\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"here\"})}),\".\\xa0\"]})]});export const richText2=/*#__PURE__*/t(o.Fragment,{children:[/*#__PURE__*/e(\"p\",{children:\"Prerequisites:\\xa0\"}),/*#__PURE__*/t(\"ul\",{children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/t(\"p\",{children:[\"Unstructured Serverless API key - \",/*#__PURE__*/e(a,{href:\"https://unstructured.io/api-key-hosted\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"get yours here\"})}),\".\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/t(\"p\",{children:[\"MongoDB account, a MongoDB Atlas cluster, and your MongoDB connection string (uri). 
Check out MongoDB\u2019s \",/*#__PURE__*/e(a,{href:\"https://www.mongodb.com/docs/atlas/getting-started/\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"Getting Started guides\"})}),\" to learn how to set these up.\\xa0\"]})})]}),/*#__PURE__*/t(\"p\",{children:[\"Find the code in this \",/*#__PURE__*/e(a,{href:\"https://github.com/MKhalusova/ai_librarian\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"GitHub repo\"})}),\" to follow along.\\xa0\"]}),/*#__PURE__*/e(\"h2\",{children:\"Unstructured data ETL Pipeline\"}),/*#__PURE__*/t(\"p\",{children:[\"Every RAG application starts with data: a knowledge base with relevant and up-to-date information that will feed the chatbot with crucial context. For this tutorial, we want the AI Librarian to be able to access the personal book collection that we have stored locally in a common EPUB format. If you need some books to get started, you can download over 70,000 free digital books from the \",/*#__PURE__*/e(a,{href:\"https://www.gutenberg.org/\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"Project Gutenberg website\"})}),\".\\xa0\"]}),/*#__PURE__*/t(\"p\",{children:[\"Let\u2019s make these books ready for our app! 
To do so, we\u2019ll need to build an ETL (extract, transform, load) pipeline to extract the content of the books into document elements, chunk the document elements into appropriately sized pieces of text (read more about \",/*#__PURE__*/e(a,{href:\"https://unstructured.io/blog/chunking-for-rag-best-practices\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"chunking here\"})}),\"), create vector representations of the chunks with an embedding model, and load the results into a vector store for later retrieval.\\xa0\"]}),/*#__PURE__*/e(\"p\",{children:\"In this tutorial, we\u2019ll be using MongoDB Atlas to store the processed books.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"The whole ETL pipeline can be created with just a few lines of Python code:\\xa0\"}),/*#__PURE__*/e(\"div\",{className:\"framer-text-module\",style:{height:\"auto\",width:\"100%\"},children:/*#__PURE__*/e(n,{componentIdentifier:\"module:pVk4QsoHxASnVtUBp6jr/QVzZltTawVJTjmjAWG3C/CodeBlock.js:default\",children:t=>/*#__PURE__*/e(s,{...t,code:'Pipeline.from_configs(\\n\\xa0\\xa0\\xa0context=ProcessorConfig(\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0verbose=True,\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0tqdm=True,\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0num_processes=20\\n\\xa0\\xa0\\xa0),\\n\\xa0\\xa0\\xa0indexer_config=LocalIndexerConfig(input_path=os.getenv(\"BOOKS_PATH\"),\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0recursive=False),\\n\\xa0\\xa0\\xa0downloader_config=LocalDownloaderConfig(),\\n\\xa0\\xa0\\xa0source_connection_config=LocalConnectionConfig(),\\n\\xa0\\xa0\\xa0partitioner_config=PartitionerConfig(\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0partition_by_api=True,\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0api_key=os.getenv(\"UNSTRUCTURED_API_KEY\"),\\n\\xa0\\xa0\\xa0\\xa0\\xa
0\\xa0\\xa0partition_endpoint=os.getenv(\"UNSTRUCTURED_URL\"),\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0strategy=\"fast\"\\n\\xa0\\xa0\\xa0),\\n\\xa0\\xa0\\xa0chunker_config=ChunkerConfig(\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0chunking_strategy=\"by_title\",\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0chunk_max_characters=512,\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0chunk_multipage_sections=True,\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0chunk_combine_text_under_n_chars=250,\\n\\xa0\\xa0\\xa0),\\n\\xa0\\xa0\\xa0embedder_config=EmbedderConfig(\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0embedding_provider=\"langchain-huggingface\",\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0embedding_model_name=os.getenv(\"EMBEDDING_MODEL\"),\\n\\xa0\\xa0\\xa0),\\n\\xa0\\xa0\\xa0destination_connection_config=MongoDBConnectionConfig(\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0access_config=MongoDBAccessConfig(uri=os.getenv(\"MONGODB_URI\")),\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0collection=\"unstructured-demo\",\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0database=\"books\",\\n\\xa0\\xa0\\xa0),\\n\\xa0\\xa0\\xa0stager_config=MongoDBUploadStagerConfig(),\\n\\xa0\\xa0\\xa0uploader_config=MongoDBUploaderConfig(batch_size=10)\\n).run()',language:\"JSX\"})})}),/*#__PURE__*/t(\"p\",{children:[\"Let\u2019s unpack what\u2019s going on here. 
Unstructured supports ingesting data from 20+ locations via easy-to-use \",/*#__PURE__*/e(a,{href:\"https://docs.unstructured.io/api-reference/ingest/source-connectors/overview\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"source connectors\"})}),\", and uploading processed data into 20+ \",/*#__PURE__*/e(a,{href:\"https://docs.unstructured.io/api-reference/ingest/destination-connector/overview\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"destination connectors\"})}),\".\\xa0\\xa0\"]}),/*#__PURE__*/e(\"p\",{children:\"The ETL pipeline above is constructed from multiple configs that define different aspects of its behavior:\\xa0\"}),/*#__PURE__*/t(\"ul\",{children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"code\",{children:\"ProcessorConfig\"}),\": defines the general parameters of the pipeline\u2019s behavior - logging, parallelism, reprocessing, etc.\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"code\",{children:\"LocalIndexerConfig\"}),\", \",/*#__PURE__*/e(\"code\",{children:\"LocalDownloaderConfig\"}),\", and \",/*#__PURE__*/e(\"code\",{children:\"LocalConnectionConfig\"}),\" are the configs for the Local source connector. Our books are stored in a local directory, so we are using the Local source connector to ingest them. 
The only mandatory parameter here is the \",/*#__PURE__*/e(\"code\",{children:\"input_path\"}),\" that points to where the books are stored.\\xa0\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"code\",{children:\"PartitionerConfig\"}),\": Once the books are downloaded from their original source, the first thing Unstructured will do is partition the documents into standardized JSON containing \",/*#__PURE__*/e(a,{href:\"https://docs.unstructured.io/api-reference/api-services/document-elements\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"document elements and metadata\"})}),\". We\u2019re using Unstructured Serverless API here, but if you set the \",/*#__PURE__*/e(\"code\",{children:\"partition_by_api\"}),\" parameter to \",/*#__PURE__*/e(\"code\",{children:\"False\"}),\", all processing will happen locally on your machine. The \",/*#__PURE__*/e(\"code\",{children:\"fast\"}),\" strategy here lets Unstructured know that we don\u2019t need complex OCR and document understanding models to extract content from these files. Learn more about \",/*#__PURE__*/e(a,{href:\"https://docs.unstructured.io/api-reference/api-services/partitioning\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"partitioning strategies here\"})}),\".\\xa0\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"code\",{children:\"ChunkerConfig\"}),\": Once all of the books are partitioned, the next step is to chunk them. 
The parameters in this config control the chunking behavior. Here, we want the chunk size to be under 512 characters, but not smaller than 250 characters, if possible. Learn about chunking best practices in our \",/*#__PURE__*/e(a,{href:\"https://unstructured.io/blog/chunking-for-rag-best-practices\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"recent blog post\"})}),\".\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"code\",{children:\"EmbedderConfig\"}),\": The final processing step is to embed chunks with an embedding model. Unstructured supports multiple popular model providers - specify your favorite provider and model in this config.\\xa0\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"code\",{children:\"MongoDBConnectionConfig\"}),\", \",/*#__PURE__*/e(\"code\",{children:\"MongoDBUploadStagerConfig\"}),\", \",/*#__PURE__*/e(\"code\",{children:\"MongoDBUploaderConfig\"}),\": The three final configs define how you will authenticate yourself with MongoDB and upload the results.\\xa0\"]})})]}),/*#__PURE__*/e(\"p\",{children:\"Note that these configs don\u2019t have to be in this exact order, but it helps to organize them this way to understand the flow of the data.\"}),/*#__PURE__*/t(\"p\",{children:[\"Once the data is preprocessed and loaded into a MongoDB Atlas database, navigate to your MongoDB account, and create a vector search index from the table. 
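The shape of such an index definition can be sketched as follows. This is an illustrative assumption, not the tutorial's exact file: the path matches the embeddings key the pipeline writes to, while numDimensions is a placeholder of 384 (typical for MiniLM-class models) that must be adjusted to the output size of whatever model EMBEDDING_MODEL points to.

```python
import json

# Illustrative Atlas Vector Search index definition (a sketch, not the
# tutorial's exact file). Assumptions: embeddings are stored under the
# "embeddings" key, and the embedding model outputs 384-dimensional
# vectors; adjust numDimensions to match your EMBEDDING_MODEL.
index_definition = {
    "fields": [
        {
            "type": "vector",
            "path": "embeddings",
            "numDimensions": 384,
            "similarity": "cosine",
        }
    ]
}

# Paste the resulting JSON into the Atlas JSON Editor when creating the index.
print(json.dumps(index_definition, indent=2))
```

Cosine similarity is a common default for text embeddings; Atlas also supports euclidean and dotProduct.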
Find the detailed \",/*#__PURE__*/e(a,{href:\"https://www.mongodb.com/developer/products/mongodb/langchain-vector-search/#creating-our-search-index\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"steps here\"})}),\". Use the JSON Editor to define your data\u2019s fields - you can find an example in the tutorial\u2019s \",/*#__PURE__*/e(a,{href:\"https://github.com/MKhalusova/ai_librarian/blob/main/mappings.json\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"GitHub repo\"})}),\".\"]}),/*#__PURE__*/t(\"p\",{children:[\"Note: \",/*#__PURE__*/e(\"em\",{children:\"Don\u2019t forget to whitelist the IP for the Python host (your IP) in Network Access, otherwise you will get connection errors when attempting to retrieve data.\"})]}),/*#__PURE__*/e(\"p\",{children:\"At this point the data is preprocessed and loaded, we can set up the retriever for the chatbot.\\xa0\"}),/*#__PURE__*/e(\"h2\",{children:\"MongoDB Retriever\"}),/*#__PURE__*/e(\"p\",{children:\"LangChain integration with MongoDB supports creating retrievers from an existing Vector Search Index, here\u2019s how we can create a LangChain retriever with the index we\u2019ve just created:\"}),/*#__PURE__*/e(\"div\",{className:\"framer-text-module\",style:{height:\"auto\",width:\"100%\"},children:/*#__PURE__*/e(n,{componentIdentifier:\"module:pVk4QsoHxASnVtUBp6jr/QVzZltTawVJTjmjAWG3C/CodeBlock.js:default\",children:t=>/*#__PURE__*/e(s,{...t,code:'def get_retriever():\\n\\xa0\\xa0\\xa0vectorstore = 
MongoDBAtlasVectorSearch.from_connection_string(\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0connection_string=os.getenv(\"MONGODB_URI\"),\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0namespace=\"books.unstructured-demo\",\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0embedding=HuggingFaceEmbeddings(model_name=os.getenv(\"EMBEDDING_MODEL\")),\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0text_key=\"text\",\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0embedding_key=\"embeddings\",\\n\\xa0\\xa0\\xa0)\\n\\xa0\\xa0\\xa0return vectorstore.as_retriever(search_type=\"similarity\", search_kwargs={\"k\": 6})',language:\"JSX\"})})}),/*#__PURE__*/e(\"p\",{children:\"Make sure to use the exact same embedding model here as the one that you used when preprocessing the documents.\\xa0\"}),/*#__PURE__*/e(\"h2\",{children:\"LangChain-powered RAG\"}),/*#__PURE__*/e(\"p\",{children:\"Finally, let\u2019s build a RAG chain using LangChain to orchestrate the whole process.\"}),/*#__PURE__*/t(\"ul\",{children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/t(\"p\",{children:[\"We\u2019ll use the latest Llama3.1:8b model by Meta AI, available through \",/*#__PURE__*/e(a,{href:\"https://ollama.com/library/llama3.1\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"Ollama\"})}),\".\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"We\u2019ll define a system prompt that will instruct the model to behave as a knowledgeable AI Librarian and use provided context to generate answers to user questions.\\xa0\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 
0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"We\u2019ll enable memory to have multi-turn conversations and a prompt to rephrase a user\u2019s question to contextualize it with the conversation history.\\xa0\"})})]}),/*#__PURE__*/e(\"div\",{className:\"framer-text-module\",style:{height:\"auto\",width:\"100%\"},children:/*#__PURE__*/e(n,{componentIdentifier:\"module:pVk4QsoHxASnVtUBp6jr/QVzZltTawVJTjmjAWG3C/CodeBlock.js:default\",children:t=>/*#__PURE__*/e(s,{...t,code:'def get_chain():\\n\\xa0\\xa0\\xa0retriever = get_retriever()\\n\\xa0\\xa0\\xa0local_model = \"llama3.1:8b\"\\n\\xa0\\xa0\\xa0model = ChatOllama(model=local_model,\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0num_predict=500,\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0stop=[\"<|start_header_id|>\", \"<|end_header_id|>\", \"<|eot_id|>\", \"<|reserved_special_token\"])\\n\\xa0\\xa0\\xa0system_prompt = \"\"\"\\n\\xa0\\xa0\\xa0<|start_header_id|>user<|end_header_id|>\\n\\xa0\\xa0\\xa0You are a helpful and knowledgeable AI Librarian. Use the following context and the user\\'s chat history to\\n\\xa0\\xa0\\xa0help the user. 
If you don\\'t know the answer, just say that you don\\'t know.\\xa0\\n\\xa0\\xa0\\xa0Context: {context}\\n\\xa0\\xa0\\xa0Question: {question}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\\n\\xa0\\xa0\\xa0\"\"\"\\n\\xa0\\xa0\\xa0rag_prompt = ChatPromptTemplate.from_messages(\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0[\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0(\"system\", system_prompt),\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0(\"human\", \"{question}\"),\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0]\\n\\xa0\\xa0\\xa0)\\n\\xa0\\xa0\\xa0rag_chain = (\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0{\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\"context\": RunnableLambda(get_question) | retriever | format_docs,\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\"question\": RunnablePassthrough()\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0}\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0| rag_prompt\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0| model\\n\\xa0\\xa0\\xa0)\\n\\xa0\\xa0\\xa0contextualize_q_system_prompt = \"\"\"Given a chat history and the latest user question \\\\\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0which might reference context in the chat history, formulate a standalone question \\\\\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0which can be understood without the chat history. 
Do NOT answer the question, \\\\\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0just reformulate it if needed and otherwise return it as is.\"\"\"\\n\\xa0\\xa0\\xa0contextualize_q_prompt = ChatPromptTemplate.from_messages(\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0[\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0(\"system\", contextualize_q_system_prompt),\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0MessagesPlaceholder(variable_name=\"chat_history\"),\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0(\"human\", \"{question}\"),\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0]\\n\\xa0\\xa0\\xa0)\\n\\xa0\\xa0\\xa0runnable = contextualize_q_prompt | model | rag_chain\\n\\xa0\\xa0\\xa0chat_memory = ChatMessageHistory()\\n\\xa0\\xa0\\xa0def get_session_history(session_id: str) -> BaseChatMessageHistory:\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0return chat_memory\\n\\xa0\\xa0\\xa0with_message_history = RunnableWithMessageHistory(\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0runnable,\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0get_session_history,\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0input_messages_key=\"question\",\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0history_messages_key=\"chat_history\",\\n\\xa0\\xa0\\xa0)\\n\\xa0\\xa0\\xa0return with_message_history',language:\"JSX\"})})}),/*#__PURE__*/e(\"h2\",{children:\"Streamlit UI\"}),/*#__PURE__*/t(\"p\",{children:[\"The final step is to give the app a nice looking UI which we will add using \",/*#__PURE__*/e(a,{href:\"https://streamlit.io\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!0,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"Streamlit\"})}),\".\\xa0\"]}),/*#__PURE__*/e(\"p\",{children:\"First, let\u2019s give the app a 
title:\"}),/*#__PURE__*/e(\"div\",{className:\"framer-text-module\",style:{height:\"auto\",width:\"100%\"},children:/*#__PURE__*/e(n,{componentIdentifier:\"module:pVk4QsoHxASnVtUBp6jr/QVzZltTawVJTjmjAWG3C/CodeBlock.js:default\",children:t=>/*#__PURE__*/e(s,{...t,code:'st.set_page_config(page_title=\"Personal Digital Library Assistant\")\\nst.title(\"Personal Digital Library Assistant\")',language:\"JSX\"})})}),/*#__PURE__*/e(\"p\",{children:\"Next, let\u2019s define a function that will display the app\u2019s UI:\"}),/*#__PURE__*/t(\"ul\",{children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Initialize session state values and initial prompt to the user.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Display existing chat messages.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"If a message has been entered, append the message to the messages list in the session state and display the message in a chat bubble.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Generate a response if the last message is from the user.\"})})]}),/*#__PURE__*/e(\"div\",{className:\"framer-text-module\",style:{height:\"auto\",width:\"100%\"},children:/*#__PURE__*/e(n,{componentIdentifier:\"module:pVk4QsoHxASnVtUBp6jr/QVzZltTawVJTjmjAWG3C/CodeBlock.js:default\",children:t=>/*#__PURE__*/e(s,{...t,code:'def show_ui(qa, 
prompt_to_user=\"How may I help you?\"):\\n\\xa0\\xa0\\xa0if \"messages\" not in st.session_state.keys():\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0st.session_state.messages = [{\"role\": \"assistant\", \"content\": prompt_to_user}]\\n\\xa0\\xa0\\xa0# Display chat messages\\n\\xa0\\xa0\\xa0for message in st.session_state.messages:\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0with st.chat_message(message[\"role\"]):\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0st.write(message[\"content\"])\\n\\xa0\\xa0\\xa0# User-provided prompt\\n\\xa0\\xa0\\xa0if prompt := st.chat_input():\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0st.session_state.messages.append({\"role\": \"user\", \"content\": prompt})\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0with st.chat_message(\"user\"):\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0st.write(prompt)\\n\\xa0\\xa0\\xa0# Generate a new response if last message is not from assistant\\n\\xa0\\xa0\\xa0if st.session_state.messages[-1][\"role\"] != \"assistant\":\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0with st.chat_message(\"assistant\"):\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0with st.spinner(\"Flipping pages...\"):\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0response = ask_question(qa, prompt)\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0st.markdown(response.content)\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0message = {\"role\": \"assistant\", \"content\": response.content}\\n\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0st.session_state.messages.append(message)',language:\"JSX\"})})}),/*#__PURE__*/e(\"h2\",{children:\"Running the app\"}),/*#__PURE__*/t(\"p\",{children:[\"Finally, to run the app, all we need to do is to navigate to the location of the \",/*#__PURE__*/e(\"code\",{children:\"streamlit-app.py\"}),\" file in your terminal and call \",/*#__PURE__*/e(\"code\",{children:/*#__PURE__*/e(\"em\",{children:\"streamlit run 
streamlit-app.py\"})})]}),/*#__PURE__*/t(\"p\",{children:[\"If you\u2019d like to try this example on your own machine, clone this \",/*#__PURE__*/e(a,{href:\"https://github.com/MKhalusova/ai_librarian\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"GitHub repo\"})}),\", create \",/*#__PURE__*/e(\"code\",{children:\".env\"}),\" in the project\u2019s root and add your secrets to it.\\xa0\"]}),/*#__PURE__*/e(\"img\",{alt:\"\",className:\"framer-image\",height:\"732\",src:\"https://framerusercontent.com/images/6kCiSPuYwQqf7DQnTvEwUzKBF1Y.png\",srcSet:\"https://framerusercontent.com/images/6kCiSPuYwQqf7DQnTvEwUzKBF1Y.png?scale-down-to=512 512w,https://framerusercontent.com/images/6kCiSPuYwQqf7DQnTvEwUzKBF1Y.png?scale-down-to=1024 1024w,https://framerusercontent.com/images/6kCiSPuYwQqf7DQnTvEwUzKBF1Y.png 1758w\",style:{aspectRatio:\"1758 / 1464\"},width:\"879\"}),/*#__PURE__*/e(\"p\",{children:\"You can easily modify this app to use any of the other 20+ file types supported by Unstructured including PDFs, markdown, HTML, and more! \"}),/*#__PURE__*/t(\"p\",{children:[\"Get your \",/*#__PURE__*/e(a,{href:\"https://unstructured.io/api-key-hosted\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"Serverless API\"})}),\" key today and start building! We look forward to see what you build with Unstructured Serverless. If you have any questions, we\u2019d love to hear from you! 
Please feel free to reach out to our team at hello@unstructured.io or join our\",/*#__PURE__*/e(a,{href:\"https://short.unstructured.io/pzw05l7\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\" Slack community\"})}),\".\"]})]});export const richText3=/*#__PURE__*/t(o.Fragment,{children:[/*#__PURE__*/e(\"h2\",{children:\"Why is Chunking Necessary?\"}),/*#__PURE__*/e(\"p\",{children:\"Chunking is an essential preprocessing step when preparing data for RAG for a number of reasons.\\xa0\"}),/*#__PURE__*/e(\"h3\",{children:\"Context Window Limit\"}),/*#__PURE__*/e(\"p\",{children:\"Let\u2019s start with the basics. The retrieved chunks will be fed directly into the prompt as context for your LLM to generate a response. This means that at the very minimum, the total length of all of the retrieved chunks combined cannot exceed the context window of the LLM. Though many LLMs today have generous context windows, you still may not want to fill up the context window to the brim, as these LLMs will struggle with the \u201Cneedle in the haystack\u201D problem. You may also want to utilize that large context window in additional ways, such as providing thorough instructions, a persona description, or some few-shot examples.\"}),/*#__PURE__*/e(\"p\",{children:\"Additionally, if you intend to employ similarity search and embed your documents, you have to consider that embedding models also have a limited context window. These models cannot embed text that surpasses the maximum length of their context window. This limit varies depending on the specific model, but you can easily find this information in the model\u2019s description, e.g. in the model card on the Hugging Face Hub. Once you know which model you will be using to generate the embeddings, you have the hard maximum value for the chunk length (expressed in tokens, not characters or words). 
Embedding models typically max out at around 8K tokens or less in context window size, which equates to roughly 6200 words in the English language. To put this in perspective, the entire Lord of the Rings series, including \u201CThe Hobbit\u201D, clocks in at approximately 576,459 words, so if you wanted to utilize this corpus for RAG with similarity search, you'd need to divide it into at least 93 chunks.\"}),/*#__PURE__*/e(\"h3\",{children:\"The Effect of Chunk Size on Retrieval Precision\"}),/*#__PURE__*/e(\"p\",{children:\"While the embedding model imposes a hard maximum limit on the number of tokens it can embed, that doesn't mean your chunks need to reach that length. It simply means they can't exceed it.\\xa0 In fact, utilizing the maximum length for each chunk, such as 6200 words (8K tokens), may be excessive in many scenarios. There are several compelling reasons to opt for smaller chunks.\"}),/*#__PURE__*/e(\"p\",{children:\"Let\u2019s take a step back for a minute, and recall what happens when we embed a piece of text to get the embedding vector. Most embedding models are encoder-type transformer models that take as input text up to the maximum length, and give you back a single vector representation of fixed dimension, for example 768. Regardless of whether you give a model a sentence of 10 words or a paragraph of 1000 words, both resulting embedding vectors will have the same dimension of 768. The way this works is that the model first converts the text into tokens; for each of these tokens, it has learned a vector representation during pre-training. 
It then applies a pooling step to aggregate individual token representations into a single vector representation.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"Common types of pooling are:\"}),/*#__PURE__*/t(\"ul\",{children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"CLS pooling: vector representation of the special [CLS] token becomes the representation for the whole sequence\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Mean pooling: the average of token vector representations is returned as the representation for the whole sequence\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Max pooling: for each dimension, the maximum value across all token representations becomes the representation for the whole sequence\"})})]}),/*#__PURE__*/e(\"p\",{children:\"The goal is to compress the granular token-level representations into a single fixed-length representation that encapsulates the meaning of the entire input sequence. This compression is inherently lossy. With larger chunks, the representation may become overly coarse, potentially obscuring important details. To ensure precise retrieval, it's crucial for chunks to possess meaningful and nuanced representations of the text.\"}),/*#__PURE__*/e(\"p\",{children:\"Now, consider another potential issue. A large chunk may encompass multiple topics, some of which may be relevant to a user query, while others are not. 
In such a case, the representation of each topic within a single vector can become diluted, which again will affect the retrieval precision.\"}),/*#__PURE__*/e(\"p\",{children:\"On the other hand, smaller chunks that maintain a focused context allow for more precise matching and retrieval of relevant information. By breaking down documents into meaningful segments, the retriever can more accurately locate specific passages or facts, which ultimately improves RAG performance. So, how small can chunks be while still retaining their contextual integrity? This will depend on the nature of your documents, and may require some experimentation. Typically, a chunk size of about 250 tokens, equivalent to approximately 1000 characters, is a sensible starting point.\"}),/*#__PURE__*/e(\"p\",{children:\"Now that we've covered the reasons for chunking, let's look at some practical aspects of it.\"}),/*#__PURE__*/e(\"h2\",{children:\"Common Approaches to Chunking\"}),/*#__PURE__*/e(\"h3\",{children:\"Character splitting\"}),/*#__PURE__*/e(\"p\",{children:\"The most basic way to split a large document into smaller chunks is to divide the text into N-character sized chunks. Often in this case, you would also specify a certain number of characters that should overlap between consecutive chunks. This somewhat reduces the likelihood of sentences or ideas being abruptly cut off at the boundary between two adjacent chunks. However, as you can imagine, even with overlap, a fixed character count per chunk, coupled with a fixed overlap window, will inevitably lead to disruptions in the flow of information, mixing of disparate topics, and even sentences being split in the middle of a word. 
The character splitting approach has absolutely no regard for document structure.\\xa0\"}),/*#__PURE__*/e(\"h3\",{children:\"Sentence-level chunking or recursive chunking\"}),/*#__PURE__*/e(\"p\",{children:\"Character splitting is a simplistic approach that doesn\u2019t take into account the structure of a document at all. By relying solely on a fixed character count, this method often results in sentences being split mid-way or even mid-word, which is not great.\"}),/*#__PURE__*/e(\"p\",{children:\"One way to address this problem is to use a recursive chunking method that helps to preserve\\xa0 individual sentences. With this method you can specify an ordered list of separators to guide the splitting process. For example, here are some commonly used separators:\"}),/*#__PURE__*/t(\"ul\",{children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:'\"\\\\n\\\\n\" - Double new line, commonly indicating paragraph breaks'})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:'\"\\\\n\" - Single new line'})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:'\".\" - Period'})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:'\" \" - Space'})})]}),/*#__PURE__*/e(\"p\",{children:'If we apply the separators listed above in their specified order, the process will go like this. 
First, recursive chunking will break down the document at every occurrence of a double new line (\"\\\\n\\\\n\"). Then, if these resulting segments still exceed the desired chunk size, it will further break them down at new lines\\xa0 (\"\\\\n\" ), and so on.\\xa0'}),/*#__PURE__*/e(\"p\",{children:\"While this method significantly reduces the likelihood of sentences being cut off mid-word, it still falls short of capturing the complex document structure. Documents often contain a variety of elements, such as paragraphs, section headers, footers, lists, tables, and more, all of which contribute to their overall organization. However, the recursive chunking approach outlined above primarily considers paragraphs and sentences, neglecting other structural nuances.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"Moreover, documents are stored in many native formats, so you would have to come up with different sets of separators for each different document type. The list above may work fine for plain text, but you\u2019ll need a more nuanced and tailored list of separators for markdown, another list if you have HTML or XML document, and so on. Extending this approach to handle image-based documents like PDFs and PowerPoint presentations introduces further complexities. If your use case involves a diverse set of unstructured documents, uniformly applying recursive chunking quickly becomes a non-trivial task.\"}),/*#__PURE__*/e(\"h3\",{children:\"Smart chunking with Unstructured\"}),/*#__PURE__*/e(\"p\",{children:\"Unstructured offers several smart chunking strategies, all of which have a significant advantage over the previously mentioned approaches. 
Instead of dealing with a wall of plain text with a random set of potential separators, once you partition the documents of any kind with Unstructured, chunking is applied to a set of individual document elements that represent logical units of the original document and reflect its structure.\\xa0\"}),/*#__PURE__*/e(\"img\",{alt:\"\",className:\"framer-image\",height:\"433\",src:\"https://framerusercontent.com/images/nQOz8xvZNGg2TEbB2q51eAEzjEg.png\",srcSet:\"https://framerusercontent.com/images/nQOz8xvZNGg2TEbB2q51eAEzjEg.png?scale-down-to=512 512w,https://framerusercontent.com/images/nQOz8xvZNGg2TEbB2q51eAEzjEg.png?scale-down-to=1024 1024w,https://framerusercontent.com/images/nQOz8xvZNGg2TEbB2q51eAEzjEg.png 1700w\",style:{aspectRatio:\"1700 / 866\"},width:\"850\"}),/*#__PURE__*/t(\"p\",{children:[\"This means that you don\u2019t have to figure out how to tell apart individual sections of a document. Unstructured has already done the heavy lifting, presenting you with distinct \",/*#__PURE__*/e(a,{href:\"https://docs.unstructured.io/api-reference/api-services/document-elements\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"document elements\"})}),\" that encapsulate paragraphs, tables, images, code snippets, and any other meaningful text units within the document. Coming out of the partitioning step, the document is already divided into smaller segments. Does this mean that the document is already chunked? Not quite, but you're halfway there!\"]}),/*#__PURE__*/e(\"p\",{children:\"Some of the document elements derived from partitioning may still exceed the context window of your embedding model or your desired chunk size. These will require further splitting. Conversely, some document elements may be too small to contain enough context. 
For instance, a list is partitioned into individual `ListItem` elements, but you may opt to combine these elements into a single chunk, provided they still fit within your preferred chunk size.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"Starting with a document that has been systematically partitioned into discrete elements, the smart chunking strategies Unstructured offers allow you to:\\xa0\"}),/*#__PURE__*/t(\"ul\",{children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Ensure that information flow remains uninterrupted, preventing mid-word splits that simple character chunking suffers from.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Control the maximum and minimum sizes of the chunks.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Guarantee that distinct topics or ideas, such as separate sections with different subjects, are not merged.\\xa0\"})})]}),/*#__PURE__*/e(\"p\",{children:\"Smart chunking is a step beyond recursive chunking that actually takes into account the semantic structure and content of the documents.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"Smart chunking offers four strategies which differ in how they guarantee the purity of content within chunks:\"}),/*#__PURE__*/t(\"ul\",{children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"\u201CBasic\u201D chunking strategy: This method 
allows you to combine sequential elements to maximally fill each chunk while respecting the maximum chunk size limit. If a single isolated element exceeds the hard-max, it will be divided into two or more chunks.\\xa0\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"\u201CBy title\u201D chunking strategy: This strategy leverages the document element types identified during partitioning to understand the document structure, and preserves section boundaries. This means that a single chunk will never contain text that occurred in two different sections, ensuring that topics remain self-contained for enhanced retrieval precision.\\xa0\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/t(\"p\",{children:[\"\u201CBy page\u201D chunking strategy (available only in \",/*#__PURE__*/e(a,{href:\"https://unstructured.io/api-key-hosted\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"Serverless API\"})}),\"): Tailored for documents where each page conveys unique information, this strategy ensures that content from different pages is never intermixed within the same chunk. 
When a new page is detected, the existing chunk is completed and a new one is started, even if the next element would fit in the prior chunk.\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/t(\"p\",{children:[\"\u201CBy similarity\u201D chunking strategy (available only in \",/*#__PURE__*/e(a,{href:\"https://unstructured.io/api-key-hosted\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"Serverless API\"})}),\"): When the document structure fails to provide clear topic boundaries, you can use the \u201Cby similarity\u201D strategy. This strategy employs the `sentence-transformers/multi-qa-mpnet-base-dot-v1` embedding model to identify topically similar sequential elements and combine them into chunks.\\xa0\"]})})]}),/*#__PURE__*/e(\"p\",{children:\"An added advantage of Unstructured\u2019s smart chunking strategies is their universal applicability across diverse document types. You won\u2019t need to hardcode and maintain the separator lists for each document as you would in case of recursive chunking. This allows for easy experimentation with chunk size and chunking strategy, enabling you to pinpoint the optimal approach for any given use case.\"}),/*#__PURE__*/e(\"h2\",{children:\"Conclusion\"}),/*#__PURE__*/e(\"p\",{children:\"Chunking is one of the essential preprocessing steps in any RAG system. The choices you make when you set it up, will influence the retrieval quality, and as a consequence, the overall performance of the system. Here are some considerations to keep in mind when designing the chunking step:\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"- Experiment with different chunk sizes: While large chunks may contain more context, they also result in coarse representation, negatively affecting the retrieval precision. 
The optimal chunk size depends on the nature of your documents, but aim to optimize for smaller chunks without losing important context.\\xa0\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"- Utilize smart chunking strategies: Opt for chunking strategies that allow you to separate text on semantically meaningful boundaries to avoid interrupting the information flow or mixing content.\"}),/*#__PURE__*/e(\"p\",{children:\"- Evaluate the impact of your chunking choices on the overall RAG performance: Set up an evaluation set for your specific use case, and track how your experiments with chunk sizes and chunking strategies impact the overall performance. Unstructured streamlines chunking experimentation by allowing you to simply tweak a parameter or two, no matter the documents\u2019 type.\\xa0\"}),/*#__PURE__*/t(\"p\",{children:[\"Get your \",/*#__PURE__*/e(a,{href:\"https://unstructured.io/api-key-hosted\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"Serverless API\"})}),\" key today and start your experiments! We cannot wait to see what you build with Unstructured Serverless. If you have any questions, we\u2019d love to hear from you! Please feel free to reach out to our team at hello@unstructured.io or join our\",/*#__PURE__*/e(a,{href:\"https://short.unstructured.io/pzw05l7\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\" Slack community\"})}),\".\"]})]});export const richText4=/*#__PURE__*/t(o.Fragment,{children:[/*#__PURE__*/e(\"p\",{children:\"Unstructured, a leading provider of data processing and transformation solutions, announces that GovSignals, a prominent provider of AI-driven solutions for government contracting, has integrated Unstructured\u2019s advanced data processing capabilities into their platform. 
This integration aims to transform the proposal development lifecycle for government contracting companies by leveraging cutting-edge AI technologies, allowing GovSignals' clients to leverage the latest Generative AI in tandem with their own data and with government solicitations.\"}),/*#__PURE__*/e(\"p\",{children:\"GovSignals' platform, based on Retrieval Augmented Generation (RAG) pipelines, is designed to streamline the identification of bidding opportunities and enhance proposal management. To deliver effective RAG pipelines, data needs to be preprocessed and indexed into a knowledge base. By incorporating Unstructured's technology, GovSignals can seamlessly connect to various data sources, such as S3, Azure Blob, SharePoint, and Google Drive, and support a wide range of file types, including DOCX, PDFs, and PowerPoint presentations. This backend integration means that all relevant data is pushed through Unstructured\u2019s transformation and enrichment pipelines to deliver pristine, chunked, vectorized JSON to the vector database within GovSignals, ensuring a strong data layer and optimizing RAG performance.\"}),/*#__PURE__*/e(\"p\",{children:\"GovSignals' platform features two primary modules: Signals and Proposals. The Signals module leverages an AI recommendation engine to quickly identify and prioritize new procurement opportunities, significantly reducing the time spent searching across multiple bidding sites like SAM.gov and GSA. The Proposals module reduces content creation time and enhances compliance by using a dynamic compliance matrix that links requirements to proposal sections. 
This module automatically initiates AI-driven content creation based on explicit requirements, with the AI proposal assistant generating accurate and sourced content, aiding in compliance and accelerating timelines while improving quality.\"}),/*#__PURE__*/e(\"p\",{children:\"The collaboration between Unstructured and GovSignals represents a significant advancement in the field of government contracting, offering companies a powerful tool to navigate the complex proposal development lifecycle with greater efficiency and precision.\"}),/*#__PURE__*/t(\"p\",{children:[\"For more information about GovSignals, visit \",/*#__PURE__*/e(a,{href:\"govsignals.ai\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!0,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"govsignals.ai\"})}),\". \"]}),/*#__PURE__*/e(\"h3\",{children:/*#__PURE__*/e(\"strong\",{children:\"About Unstructured\"})}),/*#__PURE__*/e(\"p\",{children:\"Unstructured is a leading provider of data processing and transformation solutions, empowering organizations to harness their data with foundation models. Unstructured's technology supports a wide range of file types and data sources, enabling seamless integration and efficient data management for various applications.\"}),/*#__PURE__*/e(\"h3\",{children:/*#__PURE__*/e(\"strong\",{children:\"About GovSignals\"})}),/*#__PURE__*/e(\"p\",{children:\"GovSignals is an AI-driven platform designed to streamline the proposal development lifecycle for government contractors. 
By integrating with existing IT architectures and utilizing advanced AI technologies, GovSignals helps companies accelerate their proposal creation processes and bid identification capabilities, enhancing compliance and overall efficiency.\"})]});export const richText5=/*#__PURE__*/t(o.Fragment,{children:[/*#__PURE__*/e(\"p\",{children:\"Today we\u2019re thrilled to announce Unstructured Serverless\u2014the simplest, fastest, and most cost-effective way to render enterprise data AI-ready. Along with our new serverless infrastructure, we are deploying a host of enhancements including: \"}),/*#__PURE__*/t(\"ul\",{children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"New Signup Flow\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"New Admin Dashboard\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"New Per Page Pricing Model to improve predictability and reduce cost\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"5x improvement in processing throughput for PDFs\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"70% improvement in table classification and structure detection\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"11% improvement in text accuracy\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"20% reduction in word error rate\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Multi-Region cloud hosting to improve resiliency\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"SOC 2 Type 2 compliance \"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Entirely new documentation 
\"})})]}),/*#__PURE__*/t(\"p\",{children:[\"All available now\u2014try it \",/*#__PURE__*/e(a,{href:\"https://unstructured.io/api-key-hosted\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!0,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"here\"})}),\"!\"]}),/*#__PURE__*/e(\"h3\",{children:\"Good Data = Good AI\"}),/*#__PURE__*/e(\"p\",{children:\"It\u2019s becoming increasingly clear that harnessing proprietary data through production AI workflows is going to be critical to companies\u2019 long-term competitive advantage. Over the past two years we\u2019ve worked alongside leading enterprises and our developer community to deliver GenAI-ready data to power a wide range of AI applications spanning enterprise search to customer service agents.  \"}),/*#__PURE__*/e(\"p\",{children:\"As organizations move beyond prototypes and into production, they require performant, resilient, scalable, and secure tooling they can trust with their most important data workloads. 
It\u2019s with this in mind that we\u2019ve worked over the past several months to dramatically enhance our developer experience and the performance of our commercial API with a host of enterprise-grade features including SOC 2 Type 2 certification.\"}),/*#__PURE__*/e(\"p\",{children:\"Throughout this journey we\u2019ve been consistently reminded that what matters most to the success of production GenAI projects is good, clean data, and we\u2019re committed to making it effortless to achieve this reality.\"}),/*#__PURE__*/e(\"h3\",{children:\"Improved Transformation Performance\"}),/*#__PURE__*/e(\"p\",{children:\"Unstructured Serverless API leverages our next-generation document transformation models that deliver superior performance over our open source models along several key dimensions: \"}),/*#__PURE__*/t(\"ul\",{children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"5x improvement in processing throughput for PDFs\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"70% improvement in table classification and structure detection\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"11% improvement in text accuracy\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"20% reduction in word error rate\"})})]}),/*#__PURE__*/e(\"p\",{children:\"Improved transformation and better document element classification help deliver better LLM-enabled workflows across three critical areas: \"}),/*#__PURE__*/t(\"ol\",{children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Developers can clean data by removing unwanted document elements (i.e., removing headers and footers or images).\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Developers can more effectively utilize advanced chunking strategies. 
For example, developers can chunk by document element to identify document sections (i.e., chunk by title).\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Developers can use metadata filtering to enhance retrieval by surfacing the most relevant data within a file at query time.\"})})]}),/*#__PURE__*/e(\"p\",{children:\"In addition to layering in AI-native processing capabilities, we\u2019ve continued to advance our transformation pipelines by implementing traditional pre-processing techniques including hand-crafted parsers, regular expressions, Python libraries, and more. This has resulted in the fastest, most performant, and most cost-effective solution on the market.\"}),/*#__PURE__*/e(\"h3\",{children:\"Improved Pricing\"}),/*#__PURE__*/e(\"p\",{children:\"With Unstructured Serverless API, we are launching a per-page pricing model. In our initial launch in Q1 of this year, we charged by compute hour. We quickly learned that this wasn\u2019t the best approach for predictability and transparency. As a result, we\u2019ve simplified our pricing to: \"}),/*#__PURE__*/t(\"ol\",{children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Fast Pipeline: $1 per 1,000 pages\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Hi-Res Pipeline: $10 per 1,000 pages \"})})]}),/*#__PURE__*/t(\"ul\",{children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"With the previous pricing model, processing 1,000 PDF pages cost $12.93.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"With the new pricing model, processing 1,000 PDF pages now costs $10.00. \"})})]}),/*#__PURE__*/e(\"p\",{children:\"There is no longer any charge to create infrastructure. It is always available. 
\"}),/*#__PURE__*/e(\"h3\",{children:\"Improved Startup Speed and Latency\"}),/*#__PURE__*/e(\"p\",{children:\"Another advantage of our serverless infrastructure is that 99+% of requests have virtually zero startup time. Continuously online worker nodes reduce ramp-up times from thirty minutes to under three seconds. \"}),/*#__PURE__*/e(\"p\",{children:\"We have also made significant improvements to reduce latency in our preprocessing pipelines. With this release, we process documents 5x faster using techniques like document splitting, where each document is deconstructed and distributed to individual worker nodes for parallelized transformation.\"}),/*#__PURE__*/e(\"h3\",{children:\"Improved Developer Experience\"}),/*#__PURE__*/e(\"p\",{children:\"Delivering a delightful developer experience is our number one priority. With this release, we have drastically improved our onboarding flow with a refreshed signup process, a new admin panel to manage API keys and track usage, and completely new documentation.\"}),/*#__PURE__*/e(\"h3\",{children:\"Good Data = Good AI\"}),/*#__PURE__*/e(\"p\",{children:\"Since we released our commercial API, thousands of organizations have turned to Unstructured to carry their GenAI workloads. Unstructured Serverless offers better extraction performance and all of the benefits that come with SaaS software such as hosting, regular upgrades, scalability, reliability, and a managed SLA.\"}),/*#__PURE__*/t(\"p\",{children:[\"We cannot wait to see what you build with Unstructured Serverless. If you have any questions, we\u2019d love to hear from you! 
Please feel free to reach out to our team at \",/*#__PURE__*/e(a,{href:\"mailto:hello@unstructured.io\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!0,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"hello@unstructured.io\"})}),\" or join our \",/*#__PURE__*/e(a,{href:\"https://short.unstructured.io/pzw05l7\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!0,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"Slack community\"})}),\".\"]})]});export const richText6=/*#__PURE__*/t(o.Fragment,{children:[/*#__PURE__*/e(\"h1\",{children:\"Introduction\"}),/*#__PURE__*/e(\"p\",{children:\"The ability to extract valuable insights from text is paramount for businesses and researchers. This journey begins with understanding the power of embeddings, a crucial technique in Artificial Intelligence that transforms text into a format readily interpretable by machines. This blog post will equip you with a foundational understanding of text embeddings using two platforms:\"}),/*#__PURE__*/t(\"ul\",{children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(a,{href:\"http://octo.ai/\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"OctoAI\"})}),\" delivers a complete stack for app builders to run, tune, and scale their AI applications in either the cloud or on-prem. 
Their Text Generation solution hosts highly scalable and optimized LLMs and embedding models.\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(a,{href:\"http://unstructured.io/\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"Unstructured\"})}),\" is a powerful data management solution designed to handle unstructured data, which typically poses challenges for traditional data processing systems. By effectively managing and extracting value from unstructured data, Unstructured enables organizations to tap into a wealth of previously inaccessible insights, enhancing their decision-making capabilities.\"]})})]}),/*#__PURE__*/e(\"p\",{children:\"Together, these tools provide a comprehensive solution for data analysis challenges, enabling users to derive meaningful insights from vast amounts of information. We will explore how integrating OctoAI's GTE-Large embedding model with Unstructured Embedding functionality can enhance data processing and RAG performance.\"}),/*#__PURE__*/e(\"h2\",{children:\"Understanding the OctoAI Text Embedding Model\"}),/*#__PURE__*/e(\"p\",{children:\"Embeddings in Machine Learning and Natural Language Processing (NLP) refer to the representation of categorical or textual data as numerical vectors. 
These vectors capture the semantic relationships between words, phrases, or entire documents, allowing machine learning algorithms to process and understand language data more effectively.\"}),/*#__PURE__*/t(\"p\",{children:[\"OctoAI \",/*#__PURE__*/e(a,{href:\"https://octo.ai/blog/introducing-octoais-embedding-api-to-power-your-rag-needs/\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"provides\"})}),\" the \",/*#__PURE__*/e(a,{href:\"https://huggingface.co/thenlper/gte-large\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"General Text Embeddings (GTE)\"})}),\" Large Embedding model. Alibaba DAMO Academy has trained the GTE models on a large-scale corpus of relevant text pairs covering a wide range of domains and scenarios. This makes the GTE models suitable for various sophisticated NLP tasks, such as information retrieval, semantic textual similarity, language translation, summarization, and text reranking. All these require a deep understanding of language structure and meaning.\"]}),/*#__PURE__*/t(\"p\",{children:[\"GTE-Large performs very well on the \",/*#__PURE__*/e(a,{href:\"https://huggingface.co/spaces/mteb/leaderboard\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"MTEB leaderboard\"})}),\", with an average score of 63.13% (comparable to OpenAI\u2019s text-embedding-ada-002, which scores 61.0%). This embedding model caters predominantly to English text input. Input text is limited to a maximum of 512 tokens (any text longer than that will be truncated), producing an embedding vector of 1024 dimensions. 
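Downstream, these embedding vectors are compared with cosine similarity, the same metric used when querying the vector index later in this article. Here is a minimal, self-contained sketch of that comparison; the tiny hand-made vectors below are stand-ins for real GTE-Large output, which has 1024 dimensions:

```python
import math

def cosine_similarity(a, b):
    # dot(a, b) / (|a| * |b|): close to 1.0 for vectors pointing the same
    # way (similar meaning), close to 0.0 for unrelated ones.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hand-made 3-dimensional stand-ins for illustration only.
emb_cat = [0.9, 0.1, 0.0]
emb_kitten = [0.8, 0.2, 0.1]
emb_car = [0.0, 0.1, 0.9]

print(cosine_similarity(emb_cat, emb_kitten))  # high: related concepts
print(cosine_similarity(emb_cat, emb_car))     # low: unrelated concepts
```

This arithmetic is what the "cosine" metric refers to when the vector index is created in the walkthrough below.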
This makes OctoAI GTE Large Embeddings a versatile tool for various NLP projects, from chatbots and virtual assistants to content analysis and recommendation systems.\"]}),/*#__PURE__*/e(\"h2\",{children:\"Handling Unstructured Data\"}),/*#__PURE__*/e(\"p\",{children:\"Unstructured provides a data ingestion and processing platform. Businesses deal with abundant unstructured data that does not have a predefined format or organization, making it challenging to process and analyze using traditional methods. Examples of unstructured and semi-structured data include text documents, tables, and images in various file formats, such as PDFs, Excel, or PowerPoint.\"}),/*#__PURE__*/e(\"p\",{children:\"Unstructured stands out from other data management solutions due to its unique features and capabilities:\"}),/*#__PURE__*/t(\"ul\",{children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Advanced data processing: State-of-the-art algorithms and machine learning models to extract text information from unstructured data. 
This includes optical character recognition (OCR), NLP techniques, and Transformer-based models.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Scalability: Built to handle large volumes of unstructured data via the Unstructured SaaS API and Enterprise Platform, making it suitable for enterprises and organizations with vast amounts of information.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Customization: Allows users to define custom data models and extraction rules, ensuring the platform can adapt to specific business needs and use cases.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Integration: Easily integrates with other data management systems, AI platforms, and analytics tools, enabling seamless data flow and facilitating end-to-end data processing pipelines.\"})})]}),/*#__PURE__*/e(\"h1\",{children:\"What We Will Build\"}),/*#__PURE__*/t(\"p\",{children:[\"This article demonstrates how to work with OctoAI GTE Large Embeddings and Unstructured OctoAI Embedding in a RAG application, together with the \",/*#__PURE__*/e(a,{href:\"https://www.pinecone.io/\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"Pinecone\"})}),\" vector database and the \",/*#__PURE__*/e(a,{href:\"https://mistral.ai/\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"MistralAI\"})}),\" LLM. 
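Before wiring up the real services, the overall flow (partition a document into chunks, embed them, index the vectors, retrieve the closest chunks for a query, and hand them to an LLM) can be sketched with toy in-memory stand-ins. Everything here is hypothetical scaffolding: the letter-frequency `embed` function stands in for the OctoAI GTE-Large call, the plain list stands in for the Pinecone index, and the final prompt string is what would be sent to the MistralAI model:

```python
import math

# Toy embedder: letter-frequency counts. A stand-in for the real
# embedding model call; it only illustrates the shape of the pipeline.
def embed(text):
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# "Index" step: store (chunk, vector) pairs, as a vector database would.
chunks = [
    "LayoutParser is a toolkit for document image analysis.",
    "Pinecone stores vectors for similarity search.",
    "Mistral models generate answers from retrieved context.",
]
index = [(c, embed(c)) for c in chunks]

# "Retrieve" step: rank chunks by similarity to the query embedding.
def retrieve(query, top_k=2):
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [c for c, _ in ranked[:top_k]]

# "Generate" step: a real application would send this prompt to the LLM.
contexts = retrieve("What is LayoutParser used for?")
prompt = "Answer using:\n" + "\n".join(contexts)
print(prompt)
```

The three use cases below replace each stand-in with the real component while keeping this same shape.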
We will explore their key features, understand how they work, and examine their practical applications through code examples.\"]}),/*#__PURE__*/t(\"p\",{children:[\"We develop three example use cases, ranging from basic text embedding using Unstructured and OctoAI Embedding to a complete RAG application capable of processing a PDF file and performing a RAG search on its content. We begin with a basic script demonstrating the use of OctoAI for embeddings with Unstructured. Then, we will enhance this script to showcase an example of processing a PDF file and generating the corresponding embeddings. Lastly, we will build upon the previous examples by uploading the embeddings to the Pinecone vector database with vector search capabilities and utilizing \",/*#__PURE__*/e(a,{href:\"https://mistral.ai/\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"MistralAI\"})}),\" to execute the RAG functionality.\"]}),/*#__PURE__*/e(\"p\",{children:\"By the end of this article, you will be able to build a RAG application with the following workflow:\"}),/*#__PURE__*/e(\"h2\",{children:\"Code Walkthrough\"}),/*#__PURE__*/t(\"p\",{children:[\"You can follow the code below in this \",/*#__PURE__*/e(a,{href:\"https://colab.research.google.com/drive/1ALnkIys7Txltcoj8K3Ic4CMTXzsfa-UR\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"Google Colab\"})}),\". 
Before we start, you will need to get the following API keys:\"]}),/*#__PURE__*/t(\"ul\",{children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(a,{href:\"https://unstructured.io/api-key-hosted\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"Unstructured API Key\"})})})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(a,{href:\"https://octo.ai/docs/getting-started/how-to-create-an-octoai-access-token\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"OctoAI API Key\"})})})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(a,{href:\"https://docs.pinecone.io/guides/getting-started/quickstart#2-get-your-api-key\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"Pinecone API Key\"})})})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(a,{href:\"https://docs.mistral.ai/#where-to-start\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"MistralAI API key\"})})})})]}),/*#__PURE__*/e(\"p\",{children:\"You can create a .env file to 
store the API keys with the following configuration:\"}),/*#__PURE__*/e(\"div\",{className:\"framer-text-module\",style:{height:\"auto\",width:\"100%\"},children:/*#__PURE__*/e(n,{componentIdentifier:\"module:pVk4QsoHxASnVtUBp6jr/QVzZltTawVJTjmjAWG3C/CodeBlock.js:default\",children:t=>/*#__PURE__*/e(s,{...t,code:\"OCTOAI_API_KEY=<YOUR_OCTOAI_API_KEY>\\nUNSTRUCTURED_API_KEY=<YOUR_UNSTRUCTURED_API_KEY>\\nUNSTRUCTURED_SERVER_URL=<YOUR_UNSTRUCTURED_SERVER_URL>\\nPINECONE_API_KEY=<YOUR_PINECONE_API_KEY>\\nMISTRAL_API_KEY=<YOUR_MISTRAL_API_KEY>\",language:\"JSX\"})})}),/*#__PURE__*/e(\"h3\",{children:\"Use Case #1: Unstructured API with OctoAI GTE Embeddings for Simple Text Processing\"}),/*#__PURE__*/e(\"p\",{children:\"This code demonstrates how to use the OctoAI embedding engine within the Unstructured library to generate embeddings for text elements:\"}),/*#__PURE__*/e(\"div\",{className:\"framer-text-module\",style:{height:\"auto\",width:\"100%\"},children:/*#__PURE__*/e(n,{componentIdentifier:\"module:pVk4QsoHxASnVtUBp6jr/QVzZltTawVJTjmjAWG3C/CodeBlock.js:default\",children:t=>/*#__PURE__*/e(s,{...t,code:'# Import the necessary packages\\nfrom decouple import config\\nfrom unstructured.documents.elements import Text\\nfrom unstructured.embed.octoai import OctoAiEmbeddingConfig, OctoAIEmbeddingEncoder\\n\\n# Define the embeddings_example function\\ndef embeddings_example():\\n  # Create an OctoAIEmbeddingEncoder\\n  print(\"Creating OctoAIEmbeddingEncoder\")\\n  embedding_encoder = OctoAIEmbeddingEncoder(\\n    config=OctoAiEmbeddingConfig(api_key=config(\"OCTOAI_API_KEY\"))\\n  )\\n  # Define the elements to embed, here we have two sentences\\n  elements = [Text(\"This is sentence 1\"), Text(\"This is sentence 2\")]\\n  # Embed the elements\\n  print(\"Embedding the elements\")\\n  embedded_elements = embedding_encoder.embed_documents(\\n    elements=elements\\n  )\\n  # Return the embedded elements\\n  return embedded_elements\\n\\nif __name__ == \"__main__\":\\n  _embedded_elements = embeddings_example()\\n  # Print the embedded elements\\n  [print(e.embeddings, e, \"\\\\n\") for e in _embedded_elements]',language:\"JSX\"})})}),/*#__PURE__*/e(\"p\",{children:\"Here's a step-by-step description of the code:\"}),/*#__PURE__*/t(\"ul\",{children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Import the necessary packages.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Define the embeddings_example function, which generates embeddings for the given text elements using the OctoAI embedding engine.\"})}),/*#__PURE__*/t(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:[/*#__PURE__*/e(\"p\",{children:\"Inside the embeddings_example function:\"}),/*#__PURE__*/t(\"ul\",{children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Create an OctoAIEmbeddingEncoder instance using the OctoAiEmbeddingConfig class and the OctoAI API key retrieved from the application configuration.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Define two text elements as examples stored in the elements 
list.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Generate embeddings for the text elements using the embed_documents method of the OctoAIEmbeddingEncoder instance.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Return the embedded elements.\"})})]})]}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:'Check if the script runs as the main module (__name__ == \"__main__\"). If so, call the embeddings_example function and store the result in the _embedded_elements variable.'})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Print the embeddings and the corresponding text elements by iterating through the _embedded_elements list and using a list comprehension with the print function. 
The embeddings are NumPy arrays, while the text elements are represented as Text objects.\"})})]}),/*#__PURE__*/e(\"h3\",{children:\"Use Case #2: Process a PDF Document with Unstructured API and OctoAI Embeddings\"}),/*#__PURE__*/e(\"p\",{children:\"This code demonstrates how to use the Unstructured library in combination with the OctoAI embedding engine to process a PDF document, extract text elements, and generate embeddings for those elements:\"}),/*#__PURE__*/e(\"div\",{className:\"framer-text-module\",style:{height:\"auto\",width:\"100%\"},children:/*#__PURE__*/e(n,{componentIdentifier:\"module:pVk4QsoHxASnVtUBp6jr/QVzZltTawVJTjmjAWG3C/CodeBlock.js:default\",children:t=>/*#__PURE__*/e(s,{...t,code:'# Import the necessary packages\\nfrom decouple import config\\nfrom unstructured.documents.elements import Text\\nfrom unstructured.embed.octoai import OctoAiEmbeddingConfig, OctoAIEmbeddingEncoder\\nfrom unstructured_client import UnstructuredClient\\nfrom unstructured_client.models import shared\\n  \\n# Process the PDF function\\ndef process_pdf():\\n  # Create an UnstructuredClient\\n  print(\"Creating UnstructuredClient\")\\n  client = UnstructuredClient(api_key_auth=config(\"UNSTRUCTURED_API_KEY\"),\\n                              server_url=config(\"UNSTRUCTURED_SERVER_URL\")\\n                             )\\n\\n  # Define the file to process and open it\\n  filename = \"layout-parser-paper-fast.pdf\"\\n  file = open(filename, \"rb\")\\n    \\n  # Define the partition parameters\\n  print(\"Defining the partition parameters\")\\n  request = shared.PartitionParameters(\\n    # Note that this currently only supports a single file\\n    files=shared.Files(\\n      content=file.read(),\\n      file_name=filename,\\n    ),\\n    # Other partition params\\n    strategy=\"fast\",\\n  )\\n      \\n  # Partition the document\\n  print(\"Partitioning the document\")\\n  result = client.general.partition(request)\\n    \\n  # Process the resulting elements\\n  
print(\"Processing the resulting elements\")\\n    \\n  elements = []\\n  \\n  for element in result.elements:\\n    if element[\\'text\\']:\\n      text = Text(text=element[\\'text\\'])\\n      text.metadata.filename = element[\\'metadata\\'][\\'filename\\']\\n      text.metadata.page_number = element[\\'metadata\\'][\\'page_number\\']\\n      elements.append(text)\\n        \\n  # Create an OctoAIEmbeddingEncoder\\n  print(\"Creating OctoAIEmbeddingEncoder\")\\n  embedding_encoder = OctoAIEmbeddingEncoder(\\n    config=OctoAiEmbeddingConfig(api_key=config(\"OCTOAI_API_KEY\"))\\n  )\\n    \\n  # Embed the elements\\n  print(\"Embedding the elements\")\\n  embedded_elements = embedding_encoder.embed_documents(\\n    elements=elements\\n  )\\n    \\n  # Return the embedded elements and the embedding encoder\\n  return embedded_elements, embedding_encoder\\n    \\nif __name__ == \"__main__\":\\n  _embedded_elements, _embedding_encoder = process_pdf()\\n  # Print the embedded elements\\n  for e in _embedded_elements:\\n    print(e.embeddings, e, \"\\\\n\")',language:\"JSX\"})})}),/*#__PURE__*/t(\"p\",{children:[\"This example uses the PDF file from the Unstructured GitHub examples folder at\",/*#__PURE__*/e(a,{href:\"https://github.com/Unstructured-IO/unstructured/tree/main/example-docs\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\" https://github.com/Unstructured-IO/unstructured/tree/main/example-docs\"})}),\".\"]}),/*#__PURE__*/e(\"p\",{children:\"Here's a step-by-step description of the code:\"}),/*#__PURE__*/t(\"ul\",{children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Import the necessary 
packages.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Define the process_pdf function, which processes a PDF document, extracts text elements, and generates embeddings for those elements using the OctoAI embedding engine.\"})}),/*#__PURE__*/t(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:[/*#__PURE__*/e(\"p\",{children:\"Inside the process_pdf function:\"}),/*#__PURE__*/t(\"ul\",{children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Create an UnstructuredClient instance using the Unstructured API key retrieved from the application configuration.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Define the PDF file to process and open it in binary mode.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Define the partition parameters, specifying the file content, file name, and partitioning strategy.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Partition the document using the partition method of the UnstructuredClient 
instance.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Process the resulting elements, creating Text objects for elements containing text and storing them in the elements list along with their metadata (file name and page number).\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Create an OctoAIEmbeddingEncoder instance using the OctoAiEmbeddingConfig class and the OctoAI API key retrieved from the application configuration.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Generate embeddings for the text elements using the embed_documents method of the OctoAIEmbeddingEncoder instance.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Return the embedded elements and the embedding encoder.\"})})]})]}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:'Check if the script runs as the main module (__name__ == \"__main__\"). 
If so, call the process_pdf function and store the result in the _embedded_elements and _embedding_encoder variables.'})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Print the embeddings and the corresponding text elements by iterating through the _embedded_elements list and using a for loop with the print function. The embeddings are NumPy arrays, while the text elements are represented as Text objects.\"})})]}),/*#__PURE__*/e(\"h3\",{children:\"Use Case #3: Build a Full RAG Application\"}),/*#__PURE__*/e(\"p\",{children:\"This code demonstrates how to use Pinecone, Mistral AI, and the previously processed PDF data to create a vector search index, generate contextually relevant responses to user queries, and interact with a large language model.\"}),/*#__PURE__*/e(\"p\",{children:\"This example builds on the previous one and assumes you stored the previous code in a process_pdf.py file:\"}),/*#__PURE__*/e(\"div\",{className:\"framer-text-module\",style:{height:\"auto\",width:\"100%\"},children:/*#__PURE__*/e(n,{componentIdentifier:\"module:pVk4QsoHxASnVtUBp6jr/QVzZltTawVJTjmjAWG3C/CodeBlock.js:default\",children:t=>/*#__PURE__*/e(s,{...t,code:'# Import the necessary packages\\nimport time\\nfrom decouple import config\\nfrom mistralai.client import MistralClient\\nfrom mistralai.models.chat_completion import ChatMessage\\nfrom pinecone import Pinecone, ServerlessSpec, PodSpec\\nfrom process_pdf import process_pdf\\n  \\n# Define the pinecone_index function\\ndef pinecone_index(embedded_elements, embedding_encoder):\\n  # Create a Pinecone client\\n  print(\"Creating Pinecone client\")\\n  pc = Pinecone(api_key=config(\"PINECONE_API_KEY\"))\\n    \\n  # Create an index\\n  print(\"Creating index\")\\n  index_name = \"unstructured\"\\n    \\n  if index_name not in pc.list_indexes().names():\\n    
pc.create_index(\\n      name=index_name,\\n      dimension=embedding_encoder.num_of_dimensions()[0],\\n      metric=\"cosine\",\\n      spec=PodSpec(environment=\"gcp-starter\")\\n    )\\n      \\n  # Wait for index to be initialized\\n  print(\"Waiting for index to be initialized\")\\n    \\n  while not pc.describe_index(index_name).status[\\'ready\\']:\\n    time.sleep(1)\\n      \\n  # Connect to index\\n  print(\"Connecting to index\")\\n  index = pc.Index(index_name)\\n    \\n  # Insert the embedded elements into the index\\n  print(\"Inserting embedded elements into the index\")\\n    \\n  for e in embedded_elements:\\n    index.upsert(\\n      vectors=[\\n        {\\n          \"id\": e.id,\\n          \"values\": e.embeddings,\\n          \"metadata\": dict({\\'text\\': e.text}, **e.metadata.to_dict())  # Add text and metadata\\n        }\\n      ]\\n    )\\n\\n  # Return the index\\n  return index\\n    \\n# Define the RAG function\\ndef rag(query, embedding_encoder, index):\\n  # Create query embedding\\n  print(\"Creating query embedding\")\\n  query_embedding = embedding_encoder.embed_query(query=query)\\n    \\n  # Get relevant contexts from Pinecone\\n  print(\"Getting relevant contexts from Pinecone\")\\n  search_result = index.query(vector=query_embedding, top_k=10, include_metadata=True)\\n    \\n  # Get a list of retrieved texts\\n  contexts = [x[\\'metadata\\'][\\'text\\'] for x in search_result[\\'matches\\']]\\n  context_str = \"\\\\n\".join(contexts)\\n    \\n  # Place contexts into RAG prompt\\n  prompt = f\"\"\"You are a helpful assistant, below is a query from a user and\\n    some relevant contexts. Answer the question given the information in those\\n    contexts. 
If you cannot find the answer to the question, say \"I don\\'t know\".\\n    \\n    Contexts: {context_str}\\n    \"\"\"\\n\\n  # Prepare the chat request with Mistral AI, with the system prompt and the user query\\n  print(\"Preparing the chat request\")\\n  model = \"open-mixtral-8x7b\"\\n  client = MistralClient(api_key=config(\"MISTRAL_API_KEY\"))\\n  messages = [\\n    ChatMessage(role=\"system\", content=prompt),\\n    ChatMessage(role=\"user\", content=f\"\"\"Query: {query} Answer: \"\"\")\\n  ]\\n    \\n  # Execute the chat request\\n  print(\"Executing the chat request\")\\n  chat_response = client.chat(\\n    model=model,\\n    messages=messages,\\n  )\\n    \\n  # Return the answer\\n  return chat_response.choices[0].message.content\\n    \\nif __name__ == \"__main__\":\\n  _embedded_elements, _embedding_encoder = process_pdf()\\n  _index = pinecone_index(_embedded_elements, _embedding_encoder)\\n  _query = \"What is the Layout Parser library used for?\"\\n  _answer = rag(_query, _embedding_encoder, _index)\\n  print(_answer)',language:\"JSX\"})})}),/*#__PURE__*/e(\"p\",{children:\"Here's a step-by-step description of the code:\"}),/*#__PURE__*/t(\"ul\",{children:[/*#__PURE__*/t(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:[/*#__PURE__*/e(\"p\",{children:\"Import the necessary packages, including:\"}),/*#__PURE__*/e(\"ul\",{children:/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"process_pdf: A function imported from the process_pdf module used to process a PDF document, extract text elements, and generate embeddings for those elements using the OctoAI embedding engine. 
Defined in the previous code example.\"})})})]}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Define the pinecone_index function, which creates a Pinecone index, initializes it, and inserts the embedded elements into it.\"})}),/*#__PURE__*/t(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:[/*#__PURE__*/e(\"p\",{children:\"Inside the pinecone_index function:\"}),/*#__PURE__*/t(\"ul\",{children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Create a Pinecone client instance using the Pinecone API key retrieved from the application configuration.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Create an index with the specified name, dimension, metric, and specification if it doesn't already exist.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Wait for the index to be initialized and connect to it.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Insert the embedded elements into the index, including their embeddings, text, and 
metadata.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Return the index.\"})})]})]}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Define the rag function, which takes a user query, an embedding encoder, and a Pinecone index as input and generates a contextually relevant response using Mistral AI.\"})}),/*#__PURE__*/t(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:[/*#__PURE__*/e(\"p\",{children:\"Inside the rag function:\"}),/*#__PURE__*/t(\"ul\",{children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Create a query embedding using the embedding encoder.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Retrieve relevant contexts from the Pinecone index based on the query embedding.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Format the retrieved contexts into a single string.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Create a prompt that includes the 
contexts and the user query.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Prepare a chat request with Mistral AI, including the system prompt and the user query.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Execute the chat request using the Mistral AI client and retrieve the response.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Return the answer from the Mistral AI response.\"})})]})]}),/*#__PURE__*/t(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:[/*#__PURE__*/e(\"p\",{children:'Check if the script runs as the main module (__name__ == \"__main__\"). 
If so:'}),/*#__PURE__*/t(\"ul\",{children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Call the process_pdf function to process the PDF document and generate embedded elements and an embedding encoder.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Create a Pinecone index using the pinecone_index function, embedded elements, and embedding encoder.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Define a user query.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Call the rag function with the user query, embedding encoder, and Pinecone index to generate a contextually relevant response.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",style:{\"--framer-font-size\":\"11px\",\"--framer-text-color\":\"rgb(0, 0, 0)\",\"--framer-text-decoration\":\"none\"},children:/*#__PURE__*/e(\"p\",{children:\"Print the answer.\"})})]})]})]}),/*#__PURE__*/e(\"p\",{children:\"Let\u2019s see the output of this RAG example:\"}),/*#__PURE__*/e(\"p\",{children:\"The Layout Parser library is used as a unified toolkit for deep learning-based document image analysis and processing. It provides a set of simple and intuitive interfaces for applying and customizing DL models for tasks such as layout detection, character recognition, and other document processing tasks. 
The library is designed to streamline the usage of DL in DIA research and applications, making it accessible to a wide audience including researchers and industry professionals. It also includes comprehensive tools for efficient document image data annotation and model tuning to support different levels of customization.\"}),/*#__PURE__*/e(\"h1\",{children:\"Conclusion\"}),/*#__PURE__*/e(\"p\",{children:\"The synergy between OctoAI's GTE Large Embeddings and Unstructured.io unlocks new possibilities and improvements for advanced NLP tasks, including understanding and retrieving complex documents for an RAG application.\"}),/*#__PURE__*/t(\"p\",{children:[\"To take the first step towards transforming your data analysis processes and unleashing the full potential of your unstructured data, we encourage you to sign up for an \",/*#__PURE__*/e(a,{href:\"https://octo.ai/docs/getting-started/how-to-create-an-octoai-access-token\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"OctoAI API Key\"})}),\" and an \",/*#__PURE__*/e(a,{href:\"https://unstructured.io/api-key-hosted\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"Unstructured API Key\"})}),\" today. 
We invite you to join the \",/*#__PURE__*/e(a,{href:\"https://short.unstructured.io/pzw05l7\",motionChild:!0,nodeId:\"roDaneUSn\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(r.a,{children:\"Unstructured Slack Community\"})}),\" to collaborate with like-minded individuals, receive direct support, and stay informed about the latest advancements in this exciting field.\"]})]});\nexport const __FramerMetadata__ = {\"exports\":{\"richText6\":{\"type\":\"variable\",\"annotations\":{\"framerContractVersion\":\"1\"}},\"richText3\":{\"type\":\"variable\",\"annotations\":{\"framerContractVersion\":\"1\"}},\"richText2\":{\"type\":\"variable\",\"annotations\":{\"framerContractVersion\":\"1\"}},\"richText5\":{\"type\":\"variable\",\"annotations\":{\"framerContractVersion\":\"1\"}},\"richText1\":{\"type\":\"variable\",\"annotations\":{\"framerContractVersion\":\"1\"}},\"richText4\":{\"type\":\"variable\",\"annotations\":{\"framerContractVersion\":\"1\"}},\"richText\":{\"type\":\"variable\",\"annotations\":{\"framerContractVersion\":\"1\"}},\"__FramerMetadata__\":{\"type\":\"variable\"}}}"],
  "mappings": "kdAAAA,IAAqZ,IAAMC,EAAsBC,EAAIC,EAAS,CAAC,SAAS,CAAcD,EAAE,IAAI,CAAC,SAAS,CAAC,+IAA4JE,EAAEC,EAAE,CAAC,KAAK,iDAAiD,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,kEAAkE,CAAC,CAAC,CAAC,EAAE,mBAAmB,CAAC,CAAC,EAAeF,EAAE,MAAM,CAAC,IAAI,GAAG,UAAU,eAAe,OAAO,MAAM,IAAI,sEAAsE,OAAO,iWAAiW,MAAM,CAAC,YAAY,aAAa,EAAE,MAAM,MAAM,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,gTAAiS,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,8CAA8C,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,+CAA+C,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,sCAAsC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,4DAA4D,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,yCAAyC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4HAAuH,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,+BAA+B,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6eAAwe,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,4LAAoME,EAAEC,EAAE,CAAC,KAAK,2FAA2F,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,SAAS,CAAC,CAAC,CAAC,EAAE,+lBAAqlB,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,8BAA8B,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,yLAAyL,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,oHAAoH,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,wHAAwH,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,wKAAwK,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,OAAoBE,EAAE,OAAO,CAAC,SAAS
,IAAI,CAAC,EAAE,2OAA2O,CAAC,CAAC,EAAeA,EAAE,MAAM,CAAC,IAAI,GAAG,UAAU,eAAe,OAAO,MAAM,IAAI,uEAAuE,OAAO,uQAAuQ,MAAM,CAAC,YAAY,aAAa,EAAE,MAAM,KAAK,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,gfAAgf,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sTAAsT,CAAC,EAAeA,EAAE,MAAM,CAAC,IAAI,GAAG,UAAU,eAAe,OAAO,MAAM,IAAI,uEAAuE,OAAO,uQAAuQ,MAAM,CAAC,YAAY,YAAY,EAAE,MAAM,KAAK,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,iGAA8GE,EAAEC,EAAE,CAAC,KAAK,mCAAmC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,YAAY,CAAC,CAAC,CAAC,EAAE,oXAAoX,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,ynBAAonB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2NAA2N,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,uDAAuD,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,iCAAiC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,kOAAkO,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,2HAAwIE,EAAEC,EAAE,CAAC,KAAK,mCAAmC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,MAAM,CAAC,CAAC,CAAC,EAAE,wJAAwJ,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,sgBAAsgB,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,saAAmbE,EAAEC,EAAE,CAAC,KAAK,qDAAqD,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,SAAS,CAAC,CAAC,CAAC,EAAE,KAAkBF,EAAEC,EAAE,CAAC,KAAK,qDAAqD,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,UAAU,CAAC,CAAC,CAAC,EAAE,SAAsBF,EAAEC,EAAE,CAAC,KAAK,6EAA6E,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,IAAI,CAAC,CAAC,CAAC,EAAE,uPAAuP,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,4BAA4B,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,wNAAqOE,EAAEC,EAAE,CAAC,KAAK,mCAAmC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,yCAAyC,CAAC,CAAC,CAAC,EAAE,kEAAkE,CAAC,CAAC,EAAeJ,EAAE,IAAI,CAAC,SAAS,CAAC,6bAA0cE,EAAE,SAAS,CAAC,SAAS,WAAW,CAAC,EAAE,+BAA+B,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2xBAAsxB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+SAA+S,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sOAAuN,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBF
,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,SAAS,CAAC,EAAE,qFAAqF,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,6DAA6D,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,sBAAsB,CAAC,EAAE,2MAA2M,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,SAAS,CAAC,EAAE,6HAA6H,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,OAAO,CAAC,EAAE,8EAA8E,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,UAAU,CAAC,EAAE,kJAAkJ,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,UAAU,CAAC,EAAE,gIAAgI,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,SAAS,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,UAAU,CAAC,EAAE,+GAA+G,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,IAAI,CAAC,EAAE,mGAAmG,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,gBAAgB,CAAC,EAAE,GAAG,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,SAAS,CAAC,EAAE,0CAA0C,CAAC,CAA
C,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,SAAS,CAAC,EAAE,iDAAiD,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,YAAY,CAAC,EAAE,sCAAsC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,WAAW,CAAC,EAAE,gFAAgF,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,wRAAwR,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,wCAAwC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,kMAA6L,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,gPAAsO,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,uaAAua,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAC,iXAAyXE,EAAEC,EAAE,CAAC,KAAK,+DAA+D,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,kBAAkB,CAAC,CAAC,CAAC,EAAE,OAAO,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAsBA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,oKAA+J,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,mKAAmK,CAAC,EAAeA,EAAE,MAAM,CAAC,UAAU,qBAAqB,MAAM,CAAC,OAAO,OAAO,MAAM,MAAM,EAAE,SAAsBA,EAAEG,EAAE,CAAC,oBAAoB,wEAAwE,SAASC,GAAgBJ,EAAEK,EAAE,CAAC,GAAGD,EAAE,KAAK;AAAA;AAAA;AAAA,IAAoK,SAAS,KAAK,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeJ,EAAE,IAAI,CAAC,SAAS,0OAAqO,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8KAA8K,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,yEAAsFE,EAAEC,EAAE,CAAC,KAAK,iGAAiG,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,mBAAmB,CAAC,CAAC,CAAC,EAAE,kCAA+CF,EAAEC,EAAE,CAAC,KAAK,qDAAqD,YAAY,GAAG,OAAO,YAAY,aAAa,G
AAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,qBAAqB,CAAC,CAAC,CAAC,EAAE,OAAO,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,oJAA+I,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,qCAAqC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,uQAAuQ,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAC,oNAAiOE,EAAEC,EAAE,CAAC,KAAK,+DAA+D,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,yBAAyB,CAAC,CAAC,CAAC,EAAE,OAAO,CAAC,CAAC,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,wRAAwR,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAC,wGAAqHE,EAAEC,EAAE,CAAC,KAAK,4EAA4E,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,UAAU,CAAC,CAAC,CAAC,EAAE,8CAA8C,CAAC,CAAC,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,oYAAoY,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,ocAA+b,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,YAAY,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,qWAAqW,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,iTAAiT,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,wLAAwL,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,WAAwBE,EAAEC,EAAE,CAAC,KAAK,yCAAyC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,iBAAiB,CAAC,CAAC,CAAC,EAAE,8NAAsOF,EAAEC,EAAE,CAAC,KAAK,+BAA+B,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,uBAAuB,CAAC,CAAC,CAAC,EAAE,gBAA6BF,EAAEC,EAAE,CAAC,KAAK,wCAAwC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,qBAAqB,CAAC,CAAC,CAAC,EAAE,GAAG,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeI,EAAuBR,EAAIC,EAAS,CAAC,SAAS,CAAcD,EAAE,IAAI,CAAC,SAAS,CAAC,YAAyBE,EAAEC,EAAE,CAAC,KAAK,iHAAiH,YAAY,GAAG,OAA
O,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,6BAAwB,CAAC,CAAC,CAAC,EAAE,8IAA2JF,EAAEC,EAAE,CAAC,KAAK,gBAAgB,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,WAAW,CAAC,CAAC,CAAC,EAAE,qJAAkKF,EAAEC,EAAE,CAAC,KAAK,2BAA2B,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,cAAc,CAAC,CAAC,CAAC,EAAE,8MAA2NF,EAAEC,EAAE,CAAC,KAAK,yCAAyC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,+BAA+B,CAAC,CAAC,CAAC,EAAE,mDAAmD,CAAC,CAAC,EAAeF,EAAE,MAAM,CAAC,IAAI,GAAG,UAAU,eAAe,OAAO,MAAM,IAAI,sEAAsE,OAAO,oQAAoQ,MAAM,CAAC,YAAY,YAAY,EAAE,MAAM,KAAK,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,wDAAgEE,EAAEC,EAAE,CAAC,KAAK,+DAA+D,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,MAAM,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,ydAAyd,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,wCAAwC,CAAC,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,wLAAqME,EAAEC,EAAE,CAAC,KAAK,2CAA2C,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,aAAa,CAAC,CAAC,CAAC,EAAE,2CAAsC,CAAC,CAAC,EAAeF,EAAE,aAAa,CAAC,SAAsBA,EAAE,IAAI,CAAC,SAAS,4OAAkO,CAAC,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,iTAA8TE,EAAEC,EAAE,CAAC,KAAK,6BAA6B,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,WAAW,CAAC,CAAC,CAAC,EAAE,6HAA6H,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,0BAA0B,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sWAAsW,CAAC,EAAeA,EAAE,aAAa,CAAC,SAAsBA,EAAE,IAAI,CAAC,SAAS,sHAA4G,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,yKAAyK,CAAC,EAAeA,EAAE,MAAM,CAAC,UAAU,qBAAqB,MAAM,CAAC,iBAAiB,YAAY,YAAY,YAAY,OAAO,OAAO,MAAM,MAAM,EAAE,SAAsBA,EAAEG,EAAE,CAAC,oBAAoB,sEAAsE,SAASC,GAAgBJ,EAAEO,EAAE,CAAC,GAAGH,EAAE,KAAK,MAAM,WAAW,GAAG,UAAU,iBAAiB,IAAI,8BAA8B,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeN,EAAE,IAAI,CAAC,SAAS,CAAC,gPAA6PE,EAAEC,EAAE,CAAC,KAAK,2GAA2G,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsB
D,EAAEE,EAAE,EAAE,CAAC,SAAS,uCAAuC,CAAC,CAAC,CAAC,EAAE,gBAAgB,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,yCAAyC,CAAC,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,gBAA6BE,EAAEC,EAAE,CAAC,KAAK,yCAAyC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,oCAA+B,CAAC,CAAC,CAAC,EAAE,yMAAyM,CAAC,CAAC,EAAeF,EAAE,aAAa,CAAC,SAAsBA,EAAE,IAAI,CAAC,SAAS,qJAA2I,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8FAA8F,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,YAAY,CAAC,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,yKAAsLE,EAAEC,EAAE,CAAC,KAAK,0FAA0F,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,QAAQ,CAAC,CAAC,CAAC,EAAE,+IAA+I,CAAC,CAAC,EAAeJ,EAAE,IAAI,CAAC,SAAS,CAAC,8UAA2VE,EAAEC,EAAE,CAAC,KAAK,yCAAyC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,MAAM,CAAC,CAAC,CAAC,EAAE,OAAO,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeM,EAAuBV,EAAIC,EAAS,CAAC,SAAS,CAAcC,EAAE,IAAI,CAAC,SAAS,oBAAoB,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAC,qCAAkDE,EAAEC,EAAE,CAAC,KAAK,yCAAyC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,gBAAgB,CAAC,CAAC,CAAC,EAAE,GAAG,CAAC,CAAC,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAC,gHAAwHE,EAAEC,EAAE,CAAC,KAAK,sDAAsD,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,wBAAwB,CAAC,CAAC,CAAC,EAAE,oCAAoC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeJ,EAAE,IAAI,CAAC,SAAS,CAAC,yBAAsCE,EAAEC,EAAE,CAAC,KAAK,6CAA6C,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,aAAa,CAAC,CAAC,CAAC,EAAE,uBAAuB,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,gCAAgC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,yYAAsZE,EAAEC,EAAE,CAAC,KAAK,6BAA6B,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,2BAA2B,C
AAC,CAAC,CAAC,EAAE,OAAO,CAAC,CAAC,EAAeJ,EAAE,IAAI,CAAC,SAAS,CAAC,iRAAoRE,EAAEC,EAAE,CAAC,KAAK,+DAA+D,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,eAAe,CAAC,CAAC,CAAC,EAAE,2IAA2I,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,uFAAkF,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,iFAAiF,CAAC,EAAeA,EAAE,MAAM,CAAC,UAAU,qBAAqB,MAAM,CAAC,OAAO,OAAO,MAAM,MAAM,EAAE,SAAsBA,EAAEG,EAAE,CAAC,oBAAoB,wEAAwE,SAASC,GAAgBJ,EAAEK,EAAE,CAAC,GAAGD,EAAE,KAAK;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,SAA+xD,SAAS,KAAK,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeN,EAAE,IAAI,CAAC,SAAS,CAAC,wHAA2HE,EAAEC,EAAE,CAAC,KAAK,+EAA+E,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,mBAAmB,CAAC,CAAC,CAAC,EAAE,2CAAwDF,EAAEC,EAAE,CAAC,KAAK,mFAAmF,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,wBAAwB,CAAC,CAAC,CAAC,EAAE,WAAW,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,gHAAgH,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,OAAO,CAAC,SAAS,iBAAiB,CAAC,EAAE,6GAAwG,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,OAAO,CAAC,SAAS,oBAAoB,CAAC,EAAE,KAAkBA,EAAE,OAAO,CAAC,SAAS,uBAAuB,CAAC,EAAE,SAAsBA,EAAE,OAAO,CAAC,SAAS,uBAAuB,CAAC,EAAE,mMAAgNA,EAAE,OAAO,CAAC,SAAS,YAAY,CAAC,EAAE,iDAAiD,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,OAAO,CAAC,SAAS,mBAAmB,CAAC,EAAE,iKAA8KA,EAAEC,EAAE,CAAC,KAAK,4EAA4E,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,gCAAgC,CAAC,CAAC,CAAC,EAAE,2EAAmFF,EAAE,OAAO,CAAC,SAAS,kBAAkB,CAAC,EAAE,iBAA8BA,EAAE,OAAO,CAAC,SAAS,OAAO,CAAC,EAAE,6DAA0EA,EAAE,OAAO,CAAC,SAAS,MAAM,CAAC,EAAE,qKAA6KA
,EAAEC,EAAE,CAAC,KAAK,uEAAuE,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,8BAA8B,CAAC,CAAC,CAAC,EAAE,OAAO,CAAC,CAAC,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,OAAO,CAAC,SAAS,eAAe,CAAC,EAAE,+RAA4SA,EAAEC,EAAE,CAAC,KAAK,+DAA+D,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,kBAAkB,CAAC,CAAC,CAAC,EAAE,GAAG,CAAC,CAAC,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,OAAO,CAAC,SAAS,gBAAgB,CAAC,EAAE,+LAA+L,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,OAAO,CAAC,SAAS,yBAAyB,CAAC,EAAE,KAAkBA,EAAE,OAAO,CAAC,SAAS,2BAA2B,CAAC,EAAE,KAAkBA,EAAE,OAAO,CAAC,SAAS,uBAAuB,CAAC,EAAE,8GAA8G,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+IAA0I,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,gLAA6LE,EAAEC,EAAE,CAAC,KAAK,wGAAwG,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,YAAY,CAAC,CAAC,CAAC,EAAE,4GAA+GF,EAAEC,EAAE,CAAC,KAAK,qEAAqE,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,aAAa,CAAC,CAAC,CAAC,EAAE,GAAG,CAAC,CAAC,EAAeJ,EAAE,IAAI,CAAC,SAAS,CAAC,SAAsBE,EAAE,KAAK,CAAC,SAAS,mKAA8J,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,qGAAqG,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,mBAAmB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,mMAAyL,CAAC,EAAeA,EAAE,MAAM,CAAC,UAAU,qBAAqB,MAAM,CAAC,OAAO,OAAO,MAAM,MAAM,EAAE,SAAsBA,EAAEG,EAAE,CAAC,oBAAoB,wEAAwE,SAASC,GAAgBJ,EAAEK,EAAE,CAAC,GAAGD,EAAE,KAAK;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,+FAA0iB,SAAS,KAAK,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeJ,EAAE,IAAI,CAAC,SAAS,qHAAqH,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,uBAAuB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,yFAAoF,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBF,EAAE,
IAAI,CAAC,SAAS,CAAC,6EAAqFE,EAAEC,EAAE,CAAC,KAAK,sCAAsC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,QAAQ,CAAC,CAAC,CAAC,EAAE,GAAG,CAAC,CAAC,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,8KAAyK,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,kKAAwJ,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,MAAM,CAAC,UAAU,qBAAqB,MAAM,CAAC,OAAO,OAAO,MAAM,MAAM,EAAE,SAAsBA,EAAEG,EAAE,CAAC,oBAAoB,wEAAwE,SAASC,GAAgBJ,EAAEK,EAAE,CAAC,GAAGD,EAAE,KAAK;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,yCAAsgG,SAAS,KAAK,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeJ,EAAE,KAAK,CAAC,SAAS,cAAc,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,+EAA4FE,EAAEC,EAAE,CAAC,KAAK,uBAAuB,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,WAAW,CAAC,CAAC,CAAC,EAAE,OAAO,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,yCAAoC,CAAC,EAAeA,EAAE,MAAM,CAAC,UAAU,qBAAqB,MAAM,CAAC,OAAO,OAAO,MAAM,MAAM,EAAE,SAAsBA,EAAEG,EAAE,CAAC,oBAAoB,wEAAwE,SAASC,GAAgBJ,EAAEK,EAAE,CAAC,GAAGD,EAAE,KAAK;AAAA,gDAAsH,SAAS,KAAK,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeJ,EAAE,IAAI,CAAC,SAAS,yEAA+D,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,iEAAiE,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,iCAAiC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,uIAAuI,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,2DAA2D,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,MAAM,CAAC,
UAAU,qBAAqB,MAAM,CAAC,OAAO,OAAO,MAAM,MAAM,EAAE,SAAsBA,EAAEG,EAAE,CAAC,oBAAoB,wEAAwE,SAASC,GAAgBJ,EAAEK,EAAE,CAAC,GAAGD,EAAE,KAAK;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,uEAAw3C,SAAS,KAAK,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeJ,EAAE,KAAK,CAAC,SAAS,iBAAiB,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,oFAAiGE,EAAE,OAAO,CAAC,SAAS,kBAAkB,CAAC,EAAE,mCAAgDA,EAAE,OAAO,CAAC,SAAsBA,EAAE,KAAK,CAAC,SAAS,gCAAgC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,0EAAkFE,EAAEC,EAAE,CAAC,KAAK,6CAA6C,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,aAAa,CAAC,CAAC,CAAC,EAAE,YAAyBF,EAAE,OAAO,CAAC,SAAS,MAAM,CAAC,EAAE,6DAAwD,CAAC,CAAC,EAAeA,EAAE,MAAM,CAAC,IAAI,GAAG,UAAU,eAAe,OAAO,MAAM,IAAI,uEAAuE,OAAO,uQAAuQ,MAAM,CAAC,YAAY,aAAa,EAAE,MAAM,KAAK,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4IAA4I,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,YAAyBE,EAAEC,EAAE,CAAC,KAAK,yCAAyC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,gBAAgB,CAAC,CAAC,CAAC,EAAE,gPAAwPF,EAAEC,EAAE,CAAC,KAAK,wCAAwC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,kBAAkB,CAAC,CAAC,CAAC,EAAE,GAAG,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeO,EAAuBX,EAAIC,EAAS,CAAC,SAAS,CAAcC,EAAE,KAAK,CAAC,SAAS,4BAA4B,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sGAAsG,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,sBAAsB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,uoBAAwnB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,g/BAAi+B,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,iDAAiD,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2XAA2X,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,qvBAAgvB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8BAA8B,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,iHAAiH,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,oHAAoH,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,wH
AAwH,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4aAA4a,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,uSAAuS,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,kmBAAkmB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8FAA8F,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,+BAA+B,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,qBAAqB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,ktBAAktB,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,+CAA+C,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,qQAAgQ,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4QAA4Q,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,kEAAkE,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,yBAAyB,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,cAAc,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,aAAa,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+VAA+V,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2dAA2d,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+lBAA0lB,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,kCAAkC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sbAAsb,CAAC,EAAeA,EAAE,MAAM,CAAC,IAAI,GAAG,UAAU,eAAe,OAAO,MAAM,IAAI,uEAAuE,OAAO,uQAAuQ,MAAM,CAAC,YAAY,YAAY,EAAE,MAAM,KAAK,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,wLAAgME,EAAEC,EAAE,CAAC,KAAK,4EAA4E,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,mBAAmB,CAAC,CAAC,CAAC,EAAE,6SAA6S,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,4cAA4c,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,gKAAgK,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,6HAA6H,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,sDAAsD,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,yHAAyH,CAAC,CAAC,CAAC,CAAC,CAA
C,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8IAA8I,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+GAA+G,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,2QAAiQ,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,oXAA0W,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAC,4DAA+DE,EAAEC,EAAE,CAAC,KAAK,yCAAyC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,gBAAgB,CAAC,CAAC,CAAC,EAAE,wTAAwT,CAAC,CAAC,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAC,kEAAqEE,EAAEC,EAAE,CAAC,KAAK,yCAAyC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,gBAAgB,CAAC,CAAC,CAAC,EAAE,8SAAoS,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,sZAA4Y,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,YAAY,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,wSAAwS,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6TAA6T,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2MAA2M,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2XAAsX,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,YAAyBE,EAAEC,EAAE,CAAC,KAAK,yCAAyC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,gBAAgB,CAAC,CAAC,CAAC,EAAE,uPAA+PF,EAAEC,EAAE,CAAC,KAAK,wCAAwC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,kBAAkB,CAAC,CAAC,CAAC,EAAE,GAAG,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeQ,EAAuBZ,EAAIC,EAAS,CAAC,SAAS,CAAcC,EAAE,IAAI,CAAC,SAAS,8iBAAyiB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8yBAAyyB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,wrBAAwrB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,qQAAqQ,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,gDAA6DE,EAAEC,EAAE,CAAC,KAAK,gBAAgB,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,eAAe,CAAC,CAAC,CAAC,EAAE,IAAI,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,oBAAoB,CAAC,CA
AC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,kUAAkU,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,kBAAkB,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2WAA2W,CAAC,CAAC,CAAC,CAAC,EAAeW,EAAuBb,EAAIC,EAAS,CAAC,SAAS,CAAcC,EAAE,IAAI,CAAC,SAAS,6PAAmP,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,iBAAiB,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,qBAAqB,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,uEAAuE,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,kDAAkD,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,iEAAiE,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,kCAAkC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,kCAAkC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,kDAAkD,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,0BAA0B,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,6BAA6B,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,iCAAyCE,EAAEC,EAAE,CAAC,KAAK,yCAAyC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,MAAM,CAAC,CAAC,CAAC,EAAE,GAAG,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,qBAAqB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sZAAuY,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,kbAAwa,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,iOAAuN,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,qCAAqC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,uLAAuL,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,kDAAkD,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,iEAAiE,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,kCAAkC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,kCAAkC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6IAA6I,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,
IAAI,CAAC,SAAS,gHAAgH,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,iLAAiL,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,6HAA6H,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sWAAiW,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,kBAAkB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,uSAA6R,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,mCAAmC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,uCAAuC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,2EAA2E,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,2EAA2E,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,kFAAkF,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,oCAAoC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6NAA6N,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2SAA2S,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,+BAA+B,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,uQAAuQ,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,qBAAqB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,gUAAgU,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,+KAAuLE,EAAEC,EAAE,CAAC,KAAK,+BAA+B,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,uBAAuB,CAAC,CAAC,CAAC,EAAE,gBAA6BF,EAAEC,EAAE,CAAC,KAAK,wCAAwC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,iBAAiB,CAAC,CAAC,CAAC,EAAE,GAAG,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeU,EAAuBd,EAAIC,EAAS,CAAC,SAAS,CAAcC,EAAE,KAAK,CAAC,SAAS,cAAc,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8XAA8X,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAEC,EAAE,CAAC,KAAK,kBAAkB,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,QAAQ,CAAC,CAAC,CAAC,EAAE,yNAAyN,CAAC,CAAC,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAEC,EAAE,CAAC,KAAK,0BAA0B,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBA
AoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,cAAc,CAAC,CAAC,CAAC,EAAE,0VAA0V,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,mUAAmU,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,+CAA+C,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6UAA6U,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,UAAuBE,EAAEC,EAAE,CAAC,KAAK,kFAAkF,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,UAAU,CAAC,CAAC,CAAC,EAAE,QAAqBF,EAAEC,EAAE,CAAC,KAAK,4CAA4C,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,+BAA+B,CAAC,CAAC,CAAC,EAAE,0aAA0a,CAAC,CAAC,EAAeJ,EAAE,IAAI,CAAC,SAAS,CAAC,uCAAoDE,EAAEC,EAAE,CAAC,KAAK,iDAAiD,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,kBAAkB,CAAC,CAAC,CAAC,EAAE,ueAAke,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,4BAA4B,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2YAA2Y,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2GAA2G,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,uOAAuO,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,oMAAoM,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,0JAA0J,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,sMAAsM,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,oBAAoB,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,gJAA6JE,EAAEC,EAAE,CAAC,KAAK,2BAA2B,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,UAAU,CAAC,CAAC,CAAC,EAAE,wBAAqCF,EAAEC,EAAE,CAAC,KAAK,sBAAsB,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,WAAW,CAAC,CAAC,CAAC,EAAE,kIAAkI,CAAC,CAAC,EAAeJ,EAAE,IAAI,CAAC,SAAS,CAAC,slBAAmmBE,EAAEC,EAAE,CAAC,KAAK,sBAAsB,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,WAAW,CAAC,CAAC,CA
AC,EAAE,oCAAoC,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,0FAA0F,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,kBAAkB,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,yCAAsDE,EAAEC,EAAE,CAAC,KAAK,4EAA4E,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,cAAc,CAAC,CAAC,CAAC,EAAE,iEAAiE,CAAC,CAAC,EAAeJ,EAAE,KAAK,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAsBA,EAAEC,EAAE,CAAC,KAAK,yCAAyC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,sBAAsB,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAsBA,EAAEC,EAAE,CAAC,KAAK,4EAA4E,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,gBAAgB,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAsBA,EAAEC,EAAE,CAAC,KAAK,gFAAgF,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,kBAAkB,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAsBA,EAAEC,EAAE,CAAC,KAAK,0CAA0C,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,mBAAmB,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,oFAAoF,CAAC,EAAeA,EAAE,MAAM,CAAC,UAAU,qBAAqB,MAAM,CAAC,OAAO,OAAO,MAAM,MAAM,EAAE,SAAsBA,EAAEG,EAAE,CAAC,oBAAoB,wEAAwE,SAASC,GAAgBJ,EAAEK,EAAE,CAAC,GAAGD,EAAE,KAAK;AAAA;AAAA;AAAA,wCAA8K,SAAS,KAAK,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeJ,EAAE,KAAK,CAAC,SAAS,qFAAqF,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,yIAAyI,CAAC,EAAeA,EAAE,MAAM,CAAC,UAAU,qBAAqB,MAAM,CAAC,OAAO,OAAO,MAAM,MAAM,EAAE,SAAsBA,EAAEG,EAAE,CAAC,oBAAoB,wEAAwE,SAASC,GAAgBJ,EAAEK,EAAE,CAAC,GAAGD,EAAE,KAAK;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;A
AAA,+DAAglC,SAAS,KAAK,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeJ,EAAE,IAAI,CAAC,SAAS,gDAAgD,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,gCAAgC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,mIAAmI,CAAC,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAS,CAAcE,EAAE,IAAI,CAAC,SAAS,yCAAyC,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,sJAAsJ,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,mEAAmE,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,oHAAoH,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,+BAA+B,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,4KAA4K,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,6PAA6P,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,iFAAiF,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,0MAA0M,CAAC,EAAeA,EAAE,MAAM,CAAC,UAAU,qBAAqB,MAAM,CAAC,OAAO,OAAO,MAAM,MAAM,EAAE,SAAsBA,EAAEG,EAAE,CAAC,oBAAoB,wEAAwE,SAASC,GAAgBJ,EAAEK,EAAE,CAAC,GAAGD,EAAE,KAAK;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,mCAAmsE,SAAS,KAAK,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeN,EAAE,IAAI,CAAC,SAAS,CA
AC,iFAA8FE,EAAEC,EAAE,CAAC,KAAK,yEAAyE,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,yEAAyE,CAAC,CAAC,CAAC,EAAE,GAAG,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,gDAAgD,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,gCAAgC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,yKAAyK,CAAC,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAS,CAAcE,EAAE,IAAI,CAAC,SAAS,kCAAkC,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,oHAAoH,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,4DAA4D,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,qGAAqG,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,uFAAuF,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,iLAAiL,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,sJAAsJ,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,oHAAoH,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,yDAAyD,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,6LAA6L,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,mPAAmP,CAAC,C
AAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,2CAA2C,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,oOAAoO,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4GAA4G,CAAC,EAAeA,EAAE,MAAM,CAAC,UAAU,qBAAqB,MAAM,CAAC,OAAO,OAAO,MAAM,MAAM,EAAE,SAAsBA,EAAEG,EAAE,CAAC,oBAAoB,wEAAwE,SAASC,GAAgBJ,EAAEK,EAAE,CAAC,GAAGD,EAAE,KAAK;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,kBAAmuG,SAAS,KAAK,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeJ,EAAE,IAAI,CAAC,SAAS,gDAAgD,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,CAAcA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAS,CAAcE,EAAE,IAAI,CAAC,SAAS,oCAAoC,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAsBA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,yOAAyO,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,gIAAgI,CAAC,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAS,CAAcE,EAAE,IAAI,CAAC,SAAS,qCAAqC,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,4GAA4G,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,4GAA4G,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,yDAAyD,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,8FAA8F,
CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,mBAAmB,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,yKAAyK,CAAC,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAS,CAAcE,EAAE,IAAI,CAAC,SAAS,0BAA0B,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,uDAAuD,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,kFAAkF,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,qDAAqD,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,gEAAgE,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,yFAAyF,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,iFAAiF,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,iDAAiD,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAS,CAAcE,EAAE,IAAI,CAAC,SAAS,8EAA8E,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,oHAAoH,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,sGAAsG,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,sBAAsB,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kB
AAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,gIAAgI,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,MAAM,CAAC,qBAAqB,OAAO,sBAAsB,eAAe,2BAA2B,MAAM,EAAE,SAAsBA,EAAE,IAAI,CAAC,SAAS,mBAAmB,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,gDAA2C,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,unBAAunB,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,YAAY,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2NAA2N,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,4KAAyLE,EAAEC,EAAE,CAAC,KAAK,4EAA4E,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,gBAAgB,CAAC,CAAC,CAAC,EAAE,WAAwBF,EAAEC,EAAE,CAAC,KAAK,yCAAyC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,sBAAsB,CAAC,CAAC,CAAC,EAAE,qCAAkDF,EAAEC,EAAE,CAAC,KAAK,wCAAwC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,8BAA8B,CAAC,CAAC,CAAC,EAAE,+IAA+I,CAAC,CAAC,CAAC,CAAC,CAAC,EACl77HW,EAAqB,CAAC,QAAU,CAAC,UAAY,CAAC,KAAO,WAAW,YAAc,CAAC,sBAAwB,GAAG,CAAC,EAAE,UAAY,CAAC,KAAO,WAAW,YAAc,CAAC,sBAAwB,GAAG,CAAC,EAAE,UAAY,CAAC,KAAO,WAAW,YAAc,CAAC,sBAAwB,GAAG,CAAC,EAAE,UAAY,CAAC,KAAO,WAAW,YAAc,CAAC,sBAAwB,GAAG,CAAC,EAAE,UAAY,CAAC,KAAO,WAAW,YAAc,CAAC,sBAAwB,GAAG,CAAC,EAAE,UAAY,CAAC,KAAO,WAAW,YAAc,CAAC,sBAAwB,GAAG,CAAC,EAAE,SAAW,CAAC,KAAO,WAAW,YAAc,CAAC,sBAAwB,GAAG,CAAC,EAAE,mBAAqB,CAAC,KAAO,UAAU,CAAC,CAAC",
  "names": ["init_ssg_sandbox_shims", "richText", "u", "x", "p", "Link", "motion", "ComponentPresetsConsumer", "t", "CodeBlock_default", "richText1", "Youtube", "richText2", "richText3", "richText4", "richText5", "richText6", "__FramerMetadata__"]
}
