{
  "version": 3,
  "sources": ["ssg:https://framerusercontent.com/modules/YVgynkVt96a0ES9Gk8yz/mFiv8ywq5DZfpSmtbIfZ/eo4RAmtig-24.js"],
  "sourcesContent": ["import{jsx as e,jsxs as t}from\"react/jsx-runtime\";import{Link as a}from\"framer\";import{motion as i}from\"framer-motion\";import*as n from\"react\";export const richText=/*#__PURE__*/t(n.Fragment,{children:[/*#__PURE__*/e(\"p\",{children:\"In the rapidly evolving landscape of AI technologies, data labeling plays a vital role in training machine learning models. Accurate and well-labeled data is the foundation of model performance. Traditionally, manual data labeling has been the go-to method, but it's progressively becoming outdated for modern enterprises.\"}),/*#__PURE__*/e(\"p\",{children:\"We'll further explore the evolution of data labeling, from manual to automated data labeling, and finally, the superior form of automated data labeling with Large Language Models (LLMs). We'll also delve into the concept of hybrid labeling, which combines human assistance with LLMs for the best possible labeling results.\"}),/*#__PURE__*/e(\"h2\",{children:\"Manual Data Labeling: The Traditional Approach\"}),/*#__PURE__*/e(\"p\",{children:\"Manual labeling, also known as human annotation, is a fundamental procedure in data annotation and plays a crucial role in various machine learning projects and AI applications. It involves human labelers or annotators reviewing and assigning data labels or annotations to datasets based on specific criteria or guidelines.\"}),/*#__PURE__*/e(\"p\",{children:\"Manual labeling ensures the creation of high-quality labeled datasets, which are the foundation of powerful machine learning models. Annotators can apply domain expertise, contextual understanding, and common-sense reasoning to a wide range of labeling tasks.\"}),/*#__PURE__*/e(\"p\",{children:\"While this method offers a high level of precision, it is labor-intensive, time-consuming, and expensive. 
Moreover, manual labeling is increasingly considered outdated for the data labeling and machine learning applications of modern enterprises.\"}),/*#__PURE__*/e(\"h3\",{children:\"Limitations of the manual labeling process\"}),/*#__PURE__*/e(\"p\",{children:\"One of the primary drawbacks of manual labeling is its restricted scalability. With the exponential growth of data, many machine learning tasks require enormous datasets for training, testing, and validation. Manually labeling such vast amounts of data is lengthy and occasionally even impractical. The limited scalability of manual labeling makes it unsuitable for the ever-growing datasets required in AI applications today.\"}),/*#__PURE__*/e(\"p\",{children:\"Moreover, hiring and training human annotators, along with the time required for them to label data accurately, can result in significant expenses for organizations. This cost factor becomes a major deterrent, especially for startups and smaller enterprises with limited budgets.\"}),/*#__PURE__*/e(\"p\",{children:\"Manual labeling heavily depends on the availability of a skilled workforce. Organizations need to invest in recruiting, training, and managing annotators, which can be resource-intensive. However, labelers, no matter how experienced they are, are susceptible to inconsistencies. Even with guidelines and training, there can be discrepancies in how various labelers identify labels in the same data, leading to issues in the quality and reliability of labeled datasets.\"}),/*#__PURE__*/e(\"p\",{children:\"Manual data labeling is time-intensive. It can take days, weeks, or even months to annotate a large dataset, depending on its size and complexity. For repetitive labeling tasks, manual labeling is not only inefficient but also monotonous for annotators. 
This can lead to boredom and decreased accuracy over time.\"}),/*#__PURE__*/e(\"p\",{children:\"Still, manual labeling remains a critical component of data preparation in AI and machine learning projects. It excels in handling complex, nuanced, and context-dependent tasks, providing high-quality labeled datasets. However, it usually takes a lot of time, can be resource-intensive, and is subject to limited scalability and label inconsistencies.\"}),/*#__PURE__*/e(\"h2\",{children:\"Automated Data Labeling: A Step Towards Efficiency\"}),/*#__PURE__*/e(\"p\",{children:\"As organizations sought to overcome the limitations of the manual labeling process, they turned to automated data labeling solutions. These solutions often leverage rule-based algorithms and predefined guidelines to label raw data automatically. Auto-labeling is a capability commonly integrated into data annotation tools, utilizing artificial intelligence to automatically enhance or label a dataset.\"}),/*#__PURE__*/e(\"p\",{children:\"With the rise of machine learning algorithms, automating the assignment of labels to data with a high level of precision has become feasible. This process entails training a model on high-quality training datasets and then employing this model to label fresh, unlabeled data. Over time, the model refines its accuracy through exposure to more data, eventually achieving levels of precision comparable to manual labeling.\"}),/*#__PURE__*/e(\"p\",{children:\"In contrast to manual labeling, automated data labeling relies on machine learning algorithms to label data points efficiently and accurately. These algorithms can swiftly and precisely label extensive datasets, reducing the time and expense associated with manual labeling. 
Moreover, automated data labeling helps to mitigate the risk of human error and bias, yielding more uniform and dependable data annotations.\"}),/*#__PURE__*/e(\"h3\",{children:\"Advantages of Automated Labeling\"}),/*#__PURE__*/e(\"p\",{children:\"Automated labeling can significantly accelerate the labeling process, especially for large datasets, making it an essential tool for tasks related to natural language processing (NLP) or computer vision.\"}),/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"Speed and Efficiency.\"}),\" One of the primary advantages of automated labeling is its speed. Automated systems can label large volumes of data at a fraction of the time it would take human annotators. This efficiency is particularly valuable in applications that require quick data processing.\"]}),/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"Scalability.\"}),\" Automated data labeling is highly scalable. It can handle substantial datasets without hiring and training a large team of annotators. This scalability is essential in machine learning applications that require extensive data for training.\"]}),/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"Cost Savings.\"}),\" Automated labeling can significantly reduce the expenses associated with data labeling. While developing and implementing automated systems may require certain investments, the long-term savings can be substantial, especially for organizations dealing with massive datasets.\"]}),/*#__PURE__*/e(\"p\",{children:\"Automated labeling can handle the processing of large datasets rapidly, making it highly scalable and cost-effective for projects with extensive data requirements. Nonetheless, automated data labeling comes with its own set of hurdles. For instance, the precision of automated data labeling is highly contingent on the qualitative characteristics of the training data and the intricacies of labeling data tasks. 
Furthermore, certain data types may pose challenges for automated labeling, such as images featuring ambiguous backgrounds or jokes in the text.\"}),/*#__PURE__*/e(\"p\",{children:\"Human oversight and manual review may still be necessary, especially for nuanced or domain-specific labeling tasks, to ensure the highest level of accuracy and reliability of labels. So, while automation brings efficiency and consistency, it can still struggle with complex and nuanced labeling objectives, often requiring a high degree of manual tuning to achieve acceptable accuracy.\"}),/*#__PURE__*/e(\"h2\",{children:\"Labeling with LLMs: The Superior Form of Automation\"}),/*#__PURE__*/e(\"p\",{children:\"Large Language Models are advanced AI models that have revolutionized data labeling. They use huge amounts of data and elaborate algorithms to understand, interpret, and create texts in human language. LLMs possess the ability to understand context, language nuances, and even the specific objectives of a labeling assignment. They are mostly built using deep learning techniques, most notably neural networks, which allow them to process and learn from substantial amounts of textual data.\"}),/*#__PURE__*/e(\"p\",{children:\"Utilizing LLMs to automate data labeling brings fantastic speed and stable quality to the process while simultaneously lowering labeling costs significantly. This makes it a superior form of automated data labeling.\"}),/*#__PURE__*/e(\"p\",{children:\"Such advanced AI models, which are pre-trained on vast amounts of training data, have the capability to understand and generate human-like text, making them highly versatile tools for an extensive range of natural language processing tasks. 
LLMs convert raw data into labeled data by leveraging their NLP capabilities.\"}),/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(a,{href:\"https://toloka.ai/labeling-with-llms/\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"Labeling data with LLMs\"})}),\" is possible at an incredible speed, far surpassing the manual data labeling process and traditional automated systems. This accelerated pace is essential for organizations dealing with large and expanding datasets.\"]}),/*#__PURE__*/t(\"h3\",{children:[\"Key advantages of \",/*#__PURE__*/e(a,{href:\"https://toloka.ai/labeling-with-llms/\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"automated data labeling with LLMs\"})})]}),/*#__PURE__*/t(\"ul\",{style:{\"--framer-font-size\":\"18px\",\"--framer-text-alignment\":\"start\",\"--framer-text-color\":\"rgb(30, 33, 38)\",\"--framer-text-stroke-width\":\"0px\",\"--framer-text-transform\":\"none\"},children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Speed and Scalability. LLMs can label vast amounts of data in a fraction of the time it would take humans or traditional automated data labeling systems.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Cost-Efficiency. By automating labeling with LLMs, organizations can reduce labor costs and increase their labeling capacity without compromising on quality.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Adaptability. 
LLMs can handle a wide range of data labeling tasks, from simple classification to complex entity recognition, making them versatile tools for automated data labeling.\"})})]}),/*#__PURE__*/e(\"h3\",{children:\"Types of tasks performed by LLMs for automated data labeling purposes\"}),/*#__PURE__*/t(\"p\",{children:[\"LLMs can be utilized for \",/*#__PURE__*/e(a,{href:\"https://toloka.ai/labeling-with-llms/\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"automated labeling\"})}),\" in various ways:\"]}),/*#__PURE__*/t(\"ul\",{style:{\"--framer-font-size\":\"18px\",\"--framer-text-alignment\":\"start\",\"--framer-text-color\":\"rgb(30, 33, 38)\",\"--framer-text-stroke-width\":\"0px\",\"--framer-text-transform\":\"none\"},children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Text Classification. LLMs can classify text documents into predefined categories or assign labels. By fine-tuning these models on specific datasets, data scientists can create text classifiers that can automatically label text data with high accuracy;\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Named Entity Recognition (NER). LLMs can be fine-tuned for NER tasks to identify and label entities such as names, dates, locations, and more in unstructured text data;\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Sentiment Analysis. LLMs can determine the sentiment of a piece of text (e.g., positive, negative, neutral), which is valuable for tasks like customer reviews, social media sentiment analysis, and others;\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Text Generation. In some cases, LLMs can generate labels or summaries for text data, simplifying the labeling process. 
For instance, you can use LLMs to generate short product descriptions in an e-commerce dataset;\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Question Answering. LLMs can answer questions about text, making it possible to automatically generate labels by asking questions about the content of the data;\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Language Translation. LLMs can translate text between languages, which can be useful for labeling multilingual datasets.\"})})]}),/*#__PURE__*/e(\"p\",{children:\"Integration of Large Language Models has expanded the capabilities of auto-labeling, making it a valuable tool in modern workflows. By automating a significant portion of the labeling process, LLMs enhance productivity in labeling projects, allowing organizations to meet tighter deadlines and use their resources more efficiently. This automation liberates human annotators from mundane and repetitive work, allowing them to focus on more complex and nuanced aspects of the task.\"}),/*#__PURE__*/e(\"p\",{children:\"While Large Language Models offer numerous advantages in natural language understanding and generation, they also come with their fair share of challenges.\"}),/*#__PURE__*/t(\"h3\",{children:[\"Challenges of \",/*#__PURE__*/e(a,{href:\"https://toloka.ai/labeling-with-llms/\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"data labeling with LLMs\"})})]}),/*#__PURE__*/t(\"ul\",{style:{\"--framer-font-size\":\"18px\",\"--framer-text-alignment\":\"start\",\"--framer-text-color\":\"rgb(30, 33, 38)\",\"--framer-text-stroke-width\":\"0px\",\"--framer-text-transform\":\"none\"},children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Data Biases. 
LLMs can inherit biases from the data they have been trained on, potentially leading to biased labels;\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Limited to Text Data. LLMs are primarily designed for text data, so they may not be as effective for labeling other types of data, such as images or video;\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Continuous Maintenance. LLMs require continuous monitoring and maintenance to ensure that they provide accurate and up-to-date labels, as the model's performance may degrade over time;\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Overconfidence. LLMs can exhibit overconfidence in their predictions, providing labels with high certainty even when they are incorrect.\"})})]}),/*#__PURE__*/e(\"p\",{children:\"In practice, addressing these challenges involves a hybrid approach that combines the strengths of LLMs for automated data labeling with human labelers for validation and correction. This balance helps leverage the efficiency of LLMs while ensuring the accuracy and quality of labeled data, particularly in complex or sensitive domains.\"}),/*#__PURE__*/e(\"h2\",{children:\"Hybrid Labeling: Combining Human Expertise with LLMs\"}),/*#__PURE__*/e(\"p\",{children:\"Although LLMs offer unparalleled efficiency and quality in automated data labeling, there are still scenarios where human expertise is essential. 
Hybrid labeling is a powerful tool that combines the strengths of both humans and LLMs.\"}),/*#__PURE__*/t(\"p\",{children:[\"Platforms like Toloka offer \",/*#__PURE__*/e(a,{href:\"https://toloka.ai/labeling-with-llms/\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"hybrid labeling solutions\"})}),\", allowing organizations to make use of the precision of human data labeling alongside the speed and efficiency of LLMs. In this approach, LLMs create pre-labeled data, and human annotators review and refine the labels, ensuring accuracy and compliance with specific requirements.\"]}),/*#__PURE__*/e(\"p\",{children:\"In practice, the question of who should label raw data isn't always straightforward. Toloka's approach is iterative. Data labeling isn't a one-time task; it's a continuous process of improvement. This iterative approach helps fine-tune LLMs and improves the overall quality of labeled data over time.\"}),/*#__PURE__*/t(\"p\",{children:[\"Here's how Toloka optimizes \",/*#__PURE__*/e(a,{href:\"https://toloka.ai/labeling-with-llms/\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"data labeling pipelines with LLMs\"})}),\":\"]}),/*#__PURE__*/t(\"ol\",{style:{\"--framer-font-size\":\"18px\",\"--framer-text-alignment\":\"start\",\"--framer-text-color\":\"rgb(30, 33, 38)\",\"--framer-text-stroke-width\":\"0px\",\"--framer-text-transform\":\"none\",\"--list-style-type\":\"unset\"},children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"The LLM processes data and suggests labels for human annotators;\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Qualified annotators step in to label edge cases and other instances that require nuanced judgment. 
Their domain knowledge ensures accurate labeling in complex scenarios;\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Humans conduct selective evaluations of LLM-generated annotations. This step helps identify and correct any discrepancies or errors in the initial labels;\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Expert annotators provide quality assurance and feedback. Their expertise ensures that the labeled data meets the highest standards of accuracy and relevance;\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Toloka collaborates with domain experts who bring field-specific knowledge to the labeling process. This expertise is essential for tasks that require a deep understanding of the subject.\"})})]}),/*#__PURE__*/e(\"h3\",{children:\"Benefits of Hybrid Labeling\"}),/*#__PURE__*/t(\"ul\",{style:{\"--framer-font-size\":\"18px\",\"--framer-text-alignment\":\"start\",\"--framer-text-color\":\"rgb(30, 33, 38)\",\"--framer-text-stroke-width\":\"0px\",\"--framer-text-transform\":\"none\"},children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"High Precision. Human annotators can handle complex or ambiguous cases, ensuring the highest level of accuracy;\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Scalability. LLMs provide the initial labeling, allowing organizations to process large datasets quickly, while humans handle the final quality control;\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Flexibility. 
Organizations can customize the level of human involvement based on the nature of the labeling task, optimizing resources.\"})})]}),/*#__PURE__*/t(\"h3\",{children:[\"What is required for automated \",/*#__PURE__*/e(a,{href:\"https://toloka.ai/labeling-with-llms/\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"data labeling with LLMs\"})}),\" to provide the best performance possible?\"]}),/*#__PURE__*/e(\"p\",{children:\"LLMs demonstrate remarkable performance in various natural language processing (NLP) tasks, often achieving or even surpassing human-level performance. For example, they excel in tasks like language translation, text summarization, sentiment analysis, and named entity recognition. However, their performance depends on the specific task and the quality of guidance or training they receive.\"}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"strong\",{children:\"Guidance and Fine-Tuning\"})}),/*#__PURE__*/e(\"p\",{children:\"While LLMs offer speed and efficiency, they may not perform optimally straightaway. To achieve high accuracy, LLMs often require guidance in the form of fine-tuning. Fine-tuning involves training the model on a smaller, task-specific dataset to adapt it to a particular domain or labeling task, improving its performance. It is often necessary to ensure accurate automated data labeling with LLMs. Without proper guidance, their outputs may be unreliable.\"}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"strong\",{children:\"Human Oversight\"})}),/*#__PURE__*/e(\"p\",{children:\"LLMs can handle many tasks autonomously, but human oversight is still crucial. Humans can review and correct LLM-generated labels. LLMs are powerful language models, but they are not infallible. They can make errors, especially when dealing with ambiguous or complex data. 
Human oversight helps catch and correct these errors, ensuring the final labels are of high quality and accuracy. Reviewers can verify the correctness of LLM-generated labels by comparing them to ground truth or existing labeled data, improving the overall data quality.\"}),/*#__PURE__*/e(\"p\",{children:\"Fine-tuning and human assistance are often used in tandem. Fine-tuning prepares the LLM to be more task-specific and aligned with guidelines, while human assistance provides the critical human oversight necessary to ensure the quality, fairness, and accuracy of the labels generated by the LLM.\"}),/*#__PURE__*/t(\"h3\",{children:[\"Why \",/*#__PURE__*/e(a,{href:\"https://toloka.ai/labeling-with-llms/\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"Auto Data Labeling with LLMs\"})}),\" Needs Human Guidance\"]}),/*#__PURE__*/e(\"p\",{children:\"Labeling with Large Language Models is a powerful and efficient approach to data labeling, but it's not always a completely standalone solution. Human annotators can serve as a quality control mechanism by reviewing and validating LLM-generated labels and catching any errors or inaccuracies. There are several reasons why labeling with LLMs may benefit from human assistance:\"}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"strong\",{children:\"Handling Edge Cases\"})}),/*#__PURE__*/e(\"p\",{children:\"LLMs may struggle with rare or unusual cases that do not conform to standard patterns. Such instances are called edge cases and they often deviate from the norm and can be challenging to label accurately due to their uniqueness or complexity. Edge cases are characterized by their low frequency or unpredictability. LLMs may struggle to assign accurate labels to edge cases, as they rely on statistical patterns and may lack specific knowledge about these unique instances. 
Humans can handle edge cases effectively, preventing mislabeling and ensuring comprehensive coverage of the data.\"}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"strong\",{children:\"Ambiguity Handling\"})}),/*#__PURE__*/e(\"p\",{children:\"LLMs may encounter situations where data is ambiguous. Many words and phrases have multiple meanings depending on the context. LLMs might select a meaning that seems most probable based on statistical patterns, but humans can infer the intended meaning more accurately by drawing on their knowledge of the subject matter. Human annotators can help disambiguate such cases, ensuring the correct label is applied.\"}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"strong\",{children:\"Bias Mitigation\"})}),/*#__PURE__*/e(\"p\",{children:\"LLMs can inherit biases present in their training dataset, potentially leading to biased labels. Human guidance is essential for recognizing and correcting biased or inappropriate labeling to ensure fairness and ethical considerations. Forming diverse and inclusive annotation teams with a variety of perspectives can contribute to reducing biases. Diverse teams are more likely to catch and address biases effectively.\"}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"strong\",{children:\"Ethical and Sensitive Content\"})}),/*#__PURE__*/e(\"p\",{children:\"LLMs may not always be equipped to handle content that is sensitive, controversial, or ethically challenging. Human guidance ensures that the labeling process adheres to contemporary ethical standards and sensitivity to cultural shifts. Annotators can exercise judgment and make appropriate decisions in such cases.\"}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"strong\",{children:\"Adaptation to Specific Requirements\"})}),/*#__PURE__*/e(\"p\",{children:\"Some labeling tasks have unique requirements that LLMs may not fully understand. 
Human annotators can tailor the labeling process to meet these specific needs, ensuring the data is labeled according to the desired criteria. In some projects, particularly those involving specialized domains or historical references, human annotators with expertise in the subject matter are invaluable. They can accurately label data based on their domain knowledge, which LLMs may lack.\"}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"strong\",{children:\"Evolution of Language and Context\"})}),/*#__PURE__*/e(\"p\",{children:\"Language is dynamic, and contextual nuances evolve over time. LLMs are trained on vast datasets, and their pre-training data can quickly become outdated as language trends and cultural contexts change. Human guidance is essential to bridge these gaps and ensure that labels remain contextually relevant. Feedback can be used to fine-tune LLMs over time, helping them adapt to changing language trends and evolving contexts. This iterative process of improvement through human guidance allows LLMs to remain relevant.\"}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"strong\",{children:\"Slang and informal language\"})}),/*#__PURE__*/e(\"p\",{children:\"LLMs can struggle with slang and informal language. This is because slang often involves unconventional word usage, idiomatic expressions, or cultural references that may not be well-represented in their training data. Oversight and review are valuable in these cases, as at times only humans with knowledge of the specific slang can correct and provide context to ensure accurate labeling or understanding of the data. 
Additionally, domain-specific fine-tuning or training on specialized input data that includes slang can improve an LLM's performance in handling such language variations.\"}),/*#__PURE__*/e(\"p\",{children:\"LLMs are unquestionably powerful tools for automating data labeling, but potential issues related to their use, stemming from edge cases, informal or sensitive content, unique labeling requirements, ambiguous or outdated pre-training data, and evolving language trends, underline the importance of human guidance. Combining automation capabilities with human oversight strikes a balance between efficiency and quality, ensuring that data labels are accurate, reliable, and up-to-date while adhering to ethical standards.\"}),/*#__PURE__*/e(\"p\",{children:\"Moreover, learning from edge cases, ambiguities, and other challenging instances is essential for model improvement. Human feedback on how to handle specific cases can be used to enhance LLMs and make them more robust over time.\"}),/*#__PURE__*/e(\"h2\",{children:\"Conclusion\"}),/*#__PURE__*/e(\"p\",{children:\"Data labeling is a crucial step in training machine learning models. While labeling data manually has been a traditional method of annotation, it is becoming increasingly outdated for modern enterprises due to its limitations in scalability, cost-efficiency, accuracy, and speed. Automated approaches, especially those incorporating Large Language Models, have emerged as superior alternatives that address these shortcomings and pave the way for more efficient and effective data labeling processes in the era of artificial intelligence and machine learning.\"}),/*#__PURE__*/t(\"p\",{children:[\"LLMs bring speed, stability, and cost-efficiency to data labeling, making them the future of automated data annotation. 
Hybrid labeling, combining human expertise with LLMs, represents a pragmatic approach that leverages the strengths of both to achieve the highest levels of precision and scalability. Platforms like Toloka offer a seamless \",/*#__PURE__*/e(a,{href:\"https://toloka.ai/labeling-with-llms/\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"integration of LLMs and human annotators\"})}),\", allowing organizations to unlock the full potential of data labeling.\"]})]});export const richText1=/*#__PURE__*/t(n.Fragment,{children:[/*#__PURE__*/e(\"p\",{children:\"Researchers and engineers are constantly striving to improve the capabilities of AI systems. One significant advancement in recent years has been the development of multimodal models, which have the potential to revolutionize the way AI understands and interacts with the world. In this article, we delve into the multifaceted world of multimodal models, unveiling their architecture, applications, and their potential to bridge the gap between AI and human understanding.\"}),/*#__PURE__*/e(\"h2\",{children:\"What is Modality?\"}),/*#__PURE__*/e(\"p\",{children:\"Before exploring the intricacies and applications of multimodal models let's analyze what modality refers to. In the context of multimodal models and artificial intelligence, modality refers to the various types of data or information that a system can process or understand. Common modalities include:\"}),/*#__PURE__*/t(\"ul\",{style:{\"--framer-font-size\":\"18px\",\"--framer-text-alignment\":\"start\",\"--framer-text-color\":\"rgb(30, 33, 38)\",\"--framer-text-stroke-width\":\"0px\",\"--framer-text-transform\":\"none\"},children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Text. This includes written or spoken language, such as text documents, transcripts, or speech. 
Natural language processing (NLP) models are specialized in handling the text modality;\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Video. This modality combines both visual and auditory data, typically involving moving images and accompanying sounds. Models specialized in computer vision (CV) and specifically in video analysis can interpret this combined data;\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Images. This modality encompasses visual data, such as photographs, drawings, or any other type of visual content. Computer vision models analyze images. Visual data is examined for object recognition, image classification, and other CV tasks;\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Sensor Data. This type of data is collected from various sensors, such as accelerometers, gyroscopes, GPS, or environmental sensors, which are crucial in applications like autonomous vehicles;\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Audio. The audio modality involves sound data, including spoken words, music, or environmental sounds. Models for tasks like speech recognition and audio analysis are focused on this modality.\"})})]}),/*#__PURE__*/e(\"p\",{children:\"The pertinent question in multimodal models is how to efficiently combine data from different modalities to maximize their utility in various applications.\"}),/*#__PURE__*/e(\"h2\",{children:\"What is So Unique About Multimodal AI?\"}),/*#__PURE__*/e(\"p\",{children:\"Multimodal models represent a significant advancement in the field of artificial intelligence. These machine learning models, also known as multimodal deep learning models, have gained immense popularity and recognition due to their ability to process and understand information from multiple modalities or sources. 
They are just starting to emerge but are already getting a lot of attention and show promising breakthroughs in how we interact with smart systems.\"}),/*#__PURE__*/e(\"p\",{children:\"What sets multimodal models apart is their capability to integrate and fuse data from different modalities into a unified representation. This fusion process is often achieved through complex neural network architectures, with the transformer model being a prominent choice. The result is a model that can capture intricate relationships and dependencies between various types of data.\"}),/*#__PURE__*/e(\"p\",{children:\"Multimodal models excel in tasks that require a holistic understanding of the context, as they process information from multiple sources simultaneously. This allows them to generate responses, make predictions, and perform various AI-related tasks with a depth of comprehension that was previously challenging to achieve.\"}),/*#__PURE__*/e(\"p\",{children:\"AI models that can only handle one modality are called unimodal AI. Their input is limited to a specific source of information, such as text, images, audio, or sensor data. Unimodal AI applications include traditional NLP models that work exclusively with text, image recognition models that identify objects in images, or speech recognition systems that transcribe spoken language and so on.\"}),/*#__PURE__*/e(\"h2\",{children:\"How Multimodal Models Work\"}),/*#__PURE__*/e(\"p\",{children:\"Multimodal AI systems can understand many different types of data, such as words, pictures, sounds, and video. To make sense of all this data, the multimodal AI employs multiple unimodal neural networks for each data type. So, there's a part that's great at understanding pictures and another part that's great at understanding words.\"}),/*#__PURE__*/e(\"p\",{children:\"These neural networks extract important features from the input data. 
They are often constructed using three main modules that work in conjunction to enable the model to process and understand information from diversified modalities. These components are integral to the successful operation of such systems:\"}),/*#__PURE__*/e(\"h3\",{children:\"Input module\"}),/*#__PURE__*/e(\"p\",{children:\"Unimodal encoders are responsible for feature extraction and understanding within their specific modality. These networks are specifically designed and trained to process the data from their respective modality. For example, a convolutional neural network (CNN) may process image data, while a recurrent neural network (RNN) may deal with text data. Each unimodal network is trained independently on a dataset relevant to its modality.\"}),/*#__PURE__*/e(\"h3\",{children:\"Fusion Module\"}),/*#__PURE__*/e(\"p\",{children:\"Now, these obtained attributes need to be mixed. That's where the multimodal data fusion method comes in. It takes the extracted features from the audio, image and/or the textual processing neural networks and blends them into a single understanding or shared feature representation. The primary objective of multimodal fusion is to bring together information from diverse modalities in a way that allows the AI system to understand the relationships and dependencies between them.\"}),/*#__PURE__*/e(\"p\",{children:\"This holistic understanding is essential for tasks that require insights from multiple sources. For example, understanding an image's content is enhanced when it's combined with textual descriptions. Simply speaking, this is the phase where the multimodal AI figures out how the pictures, video, text, and/ or audio recordings relate to each other.\"}),/*#__PURE__*/e(\"h3\",{children:\"Output Module\"}),/*#__PURE__*/e(\"p\",{children:\"A multimodal classifier is a component responsible for making predictions or decisions based on the fused data representation of information from numerous modalities. 
It is a crucial part of the multimodal model that determines the final output or action the system should take. After all these phases, a multimodal AI system can understand and utilize combinations of data from various sources, such as text, images, audio, or video.\"}),/*#__PURE__*/e(\"h2\",{children:\"Benefits of Multimodal AI\"}),/*#__PURE__*/e(\"p\",{children:\"Here are some of the key advantages of multimodal AI:\"}),/*#__PURE__*/e(\"h3\",{children:\"Enhanced Understanding\"}),/*#__PURE__*/e(\"p\",{children:\"Multimodal AI can provide a more thorough and subtle understanding of data by considering information from multiple sources. By combining these diverse data points, the model gains a richer context for analysis. This context allows the model to understand the content from various perspectives and consider nuanced details that may not be evident with a unimodal approach.\"}),/*#__PURE__*/e(\"p\",{children:\"Multimodality gives the system the ability to get what's going on in a conversation or in the data it's dealing with. For example, if you show a model a picture and some words, it can figure out what's happening by looking at both the picture and the words together. Contextual understanding is like giving the algorithm the power to look at the big picture, not just the words, and that's a big deal in making AI more human. In natural language processing, this is crucial for accurate language comprehension and generating pertinent responses.\"}),/*#__PURE__*/e(\"h3\",{children:\"Real-life Conversations\"}),/*#__PURE__*/e(\"p\",{children:\"Just a couple of years ago, AI assistants sounded robotic and were not very good at understanding. That's because they usually only understood one way of communicating, for instance just text or speech. 
Now, multimodal models make it easier for machines to interact with people more naturally.\"}),/*#__PURE__*/e(\"p\",{children:\"A multimodal virtual assistant, for instance, can hear your voice commands, but it also pays attention to your face and how you're moving. This way, it can identify your intentions. So, the experience becomes more personalized and exciting.\"}),/*#__PURE__*/e(\"h3\",{children:\"Improved Accuracy\"}),/*#__PURE__*/e(\"p\",{children:\"Multimodal AI models can enhance accuracy and reduce errors in their outcomes by bringing information from diverse modalities together. In unimodal AI systems, inaccuracies can arise from the limitations of a single modality. Multimodal AI can help identify and correct errors by comparing and validating information across modalities. By integrating various modalities, multimodal AI models can leverage the strengths of each, leading to a more comprehensive and accurate understanding of the data.\"}),/*#__PURE__*/e(\"p\",{children:\"The advent of deep learning and neural networks has played a pivotal role in enhancing multimodal machine learning accuracy. These models have shown remarkable capabilities in extracting intricate features from data. When extended to handle multiple modalities simultaneously, deep learning techniques enable the creation of complex, interconnected representations that capture the nuances of the information.\"}),/*#__PURE__*/e(\"h2\",{children:\"Challenges of Multimodal Machine Learning Models\"}),/*#__PURE__*/e(\"p\",{children:\"Building multimodal model architectures presents a set of challenges that arise from the need to combine and process information from different modalities effectively. These challenges include:\"}),/*#__PURE__*/e(\"h3\",{children:\"Fusion Mechanisms\"}),/*#__PURE__*/e(\"p\",{children:\"Deciding how to effectively merge information from different modalities is a non-trivial task. Selecting the right fusion mechanism depends on the task and the data. 
There are several methods for performing fusion, including early fusion, late fusion, and hybrid fusion.\"}),/*#__PURE__*/e(\"p\",{children:\"In early fusion, the data from each source is brought together and integrated before any further processing or analysis takes place. Late fusion takes a different approach by processing each modality separately and then combining their outputs at a later stage of the decision-making process. Hybrid fusion represents a middle ground between early and late fusion. In this approach, some modalities are fused at the input level (early fusion), while others are combined at a later stage (late fusion).\"}),/*#__PURE__*/e(\"p\",{children:\"When selecting a fusion method, it is important to ensure that the fusion process retains the most relevant information from each modality while minimizing the introduction of noise or irrelevant information. This entails a careful consideration of the interplay between the data modalities and the specific objectives of the machine learning model.\"}),/*#__PURE__*/e(\"h3\",{children:\"Co-learning\"}),/*#__PURE__*/e(\"p\",{children:\"Co-learning introduces specific challenges that are related to simultaneously training on varied modalities or tasks. First of all, it can lead to interference, where learning one modality or task negatively affects the performance of a model on other modalities. This in turn can lead to catastrophic forgetting. It can occur when learning new tasks or modalities causes the model to forget how to perform tasks in the same or another modality.\"}),/*#__PURE__*/e(\"p\",{children:\"Another hurdle includes the need to create models that can effectively handle the inherent heterogeneity and variability present in data from different modalities. 
This means that the model must be adaptable enough to process diverse types of information.\"}),/*#__PURE__*/e(\"h3\",{children:\"Translation\"}),/*#__PURE__*/e(\"p\",{children:\"Multimodal translation is a complex task that involves translating content that spans multiple modalities from one language to another or between different modalities. Multimodal AI translation includes tasks such as image-to-text, audio-to-text, or text-to-image generation, where the meaning and content may differ significantly.\"}),/*#__PURE__*/e(\"p\",{children:\"Ensuring that the model can understand the semantic content and relationships between text, audio, and visuals is a significant challenge. Building effective representations that can capture such multimodal data is also challenging. These representations should be aligned and enable meaningful interactions between modalities.\"}),/*#__PURE__*/e(\"h3\",{children:\"Representation\"}),/*#__PURE__*/e(\"p\",{children:\"Multimodal representation means turning information from different sources, like pictures, words, or sounds, into a format that a model can understand, usually a vector or tensor. When we work with data from different sources, some parts of it are more useful than others, and modalities may be complementary or redundant. The goal is to organize it in a way that promotes a better understanding of that data, so we get the most useful information without unnecessary clutter.\"}),/*#__PURE__*/e(\"p\",{children:\"But it's not always easy, as there might be mistakes in the data, and some of it might be missing. Moreover, each of these data types has its unique way of being shown to a computer, i.e. its representation. For instance, images are like grids of numbers, text is like scattered vectors, and sound is a wave line. 
Since these representations are so different, it's a challenge to create a single computer model that can understand and use them all effectively.\"}),/*#__PURE__*/e(\"h3\",{children:\"Alignment\"}),/*#__PURE__*/e(\"p\",{children:\"Alignment is making sure that different types of information, like video and sounds or images and text, match up correctly. When information from different sources and modalities doesn't line up properly, it can cause issues.\"}),/*#__PURE__*/e(\"p\",{children:\"Aligning modalities can be tricky. First, there might not be clear instructions for the model because there aren't enough annotated datasets. To teach a machine how to align data, we need lots of examples where the matching is done correctly. Second, making rules to compare different types of information is not easy because figuring out how similar data from one source is to data from another source can be tricky. Third, there can be more than one right way for alignment.\"}),/*#__PURE__*/e(\"h2\",{children:\"Multimodal AI Applications\"}),/*#__PURE__*/e(\"p\",{children:\"Despite all the complications, multimodal AI continues to evolve and has many applications. New models are introduced and they become smarter with each new release. Some of their capabilities include:\"}),/*#__PURE__*/e(\"h3\",{children:\"Visual Question Answering (VQA)\"}),/*#__PURE__*/e(\"p\",{children:\"VQA systems enable users to ask questions about images or videos, and the AI system provides answers. This means that instead of merely recognizing what's in a picture, these AI systems can also understand and respond to questions related to that image. 
VQA is a dynamic and evolving field that combines the power of computer vision and natural language processing to enable AI systems to understand and interact with the visual world in a more human-like manner.\"}),/*#__PURE__*/e(\"h3\",{children:\"Image and Video Captioning\"}),/*#__PURE__*/e(\"p\",{children:\"Multimodal AI can generate textual descriptions for images and videos. This is valuable for content indexing, accessibility, and assisting visually impaired individuals. Image and video captioning is also a field at the intersection of computer vision and natural language processing.\"}),/*#__PURE__*/e(\"p\",{children:\"Image captioning AI systems analyze the visual content of an image by identifying objects, scenes, and other elements within the image. Video captioning takes image captioning a step further by understanding not only the content of individual frames but also the temporal dynamics and relationships between frames in a video. It aims to generate coherent narratives or descriptions that cover the entire video sequence.\"}),/*#__PURE__*/e(\"h3\",{children:\"Gesture Recognition\"}),/*#__PURE__*/e(\"p\",{children:\"Gesture recognition is a field of artificial intelligence and computer vision that involves the identification, interpretation, and understanding of human gestures and movements, often for the purpose of interacting with and controlling computers or other devices. Such systems use various sensors and data sources to capture and interpret gestures. These may include cameras, depth sensors, accelerometers, gyroscopes, and more. Gesture recognition often relies on computer vision techniques to track and interpret movements.\"}),/*#__PURE__*/e(\"h3\",{children:\"Natural Language for Visual Reasoning (NLVR)\"}),/*#__PURE__*/e(\"p\",{children:\"NLVR assesses the capabilities of AI models in comprehending and reasoning about natural language descriptions of visual scenes. 
The primary objective is to determine whether an AI model can correctly identify the image that aligns with a given textual description of a scene. Given two images, one that matches the description and one that does not, the model must make the correct choice.\"}),/*#__PURE__*/e(\"p\",{children:'This involves understanding the text\\'s semantics and reasoning about the visual content in both images to make an accurate decision. NLVR systems often employ textual descriptions with complex linguistic structures, such as spatial relations (\"next to,\" \"above\"), object properties (\"red,\" \"large\"), and context-dependent statements that may involve logical reasoning.'}),/*#__PURE__*/e(\"h2\",{children:\"Examples of Multimodal Learning Models\"}),/*#__PURE__*/e(\"p\",{children:\"Some prominent multimodal architectures include:\"}),/*#__PURE__*/t(\"ul\",{style:{\"--framer-font-size\":\"18px\",\"--framer-text-alignment\":\"start\",\"--framer-text-color\":\"rgb(30, 33, 38)\",\"--framer-text-stroke-width\":\"0px\",\"--framer-text-transform\":\"none\"},children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"DALL-E by OpenAI can generate images from textual prompts. You can describe a concept or scene in words, and the model will produce an image that corresponds to that description;\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:'CLIP, which stands for \"Contrastive Language-Image Pre-training,\" is trained to understand both images and text simultaneously. 
It can associate images with textual descriptions and vice versa;'})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Stable Diffusion is a text-to-image model, that generates diverse and visually appealing high-quality images;\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"KOSMOS-1 by Microsoft is a large language model capable of various tasks related to processing visuals and textual data, like VQA, image captioning, descriptions for images, and so on;\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Flamingo by Google DeepMind is a multimodal visual language model that can analyze and process videos by describing their content.\"})})]}),/*#__PURE__*/e(\"h2\",{children:\"Conclusion\"}),/*#__PURE__*/e(\"p\",{children:\"Multimodal models are pushing the boundaries of AI capabilities by allowing machines to understand and interact with the world in a more human-like way. As they continue to evolve, we can expect to see even more remarkable applications across various industries, offering the potential to revolutionize the way we live and work.\"})]});export const richText2=/*#__PURE__*/t(n.Fragment,{children:[/*#__PURE__*/e(\"p\",{children:\"Pre-training a model is typically a labor-intensive, time-consuming, and expensive endeavor. However, pre-trained AI models need to be further tuned to perform specific tasks. For instance, an AI will not be able to generate a story in a certain style if it has not been previously trained on texts in that style.\"}),/*#__PURE__*/e(\"p\",{children:\"This is where Low-Rank Adaptation models, or LoRa models, come into play. LoRa models provide a one-of-a-kind solution to the challenges posed by data adaptation in machine learning (ML). 
In this article, we will explore what LoRA models are, how they work, and their applications, with some examples of their use.\"}),/*#__PURE__*/e(\"h2\",{children:\"What is a LoRA Model?\"}),/*#__PURE__*/t(\"p\",{children:[\"Low-Rank Adaptation Models, or LoRA models, are a class of ML models designed to adapt and learn from new data efficiently. \",/*#__PURE__*/e(\"em\",{children:\"They are relatively small models that apply minor modifications to standard checkpoint models to achieve better efficiency and adaptability for specific tasks.\"})]}),/*#__PURE__*/e(\"p\",{children:\"Unlike traditional models that require extensive retraining when new data arrives, LoRA models are engineered to adjust dynamically to the evolving information landscape while keeping computational complexity low. They achieve this through innovative techniques in data representation and adaptability.\"}),/*#__PURE__*/e(\"p\",{children:\"LoRA, or Low-Rank Adaptation, is an approach that enables parameter-efficient fine-tuning of Large Language Models. In the early days of LoRA, the technique could be applied only to LLMs; now LoRA training is also applied to image-generating models such as Stable Diffusion. Essentially, such methods produce a fine-tuned model that has been trained on far fewer trainable parameters.\"}),/*#__PURE__*/e(\"h2\",{children:\"Why Do We Need a Fine-Tuning Process?\"}),/*#__PURE__*/e(\"p\",{children:\"Fine-tuning is an integral component of model training, as it allows a model to adapt to a particular type of application or project. 
Before considering why it is necessary, we need to figure out what fine-tuning is.\"}),/*#__PURE__*/e(\"h3\",{children:\"Essence of Fine-Tuning\"}),/*#__PURE__*/e(\"p\",{children:\"Fine-tuning in neural networks is a process where you take a pre-trained model, which has already learned useful features or knowledge from a large dataset, and then you further train it on a smaller, specific dataset for a particular task. This task could be anything from image classification to natural language understanding. It is a continued training process for a model architecture, but using new, task-specific data.\"}),/*#__PURE__*/e(\"p\",{children:\"Fine-tuning involves making small, focused adjustments to the pre-trained model's weights to create fine-tuned weights that are specialized for a particular task. Weights in a neural network are like adjustable parameters that control the flow of information and play a crucial role in how the network learns and makes predictions. The values of these weights are learned from data to improve the network's performance on a specific project.\"}),/*#__PURE__*/e(\"p\",{children:\"Instead of training a new model from scratch for a specific task, you adapt or fine-tune the pre-trained model by modifying its parameters to better suit the new task. The pre-trained model's existing knowledge, represented in its learned parameters, serves as a valuable starting point. The fine-tuning process makes small changes to this knowledge to make it more relevant and accurate for its new purpose.\"}),/*#__PURE__*/e(\"p\",{children:\"For example, you might start with a pre-trained image recognition model that knows about common objects. Then, with fine-tuning, you can adapt it to recognize specific types of flowers. 
The pre-trained model already understands things like edges, colors, and shapes, so it's easier to teach it to recognize flower types.\"}),/*#__PURE__*/e(\"p\",{children:\"Fine-tuning is a form of transfer learning, where knowledge gained from one task is transferred and applied to another related task. It's inspired by the idea that learning from one experience can help improve performance on a new, yet similar, experience. The model retains the knowledge learned from the source data, but it becomes specialized for the target task. Let's take a look at how fine-tuning works with LLMs as an example.\"}),/*#__PURE__*/e(\"h3\",{children:\"How Does Fine-Tuning Work?\"}),/*#__PURE__*/e(\"p\",{children:\"Fine-tuning large models can be a resource-demanding process. However, it allows you to leverage the impressive language understanding and/or image generation capabilities of these models for task-specific applications. That's why fine-tuning is not to be discarded.\"}),/*#__PURE__*/e(\"p\",{children:\"For example, huge models like LLaMA, Pythia, and MPT-7B have already learned a lot about words from tons of text. People can take these models and teach them to do specific tasks. They already possess their pre-trained weights, and data scientists can fine-tune them.\"}),/*#__PURE__*/e(\"p\",{children:\"These models possess many layers, and each layer has some special trainable parameters. They can change a bit to learn new things. When data scientists teach a large model new tasks, it adjusts the weights of its parameters based on the new data. Data scientists show the model some examples by feeding it a new dataset during the fine-tuning process, and it guesses what comes next. After predicting the next token, the model compares this output with the true data, also known as the ground truth.\"}),/*#__PURE__*/e(\"p\",{children:\"In that way, the Large Language Model changes the values of its weights to be able to predict or generate true data. 
This process involves multiple iterations: the model proceeds through these stages over and over again to become better at its specifically tailored purpose. After several such operations, which may take a very long time, the model will be ready for its intended application; for example, it will become a banking or a medical chatbot.\"}),/*#__PURE__*/e(\"p\",{children:\"However, if fine-tuning can be performed on the entire model, you may wonder why LoRA exists.\"}),/*#__PURE__*/e(\"h2\",{children:\"Why Is LoRA Necessary?\"}),/*#__PURE__*/e(\"p\",{children:\"Why did methods like LoRA emerge? The simple answer is that standard, full-parameter fine-tuning is difficult in all sorts of ways. The final fine-tuned model comes out as bulky as its pre-trained version, so if the results of training are not to your liking, the entire model will have to be re-trained to make it work better. With the advent of the LoRA approach to fine-tuning, even ordinary PCs with consumer GPUs can accomplish fine-tuning.\"}),/*#__PURE__*/e(\"p\",{children:\"Pre-trained models, such as large deep neural networks, can have millions or even billions of parameters. Fine-tuning a model with a large number of parameters can be computationally expensive. It requires significant processing power, often involving powerful GPUs and other specialized hardware. Storing such models also entails a significant amount of hard drive space. The cost of electricity, hardware maintenance, and the equipment itself must be taken into account as well.\"}),/*#__PURE__*/e(\"p\",{children:\"Fine-tuning requires loading and working with these massive models, which puts a heavy burden on GPU memory. This is because fine-tuning typically involves reading and processing a lot of data, which can be a bottleneck in the GPU's processing pipeline. 
Loading large datasets from storage into GPU memory can be slow, especially for very large models.\"}),/*#__PURE__*/e(\"p\",{children:\"During fine-tuning, the model adjusts its parameters by computing gradients using backpropagation. Backpropagation, short for \\\"backward propagation of errors,\\\" is an algorithm used to compute gradients and update the model's parameters. The gradient is a vector of partial derivatives that is used to update the model's parameters during training to reduce the error between predictions and actual targets. This backpropagation involves lots of multiplications and memory operations, which can be slow on GPUs.\"}),/*#__PURE__*/e(\"p\",{children:\"Moreover, fine-tuning deep learning models can tie up the GPU for extended periods, making it less available for other tasks. Such full-parameter fine-tuning can strain available resources, leading to resource contention if multiple processes or tasks are running on the same hardware simultaneously.\"}),/*#__PURE__*/e(\"h3\",{children:\"Full-Parameter Fine-Tuning Is More Demanding Than LoRA\"}),/*#__PURE__*/e(\"p\",{children:\"It is possible to perform full-parameter fine-tuning, but not everyone can afford it. Mostly it is done by big corporations that possess huge resources and capabilities. These organizations often have the financial means to invest in high-end hardware, employ experienced teams of machine learning experts, and access vast amounts of data for training.\"}),/*#__PURE__*/e(\"p\",{children:\"Sure, it can lead to excellent results. But performing full-parameter fine-tuning, as previously mentioned, can be resource-intensive and expensive. This makes it less accessible to smaller companies, startups, and individuals who may have more limited budgets, computational resources, and access to large datasets.\"}),/*#__PURE__*/e(\"p\",{children:\"With pre-trained models, a complete or full fine-tuning, where all parameters are re-trained, makes less sense. Here's why. 
Large computer models, like those for language or images, learn a lot of general ideas about their area of expertise. They're like jacks of all trades, knowing a bit of everything. They can handle different jobs quite well without much extra training.\"}),/*#__PURE__*/e(\"p\",{children:\"But when we want them to get really good at one specialized task or deal with certain data, we don't need to teach them everything from scratch. They already have most of the tools they need to learn more. We just need to tweak a few things to make them experts. So, instead of making a lot of big changes, we can make small, focused adjustments. These small adjustments can be thought of as a simple set of changes.\"}),/*#__PURE__*/e(\"p\",{children:\"This is where parameter-efficient methods like low-rank adaptation come in. Data scientists apply LoRA to reduce the computational and memory requirements during fine-tuning of neural networks. All of these improvements help to facilitate and speed up such additional training processes.\"}),/*#__PURE__*/e(\"p\",{children:\"For those with constrained resources, there are alternative approaches, such as LoRA, which allow them to benefit from pre-training and achieve good results with less computational cost and data requirements.\"}),/*#__PURE__*/e(\"h3\",{children:\"LoRA Model Approach to Fine-Tuning\"}),/*#__PURE__*/e(\"p\",{children:\"To understand how LoRA works, let's figure out how the weight system in a model is organized. Think of our model as a massive group of big spreadsheets or matrices with lots of numbers in them. These spreadsheets represent our model's knowledge of language or model parameters.\"}),/*#__PURE__*/e(\"p\",{children:'Each spreadsheet has a \"rank,\" which tells us how many unique columns it has. A unique column is like a special piece of information that can\\'t be made by combining other columns. They are called linearly independent columns. However some columns that are dependent are not unique. 
Therefore, they can be made by mixing other columns.'}),/*#__PURE__*/e(\"p\",{children:\"The concept of the LoRA model says that when we're training our base model for a specific task, we don't need all the information in those spreadsheets (matrices). We can get rid of some columns and still keep most of the useful data.\"}),/*#__PURE__*/e(\"p\",{children:\"In other words, the aim of this approach is to create a LoRA model with a compressed number of columns i.e. lower-rank matrices. That way the number of parameters will decrease and they will be easier to manage. Such LoRA parameters are much fewer than the initial base model weights.\"}),/*#__PURE__*/e(\"p\",{children:\"A matrix containing all the information is called a full-rank weight matrix. To put it simply, a full-rank weight matrix has a complete set of unique and independent parameters. Each parameter in the matrix contributes uniquely to the model's ability to learn and represent complex patterns in data. This makes it very expressive but also potentially large and computationally intensive, as it may contain many parameters.\"}),/*#__PURE__*/e(\"p\",{children:\"Full-rank weight matrices are common in deep learning models, where their high dimensionality allows the model to capture a wide range of features and relationships in the data. However, they can also be computationally expensive to work with, both in terms of training and inference. So, instead of using one big full-rank weight matrix, LoRA uses two smaller less complex ones.\"}),/*#__PURE__*/e(\"p\",{children:\"LoRA freezes the pre-trained model and its original weights then adds smaller matrices to every layer of the model that can be trained. When you freeze a layer, it means that you prevent the layer's weights from being updated during the fine-tuning process. 
Instead, the layer retains the values it had when it was pre-trained on the original task.\"}),/*#__PURE__*/e(\"p\",{children:\"Small matrices assist the model in adjusting to a variety of applications while keeping all original parameters unchanged. When we're fine-tuning the model with the help of LoRA models, it makes changes to these smaller matrices, while keeping the initial weight matrix and its parameters untouched. They are much easier to work with and train, so it's faster and cheaper.\"}),/*#__PURE__*/e(\"p\",{children:\"Due to the emphasis on actualizing these two smaller matrices instead of the initial complete weight matrix, the efficiency of the computing process can be dramatically enhanced.\"}),/*#__PURE__*/e(\"h3\",{children:\"Creating LoRA Model\"}),/*#__PURE__*/e(\"p\",{children:\"There are many ways to make your own LoRA model, but the basic steps for making one are as follows:\"}),/*#__PURE__*/t(\"ol\",{style:{\"--list-style-type\":\"unset\"},children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"Gather information for training.\"}),\" For example, it could be 5-10 photos, but it would be better to be able to collect 50-100 pics for the best results particularly if you are willing to teach the model to replicate a certain style;\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"Apply trainer models for training the LoRA model.\"}),\" There are plenty of them available, for example, Kohya Trainer or LoRA training on \",/*#__PURE__*/e(a,{href:\"http://replicate.com/\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"Replicate.com\"})}),\". 
Application instructions will differ depending on the training model you choose;\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"Use the trained LoRA model.\"}),\" Once you have created a LoRA model, you can load it into a main model like Stable Diffusion and generate images. Sometimes LoRA activation requires the use of special trigger words in a textual prompt. These words were incorporated into the model during training and are required for the LoRA model to function correctly.\"]})})]}),/*#__PURE__*/e(\"h3\",{children:\"Benefits of LoRA Fine-Tuning Compared to Full-Parameter Fine-Tuning\"}),/*#__PURE__*/e(\"p\",{children:\"Compared to the conventional fine-tuning approach, LoRA possesses several distinct advantages:\"}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"strong\",{children:\"Computational Efficiency\"})}),/*#__PURE__*/e(\"p\",{children:\"Fine-tuning the entire model can be computationally expensive, especially when dealing with huge models with millions or billions of parameters. LoRA reduces the computational cost by working with low-rank matrices, making it more feasible for resource-constrained environments.\"}),/*#__PURE__*/e(\"p\",{children:\"It focuses on the optimal use of computational resources, such as CPU processing power, GPU capabilities, and memory, by decreasing the number of parameters that need to be adjusted or trained. LoRA is designed to be resource-efficient, making it a more viable option for organizations with limited computational resources and smaller budgets.\"}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"strong\",{children:\"Knowledge Preservation\"})}),/*#__PURE__*/e(\"p\",{children:\"LoRA retains the general knowledge captured during pre-training, which is essential for applications where the model's broad understanding is beneficial. Knowledge preservation can be a key motivation for using LoRA. 
Instead of completely retraining or fine-tuning a model from scratch, which might result in a loss of valuable pre-trained knowledge, LoRA allows you to adapt the model while minimizing the loss of this knowledge.\"}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"strong\",{children:\"Reduced Catastrophic Forgetting\"})}),/*#__PURE__*/e(\"p\",{children:\"When fine-tuning a pre-trained model on a new task, there is a risk that the model might overfit the new data and lose some of the knowledge it gained during pre-training. This is known as catastrophic forgetting. The model's weights are updated primarily for the new task, and it might lose its knowledge of previous tasks. LoRA can potentially mitigate catastrophic forgetting by keeping pre-trained weights frozen so they are not changed during fine-tuning.\"}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"strong\",{children:\"LoRA Model Portability\"})}),/*#__PURE__*/e(\"p\",{children:\"Because only a small number of parameters are trained while the original weights stay frozen, a LoRA model is compact and portable. The extent to which the rank of the weight matrices is reduced affects the final model size: a greater rank reduction results in a smaller model. In any case, LoRA models weigh far less than a fully fine-tuned model. This enables a user, for example, to keep a variety of models for generating images in different styles without filling up their local storage, while retaining only one original base model with its initial weights.\"}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"strong\",{children:\"LoRA Performance\"})}),/*#__PURE__*/e(\"p\",{children:\"When you fine-tune a pre-trained model using LoRA, you aim to balance the task-specific performance with the efficiency of the model. 
As mentioned earlier, LoRA reduces the rank of weight matrices to make the model more efficient and memory-friendly.\"}),/*#__PURE__*/e(\"p\",{children:\"While this reduction in capacity might lead to a slight degradation in task performance compared to fully fine-tuned models, the difference in performance is often relatively small, especially for less complex tasks. At the same time, the savings in terms of computational resources can be substantial.\"}),/*#__PURE__*/e(\"h2\",{children:\"LoRA Models Examples\"}),/*#__PURE__*/e(\"p\",{children:\"Stable Diffusion models are a class of generative models employed in tasks related to image synthesis, style transfer, and image-to-image translation. These models are typically pre-trained on extensive datasets and have a remarkable capacity to capture complex data distributions.\"}),/*#__PURE__*/e(\"p\",{children:\"The success of Stable Diffusion models comes at the cost of large file sizes. These models often require significant storage space, making it challenging for users to manage and store multiple models, especially when they are dealing with limited disk space and resource constraints.\"}),/*#__PURE__*/e(\"p\",{children:\"LoRA offers an approach to tackle this challenge. LoRA models are essentially small Stable Diffusion models. They use a training technique that applies small changes to an initially huge model, resulting in substantially smaller files. The file size of LoRA models typically ranges from 2 MB to 800 MB, which is significantly less than the original model checkpoints.\"}),/*#__PURE__*/e(\"p\",{children:\"These models can be added to the base Stable Diffusion model to produce more specific images, for instance with more details or in a particular style. Any Stable Diffusion model supports LoRA models; the important thing is to make sure they are compatible. LoRA and the base model must be of the same version. 
For example, if you have SD v2.x installed, the LoRA must be trained on SD v2.x.\"}),/*#__PURE__*/t(\"p\",{children:[\"It is best to look for LoRA models on \",/*#__PURE__*/e(a,{href:\"https://huggingface.co/models?pipeline_tag=text-to-image&sort=downloads&search=lora\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"HuggingFace.co\"})}),\" or on \",/*#__PURE__*/e(a,{href:\"https://civitai.com/models\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"Civitai.com\"})}),\". Here are some interesting models:\"]}),/*#__PURE__*/t(\"ul\",{children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(a,{href:\"https://civitai.com/models/58390/detail-tweaker-lora-lora\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"Detail Tweaker LoRA\"})}),\". Allows users to add details to realistic photos and artwork, including anime;\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(a,{href:\"https://huggingface.co/ostris/ikea-instructions-lora-sdxl\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"Ikea Instructions LoRA\"})}),\". 
Creates instructions and step-by-step guides similar to those from Ikea from nearly any text query;\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(a,{href:\"https://civitai.com/models/13941/epinoiseoffset\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"Epi_noiseoffset\"})}),\". Creates high-quality, lifelike photos and artwork with an emphasis on increased contrast and color saturation;\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(a,{href:\"https://huggingface.co/ostris/watercolor_style_lora_sdxl?text=poppy+flower\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"Watercolor Style - SDXL LoRA\"})}),\". Generates images as if they were painted with watercolors;\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(a,{href:\"https://civitai.com/models/157801/vangoghsketcher-sd-xl-10\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"VanGoghSketcher SD XL 1.0\"})}),\". Creates sketches similar to Van Gogh's work;\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(a,{href:\"https://civitai.com/models/58247/product-design-dark-minimalism-eddiemauro-lora\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"Product Design (Elegant minimalism-eddiemauro) LoRA\"})}),\". 
Creates photographs of elegant, austere objects in a minimalist style;\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(a,{href:\"https://civitai.com/models/173116/doctor-diffusions-tarot-card-crafter\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"Doctor Diffusion's Tarot Card Crafter\"})}),\". As the name suggests this LoRA model can generate tarot cards.\"]})})]}),/*#__PURE__*/e(\"h2\",{children:\"Conclusion\"}),/*#__PURE__*/e(\"p\",{children:\"The existence of LoRA is justified by the fact that it serves as a valuable tool that addresses specific challenges associated with fine-tuning a deep learning network. While traditional fine-tuning allows for updates to the entire model, LoRA offers a different approach by tweaking some parts of it. It enables the reduction of model size by reducing the rank of weight matrices while retaining valuable knowledge from the pre-trained model.\"}),/*#__PURE__*/e(\"p\",{children:\"Reduced size allows users to reduce computational power during the fine-tuning. LoRA offers the flexibility to fine-tune different parts of the model to different degrees, enabling a more focused adaptation process. It is possible to download a ready-made LoRA model, or you can build your own customized version, which is also relatively faster and easier compared to full fine-tuning. LoRA models simply help deliver great results faster and more cost-effectively.\"}),/*#__PURE__*/e(\"p\",{children:\"Most likely, this technology will continue to improve and amaze us even more with its capabilities. Technologies are evolving so fast that even an average owner of a more or less powerful PC can now make use of and build AI models. 
Even though such models are not fully standalone and only operate in conjunction with a base model, it is a start.\"}),/*#__PURE__*/e(\"p\",{children:\"The performance of LoRA models may be comparable to, or slightly below, that of fully fine-tuned models. However, the substantial advantages of LoRA models, such as reduced processing memory, smaller disk storage footprint, and preservation of pre-trained knowledge resulting in decreased catastrophic forgetting, may be decisive for many enterprises.\"}),/*#__PURE__*/e(\"h2\",{children:\"About Toloka\"}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"em\",{children:\"Toloka is a European company based in Amsterdam, the Netherlands that provides data for Generative AI development. Toloka empowers businesses to build high quality, safe, and responsible AI. We are the trusted data partner for all stages of AI development from training to evaluation. Toloka has over a decade of experience supporting clients with its unique methodology and optimal combination of machine learning technology and human expertise, offering the highest quality and scalability in the market.\"})})]});export const richText3=/*#__PURE__*/t(n.Fragment,{children:[/*#__PURE__*/e(\"h2\",{children:\"About the client\"}),/*#__PURE__*/e(\"p\",{children:\"Our client is a European company developing a personalized size and fit recommendation solution for apparel brands. Through an innovative, intuitive and secure body scan technology, brands help their customers find the perfect size for each garment. \"}),/*#__PURE__*/e(\"h2\",{children:\"Challenge\"}),/*#__PURE__*/e(\"p\",{children:'To build a robust AI solution, a large dataset is needed for training models. The client\u2019s ML team had a limited timeframe to collect real-life measurements of real people and \"feed\" the data to their computer vision model. The goal was to improve the model\u2019s accuracy. 
For the initial model training, they collected data from employees and friends, but the dataset wasn\u2019t big enough. Also, this data wasn\u2019t diverse enough to work with different body types and parameters.'}),/*#__PURE__*/e(\"h2\",{children:\"Solution\"}),/*#__PURE__*/t(\"p\",{children:[\"Toloka\u2019s global crowd is the perfect solution for collecting diverse data \u2014 it\u2019s fast and easy to gather any type of data from different people around the world. Our client launched a project with the help of our partner Training Data, a data collection and labeling company that used Toloka\u2019s platform to gather the dataset of human body measurements. To run the project smoothly the client used the benefits of the \",/*#__PURE__*/e(a,{href:\"https://azuremarketplace.microsoft.com/en-us/marketplace/apps/tolokaai1653050813339.toloka_e-commerce?tab=overview\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"Microsoft Azure ecosystem\"})}),\" \u2013 one-click shopping, security, automated billing and document flow.\"]}),/*#__PURE__*/e(\"p\",{children:\"The data collection project was more complex than expected. Tolokers were asked to take photos of themselves while measuring 22 parameters of their body, which is not easy to do at the same time. The client prepared comprehensive guidelines on how to take measurements, and these instructions were modified in a crowd-friendly interface to make it faster and easier for Tolokers to understand. We translated the task instructions into multiple languages to engage more Tolokers from a wide range of countries and obtain more diverse results.\"}),/*#__PURE__*/e(\"p\",{children:\"As part of the quality control checks, participants were also asked to submit the measurement numbers separately from the photos. Sometimes there were discrepancies that required verification. 
The client\u2019s team checked the data and discarded incomplete or invalid measurements.\"}),/*#__PURE__*/e(\"h2\",{children:\"Result\"}),/*#__PURE__*/e(\"p\",{children:\"At the end of the project, 500 complete sets of measurements were collected from the crowd.\"}),/*#__PURE__*/e(\"p\",{children:\"The data plays an essential role in product development and gives the size and fit recommendation solution a competitive advantage. Major French fashion institutes have become interested in using the app and collaborating with the developer on innovative projects.\"}),/*#__PURE__*/e(\"h2\",{children:\"Fair pay for Tolokers\"}),/*#__PURE__*/t(\"p\",{children:[\"At Toloka, we strive to ensure fair and flexible payment for crowd participants based on the task complexity. This task required more time and effort from Tolokers than typical tasks like classification or recognition, and most people spent about 20 minutes taking measurements and submitting photos. Each participant received fair payment for a submitted set of measurements.\",/*#__PURE__*/e(\"br\",{}),/*#__PURE__*/e(\"br\",{}),/*#__PURE__*/e(\"em\",{children:\"With Toloka we were able to capture a large dataset we needed in a shorter period of time. Other advantages of this crowdsourcing solution are price (compared with other providers) and data management. Toloka is a reliable platform in terms of both technology and customer support. \"}),/*#__PURE__*/e(\"strong\",{children:\"\u2014 Client\u2019s team\"})]})]});export const richText4=/*#__PURE__*/t(n.Fragment,{children:[/*#__PURE__*/e(\"p\",{children:\"The Generative AI landscape represents a flourishing ecosystem where artificial intelligence algorithms are designed to not just replicate, but create entirely new and original content. It is expanding rapidly, driven by advancements in deep learning, natural language processing, computer vision, and other AI techniques. 
This type of AI finds applications in various industries, including entertainment, gaming, advertising, fashion, healthcare, and more.\"}),/*#__PURE__*/e(\"p\",{children:\"The landscape of the generative AI market is diverse, encompassing both established tech giants and startups. Many of these startups focus on specific verticals, such as gaming, creativity, or virtual reality. They often provide APIs or platforms that enable businesses to utilize generative AI in their applications.\"}),/*#__PURE__*/e(\"p\",{children:\"The generative AI market is poised for significant growth as more industries recognize the potential of AI-generated content. However, there are also ethical concerns surrounding the technology, such as the potential for deepfakes or misuse of AI-generated content. As the market matures, regulations and guidelines around generative AI may also evolve to address these challenges.\"}),/*#__PURE__*/e(\"h2\",{children:\"Why is Generative AI in the spotlight today?\"}),/*#__PURE__*/e(\"p\",{children:\"While AI is a broad term encompassing a range of technologies focusing on tasks requiring human-like intelligence, generative AI is a specific subset that concentrates on the creation of new content by applying machine learning methods. Generative AI models are evolving very rapidly right now. They are getting better, can process more data, and perform more calculations as the technical capabilities of the devices have also evolved. By collecting and processing versatile data, deep neural networks, and complex structures designed to mimic the functionality of the human brain serve as the basis for such generative models.\"}),/*#__PURE__*/e(\"p\",{children:\"Generative AI has demonstrated the ability to produce highly creative and realistic outputs in various domains such as art, music, and literature. 
This has fascinated people and captured their attention, as it challenges traditional notions of creativity and raises questions about the role of machines in artistic and creative pursuits.\"}),/*#__PURE__*/e(\"p\",{children:\"Generative AI, as opposed to traditional AI, which can be used for tasks like classification and prediction, focuses on creating new and original content rather than simply imitating existing data patterns. It aims to generate outputs that are not based on pre-defined rules or explicit instructions but instead learn from patterns and data to produce new and creative content.\"}),/*#__PURE__*/e(\"p\",{children:\"Generative AI utilizes enormous machine learning algorithms, and large-scale neural networks known as foundation models (FMs), which have been pre-trained on extensive data. These FMs include large language models (LLMs), which are trained on an immense volume of words from various natural language sets, accounting for trillions of data points. These large models are becoming the backbone of generative AI-based applications.\"}),/*#__PURE__*/e(\"p\",{children:\"One recent development in generative AI technology is the ability to create high-resolution and realistic images from text descriptions. By leveraging large-scale deep learning models, researchers have made significant advancements in generating images that accurately depict the content described in natural language inputs.\"}),/*#__PURE__*/e(\"p\",{children:\"Generative AI potential to revolutionize various creative industries is one of the main reasons why it is in the spotlight now. For instance, in the field of art and design, generative AI tools can assist artists in generating new and imaginative ideas. These tools can create stunning visuals, help with composition, or even generate entire pieces of artwork. 
By collaborating with generative AI, artists can explore novel concepts that they may have never thought of before, opening doors to new possibilities and creativity.\"}),/*#__PURE__*/e(\"p\",{children:\"The power of generative AI lies in its ability to provide new insights, amplify human creativity, and automate complex tasks. It pushes the boundaries of what computers can create and opens doors to innovation in various industries. The spotlight on generative AI stems from its potential to transform creative processes, streamline workflows, and usher in a new era of technology-driven progress.\"}),/*#__PURE__*/e(\"h3\",{children:\"The rise of large language models powered by Generative AI\"}),/*#__PURE__*/e(\"p\",{children:\"Over the past few years, we have witnessed the rise of large language models powered by generative AI. These models have gained immense popularity and excitement due to their ability to generate human-like text and perform a wide range of language-related tasks.\"}),/*#__PURE__*/e(\"p\",{children:\"Large language models thrive due to the development of generative AI, especially after the emergence of neural network architecture called the Transformer. They have become the foundation for more powerful language models, and by processing all information at once rather than sequentially, they learn faster compared to their predecessors - recurrent neural networks. Moreover, transformers can memorize more context. In that way, they have significantly improved language understanding and generation capabilities.\"}),/*#__PURE__*/e(\"p\",{children:\"The introduction of ChatGPT, OpenAI's language model, was a turning point in the generative AI landscape. 
The GPT architecture-based ChatGPT proved to be a remarkable breakthrough in the comprehension and creation of natural language, providing a vivid demonstration of large-scale Transformer models' power.\"}),/*#__PURE__*/e(\"p\",{children:\"LLMs can increase productivity in many domains, including in business. We can delegate routine, time-consuming tasks to a smart chatbot and simply check the results, as LLMs can interpret our queries and solve quite complex problems. Language models have the potential to yield helpful content and speed up workflows, but if misused or maliciously exploited, they can also pose some dangers.\"}),/*#__PURE__*/e(\"p\",{children:\"For example, they can enable quick access to harmful information or even generate malicious content such as phishing emails and harmful code. Even though there are ways to limit access to dangerous content, these are not always implemented or effective.\"}),/*#__PURE__*/e(\"p\",{children:\"The results of LLM tools should always be verified and treated with an element of hesitation. These complex systems cannot distinguish truth from fabrication. Some of their results may appear very credible, but turn out to be entirely incorrect. Using generative AI like LLMs may prove to be incredibly beneficial in solving a huge number of issues, but it is up to humans to get involved in verifying the accuracy, usefulness, and overall reasonableness of their outputs.\"}),/*#__PURE__*/e(\"p\",{children:\"Right now, many AI industry experts are even suggesting that the further development of some LLMs should be temporarily halted to give time for the implementation of general security protocols to improve and regulate this valuable technology of synthetic data generation.\"}),/*#__PURE__*/e(\"h2\",{children:\"Generative AI application landscape\"}),/*#__PURE__*/e(\"p\",{children:\"Generative AI applications have immense potential and are continually evolving. 
They are used in various industries, including art, entertainment, advertising, and data science. The functionalities of a generative AI system vary based on the modality or nature of the dataset employed.\"}),/*#__PURE__*/e(\"h3\",{children:\"Generative AI landscape categories\"}),/*#__PURE__*/e(\"p\",{children:\"Generative AI can be applied in the following modalities:\"}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"strong\",{children:\"Text generation\"})}),/*#__PURE__*/e(\"p\",{children:\"Text generation models aim to generate human-like text. Once trained, these models can generate new, coherent text based on a given prompt. This can include generating stories, poetry, articles, and more.\"}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"strong\",{children:\"Code generation\"})}),/*#__PURE__*/e(\"p\",{children:\"AI algorithms can generate code from natural language descriptions. This can be useful for automating repetitive coding tasks or prototyping new features quickly. AI algorithms can analyze code and identify potential bugs or vulnerabilities, providing suggestions on how to fix them.\"}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"strong\",{children:\"Image generation\"})}),/*#__PURE__*/e(\"p\",{children:\"Text-to-image models can generate realistic images from textual descriptions or random noise. They can be used for various applications such as digital art generation, creating synthetic data for training algorithms, or enhancing image quality. Generative AI models in this category can create various types of design, including graphic design, logos, website layouts, or product designs. They are often used by designers as sources of inspiration or to automate repetitive design tasks.\"}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"strong\",{children:\"Music Generation\"})}),/*#__PURE__*/e(\"p\",{children:\"AI models that can compose original music or generate variations of existing compositions. 
These models can be used by musicians, composers, or music producers to quickly generate ideas or explore new musical possibilities.\"}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"strong\",{children:\"Audio generation\"})}),/*#__PURE__*/e(\"p\",{children:\"Such AI systems summarize, generate human-like speech or convert text into audio, often known as text-to-speech or speech synthesis models. They can be used in voice assistants, audiobook narration, accessibility tools, or customer service applications.\"}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"strong\",{children:\"Video generation\"})}),/*#__PURE__*/e(\"p\",{children:\"Generative models in this category can generate videos or alter existing videos by changing their content, improving their quality, or modifying their style. They can be used for applications like video synthesis, deepfake creation, or video editing.\"}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"strong\",{children:\"Chatbots\"})}),/*#__PURE__*/e(\"p\",{children:\"This category of generative AI tools includes models that can generate realistic and contextually appropriate responses in conversation. These models are commonly used in customer support, virtual assistants, or interactive storytelling applications.\"}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"strong\",{children:\"Search engines\"})}),/*#__PURE__*/e(\"p\",{children:\"Generative AI can enhance search engines' ability to provide contextual suggestions. By understanding the user's search history, preferences, and location, generative AI can generate personalized suggestions for queries, websites, or related topics, offering a more tailored search experience.\"}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"strong\",{children:\"Gaming\"})}),/*#__PURE__*/e(\"p\",{children:\"This category includes AI models capable of generating virtual game environments, levels, characters, or NPCs (non-player characters). 
Such systems can help game developers create content or assist in procedural content generation.\"}),/*#__PURE__*/e(\"p\",{children:\"Only some broadly known generative AI spheres of usage are listed here. And as the technology continues to advance, the potential applications are expected to expand even further.\"}),/*#__PURE__*/e(\"h2\",{children:\"Generative AI market\"}),/*#__PURE__*/t(\"p\",{children:[\"As the influence of artificial intelligence extends to numerous aspects of our day-to-day activities, a wide range of tools has emerged across various industries. According to analytics from \",/*#__PURE__*/e(a,{href:\"https://www.marketsandmarkets.com/Market-Reports/generative-ai-market-142870584.html\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"Markets and Markets\"})}),\", the generative AI market will grow from $11.3 billion in 2023 to $51.8 billion by 2028. Currently, AI is helping to create and enhance about 1% of the content on the Internet, and experts expect this figure to rise to about 50% in the next 10 years.\"]}),/*#__PURE__*/e(\"p\",{children:\"One of the drivers behind this dramatic growth in generative AI is a wide adoption of cloud storage among businesses. Experts need huge datasets for training machine learning models to create a generative AI solution, but just a few years ago it was not that easy and cheap for data scientists to get it for their studies. Now that more data gets transferred from physical media to the cloud, it has become more accessible.\"}),/*#__PURE__*/e(\"p\",{children:\"The development of large language models capable of generating text is also driving the development of generative AI in general. LLMs such as GPT and LaMDA form the basis for creating applications for text processing, and also for image generating. 
For example, a version of GPT-3 modified to generate images underlies DALL-E.\"}),/*#__PURE__*/e(\"h3\",{children:\"Key players\"}),/*#__PURE__*/e(\"p\",{children:\"The list below encompasses an array of tools such as text generation software, image creation programs, music composition systems, and more. It provides a comprehensive outlook on generative AI tools and their applications.\"}),/*#__PURE__*/e(\"p\",{children:\"All generative AI applications are based on foundation models, as already mentioned. The development of such major AI systems demands a tremendous amount of resources, so most often they are either developed by large companies with huge funds or by startups with major investments. The generative AI market is diverse, encompassing both established tech giants and smaller startups.\"}),/*#__PURE__*/e(\"p\",{children:\"In this list, we will point out companies that develop foundation models and indicate some applications that are built on top of these models. For convenience, we categorized the application domains of generative AI into three major categories: natural language generation, image generation, and sound and speech generation.\"}),/*#__PURE__*/e(\"h3\",{children:\"Natural Language Generation\"}),/*#__PURE__*/e(\"p\",{children:\"Some of the prominent players in the market include:\"}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"strong\",{children:\"OpenAI\"})}),/*#__PURE__*/e(\"p\",{children:\"OpenAI is a leading AI research laboratory that has developed powerful text generative models known as GPT (Generative Pre-trained Transformer). The latest versions, GPT-3.5 and GPT-4, form the basis for ChatGPT, a large language model-based chatbot also developed by OpenAI.\"}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"strong\",{children:\"Google\"})}),/*#__PURE__*/e(\"p\",{children:\"Google has been actively researching and developing generative AI models. 
They have developed models that can generate text based on different input formats:\"}),/*#__PURE__*/t(\"ul\",{style:{\"--framer-font-size\":\"18px\",\"--framer-text-alignment\":\"start\",\"--framer-text-color\":\"rgb(30, 33, 38)\",\"--framer-text-stroke-width\":\"0px\",\"--framer-text-transform\":\"none\"},children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"BERT (Bidirectional Encoder Representations from Transformers)\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"T5 (Text-to-Text Transfer Transformer)\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"PaLM and PaLM 2 (Pathways Language Model)\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"LaMDA and LaMDA 2 (Language Models for Dialog Applications)\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Minerva\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"GLaM (Generalist Language Model)\"})})]}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"strong\",{children:\"Microsoft\"})}),/*#__PURE__*/e(\"p\",{children:\"Microsoft has been working on text generation models, specifically for natural language processing tasks. Microsoft's Turing-NLG is a language model that has shown promising results in generating coherent and contextually accurate text. The company has also integrated natural language processing capabilities into products like Microsoft Azure Cognitive Services.\"}),/*#__PURE__*/e(\"p\",{children:\"Microsoft and NVIDIA created the Megatron-Turing NLG model with 530 billion parameters, which made it one of the largest in 2021. 
This transformer-based LLM can comprehend text and generate responses in natural language."}),/*#__PURE__*/e("p",{children:/*#__PURE__*/e("strong",{children:"Meta"})}),/*#__PURE__*/e("p",{children:"Meta has developed LLaMA 2 (Large Language Model Meta AI), an open-source language model. It is free to use for research and commercial purposes. The model was released in collaboration with Microsoft. Microsoft and Meta are committed to an open approach to allow expanded access to essential AI technologies for the benefit of businesses worldwide."}),/*#__PURE__*/e("p",{children:/*#__PURE__*/e("strong",{children:"NVIDIA"})}),/*#__PURE__*/e("p",{children:"NVIDIA is a technology company known for its graphics processing units (GPUs) and AI capabilities. It has developed powerful hardware technologies that are widely used for training and running generative models efficiently. The company has introduced NVIDIA AI Foundations, a set of services for creating custom chatbots, image generators, and other AI tools. It offers the essential tools for adopting and customizing generative AI. Once the models are ready for deployment, enterprises can run them using NVIDIA AI Foundations cloud services."}),/*#__PURE__*/e("p",{children:"Among the tools incorporated in it is NVIDIA NeMo, a cloud-based framework that allows developers to build and deploy large language models. It simplifies the development of conversational AI models by offering a comprehensive toolkit for researchers specializing in automatic speech recognition (ASR), natural language processing (NLP), and text-to-speech synthesis (TTS). 
It enables easy reuse of existing code and pre-trained models, facilitating the creation of new conversational AI models for both industry and academic researchers."}),/*#__PURE__*/e("p",{children:/*#__PURE__*/e("strong",{children:"Salesforce AI Research"})}),/*#__PURE__*/e("p",{children:"Salesforce AI Research released CTRL, a model that can generate human-like text from specific prompts. CTRL stands for Conditional Transformer Language Model, and it is designed to take a given prompt and generate relevant text based on that prompt. The model has 1.6 billion parameters, making it one of the largest language models at the time of its release. CTRL is trained on a diverse range of data sources, such as books, websites, and Wikipedia."}),/*#__PURE__*/e("p",{children:/*#__PURE__*/e("strong",{children:"Anthropic"})}),/*#__PURE__*/e("p",{children:"Anthropic is a startup that has launched a chatbot called Claude, based on a model called Claude 2. The chatbot can generate text and answer questions on a wide range of topics, from science concepts to cooking."}),/*#__PURE__*/e("p",{children:"The following are generative AI apps that use some of the previously mentioned foundation models at their core:"}),/*#__PURE__*/t("ul",{style:{"--framer-font-size":"18px","--framer-text-alignment":"start","--framer-text-color":"rgb(30, 33, 38)","--framer-text-stroke-width":"0px","--framer-text-transform":"none"},children:[/*#__PURE__*/e("li",{"data-preset-tag":"p",children:/*#__PURE__*/t("p",{children:[/*#__PURE__*/e("strong",{children:"Codey"})," is a Google family of coding models built on PaLM 2 that can complete and generate code, as well as help developers solve debugging issues through a chatbot;"]})}),/*#__PURE__*/e("li",{"data-preset-tag":"p",children:/*#__PURE__*/t("p",{children:[/*#__PURE__*/e("strong",{children:"Bard"})," is an experimental AI chatbot by Google with features such as code generation, math 
problem-solving, and writing assistance. It runs on Google's largest language model, PaLM 2, although it originally launched with a lighter version of LaMDA to make it easier to scale;"]})}),/*#__PURE__*/e("li",{"data-preset-tag":"p",children:/*#__PURE__*/t("p",{children:[/*#__PURE__*/e("strong",{children:"Copy.ai"})," is an advanced AI utilizing GPT-3 for creating unique texts for marketing and sales;"]})}),/*#__PURE__*/e("li",{"data-preset-tag":"p",children:/*#__PURE__*/t("p",{children:[/*#__PURE__*/e("strong",{children:"Jasper.ai"})," is an AI that can create textual content for webpages, blogs, social networks, and other media with the help of GPT-3;"]})}),/*#__PURE__*/e("li",{"data-preset-tag":"p",children:/*#__PURE__*/t("p",{children:[/*#__PURE__*/e("strong",{children:"Copysmith.ai"})," is a GPT-3-based AI copywriter that can compose product descriptions, taglines, SEO meta tags, and ad texts;"]})}),/*#__PURE__*/e("li",{"data-preset-tag":"p",children:/*#__PURE__*/t("p",{children:[/*#__PURE__*/e("strong",{children:"Rytr"})," leverages GPT-3 to create plagiarism-free content to facilitate the process of writing articles, posts, and more;"]})}),/*#__PURE__*/e("li",{"data-preset-tag":"p",children:/*#__PURE__*/t("p",{children:[/*#__PURE__*/e("strong",{children:"Writesonic"})," is powered by GPT-3.5 and GPT-4 and can do everything from creating articles to writing product descriptions or text for landing pages;"]})}),/*#__PURE__*/e("li",{"data-preset-tag":"p",children:/*#__PURE__*/t("p",{children:[/*#__PURE__*/e("strong",{children:"Anyword"})," combines pre-trained models such as GPT-3, T5 by Google, and CTRL by Salesforce Research, which have been fine-tuned to enhance performance;"]})}),/*#__PURE__*/e("li",{"data-preset-tag":"p",children:/*#__PURE__*/t("p",{children:[/*#__PURE__*/e("strong",{children:"LangAI"})," is an interactive platform for learning languages 
that utilizes GPT-3 and GPT-4, offering users the opportunity to engage in spoken or written conversations with an AI in more than 20 different languages;"]})}),/*#__PURE__*/e("li",{"data-preset-tag":"p",children:/*#__PURE__*/t("p",{children:[/*#__PURE__*/e("strong",{children:"Genei"})," leverages the capabilities of GPT-3 to transform PDFs and webpages into intelligent summaries. These summaries are not only concise but also come with a comprehensive analysis of the content in the original reading material."]})})]}),/*#__PURE__*/e("h3",{children:"Image generation"}),/*#__PURE__*/e("p",{children:/*#__PURE__*/e("strong",{children:"OpenAI"})}),/*#__PURE__*/e("p",{children:"DALL\xb7E 2 is the successor to OpenAI's DALL\xb7E, a text-to-image model that can generate images from textual descriptions. It is trained using a large dataset consisting of image-text pairs, allowing it to generate more accurate and diverse images."}),/*#__PURE__*/e("p",{children:"It is a neural network model that combines ideas from GPT-3 to generate images from textual prompts. One of the key features of DALL\xb7E 2 is its ability to handle complex and abstract prompts. Overall, DALL\xb7E 2 is an advanced version of the original DALL\xb7E model, enabling it to generate high-quality images based on textual inputs."}),/*#__PURE__*/e("p",{children:/*#__PURE__*/e("strong",{children:"Stability AI"})}),/*#__PURE__*/e("p",{children:"Stable Diffusion is an open-source image generation technique developed by Stability AI. It is based on the concept of diffusion models, which aim to estimate the probability distribution of a dataset to generate new samples similar to the original data."}),/*#__PURE__*/e("p",{children:"In Stable Diffusion, an initial image is iteratively modified using a diffusion process. This process involves adding noise to the image and then denoising it. 
By repeating this process for multiple iterations, the image gradually evolves toward the desired target. Stable Diffusion has shown promising results in generating high-quality images with sharp details and coherent structures. It is the base for DreamStudio by Stability AI."}),/*#__PURE__*/e("p",{children:/*#__PURE__*/e("strong",{children:"Midjourney, Inc"})}),/*#__PURE__*/e("p",{children:"Midjourney has developed an eponymous neural network that can recognize written text and convert it into images. It uses a deep learning architecture called a Generative Adversarial Network (GAN) to produce convincing images based on given inputs or prompts."}),/*#__PURE__*/e("p",{children:"The GAN architecture consists of two neural networks: the generator and the discriminator. The generator is responsible for creating images, while the discriminator's job is to distinguish between real images and those generated by the generator. Through an iterative process, both networks train each other, resulting in the generator improving its generated images over time."}),/*#__PURE__*/e("p",{children:/*#__PURE__*/e("strong",{children:"Google"})}),/*#__PURE__*/e("p",{children:"Google's Image Model Family refers to a series of deep learning models for tasks related to photorealistic image analysis, recognition, and generation. These models have been trained on large-scale image datasets and have achieved state-of-the-art performance on various computer vision tasks by employing diffusion models. Another model developed by Google is DeepDream, which creates unique and surreal images using deep learning techniques."}),/*#__PURE__*/e("p",{children:/*#__PURE__*/e("strong",{children:"NVIDIA"})}),/*#__PURE__*/e("p",{children:"NVIDIA AI Foundations includes an image, video, and 3D generation platform called NVIDIA Picasso. 
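The generator-discriminator loop described above can be sketched in a few lines of NumPy. This is a toy illustration, not any company's actual model: a linear generator learns to mimic 1-D Gaussian "real" data, with both gradient updates derived by hand from the standard GAN losses.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def real_batch(n):
    # "Real" data: samples from N(4, 1) that the generator must learn to mimic.
    return rng.normal(4.0, 1.0, size=n)

# Generator: z -> g_w * z + g_b  (maps random noise to fake samples).
g_w, g_b = 1.0, 0.0
# Discriminator: x -> sigmoid(d_w * x + d_b)  (estimated probability "real").
d_w, d_b = 0.1, 0.0
lr, n = 0.01, 64

for _ in range(2000):
    # Discriminator step: push p(real) toward 1 and p(fake) toward 0
    # (hand-derived gradients of the binary cross-entropy loss).
    z = rng.normal(size=n)
    fake = g_w * z + g_b
    real = real_batch(n)
    p_real = sigmoid(d_w * real + d_b)
    p_fake = sigmoid(d_w * fake + d_b)
    d_w -= lr * (np.mean((p_real - 1) * real) + np.mean(p_fake * fake))
    d_b -= lr * (np.mean(p_real - 1) + np.mean(p_fake))

    # Generator step: adjust weights so the discriminator rates fakes as real.
    z = rng.normal(size=n)
    fake = g_w * z + g_b
    p_fake = sigmoid(d_w * fake + d_b)
    g_w -= lr * np.mean((p_fake - 1) * d_w * z)
    g_b -= lr * np.mean((p_fake - 1) * d_w)

# After training, generated samples should drift toward the real mean of 4.
samples = g_w * rng.normal(size=1000) + g_b
```

In real image generators, both players are deep convolutional networks trained on large image datasets, but the adversarial loop is structurally the same.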
It is a cloud-based service for developing and deploying generative AI-based apps with sophisticated text-to-image, video, and 3D transformations, boosting creativity, engineering, and digital modeling productivity through simple cloud-based APIs."}),/*#__PURE__*/e("p",{children:/*#__PURE__*/e("strong",{children:"Adobe"})}),/*#__PURE__*/e("p",{children:"Adobe Firefly is a family of generative AI models developed by Adobe for its Creative Cloud software. It is designed to enrich and modernize creative workflows in Adobe applications through neural network tools. With Firefly, content creators can create high-quality raster or vector images upon request."}),/*#__PURE__*/e("p",{children:"Apps based on previously mentioned foundation models include:"}),/*#__PURE__*/t("ul",{style:{"--framer-font-size":"18px","--framer-text-alignment":"start","--framer-text-color":"rgb(30, 33, 38)","--framer-text-stroke-width":"0px","--framer-text-transform":"none"},children:[/*#__PURE__*/e("li",{"data-preset-tag":"p",children:/*#__PURE__*/t("p",{children:[/*#__PURE__*/e("strong",{children:"Bing Image Creator"})," is a tool based on DALL-E and developed by Microsoft that allows users to create custom images by combining elements from their vast image database;"]})}),/*#__PURE__*/e("li",{"data-preset-tag":"p",children:/*#__PURE__*/t("p",{children:[/*#__PURE__*/e("strong",{children:"MyHeritage's AI Time Machine"})," is an image generator for self-portraits powered by Stable Diffusion;"]})}),/*#__PURE__*/e("li",{"data-preset-tag":"p",children:/*#__PURE__*/t("p",{children:[/*#__PURE__*/e("strong",{children:"Lensa"})," is an AI-powered photo editing app that allows users to enhance their photos with various filters and editing tools. Its Magic Avatar feature can turn selfies into realistic artwork. 
The app relies on a deep learning model called Stable Diffusion;"]})}),/*#__PURE__*/e("li",{"data-preset-tag":"p",children:/*#__PURE__*/t("p",{children:[/*#__PURE__*/e("strong",{children:"Craiyon"})," is an image generation app that is a scaled-down variant of DALL-E. While it may not possess the same level of potency, this AI image generator enables users to effortlessly and swiftly convert their written descriptions into captivating visuals. Craiyon serves as a remarkable resource for individuals seeking to explore the capabilities of AI image generators without any cost;"]})}),/*#__PURE__*/e("li",{"data-preset-tag":"p",children:/*#__PURE__*/t("p",{children:[/*#__PURE__*/e("strong",{children:"Jasper.ai"}),", besides working with text, can also generate very impressive art with the help of DALL-E 2;"]})})]}),/*#__PURE__*/e("h3",{children:"Sound and speech generation"}),/*#__PURE__*/e("p",{children:"In contrast to LLMs, which have seen a significant leap in development, in particular due to transformer models, or to image generation, which has also reached unprecedented heights due to the development of diffusion models, there have been no similar breakthroughs in audio generation."}),/*#__PURE__*/e("p",{children:"The main distinction lies in the amount of high-quality data available for training: while images and texts are plentiful, audio data is either limited or very expensive. Yet, some ideas borrowed from image and text generation methods have been adapted to audio models to solve or improve several problems specific to the field of audio generation systems."}),/*#__PURE__*/e("p",{children:/*#__PURE__*/e("strong",{children:"Google"})}),/*#__PURE__*/e("p",{children:"Google has developed the Tacotron and WaveNet models for speech synthesis. The WaveNet neural network is capable of generating speech that sounds natural and close to the human voice. 
The WaveNet model is the same technology used to produce speech for Google Assistant, Google Search, and Google Translate. Tacotron is a model that directly maps character sequences to speech waveforms, which allows for generating high-quality text-to-speech outputs."}),/*#__PURE__*/e("p",{children:"MusicLM is another AI tool designed by Google to transform descriptions into music. The AI can also generate music from a tune, converting whistled and hummed melodies into full-fledged musical compositions in the style described in a textual prompt."}),/*#__PURE__*/e("p",{children:/*#__PURE__*/e("strong",{children:"Meta"})}),/*#__PURE__*/e("p",{children:"Meta has introduced Voicebox, an AI model that both generates and edits speech. They claim it's yet another revolution in the field of generative AI. The model not only creates speech in the exact manner and tone of any person's voice from a short sample, but can also automatically remove noise, correct misstatements, and understand context."}),/*#__PURE__*/e("p",{children:"AudioCraft is a novel open-source AI toolkit unveiled by Meta, catering to music enthusiasts and sound creators. This technology empowers users to compose music and generate sounds using the capabilities of artificial intelligence. 
AudioCraft comprises three models:"}),/*#__PURE__*/t("ul",{style:{"--framer-font-size":"18px","--framer-text-alignment":"start","--framer-text-color":"rgb(30, 33, 38)","--framer-text-stroke-width":"0px","--framer-text-transform":"none"},children:[/*#__PURE__*/e("li",{"data-preset-tag":"p",children:/*#__PURE__*/e("p",{children:"MusicGen utilizes Meta-owned and legally licensed music to produce music based on user inputs in text format;"})}),/*#__PURE__*/e("li",{"data-preset-tag":"p",children:/*#__PURE__*/e("p",{children:"AudioGen is trained on publicly available sound effects and generates audio based on user inputs in text format;"})}),/*#__PURE__*/e("li",{"data-preset-tag":"p",children:/*#__PURE__*/e("p",{children:"the EnCodec decoder, which enables superior music generation quality with reduced artifacts."})})]}),/*#__PURE__*/e("p",{children:/*#__PURE__*/e("strong",{children:"Microsoft"})}),/*#__PURE__*/e("p",{children:"Created by Microsoft, SpeechT5 is an all-in-one architecture that encompasses three different speech models:"}),/*#__PURE__*/t("ul",{style:{"--framer-font-size":"18px","--framer-text-alignment":"start","--framer-text-color":"rgb(30, 33, 38)","--framer-text-stroke-width":"0px","--framer-text-transform":"none"},children:[/*#__PURE__*/e("li",{"data-preset-tag":"p",children:/*#__PURE__*/e("p",{children:"a speech-to-text model that can automatically transcribe spoken language for tasks such as speech recognition or speaker identification;"})}),/*#__PURE__*/e("li",{"data-preset-tag":"p",children:/*#__PURE__*/e("p",{children:"a text-to-speech model that synthesizes natural-sounding audio from written text, offering a voice output;"})}),/*#__PURE__*/e("li",{"data-preset-tag":"p",children:/*#__PURE__*/e("p",{children:"a speech-to-speech model capable of converting between various voices, facilitating voice conversion, and performing speech 
enhancement tasks."})]}),/*#__PURE__*/e("p",{children:"SpeechT5 combines the capabilities of all these distinct speech models, making it a versatile and comprehensive solution for various speech-related applications."}),/*#__PURE__*/e("p",{children:"Microsoft has also developed an artificial intelligence model called VALL-E, which turns text into speech, precisely mimicking a human voice from a sample recording lasting only three seconds. The AI also preserves the emotional coloring of the voice in the sample."}),/*#__PURE__*/e("h2",{children:"Emerging business models for generative AI"}),/*#__PURE__*/e("p",{children:"The potential of generative AI to revolutionize multiple industries and reshape our lifestyles and professions cannot be overstated. With the continuous evolution and maturation of this technology, numerous business models are emerging to exploit its capabilities. Prominent business models used by companies when dealing with generative AI include:"}),/*#__PURE__*/e("h3",{children:"Model-as-a-service"}),/*#__PURE__*/e("p",{children:"Model-as-a-Service refers to cloud-based or containerized applications that allow software creators who are not data scientists to use AI models through APIs, software development kits (SDKs), or apps. Its primary advantage is that it eliminates the need for companies to invest in the development infrastructure and resources required to build AI models from scratch."}),/*#__PURE__*/e("p",{children:"This business model mimics the popular subscription-based approach used for software services. Subscribers can opt for monthly, semi-annual, or annual commitments, ensuring a steady stream of revenue for developers. By leveraging cloud technology, businesses can tap into a wide array of generative AI models to generate fresh and inventive content. 
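To make the API-driven consumption pattern above concrete, here is a minimal sketch of how an application might assemble a request for a hosted generative model. The endpoint URL, model name, and parameter names are hypothetical placeholders, not any real vendor's API.

```python
import json

# Hypothetical Model-as-a-Service endpoint (illustrative assumption only).
API_URL = "https://api.example-maas.com/v1/generate"

def build_generation_request(prompt: str, model: str = "text-gen-1",
                             max_tokens: int = 256) -> str:
    """Assemble the JSON body a hosted generative model endpoint would receive."""
    return json.dumps({
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
    })

# The subscriber never touches model weights or training infrastructure;
# the app only sends prompts and receives generated content back.
body = build_generation_request("Write a tagline for a coffee brand")
```

This is the essence of the business model: the provider bears the cost of training and serving the foundation model, while the subscriber pays only for prompt-and-response access.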
As a result, businesses can effortlessly and economically harness the power of generative AI for creating novel and immersive experiences."}),/*#__PURE__*/e("h3",{children:"Built-in apps"}),/*#__PURE__*/e("p",{children:"Businesses develop fresh apps utilizing existing generative AI models, thereby offering novel and innovative experiences. For instance, companies can leverage generative AI to craft exclusive and captivating experiences, allowing them to generate videos, music, artwork, and various other forms of creative expression."}),/*#__PURE__*/e("p",{children:"Such applications are called built-in because ready-made generative models are built directly into them. Their features are permanently attached to the chosen model or models and are easy to use through the app's interface. Users often do not realize or even consider that all the features of the application, such as generating photo-based avatars or marketing texts, are powered by a particular generative AI model."}),/*#__PURE__*/e("h3",{children:"Vertical integration"}),/*#__PURE__*/e("p",{children:"The concept of vertical integration allows companies to use generative AI technologies to improve their current products and services. An illustrative example of this business model is the way some companies are incorporating generative AI capabilities into their existing search engines. In this way, generative AI can revolutionize search systems by delivering more precise and personalized results to users, surpassing the reliance on existing web pages alone."}),/*#__PURE__*/e("p",{children:"With vertical integration, businesses can utilize generative AI models to analyze vast quantities of data, enabling them to make predictions regarding pricing or enhance the precision of their recommendations. 
In this way, companies can create innovative experiences for their customers, increasing their competitiveness in the market."}),/*#__PURE__*/e("h2",{children:"Generative AI's Impact"}),/*#__PURE__*/e("p",{children:"The emergence of Generative AI marks a significant advancement in the progression of artificial intelligence. In the future, it will have a major impact on almost all industries. As businesses race to incorporate and adjust to this technology, it is crucial to comprehend its potential in providing economic and societal benefits."}),/*#__PURE__*/e("p",{children:"Generative AI has the potential to significantly boost labor productivity throughout the entire economy, in particular by broadening the working options available to some employees through the automation of some of their routine activities. The acceleration of the technological automation rate is strongly linked to the expansion of generative AI's ability to interpret natural language."}),/*#__PURE__*/e("p",{children:"Generative AI tools have the potential to greatly influence various business functions. According to McKinsey, four of these functions - customer operations, marketing and sales, software engineering, and research and development - could collectively contribute around 75 percent of the overall annual value derived from generative AI applications."}),/*#__PURE__*/e("p",{children:"The utilization of AI technology in enhancing business operations allows for greater flexibility and adaptability within these processes. This enables the elimination of traditional pipelines and promotes the integration of advanced AI systems alongside human counterparts."}),/*#__PURE__*/e("p",{children:"By adopting this approach, the interaction between machines and humans can be revolutionized, giving rise to teams consisting of both machines and humans. 
Such teams will be able to quickly process large amounts of data, grasp new information, and adapt to constantly changing conditions as they fulfill their tasks."}),/*#__PURE__*/e("p",{children:"AI's influence extends to enhancing and supporting human abilities, allowing machines and individuals to collaborate and excel in the tasks they are best at. AI systems adeptly handle repetitive and mundane assignments that involve extensive data analysis, while humans are better at handling unconventional information, drawing conclusions in complex situations, and making decisions in highly unpredictable conditions."}),/*#__PURE__*/e("p",{children:"The capabilities of generative AI empower companies to restructure their business operations, resulting in enhanced productivity and reduced costs."}),/*#__PURE__*/e("h2",{children:"The future and challenges of generative AI"}),/*#__PURE__*/e("p",{children:"Right now, we can say that the future of all AI depends on the development of generative AI. It has the potential to revolutionize various industries and aspects of our lives. However, it also comes with certain challenges that need to be addressed."}),/*#__PURE__*/e("p",{children:"AI-generated content can be misused for malicious purposes like creating deepfakes and spreading misinformation. Maintaining security measures and developing robust mechanisms to verify the authenticity of generated content are essential."}),/*#__PURE__*/e("p",{children:"Also, generative AI can potentially create content that is misleading, biased, or harmful. AI models learn from the data they are trained on. If the training data contains biases, those biases can be inadvertently replicated in the generated content. Careful selection and preprocessing of training data are necessary to avoid biases and ensure fairness and inclusivity."}),/*#__PURE__*/e("p",{children:"The content generated by AI may not always meet the desired quality standards. 
Ensuring that generated content is valuable and accurate and meets human expectations requires continuous monitoring, feedback loops, and refinement."}),/*#__PURE__*/e("p",{children:"As generative AI evolves, there is a need for appropriate regulations and policies to ensure responsible and ethical use. Governments and organizations should work together to establish guidelines that address potential risks, privacy concerns, fairness, and accountability."}),/*#__PURE__*/e("p",{children:"Despite these challenges, the future of generative AI holds immense potential. Generative AI has already shown tremendous progress in generating high-quality content like art, audio, and writing. In the future, we can expect even more sophisticated algorithms capable of producing original and innovative creations."}),/*#__PURE__*/e("h2",{children:"Conclusion"}),/*#__PURE__*/e("p",{children:"The generative AI market will continue to grow and, according to estimates, will expand roughly fivefold over the next five years. Key players such as Google, OpenAI, and Microsoft create foundation models that become the basis for many generative AI applications. Creating such systems is neither a cheap nor a quick task, but if the cost of developing such tools decreases over time, the number of foundation models and applications based on them will increase."}),/*#__PURE__*/e("p",{children:"Indeed, we are at the threshold of a new age, where thousands of jobs will be transformed and new ones probably created. No doubt these breakthrough Generative AI platforms will sustain and improve our day-to-day lifestyle. But it also means that we will have to adapt and possibly re-educate ourselves to work with AI. However, all of these developments will be beneficial to humanity, as these systems can empower people to engage in more meaningful and creative endeavors by taking over routine tasks. 
Human possibilities are extended through collaboration with AI, boosting people's productivity and handling tasks that were previously thought impossible to solve."})]});export const richText5=/*#__PURE__*/t(n.Fragment,{children:[/*#__PURE__*/e("p",{children:"With the launch of ChatGPT last year, the innovative and transformative power of generative AI almost instantaneously became known to the world at large. A deep learning subset of machine learning, generative AI uses neural networks to recognize patterns in existing data and generate unique new artifacts that display the characteristics of the training data without replicating it."}),/*#__PURE__*/t("p",{children:["With nearly ",/*#__PURE__*/e(a,{href:"https://edition.cnn.com/2023/07/23/business/ai-vc-investment-dot-com-bubble/index.html",motionChild:!0,nodeId:"eo4RAmtig",openInNewTab:!1,relValues:[],scopeId:"contentManagement",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:"$15bn invested"})})," [1] in artificial intelligence in the first six months of this year alone, the global generative AI market is expected to hit ",/*#__PURE__*/e(a,{href:"https://www.bloomberg.com/company/press/generative-ai-to-become-a-1-3-trillion-market-by-2032-research-finds/",motionChild:!0,nodeId:"eo4RAmtig",openInNewTab:!1,relValues:[],scopeId:"contentManagement",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:"$1tn within a decade"})})," [2]. 
Let's take a look at what the future of generative AI holds, especially for business enterprises."]}),/*#__PURE__*/e("h2",{children:"Trends that will shape the future of Generative AI"}),/*#__PURE__*/t("ul",{style:{"--framer-font-size":"18px","--framer-text-alignment":"start","--framer-text-color":"rgb(30, 33, 38)","--framer-text-stroke-width":"0px","--framer-text-transform":"none"},children:[/*#__PURE__*/e("li",{"data-preset-tag":"p",children:/*#__PURE__*/e("p",{children:"Generative AI\u2019s ability to create unique content using natural language processing will dramatically change the process of text-based content creation, impacting everything from creative writing and entertainment to marketing and customer service."})}),/*#__PURE__*/e("li",{"data-preset-tag":"p",children:/*#__PURE__*/e("p",{children:"As organizations gather tremendous amounts of data, there will be an increasing need for domain-specific and self-hosted Large Language Models or LLMs, leading to specialized generative AI models for different business needs."})}),/*#__PURE__*/e("li",{"data-preset-tag":"p",children:/*#__PURE__*/e("p",{children:"With customer experience, revenue growth, and productivity as the key focus of businesses\u2019 AI-related investments, generative AI will see targeted, niche implementations."})}),/*#__PURE__*/e("li",{"data-preset-tag":"p",children:/*#__PURE__*/e("p",{children:"\u2018Prompt Engineering\u2019 will become an essential skill in driving optimal results from generative AI solutions, likely becoming a key job role."})}),/*#__PURE__*/e("li",{"data-preset-tag":"p",children:/*#__PURE__*/e("p",{children:"Generative AI will transform critical industry processes, from new drug discovery by creating novel molecular structures after recognizing patterns in existing drug compositions, to financial risk mitigation by detecting anomalies in transactional 
data.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Increasing concerns about ethical considerations relating to intellectual property rights, information accuracy and privacy will lead to policy developments for better controls and monitoring.\"})})]}),/*#__PURE__*/e(\"h2\",{children:\"Industry-specific future use cases of AI Generative Models\"}),/*#__PURE__*/e(\"p\",{children:\"As deep learning in artificial intelligence leads to niche specialization, we will see focused applications of generative AI tools for different business enterprises.\"}),/*#__PURE__*/e(\"h3\",{children:\"Healthcare\"}),/*#__PURE__*/e(\"p\",{children:\"By analyzing vast amounts of case histories and medical data, recurrent neural networks in generative AI can identify patterns to aid disease prediction and diagnosis for early, accurate and effective patient treatment.\"}),/*#__PURE__*/t(\"p\",{children:[\"Significant investments are being made in AI-based drug discovery applications with generative models expected to lead \",/*#__PURE__*/e(a,{href:\"https://www.gartner.com/en/topics/generative-ai\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"over 30% of new drug discovery by 2025\"})}),\" [3].\"]}),/*#__PURE__*/e(\"h3\",{children:\"Marketing & Advertising\"}),/*#__PURE__*/e(\"p\",{children:\"Current natural language processing AI tools are already augmenting marketing processes like conducting research, creating customer-facing content & outbound messaging, and campaign planning. 
However, future developments in artificial intelligence will enable deep learning language models to better understand human psychology and produce emotionally engaging and relevant content for targeted consumers."}),/*#__PURE__*/e("h3",{children:"Design"}),/*#__PURE__*/e("p",{children:"Generative design will augment the design process across multiple disciplines, accelerating prototyping with optimal material consumption by generating waste-reducing patterns and suggesting lighter, cheaper or more durable materials. With effective application in the automotive, manufacturing and aerospace industries, generative design AI will automate a significant proportion of the design effort."}),/*#__PURE__*/e("p",{children:"Future generative AI tools will require designers to simply input material and product feature requirements, delivering increasingly complex design outputs and engineering details."}),/*#__PURE__*/e("h3",{children:"Finance"}),/*#__PURE__*/e("p",{children:"Perfectly suited for analyzing financial data, generative models using recurrent neural networks will transform into personal wealth managers, collecting customers\u2019 scattered financial information and portfolios, and consolidating it into a single customer profile with a tailored financial objective. 
With the opportunity to streamline and integrate areas like taxation, financial institutions will be able to provide value-added services to their customers.\"}),/*#__PURE__*/e(\"p\",{children:\"Deep learning architecture adept at identifying anomalies will aid fraud detection, while synthetic data generation will provide generative models with ample training data to build robust fraud models.\"}),/*#__PURE__*/e(\"h3\",{children:\"Software Development\"}),/*#__PURE__*/e(\"p\",{children:\"Artificial intelligence will augment programming in the future as machine learning algorithms in generative models help automate time-consuming tasks like code generation, translation, optimization and debugging.\"}),/*#__PURE__*/t(\"p\",{children:[\"In the immediate future generative AI will help with legacy code modernization, while nearly \",/*#__PURE__*/e(a,{href:\"https://www.gartner.com/en/topics/generative-ai\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"15% of new applications will be generated by artificial intelligence by 2027\"})}),\" [3].\"]}),/*#__PURE__*/e(\"h3\",{children:\"Media & Entertainment\"}),/*#__PURE__*/e(\"p\",{children:\"As generative adversarial networks in generative AI models create realistic images and natural language processing aids content creation, AI will become invaluable in the entertainment industry.\"}),/*#__PURE__*/e(\"p\",{children:\"The creation of unique and novel compositions coupled with AI generated songwriting will augment the creative process for musicians, while AI voice synthesis and speech recognition will make automated real-time dubbing, translations and voiceovers possible.\"}),/*#__PURE__*/e(\"p\",{children:\"Combining a transformer neural network and generative adversarial network, the generative AI tool Dall-E can generate images creating highly imaginative AI art. 
By some estimates, the majority of the content in mainstream movies will be AI-generated by the end of this decade.\"}),/*#__PURE__*/e(\"h3\",{children:\"VR/AR & Gaming\"}),/*#__PURE__*/e(\"p\",{children:\"Generative AI will speed up the development of immersive 3D environments in video games and virtual reality spaces like the Metaverse. Such AI systems will be able to generate high-resolution images of 3D worlds, and life-like dynamic avatars using computer vision, as well as real-time interactions with adaptive non-player characters no longer restricted by scripts.\"}),/*#__PURE__*/e(\"h3\",{children:\"Retail\"}),/*#__PURE__*/e(\"p\",{children:\"The retail sector can reap multiple benefits from the inclusion of generative AI in its operational processes, as AI models will provide engaging customer support, increasingly personalized offerings and improved demand planning, forecasting and inventory management.\"}),/*#__PURE__*/e(\"h3\",{children:\"Generative AI benefits for enterprises\"}),/*#__PURE__*/e(\"p\",{children:\"Generative AI trained on enterprise data will change how businesses access and utilize information internally. Data extracted and structured to generate responses upon prompts will enhance decision making and empowerment of the workforce in nearly all business functions.\"}),/*#__PURE__*/e(\"h3\",{children:\"Business Strategy & Knowledge Management\"}),/*#__PURE__*/t(\"p\",{children:[\"Conversational AI embedded into domain-specific LLMs, fine-tuned to manage unstructured data, will create highly efficient retrieval systems. 
Nearly \",/*#__PURE__*/e(a,{href:\"https://www.gartner.com/en/topics/generative-ai\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"30% of enterprises\"})}),\" [3] will implement AI-augmented strategy within 2 years, removing information biases, increasing data privacy and simulating future business scenarios.\"]}),/*#__PURE__*/e(\"h3\",{children:\"Customer Experience\"}),/*#__PURE__*/e(\"p\",{children:\"Generative AI will increasingly be used to enhance customer experience, provide personalized recommendation and initiate engaging interactions with customers, especially with sentiment analysis and natural language AI chatbot integrations.\"}),/*#__PURE__*/e(\"h3\",{children:\"Brand Marketing and Content\"}),/*#__PURE__*/e(\"p\",{children:\"Besides customer data management, a marketing-specific machine learning algorithm can help organizations conduct better marketing research and offer insights that can shape entire marketing strategies.\"}),/*#__PURE__*/e(\"p\",{children:\"Tools like Dall-E and Jasper will impact brand development and content planning, with text or image based AI generated content delivered as per brand guidelines and personalized.\"}),/*#__PURE__*/e(\"p\",{children:\"NLP may end up providing the first draft of all content including blogs, social media posts, reports and emails in the near future, giving content writers and marketers more time to work on strategy.\"}),/*#__PURE__*/e(\"h3\",{children:\"Productivity\"}),/*#__PURE__*/t(\"p\",{children:[\"Incorporating generative AI into workflows has drastically shortened task durations while automating repetitive tasks, allowing individuals to focus on their core activities. 
Within 5 years, more than \",/*#__PURE__*/e(a,{href:\"https://www.gartner.com/en/topics/generative-ai\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"100 million workers\"})}),\" [3] will employ AI to contribute to their work.\"]}),/*#__PURE__*/e(\"h3\",{children:\"Human Resources\"}),/*#__PURE__*/e(\"p\",{children:\"Even as generative AI alters workers\u2019 skillsets for managing information, employees\u2019 tasks will be executed in partnership with AI. Talent acquisition tasks will become automated as AI aids the search for suitably matched talent to fill positions, and AI-enabled models will provide instant performance analyses of employees from past data and current KPIs.\"}),/*#__PURE__*/e(\"h3\",{children:\"Compliance\"}),/*#__PURE__*/e(\"p\",{children:\"Generative AI will assist enterprises in complying with regulations relating to risk mitigation and sustainability. Computer vision, a deep learning technique, can visually analyze information and behaviors to detect fraud.\"}),/*#__PURE__*/e(\"h2\",{children:\"Jobs expected to be transformed by Generative AI\"}),/*#__PURE__*/e(\"p\",{children:\"Despite the expected widespread adoption of AI, human intelligence will still be required at both the start and end of the process, from prompt creation to content evaluation.\"}),/*#__PURE__*/e(\"h3\",{children:\"Writers\"}),/*#__PURE__*/e(\"p\",{children:\"Large language model tools like ChatGPT will generate text in any form prompted by user queries, inspiring writers with original content ideas and improving their work in terms of time saving and content quality as AI generates anything from blogs to articles and full-length novels in any human language.\"}),/*#__PURE__*/e(\"h3\",{children:\"Musicians\"}),/*#__PURE__*/e(\"p\",{children:\"Input based on a concept, theme or thought will generate unique compositions or songs as generative AI employs a neural network 
language model to produce new data in the form of an original piece of music or songwriting.\"}),/*#__PURE__*/e(\"h3\",{children:\"Artists\"}),/*#__PURE__*/e(\"p\",{children:\"As with other creative forms, art will see a renaissance in the form of image generation. Midjourney, a generative model, is capable of creating photorealistic images and even surreal AI-generated art based on both human language and image prompts. These tools support creativity and expression unrestricted by logic and free from licensing requirements associated with stock image usage.\"}),/*#__PURE__*/e(\"h3\",{children:\"Customer Service Agents\"}),/*#__PURE__*/e(\"p\",{children:\"With chatbot plugins and integrations based on AI algorithms, repetitive and simple queries will be automated, leaving agents to provide better, focused and personalized service to their clients.\"}),/*#__PURE__*/e(\"h3\",{children:\"Programmers\"}),/*#__PURE__*/e(\"p\",{children:\"Contrary to the belief that AI will take human jobs, code generation tools like Codex and GitHub Copilot are created with the purpose of pairing with human programmers for greater efficiency and speed.\"}),/*#__PURE__*/e(\"p\",{children:\"With better bug identification and code fixing capabilities, programmers will be able to increase their productivity as well as skills by co-learning with AI.\"}),/*#__PURE__*/e(\"h3\",{children:\"New Skills for the future\"}),/*#__PURE__*/e(\"p\",{children:\"Prompt writing or engineering is one of the skills that will become critical in using generative AI models. 
As the quality of prompts directly impacts the result generated, prompt design will become one of the top skills in demand for operating specialized AI.\"}),/*#__PURE__*/e(\"p\",{children:\"As domain-specific LLMs are employed, user experience will change with workers becoming editors rather than creators of information and content.\"}),/*#__PURE__*/e(\"h2\",{children:\"Generative AI challenges and controls\"}),/*#__PURE__*/e(\"p\",{children:\"As with any new technology, there are concerns and challenges associated with generative AI, especially in terms of security and ethical usage.\"}),/*#__PURE__*/e(\"h3\",{children:\"Concerns\"}),/*#__PURE__*/t(\"ul\",{style:{\"--framer-font-size\":\"18px\",\"--framer-text-alignment\":\"start\",\"--framer-text-color\":\"rgb(30, 33, 38)\",\"--framer-text-stroke-width\":\"0px\",\"--framer-text-transform\":\"none\"},children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Generative AI does not obtain consent from authors of works used as input, leading to new data that is often similar to original copyrighted works.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Identity misappropriation for individuals and brands can lead to fraud or misinformation damaging to concerned parties.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Lack of legal frameworks concerning intellectual property makes ownership rights questionable and uncertain.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"GDPR does not cover AI tools trained on public data, hence confidential enterprise data entered into tools like ChatGPT risks becoming public information.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"As the tools do not provide references or credits to the original work, the sources cannot be verified or 
approached for consent.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Deepfakes, realistic media created by AI, make it difficult to separate AI-generated content from what is real.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"As LLMs train on input data, any biases or discriminatory themes in that data will be replicated in the output.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"\u2018Hallucination\u2019 is a concern in generative AI where a lack of relevant information leads the system to produce inaccurate or fabricated data asserted as being correct.\"})})]}),/*#__PURE__*/e(\"h3\",{children:\"Controls\"}),/*#__PURE__*/t(\"ul\",{style:{\"--framer-font-size\":\"18px\",\"--framer-text-alignment\":\"start\",\"--framer-text-color\":\"rgb(30, 33, 38)\",\"--framer-text-stroke-width\":\"0px\",\"--framer-text-transform\":\"none\"},children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"To prevent cybersecurity fraud, cyber insurance providers will need to provide adequate coverage under their policies.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Self-hosted LLMs will eliminate privacy concerns by restricting confidential information to the company\u2019s internal systems.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Watermarking AI-generated artifacts is one way to control fake or counterfeit imagery, but stricter controls will be needed.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Continuous testing of models by validating correct results and rejecting errors will train the model to deliver accurate results.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Frameworks on 
ethical use of generative models and increased accountability will need to be developed.\"})})]}),/*#__PURE__*/e(\"h2\",{children:\"Conclusion\"}),/*#__PURE__*/e(\"p\",{children:\"Despite the relatively short history of generative AI, the technology will have a far-reaching impact on the future. While generative AI transforms enterprises and creative expression, the lack of oversight and legal frameworks puts both organizations and individuals at risk of fraud, misinformation, identity theft, and theft of AI-generated property.\"}),/*#__PURE__*/e(\"p\",{children:\"Understanding the potential value-addition that generative AI brings to business will allow enterprises and people to adapt to challenges and explore its multiple benefits, some yet to be discovered.\"}),/*#__PURE__*/e(\"h2\",{children:\"References\"}),/*#__PURE__*/t(\"ol\",{style:{\"--framer-font-size\":\"18px\",\"--framer-text-alignment\":\"start\",\"--framer-text-color\":\"rgb(30, 33, 38)\",\"--framer-text-stroke-width\":\"0px\",\"--framer-text-transform\":\"none\",\"--list-style-type\":\"unset\"},children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Pitchbook data, 2023\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Bloomberg, 2023\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Gartner, 2023\"})})]})]});\nexport const __FramerMetadata__ = 
{\"exports\":{\"richText3\":{\"type\":\"variable\",\"annotations\":{\"framerContractVersion\":\"1\"}},\"richText\":{\"type\":\"variable\",\"annotations\":{\"framerContractVersion\":\"1\"}},\"richText2\":{\"type\":\"variable\",\"annotations\":{\"framerContractVersion\":\"1\"}},\"richText1\":{\"type\":\"variable\",\"annotations\":{\"framerContractVersion\":\"1\"}},\"richText4\":{\"type\":\"variable\",\"annotations\":{\"framerContractVersion\":\"1\"}},\"richText5\":{\"type\":\"variable\",\"annotations\":{\"framerContractVersion\":\"1\"}},\"__FramerMetadata__\":{\"type\":\"variable\"}}}"],
  "mappings": "+LAAsJ,IAAMA,EAAsBC,EAAIC,EAAS,CAAC,SAAS,CAAcC,EAAE,IAAI,CAAC,SAAS,oUAAoU,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,oUAAoU,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,gDAAgD,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,qUAAqU,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8PAA8P,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6PAA6P,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,4CAA4C,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4aAA4a,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,yRAAyR,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sdAAsd,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,0TAA0T,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,0VAA0V,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,oDAAoD,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,oZAAoZ,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,saAAsa,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,iaAAia,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,kCAAkC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6MAA6M,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,uBAAuB,CAAC,EAAE,6QAA6Q,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,kPAAkP,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,qRAAqR,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8iBAA8iB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,mYAAmY,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,qDAAqD,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4eAA4e,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2NAA2N,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,gUAAgU,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAEC,EAAE,CAAC,KAAK,wCAAwC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,yBAAyB,CAAC,CAAC,CAAC,EAAE,qNAAqN,CAAC,CAAC,EAAeJ,EAAE,KAAK,CAAC,SAAS,CAAC,qBAAkCE,EAAEC,EAAE,CAAC,KAAK,wCAAwC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,mCAAmC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeJ,EAAE,KAAK,CAAC,MAAM,CAAC,qBAAqB,OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,6BAA6B,MAAM,0BAA0B,MAAM,EAAE,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,2JAA2J,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,+JAA+J,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,uLAAuL,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAe
A,EAAE,KAAK,CAAC,SAAS,uEAAuE,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,4BAAyCE,EAAEC,EAAE,CAAC,KAAK,wCAAwC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,oBAAoB,CAAC,CAAC,CAAC,EAAE,mBAAmB,CAAC,CAAC,EAAeJ,EAAE,KAAK,CAAC,MAAM,CAAC,qBAAqB,OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,6BAA6B,MAAM,0BAA0B,MAAM,EAAE,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,6PAA6P,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,0KAA0K,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,8MAA8M,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,wNAAwN,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,kKAAkK,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,0HAA0H,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,keAAke,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6JAA6J,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,CAAC,iBAA8BE,EAAEC,EAAE,CAAC,KAAK,wCAAwC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,yBAAyB,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeJ,EAAE,KAAK,CAAC,MAAM,CAAC,qBAAqB,OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,6BAA6B,MAAM,0BAA0B,MAAM,EAAE,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,qHAAqH,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,6JAA6J,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,0LAA0L,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,0IAA0I,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,kVAAkV,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,sDAAsD,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2OAA2O,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,+BAA4CE,EAAEC,EAAE,CAAC,KAAK,wCAAwC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,2BAA2B,CAAC,CAAC,CAAC,EAAE,0RAA0R,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,mTAAmT,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,+BAA4CE,EAAEC,EAAE,CAAC,K
AAK,wCAAwC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,mCAAmC,CAAC,CAAC,CAAC,EAAE,GAAG,CAAC,CAAC,EAAeJ,EAAE,KAAK,CAAC,MAAM,CAAC,qBAAqB,OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,6BAA6B,MAAM,0BAA0B,OAAO,oBAAoB,OAAO,EAAE,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,8DAA8D,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,4KAA4K,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,4JAA4J,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,gKAAgK,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,6LAA6L,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,6BAA6B,CAAC,EAAeF,EAAE,KAAK,CAAC,MAAM,CAAC,qBAAqB,OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,6BAA6B,MAAM,0BAA0B,MAAM,EAAE,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,iHAAiH,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,0JAA0J,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,6IAA6I,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,CAAC,kCAA+CE,EAAEC,EAAE,CAAC,KAAK,wCAAwC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,yBAAyB,CAAC,CAAC,CAAC,EAAE,4CAA4C,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,yYAAyY,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,0BAA0B,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6eAA6e,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,iiBAAiiB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,wSAAwS,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,CAAC,OAAoBE,EAAEC,EAAE,CAAC,KAAK,wCAAwC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,8BAA8B,CAAC,CAAC,CAAC,EAAE,uBAAuB,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,0XAA0X,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,qBAAqB,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6kBAA6kB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC
,SAAS,oBAAoB,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6ZAA6Z,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,qaAAqa,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,+BAA+B,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6TAA6T,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,qCAAqC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,ydAAyd,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,mCAAmC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sgBAAsgB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,6BAA6B,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,glBAAglB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,yiBAAyiB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sOAAsO,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,YAAY,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,ijBAAijB,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,yVAAsWE,EAAEC,EAAE,CAAC,KAAK,wCAAwC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,0CAA0C,CAAC,CAAC,CAAC,EAAE,yEAAyE,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeC,EAAuBL,EAAIC,EAAS,CAAC,SAAS,CAAcC,EAAE,IAAI,CAAC,SAAS,0dAA0d,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,mBAAmB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,gTAAgT,CAAC,EAAeF,EAAE,KAAK,CAAC,MAAM,CAAC,qBAAqB,OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,6BAA6B,MAAM,0BAA0B,MAAM,EAAE,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,yLAAyL,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,yOAAyO,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,qPAAqP,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,kMAAkM,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,kMAAkM,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6JAA6J,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,wCAAwC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,idAAid,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,mYAAmY,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,mUAAmU,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,0YAA0Y,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,4BAA4B,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,gVAAgV,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sTAA
sT,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,cAAc,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,qbAAqb,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,eAAe,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,meAAme,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8VAA8V,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,eAAe,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,obAAob,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,2BAA2B,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,uDAAuD,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,wBAAwB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sXAAsX,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,kiBAAkiB,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,yBAAyB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,qSAAqS,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,kPAAkP,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,mBAAmB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,qfAAqf,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2ZAA2Z,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,kDAAkD,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,mMAAmM,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,mBAAmB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,gRAAgR,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,ufAAuf,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+VAA+V,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,aAAa,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+bAA+b,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,iQAAiQ,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,aAAa,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,mVAAmV,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,yUAAyU,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,gBAAgB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,keAAke,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,scAAsc,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,WAAW,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,mOAAmO,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8dAA8d,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,4BAA4B,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,0MAA0M,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,iCAAiC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,idAAid,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,4BAA4B,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8RAA8R,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,qaAAqa,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,qBAAqB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,ghBAAghB,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,8CAA8C,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,wYAAwY,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,kXAAmX,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,wCAAwC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,kDAAkD,CAAC,EAAeF,EAAE,KAAK,CAAC,MAAM,CAAC,qBAAqB,OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,6BAA6B,MAAM,0BAA0B,MAAM,EAAE,SA
AS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,oLAAoL,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,mMAAmM,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,+GAA+G,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,0LAA0L,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,oIAAoI,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,YAAY,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,0UAA0U,CAAC,CAAC,CAAC,CAAC,EAAeI,EAAuBN,EAAIC,EAAS,CAAC,SAAS,CAAcC,EAAE,IAAI,CAAC,SAAS,2TAA2T,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+TAA+T,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,uBAAuB,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,+HAA4IE,EAAE,KAAK,CAAC,SAAS,iKAAiK,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,gTAAgT,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,icAAic,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,uCAAuC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sNAAsN,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,wBAAwB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,saAAsa,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2bAA2b,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4ZAA4Z,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,kUAAkU,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,obAAob,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,4BAA4B,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4QAA4Q,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4QAA4Q,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,0eAA0e,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,ifAAif,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+FAA+F,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,wBAAwB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,kcAAkc,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8dAA8d,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4VAA4V,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,kfAAof,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8SAA8S,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,2DAA2D,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,wWAAwW,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,iUAAiU,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,yXAAyX,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,kaAAka,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,iSAAiS,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,kNAAkN,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,oCAAoC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,uRAAuR,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,gVAAiV,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4OAA4O,
CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8RAA8R,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,waAAwa,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6XAA6X,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8VAA8V,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sXAAsX,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,oLAAoL,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,qBAAqB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,qGAAqG,CAAC,EAAeF,EAAE,KAAK,CAAC,MAAM,CAAC,oBAAoB,OAAO,EAAE,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,kCAAkC,CAAC,EAAE,uMAAuM,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,mDAAmD,CAAC,EAAE,uFAAoGA,EAAEC,EAAE,CAAC,KAAK,wBAAwB,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,eAAe,CAAC,CAAC,CAAC,EAAE,oFAAoF,CAAC,CAAC,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,6BAA6B,CAAC,EAAE,wTAAwT,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,+DAA+D,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,gGAAgG,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,0BAA0B,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,wRAAwR,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sVAAsV,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,wBAAwB,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,gbAAgb,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,iCAAiC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8cAA8c,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,yBAAyB,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,ijBAAijB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,kBAAkB,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,mQAAmQ,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,mUAAmU,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,sBAAsB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2RAA2R,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6RAA6R,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,0YAA0Y,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6YAA6Y,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,yCAAsDE,EAAEC,EAAE,CAAC,KAAK,sFAAsF,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,gBAAgB,CAAC,CAAC,CAAC
,EAAE,UAAuBF,EAAEC,EAAE,CAAC,KAAK,6BAA6B,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,aAAa,CAAC,CAAC,CAAC,EAAE,qCAAqC,CAAC,CAAC,EAAeJ,EAAE,KAAK,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAEC,EAAE,CAAC,KAAK,4DAA4D,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,qBAAqB,CAAC,CAAC,CAAC,EAAE,iFAAiF,CAAC,CAAC,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAEC,EAAE,CAAC,KAAK,4DAA4D,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,wBAAwB,CAAC,CAAC,CAAC,EAAE,uGAAuG,CAAC,CAAC,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAEC,EAAE,CAAC,KAAK,kDAAkD,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,iBAAiB,CAAC,CAAC,CAAC,EAAE,kHAAkH,CAAC,CAAC,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAEC,EAAE,CAAC,KAAK,6EAA6E,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,8BAA8B,CAAC,CAAC,CAAC,EAAE,8DAA8D,CAAC,CAAC,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAEC,EAAE,CAAC,KAAK,6DAA6D,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,2BAA2B,CAAC,CAAC,CAAC,EAAE,gDAAgD,CAAC,CAAC,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAEC,EAAE,CAAC,KAAK,kFAAkF,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,qDAAqD,CAAC,CAAC,CAAC,EAAE,0EAA0E,CAAC,CAAC,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAEC,EAAE,CAAC,KAAK,yEAAyE,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,uCAAuC,CAAC,CAAC,CAAC,EAAE,kEAAkE,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,YAAY,CAAC,EAAe
A,EAAE,IAAI,CAAC,SAAS,6bAA6b,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,odAAod,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,mWAAmW,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,0VAA0V,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,cAAc,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,KAAK,CAAC,SAAS,4fAA4f,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeK,EAAuBP,EAAIC,EAAS,CAAC,SAAS,CAAcC,EAAE,KAAK,CAAC,SAAS,kBAAkB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4PAA4P,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,WAAW,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8eAA0d,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,UAAU,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,wbAAibE,EAAEC,EAAE,CAAC,KAAK,qHAAqH,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,2BAA2B,CAAC,CAAC,CAAC,EAAE,4EAAuE,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,+hBAA+hB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4RAAuR,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,QAAQ,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6FAA6F,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,0QAA0Q,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,uBAAuB,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,2XAAwYE,EAAE,KAAK,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,4RAA4R,CAAC,EAAeA,EAAE,SAAS,CAAC,SAAS,2BAAiB,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeM,EAAuBR,EAAIC,EAAS,CAAC,SAAS,CAAcC,EAAE,IAAI,CAAC,SAAS,2cAA2c,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+TAA+T,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+XAA+X,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,8CAA8C,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,snBAAsnB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,mVAAmV,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2XAA2X,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8aAA8a,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,uUAAuU,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,ihBAAihB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+YAA+Y,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,4DAA4D,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,wQAAwQ,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sgBAAsgB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sTAAsT,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,yYAAyY,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+PAA+P,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,0dAA0d,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,iRAAiR,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,qCAAqC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+RAA+R,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,oCAAoC,CAAC,EAAeA
,EAAE,IAAI,CAAC,SAAS,2DAA2D,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8MAA8M,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6RAA6R,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,kBAAkB,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,yeAAye,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,kBAAkB,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,iOAAiO,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,kBAAkB,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+PAA+P,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,kBAAkB,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4PAA4P,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,UAAU,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4PAA4P,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,gBAAgB,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,uSAAuS,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,QAAQ,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,yOAAyO,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,qLAAqL,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,sBAAsB,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,kMAA+ME,EAAEC,EAAE,CAAC,KAAK,uFAAuF,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,qBAAqB,CAAC,CAAC,CAAC,EAAE,6PAA6P,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,yaAAya,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+TAA+T,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,aAAa,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,iOAAiO,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,yYAAyY,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2UAA2U,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,6BAA6B,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sDAAsD,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,QAAQ,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8RAA8R,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,QAAQ,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+JAA+J,CAAC,EAAeF,EAAE,KAAK,CAAC,MAAM,CAAC,qBAAqB,OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,6BAA6B,MAAM,0BAA0B,MAAM,EAAE,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,gEAAgE,CAAC,CAAC,CAAC,EAAeA,EAAE,KA
AK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,wCAAwC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,0CAA0C,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,6DAA6D,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,SAAS,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,kCAAkC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,WAAW,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8WAA8W,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sNAAsN,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,MAAM,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+WAA+W,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,QAAQ,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,wiBAAwiB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,qiBAAqiB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,wBAAwB,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sbAAsb,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,WAAW,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,oNAAoN,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,qHAAqH,CAAC,EAAeF,EAAE,KAAK,CAAC,MAAM,CAAC,qBAAqB,OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,6BAA6B,MAAM,0BAA0B,MAAM,EAAE,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,OAAO,CAAC,EAAE,iKAAiK,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,MAAM,CAAC,EAAE,qRAAqR,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,SAAS,CAAC,EAAE,uFAAuF,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,WAAW,CAAC,EAAE,yHAAyH,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,+GAA+G,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,MAAM,CAAC,EAAE,oHAAoH,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAA
E,SAAS,CAAC,SAAS,YAAY,CAAC,EAAE,0IAA0I,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,SAAS,CAAC,EAAE,6IAA6I,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,QAAQ,CAAC,EAAE,gNAAgN,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,OAAO,CAAC,EAAE,mOAAmO,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,kBAAkB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,QAAQ,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sQAAsQ,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,0VAA0V,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,gQAAgQ,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,obAAob,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,iQAAiQ,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2XAA2X,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,QAAQ,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6bAA6b,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,QAAQ,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+VAA+V,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,OAAO,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8SAA8S,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+DAA+D,CAAC,EAAeF,EAAE,KAAK,CAAC,MAAM,CAAC,qBAAqB,OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,6BAA6B,MAAM,0BAA0B,MAAM,EAAE,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,oBAAoB,CAAC,EAAE,uJAAuJ,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,8BAA8B,CAAC,EAAE,wEAAwE,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,OAAO,CAAC,EAAE,sPAAsP,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,SAAS,CAAC,EAAE,8XAA8X,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,WAAW,CAAC
,EAAE,6FAA6F,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,6BAA6B,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,gSAAgS,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,0WAA0W,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,QAAQ,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4bAA4b,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,kRAAkR,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,MAAM,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6WAA6W,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,uSAAuS,CAAC,EAAeF,EAAE,KAAK,CAAC,MAAM,CAAC,qBAAqB,OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,6BAA6B,MAAM,0BAA0B,MAAM,EAAE,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,8GAA8G,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,iHAAiH,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,0FAA0F,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,WAAW,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8GAA8G,CAAC,EAAeF,EAAE,KAAK,CAAC,MAAM,CAAC,qBAAqB,OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,6BAA6B,MAAM,0BAA0B,MAAM,EAAE,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,0IAA0I,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,4GAA4G,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,gJAAgJ,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,mKAAmK,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+QAA+Q,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,4CAA4C,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+VAA+V,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,oBAAoB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,wYAAwY,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,0eAA0e,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,eAAe,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,gUAAgU,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8dAA8d,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,sBAAsB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+cAA+c,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+UAA+U,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,wBAAwB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4UAA4U,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,0YAA0Y,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2XAA2X,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,mRAAmR,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8TAA8T,C
AAC,EAAeA,EAAE,IAAI,CAAC,SAAS,gbAAgb,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,iJAAiJ,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,4CAA4C,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2PAA2P,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,qQAAqQ,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,oXAAoX,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,oOAAoO,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,oRAAoR,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6TAA6T,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,YAAY,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,gfAAgf,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,oqBAAoqB,CAAC,CAAC,CAAC,CAAC,EAAeO,EAAuBT,EAAIC,EAAS,CAAC,SAAS,CAAcC,EAAE,IAAI,CAAC,SAAS,iYAAiY,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,eAA4BE,EAAEC,EAAE,CAAC,KAAK,yFAAyF,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,gBAAgB,CAAC,CAAC,CAAC,EAAE,kIAA+IF,EAAEC,EAAE,CAAC,KAAK,gHAAgH,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,sBAAsB,CAAC,CAAC,CAAC,EAAE,6GAA6G,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,oDAAoD,CAAC,EAAeF,EAAE,KAAK,CAAC,MAAM,CAAC,qBAAqB,OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,6BAA6B,MAAM,0BAA0B,MAAM,EAAE,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,+PAA0P,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,mOAAmO,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,wKAAmK,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,4JAAkJ,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,+PAA+P,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,kMAAkM,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,4DAA4D,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,wKAAwK,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,YAAY,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6NAA6N,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,0HAAuIE,EAAEC,EAAE,CAAC,KAAK,kDAAkD,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,wCAAwC,CAAC,CAAC,CAAC,EAAE,OAAO,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,yBAAyB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,iZAAiZ,CA
AC,EAAeA,EAAE,KAAK,CAAC,SAAS,QAAQ,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,gZAAgZ,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,oLAAoL,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,SAAS,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,kdAA6c,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2MAA2M,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,sBAAsB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sNAAsN,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,gGAA6GE,EAAEC,EAAE,CAAC,KAAK,kDAAkD,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,8EAA8E,CAAC,CAAC,CAAC,EAAE,OAAO,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,uBAAuB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,oMAAoM,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,mQAAmQ,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sRAAsR,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,gBAAgB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,kXAAkX,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,QAAQ,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8QAA8Q,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,wCAAwC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,iRAAiR,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,0CAA0C,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,uJAAoKE,EAAEC,EAAE,CAAC,KAAK,kDAAkD,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,oBAAoB,CAAC,CAAC,CAAC,EAAE,0JAA0J,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,qBAAqB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,iPAAiP,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,6BAA6B,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2MAA2M,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,oLAAoL,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,yMAAyM,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,cAAc,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,4MAAyNE,EAAEC,EAAE,CAAC,KAAK,kDAAkD,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,qBAAqB,CAAC,CAAC,CAAC,EAAE,kDAAkD,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,iBAAiB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6WAAmW,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,YAAY,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,gOAAgO,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,kDAAkD,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,iLAAiL,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,SAAS,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,mTAAmT,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,WAAW,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8NAA8N,CAAC,EAAeA,EAAE,K
AAK,CAAC,SAAS,SAAS,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sYAAsY,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,yBAAyB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,kMAAkM,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,aAAa,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2MAA2M,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,gKAAgK,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,2BAA2B,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,uQAAuQ,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sJAAsJ,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,uCAAuC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,iJAAiJ,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,UAAU,CAAC,EAAeF,EAAE,KAAK,CAAC,MAAM,CAAC,qBAAqB,OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,6BAA6B,MAAM,0BAA0B,MAAM,EAAE,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,mJAAmJ,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,yHAAyH,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,8GAA8G,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,2JAA2J,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,mIAAmI,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,iHAAiH,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,iHAAiH,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,iLAAuK,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,UAAU,CAAC,EAAeF,EAAE,KAAK,CAAC,MAAM,CAAC,qBAAqB,OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,6BAA6B,MAAM,0BAA0B,MAAM,EAAE,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,wHAAwH,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,kIAA6H,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,8HAA8H,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,mIAAmI,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,wGAAwG,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,YAAY,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2WAA2W,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,yMAAyM,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,YAAY,CAAC,EAAeF,EAAE,KAAK,CAAC,MAAM,CAAC,qBAAqB,
OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,6BAA6B,MAAM,0BAA0B,OAAO,oBAAoB,OAAO,EAAE,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,sBAAsB,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,iBAAiB,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,eAAe,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EACng3IQ,EAAqB,CAAC,QAAU,CAAC,UAAY,CAAC,KAAO,WAAW,YAAc,CAAC,sBAAwB,GAAG,CAAC,EAAE,SAAW,CAAC,KAAO,WAAW,YAAc,CAAC,sBAAwB,GAAG,CAAC,EAAE,UAAY,CAAC,KAAO,WAAW,YAAc,CAAC,sBAAwB,GAAG,CAAC,EAAE,UAAY,CAAC,KAAO,WAAW,YAAc,CAAC,sBAAwB,GAAG,CAAC,EAAE,UAAY,CAAC,KAAO,WAAW,YAAc,CAAC,sBAAwB,GAAG,CAAC,EAAE,UAAY,CAAC,KAAO,WAAW,YAAc,CAAC,sBAAwB,GAAG,CAAC,EAAE,mBAAqB,CAAC,KAAO,UAAU,CAAC,CAAC",
  "names": ["richText", "u", "x", "p", "Link", "motion", "richText1", "richText2", "richText3", "richText4", "richText5", "__FramerMetadata__"]
}
