{
  "version": 3,
  "sources": ["ssg:https://framerusercontent.com/modules/YVgynkVt96a0ES9Gk8yz/VEx2WPNLcsYT7stAmBDF/eo4RAmtig-26.js"],
  "sourcesContent": ["import{jsx as e,jsxs as t}from\"react/jsx-runtime\";import{ComponentPresetsConsumer as a,Link as o}from\"framer\";import{motion as n}from\"framer-motion\";import*as i from\"react\";import{Youtube as r}from\"https://framerusercontent.com/modules/NEd4VmDdsxM3StIUbddO/DDzyuYPF56TuI0bfUu2z/YouTube.js\";export const richText=/*#__PURE__*/t(i.Fragment,{children:[/*#__PURE__*/t(\"p\",{children:[\"Machine learning (ML) today is something that concerns a great number of people, and many believe that it is the way into the future. Machine learning and Artificial Intelligence (AI) in general are the forces driving the advancement of a diverse range of leading industries and domains, such as \",/*#__PURE__*/e(o,{href:\"https://toloka.ai/ecommerce/\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(n.a,{children:\"e-commerce\"})}),\", \",/*#__PURE__*/e(o,{href:\"https://toloka.ai/ml/computer-vision/\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(n.a,{children:\"computer vision\"})}),\", and \",/*#__PURE__*/e(o,{href:\"https://toloka.ai/ml/natural-language-processing/\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(n.a,{children:\"natural language processing\"})}),\", to name a few. So why are AI and ML so important and how these technologies are changing processes in companies?\"]}),/*#__PURE__*/e(\"h2\",{children:\"The distinction between AI and ML\"}),/*#__PURE__*/e(\"p\",{children:\"Before we begin explaining the importance of AI and ML, we need to figure out how the two terms interact. Machine learning is a segment of AI, a small subset and just one of many of its branches. ML stands for a particular AI application that enables computers to retrieve knowledge from data and autonomously learn from it. AI is a broader term that enables a machine or system to intelligently comprehend, reason, act or adapt similarly to a human.\"}),/*#__PURE__*/e(\"p\",{children:\"Why ML and AI are so important to companies\"}),/*#__PURE__*/e(\"p\",{children:\"With the increasing amount and variety of data availability, the need for computational processing is becoming vital to retrieve important information. The amount of information is growing and it is getting harder and harder for humans to process such a large amount of content. AI and ML solutions are becoming increasingly popular as humans apply them to analyze and process vast amounts of data, enhance the efficiency of decision-making, generate online recommendations and findings, as well as create reliable predictions and forecasts.\"}),/*#__PURE__*/e(\"p\",{children:\"Machine learning has impacted many business areas. Intelligent algorithms substitute human labor in numerous processes. The role of ML in business has become extremely significant as it assists in reducing costs by decreasing the amount of time, effort, and man-hours spent.\"}),/*#__PURE__*/e(\"p\",{children:\"Perhaps the biggest advantage of ML is the fact that significantly more data is processed in less time, which is especially critical for large projects. When processing data, the machine considers a tremendous amount of different aspects in a relatively short period of time and then makes a decision based on them, which would have taken a human a lot more time.\"}),/*#__PURE__*/e(\"p\",{children:\"Aside from the data processing speed, ML also automates processes that will not require constant human intervention later on. Moreover, the longer a machine works on a specific problem, the higher the success of its decisions. All of these factors of machine learning help lower personnel costs as well as customer engagement costs.\"}),/*#__PURE__*/e(\"p\",{children:\"Any modern enterprise or manufacturing facility cannot do without ML solutions. Now the implementation of fast automated technologies is extremely important, but in the future the existence of businesses without such solutions will simply be impossible, as it just will not be able to catch up with its competitors.\"}),/*#__PURE__*/e(\"h2\",{children:\"How does ML operate?\"}),/*#__PURE__*/e(\"p\",{children:\"Machines can perform intellectual actions similar to humans, only much faster and more such actions in a given time interval. It is necessary to mention that AI based on machine learning cannot be considered completely independent of humans. Certainly, after some manipulation on the part of machine learning practitioners, the ML-based models can work independently. However, it is still impossible to completely exclude human involvement in creating such ML systems. Firstly, to make such a system work at all, a human has to find a raw data set and turn it into annotated data.\"}),/*#__PURE__*/e(\"h2\",{children:\"Labeled data\"}),/*#__PURE__*/e(\"p\",{children:\"It is simply not achievable to train a model without high-quality labeled data, and an untrained ML model is pointless. Data labeling or data annotation is the process of assigning labels (or attributes) to items in a dataset. These labels indicate characteristics that help train the ML model. A labeled dataset is called a training set.\"}),/*#__PURE__*/e(\"p\",{children:\"Data labeling is a crucial and indispensable step in the ML process. Ensuring that the data is correctly labeled is a key factor that affects the quality of the model performance and its learning. The amount of raw data to be used in labeling projects depends on the specific task at hand and the type of model you are using. Broadly speaking, the more data you have, the finer the results you are likely to get.\"}),/*#__PURE__*/e(\"p\",{children:\"If annotators lack large datasets for data labeling, they resort to employing algorithms that can work with less data. For example, deep learning may be effective in such cases. Experts may also try to utilize ML models to generate new data, that can then be applied in manual labeling.\"}),/*#__PURE__*/e(\"h2\",{children:\"Ways to label your data\"}),/*#__PURE__*/e(\"p\",{children:\"The data labeling process in machine learning projects may be done in various ways. For instance, some large companies have already accumulated the information they need for labeling. This could be audio recordings of phone calls or frequently used documents, such as invoices or reports. If future machine learning projects do not involve the utilization of any highly specific information, it can be obtained from publicly available databases.\"}),/*#__PURE__*/e(\"p\",{children:\"The users of crowdsourcing platforms may also collect information specifically for your project. For example, field data for marketing research in retail chains, geopositioning services, street ads, local public services, or any kind of software that requires field data. The users of the platform may take photos in their city, shoot video, and record audio in real-time at your request, as well as answer your questions.\"}),/*#__PURE__*/e(\"p\",{children:\"Once the data has been found, it may be annotated by specially hired staff of the same company. Such a method is only suitable if there is enough time, human and financial resources available, and if the company possesses its own infrastructure.\"}),/*#__PURE__*/t(\"p\",{children:[\"Experts often employ the services of annotators on crowdsourcing platforms. The customer has to register as a requester there and assign various labeling tasks to available annotators. This strategy is quite affordable and relatively fast, but it does not always ensure the high quality of the annotated data. To ensure that your tasks are thoroughly fulfilled you have to choose platforms with \",/*#__PURE__*/e(o,{href:\"https://toloka.ai/knowledgebase/quality-control/\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(n.a,{children:\"built-in quality control measures\"})}),\", like Toloka.\"]}),/*#__PURE__*/e(\"p\",{children:\"It is also possible to outsource and have the data labeled by professional companies that have a specific focus on such tasks. There are also freelance specialists, who are ready to implement data labeling projects on specialized platforms. This is probably the cheapest, although a reliable quality check has to be devised to ensure that the work is done properly.\"}),/*#__PURE__*/e(\"h2\",{children:\"Types of data annotation\"}),/*#__PURE__*/e(\"p\",{children:\"Annotated data may be acquired manually or automatically. With manual labeling, data is reviewed and assigned appropriate labels by a human annotator, whereas with auto labeling, data is analyzed based on specific algorithms and automatically assigned labels with the help of special software.\"}),/*#__PURE__*/e(\"h3\",{children:\"Manual labeling process\"}),/*#__PURE__*/e(\"p\",{children:\"Manual annotation or manual labeling of data entails the process of labeling data elements based on certain criteria by people. As an example, when text is labeled, keywords or phrases that are important to the task may be highlighted. In image annotation, the objects in a picture may be marked with bounding boxes, or labeled according to their type (e.g., human, animal, car).\"}),/*#__PURE__*/e(\"p\",{children:\"For manual labeling, specialists may use specialized platforms. Manual annotation specialists have to be very thorough and precise in their approach to labeling to avoid errors and guarantee the quality of the training data. Annotators are capable of significantly reducing the annotation time due to automated data labeling as it exists only as a result of manual labeling. Overall, manual labeling represents a vital step in ML that delivers the most reliable and precise training data for model training.\"}),/*#__PURE__*/e(\"h3\",{children:\"Automatic data annotation\"}),/*#__PURE__*/e(\"p\",{children:\"In addition to manual labeling, software assistance can also be used. Labels can be automatically detected and added to a training dataset using a technique called active learning. To annotate a dataset automatically and turn it into training data, an annotator has to load the relevant information into an AI tool that already has the ability to qualitatively assign data labels.\"}),/*#__PURE__*/e(\"p\",{children:\"But just as was mentioned earlier about how ML models cannot exist without human intervention, auto labeling also implies it. In active learning, the ML algorithm cooperates with some source of information capable of annotating the input data. This information source is commonly a person or even a group of people.\"}),/*#__PURE__*/e(\"p\",{children:\"Essentially, human labelers create an AI model for automated labeling, tagging raw, unlabeled data, so it could automatically label new information. They then determine whether the model has performed the labeling correctly. If errors occur, human labelers fix them and re-train the model. Certainly, the auto-labeling algorithm simplifies the data labeling process, but still, auto annotation is possible due to manual labeling.\"}),/*#__PURE__*/e(\"p\",{children:\"Why is the data labeling process faster with this approach? The conventional assumption is that all data is equal, but in most datasets there is noisy data, no class balance, and a great deal of excessive data. In the automated data labeling approach, time is not wasted on data annotation that does not improve the performance of your model.\"}),/*#__PURE__*/e(\"p\",{children:\"To create an automated data annotation tool, the system doesn't require tons of randomly labeled samples to realize the distinction between junk mail and regular mail. You may provide it with a few instances of what you require it to learn, it will quickly grasp it, and then ask follow-up questions if it is in doubt. Active learning, employed in automated data labeling, utilizes a learning model to search for and label only the most valuable data.\"}),/*#__PURE__*/e(\"h3\",{children:\"Automated data labeling\"}),/*#__PURE__*/e(\"p\",{children:\"To create automated labeling applications:\"}),/*#__PURE__*/t(\"ol\",{style:{\"--framer-font-size\":\"18px\",\"--framer-text-alignment\":\"start\",\"--framer-text-color\":\"rgb(30, 33, 38)\",\"--framer-text-stroke-width\":\"0px\",\"--framer-text-transform\":\"none\",\"--list-style-type\":\"unset\"},children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"The data science team first feeds a small number of labeled examples to a model;\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"The model learns from this dataset;\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Then when the model encounters edge cases or is unsure of making the right decision, a person or a team of people helps it to figure out all these confusing cases;\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"A human creates the labels for these examples;\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"The model is upgraded once again and the process is reiterated until a sufficiently good accuracy is achieved.\"})})]}),/*#__PURE__*/e(\"p\",{children:\"Once again, it is impossible to claim that this approach represents fully automated labeling, but once such a system achieves high accuracy, it can operate autonomously. However, this does not mean that in the future it will not appeal to humans, as edge cases may arise in completely unforeseen situations since our world is incredibly versatile and often the outcome of events cannot be predicted.\"}),/*#__PURE__*/e(\"h2\",{children:\"Stages of data labeling\"}),/*#__PURE__*/e(\"p\",{children:\"The labeling process begins with obtaining data and completes when the model trained on such data is applied. There are the following stages:\"}),/*#__PURE__*/t(\"ol\",{style:{\"--framer-font-size\":\"18px\",\"--framer-text-alignment\":\"start\",\"--framer-text-color\":\"rgb(30, 33, 38)\",\"--framer-text-stroke-width\":\"0px\",\"--framer-text-transform\":\"none\",\"--list-style-type\":\"unset\"},children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"Data retrieval.\"}),\" The process starts with data gathering from all kinds of sources (e.g., databases, documents, audio or video files, websites, etc.).\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"Data pre-processing and refinement.\"}),\" It consists of all kinds of activities related to preparing data for labeling:\"]})})]}),/*#__PURE__*/t(\"ul\",{style:{\"--framer-font-size\":\"18px\",\"--framer-text-alignment\":\"start\",\"--framer-text-color\":\"rgb(30, 33, 38)\",\"--framer-text-stroke-width\":\"0px\",\"--framer-text-transform\":\"none\"},children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Verifying the data for mistakes and inconsistencies;\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Processing the data to eliminate irrelevant elements, such as spaces, punctuation, etc;\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Deleting duplicates;\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Scaling of the data;\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Formatting, etc.\"})})]}),/*#__PURE__*/t(\"ol\",{start:\"3\",style:{\"--framer-font-size\":\"18px\",\"--framer-text-alignment\":\"start\",\"--framer-text-color\":\"rgb(30, 33, 38)\",\"--framer-text-stroke-width\":\"0px\",\"--framer-text-transform\":\"none\",\"--list-style-type\":\"unset\"},children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"Data labeling.\"}),\" The process of identifying features in unlabeled data via a data labeling tool;\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"QA process.\"}),\" At this stage quality control of the training data is carried out by:\"]})})]}),/*#__PURE__*/t(\"ul\",{style:{\"--framer-font-size\":\"18px\",\"--framer-text-alignment\":\"start\",\"--framer-text-color\":\"rgb(30, 33, 38)\",\"--framer-text-stroke-width\":\"0px\",\"--framer-text-transform\":\"none\"},children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Verifying the appropriateness of identified labels;\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Evaluating the accuracy and completeness of the data labeling, etc.\"})})]}),/*#__PURE__*/e(\"ol\",{start:\"5\",style:{\"--framer-font-size\":\"18px\",\"--framer-text-alignment\":\"start\",\"--framer-text-color\":\"rgb(30, 33, 38)\",\"--framer-text-stroke-width\":\"0px\",\"--framer-text-transform\":\"none\",\"--list-style-type\":\"unset\"},children:/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"Model training.\"}),\" This stage consists of:\"]})})}),/*#__PURE__*/t(\"ul\",{style:{\"--framer-font-size\":\"18px\",\"--framer-text-alignment\":\"start\",\"--framer-text-color\":\"rgb(30, 33, 38)\",\"--framer-text-stroke-width\":\"0px\",\"--framer-text-transform\":\"none\"},children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Training the model on the labeled data;\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Testing the model on new data.\"})})]}),/*#__PURE__*/e(\"ol\",{start:\"6\",style:{\"--framer-font-size\":\"18px\",\"--framer-text-alignment\":\"start\",\"--framer-text-color\":\"rgb(30, 33, 38)\",\"--framer-text-stroke-width\":\"0px\",\"--framer-text-transform\":\"none\",\"--list-style-type\":\"unset\"},children:/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"Applying the model\"}),\" to make predictions or decisions based on the training data.\"]})})}),/*#__PURE__*/e(\"h2\",{children:\"How to speed up the labeling process\"}),/*#__PURE__*/e(\"h3\",{children:\"Automate data labeling\"}),/*#__PURE__*/e(\"p\",{children:\"The first way to speed up manual labeling is to automate data labeling. To do this, you may apply auto-labeling tools that have already been trained by other specialists, or you may apply active learning to create your own annotations. After creating such a model, you can auto annotate all the rest of the data on your custom-designed model.\"}),/*#__PURE__*/e(\"h3\",{children:\"Choose an approach to labeling that suits your annotation projects best\"}),/*#__PURE__*/e(\"p\",{children:\"Various ways to perform data labeling are available, as described earlier. The choice of approach is determined by the complexity of the task, the amount of data to be labeled, the size of the labeling team, and, understandably, by the financial resources and time available. Each type has its limitations and advantages, which should be determined for each data labeling project separately. The entire course of the project depends on this decision, which is made at the outset of the project. The right approach can significantly reduce annotation time.\"}),/*#__PURE__*/e(\"h3\",{children:\"Pick a crowdsourcing platform with high-quality tools\"}),/*#__PURE__*/t(\"p\",{children:[\"In case you have found that manual labeling works best for you, Crowdsourcing with reliable \",/*#__PURE__*/e(o,{href:\"https://toloka.ai/knowledgebase/quality-control/\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(n.a,{children:\"QA tools\"})}),\" is one of the best choices for such projects at the moment, particularly if there is a limited budget. For instance, in addition to providing great QA, Toloka offers a huge selection of manual annotation tools, which are essential for creating an ML product.\"]}),/*#__PURE__*/e(\"p\",{children:\"Tasks such as cross-referencing identical products with different names to increase the product match coverage, matching items in an online store with related goods to increase the accuracy of the recommendation system, or testing a new brand design is handled better by human annotators than by a machine. Toloka allows you to set up crowd management tools for labeling while ensuring reliable QA tools targeted specifically to your project.\"}),/*#__PURE__*/e(\"h2\",{children:\"Business applications of ML\"}),/*#__PURE__*/e(\"h3\",{children:\"E-commerce\"}),/*#__PURE__*/t(\"p\",{children:[\"Artificial intelligence and machine learning are now gradually becoming more widespread in almost every industry. They have especially gained significance in the sphere of \",/*#__PURE__*/e(o,{href:\"https://toloka.ai/ecommerce/\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(n.a,{children:\"e-commerce\"})}),\", which requires a systematic focus on finding new ways to inspire consumers to buy and facilitate their interaction with the platform. Intelligent chatbots and assistants with embedded ML algorithms help online retail businesses automate communication with users, allowing human operators to avoid some of their routine duties.\"]}),/*#__PURE__*/e(\"p\",{children:\"ML technology also helps recommendation systems boost click-through rates and the average cost of an online purchase. The algorithms of such systems are employed by e-commerce companies to recommend related products when customers pick items online. Data labeling improves the quality of searches by customizing search algorithms to efficiently distribute search results.\"}),/*#__PURE__*/e(\"p\",{children:\"For a machine to comprehend how to address a certain issue, it requires a vast amount of examples to be presented to it. Therefore, creating an effective recommendation system requires a great deal of manually labeled data, which is constantly updated to keep the ML model up-to-date.\"}),/*#__PURE__*/e(\"p\",{children:\"Improving price comparisons with competitors through ML can also have a substantial impact on achieving better coverage of key products and boosting sales. Manual annotation in this case works best to improve the quality of comparisons and address miss matches, since inaccurate data generated by automated solutions or in-house annotators can ultimately mean losing profits.\"}),/*#__PURE__*/e(\"p\",{children:\"Algorithms keep track of every product on the market and change prices on an ongoing basis, determining the most favorable price for both the seller and the consumer. In e-commerce, companies need to consider competitors' pricing, because effective online price management can boost sales and give your company a competitive advantage in the industry.\"}),/*#__PURE__*/e(\"p\",{children:\"Online retailers may employ ML technology to provide their customers with an improved product or information search experience on their sites. As people expect instant and relevant search results, search relevance plays a major role in the platform's usability. ML models make it easier to achieve improved search relevance. Convenient search experience is vital to the success of any online store. However, it often requires huge amounts of manually labeled data. This is also where Toloka can help, as its data labeling platform allows annotators to evaluate and then improve the quality of your search algorithms.\"}),/*#__PURE__*/e(\"h3\",{children:\"\u0421omputer vision (CV)\"}),/*#__PURE__*/e(\"p\",{children:\"Computer vision projects deal with image and video analysis. Custom models constructed based on this analysis help smartphones recognize their owners, highway cameras identify license plates, and even robots avoid obstacles.\"}),/*#__PURE__*/e(\"p\",{children:\"Computer vision is essential for the development of automated vehicles and robots. It enables us to see things that humans might not notice. For example, when analyzing X-rays and other medical scans, or when detecting flaws in manufacturing. CV has made it possible to develop an AI-based device that makes it possible to operate a wheelchair hands-free through the user's facial expression and gesture recognition.\"}),/*#__PURE__*/e(\"p\",{children:\"In Computer vision, depending on the task, experts may employ multiple types of image/video annotation tools. These are some of them:\"}),/*#__PURE__*/t(\"ul\",{style:{\"--framer-font-size\":\"18px\",\"--framer-text-alignment\":\"start\",\"--framer-text-color\":\"rgb(30, 33, 38)\",\"--framer-text-stroke-width\":\"0px\",\"--framer-text-transform\":\"none\"},children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(o,{href:\"https://toloka.ai/image-data/\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(n.a,{children:\"Image classification\"})}),\". The image is assigned one or more labels based on the object it depicts, which class it belongs to;\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Semantic segmentation. The purpose of this type is to associate each pixel of an image with the class of objects to which the pixel belongs;\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Instance segmentation. As opposed to semantic segmentation, instance segmentation assigns a label to each instance of each object presented in the image, instead of assigning a label to a class of objects;\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Polygon annotation. This kind of labeling highlights the exact boundaries of objects in the image, with each pixel receiving its value, according to which the algorithm determines the boundaries of the object, as well as its association with a particular group;\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Bounding box. Bounding boxes suggest that the expert labels the desired object in the image employing a rectangle or a box and assigns a label to it for labeling the image.\"})})]}),/*#__PURE__*/e(\"h3\",{children:\"Natural language processing (NLP)\"}),/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(o,{href:\"https://toloka.ai/ml/natural-language-processing/\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(n.a,{children:\"NLP\"})}),\" refers to the computer science discipline of analyzing and processing natural language of humans. Among its multiple undertakings are speech recognition, machine translation, information retrieval from texts, documentation categorization, and many other things. NLP makes human-computer interaction a lot easier, so it eliminates the need for sophisticated coding languages. In sectors such as customer service chatbots and voice assistants can easily recognize and react effectively to buyer inquiries. Large webmail providers use NLP to review the text in emails that pass through their filters, enabling them to filter out spam before it reaches the user's inbox.\"]}),/*#__PURE__*/e(\"p\",{children:\"NLP is crucial for processing vast amounts of unorganized text data that is impossible to handle manually. Sentiment analysis, for example, is widely employed in market research to gain insight into customer attitudes toward a product, brand, or service.\"}),/*#__PURE__*/e(\"p\",{children:\"The ML algorithm evaluates the person's speech, builds a custom vocabulary, and then decodes the words. The result is provided in audio or text form. Annotators utilize audio and text annotation tools to label the data for NLP. There are multitude of ways to annotate data for NLP purposes. Below are some of them:\"}),/*#__PURE__*/t(\"ul\",{style:{\"--framer-font-size\":\"18px\",\"--framer-text-alignment\":\"start\",\"--framer-text-color\":\"rgb(30, 33, 38)\",\"--framer-text-stroke-width\":\"0px\",\"--framer-text-transform\":\"none\"},children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(o,{href:\"https://toloka.ai/text-data/\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(n.a,{children:\"Text\"})}),\" or \",/*#__PURE__*/e(o,{href:\"https://toloka.ai/audio-data/\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(n.a,{children:\"audio\"})}),\" classification\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Speech recognition\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Emotional content annotation of a text or an audio\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Text or audio categorization\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Extraction and labeling of key phrases or words\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Part-of-speech labeling\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Speaker identification\"})})]}),/*#__PURE__*/e(\"h2\",{children:\"Conclusion\"}),/*#__PURE__*/e(\"p\",{children:\"ML is a rapidly growing industry that is becoming more and more influential in business in particular. It is everywhere we go, perhaps without you even realizing it. Nowadays, these technologies are getting increasingly accessible, including through crowdsourcing platforms like Toloka. Large companies and startups alike can benefit from this solution. In most cases, it does not require special knowledge or a great deal of time. Successful implementation involves understanding your company's internal processes and the desire to enhance them.\"})]});export const richText1=/*#__PURE__*/t(i.Fragment,{children:[/*#__PURE__*/e(\"p\",{children:\"Many smart applications ranging from chatbots and voice assistants to security systems with speech recognition capabilities are the products of the up-and-coming field of artificial intelligence \u2013 machine learning.\"}),/*#__PURE__*/e(\"p\",{children:\"These advanced features of smartphones and computers, in turn, are only possible thanks to audio annotation. We will look at the ways of audio annotation and why it is essential for the modern technologies needed below.\"}),/*#__PURE__*/e(\"h2\",{children:\"Audio labeling\"}),/*#__PURE__*/e(\"p\",{children:\"Audio annotation or speech labeling is the procedure of giving labels and metadata to audio recordings and transforming them into formats that can be comprehended by a machine learning model.\"}),/*#__PURE__*/e(\"p\",{children:\"Audio labeling represents a vital technique for designing robust natural language processing (NLP) models. NLP refers to a machine learning method that enables machines to interpret, manipulate, and comprehend human language. The NLP market represents a great area of interest since the AI model with such capabilities is in high demand by companies.\"}),/*#__PURE__*/e(\"p\",{children:\"Although it is necessary to note that audio annotations are not only useful for classifying audio components coming from people but also different sounds from animals, background noise, the environment, instruments or vehicles, etc.\"}),/*#__PURE__*/e(\"p\",{children:\"Annotating audio, like all other types of annotations, require both manual work and special software for the annotation process. In the case of audio annotations, experts point out labels or tags in a given recording through the use of applications and feed the relevant audio information into ML models to create a trained system.\"}),/*#__PURE__*/e(\"p\",{children:\"To properly annotate audio, experts often have to first transcribe it into text form or break it up into sections. Frequently it also happens that an entire audio file is assigned a label or metadata.\"}),/*#__PURE__*/e(\"h2\",{children:\"Audio data\"}),/*#__PURE__*/e(\"p\",{children:\"Audio annotation is not the only one kind of annotation. Labels are also applied to, for example, video, images, or text. Clearly, the purposes of these types of labeling will be different, as well as the data to be labeled. In the case of audio annotation, the object of annotation is an audio file.\"}),/*#__PURE__*/e(\"p\",{children:\"The audio files dataset for annotation has to be large so that the future system has as much context as possible to solve the tasks it faces. Apart from the amount of information, you should try to create a high-quality annotation with the correct labels.\"}),/*#__PURE__*/e(\"p\",{children:\"Even If you have a small dataset, you should try to develop a workflow that allows you to perfect the dataset. For quality ML models, the quality of data collection is as important as its quantity.\"}),/*#__PURE__*/e(\"h2\",{children:\"Audio annotation types\"}),/*#__PURE__*/e(\"p\",{children:\"Without audio annotation, many tasks would not be possible. There are different types of annotation for each specific purpose and they are as follows:\"}),/*#__PURE__*/e(\"h3\",{children:\"Audio classification\"}),/*#__PURE__*/e(\"p\",{children:\"The annotators classify each audio recording into pre-specified classes to perform classification tasks. Such categories may include connotation, number or type of speakers, their language or dialect, background noise, intentions, or semantics-related information.\"}),/*#__PURE__*/e(\"p\",{children:\"Music classification by genre or instrument also assumes sound annotation. Recommendations of tracks based on what you have listened to, as well as the organization of music libraries, are possible due to it.\"}),/*#__PURE__*/e(\"h3\",{children:\"Audio transcription\"}),/*#__PURE__*/e(\"p\",{children:\"Annotators convert the audio file into text, which is then annotated. Audio files may be of varied quality and contain interfering factors, such as background noise or features of pronunciation, all of which have labels assigned to them. Transcription converts sound into text which is critical for training ML models for making sense of human speech.\"}),/*#__PURE__*/e(\"h3\",{children:\"Multilingual audio file collection\"}),/*#__PURE__*/e(\"p\",{children:\"To generate datasets of annotated data a crowd records possible user requests to voice assistants. As an alternative to spoken words, it can be various sounds, such as sneezing or humming a tune. This kind of data makes it possible to create smart systems, such as the already mentioned voice assistants, which can make our lives better and easier.\"}),/*#__PURE__*/e(\"h3\",{children:\"Side-by-side audio comparison\"}),/*#__PURE__*/e(\"p\",{children:\"Such comparisons involve annotators listening to two or more audio files to determine which one best fits particular criteria. For instance, annotators may use context to identify the recorded speech that sounds most natural, or whether the voices of several speakers match.\"}),/*#__PURE__*/e(\"h2\",{children:\"How audio annotations are used\"}),/*#__PURE__*/e(\"p\",{children:\"Once the audio annotation is completed and training data is collected, specialists may proceed to the creation of ML models that will have the ability to perform the following functions:\"}),/*#__PURE__*/e(\"h3\",{children:\"Voice assistants\"}),/*#__PURE__*/e(\"p\",{children:\"Voice assistants respond to the voice command of a user. Such systems are also trained based on labels. Virtual assistants can recognize and synthesize speech, report the weather forecast, or make a query in a search engine. These virtual assistants help people who cannot type. For example, elderly or disabled users.\"}),/*#__PURE__*/e(\"h3\",{children:\"Speech emotion recognition\"}),/*#__PURE__*/e(\"p\",{children:\"Emotional content detection of the audio allows to identify the feelings of the speaker: joy, sadness, rage, anger, fear, astonishment, and so on. This allows to automate the process of monitoring the quality of customer service in call centers.\"}),/*#__PURE__*/e(\"h3\",{children:\"Natural utterance collection\"}),/*#__PURE__*/e(\"p\",{children:\"Natural language utterance annotation requires specialists in data annotation to classify minute details of a speech. They create labels that describe the extracted natural language utterance in terms of intonation, dialects, semantics, context, correct punctuation, and intonation.\"}),/*#__PURE__*/e(\"h3\",{children:\"Automatic speech recognition\"}),/*#__PURE__*/e(\"p\",{children:\"Also referred to as Speech-to-Text. In this case, speech-to-text transcription is required when annotating. An entire audio file can be segmented into smaller fragments, each with its own features on the audio recording track. Experts teach the ML model to match these audio features to text, and then learn how to reproduce text from these examples independently.\"}),/*#__PURE__*/e(\"h3\",{children:\"Text-to-speech\"}),/*#__PURE__*/e(\"p\",{children:\"Also known as speech synthesis, this technology works similarly to speech-to-text, but in different order. Specialists annotate audio recordings with textual content and teach the ML model to match text to audio. Then the smart system can reproduce voice from this data without any external help.\"}),/*#__PURE__*/e(\"h3\",{children:\"Speaker diarisation\"}),/*#__PURE__*/e(\"p\",{children:\"It is the process of adding marked areas to audio streams and determining the start and end time stamps of speech allocated to different speakers.\"}),/*#__PURE__*/e(\"h2\",{children:\"What is the best way to do audio annotation?\"}),/*#__PURE__*/e(\"p\",{children:\"There are various ways to carry out audio annotation. The most common ones are:\"}),/*#__PURE__*/t(\"ul\",{style:{\"--framer-font-size\":\"18px\",\"--framer-text-alignment\":\"start\",\"--framer-text-color\":\"rgb(30, 33, 38)\",\"--framer-text-stroke-width\":\"0px\",\"--framer-text-transform\":\"none\"},children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"In-house annotation. Data annotation by an in-house team of experts offers many benefits. They are most likely to ensure high accuracy and quality of the work. The downside of this approach is that it is frequently one of the most high-cost ways, and requires employing a large number of professionals.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Crowdsourcing. Crowdsourcing platforms as opposed to in-house annotation, is a more cost-effective method. They allow a large number of people from different parts of the world to join in and perform the task of annotation.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Outsourcing. Outsourcing may consist in hiring freelancers or a specialized company that offers experts to fulfill the task of annotating.\"})})]}),/*#__PURE__*/e(\"h2\",{children:\"Conclusion\"}),/*#__PURE__*/e(\"p\",{children:\"Data annotation is an integral part of any machine learning system. Professionals create sound recognition models that allow developing chatbots, machine transcription, translation software, language learning, pronunciation assessment software, and speech recognition systems.\"}),/*#__PURE__*/e(\"p\",{children:\"For the resulting machine learning model to meet all QA requirements, there has to be plenty of qualitative data with appropriate labels assigned to it. The critical factor for collecting such a dataset lies in the fact that the responsible managers have to choose the right methods and approaches for annotation, at the beginning of the project.\"})]});export const richText2=/*#__PURE__*/t(i.Fragment,{children:[/*#__PURE__*/e(\"h2\",{children:\"Introduction\"}),/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(o,{href:\"https://toloka.ai/large-language-models/\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(n.a,{children:\"Large language models\"})}),\" are making waves with each new model larger than the one before, boasting an impressive performance across a variety of tasks. Moreover, these large language models have infinite potential, but with that also come some considerable challenges.\"]}),/*#__PURE__*/e(\"p\",{children:\"If you want to learn more about the ins and outs of training large language models as well as next generation innovations, you\u2019ve come to the right place. We take an in-depth look at some recent examples and case studies in addition to various pros/cons and practical applications to uncover the details behind this trailblazing technology. Keep reading to learn more.\"}),/*#__PURE__*/e(\"h2\",{children:\"What is a large language model?\"}),/*#__PURE__*/e(\"p\",{children:\"A large language model (LLM) is a type of machine learning model \u2014 or more specifically, a deep learning model \u2014 that is able to comprehend and generate human language via deep neural networks. In short, deep neural networks are defined as a class of ML algorithms that aim to imitate how the human brain processes information. While there\u2019s no set-in-stone definition of a large language model, generally speaking, it refers to a language model comprising a large number of parameters (for example, GPT has 100+ billion). Large language models are able to generate text akin to human writing and are becoming an increasingly critical component of the internet\u2019s infrastructure. LLMs have many various uses including summarizing different texts, building more effective digital search tools, and serving as chatbots.\"}),/*#__PURE__*/e(\"p\",{children:\"However, we know that the internet can be a toxic place. Given that LLMs are trained on huge amounts of online data, it doesn\u2019t take much for them to start producing potentially dangerous responses. That\u2019s why many AI developers are working to make their models safer; reinforcement learning using human feedback is a key component of this. There is still a lot of work that needs to be done before many of these conversational AI models can be dispersed into everyday life.\"}),/*#__PURE__*/e(\"h2\",{children:\"Advantages of LLMs\"}),/*#__PURE__*/e(\"p\",{children:\"LLMs play an influential role in driving rapid innovation across multiple domains. Since they have more parameters and are able to capture nuances, LLMs provide a more accurate picture of the data they\u2019re working with. This is key for natural language data since the meaning behind words is so dependent on context. Additionally, LLMs can be trained on considerably larger datasets \u2014 and the more data a language model is trained on, the better it will be at adapting to new data. This is principally true for language models given the limited data available for training.\"}),/*#__PURE__*/e(\"p\",{children:\"Here\u2019s a breakdown of the main benefits:\"}),/*#__PURE__*/e(\"ul\",{style:{\"--framer-font-size\":\"18px\",\"--framer-text-alignment\":\"start\",\"--framer-text-color\":\"rgb(30, 33, 38)\",\"--framer-text-stroke-width\":\"0px\",\"--framer-text-transform\":\"none\"},children:/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"strong\",{children:\"Tones and subtleties of language\"})})})}),/*#__PURE__*/e(\"p\",{children:\"LLMs can capture the intricacies of language, which allows them to better understand words in the context of a sentence or a piece of writing.\"}),/*#__PURE__*/e(\"ul\",{style:{\"--framer-font-size\":\"18px\",\"--framer-text-alignment\":\"start\",\"--framer-text-color\":\"rgb(30, 33, 38)\",\"--framer-text-stroke-width\":\"0px\",\"--framer-text-transform\":\"none\"},children:/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"strong\",{children:\"Wide ranging uses\"})})})}),/*#__PURE__*/e(\"p\",{children:\"LLMs are good for a wide variety of general uses, but they can also be fine-tuned to deal with narrower domains and tasks such as translating languages, answering questions, or developing chatbots.\"}),/*#__PURE__*/e(\"ul\",{style:{\"--framer-font-size\":\"18px\",\"--framer-text-alignment\":\"start\",\"--framer-text-color\":\"rgb(30, 33, 38)\",\"--framer-text-stroke-width\":\"0px\",\"--framer-text-transform\":\"none\"},children:/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"strong\",{children:\"Greater comprehension\"})})})}),/*#__PURE__*/e(\"p\",{children:\"The greater comprehension that comes with LLMs holds infinite opportunities that can lead to more accurate translations, improved text classification, and more natural-sounding text generation across various scenarios and programming languages.\"}),/*#__PURE__*/e(\"ul\",{style:{\"--framer-font-size\":\"18px\",\"--framer-text-alignment\":\"start\",\"--framer-text-color\":\"rgb(30, 33, 38)\",\"--framer-text-stroke-width\":\"0px\",\"--framer-text-transform\":\"none\"},children:/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"strong\",{children:\"Faster training time and reduced training data\"})})})}),/*#__PURE__*/e(\"p\",{children:\"Along with greater precision, LLMs also have the potential to optimize training time and decrease the amount of data required for training a large language model \u2014 the more parameters a model has, the more information it can learn from a given dataset.\"}),/*#__PURE__*/e(\"p\",{children:\"As an example, a language model with 1 billion parameters can learn from a dataset that is 10 times smaller than a model with 100 million parameters.\"}),/*#__PURE__*/e(\"h2\",{children:\"Drawbacks of LLMs\"}),/*#__PURE__*/e(\"p\",{children:\"While LLMs are on the cutting edge of innovation with far-reaching, real-world applications, they still have some significant drawbacks: namely, they can be unreliable, authoritative, and overconfident in presenting erroneous information with potentially harmful outcomes. This can be especially dangerous when it comes to a person\u2019s health or finances.\"}),/*#__PURE__*/e(\"p\",{children:\"Here\u2019s an overview of some of the drawbacks:\"}),/*#__PURE__*/e(\"ul\",{style:{\"--framer-font-size\":\"18px\",\"--framer-text-alignment\":\"start\",\"--framer-text-color\":\"rgb(30, 33, 38)\",\"--framer-text-stroke-width\":\"0px\",\"--framer-text-transform\":\"none\"},children:/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"strong\",{children:\"Bias and stereotypes\"})})})}),/*#__PURE__*/e(\"p\",{children:\"Since LLMs are trained on various sources, they can unintentionally replicate the bias in those sources. They also can\u2019t update their knowledge without being retrained.\"}),/*#__PURE__*/e(\"ul\",{style:{\"--framer-font-size\":\"18px\",\"--framer-text-alignment\":\"start\",\"--framer-text-color\":\"rgb(30, 33, 38)\",\"--framer-text-stroke-width\":\"0px\",\"--framer-text-transform\":\"none\"},children:/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"strong\",{children:\"Misinterpretation and false information\"})})})}),/*#__PURE__*/e(\"p\",{children:\"Even though LLMs can generate human-like text, they don\u2019t always understand the given context and can generate inaccurate or false information as a result.\"}),/*#__PURE__*/e(\"ul\",{style:{\"--framer-font-size\":\"18px\",\"--framer-text-alignment\":\"start\",\"--framer-text-color\":\"rgb(30, 33, 38)\",\"--framer-text-stroke-width\":\"0px\",\"--framer-text-transform\":\"none\"},children:/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"strong\",{children:\"Resource consumption and cost\"})})})}),/*#__PURE__*/e(\"p\",{children:\"Model training for LLMs requires significant computational resources, which equates to steep costs and energy consumption.\"}),/*#__PURE__*/e(\"p\",{children:\"While the largest autoregressive transformers use different evaluation protocols and new techniques such as zero-, few shot learning, one-shot, and fine-tuning with notable results, it comes at a cost of gigantic compute and energy requirements. However, with multiple advancements on the horizon, these models will undoubtedly be taken to the next level in the near future.\"}),/*#__PURE__*/e(\"h2\",{children:\"Training data for LLMs\"}),/*#__PURE__*/e(\"p\",{children:\"The majority of LLMs are pre-trained so that when they are provided with a training dataset comprising a large corpus of text tokens, the language model is able to predict the tokens in the test dataset. To build a large model from a pretrained model, there are two pretraining approaches:\"}),/*#__PURE__*/e(\"ul\",{style:{\"--framer-font-size\":\"18px\",\"--framer-text-alignment\":\"start\",\"--framer-text-color\":\"rgb(30, 33, 38)\",\"--framer-text-stroke-width\":\"0px\",\"--framer-text-transform\":\"none\"},children:/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"strong\",{children:\"Autoregressive (GPT-style; predicting the next word)\"})})})}),/*#__PURE__*/e(\"p\",{children:'Given a segment of text like \"I like to eat\", the model predicts the next tokens, such as \"vanilla pudding\".'}),/*#__PURE__*/e(\"ul\",{style:{\"--framer-font-size\":\"18px\",\"--framer-text-alignment\":\"start\",\"--framer-text-color\":\"rgb(30, 33, 38)\",\"--framer-text-stroke-width\":\"0px\",\"--framer-text-transform\":\"none\"},children:/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"strong\",{children:\"Masked (BERT-style)\"})})})}),/*#__PURE__*/e(\"p\",{children:'Given a segment of text like \"I like to [MASK] [MASK] pudding\", the model predicts the masked tokens, such as \"eat vanilla\".'}),/*#__PURE__*/e(\"p\",{children:\"LLMs can also be trained on auxiliary tasks that test their comprehension of the data distribution. For example, in Next Sentence Prediction (NSP) where pairs of sentences are displayed, and the model has to determine whether the sentences appear sequentially in the training body.\"}),/*#__PURE__*/e(\"p\",{children:\"Furthermore, LLMs require an enormous amount of training data in addition to robust, flexible, and highly optimized data pipelines that can easily include new sources of data. Through self-supervised learning, LLMs are pre-trained on huge amounts of unlabeled text data extracted from sources such as books, articles, and websites.\"}),/*#__PURE__*/e(\"p\",{children:\"Given the vast amounts of text data on which they\u2019re trained, LLMs have the capacity to learn complex patterns and structures found in natural language. However, what they actually do during the training process is relatively simple: they determine the next word (or token) in a sequence, referred to as an \u201Cautoregressive\u201D language model, which uses past outputs as input for future predictions while progressively generating output.\"}),/*#__PURE__*/e(\"h2\",{children:\"Training compute-optimal large language models\"}),/*#__PURE__*/e(\"p\",{children:\"The AI lab DeepMind, owned by Alphabet, carried out research with the goal of determining the optimal model, parameter size, and number of tokens needed to train a transformer language model within compute budget constraints. The team trained over 400 language models extending from 70 million to 16 billion parameters on 5-500 billion tokens. The team discovered that for compute-optimal training, the model size and number of tokens must be evenly measured.\"}),/*#__PURE__*/e(\"h3\",{children:\"Chinchilla case study\"}),/*#__PURE__*/e(\"p\",{children:\"The team presented three different approaches to determine the relationship between model size and number of training tokens; all three indicated that increasing both the model size and the number of training tokens in the neural network to roughly equal proportions would result in better performance.\"}),/*#__PURE__*/e(\"p\",{children:\"The team tested their hypothesis that model size and number of training tokens should be scaled equally by training a model called Chinchilla, which comprised the same compute budget as Gopher, its larger model equivalent, but with less parameters and four times the data. They discovered that smaller, more optimally trained models have a better performance: their compute-optimal 70 billion model Chinchilla trained on 1.4 trillion tokens outpaced Gopher (a 280 billion parameter model), while reducing inference costs significantly. Not only does Chinchilla outperform Gopher, it also exceeds several other prominent models such as GPT-3 and Jurrasic-1 on a range of downstream evaluation tasks. It also uses less computing for model fine-tuning and inference with a 7% improvement in accuracy over Gopher.\"}),/*#__PURE__*/e(\"h3\",{children:\"Sparrow case study\"}),/*#__PURE__*/e(\"p\",{children:\"DeepMind trained its chatbot Sparrow on the lab\u2019s large language model Chinchilla to learn from human feedback and scour the internet for data to support its responses. From its research, DeepMind reasoned that an effective AI-powered chatbot requires human input to tell it how to act and make the model support its statements using information found online.\"}),/*#__PURE__*/e(\"p\",{children:\"The chatbot interacted with humans and answered questions leveraging a live Google search and was then trained via a reinforcement learning algorithm. Following 23 rules, the model was able to provide realistic answers with supporting data sources about 78% of the time. However, participants were able to make the model break the rules about 8% of the time. When it comes to safe interactions between these artificial intelligence models and humans, there\u2019s still a lot of work to be done.\"}),/*#__PURE__*/e(\"h2\",{children:\"NextGen LLMs\"}),/*#__PURE__*/e(\"p\",{children:\"With AI moving at the speed of light, you may be wondering what the next generation of LLMs will look like. Startups and research groups alike are already on it. Let\u2019s take a look at three emerging areas of innovation that will likely define the next wave of LLMs:\"}),/*#__PURE__*/e(\"h3\",{children:/*#__PURE__*/e(\"strong\",{children:\"1. Models that can self-improve by producing their own training data\"})}),/*#__PURE__*/e(\"p\",{children:\"A new area of AI aims to enable LLMs to mimic the innately human ability to generate novel ideas and insights through inward reflection and deliberation. Imagine if models could generate their own ideas and original written content based on all the information they\u2019ve previously acquired? They could then use that newfound knowledge to improve themselves even further. There are already models out there that can generate their own natural language processing instructions and fine-tune themselves accordingly.\"}),/*#__PURE__*/e(\"p\",{children:\"Given that we may at some point run out of text training data, this area of innovation is of vital importance. Estimates of the world\u2019s cumulative text data are somewhere between 3.2 trillion or 4.6 trillion and 17.2 trillion tokens, which encompasses all the books, academic papers, articles, shared code, and more. As mentioned above, it took 1.4 trillion tokens to train DeepMind\u2019s Chinchilla, one of today\u2019s foremost LLMs.\"}),/*#__PURE__*/e(\"h3\",{children:/*#__PURE__*/e(\"strong\",{children:\"2. Models that can assess their own accuracy\"})}),/*#__PURE__*/e(\"p\",{children:\"Today\u2019s LLMs are known for generating inaccurate, deceptive, or just plain wrong information, no matter how assertively they present it \u2014 often termed \u201Challucinations\u201D. However, recent advancements may soon help to overcome this challenge. Given that LLMs can obtain information from external sources and provided references and citations, they\u2019re already on the path to becoming ever more accurate. As recently as last year, OpenAI published WebGPT which is able to navigate online search engines just like humans can while providing credible information and sources. Likewise, DeepMind\u2019s Sparrow can produce the same results.\"}),/*#__PURE__*/e(\"h3\",{children:/*#__PURE__*/e(\"strong\",{children:\"3. Enormous sparse expert models\"})}),/*#__PURE__*/e(\"p\",{children:\"While differences in size, hidden layers, training data, and more may exist between existing models, today\u2019s key LLMs all basically have the same architecture: they\u2019re pre-trained, self-supervised, autoregressive, densely activated, transformer-based models. However, progress is being made toward the creation of an alternative architecture referred to as \u201Csparse expert models\u201D \u2014 the opposite of \u201Cdense\u201D.\"}),/*#__PURE__*/e(\"p\",{children:\"The idea behind sparse expert modes is that they don\u2019t activate all their parameters for a given input, only those that are most relevant. The advantage to these models compared to their dense counterparts is that they\u2019re simultaneously larger, yet less demanding, computationally speaking, along with having improved runtime. They\u2019re also open to greater interpretability \u2014 understanding the \u201Cwhy\u201D behind a model\u2019s actions.\"}),/*#__PURE__*/e(\"p\",{children:\"Each one of today\u2019s largest LLMs is considered to be sparse, and new models are continuing to grow in size. As an example, Google and Meta have both produced models that have significantly outperformed their predecessor versions on a wide variety of benchmarks, including energy efficiency and interpretability.\"}),/*#__PURE__*/e(\"h2\",{children:\"How Toloka can help\"}),/*#__PURE__*/e(\"p\",{children:\"Toloka makes working with LLMS simple and efficient. Our platform helps AI developers get their apps up and running by automating model training, tuning, deployment, and monitoring. We help developers everywhere fine-tune their pre-trained language models to align with human expectations by:\"}),/*#__PURE__*/e(\"ul\",{style:{\"--framer-font-size\":\"18px\",\"--framer-text-alignment\":\"start\",\"--framer-text-color\":\"rgb(30, 33, 38)\",\"--framer-text-stroke-width\":\"0px\",\"--framer-text-transform\":\"none\"},children:/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"strong\",{children:\"Collecting labeled data\"})})})}),/*#__PURE__*/e(\"p\",{children:\"Via an efficient combination of automated and human labeling in every language.\"}),/*#__PURE__*/e(\"ul\",{style:{\"--framer-font-size\":\"18px\",\"--framer-text-alignment\":\"start\",\"--framer-text-color\":\"rgb(30, 33, 38)\",\"--framer-text-stroke-width\":\"0px\",\"--framer-text-transform\":\"none\"},children:/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"strong\",{children:\"Training and deploying models\"})})})}),/*#__PURE__*/e(\"p\",{children:\"Integrating TensorFlow and PyTorch or auto-tuning and deploying via our ML platform.\"}),/*#__PURE__*/e(\"ul\",{style:{\"--framer-font-size\":\"18px\",\"--framer-text-alignment\":\"start\",\"--framer-text-color\":\"rgb(30, 33, 38)\",\"--framer-text-stroke-width\":\"0px\",\"--framer-text-transform\":\"none\"},children:/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"strong\",{children:\"Receiving human feedback\"})})})}),/*#__PURE__*/e(\"p\",{children:\"Leveraging our crowd to moderate output, assess quality, and monitor human feedback loops.\"}),/*#__PURE__*/e(\"p\",{children:\"Drawing upon the latest machine learning models and the collective efforts of our diverse global crowd, we work across a variety of areas to help you get the results you need. To name a few of these areas: chatbots and AI assistants, content generation, summarization and moderation, code assistance and generation, and finance data analysis.\"}),/*#__PURE__*/t(\"p\",{children:[\"Our services include reinforcement learning with human feedback (RLHF), model pre-training, fine-tuning and output moderation, and human-lead quality checks. \",/*#__PURE__*/e(o,{href:\"https://toloka.ai/large-language-models/\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(n.a,{children:\"Learn more\"})}),\" about what we offer, as well as our latest insights, advice, and solutions.\"]}),/*#__PURE__*/e(\"h2\",{children:\"Key takeaways\"}),/*#__PURE__*/e(\"p\",{children:\"This is undoubtedly the age of the LLM with new advancements and innovations around every corner. With due credit to self-supervised learning, zero-shot, few-shot, and fine-tuning methods, language models are growing in size at a rapid rate. These large models require better and higher-performing hardware, software, and algorithms for training.\"}),/*#__PURE__*/e(\"p\",{children:\"However, there also needs to be a greater emphasis on dataset scaling and high-quality data along with greater accountability for ethical and privacy issues, among other concerns.\"}),/*#__PURE__*/e(\"p\",{children:\"Moreover, large language models offer significant potential for the future of machine learning. As datasets and computing power continue their growth, it\u2019s highly probable that we\u2019ll be seeing even larger and more complex models in the coming years.\"})]});export const richText3=/*#__PURE__*/t(i.Fragment,{children:[/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"strong\",{children:\"Since Toloka\u2019s founding in 2014, the data labeling platform has grown and evolved into a data-centric environment with enterprise solutions for AI and ML development. Toloka's platform runs on a robust, secure infrastructure that supports a combination of ML technologies like adaptive AutoML and best-in-class crowdsourcing technologies for data quality management. But the backbone of our solutions is still the human insight gathered from the crowd. \"})}),/*#__PURE__*/e(\"p\",{children:\"We\u2019ve built one of the largest and most diverse data labeling crowds on the planet, with millions of registered users and hundreds of thousands of people actively earning money each month. But who are these people we call Tolokers, the crowd contributors behind the usernames?\"}),/*#__PURE__*/e(\"p\",{children:\"A recent global survey of Tolokers (10,000 respondents) offers a glimpse into what our global community looks like in 2023. The goals of the survey were to explore the identity of Tolokers, learn what motivates them, and find out how happy they are in Toloka.\"}),/*#__PURE__*/e(\"p\",{children:\"Here\u2019s what we discovered.\"}),/*#__PURE__*/e(\"h2\",{children:\"Tolokers: age and gender\"}),/*#__PURE__*/e(\"p\",{children:\"Age-wise, the Toloka community is mostly young. The majority of Tolokers, 55%, fall within the 20 to 30-year-old age bracket, while 22% are between 30 and 40 years old. Only 1% of the crowd is over 60 years old. The single most common age is 23, representing 7% of all Tolokers.\"}),/*#__PURE__*/e(\"img\",{alt:\"Age of Tolokers\",className:\"framer-image\",height:\"945\",src:\"https://framerusercontent.com/images/H8f6zfSJ1Ly5GjBmGKaCy0ghdhA.jpeg\",srcSet:\"https://framerusercontent.com/images/H8f6zfSJ1Ly5GjBmGKaCy0ghdhA.jpeg?scale-down-to=512 512w,https://framerusercontent.com/images/H8f6zfSJ1Ly5GjBmGKaCy0ghdhA.jpeg?scale-down-to=1024 1024w,https://framerusercontent.com/images/H8f6zfSJ1Ly5GjBmGKaCy0ghdhA.jpeg?scale-down-to=2048 2048w,https://framerusercontent.com/images/H8f6zfSJ1Ly5GjBmGKaCy0ghdhA.jpeg 3600w\",style:{aspectRatio:\"3600 / 1890\"},width:\"1800\"}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"em\",{children:\"Age of Tolokers\"})}),/*#__PURE__*/e(\"p\",{children:\"When it comes to gender, male Tolokers outnumber other genders, with about 62% identifying as male, 36% as female, and 2% as non-binary.\"}),/*#__PURE__*/e(\"img\",{alt:\"Gender of Tolokers\",className:\"framer-image\",height:\"630\",src:\"https://framerusercontent.com/images/ExaLW62RrUIOEYL99G3YOQjhUaA.jpeg\",srcSet:\"https://framerusercontent.com/images/ExaLW62RrUIOEYL99G3YOQjhUaA.jpeg?scale-down-to=512 512w,https://framerusercontent.com/images/ExaLW62RrUIOEYL99G3YOQjhUaA.jpeg?scale-down-to=1024 1024w,https://framerusercontent.com/images/ExaLW62RrUIOEYL99G3YOQjhUaA.jpeg?scale-down-to=2048 2048w,https://framerusercontent.com/images/ExaLW62RrUIOEYL99G3YOQjhUaA.jpeg 2400w\",style:{aspectRatio:\"2400 / 1260\"},width:\"1200\"}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"em\",{children:\"Gender of Tolokers\"})}),/*#__PURE__*/e(\"h2\",{children:\"Ethnicity and nationality\"}),/*#__PURE__*/e(\"p\",{children:\"Three racial identities are equally represented in the global crowd, at about 30% each: Asian, African, and white (from across all continents). Indigenous and Polynesian ethnicities represent smaller fractions of our community, but they are particularly important because of their ability to speak rare and culturally significant languages, some of them on the brink of extinction.\"}),/*#__PURE__*/e(\"p\",{children:\"Tolokers live in more than 100 different countries, spread across every time zone in the world. There are no particular regions where Tolokers are based, but some of the larger concentrations include Pakistan, Kenya, Brazil, Turkey, India, Egypt, the Philippines, and the US.\"}),/*#__PURE__*/e(\"h2\",{children:\"Languages\"}),/*#__PURE__*/e(\"p\",{children:\"When asked to identify their first language, Tolokers list over 40 major languages. About half of Tolokers are native speakers of English, Urdu, Arabic, Russian, or Spanish. But the other half represents dozens of different languages, showcasing the diversity of our crowd: Portuguese, Ukrainian, French, German, Italian, Polish, Latvian, Bulgarian, Czech, Turkish, Hindi, Vietnamese, Japanese, Chinese, Korean, and Indonesian, to name a few. We even have Tolokers who speak uncommon languages like Tatar and Quechua.\"}),/*#__PURE__*/e(\"p\",{children:\"Almost all Tolokers speak at least some English, even if English is not their first language. Nearly 30% of our survey respondents describe their English skills as advanced. Only about 3% of Tolokers do not speak any English.\"}),/*#__PURE__*/e(\"p\",{children:\"In addition to English, 20% of Tolokers have some proficiency in other languages. The most common non-native languages spoken by Tolokers are Spanish, French, Arabic, German, Russian, Portuguese, and Italian.\"}),/*#__PURE__*/t(\"p\",{children:[\"Tolokers bring a variety of linguistic skills to data collection, data labeling, and data evaluation processes (including \",/*#__PURE__*/e(o,{href:\"https://toloka.ai/blog/rlhf-ai/\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(n.a,{children:\"RLHF\"})}),\" for large language models).\"]}),/*#__PURE__*/e(\"h2\",{children:\"Urban demographics and social status\"}),/*#__PURE__*/e(\"p\",{children:\"Tolokers come from bustling cities, quiet towns, and everything in between \u2014 there is no dominating urban or rural lifestyle. Almost a quarter of Tolokers reside in large cities with over one million residents, while roughly 23% live in small cities (10,000 to 100,000 people) and 22% call mid-sized cities their home (100,000 to 500,000 people). A sizable 20% of Tolokers live in smaller towns of fewer than 10,000 residents.\"}),/*#__PURE__*/e(\"p\",{children:\"Regarding social class, Tolokers are evenly divided between middle class and working class.\"}),/*#__PURE__*/e(\"p\",{children:\"A variety of religious faiths are represented in the crowd. Nearly 71% of Tolokers affirm some form of religious belief, while 20% are not religious and 9% are unsure.\"}),/*#__PURE__*/e(\"h2\",{children:\"Family and household\"}),/*#__PURE__*/e(\"p\",{children:\"In terms of marital status, the majority of Tolokers (almost 60%) are single, while almost 30% are married. A portion of the remaining Tolokers (about 7%) live with a partner, while roughly 3% are separated or divorced.\"}),/*#__PURE__*/e(\"img\",{alt:\"Family and household\",className:\"framer-image\",height:\"945\",src:\"https://framerusercontent.com/images/2fvby0rQDETkdriHbuAIgFNt3E.jpeg\",srcSet:\"https://framerusercontent.com/images/2fvby0rQDETkdriHbuAIgFNt3E.jpeg?scale-down-to=512 512w,https://framerusercontent.com/images/2fvby0rQDETkdriHbuAIgFNt3E.jpeg?scale-down-to=1024 1024w,https://framerusercontent.com/images/2fvby0rQDETkdriHbuAIgFNt3E.jpeg?scale-down-to=2048 2048w,https://framerusercontent.com/images/2fvby0rQDETkdriHbuAIgFNt3E.jpeg 3600w\",style:{aspectRatio:\"3600 / 1890\"},width:\"1800\"}),/*#__PURE__*/e(\"p\",{children:\"Tolokers have a range of household sizes, which includes relatives, partners, and housemates within shared accommodation. The survey\u2019s data shows that most (over 50%) live in households of three to five members. Households with one to two individuals make up roughly 20% of the total, while the remaining quarter of the respondents live in households with five or more individuals.\"}),/*#__PURE__*/e(\"p\",{children:\"Parental status also varies among Tolokers. The majority (around 65%) do not have children. Just over 15% have one child, while close to 15% have two to three children. Families with four or more children make up a smaller fraction of the community.\"}),/*#__PURE__*/e(\"h2\",{children:\"Education and occupation\"}),/*#__PURE__*/e(\"p\",{children:\"Tolokers have a wide range of educational backgrounds, but the majority have pursued higher education. Over a third (almost 37%) hold a bachelor\u2019s degree, while close to 18% have not continued their education past high school. About 16.5% have attended some college, while slightly over 10% have a master\u2019s degree. Notably, over three quarters of all Tolokers have some form of additional education, such as courses, professional certificates, or other training.\"}),/*#__PURE__*/e(\"img\",{alt:\"Education completed by Tolokers\",className:\"framer-image\",height:\"945\",src:\"https://framerusercontent.com/images/SO2VsWXLzwkfsectRKnrURES9I.jpeg\",srcSet:\"https://framerusercontent.com/images/SO2VsWXLzwkfsectRKnrURES9I.jpeg?scale-down-to=512 512w,https://framerusercontent.com/images/SO2VsWXLzwkfsectRKnrURES9I.jpeg?scale-down-to=1024 1024w,https://framerusercontent.com/images/SO2VsWXLzwkfsectRKnrURES9I.jpeg?scale-down-to=2048 2048w,https://framerusercontent.com/images/SO2VsWXLzwkfsectRKnrURES9I.jpeg 3600w\",style:{aspectRatio:\"3600 / 1890\"},width:\"1800\"}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"em\",{children:\"Education completed by Tolokers\"})}),/*#__PURE__*/e(\"p\",{children:\"Employment status varies for Tolokers, but most use Toloka to supplement their income from another job. About a quarter of them do freelance work, while 22% have full-time jobs. 21% are currently unemployed and seeking work, while 16% are working part-time and 5% are running or involved in a business.\"}),/*#__PURE__*/e(\"img\",{alt:\"Employment status of Tolokers\",className:\"framer-image\",height:\"630\",src:\"https://framerusercontent.com/images/TWvtHYJ7t0HFwJjBBweQf7X879Q.jpeg\",srcSet:\"https://framerusercontent.com/images/TWvtHYJ7t0HFwJjBBweQf7X879Q.jpeg?scale-down-to=512 512w,https://framerusercontent.com/images/TWvtHYJ7t0HFwJjBBweQf7X879Q.jpeg?scale-down-to=1024 1024w,https://framerusercontent.com/images/TWvtHYJ7t0HFwJjBBweQf7X879Q.jpeg?scale-down-to=2048 2048w,https://framerusercontent.com/images/TWvtHYJ7t0HFwJjBBweQf7X879Q.jpeg 2400w\",style:{aspectRatio:\"2400 / 1260\"},width:\"1200\"}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"em\",{children:\"Employment status of Tolokers\"})}),/*#__PURE__*/e(\"p\",{children:\"Tolokers work in a variety of industries, bringing a range of skills and expertise to the platform. The service sector has the largest representation, followed by finance, education, and manufacturing.\"}),/*#__PURE__*/e(\"img\",{alt:\"Many Tolokers are at the beginning of their careers\",className:\"framer-image\",height:\"945\",src:\"https://framerusercontent.com/images/WGKRj9Xm8waSW1SE8ZscclhY30.jpeg\",srcSet:\"https://framerusercontent.com/images/WGKRj9Xm8waSW1SE8ZscclhY30.jpeg?scale-down-to=512 512w,https://framerusercontent.com/images/WGKRj9Xm8waSW1SE8ZscclhY30.jpeg?scale-down-to=1024 1024w,https://framerusercontent.com/images/WGKRj9Xm8waSW1SE8ZscclhY30.jpeg?scale-down-to=2048 2048w,https://framerusercontent.com/images/WGKRj9Xm8waSW1SE8ZscclhY30.jpeg 3600w\",style:{aspectRatio:\"3600 / 1890\"},width:\"1800\"}),/*#__PURE__*/e(\"p\",{children:\"Many Tolokers are at the beginning of their careers. About 40% have one to three years of work experience. This is followed by those with five to ten years of experience (16%), four to five years (15%), and ten to twenty years (12%).\"}),/*#__PURE__*/e(\"img\",{alt:\"Years of work experience\",className:\"framer-image\",height:\"945\",src:\"https://framerusercontent.com/images/khLYNkakg04ZskiljooeQG2MhcQ.jpeg\",srcSet:\"https://framerusercontent.com/images/khLYNkakg04ZskiljooeQG2MhcQ.jpeg?scale-down-to=512 512w,https://framerusercontent.com/images/khLYNkakg04ZskiljooeQG2MhcQ.jpeg?scale-down-to=1024 1024w,https://framerusercontent.com/images/khLYNkakg04ZskiljooeQG2MhcQ.jpeg?scale-down-to=2048 2048w,https://framerusercontent.com/images/khLYNkakg04ZskiljooeQG2MhcQ.jpeg 3600w\",style:{aspectRatio:\"3600 / 1890\"},width:\"1800\"}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"em\",{children:\"Years of work experience\"})}),/*#__PURE__*/e(\"p\",{children:\"When it comes to household income, nearly 20% of all surveyed Tolokers are the primary breadwinners in their household, while about 12% report that their spouse is the chief earner. Roughly 30% describe their income as somewhat stable, 22% as quite stable, and 6.5% as very stable, with the remaining contributors reporting that their monthly income fluctuates.\"}),/*#__PURE__*/e(\"h2\",{children:\"Toloker happiness\"}),/*#__PURE__*/e(\"p\",{children:\"We asked Tolokers how they feel about being a Toloker, and why they joined. 85% responded that they feel good or very good about Toloka. Three-quarters of the contributors think that Toloka is a great way to supplement their main income, while a quarter of the respondents rely on Toloka as their main source of earnings.\"}),/*#__PURE__*/e(\"img\",{alt:\"How Tolokers feel about Toloka\",className:\"framer-image\",height:\"945\",src:\"https://framerusercontent.com/images/GzRyzvB3GO9RzXgC6Tb7AtWZW4.jpeg\",srcSet:\"https://framerusercontent.com/images/GzRyzvB3GO9RzXgC6Tb7AtWZW4.jpeg?scale-down-to=512 512w,https://framerusercontent.com/images/GzRyzvB3GO9RzXgC6Tb7AtWZW4.jpeg?scale-down-to=1024 1024w,https://framerusercontent.com/images/GzRyzvB3GO9RzXgC6Tb7AtWZW4.jpeg?scale-down-to=2048 2048w,https://framerusercontent.com/images/GzRyzvB3GO9RzXgC6Tb7AtWZW4.jpeg 3600w\",style:{aspectRatio:\"3600 / 1890\"},width:\"1800\"}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"em\",{children:\"How Tolokers feel about Toloka\"})}),/*#__PURE__*/e(\"p\",{children:\"Here are their top reasons to use Toloka for earning extra income:\"}),/*#__PURE__*/t(\"ul\",{style:{\"--framer-font-size\":\"18px\",\"--framer-text-alignment\":\"start\",\"--framer-text-color\":\"rgb(30, 33, 38)\",\"--framer-text-stroke-width\":\"0px\",\"--framer-text-transform\":\"none\"},children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"It\u2019s a great way to develop new skills and pave the way for professional advancement (20%)\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"It\u2019s a chance to make the world a better place (15%)\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"It\u2019s fun (10%)\"})})]}),/*#__PURE__*/e(\"h2\",{children:\"Seeing the real people in the crowd\"}),/*#__PURE__*/e(\"p\",{children:\"The survey findings confirm the diversity of our crowd, which is important to prevent bias in training data. But it also helps us see the real people who are contributing their expertise to make AI better. The most valuable outcome is that Tolokers feel good about what they are doing.\"}),/*#__PURE__*/t(\"p\",{children:[\"The Toloka team believes that a positive environment for annotators is an essential part of \",/*#__PURE__*/e(o,{href:\"https://toloka.ai/responsible-ai/\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(n.a,{children:\"Responsible AI\"})}),\". This includes offering annotators fair wages, opportunities to develop new skills, and the freedom to choose their own tasks, hours, and locations. To learn about how we set up fair wages and access to tasks for the BigCode project, \",/*#__PURE__*/e(o,{href:\"https://toloka.ai/blog/bigcode-project/\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(n.a,{children:\"read our blog post\"})}),\".\"]})]});export const richText4=/*#__PURE__*/t(i.Fragment,{children:[/*#__PURE__*/e(\"p\",{children:\"In our previous article we tried to cover various different approaches to building a text classifier model based on what modern NLP offers us at the moment. \"}),/*#__PURE__*/e(\"p\",{children:\"There are classic old school TF-IDF approaches, pre-trained embedding models, and transformers of various shapes and sizes, which you can choose from \u2013 we wanted to give you practical advice based on our own experience which models are best suited for different situations and use cases you can find in your own line of work.\"}),/*#__PURE__*/e(\"p\",{children:\"Now, to add a bit of extra flavor to that knowledge we want to show you a concrete real life example of the benchmarks for these different approaches and compare them with each other on a particular dataset that we chose for this quick follow up article.\"}),/*#__PURE__*/e(\"h2\",{children:\"Describing the Dataset and the Task\"}),/*#__PURE__*/t(\"p\",{children:[\"To illustrate our ideas we chose \",/*#__PURE__*/e(o,{href:\"https://huggingface.co/datasets/zeroshot/twitter-financial-news-topic\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(n.a,{children:\"The Twitter Financial News dataset\"})}),\" which is an English-language dataset containing an annotated corpus of finance-related tweets. This dataset is commonly used to build finance-related tweets classification models that categories tweets into a number of topics.\"]}),/*#__PURE__*/e(\"p\",{children:\"It is a medium-sized dataset, which is just the right amount of data for us to illustrate how different models will perform on such tasks. It is fairly diverse and its size allows for relatively fast training and evaluation of the produced models.\"}),/*#__PURE__*/e(\"p\",{children:\"What is interesting about its domain is that the financial language is usually very crisp and laconic, it has a lot of terminology, a lot of proper nouns describing brands and terms and related entities, which the models should train to distinguish from the common nouns that have completely different meanings. The intuition about this dataset property is that the fine-tuning of the pre-trained generic-language model into this domain should give a considerable boost in the overall model performance and accuracy.\"}),/*#__PURE__*/e(\"p\",{children:\"The dataset consists of around 21 thousand data items \u2013 it's not too small, but also not too large, so we think we will be able to see all the positive and negative effects of each model and each approach we will be taking on this dataset. Interesting topic to discuss further, but we will return to this in the conclusion once we have the model's results.\"}),/*#__PURE__*/e(\"p\",{children:\"And finally, the dataset has 20 classes so it's not quite a trivial classification task, when you need to distinguish between a handful of sentiment classes and emotional tones \u2013 it's a bit trickier. There is some degree of imbalance in this dataset \u2013 60x+ difference between the most and least frequent classes support, which can usually cause some approaches to underperform. Okay, now let's see how different models will perform in our benchmarking test.\"}),/*#__PURE__*/e(\"h2\",{children:\"Describing the Approach\"}),/*#__PURE__*/e(\"p\",{children:\"Based on our previous article, we chose FastText, BERT, RoBERTa (with an idea of a second-stage tuning) and GPT-3 to assess their performance and efficiency on the described dataset. The dataset was splitted into train and test sets (with 16.5K and 4.5K items respectively), models were trained on the train set, and their performance and efficiency (in terms of inference time) were measured on the test set.\"}),/*#__PURE__*/t(\"p\",{children:[\"To train a FastText model we used the \",/*#__PURE__*/e(o,{href:\"https://fasttext.cc/\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(n.a,{children:\"fastText libraryt\"})}),\" with the corresponding command line tool. After dataset preparation (like inserting labels into texts with proper prefix) we run the \",/*#__PURE__*/e(\"em\",{children:\"fasttext supervised\"}),\" command to train a classifier. It took a couple of minutes to produce a model, using a CPU-only machine. The next command, \",/*#__PURE__*/e(\"em\",{children:\"fasttext predict\"}),\" allowed us to obtain predictions for the test set and evaluate models' performance.\"]}),/*#__PURE__*/t(\"p\",{children:[\"As of Transformers, we chose three slightly different models to compare \u2013 BERT (more formally, \",/*#__PURE__*/e(\"em\",{children:\"best-base-uncased\"}),\"), RoBERTa-large and an adapted version of the latter one, tuned for sentiment prediction on a couple of finance-related datasets (you can refer to this model on the \",/*#__PURE__*/e(o,{href:\"https://huggingface.co/Jean-Baptiste/roberta-large-financial-news-sentiment-en\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(n.a,{children:\"HuggingFace website\"})}),\"). We used the \",/*#__PURE__*/e(\"em\",{children:\"transformers\"}),\" library to perform our experiments, though it requires the user to write some code to actually run training and evaluation procedures. We used a single machine with the A100 GPU to run training, and it took about 20-28 minutes to train each model until early stopping conditions were met. Trained models were stored to a MLFlow registry for further usage.\"]}),/*#__PURE__*/t(\"p\",{children:[\"To train a classifier based on the GPT-3 model, we referred to the \",/*#__PURE__*/e(o,{href:\"https://platform.openai.com/docs/guides/fine-tuning/advanced-usage\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(n.a,{children:\"official documentation\"})}),\" on the OpenAI website, and used the corresponding command line tool to submit data for training, track its progress and make predictions for the test set (more formally, completions, as this term is more suitable in the case of generative models). We have not used any specific hardware as all the actual work is done on the OpenAI\u2019s servers, so we have created this cloud-based model using a regular laptop. We trained two GPT-3 variations \u2013 Ada and Babbage, to see if there will be remarkable difference in performances of these models. It takes about 40-50 minutes to train a classifier with these settings.\"]}),/*#__PURE__*/t(\"p\",{children:[\"As of hyperparameters, we used the following settings. A fastText model was trained for 30 epochs with the learning rate of 0.5. Other parameters were kept to defaults, and you can refer to their values on the \",/*#__PURE__*/e(o,{href:\"https://fasttext.cc/docs/en/options.html\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(n.a,{children:\"official documentation\"})}),\". Worth saying, though, that we used only word unigrams and the size of the word vector were set to 100. Transformer models were trained with learning rate of 5e-6, batch size of 32, and using 32-bit float precision. With the fixed patience of 5 epochs, the algorithm automatically took \u201Cthe best\u201D checkpoints for each model (the best performances on the validation set were obtained on third or forth epochs, so these checkpoints were used later to produce predictions). As for GPT-3, we have not set any specific hyperparameters, and relied on the defaults used by OpenAI. You can see their values on the \",/*#__PURE__*/e(o,{href:\"https://platform.openai.com/docs/guides/fine-tuning/hyperparameters\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(n.a,{children:\"corresponding page\"})}),\".\"]}),/*#__PURE__*/e(\"p\",{children:\"After training, we have evaluated all the models on the test set to obtain classification metrics. We chose macro average F1 and weighted average F1 to compare models, as it allows us to estimate both precision and recall, and also see if dataset imbalance influences the metrics. We also compared models based on their inference speed (in terms of milliseconds per item) with a batch size of 1. For the RoBERTa model we also include ONNX-optimized version, as well as inference using an A100 GPU accelerator. To assess the GPT-3 speed we measured the average response time from our tuned Babbage model (note that OpenAI applies some rate limiters, so the actual speed might be lower or higher, depending on your terms of usage).\"}),/*#__PURE__*/e(\"h2\",{children:\"Describing the Results\"}),/*#__PURE__*/e(\"p\",{children:\"Now let's discuss the results of these training attempts. We arranged all the results of the trainings in a couple of tables to present you the conclusions and observed effect.\"}),/*#__PURE__*/e(\"img\",{alt:\"Describing the Results\",className:\"framer-image\",height:\"424\",src:\"https://framerusercontent.com/images/EwvVbA1eTQLREMbOoHSehGAM8dM.png\",srcSet:\"https://framerusercontent.com/images/EwvVbA1eTQLREMbOoHSehGAM8dM.png?scale-down-to=512 512w,https://framerusercontent.com/images/EwvVbA1eTQLREMbOoHSehGAM8dM.png?scale-down-to=1024 1024w,https://framerusercontent.com/images/EwvVbA1eTQLREMbOoHSehGAM8dM.png?scale-down-to=2048 2048w,https://framerusercontent.com/images/EwvVbA1eTQLREMbOoHSehGAM8dM.png 2098w\",style:{aspectRatio:\"2098 / 848\"},width:\"1049\"}),/*#__PURE__*/e(\"p\",{children:\"What catches our eye first of all is that fasttext indeed demonstrated far worse performance, but with all fairness it took the tiniest amount of computational resources, time and setup to train, so ok, it gives us some low bar benchmark for all the other models evaluation.\"}),/*#__PURE__*/e(\"p\",{children:\"Next, let's discuss the transformers. As expected, RoBERTa shows us better results compared to BERT, and it's very easy to attribute it to the fact that RoBERTa is simply larger than BERT and in general demonstrates better results on such domain specific classification tasks. To be fair, we specifically selected RoBERTa large architecture to draw this comparison, however the base RoBERTa model might have been returning the same level of metrics as BERT, even though they have some differences in the underlying corpus and ways they were trained.\"}),/*#__PURE__*/e(\"p\",{children:\"What is more, we can say that this tangible gap between F1 metrics of BERT and RoBERTa might be caused by the fact that we are dealing with a fairly large number of classes and the dataset has some imbalances that larger models tend to capture better. However, it's our intuition and it probably requires more experiments to get a more solid proof. You can also see that the domain-pretrained RoBERTa gave us a tiny boost in accuracy, but it's rather insignificant and we can't say that seeking this pre-trained domain-tuned model actually was worthwhile for our experiment.\"}),/*#__PURE__*/e(\"p\",{children:\"Now comes GPT-3. We selected Ada and Babbage models to draw a fair comparison with BERT and RoBERTa large since their parameter sizes are nicely and gradually growing (from 165M parameters in BERT to 355M params in RoBERTa large to 2.7B in Ada and 6.7B in Babbage) and can demonstrate to us whether the size of the model really will give more and more performance boost going forward or if it reaches its performance plato somewhere. So, surprisingly, as you can see, both Ada and Babbage give almost the same metrics and they actually lose to RoBERTa even without domain-specific pre-training, but there is a reason for it. We need to remember that GPT-3 API-accessible models are actually giving its users the generative inference interface, so they try to predict a token that would classify each given example in the classification task. Whereas, RoBERTa and other models from transformers will have the last layers of their architecture configured for the classification appropriately (imagine a proper logit or softmax at the end that returns the likelihood of all the classes for any data item that you pass to it). So yes, maybe the huge GPT-3 will be able to tackle the classification to 1 of 20 classes by generating the right token class well enough, but that seems to be a slight overkill for such a task. However, let's not forget that the GPT-3 model is fine-tuned and accessed literally with 3 lines of code unlike RoBERTa, which you should roll out on your architecture with various amounts of sweat here and there.\"}),/*#__PURE__*/e(\"img\",{alt:\"which you should roll out on your architecture with various amounts of sweat here and there\",className:\"framer-image\",height:\"365\",src:\"https://framerusercontent.com/images/WPIxFgeCkrdmv9Pjsl5UHjdU1sE.png\",srcSet:\"https://framerusercontent.com/images/WPIxFgeCkrdmv9Pjsl5UHjdU1sE.png?scale-down-to=512 512w,https://framerusercontent.com/images/WPIxFgeCkrdmv9Pjsl5UHjdU1sE.png?scale-down-to=1024 1024w,https://framerusercontent.com/images/WPIxFgeCkrdmv9Pjsl5UHjdU1sE.png?scale-down-to=2048 2048w,https://framerusercontent.com/images/WPIxFgeCkrdmv9Pjsl5UHjdU1sE.png 2098w\",style:{aspectRatio:\"2098 / 730\"},width:\"1049\"}),/*#__PURE__*/e(\"p\",{children:\"And now finally let's compare these models (and their respective inference setups) in terms of their request execution speed \u2013 we are not training the models just for their performance and accuracy, we also need to take into account how fast they will return us their inference for the new data we will be feeding them. We clocked the online synchronous requests to these models and tried to understand, which one has its own preferred scenario when it will fit the best.\"}),/*#__PURE__*/e(\"p\",{children:\"So here we have an absolute winner in fasttext. However, its accuracy leaves us with no choice, but to take a look at the other models in our list\u2026 Okay, moving on.\"}),/*#__PURE__*/e(\"p\",{children:\"Between various setups of RoBERTa and GPT-3 we can see that GPT-3 despite being the large model among them actually performs relatively fast, especially taking into account the fact that it response time includes two-sided network communication to their API endpoint, so the actual time model computes its inference here is actually fairly small. And that's obviously good, especially keeping in mind that this is a pretty simple solution in terms of setting it up, fine-tuning your model and implementing the model calls in your project. Can be a bit too expensive, especially if you plan on sending fairly a lot of data fairly frequently, but then it becomes a cost-benefit trade off for your tasks.\"}),/*#__PURE__*/e(\"p\",{children:\"In terms of RoBERTa's the obvious winner is the GPU hosted version. It's clear that the GPUs add a huge performance boost to inference computations, but again, first of all hosting your model server on GPU machines might cost a bit too much than you can afford in your project, plus rolling up a GPU based model server might be tricky and challenging, especially if you haven't done it before.\"}),/*#__PURE__*/e(\"p\",{children:\"However, you also need to remember that these benchmarks are still pretty fast \u2013 all of them \u2013 in terms of returning you the results of your model requests. Don't forget to do some arithmetics and break down how you plan on using these models in your project \u2013 will it be for real-time inference or asynchronous batch requests, will you be accessing it over the internet or inside your own local network, is there any other overhead for your business logic operations on top of the model response etc. \u2013 all this can actually add significantly more time to each request rather than the actual model inference calculation itself. So be mindful about the requirements and limitations of your end use case here.\"}),/*#__PURE__*/e(\"h2\",{children:\"Conclusions and Ideas to Follow Up\"}),/*#__PURE__*/e(\"p\",{children:\"So you are still with us, great, let's draw some conclusions together. We tried to demonstrate to you a real life vivid example of the balance between the difficulty of running various models, their resulting accuracy metrics and their response speed when they are ready to be used. Obviously it is not an easy task to figure out what to use when and which approach will be the most optimal shot at a task at hand, but at least we hope to leave you with some sort of a guideline on what to consider and when \u2013 unfortunately, GPT models are not the silver bullet for any task you will face, plus sometimes everyone of us has to count their money and be smart about how they spend it on solving their problems, even in machine learning.\"}),/*#__PURE__*/t(\"p\",{children:[\"On our side at Toloka we are working hard on the platform that would enable users like you to be able to train, deploy and use a transformer like RoBERTa in the same 3 API calls fashion as GPT-3 API \u2013 might come very handy in your next text classification project \u2013 you can check our free beta here: \",/*#__PURE__*/e(o,{href:\"https://tolokamodels.tech/\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(n.a,{children:\"www.tolokamodels.tech\"})})]}),/*#__PURE__*/e(\"p\",{children:\"In the next article we will run a couple more experiments and investigate how you can mitigate the effects of disbalanced datasets and how you can up- or downsample some classes to introduce some balance to the dataset \u2013 we have an intuition that in this case GPT-3 generative approach can actually perform better that RoBERTa large. And also we will discuss how ONNX optimization can be implemented for GPU-based model servers to give you a lightning fast boost in model performance speed. Stay tuned, and there will be more interesting topics from us at the Toloka ML team blog.\"})]});export const richText5=/*#__PURE__*/t(i.Fragment,{children:[/*#__PURE__*/e(\"h2\",{children:\"What is video annotation?\"}),/*#__PURE__*/e(\"p\",{children:\"Video annotation (or video labeling) adds metadata to a video or image to categorize the content, label objects, or organize the data. The annotated video data is used for training computer vision AI models to perform object detection, facial recognition, and motion tracking in AI systems. In other words, machines learn to analyze images and videos to identify objects such as faces, buildings, and cars. For instance, AI systems can use this information to monitor security footage or automatically track road traffic patterns.\"}),/*#__PURE__*/e(\"h2\",{children:\"Annotation workflows\"}),/*#__PURE__*/e(\"p\",{children:\"With the help of sophisticated video annotation tools, experts can manually label video data. However, augmenting the process with AI can provide faster and more accurate results.\"}),/*#__PURE__*/e(\"p\",{children:\"An efficient workflow uses AI to annotate videos and then show the labeled videos to human annotators to correct or adjust the results. In this scenario, non-experts can participate in video annotation, so a larger pool of annotators is available \u2014 reducing costs and speeding up projects significantly while improving accuracy.\"}),/*#__PURE__*/e(\"h2\",{children:\"Applications of video annotation\"}),/*#__PURE__*/e(\"p\",{children:\"Video annotation is a powerful tool to create training data for computer vision models with multiple real-world applications. It can be used to create digital replicas of human behavior and actions, such as hand gestures, walking, or playing an instrument.\"}),/*#__PURE__*/e(\"h3\",{children:\"Games and simulations\"}),/*#__PURE__*/e(\"p\",{children:\"The annotated data can be used to build realistic virtual environments for games or simulations.\"}),/*#__PURE__*/e(\"h3\",{children:\"Medical research\"}),/*#__PURE__*/e(\"p\",{children:\"In the medical field, video annotation is used to track changes in tumors over time and analyze microscopic images of cells.\"}),/*#__PURE__*/e(\"h3\",{children:\"Sports analytics\"}),/*#__PURE__*/e(\"p\",{children:\"Sports analytics use this technology to track player performance and identify game strategies.\"}),/*#__PURE__*/e(\"p\",{children:\"AI-based video analysis systems can detect specific activities in a video, such as sports, dancing, or other activities.\"}),/*#__PURE__*/e(\"h3\",{children:\"Security and surveillance systems\"}),/*#__PURE__*/e(\"p\",{children:\"AI video analysis can detect anomalies in videos, such as suspicious activities or objects that could pose a security risk.\"}),/*#__PURE__*/e(\"h3\",{children:\"Autonomous navigation systems\"}),/*#__PURE__*/e(\"p\",{children:\"Navigation systems for self-driving vehicles use annotated video footage to learn to recognize objects in their environment and respond accordingly.\"}),/*#__PURE__*/e(\"h3\",{children:\"Industrial robotics\"}),/*#__PURE__*/e(\"p\",{children:\"Computer vision models in industrial robotics improve safety and efficiency. Annotated video is used for training AI models to identify target objects on production lines, spot defects, sort waste, and sense their surroundings to plan movements.\"}),/*#__PURE__*/e(\"h3\",{children:\"Retail\"}),/*#__PURE__*/e(\"p\",{children:\"Computer vision solutions can help monitor self-checkouts to prevent theft. AI can also track patterns of customer traffic in stores to help make decisions on product placement.\"}),/*#__PURE__*/e(\"h2\",{children:\"How to annotate video: techniques\"}),/*#__PURE__*/e(\"p\",{children:\"Video annotation involves labeling visual data with text or other labels and is an important part of many computer vision algorithms. Two main techniques are used for annotating videos: single image and continuous frame.\"}),/*#__PURE__*/e(\"h3\",{children:\"Single image method\"}),/*#__PURE__*/e(\"p\",{children:\"Single image annotation involves labeling a single image from a video, such as a face or object in the frame. This technique of video annotation is suitable for tasks that require annotations on individual frames, including facial recognition and other scenarios involving object identification and detection. Allowing the annotator to focus on one frame at a time can be more efficient than annotating the entire video clip all at once.\"}),/*#__PURE__*/e(\"h3\",{children:\"Continuous frame method\"}),/*#__PURE__*/e(\"p\",{children:\"Continuous frame annotation requires labeling multiple frames in sequence so that annotations for each frame are consistent across the duration of the video clip. This rapid annotation technique is more suitable for complex tasks requiring understanding motion or context across multiple frames, such as activity recognition or autonomous navigation. It can also be more accurate than single-image annotation since it allows the annotator to track objects over longer periods.\"}),/*#__PURE__*/e(\"h2\",{children:\"Why is annotating videos better than annotating individual images?\"}),/*#__PURE__*/e(\"p\",{children:\"By using video data, businesses can achieve more accurate results and gain insights that would be impossible to obtain with image annotation alone. For instance, in the surveillance field, analyzing continuous video streams allows automated alerts for suspicious activities that can be quickly identified and acted upon, reducing potential risks and costs.\"}),/*#__PURE__*/e(\"p\",{children:\"In some cases, combining both video annotation techniques can be beneficial to achieve better accuracy \u2014 for example, by using single image annotation to identify objects in each frame and then using continuous frame annotation to assess their trajectories over time. Similarly, if you have a particularly complex task that requires a detailed assessment of each object's movements over time, then combining both techniques may help improve accuracy rates.\"}),/*#__PURE__*/e(\"p\",{children:\"Ultimately, choosing between these two techniques depends on your specific requirements and data type. It's important to consider factors such as complexity and accuracy when making your decision.\"}),/*#__PURE__*/e(\"h2\",{children:\"Video annotation software\"}),/*#__PURE__*/e(\"p\",{children:\"Because video annotation is highly complex, there are many specialized services available that offer sophisticated video annotation tools. Well-designed tools are an important component for efficient and high-quality video annotations.\"}),/*#__PURE__*/e(\"p\",{children:\"Toloka includes data labeling tools for a range of methods of annotating video: bounding box annotation, polygon annotation, key points annotation, semantic segmentation, classification, and flexible customization for bespoke projects.\"}),/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"Bounding boxes\"}),\" are an easy way to select an area on an image. This technique is the least accurate, but it is the easiest way to use a large crowd for fast labeling without extensive training or special skills.\"]}),/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"Polygons\"}),\" capture more complex shapes by connecting dots around an object with straight lines. This technique is used in segmentation methods.\"]}),/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"Key points\"}),\" are generally used for facial recognition by defining points on the eyes, nose, and mouth of people.\"]}),/*#__PURE__*/e(\"h2\",{children:\"How automation improves the video annotation process\"}),/*#__PURE__*/e(\"p\",{children:\"Auto-labeling (or auto-annotation) can greatly improve the video annotation process. Auto-labeling is a form of automated analysis which uses machine learning algorithms to tag, label, or categorize objects and scenes in videos. By using auto-labeling, companies can reduce costs associated with manual video annotation and achieve more accurate results.\"}),/*#__PURE__*/e(\"h3\",{children:\"Faster results\"}),/*#__PURE__*/e(\"p\",{children:\"The main advantage of auto-labeling is that it allows for faster completion times than manual labeling. Since the automation process does not require human interaction, it eliminates the need for annotators to review each frame and tag each object manually. This saves time and resources which would otherwise be spent on manual labor.\"}),/*#__PURE__*/e(\"h3\",{children:\"Better accuracy\"}),/*#__PURE__*/e(\"p\",{children:\"On straightforward annotation tasks, auto-labeling provides better consistency because it removes the problem of human error. Additionally, since AI-based auto-labeling systems can learn from their mistakes, they become more proficient at accurately identifying objects over time.\"}),/*#__PURE__*/e(\"p\",{children:\"Quality assurance checks allow businesses to verify whether the annotated labels match the actual content of the video footage and make sure that any discrepancies between human annotations and machine labels are identified quickly so they can be corrected accordingly. This helps businesses get accurate results from their video annotation projects quickly and cost-effectively.\"}),/*#__PURE__*/e(\"h2\",{children:\"Challenges of implementing AI for video annotation projects\"}),/*#__PURE__*/e(\"p\",{children:\"The use of Artificial Intelligence (AI) for video annotation has its challenges. Despite the ability of AI-based algorithms to label, classify, or categorize objects and actions in videos, some potential issues must be considered for accurate results.\"}),/*#__PURE__*/e(\"h3\",{children:\"Accuracy\"}),/*#__PURE__*/e(\"p\",{children:\"Although accuracy is a strength of automated annotation, it is also the biggest challenge. An effective model requires proper training with strong datasets to recognize visuals correctly. Creating the necessary datasets can be a problem when resources are limited. Moreover, it can be expensive for businesses to retain qualified experts in AI and video annotation.\"}),/*#__PURE__*/e(\"h3\",{children:\"Data privacy and security\"}),/*#__PURE__*/e(\"p\",{children:\"It is essential that data privacy and security laws such as GDPR or CCPA are adhered to when dealing with personal information collected during these projects.\"}),/*#__PURE__*/e(\"h3\",{children:\"Continual retraining\"}),/*#__PURE__*/e(\"p\",{children:\"Manual input may be needed at times to modify results generated by AI models; this may require regular updates on models due to advances in technology or sensor capabilities which can add further complexity to the equation for businesses already under pressure from resource constraints.\"}),/*#__PURE__*/e(\"h2\",{children:\"Best practices for video annotation\"}),/*#__PURE__*/e(\"p\",{children:\"By following best practices for successful video annotation projects, businesses can obtain more accurate results from AI-driven tasks while reducing costs associated with traditional manual methods of annotation. Here are some tips for successful video annotation:\"}),/*#__PURE__*/e(\"h3\",{children:\"Organize data into manageable chunks\"}),/*#__PURE__*/e(\"p\",{children:\"Managing the data is one of the main challenges of a large-scale video annotation project. By dividing the data into smaller, manageable chunks, it becomes easier to manage and annotate a video. Additionally, this ensures that each chunk receives sufficient attention while maintaining an accurate and consistent level of quality throughout the project.\"}),/*#__PURE__*/e(\"h3\",{children:\"Combine auto-annotation and human annotation\"}),/*#__PURE__*/e(\"p\",{children:\"Design workflows that use automation for straightforward tasks and human input for handling edge cases or evaluating results. Toloka's solutions offer pre-trained models to handle auto-labeling with reliable accuracy, combined with human annotators who can provide more nuanced annotations than automated algorithms alone.\"}),/*#__PURE__*/e(\"h3\",{children:\"Use quality assurance checks\"}),/*#__PURE__*/e(\"p\",{children:\"Quality assurance checks should be incorporated into the process to optimize the accuracy of results. With Toloka, businesses can access a team of human annotators for their video annotation tasks and get quality assurance checks to make sure their results are correct.\"}),/*#__PURE__*/e(\"h3\",{children:\"Test different methods\"}),/*#__PURE__*/e(\"p\",{children:\"To achieve better accuracy, test different video annotation methods to find the one that works best. For example, some projects may require single image annotations, while others may require continuous frame annotations. By testing different methods, businesses can identify which technique will yield more accurate results for their particular task.\"}),/*#__PURE__*/e(\"h3\",{children:\"Evaluate Results\"}),/*#__PURE__*/e(\"p\",{children:\"Finally, businesses should evaluate the results of annotated videos to identify improvement areas and make necessary adjustments as needed. This could include changing techniques or processes used during the project or training models on new datasets to obtain more accurate results.\"}),/*#__PURE__*/e(\"p\",{children:\"Human annotators can efficiently evaluate the output of computer vision models to provide metrics. Continuous monitoring with human-in-the-loop workflows is a good approach for catching problems in the model before they become serious problems in the real world.\"}),/*#__PURE__*/e(\"h2\",{children:\"How Toloka can help overcome the challenges of video annotation\"}),/*#__PURE__*/e(\"p\",{children:\"Video annotation can often present several challenges for businesses. From accuracy and resource constraints to the need to recruit qualified personnel to data privacy and security laws, these issues can be daunting.\"}),/*#__PURE__*/e(\"p\",{children:\"Toloka offers a solution to these problems. Companies have access to a global pool of talent which helps them quickly and cost-effectively produce high-quality results with an emphasis on data security. Additionally, Toloka's platform combines manual input with automated labeling solutions for ground truth accuracy and superior scalability.\"}),/*#__PURE__*/e(\"p\",{children:\"Toloka allows businesses to benefit from faster completion times than manual annotation methods and achieve improved accuracy. Moreover, Toloka provides access to experts in AI and data labeling who can develop custom solutions tailored specifically for video annotation tasks.\"}),/*#__PURE__*/e(\"p\",{children:\"Finally, quality assurance checks ensure high-quality video annotation even when dealing with more sophisticated tasks like motion tracking or facial recognition that require an understanding of context across multiple video frames together.\"}),/*#__PURE__*/e(\"h2\",{children:\"Maximizing the efficiency of video annotation with human input\"}),/*#__PURE__*/e(\"p\",{children:\"In summary, Toloka\u2019s data labeling platform is an invaluable asset for businesses looking for effective solutions to the challenges posed by video annotation projects, such as accuracy concerns, resource constraints, and data privacy protocols. By leveraging Toloka's global pool of talent combined with automated techniques and expert advice in AI-driven solutions, companies can maximize the efficiency of their projects.\"}),/*#__PURE__*/e(\"p\",{children:\"Toloka combines machine learning models with human intelligence to annotate video footage quickly without sacrificing the accuracy of results. Our data labeling platform supports flexible solutions for a wide range of video annotation capabilities.\"}),/*#__PURE__*/e(\"p\",{children:\"To request a live demo or discuss pricing and timeframes for your video annotation project, contact our team of experts.\"})]});export const richText6=/*#__PURE__*/t(i.Fragment,{children:[/*#__PURE__*/t(\"p\",{children:[\"We're excited to announce that Toloka has released the beta version of our \",/*#__PURE__*/e(o,{href:\"https://tolokamodels.tech/how-it-works\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(n.a,{children:/*#__PURE__*/e(\"strong\",{children:\"new ML Platform\"})})}),\", designed to deliver custom ML models in just a few clicks. \"]}),/*#__PURE__*/e(\"p\",{children:\"No need for ML infrastructure \u2014 choose a pre-trained model that matches your task, adapt it to fit your data, and access it via API.\"}),/*#__PURE__*/t(\"ul\",{style:{\"--framer-font-size\":\"18px\",\"--framer-text-alignment\":\"start\",\"--framer-text-color\":\"rgb(30, 33, 38)\",\"--framer-text-stroke-width\":\"0px\",\"--framer-text-transform\":\"none\"},children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Auto-training fine-tunes the model for you. All you need is data.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Covers almost every data type \u2014 text, image, and video (expanded support coming soon).\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Built-in data labeling. Do it yourself in our handy tool or send it to the Toloka crowd.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Easy deployment and model hosting with low-latency inference.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Experiment metadata stored on the platform.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Completely free for early adopters (up to 3 models).\"})})]}),/*#__PURE__*/e(\"h2\",{children:\"ML model use cases\"}),/*#__PURE__*/t(\"p\",{children:[\"Here are some examples of tasks that our ML models can handle. Feel free to reach out to \",/*#__PURE__*/e(o,{href:\"mailto:ml-toloka@toloka.ai\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(n.a,{children:\"ml-toloka@toloka.ai\"})}),\" with your project needs. We can point you to the right model or provide a customized solution.\"]}),/*#__PURE__*/e(\"h2\",{children:\"Try out the Toloka ML platform\"}),/*#__PURE__*/e(\"p\",{children:\"Just jump right in to explore the platform capabilities and try out some of the pre-trained models.\"}),/*#__PURE__*/e(\"img\",{alt:\"Try out the Toloka ML platform\",className:\"framer-image\",height:\"861\",src:\"https://framerusercontent.com/images/Qype6FQ65zuv6jawQh20MsvYQ.webp\",srcSet:\"https://framerusercontent.com/images/Qype6FQ65zuv6jawQh20MsvYQ.webp?scale-down-to=512 512w,https://framerusercontent.com/images/Qype6FQ65zuv6jawQh20MsvYQ.webp?scale-down-to=1024 1024w,https://framerusercontent.com/images/Qype6FQ65zuv6jawQh20MsvYQ.webp?scale-down-to=2048 2048w,https://framerusercontent.com/images/Qype6FQ65zuv6jawQh20MsvYQ.webp 2560w\",style:{aspectRatio:\"2560 / 1722\"},width:\"1280\"}),/*#__PURE__*/e(\"p\",{children:\"To get started, you'll go through these basic steps:\"}),/*#__PURE__*/t(\"ol\",{children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(o,{href:\"https://tolokamodels.tech/keycloak/realms/pulsar_auth/protocol/openid-connect/auth?client_id=pulsar_back&response_type=code&redirect_uri=https://tolokamodels.tech/api/auth/obtain_token&scope=email&state=https://tolokamodels.tech/models/podium\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(n.a,{children:\"Sign up\"})}),\" and log into the platform.\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/t(\"p\",{children:[\"Choose a pre-trained model from \",/*#__PURE__*/e(o,{href:\"https://tolokamodels.tech/models/podium\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(n.a,{children:\"our catalog\"})}),\".\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/t(\"p\",{children:[\"For fine-tuning, \",/*#__PURE__*/e(o,{href:\"https://tolokamodels.tech/datasets?limit=10&page=1&table_filters=%7B%7D&sort_column_name=submit_datetime&sort_direction=desc\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(n.a,{children:\"upload your training data\"})}),\" in CSV format. Use our visual data labeling tool to add labels or check existing labels in your dataset after uploading.\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Run auto training to tune the model using the data you uploaded.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/t(\"p\",{children:[\"Find your model in \",/*#__PURE__*/e(o,{href:\"https://tolokamodels.tech/models/registry?limit=10&page=1&table_filters=%7B%7D&sort_column_name=submit_datetime&sort_direction=desc\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(n.a,{children:\"the registry\"})}),\", run it, and check the quality of responses. You can apply the trained model to a dataset offline or deploy the model as a service on Toloka and access it via API for inference in your application.  \"]})})]}),/*#__PURE__*/e(\"img\",{alt:\"apply the trained model to a dataset offline or deploy the model as a service on Toloka and access it via API for inference in your application.\",className:\"framer-image\",height:\"703\",src:\"https://framerusercontent.com/images/WWKnkXICltwPRsvtLxgyXdZkUSk.webp\",srcSet:\"https://framerusercontent.com/images/WWKnkXICltwPRsvtLxgyXdZkUSk.webp?scale-down-to=512 512w,https://framerusercontent.com/images/WWKnkXICltwPRsvtLxgyXdZkUSk.webp?scale-down-to=1024 1024w,https://framerusercontent.com/images/WWKnkXICltwPRsvtLxgyXdZkUSk.webp?scale-down-to=2048 2048w,https://framerusercontent.com/images/WWKnkXICltwPRsvtLxgyXdZkUSk.webp 2560w\",style:{aspectRatio:\"2560 / 1407\"},width:\"1280\"}),/*#__PURE__*/t(\"p\",{children:[\"For a visual guide, follow the steps and videos in \",/*#__PURE__*/e(o,{href:\"https://tolokamodels.tech/how-it-works\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(n.a,{children:\"How it works\"})}),\" on the platform.\"]}),/*#__PURE__*/t(\"p\",{children:[\"If you need help, click the \",/*#__PURE__*/e(\"strong\",{children:\"Get expert help\"}),\" button and submit your questions \u2014 we'll help you choose the best model for your needs and adapt it to your task.\"]}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(o,{href:\"https://toloka.ai/talk-to-us\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!0,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(n.a,{children:\"Get expert help\"})})}),/*#__PURE__*/e(\"p\",{children:\"Beta is free, including storage and usage \u2014 now's the perfect time to test it out!\"}),/*#__PURE__*/t(\"p\",{children:[\"Not sure where to start? Wondering if there's a model that can handle your task? Just drop us a line at \",/*#__PURE__*/e(o,{href:\"mailto:ml-toloka@toloka.ai\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(n.a,{children:\"ml-toloka@toloka.ai\"})}),\" \u2014 we'll be happy to help.\"]})]});export const richText7=/*#__PURE__*/t(i.Fragment,{children:[/*#__PURE__*/e(\"h2\",{children:\"Why BigCode is a big deal\"}),/*#__PURE__*/t(\"p\",{children:[\"Toloka recently labeled training data to support BigCode, an open scientific collaboration jointly led by \",/*#__PURE__*/e(o,{href:\"https://huggingface.co/\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(n.a,{children:\"HuggingFace\"})}),\" and \",/*#__PURE__*/e(o,{href:\"https://www.servicenow.com/\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(n.a,{children:\"ServiceNow\"})}),\". \"]}),/*#__PURE__*/e(\"p\",{children:\"BigCode's mission is focused on the responsible development and use of large language models for code. This aligns perfectly with Toloka's values. We believe that a high-tech approach to data labeling requires a combination of science and high-quality human input obtained in a responsible and scalable way. That's why working on the BigCode project is so organic to Toloka and mutually beneficial for all the teams involved.\"}),/*#__PURE__*/e(\"div\",{className:\"framer-text-module\",style:{\"--aspect-ratio\":\"560 / 315\",aspectRatio:\"560 / 315\",height:\"auto\",width:\"100%\"},children:/*#__PURE__*/e(a,{componentIdentifier:\"module:NEd4VmDdsxM3StIUbddO/DDzyuYPF56TuI0bfUu2z/YouTube.js:Youtube\",children:t=>/*#__PURE__*/e(r,{...t,play:\"Off\",shouldMute:!0,thumbnail:\"Medium Quality\",url:\"https://www.youtube.com/watch?v=a_d36OBK5Lk\"})})}),/*#__PURE__*/e(\"h2\",{children:\"The PII challenge\"}),/*#__PURE__*/t(\"p\",{children:[\"Code LLMs are making developers more productive by taking over the mundane and error prone parts of programming. However, current systems face regulatory challenges such as GDPR, so the BigCode project looked to Toloka for help with removing sensitive information from their dataset called \",/*#__PURE__*/e(o,{href:\"https://huggingface.co/datasets/bigcode/the-stack\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(n.a,{children:\"The Stack\"})}),\". This would mean getting help with the annotation of usernames, passwords, and security tokens. With 6.4 TB of code in multiple programming languages in the dataset, this was not a trivial task.\"]}),/*#__PURE__*/e(\"p\",{children:\"The curators set out to prepare a PII model training dataset to automatically detect and mask personal data in the The Stack. They identified 14 categories of sensitive data that needed to be accurately recognized. The remaining challenge was to find enough experts to label the training data.\"}),/*#__PURE__*/e(o,{href:\"https://toloka.ai/large-language-models?utm_source=blog&utm_medium=banner&utm_campaign=sftrlhf&utm_content=bigcode\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!0,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(n.a,{className:\"framer-image\",\"data-preset-tag\":\"img\",children:/*#__PURE__*/e(\"img\",{alt:\"\",className:\"framer-image\",height:\"240\",src:\"https://framerusercontent.com/images/L1MWCdKVPqiR785bI54iY2TIc9k.png\",srcSet:\"https://framerusercontent.com/images/L1MWCdKVPqiR785bI54iY2TIc9k.png?scale-down-to=512 512w,https://framerusercontent.com/images/L1MWCdKVPqiR785bI54iY2TIc9k.png?scale-down-to=1024 1024w,https://framerusercontent.com/images/L1MWCdKVPqiR785bI54iY2TIc9k.png 1498w\",style:{aspectRatio:\"1498 / 480\"},width:\"749\"})})}),/*#__PURE__*/e(\"h2\",{children:\"Crowd magic: 4349 hours in 4 days\"}),/*#__PURE__*/e(\"p\",{children:\"Toloka teamed up with BigCode to take on the dataset preparation. The traditional approach would be to hire a team of programmers who could read the source code and accurately label all 14 categories of data. Considering the size of the dataset (12,000 code chunks), it would take a lot of programmers or many months of effort.\"}),/*#__PURE__*/e(\"p\",{children:\"Fortunately, Toloka is anything but traditional. Our team got the data labeled in just 4 days with the combined efforts of 1399 Tolokers (some non-programmers) from 35 different countries. This was a total of 4349 person-hours \u2014 the equivalent of an entire year of work for one programmer working alone.\"}),/*#__PURE__*/e(\"h2\",{children:\"Cracking the magic trick\"}),/*#__PURE__*/e(\"p\",{children:\"The secret to making the task doable was to break it down into smaller steps, a process that we call decomposition. Let's dissect the project to see how we pulled it off.\"}),/*#__PURE__*/e(\"h3\",{children:\"Labels and skills\"}),/*#__PURE__*/e(\"p\",{children:\"The goal was to find and label 14 categories of personal data: 7 main categories (names, email addresses, passwords, usernames, SSH and API keys, cloud IDs, and IP addresses), 6 subcategories, and an \u201Cambiguous\u201D category. While names and email addresses are easy to understand, some of the categories are difficult for non-experts to recognize when looking at programming code.\"}),/*#__PURE__*/e(\"p\",{children:\"We used a quiz to teach Tolokers about all the categories. The quiz gave detailed instructions for each category, followed by a test. After the quiz, Tolokers were assigned a skill level for each category and granted access to tasks based on skill. So if a certain person was good at labeling names but not good at IP addresses, they were only allowed to do tasks for labeling names.\"}),/*#__PURE__*/e(\"img\",{alt:\"Project decomposition\",className:\"framer-image\",height:\"1269\",src:\"https://framerusercontent.com/images/hZ5A7cA5G3KE8mJI8YN5zVorcVc.jpeg\",srcSet:\"https://framerusercontent.com/images/hZ5A7cA5G3KE8mJI8YN5zVorcVc.jpeg?scale-down-to=512 512w,https://framerusercontent.com/images/hZ5A7cA5G3KE8mJI8YN5zVorcVc.jpeg?scale-down-to=1024 1024w,https://framerusercontent.com/images/hZ5A7cA5G3KE8mJI8YN5zVorcVc.jpeg?scale-down-to=2048 2048w,https://framerusercontent.com/images/hZ5A7cA5G3KE8mJI8YN5zVorcVc.jpeg 2865w\",style:{aspectRatio:\"2865 / 2538\"},width:\"1432\"}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"em\",{children:\"Project decomposition before we set it up in Toloka\"})}),/*#__PURE__*/e(\"h3\",{children:\"The interface\"}),/*#__PURE__*/e(\"p\",{children:\"The 12,000 code samples were divided into chunks of 50 lines each. We decomposed the labeling into separate projects by category, which meant that each chunk was labeled in 7 different projects \u2014 once for each of the main categories. In any given task, the Toloker was asked to look for just one category of personal data at a time. This meant that more people could successfully contribute to the project with less expertise needed.\"}),/*#__PURE__*/e(\"p\",{children:\"We designed a custom Javascript interface to display the code formatting correctly. Tolokers used color highlighting to select the personal data they found for each category. Each task had a maximum of 4 subcategories to choose from, which made it easy to differentiate them with contrasting colors. The visual simplicity speeds up labeling (imagine what it would be like if we tried to label all 14 categories with different colors at once). This is what it looked like:\"}),/*#__PURE__*/e(\"img\",{alt:\"visual interface of labeling projects\",className:\"framer-image\",height:\"606\",src:\"https://framerusercontent.com/images/NKZkn4UVrBmLPCAmXUAPaplJqDc.jpeg\",srcSet:\"https://framerusercontent.com/images/NKZkn4UVrBmLPCAmXUAPaplJqDc.jpeg?scale-down-to=512 512w,https://framerusercontent.com/images/NKZkn4UVrBmLPCAmXUAPaplJqDc.jpeg?scale-down-to=1024 1024w,https://framerusercontent.com/images/NKZkn4UVrBmLPCAmXUAPaplJqDc.jpeg?scale-down-to=2048 2048w,https://framerusercontent.com/images/NKZkn4UVrBmLPCAmXUAPaplJqDc.jpeg 2136w\",style:{aspectRatio:\"2136 / 1212\"},width:\"1068\"}),/*#__PURE__*/e(\"img\",{alt:\"visual interface of labeling projects\",className:\"framer-image\",height:\"467\",src:\"https://framerusercontent.com/images/IuBi69Tb5720A6mEiFOvVUKuqrs.jpeg\",srcSet:\"https://framerusercontent.com/images/IuBi69Tb5720A6mEiFOvVUKuqrs.jpeg?scale-down-to=512 512w,https://framerusercontent.com/images/IuBi69Tb5720A6mEiFOvVUKuqrs.jpeg?scale-down-to=1024 1024w,https://framerusercontent.com/images/IuBi69Tb5720A6mEiFOvVUKuqrs.jpeg?scale-down-to=2048 2048w,https://framerusercontent.com/images/IuBi69Tb5720A6mEiFOvVUKuqrs.jpeg 2136w\",style:{aspectRatio:\"2136 / 934\"},width:\"1068\"}),/*#__PURE__*/e(\"h3\",{children:\"The steps\"}),/*#__PURE__*/e(\"p\",{children:\"The project was broken down into 4 steps:\"}),/*#__PURE__*/t(\"ol\",{style:{\"--framer-font-size\":\"18px\",\"--framer-text-alignment\":\"start\",\"--framer-text-color\":\"rgb(30, 33, 38)\",\"--framer-text-stroke-width\":\"0px\",\"--framer-text-transform\":\"none\",\"--list-style-type\":\"unset\"},children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"Quiz.\"}),\" We showed Tolokers explanations and examples of how to find each category of personal data and how to handle exceptions. The quiz tested their skills in each category separately.\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"Labeling in 7 separate projects.\"}),\" Each code chunk was labeled separately for each main category.\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"Validation in 7 separate projects.\"}),\" Each labeled chunk was shown to other Tolokers to decide if the label is correct or incorrect. Validation was repeated 3 \u2013 5 times, depending on the confidence and consistency of the answers.\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"Automatic aggregation.\"}),\" All of the labels were combined for each chunk. Intersecting labels were marked as \u201CAmbiguous\u201D.\"]})})]}),/*#__PURE__*/e(\"h2\",{children:\"Taking care of the crowd\"}),/*#__PURE__*/e(\"p\",{children:\"The BigCode team and Toloka share a commitment to responsible data collection and fair treatment of crowd workers.\"}),/*#__PURE__*/e(\"p\",{children:\"A main concern for the BigCode project was to pay Tolokers more than the minimum wage. Together, we analyzed minimum wage rates and purchasing power in different countries and set an hourly pay rate of $7.30. The project was open to Tolokers in countries where this pay rate would have the same purchasing power as the highest minimum wage in the US ($16.50).\"}),/*#__PURE__*/e(\"p\",{children:\"We wanted to reward Tolokers for their performance, so we started with a fair market price on tasks and then awarded bonuses to reach the hourly rate. When the project finished under budget, we allocated all of the remaining funds as final bonuses.\"}),/*#__PURE__*/e(\"p\",{children:\"The project was a big hit with Tolokers. Many people welcomed the opportunity to work with real programming code and learn more about coding in the process. We received waves of positive feedback from people asking for more tasks.\"}),/*#__PURE__*/e(\"img\",{alt:\"Tolokers feedback \",className:\"framer-image\",height:\"525\",src:\"https://framerusercontent.com/images/5PrZZVnm2TfRQcdOxIOAMoHcRPw.jpeg\",srcSet:\"https://framerusercontent.com/images/5PrZZVnm2TfRQcdOxIOAMoHcRPw.jpeg?scale-down-to=512 512w,https://framerusercontent.com/images/5PrZZVnm2TfRQcdOxIOAMoHcRPw.jpeg?scale-down-to=1024 1024w,https://framerusercontent.com/images/5PrZZVnm2TfRQcdOxIOAMoHcRPw.jpeg?scale-down-to=2048 2048w,https://framerusercontent.com/images/5PrZZVnm2TfRQcdOxIOAMoHcRPw.jpeg 2892w\",style:{aspectRatio:\"2892 / 1050\"},width:\"1446\"}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"em\",{children:\"Some feedback received from Tolokers\"})}),/*#__PURE__*/e(\"h2\",{children:\"The ethics of BigCode\"}),/*#__PURE__*/e(\"p\",{children:\"We did have concerns about the ethics of sharing personal data with the crowd. BigCode took pains to make sure that all the code samples were taken from permissively licensed open sources that were already publicly available, and we made sure that Tolokers understood that the dataset would be used to train a model to handle this task automatically.\"}),/*#__PURE__*/e(\"p\",{children:\"As a company, Toloka supports the BigCode goals of transparency, openness, and responsible AI. Open datasets are just part of the equation, but it's essential that those datasets are collected in a fair and transparent way \u2014 Toloka's number one priority.\"})]});\nexport const __FramerMetadata__ = {\"exports\":{\"richText2\":{\"type\":\"variable\",\"annotations\":{\"framerContractVersion\":\"1\"}},\"richText6\":{\"type\":\"variable\",\"annotations\":{\"framerContractVersion\":\"1\"}},\"richText5\":{\"type\":\"variable\",\"annotations\":{\"framerContractVersion\":\"1\"}},\"richText1\":{\"type\":\"variable\",\"annotations\":{\"framerContractVersion\":\"1\"}},\"richText\":{\"type\":\"variable\",\"annotations\":{\"framerContractVersion\":\"1\"}},\"richText7\":{\"type\":\"variable\",\"annotations\":{\"framerContractVersion\":\"1\"}},\"richText3\":{\"type\":\"variable\",\"annotations\":{\"framerContractVersion\":\"1\"}},\"richText4\":{\"type\":\"variable\",\"annotations\":{\"framerContractVersion\":\"1\"}},\"__FramerMetadata__\":{\"type\":\"variable\"}}}"],
  "mappings": "qSAAyS,IAAMA,EAAsBC,EAAIC,EAAS,CAAC,SAAS,CAAcD,EAAE,IAAI,CAAC,SAAS,CAAC,2SAAwTE,EAAEC,EAAE,CAAC,KAAK,+BAA+B,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,YAAY,CAAC,CAAC,CAAC,EAAE,KAAkBF,EAAEC,EAAE,CAAC,KAAK,wCAAwC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,iBAAiB,CAAC,CAAC,CAAC,EAAE,SAAsBF,EAAEC,EAAE,CAAC,KAAK,oDAAoD,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,6BAA6B,CAAC,CAAC,CAAC,EAAE,oHAAoH,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,mCAAmC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,ocAAoc,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6CAA6C,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+hBAA+hB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,oRAAoR,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6WAA6W,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8UAA8U,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6TAA6T,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,sBAAsB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,skBAAskB,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,cAAc,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,oVAAoV,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8ZAA8Z,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,gSAAgS,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,yBAAyB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+bAA+b,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,waAAwa,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,uPAAuP,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,8YAA2ZE,EAAEC,EAAE,CAAC,KAAK,mDAAmD,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,mCAAmC,CAAC,CAAC,CAAC,EAAE,gBAAgB,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,+WAA+W,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,0BAA0B,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,uSAAuS,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,yBAAyB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6XAA6X,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6fAA6f,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,2BAA2B,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8XAA8X,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6TAA6T,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+aAA+a,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,wVAAwV,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,qcAAqc,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,yBAAyB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4CAA4C,CAAC,EAAeF,EAAE,KAAK,CAAC,MAAM,CAAC,qBAAqB,OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,6BAA6B,MAAM,0BAA0B,OAAO,oBAAoB,OAAO,EAAE,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,kFAAkF,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,qCAAqC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,qKAAqK,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,gDAAgD,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,gHAAgH,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,iZAAiZ,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,yBAAyB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+IAA+I,CAAC,EAAeF,EAAE,KAAK,CAAC,MAAM,CAAC,qBAAqB,OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,6BAA6B,MAAM,0BAA0B,OAAO,oBAAoB,OAAO,EAAE,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,uIAAuI,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,qCAAqC,CAAC,EAAE,iFAAiF,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,MAAM,CAAC,qBAAqB,OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,6BAA6B,MAAM,0BAA0B,MAAM,EAAE,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,sDAAsD,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,yFAAyF,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,sBAAsB,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,sBAAsB,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,kBAAkB,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,MAAM,IAAI,MAAM,CAAC,qBAAqB,OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,6BAA6B,MAAM,0BAA0B,OAAO,oBAAoB,OAAO,EAAE,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,gBAAgB,CAAC,EAAE,kFAAkF,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,aAAa,CAAC,EAAE,wEAAwE,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,MAAM,CAAC,qBAAqB,OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,6BAA6B,MAAM,0BAA0B,MAAM,EAAE,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,qDAAqD,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,qEAAqE,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,MAAM,IAAI,MAAM,CAAC,qBAAqB,OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,6BAA6B,MAAM,0BAA0B,OAAO,oBAAoB,OAAO,EAAE,SAAsBA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,0BAA0B,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,MAAM,CAAC,qBAAqB,OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,6BAA6B,MAAM,0BAA0B,MAAM,EAAE,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,yCAAyC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,gCAAgC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,MAAM,IAAI,MAAM,CAAC,qBAAqB,OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,6BAA6B,MAAM,0BAA0B,OAAO,oBAAoB,OAAO,EAAE,SAAsBA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,oBAAoB,CAAC,EAAE,+DAA+D,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,sCAAsC,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,wBAAwB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,wVAAwV,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,yEAAyE,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6iBAA6iB,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,uDAAuD,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,+FAA4GE,EAAEC,EAAE,CAAC,KAAK,mDAAmD,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,UAAU,CAAC,CAAC,CAAC,EAAE,qQAAqQ,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,4bAA4b,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,6BAA6B,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,YAAY,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,+KAA4LE,EAAEC,EAAE,CAAC,KAAK,+BAA+B,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,YAAY,CAAC,CAAC,CAAC,EAAE,0UAA0U,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,qXAAqX,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8RAA8R,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,yXAAyX,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,iWAAiW,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,0mBAA0mB,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,2BAAsB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,kOAAkO,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,kaAAka,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,uIAAuI,CAAC,EAAeF,EAAE,KAAK,CAAC,MAAM,CAAC,qBAAqB,OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,6BAA6B,MAAM,0BAA0B,MAAM,EAAE,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAEC,EAAE,CAAC,KAAK,gCAAgC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,sBAAsB,CAAC,CAAC,CAAC,EAAE,uGAAuG,CAAC,CAAC,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,8IAA8I,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,+MAA+M,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,uQAAuQ,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,8KAA8K,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,mCAAmC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAEC,EAAE,CAAC,KAAK,oDAAoD,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,KAAK,CAAC,CAAC,CAAC,EAAE,6pBAA6pB,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,gQAAgQ,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4TAA4T,CAAC,EAAeF,EAAE,KAAK,CAAC,MAAM,CAAC,qBAAqB,OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,6BAA6B,MAAM,0BAA0B,MAAM,EAAE,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAEC,EAAE,CAAC,KAAK,+BAA+B,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,MAAM,CAAC,CAAC,CAAC,EAAE,OAAoBF,EAAEC,EAAE,CAAC,KAAK,gCAAgC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,OAAO,CAAC,CAAC,CAAC,EAAE,iBAAiB,CAAC,CAAC,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,oBAAoB,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,oDAAoD,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,8BAA8B,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,iDAAiD,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,yBAAyB,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,wBAAwB,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,YAAY,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,oiBAAoiB,CAAC,CAAC,CAAC,CAAC,EAAeG,EAAuBL,EAAIC,EAAS,CAAC,SAAS,CAAcC,EAAE,IAAI,CAAC,SAAS,6NAAwN,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6NAA6N,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,gBAAgB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,iMAAiM,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,gWAAgW,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,0OAA0O,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6UAA6U,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,0MAA0M,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,YAAY,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8SAA8S,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,iQAAiQ,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,uMAAuM,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,wBAAwB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,wJAAwJ,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,sBAAsB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,0QAA0Q,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,kNAAkN,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,qBAAqB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,iWAAiW,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,oCAAoC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8VAA8V,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,+BAA+B,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,oRAAoR,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,gCAAgC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4LAA4L,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,kBAAkB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,gUAAgU,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,4BAA4B,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,uPAAuP,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,8BAA8B,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4RAA4R,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,8BAA8B,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8WAA8W,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,gBAAgB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,0SAA0S,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,qBAAqB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,oJAAoJ,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,8CAA8C,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,iFAAiF,CAAC,EAAeF,EAAE,KAAK,CAAC,MAAM,CAAC,qBAAqB,OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,6BAA6B,MAAM,0BAA0B,MAAM,EAAE,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,gTAAgT,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,iOAAiO,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,4IAA4I,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,YAAY,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sRAAsR,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4VAA4V,CAAC,CAAC,CAAC,CAAC,EAAeI,EAAuBN,EAAIC,EAAS,CAAC,SAAS,CAAcC,EAAE,KAAK,CAAC,SAAS,cAAc,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAEC,EAAE,CAAC,KAAK,2CAA2C,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,uBAAuB,CAAC,CAAC,CAAC,EAAE,sPAAsP,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,uXAAkX,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,iCAAiC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,s0BAAkzB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,seAA4d,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,oBAAoB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,wkBAA8jB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+CAA0C,CAAC,EAAeA,EAAE,KAAK,CAAC,MAAM,CAAC,qBAAqB,OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,6BAA6B,MAAM,0BAA0B,MAAM,EAAE,SAAsBA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,kCAAkC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,gJAAgJ,CAAC,EAAeA,EAAE,KAAK,CAAC,MAAM,CAAC,qBAAqB,OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,6BAA6B,MAAM,0BAA0B,MAAM,EAAE,SAAsBA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,mBAAmB,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,uMAAuM,CAAC,EAAeA,EAAE,KAAK,CAAC,MAAM,CAAC,qBAAqB,OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,6BAA6B,MAAM,0BAA0B,MAAM,EAAE,SAAsBA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,uBAAuB,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sPAAsP,CAAC,EAAeA,EAAE,KAAK,CAAC,MAAM,CAAC,qBAAqB,OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,6BAA6B,MAAM,0BAA0B,MAAM,EAAE,SAAsBA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,gDAAgD,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,mQAA8P,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,uJAAuJ,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,mBAAmB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,wWAAmW,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,mDAA8C,CAAC,EAAeA,EAAE,KAAK,CAAC,MAAM,CAAC,qBAAqB,OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,6BAA6B,MAAM,0BAA0B,MAAM,EAAE,SAAsBA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,sBAAsB,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+KAA0K,CAAC,EAAeA,EAAE,KAAK,CAAC,MAAM,CAAC,qBAAqB,OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,6BAA6B,MAAM,0BAA0B,MAAM,EAAE,SAAsBA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,yCAAyC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,kKAA6J,CAAC,EAAeA,EAAE,KAAK,CAAC,MAAM,CAAC,qBAAqB,OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,6BAA6B,MAAM,0BAA0B,MAAM,EAAE,SAAsBA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,+BAA+B,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4HAA4H,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,wXAAwX,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,wBAAwB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,mSAAmS,CAAC,EAAeA,EAAE,KAAK,CAAC,MAAM,CAAC,qBAAqB,OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,6BAA6B,MAAM,0BAA0B,MAAM,EAAE,SAAsBA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,sDAAsD,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8GAA8G,CAAC,EAAeA,EAAE,KAAK,CAAC,MAAM,CAAC,qBAAqB,OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,6BAA6B,MAAM,0BAA0B,MAAM,EAAE,SAAsBA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,qBAAqB,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8HAA8H,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2RAA2R,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6UAA6U,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,mcAAob,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,gDAAgD,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6cAA6c,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,uBAAuB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,gTAAgT,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2yBAA2yB,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,oBAAoB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8WAAyW,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,ifAA4e,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,cAAc,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+QAA0Q,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,sEAAsE,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sgBAAigB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2bAA4a,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,8CAA8C,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,mpBAAqnB,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,kCAAkC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2bAAwZ,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6cAA0a,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8TAAyT,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,qBAAqB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sSAAsS,CAAC,EAAeA,EAAE,KAAK,CAAC,MAAM,CAAC,qBAAqB,OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,6BAA6B,MAAM,0BAA0B,MAAM,EAAE,SAAsBA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,yBAAyB,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,iFAAiF,CAAC,EAAeA,EAAE,KAAK,CAAC,MAAM,CAAC,qBAAqB,OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,6BAA6B,MAAM,0BAA0B,MAAM,EAAE,SAAsBA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,+BAA+B,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sFAAsF,CAAC,EAAeA,EAAE,KAAK,CAAC,MAAM,CAAC,qBAAqB,OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,6BAA6B,MAAM,0BAA0B,MAAM,EAAE,SAAsBA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,0BAA0B,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4FAA4F,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,wVAAwV,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,iKAA8KE,EAAEC,EAAE,CAAC,KAAK,2CAA2C,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,YAAY,CAAC,CAAC,CAAC,EAAE,8EAA8E,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,eAAe,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4VAA4V,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,qLAAqL,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,qQAA2P,CAAC,CAAC,CAAC,CAAC,EAAeK,EAAuBP,EAAIC,EAAS,CAAC,SAAS,CAAcC,EAAE,IAAI,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,4cAAuc,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2RAAsR,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,qQAAqQ,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,iCAA4B,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,0BAA0B,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,wRAAwR,CAAC,EAAeA,EAAE,MAAM,CAAC,IAAI,kBAAkB,UAAU,eAAe,OAAO,MAAM,IAAI,wEAAwE,OAAO,yWAAyW,MAAM,CAAC,YAAY,aAAa,EAAE,MAAM,MAAM,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,KAAK,CAAC,SAAS,iBAAiB,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,0IAA0I,CAAC,EAAeA,EAAE,MAAM,CAAC,IAAI,qBAAqB,UAAU,eAAe,OAAO,MAAM,IAAI,wEAAwE,OAAO,yWAAyW,MAAM,CAAC,YAAY,aAAa,EAAE,MAAM,MAAM,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,KAAK,CAAC,SAAS,oBAAoB,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,2BAA2B,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+XAA+X,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,qRAAqR,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,WAAW,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,ugBAAugB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,mOAAmO,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,kNAAkN,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,6HAA0IE,EAAEC,EAAE,CAAC,KAAK,kCAAkC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,MAAM,CAAC,CAAC,CAAC,EAAE,8BAA8B,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,sCAAsC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,ibAA4a,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6FAA6F,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,yKAAyK,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,sBAAsB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6NAA6N,CAAC,EAAeA,EAAE,MAAM,CAAC,IAAI,uBAAuB,UAAU,eAAe,OAAO,MAAM,IAAI,uEAAuE,OAAO,qWAAqW,MAAM,CAAC,YAAY,aAAa,EAAE,MAAM,MAAM,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,oYAA+X,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2PAA2P,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,0BAA0B,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,0dAAgd,CAAC,EAAeA,EAAE,MAAM,CAAC,IAAI,kCAAkC,UAAU,eAAe,OAAO,MAAM,IAAI,uEAAuE,OAAO,qWAAqW,MAAM,CAAC,YAAY,aAAa,EAAE,MAAM,MAAM,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,KAAK,CAAC,SAAS,iCAAiC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,gTAAgT,CAAC,EAAeA,EAAE,MAAM,CAAC,IAAI,gCAAgC,UAAU,eAAe,OAAO,MAAM,IAAI,wEAAwE,OAAO,yWAAyW,MAAM,CAAC,YAAY,aAAa,EAAE,MAAM,MAAM,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,KAAK,CAAC,SAAS,+BAA+B,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2MAA2M,CAAC,EAAeA,EAAE,MAAM,CAAC,IAAI,sDAAsD,UAAU,eAAe,OAAO,MAAM,IAAI,uEAAuE,OAAO,qWAAqW,MAAM,CAAC,YAAY,aAAa,EAAE,MAAM,MAAM,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2OAA2O,CAAC,EAAeA,EAAE,MAAM,CAAC,IAAI,2BAA2B,UAAU,eAAe,OAAO,MAAM,IAAI,wEAAwE,OAAO,yWAAyW,MAAM,CAAC,YAAY,aAAa,EAAE,MAAM,MAAM,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,KAAK,CAAC,SAAS,0BAA0B,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2WAA2W,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,mBAAmB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,mUAAmU,CAAC,EAAeA,EAAE,MAAM,CAAC,IAAI,iCAAiC,UAAU,eAAe,OAAO,MAAM,IAAI,uEAAuE,OAAO,qWAAqW,MAAM,CAAC,YAAY,aAAa,EAAE,MAAM,MAAM,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,KAAK,CAAC,SAAS,gCAAgC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,oEAAoE,CAAC,EAAeF,EAAE,KAAK,CAAC,MAAM,CAAC,qBAAqB,OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,6BAA6B,MAAM,0BAA0B,MAAM,EAAE,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,iGAA4F,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,2DAAsD,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,qBAAgB,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,qCAAqC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+RAA+R,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,+FAA4GE,EAAEC,EAAE,CAAC,KAAK,oCAAoC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,gBAAgB,CAAC,CAAC,CAAC,EAAE,8OAA2PF,EAAEC,EAAE,CAAC,KAAK,0CAA0C,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,oBAAoB,CAAC,CAAC,CAAC,EAAE,GAAG,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeI,EAAuBR,EAAIC,EAAS,CAAC,SAAS,CAAcC,EAAE,IAAI,CAAC,SAAS,+JAA+J,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4UAAuU,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,gQAAgQ,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,qCAAqC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,oCAAiDE,EAAEC,EAAE,CAAC,KAAK,wEAAwE,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,oCAAoC,CAAC,CAAC,CAAC,EAAE,qOAAqO,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,yPAAyP,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sgBAAsgB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2WAAsW,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,qdAA2c,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,yBAAyB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2ZAA2Z,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,yCAAsDE,EAAEC,EAAE,CAAC,KAAK,uBAAuB,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,mBAAmB,CAAC,CAAC,CAAC,EAAE,yIAAsJF,EAAE,KAAK,CAAC,SAAS,qBAAqB,CAAC,EAAE,+HAA4IA,EAAE,KAAK,CAAC,SAAS,kBAAkB,CAAC,EAAE,sFAAsF,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,uGAA+GE,EAAE,KAAK,CAAC,SAAS,mBAAmB,CAAC,EAAE,yKAAsLA,EAAEC,EAAE,CAAC,KAAK,iFAAiF,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,qBAAqB,CAAC,CAAC,CAAC,EAAE,kBAA+BF,EAAE,KAAK,CAAC,SAAS,cAAc,CAAC,EAAE,sWAAsW,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,sEAAmFE,EAAEC,EAAE,CAAC,KAAK,qEAAqE,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,wBAAwB,CAAC,CAAC,CAAC,EAAE,+mBAAqmB,CAAC,CAAC,EAAeJ,EAAE,IAAI,CAAC,SAAS,CAAC,qNAAkOE,EAAEC,EAAE,CAAC,KAAK,2CAA2C,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,wBAAwB,CAAC,CAAC,CAAC,EAAE,4mBAA+mBF,EAAEC,EAAE,CAAC,KAAK,sEAAsE,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,oBAAoB,CAAC,CAAC,CAAC,EAAE,GAAG,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,2tBAA2tB,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,wBAAwB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,kLAAkL,CAAC,EAAeA,EAAE,MAAM,CAAC,IAAI,yBAAyB,UAAU,eAAe,OAAO,MAAM,IAAI,uEAAuE,OAAO,qWAAqW,MAAM,CAAC,YAAY,YAAY,EAAE,MAAM,MAAM,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,oRAAoR,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,uiBAAuiB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,gkBAAgkB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6/CAA6/C,CAAC,EAAeA,EAAE,MAAM,CAAC,IAAI,8FAA8F,UAAU,eAAe,OAAO,MAAM,IAAI,uEAAuE,OAAO,qWAAqW,MAAM,CAAC,YAAY,YAAY,EAAE,MAAM,MAAM,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8dAAyd,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2KAAsK,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+rBAA+rB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2YAA2Y,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,0tBAAssB,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,oCAAoC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,quBAAguB,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,yTAA4TE,EAAEC,EAAE,CAAC,KAAK,6BAA6B,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,uBAAuB,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,2kBAAskB,CAAC,CAAC,CAAC,CAAC,EAAeO,EAAuBT,EAAIC,EAAS,CAAC,SAAS,CAAcC,EAAE,KAAK,CAAC,SAAS,2BAA2B,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,ohBAAohB,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,sBAAsB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,qLAAqL,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+UAA0U,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,kCAAkC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,kQAAkQ,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,uBAAuB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,kGAAkG,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,kBAAkB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8HAA8H,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,kBAAkB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,gGAAgG,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,0HAA0H,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,mCAAmC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6HAA6H,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,+BAA+B,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sJAAsJ,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,qBAAqB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,uPAAuP,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,QAAQ,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,mLAAmL,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,mCAAmC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8NAA8N,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,qBAAqB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,ubAAub,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,yBAAyB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8dAA8d,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,oEAAoE,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sWAAsW,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+cAA0c,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sMAAsM,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,2BAA2B,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6OAA6O,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6OAA6O,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,gBAAgB,CAAC,EAAE,sMAAsM,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,UAAU,CAAC,EAAE,uIAAuI,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,YAAY,CAAC,EAAE,uGAAuG,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,sDAAsD,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,oWAAoW,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,gBAAgB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,iVAAiV,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,iBAAiB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,0RAA0R,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6XAA6X,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,6DAA6D,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6PAA6P,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,UAAU,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+WAA+W,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,2BAA2B,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,iKAAiK,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,sBAAsB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,iSAAiS,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,qCAAqC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2QAA2Q,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,sCAAsC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,mWAAmW,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,8CAA8C,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,oUAAoU,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,8BAA8B,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+QAA+Q,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,wBAAwB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,gWAAgW,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,kBAAkB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6RAA6R,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,wQAAwQ,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,iEAAiE,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,0NAA0N,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,wVAAwV,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,uRAAuR,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,mPAAmP,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,gEAAgE,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8aAAya,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,0PAA0P,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,0HAA0H,CAAC,CAAC,CAAC,CAAC,EAAeQ,EAAuBV,EAAIC,EAAS,CAAC,SAAS,CAAcD,EAAE,IAAI,CAAC,SAAS,CAAC,8EAA2FE,EAAEC,EAAE,CAAC,KAAK,yCAAyC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAsBF,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,CAAC,CAAC,CAAC,CAAC,EAAE,+DAA+D,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2IAAsI,CAAC,EAAeF,EAAE,KAAK,CAAC,MAAM,CAAC,qBAAqB,OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,6BAA6B,MAAM,0BAA0B,MAAM,EAAE,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,mEAAmE,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,6FAAwF,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,0FAA0F,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,+DAA+D,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,6CAA6C,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,sDAAsD,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,oBAAoB,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,4FAAyGE,EAAEC,EAAE,CAAC,KAAK,6BAA6B,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,qBAAqB,CAAC,CAAC,CAAC,EAAE,iGAAiG,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,gCAAgC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,qGAAqG,CAAC,EAAeA,EAAE,MAAM,CAAC,IAAI,iCAAiC,UAAU,eAAe,OAAO,MAAM,IAAI,sEAAsE,OAAO,iWAAiW,MAAM,CAAC,YAAY,aAAa,EAAE,MAAM,MAAM,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sDAAsD,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAEC,EAAE,CAAC,KAAK,qPAAqP,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,SAAS,CAAC,CAAC,CAAC,EAAE,6BAA6B,CAAC,CAAC,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAC,mCAAgDE,EAAEC,EAAE,CAAC,KAAK,0CAA0C,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,aAAa,CAAC,CAAC,CAAC,EAAE,GAAG,CAAC,CAAC,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAC,oBAAiCE,EAAEC,EAAE,CAAC,KAAK,+HAA+H,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,2BAA2B,CAAC,CAAC,CAAC,EAAE,2HAA2H,CAAC,CAAC,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,kEAAkE,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAC,sBAAmCE,EAAEC,EAAE,CAAC,KAAK,sIAAsI,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,cAAc,CAAC,CAAC,CAAC,EAAE,0MAA0M,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeF,EAAE,MAAM,CAAC,IAAI,mJAAmJ,UAAU,eAAe,OAAO,MAAM,IAAI,wEAAwE,OAAO,yWAAyW,MAAM,CAAC,YAAY,aAAa,EAAE,MAAM,MAAM,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,sDAAmEE,EAAEC,EAAE,CAAC,KAAK,yCAAyC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,cAAc,CAAC,CAAC,CAAC,EAAE,mBAAmB,CAAC,CAAC,EAAeJ,EAAE,IAAI,CAAC,SAAS,CAAC,+BAA4CE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,yHAAoH,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAEC,EAAE,CAAC,KAAK,+BAA+B,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,iBAAiB,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,yFAAoF,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,2GAAwHE,EAAEC,EAAE,CAAC,KAAK,6BAA6B,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,qBAAqB,CAAC,CAAC,CAAC,EAAE,iCAA4B,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeO,EAAuBX,EAAIC,EAAS,CAAC,SAAS,CAAcC,EAAE,KAAK,CAAC,SAAS,2BAA2B,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,6GAA0HE,EAAEC,EAAE,CAAC,KAAK,0BAA0B,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,aAAa,CAAC,CAAC,CAAC,EAAE,QAAqBF,EAAEC,EAAE,CAAC,KAAK,8BAA8B,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,YAAY,CAAC,CAAC,CAAC,EAAE,IAAI,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,2aAA2a,CAAC,EAAeA,EAAE,MAAM,CAAC,UAAU,qBAAqB,MAAM,CAAC,iBAAiB,YAAY,YAAY,YAAY,OAAO,OAAO,MAAM,MAAM,EAAE,SAAsBA,EAAEU,EAAE,CAAC,oBAAoB,sEAAsE,SAASC,GAAgBX,EAAEY,EAAE,CAAC,GAAGD,EAAE,KAAK,MAAM,WAAW,GAAG,UAAU,iBAAiB,IAAI,6CAA6C,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeX,EAAE,KAAK,CAAC,SAAS,mBAAmB,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,qSAAkTE,EAAEC,EAAE,CAAC,KAAK,oDAAoD,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,WAAW,CAAC,CAAC,CAAC,EAAE,qMAAqM,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,uSAAuS,CAAC,EAAeA,EAAEC,EAAE,CAAC,KAAK,qHAAqH,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,UAAU,eAAe,kBAAkB,MAAM,SAAsBF,EAAE,MAAM,CAAC,IAAI,GAAG,UAAU,eAAe,OAAO,MAAM,IAAI,uEAAuE,OAAO,uQAAuQ,MAAM,CAAC,YAAY,YAAY,EAAE,MAAM,KAAK,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,mCAAmC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,yUAAyU,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sTAAiT,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,0BAA0B,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4KAA4K,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,mBAAmB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,qYAA2X,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,iYAAiY,CAAC,EAAeA,EAAE,MAAM,CAAC,IAAI,wBAAwB,UAAU,eAAe,OAAO,OAAO,IAAI,wEAAwE,OAAO,yWAAyW,MAAM,CAAC,YAAY,aAAa,EAAE,MAAM,MAAM,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,KAAK,CAAC,SAAS,qDAAqD,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,eAAe,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,wbAAmb,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,ydAAyd,CAAC,EAAeA,EAAE,MAAM,CAAC,IAAI,wCAAwC,UAAU,eAAe,OAAO,MAAM,IAAI,wEAAwE,OAAO,yWAAyW,MAAM,CAAC,YAAY,aAAa,EAAE,MAAM,MAAM,CAAC,EAAeA,EAAE,MAAM,CAAC,IAAI,wCAAwC,UAAU,eAAe,OAAO,MAAM,IAAI,wEAAwE,OAAO,yWAAyW,MAAM,CAAC,YAAY,YAAY,EAAE,MAAM,MAAM,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,WAAW,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2CAA2C,CAAC,EAAeF,EAAE,KAAK,CAAC,MAAM,CAAC,qBAAqB,OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,6BAA6B,MAAM,0BAA0B,OAAO,oBAAoB,OAAO,EAAE,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,OAAO,CAAC,EAAE,qLAAqL,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,kCAAkC,CAAC,EAAE,iEAAiE,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,oCAAoC,CAAC,EAAE,uMAAkM,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,wBAAwB,CAAC,EAAE,4GAAkG,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,0BAA0B,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,oHAAoH,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,yWAAyW,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,0PAA0P,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,wOAAwO,CAAC,EAAeA,EAAE,MAAM,CAAC,IAAI,qBAAqB,UAAU,eAAe,OAAO,MAAM,IAAI,wEAAwE,OAAO,yWAAyW,MAAM,CAAC,YAAY,aAAa,EAAE,MAAM,MAAM,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,KAAK,CAAC,SAAS,sCAAsC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,uBAAuB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,gWAAgW,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,qQAAgQ,CAAC,CAAC,CAAC,CAAC,EACr46Ha,EAAqB,CAAC,QAAU,CAAC,UAAY,CAAC,KAAO,WAAW,YAAc,CAAC,sBAAwB,GAAG,CAAC,EAAE,UAAY,CAAC,KAAO,WAAW,YAAc,CAAC,sBAAwB,GAAG,CAAC,EAAE,UAAY,CAAC,KAAO,WAAW,YAAc,CAAC,sBAAwB,GAAG,CAAC,EAAE,UAAY,CAAC,KAAO,WAAW,YAAc,CAAC,sBAAwB,GAAG,CAAC,EAAE,SAAW,CAAC,KAAO,WAAW,YAAc,CAAC,sBAAwB,GAAG,CAAC,EAAE,UAAY,CAAC,KAAO,WAAW,YAAc,CAAC,sBAAwB,GAAG,CAAC,EAAE,UAAY,CAAC,KAAO,WAAW,YAAc,CAAC,sBAAwB,GAAG,CAAC,EAAE,UAAY,CAAC,KAAO,WAAW,YAAc,CAAC,sBAAwB,GAAG,CAAC,EAAE,mBAAqB,CAAC,KAAO,UAAU,CAAC,CAAC",
  "names": ["richText", "u", "x", "p", "Link", "motion", "richText1", "richText2", "richText3", "richText4", "richText5", "richText6", "richText7", "ComponentPresetsConsumer", "t", "Youtube", "__FramerMetadata__"]
}
