{
  "version": 3,
  "sources": ["ssg:https://framerusercontent.com/modules/YVgynkVt96a0ES9Gk8yz/mFiv8ywq5DZfpSmtbIfZ/eo4RAmtig-15.js"],
  "sourcesContent": ["import{jsx as e,jsxs as n}from\"react/jsx-runtime\";import{Link as t}from\"framer\";import{motion as i}from\"framer-motion\";import*as a from\"react\";export const richText=/*#__PURE__*/n(a.Fragment,{children:[/*#__PURE__*/e(\"p\",{children:\"In recent years, Artificial Intelligence (AI) agents have appeared as groundbreaking tools capable of performing complex tasks, learning from data, and interacting with humans and the environment. From autonomous systems that optimize routine business tasks to virtual assistants that streamline customer interactions, AI agents are driving efficiency and delivering personalized experiences.\"}),/*#__PURE__*/e(\"p\",{children:\"What are AI agents, and why are they crucial to the future of AI? In this article, we'll dive into the concept of an AI agent, exploring its definition, types, applications, and potential future developments.\"}),/*#__PURE__*/e(\"h2\",{children:\"What are AI Agents?\"}),/*#__PURE__*/e(\"p\",{children:\"At its essence, an AI agent is a system that can carry out activities independently for a user or another application. Their understanding of the environment includes making choices and executing decisions to accomplish objectives. The central aspect of AI agents' functionality is their independence, meaning they don't require human assistance.\"}),/*#__PURE__*/e(\"p\",{children:\"The independence and flexibility of AI agents set them apart from conventional applications, which usually adhere to a predetermined set of rules without the capacity to learn or adjust. AI agents can also collaborate to create a team of agents. Several agents can work together to achieve complex objectives, with one often serving as the primary agent while the others act as subagents. Intelligent agents are also related to software agents, self-governing applications that perform user tasks. AI agents can also exist as either purely digital entities or as physical systems, also called embodied systems.\"}),/*#__PURE__*/e(\"h3\",{children:\"Software-Only AI Agents\"}),/*#__PURE__*/e(\"p\",{children:\"The majority of AI agents we deal with nowadays function solely through software. These systems are embedded within computer networks, executing tasks that do not possess any real-world form. They utilize sophisticated algorithms, machine learning, and natural language understanding (NLP) to engage with users and data, providing essential services without needing a physical presence.\"}),/*#__PURE__*/e(\"h3\",{children:\"Embodied AI Agents\"}),/*#__PURE__*/e(\"p\",{children:\"Beyond purely software-based systems, there are AI systems that possess a tangible form, often known as embodied AI or interface agents. These systems are seamlessly integrated into robots or other tangible entities, enabling them to engage with the physical environment. Some of these systems may only have a virtual form, which means they are represented visually. These systems often take the form of avatars in video games, virtual helpers in software programs, or characters within simulations.\"}),/*#__PURE__*/e(\"h2\",{children:\"AI Agents Examples\"}),/*#__PURE__*/e(\"p\",{children:\"AI-based systems are crucial in streamlining business operations by automating various tasks. They also aid in refining decision-making processes and boosting operational effectiveness. The examples below illustrate how AI agents can revolutionize our everyday lives and business operations by handling repetitive tasks more efficiently. By incorporating these AI solutions into their operations, businesses can achieve higher efficiency, lower expenses, and enhance overall performance.\"}),/*#__PURE__*/e(\"h3\",{children:\"Chatbots\"}),/*#__PURE__*/e(\"p\",{children:\"Chatbots can provide automated customer support through chat interfaces, answer frequently asked questions, assist with troubleshooting, and guide users through processes like account setup or product returns. They understand and respond to user queries in real-time, resolving common issues and often escalating complex issues to human agents when necessary.\"}),/*#__PURE__*/e(\"p\",{children:\"They use NLP to understand customer queries and provide relevant responses. For instance, a customer might ask about product details or order status, and the AI agent can retrieve and present the information quickly.\"}),/*#__PURE__*/e(\"p\",{children:\"AI agents can conduct surveys and collect customer feedback. They can analyze responses to identify trends and insights, helping businesses understand areas for improvement and enhance customer interactions.\"}),/*#__PURE__*/e(\"p\",{children:\"Chatbots integrate with company knowledge bases and customer databases to achieve the utmost accuracy and provide timely responses to user inquiries. For instance, customer service bots can be encountered on websites like banks or online retailers.\"}),/*#__PURE__*/e(\"h3\",{children:\"Personalized Recommendation Systems\"}),/*#__PURE__*/e(\"p\",{children:\"Companies deploy AI agents to analyze customer data to offer personalized recommendations and experiences. For example, in e-commerce, autonomous agents can suggest products based on a customer's browsing history and preferences, enhancing the shopping experience.\"}),/*#__PURE__*/e(\"p\",{children:\"Such systems analyze user data and behavior using machine learning algorithms to provide personalized recommendations, enhancing user engagement and customer satisfaction. Such AI agents can suggest movies, products, or music based on user preferences and behavior.\"}),/*#__PURE__*/e(\"h3\",{children:\"Robots\"}),/*#__PURE__*/n(\"p\",{children:[\"AI-powered robots, often called autonomous robots or intelligent robots, are integrated into various industries to perform tasks that require physical interaction with the environment. For example, \",/*#__PURE__*/e(t,{href:\"https://www.kuka.com/\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"KUKA Robots\"})}),\" use AI to perform welding, assembling, painting, and packaging tasks.\"]}),/*#__PURE__*/e(\"h3\",{children:\"Virtual Assistants\"}),/*#__PURE__*/e(\"p\",{children:\"Such assistants help with such tasks as setting reminders, sending messages, making calls, providing weather updates, and answering questions using natural language processing. They operate on smartphones, smart speakers, and other devices, responding to voice commands and integrating with various services. When paired with a smart home system, they can help control lights, heat, and electronic devices using a person's voice. The most popular virtual assistants are Siri (Apple), Google Assistant, and Alexa (Amazon).\"}),/*#__PURE__*/e(\"h2\",{children:\"Key Features of AI Agents\"}),/*#__PURE__*/e(\"h3\",{children:\"Autonomy\"}),/*#__PURE__*/e(\"p\",{children:\"AI agents operate independently without requiring constant human intervention. They can make decisions and perform tasks independently based on the data and algorithms they are programmed with.\"}),/*#__PURE__*/e(\"h3\",{children:\"Perception\"}),/*#__PURE__*/e(\"p\",{children:\"AI agents gather data from their environment through sensors or data intake mechanisms. This capability allows them to understand and interpret their surroundings or the digital context in which they operate.\"}),/*#__PURE__*/e(\"h3\",{children:\"Learning\"}),/*#__PURE__*/e(\"p\",{children:\"Some AI agents can improve their performance over time through learning mechanisms, including supervised learning, unsupervised learning, and reinforcement learning. By continuously updating their knowledge base, they can adapt to new data and situations.\"}),/*#__PURE__*/e(\"h3\",{children:\"Reasoning and Decision-Making\"}),/*#__PURE__*/e(\"p\",{children:\"AI agents use advanced algorithms and models, such as neural networks, decision trees, and rule-based systems, to analyze data, draw conclusions, and make informed decisions. This capability enables them to solve complex problems and optimize outcomes.\"}),/*#__PURE__*/e(\"h3\",{children:\"Action\"}),/*#__PURE__*/e(\"p\",{children:\"AI agents can perform actions based on their decisions. This includes controlling physical devices, executing commands, or interacting with users. Their actions are geared towards achieving specific goals or responding to user inputs.\"}),/*#__PURE__*/e(\"h3\",{children:\"Communication\"}),/*#__PURE__*/e(\"p\",{children:\"AI agents can communicate with users and other systems. They use natural language processing (NLP) to understand and generate human language. It allows them to interact with humans through text or speech.\"}),/*#__PURE__*/e(\"h2\",{children:\"Types of AI Agents\"}),/*#__PURE__*/e(\"p\",{children:\"AI agents can be categorized into several types based on functionality and complexity.\"}),/*#__PURE__*/e(\"h3\",{children:\"Simple Reflex Agents\"}),/*#__PURE__*/e(\"p\",{children:\"The most basic type of agent is a simple reflex agent. This type of agent bases its choice on the received information, ignoring all other knowledge. Most simple reflex agents use the condition-action rule. It means that they act according to the current situation. They base their actions on the current perception and do not consider previous events. Simple reflex agents have a library of if-then rules to act upon specific situations and use minimum reasoning.\"}),/*#__PURE__*/e(\"h3\",{children:\"Model-Based Reflex Agents\"}),/*#__PURE__*/e(\"p\",{children:\"Using a model-based agent is one of the most effective ways to work in a partially observable environment. It keeps track of the part of the environment it interacts with at a a certain period of time. This means that the agent maintains its inner state, which depends on the history of interactions, and understands unobservable aspects of the current state.\"}),/*#__PURE__*/e(\"p\",{children:\"Knowledge of two kinds is employed in the agent's programming to update the inner state information. Firstly, information about how the environment changes independently of the agent, and secondly, information about how the agent's actions affect the surrounding world. Based on the first type of information, a world model is constructed.\"}),/*#__PURE__*/e(\"h3\",{children:\"Goal-Based Agents\"}),/*#__PURE__*/e(\"p\",{children:\"Goal-Based Agent demands not only information about the environment or its inner state but also information about the goal, which will outline the target conditions. The agent's agenda may combine these types of information to select actions that will achieve the goal.\"}),/*#__PURE__*/e(\"h3\",{children:\"Utility-Based Agents\"}),/*#__PURE__*/e(\"p\",{children:\"The utility function in utility-based agents represents a number that reflects the agent's degree of satisfaction. This function helps in case several goals contradict each other. For example, the utility function can help find a compromise between quality and speed of work. Also, if there are several goals to be fulfilled by the agent but each goal does not seem to be fully likely to be successful, the utility function allows to estimate the probability of success, taking into account the priority of the goal.\"}),/*#__PURE__*/e(\"h3\",{children:\"Learning Agents\"}),/*#__PURE__*/e(\"p\",{children:\"All previously mentioned agents do not have learning capabilities. For an intelligent agent, this is one of the most essential characteristics, as it can make the agent more valuable than it was at the start.\"}),/*#__PURE__*/e(\"p\",{children:\"A learning agent has four components. The learning component makes improvements, while the productive component is responsible for choosing external actions. The learning component is entirely dependent on the productive component. The structure also includes a critic who acts as an evaluator of the agent's actions with respect to a standard performance. The critic is necessary in this structure because the agent does not understand whether or not its actions are successful.\"}),/*#__PURE__*/e(\"p\",{children:\"The learning component uses the information from the critic to evaluate the agent's actions and determine its future actions. The problem generator in the learning agent structure is intended to pick actions to generate an entirely new experience. It is designed to allow the system to experiment to find the best solutions.\"}),/*#__PURE__*/e(\"h3\",{children:\"Multi-agent Systems (MAS)\"}),/*#__PURE__*/n(\"p\",{children:[\"A multi-agent system (MAS) is a system that is composed of agents interacting with each other. These multiple agents can be software-based or robotic entities that work together to achieve a common goal or solve a problem that may be too complex for a single agent to handle. MAS can handle larger and more complicated issues by distributing tasks among agents.\",/*#__PURE__*/e(\"br\",{}),/*#__PURE__*/e(\"br\",{}),/*#__PURE__*/e(\"em\",{children:\"To see a real-world example of a production-ready multi-agent orchestrator, check out the \"}),/*#__PURE__*/e(t,{href:\"https://www.shakudo.io/agentflow-demo\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!0,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:/*#__PURE__*/e(\"em\",{children:\"Shakudo AgentFlow\"})})}),/*#__PURE__*/e(\"em\",{children:\", and see how agents can be deployed, coordinated, and scaled within enterprise environments.\"})]}),/*#__PURE__*/e(\"h3\",{children:\"Hierarchical Agents\"}),/*#__PURE__*/e(\"p\",{children:\"Hierarchical AI agents or Hierarchical agent-based models are a sophisticated extension of traditional ABMs that incorporate multiple levels of organization or scales of interaction within an environment. They are particularly useful for representing complex systems where interactions occur at different hierarchical levels.\"}),/*#__PURE__*/e(\"p\",{children:\"In a hierarchical agent-based model, agents communicate with each other and their environment via messages passing through input and output channels. The system is structured to allow for the processing of data at varying levels of abstraction. This structure supports self-adaptive behavior by enabling agents to acquire and utilize fine- and coarse-grained knowledge depending on their position within the hierarchy.\"}),/*#__PURE__*/e(\"h2\",{children:\"Components of AI Agents\"}),/*#__PURE__*/e(\"p\",{children:\"An AI agent comprises four fundamental components: the environment, sensors, actuators, and the decision-making mechanism.\"}),/*#__PURE__*/e(\"h3\",{children:\"Environment\"}),/*#__PURE__*/e(\"p\",{children:\"The environment is the external world in which an AI system functions. It includes all aspects outside the system that can impact its actions and that the system can affect. This can range from homes and offices to streets, factories, and other locations.\"}),/*#__PURE__*/e(\"p\",{children:\"The environment where an AI system operates can also exist in a virtual realm. Virtual environments are created by humans to be either artificial or simulated, offering spaces where AI agents can engage and carry out activities similar to those in the physical world. These simulated or digital settings can be found in online platforms, video games, virtual reality environments, and testing simulations.\"}),/*#__PURE__*/e(\"p\",{children:\"Virtual environments can be crafted to resemble real-life situations or to introduce entirely new challenges. They can be fully controlled and adjusted to examine specific situations. The environment can be either static or changing, with some aspects fully visible and others only partially so. It sets the stage and limitations for the AI system to operate and accomplish its objectives.\"}),/*#__PURE__*/e(\"h3\",{children:\"Sensors\"}),/*#__PURE__*/e(\"p\",{children:\"Sensors are devices that gather data from the physical environment, such as cameras, microphones, GPS, and temperature sensors. Virtual sensors, on the other hand, operate in digital environments, collecting data from web services, monitoring virtual events, and tracking activities within applications. These are the components that allow agents to study the environment.\"}),/*#__PURE__*/e(\"h3\",{children:\"Actuators\"}),/*#__PURE__*/e(\"p\",{children:\"Actuators are the components through which the agent takes actions affecting the environment. They execute decisions made by the agent. Actuators enable agent to perform actions that can change the state of the environment or achieve specific goals. The nature and design of actuators can vary significantly depending on whether the environment is physical or virtual.\"}),/*#__PURE__*/e(\"p\",{children:\"Mechanisms that perform actions in the physical environment include, for example, robotic arms, speakers, or displays. Virtual actuators, on the other hand, do not have a physical presence and represent software algorithms that perform actions in virtual environments. For instance, they can be used as tools for creating new documents or sending messages and notifications. Any tool that helps an AI agent act can be considered an actuator.\"}),/*#__PURE__*/e(\"h3\",{children:\"Decision-making mechanism\"}),/*#__PURE__*/e(\"p\",{children:\"The decision-making mechanism is the core component that processes information received from the sensors and decides on actions to be taken via the actuators. It interprets raw sensor data to form a coherent understanding of the environment.\"}),/*#__PURE__*/e(\"p\",{children:\"Information about the environment, goals, and learned experiences is stored on a knowledge base. Based on such experience and feedback it improves the agent's behavior over time. Decision-making mechanisms apply logic and algorithms to make decisions and plan actions. They use mechanisms from simple rule-based systems to complex neural networks.\"}),/*#__PURE__*/e(\"h2\",{children:\"How does an AI Agent work?\"}),/*#__PURE__*/e(\"p\",{children:\"An AI agent works by interacting with its environment through a cycle of perception, decision-making, and action. Here's a breakdown of the process:\"}),/*#__PURE__*/e(\"h3\",{children:\"Perception\"}),/*#__PURE__*/e(\"p\",{children:\"The process begins with data collection through sensors or data intake mechanisms. In the absence of such data, the agent cannot take any further actions. of the agent. The AI agent collects data from its environment. For instance, in a physical environment, sensors like cameras and GPS might gather visual and positional data. Once the raw data is collected, it undergoes preprocessing to make it usable for further analysis.\"}),/*#__PURE__*/e(\"h3\",{children:\"Processing and Analysis\"}),/*#__PURE__*/e(\"p\",{children:\"Once data is acquired, it must be processed and analyzed to derive meaningful insights. This involves several steps, including noise reduction, normalization, and feature extraction, to convert raw data into a usable format. The agent then uses machine learning and artificial intelligence algorithms to examine and draw insights from the data.\"}),/*#__PURE__*/e(\"h3\",{children:\"Decision Making\"}),/*#__PURE__*/e(\"p\",{children:\"After the analysis, the AI agent engages in a complex decision-making process. This can involve sophisticated algorithms, rule-based logic, or predictive modeling. For instance, an autonomous vehicle might leverage decision trees and reinforcement learning to plan the safest route, navigating obstacles to reach its destination. Similarly, an AI assistant could employ natural language processing and machine learning to comprehend user requests and determine the most suitable response.\"}),/*#__PURE__*/e(\"h3\",{children:\"Action Execution\"}),/*#__PURE__*/e(\"p\",{children:\"After deciding on a course of action, the agent implements that decision. This may entail updating a database, transmitting a command to another system, or manipulating a physical device. For instance, in a robotic system, the agent's decisions prompt actuators to execute movements like navigating a path or grasping an object. Similarly, in a virtual environment, the agent's action could manifest as a database update or a user notification.\"}),/*#__PURE__*/e(\"h2\",{children:\"Workflow of an AI Agent\"}),/*#__PURE__*/e(\"p\",{children:\"The typical sequence of actions for an AI agent's workflow involves the following steps:\"}),/*#__PURE__*/n(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"Receive Data\"}),\". The agent obtains new information from either the environment or a user. For example, a sensor might detect a change in the physical environment, or a user might input a query.\"]}),/*#__PURE__*/n(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"Analyze Data\"}),\". The agent contextualizes and interprets the information using AI models. This step involves preprocessing and running the data through various machine-learning algorithms to extract meaningful insights.\"]}),/*#__PURE__*/n(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"Decide on Action\"}),\". The agent assesses the situation and chooses the optimal path forward. This may entail choosing the correct response, devising a series of actions, or forecasting future events.\"]}),/*#__PURE__*/n(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"Act\"}),\". The decision made by the agent is put into effect through an act, which can manifest as either a physical action, like the movement of a robotic arm, or a virtual action, like the sending of a notification or the updating of a database.\"]}),/*#__PURE__*/e(\"h2\",{children:\"The future of AI agents\"}),/*#__PURE__*/e(\"p\",{children:\"advance with molikely re sophisticated cognitive capabilities, enabling them to handle increasingly complex tasks and make decisions more accurately. It is plausible that AI agents will further evolve to engage in interactions with users in more organic and intuitive way. Improved natural language processing (NLP) could help them achieve this goal. With improved NLP solutions, AI agents will better understand context and nuances in human communication. AI agents will likely combine text, images, audio, and video information to better understand situations.\"}),/*#__PURE__*/e(\"p\",{children:\"In the future, the development of AI agents will probably incorporate cutting-edge algorithms that are currently not being developed. As they progress, these algorithms will likely play a significant role in revolutionary technological advancements, influencing the way AI agents engage with and influence our society.\"})]});export const richText1=/*#__PURE__*/n(a.Fragment,{children:[/*#__PURE__*/e(\"p\",{children:\"One of the types of AI is AGI or Artificial General Intelligence. It is a theoretical idea that computer science sees as one of the highest levels of AI development. Further, we will compare AGI with a few different types of artificial intelligence.\"}),/*#__PURE__*/e(\"h2\",{children:\"What is AGI?\"}),/*#__PURE__*/e(\"p\",{children:\"Artificial General Intelligence (AGI) is also called strong AI. It performs tasks as it has a brain of its own. AGI will possess the cognitive capabilities of a human and eventually even surpass them. It will naturally understand and generate human language, making it capable of effective communication with humans.\"}),/*#__PURE__*/e(\"p\",{children:\"Although only a concept, general artificial intelligence is expected to perform any intellectual task a human can. It will not be limited to specific domains but will generalize its knowledge and skills across various fields.\"}),/*#__PURE__*/e(\"p\",{children:\"AGI will be purposed to learn new ideas from its personal experiences. It would require no regular human supervision, adapt to novel situations, and resolve problems that it hasn\u2019t encountered before. AGI\u2019s enormous natural language processing abilities will assist it in participating in complicated reasoning and interacting with humans. It will apprehend context, draw conclusions, and apply logic to make decisions. Such qualities are inherent only in human beings.\"}),/*#__PURE__*/e(\"p\",{children:\"AGI, designed to replicate human-level intelligence, may be able to recognize, understand, and respond to feelings. This could involve perceiving emotional cues from language, facial expressions, and other non-verbal cues. If equipped with a robotic or some other form of body, AGI could interact with physical environments using tactile sensors. This would permit AGI systems to manipulate objects and interact with physical objects or surfaces around them.\"}),/*#__PURE__*/e(\"p\",{children:\"It is also important to note that AGI remains largely theoretical at present. The journey to achieving AGI is still an open challenge, with many unresolved technical and ethical questions. While some specialists speculate that AGI could be a reality by 2030, these are only predictions at this stage.\"}),/*#__PURE__*/e(\"h2\",{children:\"Artificial General Intelligence vs Artificial Intelligence\"}),/*#__PURE__*/e(\"p\",{children:\"AI is a broad field of computer science dedicated to creating systems that can perform tasks with the help of human intelligence. AGI, on the other hand, is a subset of AI aiming to replicate human cognitive abilities. Such systems are self-conscious, just like human beings, and can employ their artificial brains like people can use theirs. AI is a broad term that includes all types of artificial intelligence, from simple task-specific systems like narrow or weak AI to theoretical general AI and super AI.\"}),/*#__PURE__*/e(\"h2\",{children:\"Artificial General Intelligence vs Generative AI\"}),/*#__PURE__*/e(\"p\",{children:\"Generative AI is a type of AI that can generate new content, such as text, images, music, or other data, resembling human-created content. It often uses models like Generative Adversarial Networks (GANs) and Transformer models. Generative AI is transforming industries today with its specialized capabilities.\"}),/*#__PURE__*/e(\"p\",{children:\"Generative AI is a currently implemented subset of AI that focuses on creating new content based on learned patterns from specific datasets. It can be called narrow AI because it can only imitate human capabilities and perform simple, specific tasks. However, by simple, we mean more straightforward than the ones performed by AGI, which basically has to be a machine with human cognition.\"}),/*#__PURE__*/e(\"p\",{children:\"When generative AI generates new data, it doesn\u2019t realize what it\u2019s actually doing because it lacks proper understanding or reasoning abilities and generates outputs based on statistical patterns from training data. In contrast, general AI systems would possess genuine understanding and reasoning abilities, allowing them to realize their actions.\"}),/*#__PURE__*/e(\"h2\",{children:\"Artificial General Intelligence vs Artificial Superintelligence\"}),/*#__PURE__*/e(\"p\",{children:\"Artificial General Intelligence (AI) and Artificial Superintelligence (ASI), like general AI vs. narrow AI, represent different levels of advancement in the field of AI. ASI refers to a level of intelligence that surpasses the smartest and most gifted human minds in practically every field.\"}),/*#__PURE__*/e(\"p\",{children:\"While ASI can potentially revolutionize society, it also carries significant ethical and existential risks. It could lead to a misalignment with human values and goals and the creation of entities with motivations and capabilities beyond human control. It's crucial to understand that ASI is purely theoretical and remains a concept explored in speculative discussions about the future of AI. There are no existing or soon-to-be-developed implementations of ASI.\"}),/*#__PURE__*/e(\"p\",{children:\"At first glance, the concepts of ASI and AGI seem very similar, but they have key differences. AGI aims to replicate human-level intelligence, capable of performing any intellectual task that a human can. In contrast, ASI surpasses human intelligence in all respects, achieving superhuman performance in every domain.\"}),/*#__PURE__*/e(\"p\",{children:\"AGI learns and adapts autonomously, similar to human cognitive processes. ASI, on the other hand, not only learns and adapts but also self-improves at an exponential rate, which can potentially lead to rapid advancements beyond human control.\"}),/*#__PURE__*/e(\"p\",{children:\"In addition to the above, artificial superintelligence could develop emotional understanding, just like AGI. However, being superintelligent, ASI might possess a deep understanding of human emotions far beyond what humans can comprehend. It could possibly analyze emotional patterns on a global scale and foresee emotional responses to various events.\"}),/*#__PURE__*/e(\"p\",{children:\"ASI could have advanced sensory perception capabilities surpassing human senses in range. This could enable ASI to gather and analyze vast amounts of sensory data from the environment with unprecedented precision.\"}),/*#__PURE__*/e(\"p\",{children:\"All this information about ASI and even AGI can be considered fantasy for now, as there is no evidence of any AI system to develop 'self-awareness '. 'Self-awareness' in the context of AI refers to the ability of a system to understand its own existence and recognize its own actions and thoughts. While modern AI technologies surpass human beings in terms of information processing speed, they do not possess 'self-awareness' like humans do. Understanding the current limitations and future possibilities of AI is crucial for a comprehensive view of the topic.\"}),/*#__PURE__*/e(\"h2\",{children:\"Artificial General Intelligence vs Artificial Narrow Intelligence\"}),/*#__PURE__*/e(\"p\",{children:\"Artificial Narrow Intelligence (ANI), also known as Weak AI, is an AI system trained for a specific or narrow range of tasks. These systems excel in performing these tasks using patterns learned from training data, yet their abilities don\u2019t allow them to generalize their knowledge to perform tasks outside their specific domain. For instance, ANI is used in various applications today, such as voice assistants, recommendation systems, and image recognition. These examples help illustrate the current practical uses of AI.\"}),/*#__PURE__*/e(\"p\",{children:\"Right now, narrow AI technology heavily relies on deep learning, particularly neural networks. Thanks to advancements in hardware and software optimizations, neural networks can scale to handle large datasets and complex problems efficiently.\"}),/*#__PURE__*/e(\"p\",{children:\"Deep learning techniques have revolutionized computer vision tasks such as image classification, object detection, image segmentation, and facial recognition. In Natural Language Processing neural networks, such as recurrent neural networks (RNNs) and transformer models, have significantly advanced NLP tasks like text classification, machine translation, sentiment analysis, and language generation. Deep learning models have also improved speech recognition accuracy and enabled more natural-sounding speech synthesis.\"}),/*#__PURE__*/e(\"p\",{children:\"Machine intelligence in narrow AI has little resemblance, especially in generative AI, to artificial general intelligence (AGI). Both Generative AI and AGI involve the creation of new content. Generative AI focuses on generating content based on learned patterns from specific textual, visual, or audio datasets, and AGI aims to generate content consciously with creativity comparable to human intelligence.\"}),/*#__PURE__*/e(\"p\",{children:\"Generative AI models learn patterns from data to produce outputs that mimic the training data's style, structure, and characteristics. AGI, in its theoretical form, would also be capable of recognizing patterns and generating outputs across a wide range of domains. However, generative AI, as a representative of narrow AI, still lacks true understanding and reasoning capabilities. Strong AI, on the other hand, is characterized by genuine comprehension of various tasks and contexts.\"}),/*#__PURE__*/e(\"h2\",{children:\"Conclusion\"}),/*#__PURE__*/e(\"p\",{children:\"AGI is a computer science concept in its early stages of development. Narrow AI systems excel in specific tasks or domains but lack the general intelligence of AGI or the superhuman capabilities of ASI.\"}),/*#__PURE__*/e(\"p\",{children:\"Generative AI, a subset of Narrow AI, showcases remarkable advancements in content generation and creative tasks. Techniques like deep learning help it mimic human creative outputs. However, it does not possess human cognitive abilities. Therefore, narrow AI and generative AI, in particular, only imitate human behavior.\"}),/*#__PURE__*/e(\"p\",{children:\"Artificial General Intelligence and Artificial Superintelligence represent theoretical goals for achieving human-level and superhuman intelligence. Basically, AGI is a human being in a machine form, while ASI is a superhuman. ASI remains speculative and confined mainly to discussions. According to the leading AI specialists, AGI will probably emerge sometime around 2030.\"}),/*#__PURE__*/e(\"p\",{children:\"The development of AI technologies can improve our lives in many ways. However, speculation about the long-term consequences of advanced AI, such as artificial superintelligence or general artificial intelligence, raises many concerns. The emergence of ASI can mean the loss of human control, which can lead to the possibility of catastrophic outcomes.\"}),/*#__PURE__*/e(\"p\",{children:\"Regardless, artificial intelligence (AI) development represents a turning point in human history. As we continue to explore AI's boundaries, it is important to maintain a balance between innovation and ethical considerations that will help create a future in which AI serves as a force for positive change.\"})]});export const richText2=/*#__PURE__*/n(a.Fragment,{children:[/*#__PURE__*/e(\"p\",{children:\"As generative AI continues to evolve and be deployed in various industries, governing these robust systems development presents a particular set of concerns. Issues such as accuracy, privacy, security, and transparency are becoming increasingly urgent, highlighting the need to develop particular operating policies.\"}),/*#__PURE__*/e(\"p\",{children:\"LLMOps, or large language model operations, is an emerging field dealing with these complex challenges. In this article, we will explore the fundamental principles of LLMOps, its critical role in the lifecycle of AI models, and how it intersects with traditional MLOps practices to provide reliable and secure AI deployment.\"}),/*#__PURE__*/e(\"h2\",{children:\"What is LLMOps?\"}),/*#__PURE__*/e(\"p\",{children:\"LLMOps, an abbreviation for large language model operations, encompasses the methods, strategies, and instruments required to effectively manage and maintain large language models (LLMs). It is a branch of Machine Learning Ops (MLOps).\"}),/*#__PURE__*/e(\"p\",{children:\"Any ML system needs to be managed, especially its training and deployment process. MLOps is specifically intended to bridge the gap in organization and technology between all participants in developing, deploying, and operating machine learning systems.\"}),/*#__PURE__*/e(\"p\",{children:\"Large language models are becoming larger and more complex, making it harder to maintain and manually manage them. This results in higher costs, decreased productivity, and reduced model performance. LLMOps, a type of MLOps for observing the LLM lifecycle from model training to maintenance using innovative tools and methodologies, can help avoid this.\"}),/*#__PURE__*/e(\"p\",{children:\"Modern large language models are rarely taught entirely from scratch and are generally used as a service. This means that LLM producers, such as OpenAI, Microsoft, Google, etc., offer an LLM API deployed on their infrastructure as a service. Therefore, LLMOps pay a lot of attention to fine-tuning large language models, also called foundation models.\"}),/*#__PURE__*/e(\"p\",{children:\"More specifically, LLMOps addresses the operational capabilities and infrastructure essential for fine-tuning the existing foundation model and deploying this enhanced model. Since training huge language models requires enormous amounts of data and time to perform computations, it is vital to have an infrastructure that enables parallel use of GPUs and processing of huge datasets.\"}),/*#__PURE__*/e(\"h2\",{children:\"LLMOps vs. MLOps\"}),/*#__PURE__*/e(\"p\",{children:\"LLMOps principles are largely the same as MLOps; however, large foundation language models require new methods, guidelines, and tools. When working with large language models, the machine learning (ML) workflows and requirements undergo significant changes due to their scale, complexity, and unique demands.\"}),/*#__PURE__*/e(\"p\",{children:\"MLOps deals with general ML model deployment, maintenance, and governance. LLMOps focuses specifically on the unique challenges posed by large language models. This includes handling the substantial computational resources required, ensuring data privacy, and maintaining transparency in their decision-making processes. LLMops is MLOps for large language models.\"}),/*#__PURE__*/e(\"p\",{children:\"LLMs have billions of parameters, significantly more than traditional ML models. This complexity requires advanced LLMOps techniques to manage the computational load. Compared to traditional ML models that excel in discriminative tasks, generative LLMs take much longer to train due to their size and the volume of data they process.\"}),/*#__PURE__*/e(\"p\",{children:\"Training can span weeks or months, requiring careful planning and resource allocation. Training of LLMs necessitates high-performance computing resources, often involving clusters of GPUs or TPUs and extensive memory and storage capabilities. The classical machine learning lifecycle typically requires less intensive resources.\"}),/*#__PURE__*/e(\"p\",{children:\"The creation and training of LLM require substantial investments and computational power. Therefore, only major research teams and IT companies develop such foundation models.\"}),/*#__PURE__*/e(\"p\",{children:\"LLMs fine-tuning on specific tasks or domains is a common practice to improve pre-trained model performance. This process can also be resource-intensive and must be integrated into the workflow for continuous improvement and adaptation.\"}),/*#__PURE__*/e(\"h2\",{children:\"Stages of LLMOps\"}),/*#__PURE__*/e(\"h3\",{children:\"Development of LLM\"}),/*#__PURE__*/e(\"p\",{children:\"In the initial stage of large language model development, the tools and infrastructure are selected and prepared to support the LLMOps workflow. First, model development involves choosing a suitable pre-trained foundation model and establishing environments for experimentation and model testing.\"}),/*#__PURE__*/e(\"p\",{children:\"A more favorable solution at the initial stage of LLM creation is to transform an existing pre-trained model created earlier. This is a more cost-effective solution primarily regarding resources, as fine-tuning requires fewer resources than pre-training a new large language model from scratch.\"}),/*#__PURE__*/e(\"p\",{children:\"ML engineers evaluate their objectives and potential to choose between proprietary models and open-source models. Open-source models are available to the public and are low-cost, more transparent, customizable, and flexible. However, compared to proprietary models, they can be less powerful and productive.\"}),/*#__PURE__*/e(\"p\",{children:\"Closed-source or proprietary models are highly performant, large-scale models. However, they cannot be customized at will as their source code is unavailable to the public. In addition, they tend to be less cost-effective.\"}),/*#__PURE__*/e(\"h3\",{children:\"Data Management and Preparation\"}),/*#__PURE__*/e(\"p\",{children:\"The development process continues with exploratory data analysis (EDA), which involves setting up pipelines for data collection and processing. During that phase, the required datasets are gathered, cleaned, and examined to be used for model training and fine-tuning.\"}),/*#__PURE__*/e(\"p\",{children:\"Such data preparation sets the foundation for the effective training and deployment of large language models in the LLMOps lifecycle. During this process, data scientists should ensure the training data is high quality, unbiased, and representative of the desired application.\"}),/*#__PURE__*/e(\"p\",{children:\"The first step in data preparation is to collect data from different sources. Such data can come from structured databases, unstructured text documents, web resources, or public datasets. The goal is to collect a complete and diverse dataset that covers all scenarios that the model will encounter during its operation.\"}),/*#__PURE__*/e(\"p\",{children:\"Once the data is collected, it needs to be cleaned to remove any inconsistencies, duplicates, and irrelevant information. Data preprocessing transforms the cleaned data into a format suitable for training the model. This, for example, includes tokenization or breaking down text into tokens (words, subwords, or characters) that the model can process.\"}),/*#__PURE__*/e(\"p\",{children:\"Unlike pre-training, which is usually unsupervised, supervised learning techniques such as fine-tuning require high-quality data labels to be created for a specific task. Organizations can opt to have experts label data internally, though this process can be time-consuming and costly. Data partners like Toloka allow organizations to outsource labeling tasks to a large pool of AI Tutors, including experts in various domains like coding, law, or engineering. This approach is cost-effective and scalable while also offering reliable quality control mechanisms.\"}),/*#__PURE__*/e(\"h3\",{children:\"Training\"}),/*#__PURE__*/e(\"p\",{children:\"Next is the training stage, where the selected model is trained using large datasets to learn patterns and generate relevant outputs based on the input data. The model training requires powerful computational resources, such as GPUs or cloud-based solutions.\"}),/*#__PURE__*/e(\"p\",{children:\"Training is a pivotal stage in the LLMOps lifecycle, where the large language model learns to generate meaningful outputs. This intricate process involves multiple steps to guarantee that the model is trained effectively, efficiently, and ethically.\"}),/*#__PURE__*/e(\"p\",{children:\"Future models need to be configured through hyperparameter tuning before training. Hyperparameter tuning is the process of optimizing the parameters that govern the training process. It includes deciding on the size of the LLM or the number of layers in the model, its learning rate, and batch size. The number of complete passes through the training dataset, also known as epochs, is also determined during this step.\"}),/*#__PURE__*/e(\"h3\",{children:\"Fine-tuning\"}),/*#__PURE__*/e(\"p\",{children:\"Fine-tuning is a phase in the life cycle of LLMops where a pre-trained model is adapted to a specific task using high-quality labeled data. Unlike the extensive data needed for the initial pre-training phase, the amount of data required for fine-tuning is considerably smaller.\"}),/*#__PURE__*/e(\"p\",{children:\"It focuses on refining the model's parameters using task-specific labeled data. This process helps organizations optimize the model's accuracy and ensure that its capabilities are fine-tuned to deliver useful insights and solutions in real-world applications.\"}),/*#__PURE__*/e(\"p\",{children:\"Prompt engineering is an iterative process of refining prompts based on task requirements and model performance. It can be employed to direct the model to do something that was not the intended purpose of the previous fine-tuning. It guarantees that the prompts are properly optimized to give the model an insight into the query purpose and context. However, it doesn\u2019t work long term since prompt engineering doesn't influence the inner architecture and weights of the model.\"}),/*#__PURE__*/e(\"h3\",{children:\"Evaluation\"}),/*#__PURE__*/e(\"p\",{children:\"Accuracy metrics such as precision, recall, and F1 score measure how well the model generates correct outputs compared to ground truth data. Latency evaluation determines how quickly the LLM processes input and generates responses, which is crucial for real-time LLM-based applications. LLM's throughput metrics measure how many queries it can handle or how many outputs it can generate within a specific time frame.\"}),/*#__PURE__*/e(\"p\",{children:\"If a large language model receives a poor model review, it indicates significant challenges in its performance and functionality. This could manifest as inaccurate predictions, low scores on key metrics like precision and recall, or negative feedback from users regarding the relevance and quality of its outputs. Ethical concerns may also arise if the model produces biased or inappropriate content.\"}),/*#__PURE__*/e(\"p\",{children:\"Addressing these shortcomings requires improving the model's training data quality, fine-tuning its parameters, and implementing ethical AI practices to mitigate biases and ensure fairness. Continuous feedback and iterative improvement can help refine the LLM's capabilities throughout the LLMOps process.\"}),/*#__PURE__*/e(\"h3\",{children:\"Deployment\"}),/*#__PURE__*/e(\"p\",{children:\"Before deployment, organizations need to configure the environment where the LLM will operate. ML specialists usually choose between cloud-based solutions (e.g., AWS, Google Cloud), on-premises servers, or hybrid solutions based on scalability, performance requirements, and cost considerations.\"}),/*#__PURE__*/e(\"p\",{children:\"After a model review and confirmation that it meets the necessary performance, accuracy, and ethics standards, it can be deployed. Model inference and model serving are vital stages of deploying large language models into production environments.\"}),/*#__PURE__*/e(\"p\",{children:\"Inference\"}),/*#__PURE__*/e(\"p\",{children:\"Model inference refers to using a trained model to make predictions or generate outputs based on new input data. For LLMs, this typically involves generating text or answering questions. Model serving is the process of making a trained model available to users or applications so that it can perform inference in real-time or on demand.\"}),/*#__PURE__*/e(\"h3\",{children:\"LLM Monitoring and Maintenance\"}),/*#__PURE__*/e(\"p\",{children:\"Deploying a large language model into production is just the start of the journey. Model monitoring and maintenance is the final and ongoing stage of the LLMOps lifecycle, which means that it will occur throughout the LLM's lifetime.\"}),/*#__PURE__*/e(\"p\",{children:\"Effective large language model monitoring includes various model management techniques to ensure the model remains robust, reliable, up-to-date, and relevant over time. Critical aspects of LLM monitoring and maintenance include updating the model, bug fixing, enhancing performance, and managing versions of the model.\"}),/*#__PURE__*/e(\"p\",{children:\"The stage involves continuously tracking and analyzing an LLM's performance and behavior in production. The goal is to ensure that the model operates as expected according to desired standards and to identify and rectify any issues that arise. Continuous monitoring of the model\u2019s outputs for errors and issues involves identifying the root causes of these errors and updating the model or its training data to fix them.\"}),/*#__PURE__*/e(\"p\",{children:\"Regularly retraining the LLM with new data helps keep it relevant, which allows it to adapt to new information and changing contexts. Fine-tuning the model on specific tasks or domains can further enhance its performance.\"}),/*#__PURE__*/e(\"p\",{children:\"For the model to remain reliable and accurate, it is necessary to identify and troubleshoot the problems that may arise during the model's operation. Such continuous improvement is vital for effective bug fixing. This includes keeping detailed documentation of the detected bugs and the implemented fixes, which helps build a knowledge base for future use.\"}),/*#__PURE__*/e(\"p\",{children:\"Human feedback is a powerful tool for improving the performance and reliability of an LLM. It includes gathering insights from users and experts who interact with the model and providing feedback on its outputs. This can be done through ratings, comments, or tagging specific issues. Establishing continuous feedback from users and stakeholders guarantees that any new issues are detected and resolved on time, and regular audits and performance reviews will identify potential problems before they become critical.\"}),/*#__PURE__*/e(\"h2\",{children:\"Benefits of LLMOps\"}),/*#__PURE__*/e(\"h3\",{children:\"Cost-effectiveness\"}),/*#__PURE__*/e(\"p\",{children:\"LLMOps enhances team collaboration by providing a unified platform where data scientists, ML engineers, and stakeholders can collaborate swiftly. This streamlined communication fosters quicker insights sharing, accelerates model development, and speeds up deployment, resulting in faster project delivery.\"}),/*#__PURE__*/e(\"p\",{children:\"LLMOps contribute to cost-effective operations with optimized resource use and minimized unnecessary costs. Cloud-based deployment and automated workflows decrease infrastructure costs associated with model training and deployment. In addition, the efficient use of computational resources and data management practices within LLMOps reduces operational costs and maximizes the return on investment in LLM technologies.\"}),/*#__PURE__*/e(\"p\",{children:\"LLMOps play a crucial role in improving model performance through continuous monitoring and updating, ensuring that models operate at peak performance levels. This proactive approach not only maintains but also enhances the effectiveness of models over time.\"}),/*#__PURE__*/e(\"p\",{children:\"In essence, LLMOps optimize the entire model development and deployment lifecycle by incorporating quality data, continuous monitoring, and streamlined processes to drive improved performance and faster creation of advanced language models.\"}),/*#__PURE__*/e(\"h3\",{children:\"Scalability\"}),/*#__PURE__*/e(\"p\",{children:\"Scalability is one of the key benefits of LLMOps. It simplifies the management and oversight of data when thousands of models need continuous integration, delivery, and deployment (CI/CD). Effective model monitoring within a CI/CD framework simplifies scalability.\"}),/*#__PURE__*/e(\"p\",{children:\"LLM pipelines foster collaboration, reduce conflicts, and speed up the process of LLM preparation. Their reproducibility enhances cooperation among data teams and accelerates release cycles. Moreover, LLMOps efficiently manage fluctuating workloads, handling large volumes of concurrent requests.\"}),/*#__PURE__*/e(\"h3\",{children:\"Risk Reduction\"}),/*#__PURE__*/e(\"p\",{children:\"Implementing advanced LLMOps can significantly enhance security and privacy measures within organizations. LLMOps help mitigate vulnerabilities and unauthorized access attempts by prioritizing protecting sensitive information. Such a forward-thinking approach protects critical data and creates a secure environment for handling sensitive information throughout its lifecycle.\"}),/*#__PURE__*/e(\"h2\",{children:\"Meaning of LLMOps\"}),/*#__PURE__*/e(\"p\",{children:\"LLMOps, or Large Language Model Operations, encompasses the comprehensive lifecycle management of large language models, from their initial deployment to ongoing maintenance. Through best practices from software engineering and data science, LLMOps ensure efficient deployment, continuous monitoring, and effective maintenance of LLMs. Ultimately, the integration of LLMOps practices empowers organizations to realize the full potential of their LLMs and maintain high standards of reliability, accuracy, and performance.\"})]});export const richText3=/*#__PURE__*/n(a.Fragment,{children:[/*#__PURE__*/e(\"p\",{children:\"OpenAI's groundbreaking technology, Q* (pronounced \u201CQ star\u201D), has sparked widespread public interest and ignited intense discussions due to its unique and innovative approach to artificial intelligence development. This algorithm, potentially a game-changer in pursuing artificial general intelligence (AGI), which is speculated to surpass human intelligence, presents a blend of thrilling opportunities and serious challenges. Let's explore what Q* is, how it operates, and its potential implications.\"}),/*#__PURE__*/e(\"h2\",{children:\"What\u2019s project Q*?\"}),/*#__PURE__*/e(\"p\",{children:\"Q* is a pioneering initiative by OpenAI to propel artificial intelligence forward through novel methods and technologies. The Q* project, potentially built on a new language model of the same name, holds the promise of a seismic shift in the field of generative AI, bringing us closer to artificial general intelligence (AGI) and even artificial superintelligence (ASI).\"}),/*#__PURE__*/e(\"p\",{children:\"Given the lack of official documentation on Q*, all discussions and references to it are based on general knowledge of artificial intelligence and news articles, such as a report from the Reuters news agency. While the Q* model remains a mystery now, media reports hint at its ability to tackle grade school math, a significant stride towards artificial general intelligence. However, it's crucial to underscore the ethical considerations and potential impact on humanity that accompany such advancements.\"}),/*#__PURE__*/e(\"p\",{children:\"According to Reuters, the Q* model demonstrates the ability to solve basic math problems at the level of elementary school students. This is an important step, as the successful solution of first-grade school math problems requires not only prediction skills, which modern AI systems are renowned for, but also the ability to reason, analyze, and make decisions.\"}),/*#__PURE__*/e(\"h2\",{children:\"Why is performing math at the level of grade school students important?\"}),/*#__PURE__*/e(\"p\",{children:\"What is the big deal about an AI system being able to solve simple school problems? It's a crucial breakthrough because if an AI can solve math problems, even basic-level ones, it can be trained to learn more complex concepts in the future. Memorized basic math facts provide a solid foundation for learning more advanced mathematical skills for humans.\"}),/*#__PURE__*/e(\"p\",{children:\"Q*technology's ability to solve math problems is a testament to AI's growing capacity for human-like cognitive activity. This advancement is a significant step towards creating an AGI that can perform diverse tasks at a level comparable to human abilities. Current large language models (LLMs) excel at language-related tasks such as translations, summaries, and generating coherent text. However, they face significant challenges when it comes to math, logic, and strategy tasks. They base their predictions only on training data, whereas true AGI possesses general reasoning abilities that help it solve more complicated problems and demonstrate human-like abilities.\"}),/*#__PURE__*/e(\"p\",{children:\"However, large language models produce results close to reasoning if guided in the right direction. Generally, some LLMs are not naturally good at performing tasks that require thinking step by step, often called System 2 tasks. Researchers pointed out that Chain of Thought (CoT) prompting can significantly improve their task performance by guiding them through the reasoning process.\"}),/*#__PURE__*/e(\"p\",{children:\"Although LLMs can generate text that appears logical and coherent, they do not tend to grasp logical sequences or perform multi-step computations reliably. Chain of thought prompts provide LLMs with examples illustrating the step-by-step reasoning required to solve a problem. By showing the reasoning steps, CoT prompting helps the LLM understand and apply the logical sequence needed to arrive at the correct answer.\"}),/*#__PURE__*/e(\"p\",{children:\"Still, CoT only helps LLMs arrive at the correct final answer step-by-step. LLMs are not reasoning in the way humans do. Instead, they are simulating reasoning through learned patterns and statistical correlations. In that sense, Q star is going to be a whole new system that actually realizes what it is doing.\"}),/*#__PURE__*/e(\"h2\",{children:\"How does Q* work?\"}),/*#__PURE__*/e(\"p\",{children:\"According to experts, the key feature of Q* technology may lie in its utilization of Q-learning, a type of reinforcement learning algorithm. This sets it apart from more traditional rule-based AI approaches. Some researchers also speculate that it incorporates the use of a search algorithm called A*.\"}),/*#__PURE__*/e(\"h3\",{children:\"What is Q-learning?\"}),/*#__PURE__*/e(\"p\",{children:\"Q-learning is a type of algorithm that helps an agent learn how to act optimally in an environment by interacting with it. The main goal of Q-learning is for the agent to learn the best action to take in each state to maximize its total reward over time. The agent does this by learning a Q-value, which estimates the quality or usefulness of taking a certain action in a certain state.\"}),/*#__PURE__*/e(\"p\",{children:\"The agent starts with no knowledge about the environment. As it interacts with the environment, it chooses actions. Sometimes, it tries new actions to discover their effects and other times, it chooses the best-known actions based on the current Q-values.\"}),/*#__PURE__*/e(\"p\",{children:\"Each time the agent takes an action, it receives a reward and transitions to a new state. It then updates the Q-value for the previous state-action pair using the given reward and the maximum Q-value of the next state. This update helps the agent learn which actions are better in the long run.\"}),/*#__PURE__*/e(\"p\",{children:\"Even though methods such as Q-learning and reinforcement learning have been around for several decades, OpenAI is likely to implement and represent them with modern adaptations and advancements. The idea behind the Q* may represent a combination of various approaches and algorithms, including Q-learning and RL, to create a scalable implementation capable of achieving impressive results. This innovative approach will allow the use of these techniques at a large scale, providing machines with the ability to deliver effective solutions.\"}),/*#__PURE__*/e(\"h3\",{children:\"What is the A* algorithm?\"}),/*#__PURE__*/e(\"p\",{children:\"The main goal of the A* search algorithm is to find the shortest path from a start node to a goal node in a graph or from a starting point to a destination in a space like a map or grid.\"}),/*#__PURE__*/e(\"p\",{children:\"A node here means potential unique positions or stops of an algorithm. Each time a node is accessed, its cost is calculated. Thus, the algorithm checks all nearby nodes and calculates the one with the minimum value.\"}),/*#__PURE__*/e(\"p\",{children:\"A* is one of the most popular methods for solving shortest route search problems. It is optimal, which means that it guarantees the best possible solution. The A* algorithm is also complete, meaning it will always find a solution if one exists.\"}),/*#__PURE__*/e(\"p\",{children:\"A* can efficiently determine the optimal path from one point to another, considering obstacles and costs associated with different paths. A* can also help determine the most efficient order of actions to complete the task for ones that require a sequence of actions.\"}),/*#__PURE__*/e(\"h4\",{children:\"Combining A* with Q-learning for efficient learning\"}),/*#__PURE__*/e(\"p\",{children:\"Combining A* with Q-learning can enhance the learning process of Q*. A* may be an excellent guide for the exploration process in Q-learning. Instead of exploring the environment randomly, A* can provide an efficient path to promising states based on heuristic information. This can speed up the learning process by focusing on more relevant parts of the state space.\"}),/*#__PURE__*/e(\"p\",{children:\"The heuristic information used in A* can also help shape the reward function in Q-learning, providing additional guidance to the agent. By providing a clearer signal of which actions lead to better outcomes, the agent will be able to learn more effectively.\"}),/*#__PURE__*/e(\"h2\",{children:\"Limitations and ethical considerations of OpenAI's Q*\"}),/*#__PURE__*/e(\"p\",{children:\"The long-standing debate on the risks of creating superintelligent machines remains highly relevant. As AI technology evolves, issues related to the safety and ethics of artificial intelligence are becoming increasingly important.\"}),/*#__PURE__*/e(\"p\",{children:\"OpenAI researchers have expressed concerns about the powerful new Q* artificial intelligence algorithm in a letter to the board of directors. This emphasizes the importance of taking the development of such highly intelligent machines seriously.\"}),/*#__PURE__*/e(\"p\",{children:\"Sophisticated AI systems, such as Q*, are typically designed as black boxes, meaning it is difficult to understand their inner workings and the choices behind their architecture. This complicates monitoring and predicting their behavior.\"}),/*#__PURE__*/e(\"p\",{children:\"The level of authority and autonomy granted to Q* may increase the risk of unsupervised actions. For example, if Q* were to manage critical systems without proper control, this could lead to undesirable consequences. Should AI have significant computing power and control over important systems, it could pose a threat to humanity if its actions prove unsafe or malicious.\"}),/*#__PURE__*/e(\"p\",{children:\"In the initial stages of Q* development, its authority should be limited to minimize the risk of uncontrolled actions. The scope of responsibility can be gradually increased as trust and understanding of the system grows.\"}),/*#__PURE__*/e(\"p\",{children:\"Researchers worry that mishandling this technology could jeopardize humanity's existence. However, other experts believe that the real danger of AGI is not in some form of AI maliciousness that is attributed to it in science fiction but rather in the fact that it can perfectly fulfill a task that was not properly defined or was not intended to be malicious or even threaten humanity in the first place.\"}),/*#__PURE__*/e(\"p\",{children:\"Aside from the main concerns related to possible Q* disobeying or misinterpreting tasks, effective Q* implementation may require significant computational resources and sophisticated algorithms to handle the vast number of possible system states that the current state of technology will not be able to provide.\"}),/*#__PURE__*/e(\"p\",{children:\"The development of AI capabilities through systems such as Q* holds great promise but also requires a careful and thorough consideration of various aspects to ensure that the development and use of these technologies are done responsibly. Early resolution of such issues will enable us to maximize the benefits of advanced AI while minimizing potential risks and ensuring that its deployment contributes positively to society.\"}),/*#__PURE__*/e(\"h2\",{children:\"Q* and Artificial General Intelligence\"}),/*#__PURE__*/e(\"p\",{children:\"Artificial General Intelligence (AGI) is a hypothetical AI system that exhibits intelligence and cognitive abilities comparable to that of human beings. Unlike current AI systems, which are typically narrow and specialized in specific domains (such as image recognition or language processing), AGI aims to possess general intelligence akin to human intelligence.\"}),/*#__PURE__*/e(\"p\",{children:\"From a theoretical standpoint, AGI is considered possible. The human brain, which serves as the model for AGI, demonstrates general intelligence, such as reasoning, learning, perception, and decision-making. Given advances in neuroscience and computational theory, many researchers believe that it should be possible to replicate or simulate these cognitive abilities in artificial systems.\"}),/*#__PURE__*/e(\"p\",{children:\"Let\u2019s assume that the future Q* algorithm will be an AGI. The Q-learning and A* algorithms alone will not be enough to create such a system. These algorithms, while powerful, do not on their own encompass the broad range of capabilities required for AGI.\"}),/*#__PURE__*/e(\"p\",{children:\"Something else must give this system the ability to think and act like a human. A combination of several advanced techniques and principles is likely required to build a machine like that. The real challenge lies in integrating these diverse components to work synergistically to produce coherent, human-like intelligence.\"}),/*#__PURE__*/e(\"p\",{children:\"OpenAI may develop an algorithm that, in combination with Q-learning, A* algorithm, and some others, will help the system exhibit human-like traits. For now, it's quite a challenge to imagine an algorithm capable of thinking and acting like a human at this stage of development. Q* does not appear to meet all of the criteria for AGI based on currently available information.\"}),/*#__PURE__*/e(\"h2\",{children:\"Is Q* an AGI that threatens humanity?\"}),/*#__PURE__*/e(\"p\",{children:\"While the specifics of Q* are not provided, information available to the public suggests it\u2019s a powerful artificial intelligence discovery that pushes the boundaries toward AGI. In other words, Q* may represent a theoretical step forward in creating AI systems with the versatility and intelligence approaching that of human beings.\"}),/*#__PURE__*/e(\"p\",{children:\"While achieving AGI is theoretically possible and represents a compelling goal for AI research, it remains to be seen when or if AGI will be realized. Q* may demonstrate advancements in certain cognitive tasks, such as solving basic math problems; without additional details on its capabilities in broader areas of cognition, it would be premature to classify it as AGI.\"}),/*#__PURE__*/e(\"p\",{children:\"After information emerged that Q* could be the first step towards AGI, researchers became concerned about its safety. Since the exact fears related specifically to Q* are unknown to the general public, it can be assumed that they relate to issues of ethics, safety, social influence, and technical reliability.\"}),/*#__PURE__*/e(\"p\",{children:\"If Q* were hypothetically developed as an AGI, possessing superintelligent capabilities that exceed human cognitive abilities, it could potentially pose significant risks. Even well-intentioned AGI systems could cause harm due to unintended consequences, errors in programming, or misinterpretations of their objectives.\"}),/*#__PURE__*/e(\"p\",{children:\"Preventing the risks associated with AI technologies like Q* requires policy development, international collaboration, and proactive measures to ensure that AI technologies are developed and deployed to maximize benefits while minimizing risks to humanity.\"}),/*#__PURE__*/e(\"p\",{children:\"Currently, Q* appears to be an innovative AI system focused on specific tasks, such as solving mathematical problems at the level of elementary school students. It does not represent the level of general intelligence or autonomy characteristic of AGI. Therefore, discussions about Q* as an AGI and its potential risks to humanity stay speculative and theoretical.\"})]});export const richText4=/*#__PURE__*/n(a.Fragment,{children:[/*#__PURE__*/e(\"p\",{children:\"Artificial intelligence is a concept that draws inspiration from human intelligence. With the appearance of the first computers, scientists and philosophers began discussing the fundamental structure of the human brain and the possibility of recreating it on a machine.\"}),/*#__PURE__*/e(\"p\",{children:\"AI systems strive to be as intelligent and agile as a human brain. Most algorithms and machine learning architectures try to achieve human-level performance. That's why a concept of strong AI or artificial general intelligence (AGI) appeared, however only as a concept. Further, we will define artificial general intelligence and discuss its benefits and ethical considerations regarding machine intelligence.\"}),/*#__PURE__*/e(\"h2\",{children:\"What is Artificial General Intelligence?\"}),/*#__PURE__*/e(\"p\",{children:\"Artificial general intelligence, the pinnacle of artificial intelligence, promises to solve any problem a human can and perhaps even challenge the intellect of geniuses. This potential, however, remains untapped, as true AGI has yet to be realized.\"}),/*#__PURE__*/n(\"p\",{children:[\"According to the paper,\\xa0\",/*#__PURE__*/e(t,{href:\"https://arxiv.org/pdf/2404.10731\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"What is Meant by AGI? On the Definition of Artificial General Intelligence\"})}),\", there is no one true definition for AGI. It'll be given once it becomes available, and most researchers will recognize it as true AGI.\"]}),/*#__PURE__*/e(\"p\",{children:\"For now, AI is at a level of artificial intelligence development that will enable AI systems to solve any task with the help of generalized human cognitive abilities.\"}),/*#__PURE__*/e(\"p\",{children:\"While not yet perfected, current AI systems include some language models that exhibit features of AGI. For instance, GPT -3, a language model developed by OpenAI, can generate human-like text based on a given prompt. It can process data such as human language faster than we can. Yet, it's still incapable of abstract thinking, strategic reasoning, or using its thoughts and memories to make rational decisions or develop original concepts, key aspects of AGI.\"}),/*#__PURE__*/e(\"p\",{children:\"Certainly, some LLM models can produce the necessary information when prompted to develop, for example, a marketing strategy or to draw a conclusion from a given body of text. However, what would appear to be deliberate and thoughtful is actually an attempt by the AI model to guess the sequence of words rather than a reasoned decision since such systems have no consciousness.\"}),/*#__PURE__*/e(\"p\",{children:\"While the capabilities demonstrated by large language models, text-to-image, text-to-audio, and speech-to-text systems are impressive, they do not qualify as artificial general intelligence. These systems lack self-awareness and fail to replicate human intelligence in its entirety. This raises important ethical considerations and underscores the need for responsible AI development.\"}),/*#__PURE__*/e(\"p\",{children:\"Our human intelligence makes us superior to machines. At the same time, the processes of our cognitive mechanisms are the most difficult to understand and, therefore, the most difficult to reproduce.\"}),/*#__PURE__*/n(\"p\",{children:[\"According to some predictions, true AGI can already be expected in the foreseeable future. Such a future type of AI will be able to do a whole range of tasks and make independent conclusions based on the information that is fed to it, not only learn but even become aware of its own existence at some point in time. AGI tools will even be able to master such abilities as sensory perception and fine motor skills, given that it has a robotic body. While the potential benefits of AGI are vast, including advancements in healthcare, transportation, and education, there are also significant risks and ethical considerations, such as job displacement, privacy concerns, and the potential for misuse of AGI technology, that need to be carefully addressed.\",/*#__PURE__*/e(\"br\",{}),/*#__PURE__*/e(\"br\",{}),\"A strong AI is expected to be able to reason, integrate prior knowledge into decision-making, cope with challenges, use judgment in the face of uncertainty, plan, and generate creative ideas. To pursue these ambitions, however, AI researchers must find a way to grant the machines consciousness. This raises significant ethical considerations and societal implications. Granting machines consciousness could potentially lead to a new life form with its own rights and responsibilities. It's a complex and controversial topic that requires careful consideration and discussion.\"]}),/*#__PURE__*/e(\"p\",{children:\"Although AGI is only a concept for now, the next stage of AI development that supersedes human cognitive abilities, called super AI, is already being discussed. Let's take a closer look at other stages of AI development.\"}),/*#__PURE__*/e(\"h2\",{children:\"Three Stages of AI Development\"}),/*#__PURE__*/e(\"p\",{children:\"A general definition of AI technology is a large branch of computer science that seeks to make it appear that a machine, i.e., a computer, has human intelligence. So if a machine exhibits cognitive abilities inherent in humans, it is called AI. Scientists have come up with three stages of AI development:\"}),/*#__PURE__*/n(\"ul\",{style:{\"--framer-font-size\":\"18px\",\"--framer-text-alignment\":\"start\",\"--framer-text-color\":\"rgb(30, 33, 38)\",\"--framer-text-decoration\":\"none\",\"--framer-text-stroke-width\":\"0px\",\"--framer-text-transform\":\"none\"},children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Weak AI or narrow AI\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Strong AI or general AI\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"Super AI\"})})]}),/*#__PURE__*/e(\"h3\",{children:\"Weak AI\"}),/*#__PURE__*/e(\"p\",{children:\"Weak AI, also known as narrow AI, illustrates that if a machine can behave intelligently, it does not prove that it is actually as intelligent as a human being. Weak AI has limited functionality. The advanced algorithms used at the core of weak AI perform specific tasks to solve problems that do not cover the full range of human cognitive abilities.\"}),/*#__PURE__*/e(\"p\",{children:\"Weak AI responds to inputs based on algorithms. Tools like this may appear capable of reasoning, but they simply cannot do it. For example, voice assistants such as Siri, Cortana, and Alexa don't comprehend the words their users say and their meaning. These tools listen to audio cues and follow programmed instructions to respond accordingly.\"}),/*#__PURE__*/e(\"h3\",{children:\"Super AI\"}),/*#__PURE__*/e(\"p\",{children:\"While no Artificial General Intelligence (AGI) system has been achieved yet, the next stage of AI development has been theorized. Super artificial intelligence will not just be able to do whatever a human does but will outperform any genius. In addition to surpassing humanity's best minds in all fields, this kind of AI will likely be able to re-configure itself while improving and even developing new systems and algorithms.\"}),/*#__PURE__*/e(\"h2\",{children:\"Technologies to develop AGI\"}),/*#__PURE__*/e(\"h3\",{children:\"Deep learning models\"}),/*#__PURE__*/e(\"p\",{children:\"Deep learning models, a subset of machine learning models, are at the forefront of recent advancements in AI due to their ability to learn complex patterns and representations from large amounts of data. They are a key technology in the development of AGI, as they enable machines to learn and understand complex information, similar to how the human brain does. Most commonly, they are represented by multilayered neural networks. Examples of deep learning models include Convolutional Neural Networks (CNNs) for image processing and Recurrent Neural Networks (RNNs) or Transformers for natural language processing, both of which are crucial for AGI development.\"}),/*#__PURE__*/e(\"h3\",{children:\"Natural Language Processing\"}),/*#__PURE__*/e(\"p\",{children:\"Natural language processing (NLP) is a critical area of AI research and development that enables machines to understand, interpret, and generate human language. It is foundational for achieving artificial general intelligence because language is a primary medium through which humans communicate complex ideas and knowledge. NLP models utilize computational linguistics and machine learning to convert language data into basic units known as tokens and comprehend the context in which these tokens appear.\"}),/*#__PURE__*/e(\"h3\",{children:\"Generative AI\"}),/*#__PURE__*/e(\"p\",{children:\"Generative AI models play a significant role in driving the development of AGI by enabling machines to create new data, explore diverse possibilities, and simulate creative processes. AGI systems must exhibit generative capabilities to develop novel solutions, adapt to new environments, and interact with humans and the world. By leveraging generative AI, AGI researchers aim to imbue artificial systems with creativity, innovation, and adaptability akin to human behavior.\"}),/*#__PURE__*/e(\"h3\",{children:\"Computer vision\"}),/*#__PURE__*/e(\"p\",{children:\"AGI systems require the ability to reason about visual information and predict future events based on observed patterns. Computer vision (CV) techniques, such as image captioning, visual question answering, and scene prediction, support visual reasoning by bridging the gap between perception and cognition. Computer vision enables AGI to understand complex scenes by analyzing the spatial relationships between objects, their attributes, and contextual information. Scene understanding facilitates higher-level reasoning and decision-making, allowing AGI systems to comprehend the context in which they operate. CV plays a crucial role in embodied AI by providing real-time feedback on the robot's perception of its surroundings, guiding its actions and decision-making processes.\"}),/*#__PURE__*/e(\"h3\",{children:\"Robotics\"}),/*#__PURE__*/e(\"p\",{children:\"Robotics is of considerable significance in the development of artificial general intelligence by providing physical embodiments for intelligent systems to interact with and learn from the environment. Embodied AI involves integrating AGI systems with physical bodies or robotic platforms, enabling them to interact with the physical world.\"}),/*#__PURE__*/e(\"p\",{children:\"Robotics provides AGI systems with physical bodies, allowing them to perceive and act upon the world in ways analogous to humans. Embodied AGI systems can gather sensorimotor experiences, interact with objects, and navigate real-world environments.\"}),/*#__PURE__*/e(\"p\",{children:\"Human intelligence is a guiding principle and inspiration for AGI research, informing intelligent systems' design, development, and evaluation. By understanding and replicating key aspects of human intelligence, researchers strive to create AI systems capable of emulating human-like cognitive abilities and achieving the level of general intelligence exhibited by humans.\"}),/*#__PURE__*/e(\"h2\",{children:\"Approaches to AGI\"}),/*#__PURE__*/e(\"p\",{children:\"Attempts to create a real AGI system have included several diverse and multifaceted approaches, reflecting the complexity and ambition of creating machines that possess human-like cognitive abilities. The following are some basic examples of how researchers can reach AGI in the near future.\"}),/*#__PURE__*/e(\"h3\",{children:\"Connectionist Approach\"}),/*#__PURE__*/e(\"p\",{children:\"Connectionist or neural network-based approaches model intelligence inspired by the structure and function of the human brain. Connectionist systems consist of interconnected nodes (neurons) organized in layers, capable of learning from data through iterative training processes.\"}),/*#__PURE__*/e(\"h3\",{children:\"Symbolic approach\"}),/*#__PURE__*/e(\"p\",{children:'The symbolic approach in artificial intelligence is grounded in the use of logic networks and symbolic representations to encode knowledge and facilitate learning. This approach is based on the premise that intelligent behavior can be achieved by processing symbolic representations of the world. The manipulation of symbols is often governed by logic networks, which include rules such as \"if-then\" statements. These rules define how symbols interact and how knowledge is inferred from existing information.'}),/*#__PURE__*/e(\"h3\",{children:\"Whole organism approach\"}),/*#__PURE__*/e(\"p\",{children:\"Unlike traditional approaches that focus solely on software-based models, the whole organism architecture emphasizes the integration of AI systems with physical embodiments resembling human bodies. Proponents of the whole organism architecture believe that AGI is best achieved when systems learn from physical interactions with the world rather than solely from simulated environments or digital data.\"}),/*#__PURE__*/e(\"p\",{children:\"AGI systems embodied in physical robots can undergo developmental stages, starting with basic sensorimotor skills and gradually progressing to more complex cognitive abilities through independent exploration and learning.\"}),/*#__PURE__*/e(\"h3\",{children:\"Human brain emulation\"}),/*#__PURE__*/e(\"p\",{children:\"Whole brain emulation is a theoretical approach to achieving artificial general intelligence by replicating a human brain's detailed structure and function in a computational framework. The idea is to create a digital model that mimics the human brain's neural activities and cognitive processes, potentially capturing human-like consciousness and intelligence.\"}),/*#__PURE__*/e(\"h2\",{children:\"Benefits of AGI\"}),/*#__PURE__*/e(\"p\",{children:\"Developing and deploying artificial general intelligence has the potential to significantly benefit society.\"}),/*#__PURE__*/n(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"Complex Problem Solving in various domains.\"}),\"\\xa0AGI will be able to tackle complex and large-scale problems that are currently beyond human capabilities, such as climate change, pandemics, and global poverty. It will also be able to integrate knowledge from different fields to create innovative solutions that might not be apparent from a single-disciplinary perspective;\"]}),/*#__PURE__*/n(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"Accelerating Scientific Discovery.\"}),\"\\xa0AGI is expected to process and analyze large datasets faster than humans, leading to scientific discoveries and innovations. AGI can identify correlations and insights that humans might miss, opening up new areas of research;\"]}),/*#__PURE__*/n(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"Risk Assessment.\"}),\"\\xa0AGI system will evaluate risks and accurately predict outcomes, aiding in strategic planning and crisis management. It will analyze vast amounts of data to provide insights and recommendations, helping humans make better-informed decisions in business and daily life;\"]}),/*#__PURE__*/n(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"Creative Assistance.\"}),\"\\xa0By analyzing existing research papers and market trends, AGI will be capable of identifying gaps and opportunities in different domains, accelerating innovation. AGI can provide creative professionals with new tools and inspirations, aiding in the creation of art, music, literature, and design;\"]}),/*#__PURE__*/n(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"Threat Detection.\"}),\"\\xa0AGI will enhance security systems by detecting potential threats and suspicious activities in real-time, improving public safety. It will manage automated surveillance systems, reducing the need for human monitoring and increasing efficiency. AGI will enhance early warning systems for natural disasters like earthquakes, tsunamis, and hurricanes, thus improving preparedness and response.\"]}),/*#__PURE__*/e(\"h2\",{children:\"Requirements for AGI\"}),/*#__PURE__*/e(\"p\",{children:\"Although strong AI does not yet exist, some of the qualities required for an AI to be considered an AGI have already been defined. The requirements for artificial general intelligence encompass several aspects that define its capabilities and characteristics:\"}),/*#__PURE__*/n(\"ul\",{style:{\"--framer-font-size\":\"18px\",\"--framer-text-alignment\":\"start\",\"--framer-text-color\":\"rgb(30, 33, 38)\",\"--framer-text-decoration\":\"none\",\"--framer-text-stroke-width\":\"0px\",\"--framer-text-transform\":\"none\"},children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"AGI must be able to perform a wide range of tasks currently carried out by humans without the need for specialized adjustments for each new task;\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"A strong AI has to outperform humans intellectually, being able to learn at the same speed as or faster than humans;\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"AGI must be proficient in solving various problems across different domains, from mathematical puzzles to complex real-world scenarios. It should easily switch between tasks and apply knowledge from one domain to another, demonstrating true versatility;\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"AGI should be capable of learning from experience and adapting to new situations and changes in the environment. In other words, it should be able to reuse previously gained experience. This includes the ability to self-learn and autonomously improve its skills;\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"AGI must understand and generate natural language at a level comparable to humans, which includes text, speech processing, and complex forms of communication and interaction;\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"AGI should possess cognitive abilities such as attention, memory, planning, reasoning, problem-solving, and creativity. It has to demonstrate creativity in generating novel ideas, solutions, and artistic expressions.\"})})]}),/*#__PURE__*/e(\"p\",{children:\"Above all else, AGI must be developed with ethical norms and standards to ensure its actions are safe for society and individuals. This includes measures to prevent harm, uphold human rights and freedoms, and ensure transparency and accountability.\"}),/*#__PURE__*/e(\"h2\",{children:\"Ethical Considerations\"}),/*#__PURE__*/e(\"p\",{children:\"Without adequately drafted guidelines and regulations for Artificial General Intelligence (AGI), systems with advanced capabilities and the potential to surpass human thought processes could pose significant risks. While the idea of AGI deciding humans are enemies and taking harmful actions is speculative, ethical considerations are critical to ensure AGI's beneficial and safe integration into society.\"}),/*#__PURE__*/e(\"p\",{children:\"If developed without robust ethical frameworks and regulations, AGI could lead to unintended consequences and potential risks. However, the notion of AGI autonomously deciding to harm humans is speculative and not grounded in the current understanding or capabilities of AI systems.\"}),/*#__PURE__*/e(\"p\",{children:\"Ethical guidelines and regulatory frameworks must address issues such as fairness, transparency, accountability, privacy, security, and the alignment of AI goals with human values. These considerations are essential to promote responsible AI development and deployment, safeguarding against potential harm.\"}),/*#__PURE__*/e(\"p\",{children:\"An Artificial General Intelligence (AGI) that surpasses human intelligence could pose significant risks if its goals are misaligned with human values. This misalignment could lead to uncontrollable behavior, potentially becoming an existential threat to humanity.\"}),/*#__PURE__*/e(\"p\",{children:\"Furthermore, an AGI capable of recursive self-improvement could rapidly evolve beyond human understanding and control. If safeguards and regulatory measures are not in place, this could result in unpredictable outcomes and potentially catastrophic scenarios.\"}),/*#__PURE__*/e(\"p\",{children:\"While these concerns are valid and reflect ongoing discussions in AI ethics and safety, it's important to note that AGI's actual development and behavior remain hypothetical. Current AI systems, including those with advanced capabilities, are far from achieving AGI and the level of autonomy and self-improvement described in the statement.\"}),/*#__PURE__*/e(\"p\",{children:\"Ethical considerations, robust regulatory frameworks, transparency, and ongoing research into AI safety are essential components of responsible AI development to mitigate potential risks associated with AGI in the future.\"}),/*#__PURE__*/e(\"h2\",{children:\"Future of AGI\"}),/*#__PURE__*/e(\"p\",{children:\"Addressing the ethical challenges associated with AGI requires a proactive and collaborative approach. Society can work towards overcoming the moral problems posed by AGI in the future by establishing comprehensive ethical frameworks, promoting transparency, ensuring fairness, prioritizing safety, implementing ethical oversight, and encouraging responsible innovation.\"}),/*#__PURE__*/e(\"p\",{children:\"Creating AGI could potentially change everything\u2014how we work, live, and make decisions. So, we need to tread carefully to ensure this powerful technology benefits everyone and doesn\u2019t harm anyone. Solving these problems is a collective effort from scientists, policymakers, businesses, and everyday people. Combining diverse perspectives helps create solutions that consider different viewpoints and potential impacts.\"}),/*#__PURE__*/e(\"p\",{children:\"The future of AGI holds immense promise but also significant challenges. Machine learning innovations, particularly deep learning, will drive progress toward more advanced AGI systems. Technological advancements and careful consideration of ethical, societal, and economic implications will be crucial in guiding the development of AGI.\"})]});export const richText5=/*#__PURE__*/n(a.Fragment,{children:[/*#__PURE__*/e(\"p\",{children:\"Current close and open LLMs do not disclose the data used for training due to potential risks and liabilities associated with training on copyright content. Recent developments in large open datasets with permissible licenses and new demands for increased regulation and reproducibility are pushing for a change. This blog post will discuss the landscape and importance of open data in the ML ecosystem.    \"}),/*#__PURE__*/n(\"p\",{children:[\"The \",/*#__PURE__*/e(t,{href:\"https://llama.meta.com/llama3/\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"release of Llama 3\"})}),\" is a breakthrough for open LLMs: a model that can be hosted everywhere with little license restriction is now nearing the capacities of frontier model GPT-4 or Claude 3. Provided there is enough computing power, any person or organization can host its own powerful version of ChatGPT.\"]}),/*#__PURE__*/n(\"p\",{children:[\"Despite their massive advantages for end use, open-weight models like Llama, \",/*#__PURE__*/e(t,{href:\"https://mistral.ai/news/announcing-mistral-7b/\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"Mistral\"})}),\", or \",/*#__PURE__*/e(t,{href:\"https://huggingface.co/Qwen\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"Qwen\"})}),\" still fall short on the other dimensions of openness. A language model is not just a set of parameters. It\u2019s complex scientific infrastructure that intermingles data, code, and architecture.\"]}),/*#__PURE__*/n(\"p\",{children:['Data is the most egregious case. If they exist at all, sections on \"training data\" will only mention in passing that the model was trained on vague \"publicly available data\" or a wide selection of \"books, websites, and code\". The authors of GPT-4 ',/*#__PURE__*/e(t,{href:\"https://arxiv.org/pdf/2303.08774\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"explicitly state\"})}),' that \"competitive and safety considerations\" weigh above \"the scientific value of further transparency\" (p. 2).']}),/*#__PURE__*/e(\"p\",{children:\"Enhancements to the LLM are largely attributed to \u201Cbetter data\u201D. And yet, we know absolutely next to nothing about the training set: where does it come from? Only Common Crawl or additional sources? What has been selected? According to which criteria? Which languages are represented?\"}),/*#__PURE__*/e(\"p\",{children:\"Given that LLMs are largely \u201Ccultural models,\u201D these are not just technical questions. They determine the model's nature, biases, and impact on society at large.\"}),/*#__PURE__*/e(\"h2\",{children:\"From a culture of openness to trade secrets\"}),/*#__PURE__*/e(\"p\",{children:\"Training data has not always been closed. LLM is, in fact, one of the few research fields where open and transparent norms have regressed. By 2024, the open science movement had gradually expanded to include a wide variety of research artifacts: publication, data, code, reviews, and intermediary processes. Open science has been repeatedly proven to be more beneficial to science through enhanced reproducibility and to society as a whole, as research can freely circulate beyond specialized academic circles.\"}),/*#__PURE__*/n(\"p\",{children:[\"In 2018-2020, frontier models like \",/*#__PURE__*/e(t,{href:\"https://huggingface.co/docs/transformers/model_doc/bert\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"BERT\"})}),\", \",/*#__PURE__*/e(t,{href:\"https://huggingface.co/openai-community/gpt2\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"GPT-2\"})}),\", or \",/*#__PURE__*/e(t,{href:\"https://huggingface.co/docs/transformers/model_doc/t5\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"T5\"})}),\" were extensively documented to the point where they could be quoted as positive examples of open science. Researchers from universities and private labs like Google or OpenAI released not only the actual weights of the model but also the training code, the intermediary documentation, and even the dataset used for training, or, at the very least, enough information to reconstruct it. This openness largely contributed to quickly integrating a model like BERT into major NLP pipelines and industrial processes.\"]}),/*#__PURE__*/e(\"p\",{children:'Fast-forward a few years, major LLM research papers have become secretive. The big releases of Google, OpenAI, Anthropic, or even committed open-weight companies like Mistral are essentially covered by \"non-papers\", that won\\'t say anything about the actual details that matter: the data used for training, the architecture of the model, the hyperparameter.'}),/*#__PURE__*/e(\"h2\",{children:\"The rising copyright problem of LLM data\"}),/*#__PURE__*/e(\"p\",{children:\"There is a common explanation for the lack of data transparency in LLM training: models are trained on unreleasable data because the datasets are so big. Llama 3 has been trained on 15 trillion tokens, and likely as much, if not more, for GPT-4 (we don\u2019t even know the size!). This is big enough to fit 1,000-2,000 editions of the English Wikipedia. Consequently, the model has to use a wide range of problematic sources, most of them under copyright or outright pirated. While it's still highly controversial whether a model can be trained on proprietary sources, releasing it creates added layers of liabilities\"}),/*#__PURE__*/e(\"p\",{children:\"Even before questioning this line of reasoning, this raises the question of why it was admissible in the first place. Using pirated content could have been massively convenient for many industries in the past. This is just something that is not done.\"}),/*#__PURE__*/n(\"p\",{children:[\"Back in 2015, the very first components of the LLM stack were trained exclusively on open content. I remember training my first \",/*#__PURE__*/e(t,{href:\"https://arxiv.org/pdf/1310.4546\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"Word Embedding\"})}),\" model on a selection of 100 million words on Wikipedia (without a GPU!). Years later, I got introduced to the first real \u201Cproto-GPT\u201D, \",/*#__PURE__*/e(t,{href:\"http://karpathy.github.io/2015/05/21/rnn-effectiveness/\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"LSTM\"})}),\", with tutorials on \",/*#__PURE__*/e(t,{href:\"https://gist.github.com/mostafa-mahmoud/b7058bb8e5b2079ad1cb45d0873de67d\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"public domain texts from 19th-century philosophers\"})}),\". Researchers, engineers, early users, and companies strived to use open, shareable content\u2026 until they stopped caring.\"]}),/*#__PURE__*/e(\"p\",{children:'It was a slow drift from starting to using questionable data sources with many precautions to gradually lifting them or repurposing them as \"free\" content that was merely accessible. We can quote here two emblematic tales:'}),/*#__PURE__*/n(\"ul\",{style:{\"--framer-font-size\":\"18px\",\"--framer-text-alignment\":\"start\",\"--framer-text-color\":\"rgb(30, 33, 38)\",\"--framer-text-stroke-width\":\"0px\",\"--framer-text-transform\":\"none\"},children:[/*#__PURE__*/n(\"li\",{\"data-preset-tag\":\"p\",children:[/*#__PURE__*/n(\"p\",{children:[\"BookCorpus is a compilation of self-published e-books from SmashWords.com. While the non-professional authors provided the books for free, they never used a free license that would allow for republication. In 2015, a selection of 10,000 works \",/*#__PURE__*/e(t,{href:\"https://arxiv.org/pdf/1506.06724\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"was randomly\"})}),' selected for a sentence similarity task, under the unclear and erroneous claim they were \"free books\". In 2018, BookCorpus was ',/*#__PURE__*/e(t,{href:\"https://arxiv.org/pdf/1810.04805\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"one of the two main corpora of BERT\"})}),\", alongside the English Wikipedia, and despite being obviously dwarfed now by the massive pre-training dataset, it seems to be still in use for \u201Cquality\u201D training (like late pre-training data, fine-tuning, etc.) \"]}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"br\",{className:\"trailing-break\"})})]}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/n(\"p\",{children:['Web archives were primarily thought for long-time preservation. They were naturally covered by fair use and other similar exceptions. In the early 2010s, web archives started to be used for training on \"transformative\" data, like ngrams. Ngrams do not make it possible to recreate the original text, and provided they are completely shuffled, they can be shared without copyright concerns while still being of great value for classification use cases. In 2018, OpenAI ',/*#__PURE__*/e(t,{href:\"https://insightcivic.s3.us-east-1.amazonaws.com/language-models.pdf\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"started to experiment\"})}),\" with the creation of a filtered version from large collections of web archives: WebText contains 8 million \u201Cqualitative\u201D documents selected by at least 5 likes on Reddit. Web archives are now the absolute backbone of LLM pretraining.\"]})})]}),/*#__PURE__*/e(\"p\",{children:'In both cases, the increasing sophistication of model training entailed an erosion of the guardrail put into place to avoid potential misuse and a rising ambiguity of the meaning of open and usable data for training. This was not only due to the need for more data but more \"expansive\" data due to the lengthening of the context window: not just ngrams or short sentences, but full texts. Since GPT-3, LLM needs full samples of thousands of words, which would not fit into any copyright exception for short quotations.'}),/*#__PURE__*/n(\"p\",{children:[\"The copyright issue goes beyond \u201Cgrey\u201D areas like web archives. There are many rumors circulating about the reuse of shadow libraries like Libgen or Anna\u2019s Archive and pirated content being used as a source for major LLMs. This is especially the case for their scientific content (coded \u201CSTEM\u201D corpus in the few available public information delivered by LLM companies), which provides a major source of reasoning. Due to less potential exposure, Chinese LLM like Deepseek \",/*#__PURE__*/e(t,{href:\"https://arxiv.org/pdf/2403.05525\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"openly admits\"})}),\" they are training on 800,000 Chinese scientific books from Anna\u2019s Archive. Once more, the lengthening of context size must be a major intensive: web archives are poor on long texts, and models able to ingest 1 million tokens are hungry for books.\"]}),/*#__PURE__*/e(\"h2\",{children:\"Building a pre-training commons\"}),/*#__PURE__*/n(\"p\",{children:[\"In March 2024, PleIAs coordinated the release of \",/*#__PURE__*/e(t,{href:\"https://huggingface.co/collections/PleIAs/common-corpus-65d46e3ea3980fdcd66a5613\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"Common Corpus\"})}),\", the largest available open corpus for pre-training to date: about 500 billion words in a wide variety of European languages. This is already sufficient to train a model like Llama 2 since corpora are apparently frequently repeated through pre-training (against something we guesstimate but don't know!).\"]}),/*#__PURE__*/n(\"p\",{children:[\"Using exclusively content under a permissible resource is not only an ethical commitment but a major scientific initiative to ensure reproducible and qualitative research on LLM. Until now, released collections for pre-training have always been vulnerable due to the potential liabilities associated with the publication of copyrighted content: in the summer of 2023, one of the most popular accessible datasets for pretraining, \",/*#__PURE__*/e(t,{href:\"https://pile.eleuther.ai/\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"The Pile\"})}),\", was \",/*#__PURE__*/e(t,{href:\"https://mashable.com/article/books3-ai-training-dmca-takedown\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"removed\"})}),\" following DMCA notices.\"]}),/*#__PURE__*/e(\"p\",{children:'The lion\u2019s share of open content is made of documents with expired copyright (\"public domain\") or produced for public use (\"open data\" in Europe and the older \"federal public domain\" in the United States): this is not just a few large-scale projects, but massive amounts of texts simply lying there, waiting for years to be collected and properly dealt with.'}),/*#__PURE__*/e(\"p\",{children:'Other initiatives will significantly reinforce this emerging \"pre-training data commons\" in the months to come. Common Corpus will be significantly expanded as a large share of available text is still waiting to be released until copyright can be thoroughly checked. Eleuther is to release a new version of \"The Pile\" with a major focus on permissibly licensed content. Other ongoing initiatives are being prepared by Allen AI, Together AI, Cohere, or Sprawling AI.'}),/*#__PURE__*/e(\"p\",{children:\"At this point, it is fairly obvious that there is enough open content online to train a model like GPT-4 or llama 3: 3-5 trillion tokens, repeated 3-4 times. The shocking thing is not that this is possible at all but that this has never been attempted. All this open content has been hidden in plain sight for years.\"}),/*#__PURE__*/e(\"h2\",{children:\"Conclusion\"}),/*#__PURE__*/e(\"p\",{children:\"The recent emergence of an open data movement in LLM research is an important development that will increase the reproducibility of model training and favor better scientific standards of data use. It is also a crucial step to ensure the social acceptability of generative AI and its integration into existing norms and regulations. The expansion of a full open dataset with permissible licenses is potentially a paradigm shift that has the potential to limit the questionable use of copyrighted content and bring back much-needed transparency over model training.\"}),/*#__PURE__*/n(\"p\",{children:[\"As the landscape of LLM development evolves, it's crucial for the industry to commit to ethical and responsible AI practices. One key aspect of this is the pre-training of foundational models. However, pre-training is not the only aspect that matters; ethical concerns can affect the entire lifecycle of the model. Given the recent development of fine-tuning, the adaptation of existing models to specific tasks and knowledge domains is equally important. There are several expert data providers in this field, and \",/*#__PURE__*/e(t,{href:\"https://toloka.ai/\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"Toloka\"})}),\" is one of them. They develop sophisticated technologies for collecting high-quality datasets and performing in-depth evaluations of LLMs responsibly. Book a \",/*#__PURE__*/e(t,{href:\"https://toloka.ai/talk-to-us/\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"demo\"})}),\" if you're interested.\"]})]});export const richText6=/*#__PURE__*/n(a.Fragment,{children:[/*#__PURE__*/e(\"p\",{children:\"Transfer learning is a powerful machine learning (ML) methodology that leverages pre-trained models for similar tasks. This approach significantly reduces the time and computational resources required to train models for specific projects. In this article, we will explore the mechanics, applicability, and challenges of transfer learning. We will examine the foundations of transfer learning and review practical cases to determine when its adaptation is reasonable.\"}),/*#__PURE__*/e(\"p\",{children:\"Transfer learning (TL) has gained popularity in deep learning projects because it enables training deep neural networks with relatively small amounts of data. In data science, this is particularly helpful since real-world problems often lack millions of labeled data points for training complicated models from scratch. Access to pre-trained models fine-tuned for specific tasks makes deep learning more accessible and efficient.\"}),/*#__PURE__*/e(\"p\",{children:\"TL is primarily used in computer vision and natural language processing tasks, addressing the substantial computational demands of these fields. As a methodology, transfer learning can be successfully combined with active learning to enhance model performance.\"}),/*#__PURE__*/e(\"h2\",{children:\"Transfer Learning vs. Training a Specific Machine Learning Model\"}),/*#__PURE__*/e(\"p\",{children:\"Transfer learning refers to reusing a previously trained model to solve a different problem. This approach allows a model to leverage the knowledge and insights it has acquired from one task to improve its predictions on a different yet related task.\"}),/*#__PURE__*/e(\"p\",{children:\"For instance, a convolutional neural network (CNN) trained for general object classification on a commonly available dataset can be fine-tuned to analyze X-ray images and detect particular diseases.\"}),/*#__PURE__*/e(\"img\",{alt:\"Transfer Learning vs. Training a Specific Machine Learning Model 1\",className:\"framer-image\",height:\"226\",src:\"https://framerusercontent.com/images/PaAOnGpfIxM7QtUeNufCIWtnCE8.jpeg\",srcSet:\"https://framerusercontent.com/images/PaAOnGpfIxM7QtUeNufCIWtnCE8.jpeg?scale-down-to=512 512w,https://framerusercontent.com/images/PaAOnGpfIxM7QtUeNufCIWtnCE8.jpeg 802w\",style:{aspectRatio:\"802 / 453\"},width:\"401\"}),/*#__PURE__*/n(\"p\",{children:[/*#__PURE__*/e(\"em\",{children:\"Founding principles of classic Machine Learning vs. Transfer Learning. Source: \"}),/*#__PURE__*/e(t,{href:\"https://slds-lmu.github.io/seminar_nlp_ss20/introduction-transfer-learning-for-nlp.html\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!0,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:/*#__PURE__*/e(\"em\",{children:\"Modern Approaches in NLP\"})})})]}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"br\",{className:\"trailing-break\"})}),/*#__PURE__*/e(\"p\",{children:\"Instead of starting from scratch, the new model builds upon the pre-existing model\u2019s parameters, significantly speeding up the training process and enhancing performance, especially in scenarios with limited data.\"}),/*#__PURE__*/e(\"p\",{children:\"TL principles can be applied to a variety of machine learning models. However, it is commonly associated with deep learning and neural networks due to their ability to learn and transfer complex representations.\"}),/*#__PURE__*/e(\"img\",{alt:\"Transfer Learning vs. Training a Specific Machine Learning Model 2\",className:\"framer-image\",height:\"182\",src:\"https://framerusercontent.com/images/bBXMbXOeu8fpBiDPBOmvQ9lzk.jpeg\",srcSet:\"https://framerusercontent.com/images/bBXMbXOeu8fpBiDPBOmvQ9lzk.jpeg?scale-down-to=512 512w,https://framerusercontent.com/images/bBXMbXOeu8fpBiDPBOmvQ9lzk.jpeg 850w\",style:{aspectRatio:\"850 / 364\"},width:\"425\"}),/*#__PURE__*/n(\"p\",{children:[/*#__PURE__*/e(\"em\",{children:\"Comparison between traditional machine learning models (a) requiring manual feature extraction and modern deep learning structures (b). Source: \"}),/*#__PURE__*/e(t,{href:\"https://idus.us.es/handle/11441/99506?locale-attribute=en\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!0,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:/*#__PURE__*/e(\"em\",{children:\"Energies\"})})})]}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"br\",{className:\"trailing-break\"})}),/*#__PURE__*/e(\"h2\",{children:\"Transfer Learning Mechanics\"}),/*#__PURE__*/e(\"p\",{children:\"Initially, transfer learning was meant to address the limitations of traditional machine learning models. First, it is computationally efficient and achieves better results with smaller datasets. Features from a pre-trained model, though may not be directly applicable to tackle specific tasks and projects, can be fine-tuned for similar goals even in a different domain.\"}),/*#__PURE__*/e(\"img\",{alt:\"Transfer Learning Mechanics 1\",className:\"framer-image\",height:\"912\",src:\"https://framerusercontent.com/images/n6FZeNKpxNGHvLDBnWU1aHvs.jpeg\",srcSet:\"https://framerusercontent.com/images/n6FZeNKpxNGHvLDBnWU1aHvs.jpeg?scale-down-to=512 512w,https://framerusercontent.com/images/n6FZeNKpxNGHvLDBnWU1aHvs.jpeg?scale-down-to=1024 1024w,https://framerusercontent.com/images/n6FZeNKpxNGHvLDBnWU1aHvs.jpeg?scale-down-to=2048 2048w,https://framerusercontent.com/images/n6FZeNKpxNGHvLDBnWU1aHvs.jpeg 2672w\",style:{aspectRatio:\"2672 / 1825\"},width:\"1336\"}),/*#__PURE__*/n(\"p\",{children:[/*#__PURE__*/e(\"em\",{children:\"Applying transfer learning on ImageNet\u2014a dataset of more than 14 million pictures distributed over 1000 classes\u2014for medical image analysis. Source: \"}),/*#__PURE__*/e(t,{href:\"https://www.mdpi.com/1424-8220/23/2/570\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!0,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:/*#__PURE__*/e(\"em\",{children:\"Sensors 2023, 23(2)\"})})})]}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"br\",{className:\"trailing-break\"})}),/*#__PURE__*/e(\"p\",{children:\"In the early layers, deep neural networks trained on images usually learn low-level features like determining edges, colors, shapes, and intensity variations. These features are not task-specific, as they appear in different image processing tasks, whether detecting a traffic light or a can of soda.\"}),/*#__PURE__*/e(\"p\",{children:\"In the field of NLP, transfer learning has been instrumental in improving the performance of various text-related tasks, including sentiment analysis. The latter involves determining the emotional tone behind text bodies, such as customer reviews, social media posts, or product feedback. Traditional NLP models require large, accurately labeled datasets specific to one task, which may be time-consuming and costly to create.\"}),/*#__PURE__*/e(\"img\",{alt:\"Transfer Learning Mechanics 2\",className:\"framer-image\",height:\"271\",src:\"https://framerusercontent.com/images/RQRjBnJ0KUeCzsyvr6YqDjclt5M.jpeg\",srcSet:\"https://framerusercontent.com/images/RQRjBnJ0KUeCzsyvr6YqDjclt5M.jpeg?scale-down-to=512 512w,https://framerusercontent.com/images/RQRjBnJ0KUeCzsyvr6YqDjclt5M.jpeg?scale-down-to=1024 1024w,https://framerusercontent.com/images/RQRjBnJ0KUeCzsyvr6YqDjclt5M.jpeg 1098w\",style:{aspectRatio:\"1098 / 542\"},width:\"549\"}),/*#__PURE__*/n(\"p\",{children:[/*#__PURE__*/e(\"em\",{children:\"Overview of transfer learning benefits. The training starts at a higher point as the source model has the initial skill, then demonstrates a higher skill improvement rate and can finally get a better-converged skill. Source: \"}),/*#__PURE__*/e(t,{href:\"https://www.amazon.com/Handbook-Research-Machine-Learning-Applications/dp/1605667668/ref=as_li_ss_tl?ie=UTF8&qid=1505780557&sr=8-2&keywords=Handbook+of+Research+on+Machine+Learning+Applications&linkCode=sl1&tag=inspiredalgor-20&linkId=0a779d8001b719c349fa7fbb23855922\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!0,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:/*#__PURE__*/e(\"em\",{children:\"Handbook of Research on Machine Learning Applications and Trend\"})})})]}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"br\",{className:\"trailing-break\"})}),/*#__PURE__*/e(\"p\",{children:\"Here\u2019s where pre-trained language models come into play, forming the foundation of transfer learning in NLP. Initially, these models are trained on vast text corpora that may contain billions of words. This allows them to grasp the intricacies of language, including semantics and syntax, and capture nuances, context, and linguistic patterns that traditional ML models may struggle with.\"}),/*#__PURE__*/e(\"h3\",{children:\"Transfer Learning Key Stages\"}),/*#__PURE__*/e(\"p\",{children:\"Leveraging a pre-trained model to address new tasks includes several primary stages. Each is critical for the ultimate efficiency of the transfer learning process.\"}),/*#__PURE__*/e(\"h4\",{children:\"1. Selecting a Pre-Trained Model\"}),/*#__PURE__*/e(\"p\",{children:\"The first stage is to choose a model already trained on a large and diverse dataset. Commonly used pre-trained models span various domains, such as ResNet for image classification, BERT for natural language processing, and OpenAI's GPT for generative tasks. These models have learned general features and patterns from extensive training data, providing a solid foundation for new tasks.\"}),/*#__PURE__*/e(\"h4\",{children:\"2. Base Model Utilization\"}),/*#__PURE__*/e(\"p\",{children:\"The base or source model consists of layers that have learned hierarchical feature representations from the training data. These layers serve as the foundation for further task-specific learning, offering a robust starting point. For instance, convolutional layers in a CNN trained on ImageNet can capture textures useful for various image-related tasks.\"}),/*#__PURE__*/e(\"h4\",{children:\"3. Identifying Transfer Layers\"}),/*#__PURE__*/e(\"p\",{children:\"Certain layers of the base model capture generic information relevant to both the original and the new task. A few layers, often found near the bottom of the network, are adept at learning low-level features. Identifying these layers is crucial as they form the basis for transferring knowledge to the new task.\"}),/*#__PURE__*/e(\"img\",{alt:\"Identifying Transfer Layers\",className:\"framer-image\",height:\"190\",src:\"https://framerusercontent.com/images/xrBIfmgn1h2cVcJhWqfLqmQefE.jpeg\",srcSet:\"https://framerusercontent.com/images/xrBIfmgn1h2cVcJhWqfLqmQefE.jpeg?scale-down-to=512 512w,https://framerusercontent.com/images/xrBIfmgn1h2cVcJhWqfLqmQefE.jpeg 805w\",style:{aspectRatio:\"805 / 381\"},width:\"402\"}),/*#__PURE__*/n(\"p\",{children:[/*#__PURE__*/e(\"em\",{children:\"You can freeze the initial layers of the pre-trained model to preserve the learned information and train a new model with the remaining layers. Source: \"}),/*#__PURE__*/e(t,{href:\"https://www.machinelearningnuggets.com/transfer-learning-guide/#how-can-you-use-pre-trained-models\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!0,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:/*#__PURE__*/e(\"em\",{children:\"Machine Learning Nuggets\"})})})]}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"br\",{className:\"trailing-break\"})}),/*#__PURE__*/e(\"h4\",{children:\"4. Feature Extraction\"}),/*#__PURE__*/e(\"p\",{children:\"Using the identified layers, the pre-trained model is employed to extract features from the new dataset. This process leverages the general representations learned during the pre-training phase, providing a head start in understanding the new data. For example, features extracted from a pre-trained CNN can be used for tasks like object detection, segmentation, or even different types of image classification.\"}),/*#__PURE__*/e(\"h4\",{children:\"5. Fine-Tuning\"}),/*#__PURE__*/e(\"p\",{children:\"This step adjusts the model's weights and parameters to better suit the new task's specific requirements. It may presume to make some of the initially frozen layers trainable, aiming to preserve the valuable knowledge gained from the pre-training phase while optimizing the model for the new challenge.\"}),/*#__PURE__*/e(\"img\",{alt:\"Fine-Tuning\",className:\"framer-image\",height:\"221\",src:\"https://framerusercontent.com/images/T4ZFqTWSygyWB9MOy1Bkfhu0.jpeg\",srcSet:\"https://framerusercontent.com/images/T4ZFqTWSygyWB9MOy1Bkfhu0.jpeg?scale-down-to=512 512w,https://framerusercontent.com/images/T4ZFqTWSygyWB9MOy1Bkfhu0.jpeg 962w\",style:{aspectRatio:\"962 / 442\"},width:\"481\"}),/*#__PURE__*/n(\"p\",{children:[/*#__PURE__*/e(\"em\",{children:\"Fine-tuning (b) and feature extraction (c). Source: \"}),/*#__PURE__*/e(t,{href:\"https://arxiv.org/pdf/1606.09282\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!0,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:/*#__PURE__*/e(\"em\",{children:\"Learning without forgetting (PDF)\"})})})]}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"br\",{className:\"trailing-break\"})}),/*#__PURE__*/e(\"p\",{children:\"This structured approach allows for efficient adaptation and enhanced performance in various machine learning domains, from image processing and natural language understanding to more specialized tasks like medical diagnosis or financial forecasting.\"}),/*#__PURE__*/e(\"p\",{children:\"Transfer learning does not always require the use of a third-party pre-trained model. In some cases, the initial model can be trained from scratch on an available related dataset. This approach is particularly helpful when the business has access to relevant data that is not publicly available.\"}),/*#__PURE__*/e(\"h2\",{children:\"Types of Transfer Learning\"}),/*#__PURE__*/e(\"p\",{children:\"Transfer learning encompasses various methods of knowledge adaptation and performance enhancement. Here are some of the main types of transfer learning and their key characteristics.\"}),/*#__PURE__*/e(\"h3\",{children:\"Inductive Transfer Learning\"}),/*#__PURE__*/e(\"p\",{children:\"The source and target tasks are different, but the source model helps improve the target task\u2019s performance. It concerns using a model trained for object detection to improve performance on image segmentation or another related task.\"}),/*#__PURE__*/e(\"h3\",{children:\"Transductive Transfer Learning\"}),/*#__PURE__*/e(\"p\",{children:\"The source and target tasks are the same, but the domains differ. It covers adapting a spam detection model to work effectively on a different dataset.\"}),/*#__PURE__*/e(\"h3\",{children:\"Unsupervised Transfer Learning\"}),/*#__PURE__*/e(\"p\",{children:\"Both the source and target tasks are unsupervised, and knowledge transfer aims to enhance feature learning. For example, unsupervised learning techniques can be used on a large text corpus to improve feature extraction for clustering tasks on a different, smaller text corpus.\"}),/*#__PURE__*/e(\"h3\",{children:\"Domain Adaptation\"}),/*#__PURE__*/e(\"p\",{children:\"The source and target domains have the same feature spaces but different distributions. A straightforward example involves adapting a speech recognition model trained in American English to recognize British English.\"}),/*#__PURE__*/e(\"h3\",{children:\"Multi-task Learning\"}),/*#__PURE__*/e(\"p\",{children:\"Several tasks from the same domain are learned simultaneously without distinguishing between source and target tasks. For instance, we may train a model to perform both language translation and part-of-speech tagging using the same text data, improving performance through shared knowledge.\"}),/*#__PURE__*/e(\"h3\",{children:\"One-shot Learning\"}),/*#__PURE__*/e(\"p\",{children:\"A classification task where only one or a few examples are available for learning and classifying many new examples in the future. It concerns recognizing a new person's face based on just one photo.\"}),/*#__PURE__*/e(\"h3\",{children:\"Zero-shot Learning\"}),/*#__PURE__*/e(\"p\",{children:\"Transfer learning using zero instances of a class, relying on additional data during training to understand unseen data. For example, leveraging semantic relationships between known and unknown classes can help the model classify images of animals it has never seen before.\"}),/*#__PURE__*/e(\"img\",{alt:\"Zero-shot Learning\",className:\"framer-image\",height:\"311\",src:\"https://framerusercontent.com/images/V9ZUQhRWpio727ha5CvkCX6ci8A.jpeg\",srcSet:\"https://framerusercontent.com/images/V9ZUQhRWpio727ha5CvkCX6ci8A.jpeg?scale-down-to=512 512w,https://framerusercontent.com/images/V9ZUQhRWpio727ha5CvkCX6ci8A.jpeg 1024w\",style:{aspectRatio:\"1024 / 622\"},width:\"512\"}),/*#__PURE__*/n(\"p\",{children:[/*#__PURE__*/e(\"em\",{children:\"Distinction between usual machine learning setting and transfer learning, and positioning of domain adaptation. Source: \"}),/*#__PURE__*/e(t,{href:\"https://www.semanticscholar.org/paper/A-survey-on-domain-adaptation-theory-Redko-Morvant/082d70e93af82d3ad289795312c717e7f1858e5f\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!0,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:/*#__PURE__*/e(\"em\",{children:\"Semantic Scholar\"})})})]}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"br\",{className:\"trailing-break\"})}),/*#__PURE__*/e(\"p\",{children:\"Each of these types of transfer learning addresses different scenarios of reusing previously acquired knowledge to improve performance on new tasks or in new domains.\"}),/*#__PURE__*/e(\"h2\",{children:\"Transfer Learning Practical Cases\"}),/*#__PURE__*/e(\"p\",{children:\"Transfer learning has proven practical in diverse fields, reducing training time and resource requirements. Transfer learning is particularly beneficial in the following scenarios:\"}),/*#__PURE__*/n(\"ul\",{style:{\"--framer-font-size\":\"18px\",\"--framer-text-alignment\":\"start\",\"--framer-text-color\":\"rgb(30, 33, 38)\",\"--framer-text-stroke-width\":\"0px\",\"--framer-text-transform\":\"none\"},children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/n(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"Lack of Data:\"}),\" when the target task has insufficient labeled data for training a model from scratch.\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/n(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"Similar Domains:\"}),\" when the source and target domains are similar or share common features.\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/n(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"Complex Models:\"}),\" when you need a deep learning model with large architecture that is too expensive to train from scratch.\"]})})]}),/*#__PURE__*/e(\"p\",{children:\"Transfer learning is mainly associated with computer vision and NLP tasks, although it\u2019s applied to various projects across multiple domains.\"}),/*#__PURE__*/e(\"h3\",{children:\"Transfer Learning in Natural Language Processing\"}),/*#__PURE__*/e(\"p\",{children:\"Transfer learning has been transformative in this field with pre-trained language models like BERT, GPT, and RoBERTa, which have set new benchmarks in various NLP tasks:\"}),/*#__PURE__*/n(\"ul\",{style:{\"--framer-font-size\":\"18px\",\"--framer-text-alignment\":\"start\",\"--framer-text-color\":\"rgb(30, 33, 38)\",\"--framer-text-stroke-width\":\"0px\",\"--framer-text-transform\":\"none\"},children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/n(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"Text Classification:\"}),\" Fine-tuning BERT on news categorization datasets.\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/n(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"Named Entity Recognition (NER):\"}),\" Using pre-trained models for entity extraction with minimal training data.\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/n(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"Machine Translation:\"}),\" Leveraging models like mBERT for multilingual translation tasks.\"]})})]}),/*#__PURE__*/n(\"p\",{children:[\"The platform \",/*#__PURE__*/e(t,{href:\"https://www.startus-insights.com/\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"StartUs Insights\"})}),\" listed transfer learning as one of the top-9 NLP trends for 2023. In particular, \",/*#__PURE__*/e(t,{href:\"https://www.startus-insights.com/innovators-guide/natural-language-processing-trends/\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"their research mentions\"})}),\" the startup QuillBot which makes an \",/*#__PURE__*/e(t,{href:\"https://www.elegantthemes.com/blog/business/quillbot-ai-review\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:\"AI-powered paraphrasing tool\"})}),\". Transfer learning powers its text slider and thesaurus that suggests synonyms. The tool also checks grammar, creates summaries, generates citations, and checks plagiarism.\"]}),/*#__PURE__*/e(\"img\",{alt:\"Transfer Learning in Natural Language Processing\",className:\"framer-image\",height:\"408\",src:\"https://framerusercontent.com/images/xCZX6qKV0L41RY1wXxd5fjFlOs0.jpeg\",srcSet:\"https://framerusercontent.com/images/xCZX6qKV0L41RY1wXxd5fjFlOs0.jpeg?scale-down-to=512 512w,https://framerusercontent.com/images/xCZX6qKV0L41RY1wXxd5fjFlOs0.jpeg 1000w\",style:{aspectRatio:\"1000 / 817\"},width:\"500\"}),/*#__PURE__*/n(\"p\",{children:[/*#__PURE__*/e(\"em\",{children:\"A figure summarizing some of the QuillBot experiments. Source: \"}),/*#__PURE__*/e(t,{href:\"https://quillbot.com/blog/compressing-large-language-generation-models-with-sequence-level-knowledge-distillation/\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!0,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:/*#__PURE__*/e(\"em\",{children:\"QuillBlog\"})})})]}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"br\",{className:\"trailing-break\"})}),/*#__PURE__*/e(\"h3\",{children:\"Transfer Learning in Computer Vision\"}),/*#__PURE__*/e(\"p\",{children:\"Computer vision is another field where transfer learning has made significant impacts:\"}),/*#__PURE__*/n(\"ul\",{style:{\"--framer-font-size\":\"18px\",\"--framer-text-alignment\":\"start\",\"--framer-text-color\":\"rgb(30, 33, 38)\",\"--framer-text-stroke-width\":\"0px\",\"--framer-text-transform\":\"none\"},children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/n(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"Image Classification:\"}),\" Using models like VGG, ResNet, and EfficientNet pre-trained on ImageNet for various classification tasks.\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/n(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"Object Detection:\"}),\" Fine-tuning models like YOLO or Faster R-CNN on specific detection tasks.\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/n(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"Semantic Segmentation:\"}),\" Applying models like U-Net pre-trained on medical image datasets for different segmentation tasks.\"]})})]}),/*#__PURE__*/e(\"img\",{alt:\"Transfer Learning in Computer Vision 1\",className:\"framer-image\",height:\"152\",src:\"https://framerusercontent.com/images/8OUeHCcLCBXI6w0LoaEoC9J2bc.jpeg\",srcSet:\"https://framerusercontent.com/images/8OUeHCcLCBXI6w0LoaEoC9J2bc.jpeg?scale-down-to=512 512w,https://framerusercontent.com/images/8OUeHCcLCBXI6w0LoaEoC9J2bc.jpeg 850w\",style:{aspectRatio:\"850 / 304\"},width:\"425\"}),/*#__PURE__*/n(\"p\",{children:[/*#__PURE__*/e(\"em\",{children:\"Data characteristics overview from the literature review on transfer learning for medical image classification. Source: \"}),/*#__PURE__*/e(t,{href:\"https://www.researchgate.net/figure/Studies-of-transfer-learning-in-medical-image-classification-over-time-y-axis-with_fig4_359935888\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!0,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:/*#__PURE__*/e(\"em\",{children:\"BMC Medical Imaging\"})})})]}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"br\",{className:\"trailing-break\"})}),/*#__PURE__*/e(\"p\",{children:\"In 2024, a group of scientists from the University of Nottingham suggested a method for real-time heating optimization based on the clothing insulation level classification. They applied transfer learning to teach their model to distinguish between light, medium, and warm clothes people can wear inside.\"}),/*#__PURE__*/e(\"img\",{alt:\"Transfer Learning in Computer Vision 2\",className:\"framer-image\",height:\"302\",src:\"https://framerusercontent.com/images/1iLGn6w4fyDhowAyuwgHTBVtE.jpeg\",srcSet:\"https://framerusercontent.com/images/1iLGn6w4fyDhowAyuwgHTBVtE.jpeg?scale-down-to=512 512w,https://framerusercontent.com/images/1iLGn6w4fyDhowAyuwgHTBVtE.jpeg?scale-down-to=1024 1024w,https://framerusercontent.com/images/1iLGn6w4fyDhowAyuwgHTBVtE.jpeg 1069w\",style:{aspectRatio:\"1069 / 604\"},width:\"534\"}),/*#__PURE__*/n(\"p\",{children:[/*#__PURE__*/e(\"em\",{children:\"The suggested method for detecting and classifying clothing levels for indoor thermal environment control. Source: \"}),/*#__PURE__*/e(t,{href:\"https://www.sciencedirect.com/science/article/pii/S0360132324001197\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!0,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:/*#__PURE__*/e(\"em\",{children:\"Building and Environment\"})})})]}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"br\",{className:\"trailing-break\"})}),/*#__PURE__*/e(\"h3\",{children:\"Transfer Learning in Speech Recognition\"}),/*#__PURE__*/e(\"p\",{children:\"AI assistant developers use transfer learning to improve their voice assistants\u2019 speech recognition capabilities. Models pre-trained on vast amounts of general audio data are fine-tuned with specific voice commands and accents. This helps provide more accurate and context-aware responses, improving user experience in smart home environments.\"}),/*#__PURE__*/e(\"h3\",{children:\"Transfer Learning in Robotics\"}),/*#__PURE__*/e(\"p\",{children:\"OpenAI developed a robotic hand that can manipulate objects with remarkable dexterity. Using reinforcement learning, they initially trained their model in a simulated environment and then transferred this knowledge to the physical robot.\"}),/*#__PURE__*/e(\"img\",{alt:\"Transfer Learning in Robotics\",className:\"framer-image\",height:\"360\",src:\"https://framerusercontent.com/images/TZk9OvzQPxR1UxkbPYSJR2cOBq8.jpeg\",srcSet:\"https://framerusercontent.com/images/TZk9OvzQPxR1UxkbPYSJR2cOBq8.jpeg?scale-down-to=512 512w,https://framerusercontent.com/images/TZk9OvzQPxR1UxkbPYSJR2cOBq8.jpeg?scale-down-to=1024 1024w,https://framerusercontent.com/images/TZk9OvzQPxR1UxkbPYSJR2cOBq8.jpeg 1233w\",style:{aspectRatio:\"1233 / 720\"},width:\"616\"}),/*#__PURE__*/n(\"p\",{children:[/*#__PURE__*/e(\"em\",{children:\"The Dactyl system is trained in simulation and transfers its knowledge to reality, adapting to real-world physics. Source: \"}),/*#__PURE__*/e(t,{href:\"https://openai.com/index/learning-dexterity/\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!0,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:/*#__PURE__*/e(\"em\",{children:\"OpenAI\"})})})]}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"br\",{className:\"trailing-break\"})}),/*#__PURE__*/e(\"p\",{children:\"This transfer from simulation to real-world application enables the robotic hand to perform complex tasks like solving a Rubik\u2019s Cube, which requires fine motor skills and adaptability to various scenarios.\"}),/*#__PURE__*/e(\"h3\",{children:\"Transfer Learning in Financial Forecasting\"}),/*#__PURE__*/e(\"p\",{children:\"Financial institutions use transfer learning to improve stock market prediction models. A model pre-trained on a large corpus of financial data across different markets can be fine-tuned on specific stocks or market conditions. This allows the model to leverage learned patterns and improve the accuracy of predictions, aiding in investment strategies and risk management.\"}),/*#__PURE__*/e(\"img\",{alt:\"Transfer Learning in Financial Forecasting\",className:\"framer-image\",height:\"464\",src:\"https://framerusercontent.com/images/hnSEumrr4TA2Q8pDvdIg72Efhj8.jpeg\",srcSet:\"https://framerusercontent.com/images/hnSEumrr4TA2Q8pDvdIg72Efhj8.jpeg?scale-down-to=512 512w,https://framerusercontent.com/images/hnSEumrr4TA2Q8pDvdIg72Efhj8.jpeg?scale-down-to=1024 1024w,https://framerusercontent.com/images/hnSEumrr4TA2Q8pDvdIg72Efhj8.jpeg 1774w\",style:{aspectRatio:\"1774 / 929\"},width:\"887\"}),/*#__PURE__*/n(\"p\",{children:[/*#__PURE__*/e(\"em\",{children:\"The transfer learning process for models trained on the source task with 379 stock market indices by industry. Source: \"}),/*#__PURE__*/e(t,{href:\"https://ouci.dntb.gov.ua/en/works/4Kyya1J9/\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!0,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:/*#__PURE__*/e(\"em\",{children:\"Expert Systems with Applications\"})})})]}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"br\",{className:\"trailing-break\"})}),/*#__PURE__*/e(\"h2\",{children:\"Transfer Learning Limitations\"}),/*#__PURE__*/e(\"p\",{children:\"The powerful approach of transfer learning still comes with several limitations and challenges.\"}),/*#__PURE__*/e(\"h3\",{children:\"Domain Discrepancy\"}),/*#__PURE__*/e(\"p\",{children:\"One primary limitation is the issue of domain mismatch. Models pre-trained on a particular dataset might not transfer well to a different domain with distinct characteristics. For instance, a model successfully trained on general text data may not perform optimally when fine-tuned on legal or medical texts.\"}),/*#__PURE__*/e(\"p\",{children:\"This domain discrepancy can lead to suboptimal performance, requiring large amounts of domain-specific data to achieve the desired accuracy.\"}),/*#__PURE__*/e(\"h3\",{children:\"Potential Bias\"}),/*#__PURE__*/e(\"p\",{children:\"Additionally, the quality and size of the pre-trained model's data significantly impact the transferability. If the pre-trained data is biased or unrepresentative of the target task, the model might perpetuate these biases, leading to inaccurate or unfair outcomes.\"}),/*#__PURE__*/e(\"img\",{alt:\"Potential Bias\",className:\"framer-image\",height:\"173\",src:\"https://framerusercontent.com/images/Qka80nkuNeHhkaeA3cwB8NKuRew.jpeg\",srcSet:\"https://framerusercontent.com/images/Qka80nkuNeHhkaeA3cwB8NKuRew.jpeg?scale-down-to=512 512w,https://framerusercontent.com/images/Qka80nkuNeHhkaeA3cwB8NKuRew.jpeg?scale-down-to=1024 1024w,https://framerusercontent.com/images/Qka80nkuNeHhkaeA3cwB8NKuRew.jpeg 1068w\",style:{aspectRatio:\"1068 / 346\"},width:\"534\"}),/*#__PURE__*/n(\"p\",{children:[/*#__PURE__*/e(\"em\",{children:\"In 2020, Barreth Zoch and colleagues questioned the efficiency of transfer learning as the dominant approach in machine vision. They stressed it hurts performance when more robust data augmentation is used. Source: \"}),/*#__PURE__*/e(t,{href:\"https://www.semanticscholar.org/paper/Rethinking-Pre-training-and-Self-training-Zoph-Ghiasi/368c72c2298e5f8276398b2cb198702281eac4f8\",motionChild:!0,nodeId:\"eo4RAmtig\",openInNewTab:!0,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(i.a,{children:/*#__PURE__*/e(\"em\",{children:\"Neural Information Processing Systems\"})})})]}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(\"br\",{className:\"trailing-break\"})}),/*#__PURE__*/e(\"h3\",{children:\"Resource Consumption\"}),/*#__PURE__*/e(\"p\",{children:\"Another challenge is the computational and resource requirements for transfer learning. While transfer learning can reduce the need for large training datasets, the initial models often require significant computational power and time to pre-train. Fine-tuning these models, although less intensive than training from scratch, still demands considerable computational resources, especially for large-scale models like BERT or GPT.\"}),/*#__PURE__*/e(\"h2\",{children:\"Final Thoughts\"}),/*#__PURE__*/n(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"Due to its efficiency, transfer\"}),\" learning has become the dominant approach for many tasks in NLP, computer vision, and speech recognition. It excels in scenarios where large-scale annotated data is scarce or costly to obtain, making it invaluable for specialized applications like medical image analysis and financial forecasting.\"]}),/*#__PURE__*/e(\"p\",{children:\"However, it is essential to recognize that transfer learning is not a one-size-fits-all solution. Its success highly depends on the compatibility between the source and target domains, and a significant domain mismatch can limit its effectiveness. Additionally, the resource demands for fine-tuning large pre-trained models can be substantial, necessitating significant computational power and expertise.\"}),/*#__PURE__*/e(\"p\",{children:\"Thus, while transfer learning offers impressive benefits, it should be considered alongside other approaches. Data scientists and engineers must carefully evaluate its suitability for their specific tasks, balancing the potential gains against the inherent limitations and resource requirements.\"})]});\nexport const __FramerMetadata__ = {\"exports\":{\"richText5\":{\"type\":\"variable\",\"annotations\":{\"framerContractVersion\":\"1\"}},\"richText3\":{\"type\":\"variable\",\"annotations\":{\"framerContractVersion\":\"1\"}},\"richText1\":{\"type\":\"variable\",\"annotations\":{\"framerContractVersion\":\"1\"}},\"richText4\":{\"type\":\"variable\",\"annotations\":{\"framerContractVersion\":\"1\"}},\"richText\":{\"type\":\"variable\",\"annotations\":{\"framerContractVersion\":\"1\"}},\"richText6\":{\"type\":\"variable\",\"annotations\":{\"framerContractVersion\":\"1\"}},\"richText2\":{\"type\":\"variable\",\"annotations\":{\"framerContractVersion\":\"1\"}},\"__FramerMetadata__\":{\"type\":\"variable\"}}}"],
  "mappings": "+LAAsJ,IAAMA,EAAsBC,EAAIC,EAAS,CAAC,SAAS,CAAcC,EAAE,IAAI,CAAC,SAAS,0YAA0Y,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,kNAAkN,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,qBAAqB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4VAA4V,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,omBAAomB,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,yBAAyB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,oYAAoY,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,oBAAoB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,qfAAqf,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,oBAAoB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,yeAAye,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,UAAU,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,yWAAyW,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,0NAA0N,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,iNAAiN,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,0PAA0P,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,qCAAqC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,0QAA0Q,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2QAA2Q,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,QAAQ,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,yMAAsNE,EAAEC,EAAE,CAAC,KAAK,wBAAwB,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,aAAa,CAAC,CAAC,CAAC,EAAE,wEAAwE,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,oBAAoB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2gBAA2gB,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,2BAA2B,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,UAAU,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,mMAAmM,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,YAAY,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,kNAAkN,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,UAAU,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,iQAAiQ,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,+BAA+B,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8PAA8P,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,QAAQ,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4OAA4O,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,eAAe,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8MAA8M,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,oBAAoB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,wFAAwF,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,sBAAsB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,kdAAkd,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,2BAA2B,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,yWAAyW,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,qVAAqV,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,mBAAmB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+QAA+Q,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,sBAAsB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sgBAAsgB,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,iBAAiB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,kNAAkN,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,ieAAie,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sUAAsU,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,2BAA2B,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,4WAAyXE,EAAE,KAAK,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,4FAA4F,CAAC,EAAeA,EAAEC,EAAE,CAAC,KAAK,wCAAwC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAsBF,EAAE,KAAK,CAAC,SAAS,mBAAmB,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,+FAA+F,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,qBAAqB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,uUAAuU,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,oaAAoa,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,yBAAyB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4HAA4H,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,aAAa,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,iQAAiQ,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,uZAAuZ,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,uYAAuY,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,SAAS,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sXAAsX,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,WAAW,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,kXAAkX,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2bAA2b,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,2BAA2B,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,mPAAmP,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6VAA6V,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,4BAA4B,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sJAAsJ,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,YAAY,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6aAA6a,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,yBAAyB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,0VAA0V,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,iBAAiB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,0eAA0e,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,kBAAkB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8bAA8b,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,yBAAyB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,0FAA0F,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,oLAAoL,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,8MAA8M,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,kBAAkB,CAAC,EAAE,qLAAqL,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,KAAK,CAAC,EAAE,gPAAgP,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,yBAAyB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,ojBAAojB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,gUAAgU,CAAC,CAAC,CAAC,CAAC,EAAeG,EAAuBL,EAAIC,EAAS,CAAC,SAAS,CAAcC,EAAE,IAAI,CAAC,SAAS,2PAA2P,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,cAAc,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8TAA8T,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,mOAAmO,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,ieAAud,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4cAA4c,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8SAA8S,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,4DAA4D,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,ggBAAggB,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,kDAAkD,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,uTAAuT,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,uYAAuY,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,wWAA8V,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,iEAAiE,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,qSAAqS,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,gdAAgd,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+TAA+T,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,oPAAoP,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,iWAAiW,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,uNAAuN,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,mjBAAmjB,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,mEAAmE,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,mhBAA8gB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,oPAAoP,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2gBAA2gB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,yZAAyZ,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,ueAAue,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,YAAY,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4MAA4M,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,mUAAmU,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,uXAAuX,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,kWAAkW,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,oTAAoT,CAAC,CAAC,CAAC,CAAC,EAAeI,EAAuBN,EAAIC,EAAS,CAAC,SAAS,CAAcC,EAAE,IAAI,CAAC,SAAS,8TAA8T,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sUAAsU,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,iBAAiB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6OAA6O,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+PAA+P,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,mWAAmW,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,iWAAiW,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,iYAAiY,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,kBAAkB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sTAAsT,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6WAA6W,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+UAA+U,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,0UAA0U,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,iLAAiL,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8OAA8O,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,kBAAkB,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,oBAAoB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,0SAA0S,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,wSAAwS,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,qTAAqT,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,gOAAgO,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,iCAAiC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6QAA6Q,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sRAAsR,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,iUAAiU,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,iWAAiW,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,ojBAAojB,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,UAAU,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,oQAAoQ,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2PAA2P,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,oaAAoa,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,aAAa,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,uRAAuR,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,qQAAqQ,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,meAA8d,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,YAAY,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,kaAAka,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,kZAAkZ,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,mTAAmT,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,YAAY,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,ySAAyS,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,wPAAwP,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,WAAW,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,kVAAkV,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,gCAAgC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2OAA2O,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,gUAAgU,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2aAAsa,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+NAA+N,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sWAAsW,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,qgBAAqgB,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,oBAAoB,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,oBAAoB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,mTAAmT,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,qaAAqa,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,oQAAoQ,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,kPAAkP,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,aAAa,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,0QAA0Q,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,0SAA0S,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,gBAAgB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,0XAA0X,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,mBAAmB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2gBAA2gB,CAAC,CAAC,CAAC,CAAC,EAAeK,EAAuBP,EAAIC,EAAS,CAAC,SAAS,CAAcC,EAAE,IAAI,CAAC,SAAS,kgBAAwf,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,yBAAoB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,oXAAoX,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2fAA2f,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4WAA4W,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,yEAAyE,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,mWAAmW,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+pBAA+pB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,oYAAoY,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,oaAAoa,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,yTAAyT,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,mBAAmB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+SAA+S,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,qBAAqB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,oYAAoY,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,iQAAiQ,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,wSAAwS,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6hBAA6hB,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,2BAA2B,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4LAA4L,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,yNAAyN,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sPAAsP,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4QAA4Q,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,qDAAqD,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,gXAAgX,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,mQAAmQ,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,uDAAuD,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,wOAAwO,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,uPAAuP,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+OAA+O,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sXAAsX,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+NAA+N,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sZAAsZ,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,yTAAyT,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4aAA4a,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,wCAAwC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6WAA6W,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,wYAAwY,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,qQAAgQ,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,oUAAoU,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,yXAAyX,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,uCAAuC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,mVAA8U,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,oXAAoX,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,wTAAwT,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,kUAAkU,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,kQAAkQ,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6WAA6W,CAAC,CAAC,CAAC,CAAC,EAAeM,EAAuBR,EAAIC,EAAS,CAAC,SAAS,CAAcC,EAAE,IAAI,CAAC,SAAS,+QAA+Q,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2ZAA2Z,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,0CAA0C,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,0PAA0P,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,8BAA2CE,EAAEC,EAAE,CAAC,KAAK,mCAAmC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,4EAA4E,CAAC,CAAC,CAAC,EAAE,0IAA0I,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,wKAAwK,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8cAA8c,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4XAA4X,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,kYAAkY,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,yMAAyM,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,mvBAAgwBE,EAAE,KAAK,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,CAAC,EAAE,kkBAAkkB,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8NAA8N,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,gCAAgC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,mTAAmT,CAAC,EAAeF,EAAE,KAAK,CAAC,MAAM,CAAC,qBAAqB,OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,2BAA2B,OAAO,6BAA6B,MAAM,0BAA0B,MAAM,EAAE,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,sBAAsB,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,yBAAyB,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,UAAU,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,SAAS,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,iWAAiW,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,yVAAyV,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,UAAU,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6aAA6a,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,6BAA6B,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,sBAAsB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,ypBAAypB,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,6BAA6B,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2fAA2f,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,eAAe,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4dAA4d,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,iBAAiB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+wBAA+wB,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,UAAU,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sVAAsV,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,0PAA0P,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sXAAsX,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,mBAAmB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,qSAAqS,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,wBAAwB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,yRAAyR,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,mBAAmB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8fAA8f,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,yBAAyB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,oZAAoZ,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+NAA+N,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,uBAAuB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2WAA2W,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,iBAAiB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8GAA8G,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,6CAA6C,CAAC,EAAE,0UAA0U,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,oCAAoC,CAAC,EAAE,uOAAuO,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,kBAAkB,CAAC,EAAE,iRAAiR,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,sBAAsB,CAAC,EAAE,6SAA6S,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,mBAAmB,CAAC,EAAE,2YAA2Y,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,sBAAsB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,qQAAqQ,CAAC,EAAeF,EAAE,KAAK,CAAC,MAAM,CAAC,qBAAqB,OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,2BAA2B,OAAO,6BAA6B,MAAM,0BAA0B,MAAM,EAAE,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,mJAAmJ,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,sHAAsH,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,+PAA+P,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,wQAAwQ,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,gLAAgL,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,0NAA0N,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,0PAA0P,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,wBAAwB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,uZAAuZ,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4RAA4R,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,oTAAoT,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,yQAAyQ,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,oQAAoQ,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sVAAsV,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+NAA+N,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,eAAe,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,oXAAoX,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8aAAoa,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,kVAAkV,CAAC,CAAC,CAAC,CAAC,EAAeO,EAAuBT,EAAIC,EAAS,CAAC,SAAS,CAAcC,EAAE,IAAI,CAAC,SAAS,yZAAyZ,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,OAAoBE,EAAEC,EAAE,CAAC,KAAK,iCAAiC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,oBAAoB,CAAC,CAAC,CAAC,EAAE,+RAA+R,CAAC,CAAC,EAAeJ,EAAE,IAAI,CAAC,SAAS,CAAC,gFAA6FE,EAAEC,EAAE,CAAC,KAAK,iDAAiD,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,SAAS,CAAC,CAAC,CAAC,EAAE,QAAqBF,EAAEC,EAAE,CAAC,KAAK,8BAA8B,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,MAAM,CAAC,CAAC,CAAC,EAAE,sMAAiM,CAAC,CAAC,EAAeJ,EAAE,IAAI,CAAC,SAAS,CAAC,0PAAuQE,EAAEC,EAAE,CAAC,KAAK,mCAAmC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,kBAAkB,CAAC,CAAC,CAAC,EAAE,kHAAkH,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,wSAA8R,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6KAAmK,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,6CAA6C,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,ggBAAggB,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,sCAAmDE,EAAEC,EAAE,CAAC,KAAK,0DAA0D,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,MAAM,CAAC,CAAC,CAAC,EAAE,KAAkBF,EAAEC,EAAE,CAAC,KAAK,+CAA+C,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,OAAO,CAAC,CAAC,CAAC,EAAE,QAAqBF,EAAEC,EAAE,CAAC,KAAK,wDAAwD,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,IAAI,CAAC,CAAC,CAAC,EAAE,kgBAAkgB,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,sWAAuW,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,0CAA0C,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4mBAAumB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4PAA4P,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,mIAAgJE,EAAEC,EAAE,CAAC,KAAK,kCAAkC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,gBAAgB,CAAC,CAAC,CAAC,EAAE,oJAAuJF,EAAEC,EAAE,CAAC,KAAK,0DAA0D,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,MAAM,CAAC,CAAC,CAAC,EAAE,uBAAoCF,EAAEC,EAAE,CAAC,KAAK,2EAA2E,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,oDAAoD,CAAC,CAAC,CAAC,EAAE,8HAAyH,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,gOAAgO,CAAC,EAAeF,EAAE,KAAK,CAAC,MAAM,CAAC,qBAAqB,OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,6BAA6B,MAAM,0BAA0B,MAAM,EAAE,SAAS,CAAcA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAS,CAAcA,EAAE,IAAI,CAAC,SAAS,CAAC,sPAAmQE,EAAEC,EAAE,CAAC,KAAK,mCAAmC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,cAAc,CAAC,CAAC,CAAC,EAAE,mIAAgJF,EAAEC,EAAE,CAAC,KAAK,mCAAmC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,qCAAqC,CAAC,CAAC,CAAC,EAAE,gOAAsN,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAsBA,EAAE,KAAK,CAAC,UAAU,gBAAgB,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAC,udAAoeE,EAAEC,EAAE,CAAC,KAAK,sEAAsE,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,uBAAuB,CAAC,CAAC,CAAC,EAAE,sPAA4O,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,wgBAAwgB,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,ofAAweE,EAAEC,EAAE,CAAC,KAAK,mCAAmC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,eAAe,CAAC,CAAC,CAAC,EAAE,8PAAyP,CAAC,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,iCAAiC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,oDAAiEE,EAAEC,EAAE,CAAC,KAAK,mFAAmF,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,eAAe,CAAC,CAAC,CAAC,EAAE,mTAAmT,CAAC,CAAC,EAAeJ,EAAE,IAAI,CAAC,SAAS,CAAC,gbAA6bE,EAAEC,EAAE,CAAC,KAAK,4BAA4B,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,UAAU,CAAC,CAAC,CAAC,EAAE,SAAsBF,EAAEC,EAAE,CAAC,KAAK,gEAAgE,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,SAAS,CAAC,CAAC,CAAC,EAAE,0BAA0B,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,6WAAwW,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,mdAAmd,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8TAA8T,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,YAAY,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sjBAAsjB,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,sgBAAmhBE,EAAEC,EAAE,CAAC,KAAK,qBAAqB,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,QAAQ,CAAC,CAAC,CAAC,EAAE,iKAA8KF,EAAEC,EAAE,CAAC,KAAK,gCAAgC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,MAAM,CAAC,CAAC,CAAC,EAAE,wBAAwB,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeM,EAAuBV,EAAIC,EAAS,CAAC,SAAS,CAAcC,EAAE,IAAI,CAAC,SAAS,qdAAqd,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+aAA+a,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sQAAsQ,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,kEAAkE,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4PAA4P,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,wMAAwM,CAAC,EAAeA,EAAE,MAAM,CAAC,IAAI,qEAAqE,UAAU,eAAe,OAAO,MAAM,IAAI,wEAAwE,OAAO,0KAA0K,MAAM,CAAC,YAAY,WAAW,EAAE,MAAM,KAAK,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,SAAS,iFAAiF,CAAC,EAAeA,EAAEC,EAAE,CAAC,KAAK,0FAA0F,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAsBF,EAAE,KAAK,CAAC,SAAS,0BAA0B,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,KAAK,CAAC,UAAU,gBAAgB,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4NAAuN,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,qNAAqN,CAAC,EAAeA,EAAE,MAAM,CAAC,IAAI,qEAAqE,UAAU,eAAe,OAAO,MAAM,IAAI,sEAAsE,OAAO,sKAAsK,MAAM,CAAC,YAAY,WAAW,EAAE,MAAM,KAAK,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,SAAS,kJAAkJ,CAAC,EAAeA,EAAEC,EAAE,CAAC,KAAK,4DAA4D,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAsBF,EAAE,KAAK,CAAC,SAAS,UAAU,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,KAAK,CAAC,UAAU,gBAAgB,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,6BAA6B,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,qXAAqX,CAAC,EAAeA,EAAE,MAAM,CAAC,IAAI,gCAAgC,UAAU,eAAe,OAAO,MAAM,IAAI,qEAAqE,OAAO,6VAA6V,MAAM,CAAC,YAAY,aAAa,EAAE,MAAM,MAAM,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,SAAS,gKAAsJ,CAAC,EAAeA,EAAEC,EAAE,CAAC,KAAK,0CAA0C,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAsBF,EAAE,KAAK,CAAC,SAAS,qBAAqB,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,KAAK,CAAC,UAAU,gBAAgB,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8SAA8S,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4aAA4a,CAAC,EAAeA,EAAE,MAAM,CAAC,IAAI,gCAAgC,UAAU,eAAe,OAAO,MAAM,IAAI,wEAAwE,OAAO,0QAA0Q,MAAM,CAAC,YAAY,YAAY,EAAE,MAAM,KAAK,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,SAAS,mOAAmO,CAAC,EAAeA,EAAEC,EAAE,CAAC,KAAK,8QAA8Q,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAsBF,EAAE,KAAK,CAAC,SAAS,iEAAiE,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,KAAK,CAAC,UAAU,gBAAgB,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2YAAsY,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,8BAA8B,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,qKAAqK,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,kCAAkC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,qYAAqY,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,2BAA2B,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,oWAAoW,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,gCAAgC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,yTAAyT,CAAC,EAAeA,EAAE,MAAM,CAAC,IAAI,8BAA8B,UAAU,eAAe,OAAO,MAAM,IAAI,uEAAuE,OAAO,wKAAwK,MAAM,CAAC,YAAY,WAAW,EAAE,MAAM,KAAK,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,SAAS,0JAA0J,CAAC,EAAeA,EAAEC,EAAE,CAAC,KAAK,qGAAqG,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAsBF,EAAE,KAAK,CAAC,SAAS,0BAA0B,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,KAAK,CAAC,UAAU,gBAAgB,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,uBAAuB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6ZAA6Z,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,gBAAgB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,gTAAgT,CAAC,EAAeA,EAAE,MAAM,CAAC,IAAI,cAAc,UAAU,eAAe,OAAO,MAAM,IAAI,qEAAqE,OAAO,oKAAoK,MAAM,CAAC,YAAY,WAAW,EAAE,MAAM,KAAK,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,SAAS,sDAAsD,CAAC,EAAeA,EAAEC,EAAE,CAAC,KAAK,mCAAmC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAsBF,EAAE,KAAK,CAAC,SAAS,mCAAmC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,KAAK,CAAC,UAAU,gBAAgB,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4PAA4P,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,ySAAyS,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,4BAA4B,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,wLAAwL,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,6BAA6B,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,gPAA2O,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,gCAAgC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,yJAAyJ,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,gCAAgC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sRAAsR,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,mBAAmB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,0NAA0N,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,qBAAqB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,oSAAoS,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,mBAAmB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,yMAAyM,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,oBAAoB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,mRAAmR,CAAC,EAAeA,EAAE,MAAM,CAAC,IAAI,qBAAqB,UAAU,eAAe,OAAO,MAAM,IAAI,wEAAwE,OAAO,2KAA2K,MAAM,CAAC,YAAY,YAAY,EAAE,MAAM,KAAK,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,SAAS,0HAA0H,CAAC,EAAeA,EAAEC,EAAE,CAAC,KAAK,oIAAoI,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAsBF,EAAE,KAAK,CAAC,SAAS,kBAAkB,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,KAAK,CAAC,UAAU,gBAAgB,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,wKAAwK,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,mCAAmC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sLAAsL,CAAC,EAAeF,EAAE,KAAK,CAAC,MAAM,CAAC,qBAAqB,OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,6BAA6B,MAAM,0BAA0B,MAAM,EAAE,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,wFAAwF,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,kBAAkB,CAAC,EAAE,2EAA2E,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,2GAA2G,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,oJAA+I,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,kDAAkD,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2KAA2K,CAAC,EAAeF,EAAE,KAAK,CAAC,MAAM,CAAC,qBAAqB,OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,6BAA6B,MAAM,0BAA0B,MAAM,EAAE,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,sBAAsB,CAAC,EAAE,oDAAoD,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,iCAAiC,CAAC,EAAE,6EAA6E,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,sBAAsB,CAAC,EAAE,mEAAmE,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,gBAA6BE,EAAEC,EAAE,CAAC,KAAK,oCAAoC,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,kBAAkB,CAAC,CAAC,CAAC,EAAE,qFAAkGF,EAAEC,EAAE,CAAC,KAAK,wFAAwF,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,yBAAyB,CAAC,CAAC,CAAC,EAAE,wCAAqDF,EAAEC,EAAE,CAAC,KAAK,iEAAiE,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAS,8BAA8B,CAAC,CAAC,CAAC,EAAE,+KAA+K,CAAC,CAAC,EAAeF,EAAE,MAAM,CAAC,IAAI,mDAAmD,UAAU,eAAe,OAAO,MAAM,IAAI,wEAAwE,OAAO,2KAA2K,MAAM,CAAC,YAAY,YAAY,EAAE,MAAM,KAAK,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,SAAS,iEAAiE,CAAC,EAAeA,EAAEC,EAAE,CAAC,KAAK,qHAAqH,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAsBF,EAAE,KAAK,CAAC,SAAS,WAAW,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,KAAK,CAAC,UAAU,gBAAgB,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,sCAAsC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,wFAAwF,CAAC,EAAeF,EAAE,KAAK,CAAC,MAAM,CAAC,qBAAqB,OAAO,0BAA0B,QAAQ,sBAAsB,kBAAkB,6BAA6B,MAAM,0BAA0B,MAAM,EAAE,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,uBAAuB,CAAC,EAAE,4GAA4G,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,mBAAmB,CAAC,EAAE,4EAA4E,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,wBAAwB,CAAC,EAAE,qGAAqG,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,MAAM,CAAC,IAAI,yCAAyC,UAAU,eAAe,OAAO,MAAM,IAAI,uEAAuE,OAAO,wKAAwK,MAAM,CAAC,YAAY,WAAW,EAAE,MAAM,KAAK,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,SAAS,0HAA0H,CAAC,EAAeA,EAAEC,EAAE,CAAC,KAAK,wIAAwI,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAsBF,EAAE,KAAK,CAAC,SAAS,qBAAqB,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,KAAK,CAAC,UAAU,gBAAgB,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,kTAAkT,CAAC,EAAeA,EAAE,MAAM,CAAC,IAAI,yCAAyC,UAAU,eAAe,OAAO,MAAM,IAAI,sEAAsE,OAAO,oQAAoQ,MAAM,CAAC,YAAY,YAAY,EAAE,MAAM,KAAK,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,SAAS,qHAAqH,CAAC,EAAeA,EAAEC,EAAE,CAAC,KAAK,sEAAsE,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAsBF,EAAE,KAAK,CAAC,SAAS,0BAA0B,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,KAAK,CAAC,UAAU,gBAAgB,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,yCAAyC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8VAAyV,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,+BAA+B,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+OAA+O,CAAC,EAAeA,EAAE,MAAM,CAAC,IAAI,gCAAgC,UAAU,eAAe,OAAO,MAAM,IAAI,wEAAwE,OAAO,0QAA0Q,MAAM,CAAC,YAAY,YAAY,EAAE,MAAM,KAAK,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,SAAS,6HAA6H,CAAC,EAAeA,EAAEC,EAAE,CAAC,KAAK,+CAA+C,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAsBF,EAAE,KAAK,CAAC,SAAS,QAAQ,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,KAAK,CAAC,UAAU,gBAAgB,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,qNAAgN,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,4CAA4C,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sXAAsX,CAAC,EAAeA,EAAE,MAAM,CAAC,IAAI,6CAA6C,UAAU,eAAe,OAAO,MAAM,IAAI,wEAAwE,OAAO,0QAA0Q,MAAM,CAAC,YAAY,YAAY,EAAE,MAAM,KAAK,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,SAAS,yHAAyH,CAAC,EAAeA,EAAEC,EAAE,CAAC,KAAK,8CAA8C,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAsBF,EAAE,KAAK,CAAC,SAAS,kCAAkC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,KAAK,CAAC,UAAU,gBAAgB,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,+BAA+B,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,iGAAiG,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,oBAAoB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sTAAsT,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8IAA8I,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,gBAAgB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2QAA2Q,CAAC,EAAeA,EAAE,MAAM,CAAC,IAAI,iBAAiB,UAAU,eAAe,OAAO,MAAM,IAAI,wEAAwE,OAAO,0QAA0Q,MAAM,CAAC,YAAY,YAAY,EAAE,MAAM,KAAK,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,SAAS,yNAAyN,CAAC,EAAeA,EAAEC,EAAE,CAAC,KAAK,uIAAuI,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,UAAU,CAAC,EAAE,QAAQ,oBAAoB,aAAa,GAAG,SAAsBD,EAAEE,EAAE,EAAE,CAAC,SAAsBF,EAAE,KAAK,CAAC,SAAS,uCAAuC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAsBA,EAAE,KAAK,CAAC,UAAU,gBAAgB,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,sBAAsB,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,gbAAgb,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAS,gBAAgB,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,iCAAiC,CAAC,EAAE,4SAA4S,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,sZAAsZ,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,ySAAyS,CAAC,CAAC,CAAC,CAAC,EACv6uIS,EAAqB,CAAC,QAAU,CAAC,UAAY,CAAC,KAAO,WAAW,YAAc,CAAC,sBAAwB,GAAG,CAAC,EAAE,UAAY,CAAC,KAAO,WAAW,YAAc,CAAC,sBAAwB,GAAG,CAAC,EAAE,UAAY,CAAC,KAAO,WAAW,YAAc,CAAC,sBAAwB,GAAG,CAAC,EAAE,UAAY,CAAC,KAAO,WAAW,YAAc,CAAC,sBAAwB,GAAG,CAAC,EAAE,SAAW,CAAC,KAAO,WAAW,YAAc,CAAC,sBAAwB,GAAG,CAAC,EAAE,UAAY,CAAC,KAAO,WAAW,YAAc,CAAC,sBAAwB,GAAG,CAAC,EAAE,UAAY,CAAC,KAAO,WAAW,YAAc,CAAC,sBAAwB,GAAG,CAAC,EAAE,mBAAqB,CAAC,KAAO,UAAU,CAAC,CAAC",
  "names": ["richText", "u", "x", "p", "Link", "motion", "richText1", "richText2", "richText3", "richText4", "richText5", "richText6", "__FramerMetadata__"]
}
