{
  "version": 3,
  "sources": ["ssg:https://framerusercontent.com/modules/mDkIT313K0Wt5oSoVDyx/WOT4pYAofN0bzLPakFji/X7VlByTfx-25.js"],
  "sourcesContent": ["import{jsx as e,jsxs as t}from\"react/jsx-runtime\";import{Link as o}from\"framer\";import{motion as n}from\"framer-motion\";import*as a from\"react\";export const richText=/*#__PURE__*/t(a.Fragment,{children:[/*#__PURE__*/t(\"p\",{children:[\"[00:00:22] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Hello everyone, welcome back to the AI Native Dev. Today we have a really exciting guest who I've known for the past five, six years now, and, I've had amazing conversation with and that's Tamar Yehoshua, Tamar, thanks for coming onto the show.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:00:35] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" Thanks so much for having me.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:00:36] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Tamar is, is amazing in various ways, and had an illustrious career, right?\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:00:41] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" maybe put some highlights from that, from that sort of, engineering and product leadership career, which include, being a VP at Amazon or A9, being a VP at Google, notably on Search, as part of the journey, which is relevant to what you're doing today. You were Chief Product Officer at Slack, where we met because you also became a board member, at, at Snyk at the time.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:00:59] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" and now you are the president of products and technology at, at Glean. is that right? Am I missing anything important in the, in the journey?\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:01:08] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" That's awesome. 
And the most important part was being on the board at Snyk where I got to meet you.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:01:13] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Yeah, this was definitely a fun, fun highlights times on it. And, and got conversations that span well past the, the Snyk context. Today, as we get going, we'll talk about, AI in the enterprise. We'll talk about building products with AI context. We'll talk about figuring out how to, how to wrangle them, in the first place.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:01:30] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Maybe start us off by just explaining a little bit, what does Glean do?\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:01:34] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" So Glean is an enterprise AI platform. What we do is we read the content of all of your SaaS applications, we ingest information from Microsoft Office 365, from Google, from Slack, from Salesforce, from JIRA, and make it easy to find information across all of your SaaS tools.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:01:53] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" And also to ask questions in natural language. So you can think of it as ChatGPT for your enterprise and ChatGPT, you ask questions about the world. And in Glean, you can ask questions about your enterprise proprietary information within your enterprise. And we also have a platform that you can access all of this information.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:02:09] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" We call it the knowledge graph of your enterprise, through our APIs and have a platform also for building no code applications. 
So anything that you want to do, that's building AI on top of all of the information that's stored in all of these different SaaS apps, you can do through Glean.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:02:27] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Yeah. So super interesting. And the breadth here is also interesting. So you're, first of all, just clarification. Is it more about discovering, a piece of information that's scattered somewhere across the, the, the enterprise systems and linking to them, more kind of search orientation, or is it more around you actually processing and understanding those things and bringing back some summarized answer for it?\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:02:48] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" So both, so you can use it for both. So let's say I'm going on a customer call and I want to know who's the account executive for it. I can ask Glean really simply. Who's the AE for this account? Or who's the CSM? I can also ask what were the latest outages? I can say, write me a brief, a customer brief for this, for this meeting.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:03:09] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" And then it'll go through everything. It'll go through what were the latest meetings? What's the status of the account in Salesforce? What's the, what were the outages that I should know? And it'll put that together, I have to write the prompt that will put it together in the way that I want to consume it.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:03:24] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" And it will summarize that and give me the information. I do this often, like any prospect, tell me about the prospect, what stage is it in? What should I know? And now I don't have to have a CSM write a briefing for me, I can do it myself. So that's an example. 
So it does both. Any kind of capabilities that ChatGPT would have, we can do through Glean, and we keep on adding functionality as well to be able to do better at summarization and insights.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:03:52] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Got it. So it sounds like it is primarily the ChatGPT, like you've used that analogy. So it's more the ChatGPT than the Google for your\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:03:59] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" Yeah, so we have both interfaces. So we have an interface that looks like Google search, but over your, your enterprise. So we have two different, the enterprise search and the chat interface. And then we have what we call AI, AI answers. So think of Google introduced AI overviews recently, where you can get an AI overview in Slack.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:04:18] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" We have the equivalent of AI answers in the search interface. So that kind of brings them together as well. And then we also have the equivalent of custom GPTs that you can do on ChatGPT for a specific use case. Those are called Glean Apps.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:04:34] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Got it. And they extend and personalize the, I guess kind of the system prompts or the ability to, to represent some subset of the enterprise data and in some form of interaction, like ChatGPT.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:04:47] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" Exactly, so a good example of a Glean App is most of our customers want to build an HR app to say these are the canonical documents for HR. So let's say somebody wrote an answer in a Slack channel to a question that was wrong. It wouldn't pick that up, but here are all the HR apps. 
So here's your knowledge base you should be looking at.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:05:05] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" Here's when it should trigger. Here's the kind of, prompts that it should use, and it's specialized for that. And Glean will actually auto route. So if you ask a question in Glean and say, what's our vacation policy, it will auto route to the Glean app, the HR app. And then you can also, the HR team who built it, so Glean app can build actions to actually file your vacation through that Glean app.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:05:28] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" And what's nice about it is that it is no code. So an HR person who does not know how to code can build a Glean app.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:05:34] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Interesting. So I guess we'll come back to the, to the building piece. Cause I think that's really interesting. the search angle of it, just to say I guess the, maybe this is still the Snyk DNA piece of it, but like immediately I get a little bit worried on the security front of that, it implies a system that is very knowledgeable and you're telling me stuff about a client, but how do you know if I'm allowed to do that.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:05:56] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" And I think that's a thorny domain at the moment in, in the, in the world of AI, I guess, how do you, how do you think or handle compartmentalizing who should have access to what data?\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:06:09] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" So that was probably the most important thing that Glean did in the early days. It took security, privacy policies really seriously. So it didn't start by search and then we'll figure out the privacy later. It started privacy first from the get go. 
So we have over a hundred connectors to all your different SaaS apps and we understand the privacy and governance of each app.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:06:33] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" And we, understand the metadata that tells us, like for Slack, we know which channels are private, public, what are private DMs. For Google Docs, think of it, you've got a Google Doc, and let's say you share it with me, with the permissions, anyone with the link can see it. But now my neighbor over here, you didn't share it with them.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:06:55] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" So they are not supposed to see it. Glean needs to know that I can see it, but they can't. And so how do we know that? We know because we can see your email and your Teams conversations and your Slack conversations. We can know if it was shared with you. If we don't see that it was shared with you. We don't let you have access to that.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:07:13] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" So we really, understand the different policies for each of the SaaS apps, and we make sure that we abide by them. And it's very rigorous in what we do. But now as you, like you, with your CISO mind, we'll say. Wait a minute, maybe I don't want everyone in the organization to have access to all information that is not locked down because there might be some people who created a document and didn't do the permissions right.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:07:41] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" It's security through obscurity. You never would have found that document, but now with Glean, you can find this. So I can say, when's the next RIF? And if there happens to be a document about a RIF that somebody in HR didn't get the permissions right, I would find it. 
So another thing that we've done is we have AI governance modules that we will enable a company before they launch to assess if they have documents that shouldn't be open.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:08:08] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" And we will flag certain types of documents for people to review. And we will enable a company to redlist or greenlist. Like you can redlist whole directories or whole types of documents to say no matter what the permissions are, we never want these to show up in Glean.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:08:24] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Yeah. So how's the, I mean, be doubtful a little bit here for a second. So like in LLM world, when you feed the LLM knowledge, if the LLM understands or is aware of a RIF, generally the perception in the security world is you can convince it to tell you, there isn't like the ability to, it's one thing to define the policy,\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:08:48] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" which is a challenge in its own right, which I guess is easier for me to grok how you would do that. And also maybe a human will verify that and confirm it. But in the moment requests, how, who enforces these policies? And is it LLM? Is it the same Gen AI systems that know this?\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:09:07] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Are there. What's\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:09:08] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" I know,\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:09:09] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" the secret sauce here? How is this, how is this compartmentalized in a more assured way?\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:09:13] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" So I have to explain the architecture more. 
So we do not fine tune models with your enterprise data because you're absolutely right. If you fine tune a model with your enterprise data or you do a large context window with all of the information, the LLM is just going to know that information. And that's why it's so hard to do this in the enterprise.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:09:32] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" So it is a RAG based solution and all of the enforcements are based on the, the search, the retrieval part of RAG. So we only, so the way that the architecture works in the, for the assistant in the natural language query, you put in a query in natural language. We call the LLM for the planning phase, where we take that query and we translate it into search queries.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:09:55] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" Then we call our retrieval engine and we get back the relevant snippets. So this is just, it's finding what are the right documents based on your permissions. And then that is the context window that's fed into the act phase. And now we call the LLMs in order to generate an answer.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:10:15] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" So that information, if you didn't have the right security to see it, the information will never get fed into the LLM. And so that's how we secure it and ensure it.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:10:26] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Yeah. And I think that makes sense. 
And I think aligns with the best practice that, that I'm aware of in the world of security, which is you want to pull in the data from a bit more, bread and butter systems or traditional systems, if you will, with authorization systems in which it says, Tamar is allowed to see this and Guy is allowed to see that.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:10:44] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" And then, yeah, you can massage and present and process that data,in the right fashion, for it.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:10:51] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" And the other thing that people are very worried about, the security in organizations, it really is sensitive because we're, we are indexing all of your organization's data, is we actually allow people to host it in their own environment. So in their own GCP project or AWS project, so it doesn't leave their cloud premises.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:11:13] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" So what type of, limitations do you need to take on because you don't train on the customer data and what could you do, and this is, I'm thinking about this a little bit with the, with the mindset of code and I'm thinking, indexing on your code base is a topic we've already discussed a few times in the podcast,\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:11:35] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" and there's elements, all sorts of pros and cons to it. 
There was a certain amount of, if you brought in bad code, you might replicate mistakes and things like that, but generally there is, it's a scary proposition for a lot of organizations because they will be giving one of these giants access to their code and they're afraid legitimately or otherwise that their data will be there.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:11:56] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" But if you don't do that, there's a lot less, or at least my understanding is there's a lot less you could do in terms of feeding that insight of your code into the code generation. is that true over here? What, what is sacrificed?\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:12:09] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" Yeah, I think the use cases are not as important in this case. So you could say, I want to make sure that all of the tone of, any documentation or any marketing collateral, anything that comes out of Glean is consistent across everything that everyone is doing. So you can still have as input, guidelines, and that can be part of your input.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:12:32] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" And so you can create a Glean app that says, here are guidelines of how we want it to be, but then you're, the user is going to have to more explicitly do that and say, follow these guidelines as opposed to Glean understanding without the user telling them, here's what I would like to do. 
But what I find is in those kind of cases, like marketing will say, these are the marketing guidelines or engineering will say, here's the template for the design doc and product will say, here's how we follow the template for PRD.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:13:01] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" So it isn't as much of a problem in practice because you don't ingest all the code and find the style of this organization. So it's not, it hasn't come up as an issue.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:13:12] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Yeah, it's interesting. It's a, it feels a little bit like you're not trying to regenerate things in the style of the data that you encounter. You're trying to find it and interpret it, but you're not trying to create a new customer record based on this conversation, because of this, and therefore you want to have, processed and digested all the previous customer records to do it. You're trying to understand if you want to get a customer record in a standard fashion, then someone can build a Glean app that defines what is that fashion. How do you want to see that? And then you can find the right data to populate into it.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:13:44] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" Exactly. And that's going to become really interesting with the Glean apps of where are the ways that we can automate. So once you get actions, then you get much more automation, much more productivity savings, because you can ingest all these things, so for example, you can ingest all of your Gong calls.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:14:02] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" So one thing that we built a, the PM team at Glean built an app to look through Gong calls. 
Tease out the information in the call, put it in a spreadsheet and then be able to assess what are the most frequently asked requests from customers. So like this is like multi step super interesting examples, but you don't need it all fed into an LLM. But I need to, it's more specific use cases and that's where things get like really interesting.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:14:32] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Yeah, no, I can see the power in it. And so still a little bit on the sort of the Glean journey. And we'll talk about the sausage making and how, I guess we got a little bit into the sausage making here in architecture, Glean predates, I think, Gen AI, right? I think it was founded, and it was an AI company at the time as well.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:14:48] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Can you tell us a little bit about, what was before, like what you're describing right now sounds like a very Gen AI ish company. Tell us a little bit about what was there before? Was this a pivot? Was this a natural expansion? How did that go about?\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:15:00] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" Yeah, so Glean started in 2019. It was founded by Arvind Jain. Arvind was an early Google search infrastructure engineer. So he was one of the people who built the original search infrastructure for Google search. I actually worked with him at Google back in the day in like 2011, so we know each other from there and, and then he went on to Rubrik where he was the founder, one of the co-founders of Rubrik.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:15:24] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" And then at Rubrik, as they were growing, he found that productivity decreased instead of increased. And a lot of it was around finding information. And by the way, we see that with our customers too. 
They come to us and they're like, ah, we've grown. We can't find the information. We need to get our job done. So he's like, why is there not a good enterprise search solution? So that's how it started is building enterprise search. But because he came from Google, he knew that AI was being used in search at the time. Like BERT, the original models, the precursor to LLMs, were built by Google. Why? To improve search.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:15:59] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" So they had been used since, I think, 2016, 17, in search, the BERT models, vector embeddings, were used to improve the quality of search. So from the beginning, Glean was an AI search company. It was using BERT for its models. It was building vector embeddings before people were talking about vector databases.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:16:18] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" They fine tuned their own models per enterprise to build the vector embeddings. So this is why Glean's enterprise search was so much better than anything else on the market.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:16:29] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" But this is, this was search. It was Google search for your organisational data\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:16:34] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" Exactly.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:16:35] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" It was around AI, the same way Google uses AI, which is, I can connect the dots between whatever words that are actually similar. 
I can understand maybe a bit of context, so like all of these vector databases, the embeddings\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:16:50] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" was for semantic matching.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:16:53] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Got it. And I guess to an extent he built what would have been the Google Appliance that at some point came and went, for the enterprise that connects all these dots, but it wasn't generating answers. It\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:17:04] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" Correct. It was not,\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:17:06] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" to a source of information.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:17:07] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" it was using AI as in ML. So it was using what was AI at the time, but it wasn't using generative AI. So it was just enterprise search, you put in a query, you find the documents. But it was a better enterprise search than anything else out there. The answers were just more relevant.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:17:27] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" And then they started selling it as enterprise search.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:17:30] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" And then how was the transition? Was it a sort of run the alarm, ring the alarm, all hands on deck or switching to this sort of Gen AI thing? Was it a more gradual, let's try this out?\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:17:42] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" It was, GPT 3 came out, and because we already had an AI team, people were keeping up with things, so it wasn't like a surprise, right? 
And GPT 3 came out and we said, we should have a natural language assistant interface into Glean. And there was a war room, get everyone together and build the assistant.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:18:04] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" And then it was build the, with the architecture that I described, it was put the generative aspect into the product. And then they started, it already always sold, enterprise search, and then they added on the assistant. And you could buy either search or the assistant, now we're one product together.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:18:23] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" But this was, it was a very fast pivot to, let's add this assistant, because now ChatGPT came out and every CIO was saying we need an AI solution. So then they can turn to Glean and Glean has an assistant. It's an AI solution. So it was, I wouldn't, exactly, I wouldn't say it was a pivot because all of the technology that was built for search was the R in RAG.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:18:51] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" It's the retrieval, and RAG gave Glean a head start on everyone else because for four years it had been building connectors, had understood privacy and security, and had built the retrieval engine, because the exact same retrieval engine that was built for enterprise search is being used for the assistant.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:19:12] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" It feels like a totally different user experience though, and almost a different positioning, right? What you've described right now wasn't, Hey, we started with your, that's history. 
Now that's a piece of the puzzle and the interface, the front door, the way you engage with the product as a Gen AI product is that assistant is the composition, not just finding the data, but most of the time it's less about sending me to the authoritative source and more about,\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:19:35] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" digesting what you found for me.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:19:37] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" So we were going through this transition as everyone was starting to get used to ChatGPT and doing some searches on Google, some searches on ChatGPT or Perplexity, Bard / Gemini. And it's the same thing. You can toggle between the interfaces in Glean. You can go to the search interface, you can go to the assistant interface.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:19:55] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" So yeah, it was another product built on top of the same foundation. There's a foundation, how we deploy, how we secure, all of that, stayed the same. So it was the user interface on top. And of course, marketing had to change and the messaging had to change and all that. And there's a new team that's building the assistant.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:20:14] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" So yeah, it was an expansion, absolute expansion. And then more recently, about six months ago, we added the platform on it as well. So each of these are gradual expansions to the amount of functionality and capabilities.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:20:26] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Understood. And I think, this is probably a good time to raise this. 
I have this hypothesis that I asked Des from Intercom as well in the previous episode, which is that when you think about AI, I oftentimes think about, tools across these sort of two axes. I think about a trust axis and a change axis.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:20:43] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" So on one hand is how much does it, ask me to change, how I work. And on the other hand, how much do I need to trust it to get it right? So the more autonomous it is, the more I need to trust it to get it right. I'm going to put it, say, for example, in Fin, the Intercom support, agent, I need to put it in front of customers.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:21:00] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" I need to trust that it gets it right. I can spot check things, but I can't have a human verify the results that defeats the whole purpose. And I guess my, my, my thesis is that, if your product is already text based, so it's already a search, it's already a chat, it's already,I guess chat has like the Slack version of chat, has the intercom version of chat, then at least the change vector, it's easier to introduce AI because, I guess those are more naturally the way that you would interface and it's easier to imagine a user on the other side.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:21:32] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Does that sound right to you? I mean, you've seen Slack, you've been Google, you're now at Glean on it, is it easier? 
Or I guess in general, how hard is it for people to go from accepting, you're going to point me to a link to somewhere else versus you'll summarize to me or even just,\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:21:51] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" the interaction, the fact that they are in chat mode versus in search mode.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:21:57] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" I believe that behavior change is hard. People just, they get used to doing things in a certain way, even look at the retention numbers for ChatGPT. People try it, they play with it, and then they forget that they can use it. Because you're not used to using it, it's not in your daily routine. Change is hard, and it needs a lot of repetition.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:22:20] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" And while I agree that if you have a text interface, getting a chat interface, because you're already talking in natural language, like with Slack you're putting in messages, and there with Glean you're putting in queries. But the biggest stumbling block we have is people understanding what they can and can't do with it.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:22:38] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" So it's not that, okay, now I have a chat interface, it's what can I use this chat interface for. Search, if you think about it, when Google Search first came out with queries, people didn't understand it either. What's a query like? Now everybody knows what a query is. How to formulate a query, how to reformulate a query.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:22:55] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" You do a query, you don't get the results, you refactor it and you do it again, and this is just like part of life and people got used to it, but it wasn't that way in the beginning. 
You didn't know how to put a query in to get the right answer. So we're going through that phase with the assistant now, in that people come to the assistant and some people understand AI.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:23:14] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" They understand the whole flow, and engineers understand how it's built, so they have a mental model of what it can and can't do. But then you put it out to people who have not interacted with chatbots before, with AI chat. And I'm not talking about customer support, because customer support is such a different use case.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:23:32] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" It's a very small set of questions that people ask, as opposed to the universe is open and I can ask anything. So we have people coming to Glean and being like, what should my priority be for next week? Where there's no way Glean could know that, because it doesn't have the information. And people don't yet realize which are the questions that can be answered.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:23:52] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" So while I agree with you that the transition is easier, people still don't know, when they go to the assistant, what they can do, what will work, and what won't work. 
Like the example I gave you of the PMs creating this way to analyze Gong calls, somebody outside of the product team at Glean,\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:24:09] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" about that as a possibility.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:24:10] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" Yeah, it wouldn't even occur to them that they could do this.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:24:13] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" But as we get more used to this, and I contend as the next generation grows up with ChatGPT and grows up with these products, it'll be like the generation that grew up with Google search. It'll become clearer and more obvious. And the models will get better and they'll get stronger.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:24:30] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Yeah. Interesting. I find the decision that Google has made to introduce the AI answers into the same interface, putting aside potential antitrust aspects to it, but from a UX perspective, it's a great way to engage people, because they put in the search, they get some answer, and yeah, they can just scroll down and get what they are used to getting.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:24:56] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" But in the meantime, they've been exposed to that answer and they build those habits. And I'm curious, Glean in theory could do the same thing, right? You could toggle, as you mentioned, between the interfaces, but you could also build something that's the same, where that answer is at the top,\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:25:09] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" We actually do have that. We have what we call AI answers, and we actually had it before the AI overviews at Google. 
But we do exactly that. We have an AI answer, and you can click more, and then it expands down, and you'll get the full answer. And that is exactly that. We're careful about how often we trigger it, because you want to make sure that it's going to have, it's going to be the\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:25:30] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" if it's a good answer, basically.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:25:32] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" Yeah, because one of the challenges also is, when you go to search, you voice a query in a certain way, and when you go to the assistant, you voice it in a different way. So there's also the expectation of the user and how they're communicating. So you get better quality assistant answers when you understand you're going to the assistant to chat.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:25:50] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" But we absolutely do that. A lot of the engagement with the assistant is through the search interface and the AI answers that they get in the search interface. And we're going to be doing more of that, more of automatically triggering one or the other. I feel like in the future, there won't be these two interfaces.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:26:05] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" And the other thing that we do is that when you're in the assistant, we tell you which query we used to query the RAG, and we give you the documents that were accessed, so that you can go back and you can click on those documents to go in. There are not only the citations, but also, here's the list of documents that we used for the answer.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:26:27] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" So in a way, you're merging both directions. 
You're making it easy to go from the assistant back to search, and from search to the assistant. So in a year I think these will all get merged into one, and you won't have one or the other.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:26:39] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Yeah, it's interesting. I find a lot of compelling aspects to integration like this, because the user gets the choice and you get to interact with it. And I keep coming back a little bit to that limitations question, which is, I think, a bit hard to tell and depends on how well people build those policies.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:26:55] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" And I think it's also easier to accept that in the enterprise. Even if the breadth of knowledge is bigger than just customer support, it is still vastly smaller than the whole web and all that information. And so maybe it's okay to say that over here, the security and the element of controlling who can see what takes precedence, and you lose some neural connections that might've been made, that would have been non trivial, if you had fed all this information into the LLM, but that's okay, it's a fair trade off to make.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:27:28] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" Yes.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:27:30] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" So I guess, let's talk, indeed, I kept promising, about the sausage making. Let's do that now. So I guess what is it like building a product that works this way? 
The results of the product vary by the data that comes in, the success rates are very different, and that's on top of the fact that the LLMs move at such a lightning pace, so you get new models all the time.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:27:52] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" And if I understand correctly, you actually don't even pick the model. You have to work with models that the customer does. So maybe tell us a little bit about how you interact with the models, just to set that up, but what I'm really interested in is, you talked about LLM as a judge, you had a post about this, just how do you know that your product works?\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:28:09] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" It's very difficult. The non deterministic aspect is the most interesting and challenging. My first week at Glean, I was talking to the head of the assistant quality team just to learn what do you do and how does it work, and realized that a lot of their time was spent talking to customers who did not have the right expectations of what it could do, or were complaining that it was non deterministic.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:28:36] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" I did this query once, I did it again, and it didn't get the same thing. And we're starting to get used to ChatGPT and this concept of non determinism. But in an enterprise, you're a CIO. You buy software, you pay a lot of money for it. You expect it to have the same answer every time. And so getting our customers comfortable with what LLMs can do and what the boundary conditions are is part of what we have to do in our product.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:29:03] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" And it can't just be in the marketing and in the enablement. 
You have to have some way in the product to take care of this as well, to understand what the challenges are going to be for people. So I explained the RAG architecture. We have our way of evaluating the search; most of the team came from Google search ranking, and they built eval tools just like Google search had.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:29:26] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" So that's bread and butter, how do you eval search. We have a whole process for\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:29:31] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" this is not the user side. This is internal. This is to\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:29:35] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" This is the.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:29:35] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" whether your search is working correctly, with information the engineers see. Your engineers.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:29:40] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" This is our ranking. The ranking algorithms, with a lot of information on exactly what was triggered and what wasn't and what scores it had. Now it is a little bit more difficult, because at Google you could use third party raters to say, here's a change we're making. Is this change good or bad? Evaluate it.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:29:55] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" But because this is enterprise data, we can't use third party raters. We can only use our tools and engineers to look at the data. So that's another wrinkle on top for enterprise. 
But going back to your question of what's different: it's understanding the user mindset when they're using this product, how do you help them through that, how do you give them guardrails so they can better understand what the product can do, and then how do you evaluate it and make sure that it's working as intended, with this new technology where nobody fully understands why it's giving the answer that it does.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:30:32] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Yeah. What are some examples of things that you would do in the product to help people understand this?\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:30:38] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" A big one is suggestions. One of the things about Glean is that we understand your organization. In the connectors that we do, we also have a Workday connector, Active Directory. So we understand who you are, who your peers are, who's on your team. So we can suggest prompts that people on your team have been using.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:30:58] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" Oh, you're a PM, here's a prompt from another PM, the things that they've been doing that you might want to try. And so we can make personalized or generic suggestions, we've been experimenting with all different ones, but that can help guide people toward the ways that you can get value.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:31:18] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" And that's going to be really important, and we want to do a lot more of that. And Glean apps were a big way of doing that as well. Here are some more structured prompts and triggers. This is where, if you have an IT question, you'll go here. 
If you want to build a customer brief, here's a Glean app for building a customer brief.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:31:34] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" So that helps people go to things that somebody has curated. Somebody like the 5 percent of people in the company who really understand how to work with prompts and LLMs, they're going to do that and they're going to help. And we're going to be doing more and more of that, and I hope that becomes a whole other angle of how we're doing this.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:31:55] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" And then we also have teams dedicated to eval, and to understanding what changes need to be made, what don't, how to evaluate new models. As you mentioned, customers can decide what model they want to use. We validate the model, we certify it, I should say. If Gemini 1.5 Pro comes out, we will certify it for our customers before we enable them to use it.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:32:19] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" But we have let our customers pick OpenAI, Anthropic or Gemini for the LLM aspect of the work. And so that's another thing that's tricky, working with the different models.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:32:33] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" So I understand the suggest notion, or the idea of disseminating, is actually probably a useful practice for any product, not an LLM specific one: take the forerunners and provide an easy way to disseminate their sample uses to the rest of the organization. But on the other side, in terms of what happens if it fails, what happens if, hallucinations are a thing in this world, so it ranges from, it didn't.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:32:58] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Okay. 
Maybe you're good on the search side, so it finds the relevant data and you feel pretty good, but did it understand it? Did it process it correctly? Did it present it correctly? How do you evaluate it when you certify? What types of tools are at your disposal to know, when you're using a new model or even just evolving your software, that it got better?\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:33:17] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" So first of all, in the product, customers can do a thumbs up, thumbs down. Obviously we get more thumbs down than thumbs up, because that's just the nature of people. But that's helpful, and all those queries come back to us so that we know, here's the set of bad queries. We also evaluate things like, in search,\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:33:35] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" it's easy: did somebody click on it? Did they find what they were looking for? In the assistant it's trickier, because they might have gotten the answer or not gotten the answer. But for example, if they try a couple queries in the assistant and go to search afterwards and then find the document that they needed, we know that the assistant didn't give them the answer.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:33:50] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" So we have a metric, the satisfaction index, for search and the assistant. So we look at that and we measure that very heavily: how many bad queries did we get, how many thumbs down did we get? So that's one\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:34:02] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Yeah. And that's posted. 
Those are all things that\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:34:05] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" Yeah, that's the\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:34:06] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" people using the product.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:34:07] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" Right. That's the proxy for how well we are doing.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:34:11] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" And if we're doing hill climbing to improve, is it going up or going down? And then evaluating is super tricky. So, as you mentioned, we've started using LLM as a judge. And there are many ways that the LLM could go wrong, or the whole assistant could go wrong. It could pick the wrong query to send to the retrieval engine.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:34:33] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" The retrieval could not find the document, it could find the wrong one, or it could miss something. And then in the generative step, it could not be a complete answer, it could not be grounded in the facts, it might pull in public data instead of the data that you had. And then, so you've got the completeness, the groundedness, and the factualness.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:34:54] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" So we've been using LLMs to judge our answers for the assistant in these different areas. Completeness is the one where we get the most thumbs down. If it's not a complete answer, we'll get thumbs down for it. And then we've correlated the thumbs down with LLM as a judge and the completeness.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:35:14] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" So that's the easiest one for an LLM to evaluate in the results of the LLM. 
And so we have completeness; we have groundedness, did it tell you which context it came from; and then the factualness is the hardest. And for the factualness, what you need is a golden set,\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:35:30] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Factualness being that you did not hallucinate,\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:35:33] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" Groundedness is more about the hallucinations. We don't have a big problem with hallucinations at Glean because of the RAG based architecture and because we do the citations. So the groundedness is the most aligned to hallucinations, but sometimes it's not grounded in the enterprise docs, because it might be, like, the stock price that you ask for a company, and it might just be public knowledge.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:35:54] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" The factualness is, was it factual? LLMs are very confident, so they'll say with great confidence that something is correct. And then a user will not thumbs down those, because they'll just assume it's correct. And those are the most dangerous.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:36:07] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Right.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:36:08] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" And so what we're doing is trying to actually have a golden set.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:36:13] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" We're having an LLM extract queries from documents and then measuring the effectiveness of whether we're finding them. 
So we get the golden set, and this one we're actually still working on, how to best measure it. But the best part of what we've done now is that we have a repeatable process for LLM as judge, we have an eval framework for how we use it, and we just turn the crank when a new model comes out. We can get an evaluation across these metrics for new models, and as we're making changes in the code, we can evaluate them more easily.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:36:52] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" Is it perfect? No, but it's a lot better than engineers manually going and looking at every query, which\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:36:57] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" at the results. And that golden set needs to be created per customer.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:37:01] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" So a lot of times we use our data for some of these golden sets, but in our eval we actually run queries in our customer environments, in their deployments, because we can't look at their content, but we can run things and evaluate them and get the results of the evaluation.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:37:19] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Interesting. And that's a good learning. So you can't look at their data. You have to make sure that your check did not cause problems with their data, or at least try to assess that that's the case. 
so what you're agreeing with them is that you'd be able to run\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:37:35] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" some processes that are not actually functional, or run a bunch of these tests on their platform with their data, but you won't access the data, you'll just get the results, like thumbs up, thumbs down. We're\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:37:45] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" a different version of the thumbs up, thumbs down, to say, yeah, it feels good for you to deploy this new version or to upgrade to this new model.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:37:52] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" We're very cautious about making sure we have very strict agreements on how we're handling customer data, but we absolutely run regressions, and it's been an interesting process that we've gone through.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:38:05] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Yeah. And so we talked about LLM as a judge, which is, 
the LLM looks at the answer and says, the result I see in the golden set seems sufficiently similar to the result the live product gave right now. And then, I found interesting the blog post that you wrote about this.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:38:24] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" There's this notion of an LLM jury, which, I don't know, has the risk of taking the analogy a bit too far, but, I guess, do you want to say a couple of words about that?\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:38:32] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" It's exactly what it sounds like: you want to assess not just one voice, but multiple voices, to make sure that you're aligning.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:38:42] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" And I guess it's partly a means to deal with the fact that the evaluator itself is a non deterministic entity. So oddly, actually quite aligned to the reason there is a judge and jury in the actual human judicial system.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:38:58] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" That's a, yeah, that's a good point. Yeah. Because people are non deterministic too.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:39:02] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Yeah. It's like it's built in our image. Versus, maybe, a non enterprise or consumer version of it, are there other highlights? I think the things I captured so far have been the fact that you have to use their model, you can't necessarily see their data.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:39:18] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Maybe even the fundamental architecture of being RAG based, because you're going to be dealing with sensitive data. 
I guess what other things jump to mind in terms of the difference between building a ChatGPT for everybody and a ChatGPT for the enterprise?\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:39:36] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" The biggest one is the one you cited: ChatGPT for consumers is run off the internet, where everything is public knowledge, so you never have to worry about permissions. But the other thing is, when you're building search for consumer versus enterprise, the signals that you use for ranking are different.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:39:52] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" So in consumer search, you have a lot of click signals, because you have millions, maybe billions of people using it. So your click signals are very dense, and your semantic similarities, your synonyms, you just have a lot more data to go on. With the kind of signals you get in the enterprise, you're not going to get the anchors that you have in web search,\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:40:14] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" the density of clicks, they're very sparse. Your click data in the enterprise is too sparse to use, but then you have different data. You have activity data. You have, oh, a lot of people are looking at this document in the last week, so it must be important, so let's bump it up.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:40:31] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" You have what I talked about of knowing your team. If I'm looking for an onboarding document for my team, I'm not going to use one from marketing. I'm going to use one from somebody else in engineering, or somebody on my team, as a template to start. 
So that's a signal that has no meaning in consumer, but is very important in the enterprise.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:40:50] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" Things like activity and proximity: the person looking at the document, what is their proximity to me? So personalization is a very strong signal in Glean for doing ranking, while it's a much weaker signal in consumer. So that is what the team spent the first couple of years figuring out: what are the enterprise signals that you need for ranking.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:41:17] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Yeah. And the whole architecture is predicated on the fact that the first problem is to find the relevant pieces of information in the enterprise. And then subsequently, as we've discussed before, the Gen AI UX on top of that is how you interact with the data, versus pointing elsewhere, which opens up its own set of opportunities and, at the same time, maybe challenges around adoption, because it's a new practice.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:41:41] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" And I guess, what is your sense around enterprises' readiness to embrace all this new stuff? Are they chomping at the bit to take it? Are they trying to stay away and just do the minimal, whatever the board requires, on AI?\"]}),/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" So they're more ready than I thought. CIOs are like, we need an AI solution, and they're all going out there and evaluating different AI solutions. Now, part of it is a checkbox because the board is telling them they need an AI solution, but there are a lot who are genuinely interested and genuinely curious.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:42:14] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" I mean, you see a lot of IT departments in large companies. 
Building their own solutions, using OpenAI. And a lot of them come to us a year later: we tried to build this, it's hard, we didn't realize how hard RAG is, we should use your APIs. So I do think that there is a willingness.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:42:30] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" There's also some fear. Some companies are like, even if you say you're secure, I just don't want any AI coming anywhere near me, and those we don't even talk to. But I have been encouraged by how many want to bring it in, and then after they bring it in, you have the champions who really are vocal and figure out how to use it.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:42:54] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" The biggest change management is, how do you really educate everyone on how much they can make their jobs better? How much they can leverage products like Glean to be more productive? Because as I said with the ChatGPT example, people just forget sometimes and don't even think of it.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:43:15] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" And humans change the slowest. So that's a good tee up to the final question here. We talked about the zero to one, how people even wrap their heads around what it is that they could do. What do you imagine as the far down the road reality? I talk about AI native software development, about AI native in a bunch of things.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:43:34] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" What is the AI native reality? 
I don't know if that's the right term here for enterprise search, for Glean's vision.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:43:41] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" I think that people are going to be able to automate away a lot of what they do today, a lot of the toil. And if you think of how some people, executives, have had assistants, you'll have essentially an assistant for every aspect of what you do, and you're not going to have to do a lot of this, like writing the first versions of your documents, prepping for your customers.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:44:07] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" We had a finance person build a Glean prompt to help her figure out comp for salespeople: read the spreadsheet and understand which salesperson is getting what comp based on what they've done. All these things people are doing manually, and people will just get used to it, it'll be a new thing. They'll even forget that they used to have to do it manually, and that'll be okay, because they don't need that skill anymore and they can spend time on other, hopefully more creative, higher leverage things.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:44:36] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" I'm looking forward to that day.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:44:38] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Yeah. And is it safe? It sounds compelling for sure. And I guess, you and I have both been in leadership positions, and so sometimes spoiled with having some of that work happen with others on the team. Do you find that to be a good mental model? 
Is it like everybody would be a bit more senior in their org with having, a whole bunch of helpers around them and the type of work?\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:45:00] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" The average person or, whatever, even the more sort of lower down the hierarchy of an organization would be doing would end up being more senior compared to what we're doing today. More managerial.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:45:12] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" I've heard somebody describe it as it's like having teams of interns. And so I don't, I wouldn't put it as everyone will be more managerial. I don't believe that. But I believe if you had interns who could do, a lot of the rote work that can't easily be automated, so you can do the creative work.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:45:33] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" And so I think you will get more productivity if you all, if everybody had these five or six interns to go, build this code here, build that document there, that presentation. I think we'd all be, I think happier in, in how we're spending our time.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:45:49] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Yeah. Yep. I absolutely agree with that. Tamar, this has been excellent. I'm sure we could go on for a full extra hour to dig into it, but thanks a lot for coming on the show and sharing a whole bunch of these learnings.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:46:00] \",/*#__PURE__*/e(\"strong\",{children:\"Tamar Yehoshua:\"}),\" Thank you for having me. 
It's always great to talk to you.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:46:03] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" And thanks everybody for tuning in and I hope you join us for the next one.\"]})]});export const richText1=/*#__PURE__*/t(a.Fragment,{children:[/*#__PURE__*/e(\"h4\",{children:/*#__PURE__*/e(\"strong\",{children:\"Introduction\"})}),/*#__PURE__*/e(\"p\",{children:\"In this month's episode of the Tessl podcast, hosts Simon Maple and Guy Podjarny dive into the world of AI and its impact on software development. Featuring insights from industry leaders like Itamar Friedman from Codium AI, James Ward from AWS, Jason Warner from Poolside, and Bouke Nijhuis, this episode covers a range of topics from AI-generated code to the future roles of developers. This episode is a must-listen for anyone interested in the intersection of AI and software development.\"}),/*#__PURE__*/e(\"h4\",{children:/*#__PURE__*/e(\"strong\",{children:\"AI Testing with Itamar Friedman\"})}),/*#__PURE__*/e(\"p\",{children:'The discussion kicks off with Itamar Friedman from Codium AI, who delves into the complexities of AI in testing. Itamar emphasizes that the hardest part of generating tests for code is understanding what to test for. He states, \"The hardest thing is to know what to test for, to understand the code, to understand what are correct and incorrect systems within it.\" This understanding is crucial, as it forms the foundation for generating effective tests.'}),/*#__PURE__*/e(\"p\",{children:\"Codium AI is addressing these challenges by creating tools that help developers understand their code better, thereby making it easier to generate relevant tests. This approach is not just about generating tests but understanding the code to identify what needs to be tested. 
This insight drives the development of more sophisticated AI testing tools.\"}),/*#__PURE__*/e(\"h4\",{children:/*#__PURE__*/e(\"strong\",{children:\"Transition to AWS with James Ward\"})}),/*#__PURE__*/e(\"p\",{children:'James Ward\\'s transition from Google to AWS is another highlight of the episode. Now serving as a Developer Advocate in AI at AWS, James brings a wealth of experience from his time at Google and his extensive background in Java. Guy Podjarny mentions, \"James, just moving over to AWS, actually from Google in a very interesting role, as a dev advocate in the world of AI.\"'}),/*#__PURE__*/e(\"p\",{children:\"James's role at AWS involves advocating for AI and its applications in development. He provides insights into the evolving world of Java and how AWS is leveraging AI to enhance their developer tools. His transition signifies a broader trend in the industry where experienced developers are moving into roles that focus on integrating AI into existing technologies.\"}),/*#__PURE__*/e(\"h4\",{children:/*#__PURE__*/e(\"strong\",{children:\"Code Generation Models with Jason Warner\"})}),/*#__PURE__*/e(\"p\",{children:'The conversation then shifts to Jason Warner, who explores the intricacies of code generation models. Jason, the former CTO of GitHub and now CEO of Poolside, provides a unique perspective on the evolution of these models. He explains, \"The LLMs as they stand today are actually much better at understanding code than generating it.\"'}),/*#__PURE__*/e(\"p\",{children:\"Jason emphasizes the importance of understanding what to test for in code generation. He believes that while AI has made significant strides in understanding code, generating new code is still a work in progress. 
This understanding is crucial for creating effective code generation models that can assist developers in their day-to-day tasks.\"}),/*#__PURE__*/e(\"h4\",{children:/*#__PURE__*/e(\"strong\",{children:\"TDD and Code Generation with Bouke Nijhuis\"})}),/*#__PURE__*/e(\"p\",{children:'Bouke Nijhuis takes a hands-on approach to discuss using Test-Driven Development (TDD) to generate code. He introduces an iterative loop tool that builds components from tests. Bouke explains, \"Developers are increasingly test writers, and if the tests are good enough, you don\\'t need to look at the code.\"'}),/*#__PURE__*/e(\"p\",{children:'This approach, termed \"conference-driven development,\" emphasizes the feedback loop between tests and code. Bouke\\'s tool allows developers to generate code directly from their tests, ensuring that the code meets the specified requirements. This iterative process not only streamlines development but also enhances the reliability of the generated code.'}),/*#__PURE__*/e(\"h4\",{children:/*#__PURE__*/e(\"strong\",{children:\"The Dichotomy of Understanding and Generating Code\"})}),/*#__PURE__*/e(\"p\",{children:'A significant part of the episode focuses on the dichotomy between understanding and generating code. Itamar and Jason provide valuable insights into this complexity. Itamar states, \"The hardest thing is to know what to test for is to understand the code.\" On the other hand, Jason believes that \"the LLMs are actually further along down the route of understanding code than they are generation.\"'}),/*#__PURE__*/e(\"p\",{children:\"This discussion highlights the different perspectives on the role of Large Language Models (LLMs) in development. While understanding code is crucial, generating new code presents its own set of challenges. 
The episode explores how these perspectives influence the development of AI tools and their applications in software development.\"}),/*#__PURE__*/e(\"h4\",{children:/*#__PURE__*/e(\"strong\",{children:\"The Future Developer's Role\"})}),/*#__PURE__*/e(\"p\",{children:\"The future role of developers in the age of AI is another key topic. Jason and Guy discuss how developers' roles are evolving towards product management and architecture. Jason notes, \\\"AI assistants are evolving, but there's still a leap to make before we could even start thinking about them as autonomous junior developers.\\\"\"}),/*#__PURE__*/e(\"p\",{children:'Guy adds, \"If you\\'re drawn into software development because of the problem-solving aspects, then you might go more down an architect route.\" This shift signifies a broader trend where developers need to adapt and continuously learn to stay relevant in a rapidly changing landscape.'}),/*#__PURE__*/e(\"h4\",{children:/*#__PURE__*/e(\"strong\",{children:\"The Importance of Tests Over Code\"})}),/*#__PURE__*/e(\"p\",{children:\"Bouke's perspective on tests becoming the most important artifact in development is particularly intriguing. He argues, \\\"You wouldn't need to look at the code if the tests are good enough; you have to trust the generated code from the AI.\\\"\"}),/*#__PURE__*/e(\"p\",{children:\"This shift from focusing on code to focusing on tests for validation changes the development workflow significantly. It emphasizes the importance of writing comprehensive tests to ensure the reliability of the generated code. This approach aligns with the broader trend towards automation and AI-driven development.\"}),/*#__PURE__*/e(\"h4\",{children:/*#__PURE__*/e(\"strong\",{children:\"Specialized vs. Generalized AI Models\"})}),/*#__PURE__*/e(\"p\",{children:'The competition between specialized and generalized AI models is another fascinating topic. Jason provides insights into the benefits and challenges of each approach. 
He states, \"In a world with infinite resources, the general-purpose model is key to AGI. But in reality, we face constraints on energy, data, and time.\"'}),/*#__PURE__*/e(\"p\",{children:\"Specialized models, like the one Jason is developing at Poolside, offer targeted solutions for specific tasks. In contrast, generalized models aim to provide broader capabilities but may face limitations due to resource constraints. This ongoing competition will shape the future of AI in development, influencing how tools are built and used.\"}),/*#__PURE__*/e(\"h4\",{children:/*#__PURE__*/e(\"strong\",{children:\"Recent AI Developments and Announcements\"})}),/*#__PURE__*/e(\"p\",{children:\"The episode concludes with an overview of recent funding announcements in the AI dev space. Significant announcements from Cursor, Codeium, and Magic dev highlight the growing interest and investment in AI development tools.\"}),/*#__PURE__*/t(\"ul\",{children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"Cursor\"}),\": Raised $60 million for their AI-focused IDE.\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"Codeium\"}),\": Secured $150 million with 700,000 active users and over 1,000 customers.\"]})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"Magic dev\"}),\": Raised $320 million to advance their model capabilities.\"]})})]}),/*#__PURE__*/e(\"p\",{children:\"These developments underscore the rapid growth and innovation in the AI and developer tools ecosystem, promising exciting advancements in the near future.\"}),/*#__PURE__*/e(\"h4\",{children:/*#__PURE__*/e(\"strong\",{children:\"Summary\"})}),/*#__PURE__*/e(\"p\",{children:\"This episode provided a deep dive into the current state and future of AI in software development. 
Key takeaways include:\"}),/*#__PURE__*/t(\"ul\",{children:[/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"The complexities of AI testing and code generation.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"The evolving roles of developers towards product management and architecture.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"The increasing importance of tests over code.\"})}),/*#__PURE__*/e(\"li\",{\"data-preset-tag\":\"p\",children:/*#__PURE__*/e(\"p\",{children:\"The ongoing competition between specialized and generalized AI models.\"})})]}),/*#__PURE__*/e(\"p\",{children:\"Stay tuned for more insights and discussions in upcoming episodes.\"})]});export const richText2=/*#__PURE__*/t(a.Fragment,{children:[/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"Introduction\"}),\" - Simon Maple and Guy Podjarny introduce the episode and the topics to be covered.\"]}),/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"br\",{}),/*#__PURE__*/e(\"strong\",{children:\"[00:00:57] AI Testing with Itamar Friedman\"}),\" - Itamar Friedman discusses the complexities of AI in testing and Codium AI's approach to generating effective tests.\"]}),/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"br\",{}),/*#__PURE__*/e(\"strong\",{children:\"[00:02:02] Code Generation Models with Jason Warner\"}),\" - Jason Warner explores the intricacies of code generation models and the importance of understanding code.\"]}),/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"br\",{}),/*#__PURE__*/e(\"strong\",{children:\"[00:04:52] TDD and Code Generation with Bouke Nijhuis\"}),\" - Bouke Nijhuis discusses using Test-Driven Development (TDD) to generate code and his iterative loop tool.\"]}),/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"br\",{}),/*#__PURE__*/e(\"strong\",{children:\"[00:10:08] The Dichotomy of Understanding and 
Generating Code\"}),\" - Itamar and Jason provide insights into the complexity of understanding versus generating code.\"]}),/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"[00:13:28] The Future Developer's Role\"}),\" - Discussion on how AI is changing the roles of developers towards product management and architecture.\"]}),/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"[00:19:14] The Importance of Tests Over Code\"}),\" - Bouke's perspective on tests becoming the most important artifact in development.\"]}),/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"[00:27:26] Specialized vs. Generalized AI Models\"}),\" - Jason discusses the competition between specialized and generalized AI models.\"]}),/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"[00:30:10] Recent AI Developments and Announcements\"}),\" - Overview of recent funding announcements and developments in the AI dev space.\"]})]});export const richText3=/*#__PURE__*/t(a.Fragment,{children:[/*#__PURE__*/t(\"p\",{children:[\"[00:00:21] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" Hello and welcome again to another monthly episode, our second monthly episode.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:00:27] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" We're getting, getting to be pros here or rather, recurring.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:00:29] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" We're getting into a rhythm. So this month, we're getting more into, uh, our normal cadence of weekly episode. And we're actually going one week each. So myself doing one and then yourself. So we did four,this month, excluding of course, the monthly, which we released shortly as, as, learning the right piece. Yeah. so we started off with, Itamar, from Codium AI and talked about AI testing. 
Then we moved on to, the wonderful James Ward, a friend of mine, from many years back.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:00:57] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" From the ancient world of Java,\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:00:58] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" From the ancient and existing world of Java, James, just moving over to AWS, actually from Google in a very interesting role, as a dev advocate in the world of AI and Q Developer from Amazon, and we also talked about Jason Warner, wonderful episode, with yourself and Jason.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:01:13] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Super interesting stuff. Yeah. Digging into code generation models and more.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:01:16] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" Yeah, of course. Jason, previously CTO of GitHub, now CEO of, Poolside, and Bouke Nijhuis, really interesting hands-on episode about how we can use TDD to generate, code from our own tests, which we write and this lovely little iterative loop that he created, as a tool,\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:01:34] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" to build, effectively components from tests. Yeah.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:01:38] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" And I love that episode very much came from him sharing his work, it's not anything around broader, maybe this sort of the previous establishment of, some, credentials to maybe you have something smart in this space.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:01:49] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" It's he's actually built a thing. He presented about it. 
And that's the only thing that caught our attention when we reached out to talk to him.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:01:53] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" Yeah, very interesting. It was very conference-driven development, that one actually, because he was talking about, I can't remember what the topic was, he was talking about something and someone said, why can't we do it the other way around?\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:02:02] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" Why can't you get code rather than, generate tests about your code, which is where we were talking with Codium. The other way around. What about if your code was generated from your tests and you thought, oh, it's an interesting project, let's go away and do it. And so that's how, yeah, conference-driven development.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:02:14] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" I love it. All about the feedback. Yeah. So let's talk about some topics then. One, one, which I know very much interested you was the idea of how LLMs have a greater capacity or sometimes find it easier to effectively understand, more than generate. And then sometimes generate is easier than understanding.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:02:35] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" I know Jason had, very good insights into this as well. And sometimes maybe different from others that were there.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:02:41] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" From those of Itamar. 
So first of all, I think it's a, it's interesting to, to set the stage around what is the problem or what is the, the, different perspectives you might have, which is one assumes that the complicated thing, when you talk about generating anything about code, generating docs, generating, tests, generating, new code to add into that system that the complicated piece is understanding the code and Itamar was very crisp about saying, I think the hardest thing is to know what to test for is to understand the code, to understand what are correct and incorrect systems within it.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:03:14] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" And it's not all the knowledge about what to test for sits in the, in the code itself. So maybe it's a bit more than that. But that's the hardest thing. And then it's not that it's trivial to generate the tests, but that is the easier part once you know what you're testing for. So that was interesting and it drove, I think we'll touch on that a little bit.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:03:29] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" It drove them to actually create more tools that revolved around whatever it is that they understood in there.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:03:35] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" And incidentally, actually on that, I remember, one of the early sessions that we did with Rishabh from Sourcegraph, the makers of Cody, they, he also mentioned when you think about the human interaction that, if you've got 200 tests, there's a certain area of code that you want the human to focus on.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:03:53] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" And it's almost like helping the AI to understand what to test, not the fact that it's created so many, but the areas the AI needs to focus on. 
So it's an interesting kind of, yeah.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:04:03] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Delta between those. And I think, there's something sensible about this, right? If you think about what it means to be a good developer, if you think about the effort of writing tests today, there's an effort in, whatever, writing down the lines of code, definitely in maintaining those,\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:04:15] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" that's a different story. The real kind of tough question is, do you know what type of tests do you want to run? Then yeah, you need to not be lazy and actually write those down. So it's interesting just that, that difference between understanding what to test for and with it, understanding the code, and then generation.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:04:33] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" And, and I think Itamar was sort of implying that's the complexity and the hard thing that they're dealing with, which again, is not just understanding the code, but it is about identifying what they did. And Jason's view seems to have been, that the LLMs are actually further along down the route of understanding code, than they are generation.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:04:52] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" First of all, he also, unsolicited, pointed out this dichotomy, and interesting to just think about this as two things, understanding the code and creating it. And it's, it's not surprising. 
A lot of the sort of startups in the domain, of AI code generation are priding themselves around the, their ability to index an enterprise's code, but, but also, understanding the code and then generating it.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:05:15] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" But I guess the perspective, that I understood from Jason is that, the LLMs as they stand today are actually much better because I asked him about what are the edges right now of capabilities, and he felt like understanding code is actually something LLMs are a lot better at today.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:05:29] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" And that the generation parts of it is maybe the parts that they're still working on, they're still evolving, and he was pointing out that, these LLMs are not yet junior developers. And again, we can talk a little bit more about sort of the evolution of the individual.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:05:41] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" So I guess what I found interesting, in the observation is just, first of all, coming back to understanding that when you talk about whether something works or it doesn't work, there seems to be a focus on, did it not work because it didn't understand your context? It didn't understand your existing code base.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:05:57] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Last time we talked about how that context might be bad. There, there's a lot that you might not want it to mimic in your context. 
But it needs to understand what the context was, and then the other part is the Gen part of the Gen AI.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:06:10] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" And maybe the piece that I talked to, Itamar about is this notion that it's called Gen AI and we think a lot we lean into the generative, but actually still the interpretation, still the sort of the analytical part of AI, but using still the same LLMs and their innovations, have driven, maybe how much of the value I guess that we're accruing right now comes from that.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:06:29] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" Yeah. Yeah. And it's interesting actually 'cause when we go back to the session that we had with Amir Shevat, we were talking a lot about the categorization and I think on the last, monthly you mentioned about how we kind of, initially started off with all these categories of tools, and then when we talked to the vendors, of course, vendors are doing multiple things.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:06:47] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" I know this is something that you've thought a lot about in terms of when we think about the commoditization effectively of that understanding of that code search, the additional aspects to actually provide the value to the user, whether it's testing or whether it's something else, it's actually probably a smaller step and actually becomes a, a more unique part for that tool. So there's less of a jump to get a testing tool or documentation tool and so forth on that. What are your thoughts, in terms of the tooling ecosystem, going forward? If that becomes commoditized and how will that change how tools are produced?\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:07:25] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" I think that's really the eventual conclusion from this. 
There's two theories that you might embrace. One is that the hardest part is understanding the code and that becomes the center of gravity and that a few big platforms become really good at understanding your code.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:07:38] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" And because they really get it, they understand the code, they understand what's right about it. Why, how it works. They are able to produce a bunch of these surrounding functionalities like generating tests, generating documentation, generating new code within that surrounding, resolving bugs because they get it.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:07:54] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" They understand your code, they understand your context very well. And so that theory of how the market might evolve leads into more of like single platforms that as a company, maybe you would really centralize on. I'm going to use this platform and I would, I'm going to use its set of tools because it really understands my code.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:08:11] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" That's one path. The second path assumes that understanding your code, while very important, will get commoditized. That the LLMs will just understand it, or there will be some diminishing return, paths or, approaches to, to improve it, to get better at understanding your code. And so everybody will do it at around the same level and without much effort.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:08:30] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" So you just point it to my code base and it'll understand it. And from there, that leads more to best of breed tools. Now it's okay, if you already, everybody understands the code and it's not hard to get the tool to understand it. 
Now you really want to say, I want my testing tool to really get, testing.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:08:44] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" I want it to understand maybe my product analytics and I want it to understand, real-world traffic that flows through my system, and I would like you to understand my business case and what is important to me. My documentation tool maybe relates to how I disseminate that and which platforms are using it and how do I integrate.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:08:59] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" And, similarly for code generation or bug resolution of those, those, maybe it's related to what's my platform, maybe what's doing it more secure. So each of these, different tools might bring its own specialization. And so that's more of a best of breed tool. And they both, they revolve really around what's your perspective around how hard, both from an IP perspective and from an implementation perspective, would it be to understand your code and how much is that eventually the deciding factor?\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:09:25] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" Yeah. Cause I guess if it's not, and it does get commoditized, like you say, into a model, let's say it puts everyone on a much more level playing field, and then I guess it comes down to, again, that dev stickiness, the UX, as our dev tooling aficionado Guy from, I think you've had what, 50 successful startups now in the dev tools space?\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:09:42] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" Yeah,\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:09:43] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" I've met 50 startups in the dev tools space!\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:09:45] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" You've invested in 50. 
It's again, down to whatever the dev's favorite tool is. Not about the capability of the tool per se, but about how easy it is to use and potentially it comes down to consolidation. Again, if there's one tool that can do enough of it well enough and it's very easy to, for me to use in my workflow.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:10:05] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" It very often wins.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:10:08] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Yeah. And I think, I would like the second theory that assumes that the understanding of the code would either prove not as critical or be sufficiently commoditized, because I think that leads to a more thriving ecosystem, right? If you create, if you assume that there's going to be one, like deeply understanding platform that gets your code.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:10:26] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" And therefore it would have all the cards. Then it doesn't lead to competition. It doesn't lead to innovation. Versus if you assume that everybody can do that, everybody can understand. Everybody's probably an exaggeration. Fine. Not everybody. You need some level of competency, and investment.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:10:41] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" But once you've built that understanding of the code, then you can build these things. Now, each of these tools can specialize and can optimize. They might optimize indeed on ease of use. They might optimize on specific ecosystems and verticals. They might optimize, on maybe a new stack that, that comes along.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:10:58] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Maybe it's just opinions. There are different ways to document. They're different, but they're sometimes opposed. 
So if you subscribe to, if you prefer, if your taste, if your preference is more on one route, you'll choose that tool.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:11:08] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" Great example being Q Developer,\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:11:09] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" which yes, it's a generic code completion,\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:11:12] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" actually other things as well, a coding system there, it has quite a specialization around the Amazon, the AWS services base. So if you wanted to code against an Amazon AWS API, it's very good at actually attaching to existing services and things like that. So those kinds of specializations whereby if you're developing an Amazon stack, your mileage will go much further.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:11:33] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Yeah. And I think that's a good example. I think in general, anything that's within an ecosystem, might specialize because people would have invested in that. While if you're sort of imagining a place in which it's so hard to understand the code and that's important or critical to the success, then yeah, maybe there's a few companies that are happy about it if they are the ones that have won, but then it means probably companies pick one, and maybe eventually they have more.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:11:56] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" But, mostly they try to concentrate on one, and that means, a new vendor, has to plug into it, build on their understanding. So there are, it's not all gloom. 
If it is about that sort of central element, then what you'd expect is for those that make available, or are able to produce, that understanding of the code to become a platform and allow others to connect.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:12:16] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" But I guess my hope is that it's less about this holistic understanding of the code, but also I think that's my expectation. I don't know. And maybe it's aligned with what Jason was saying, which is he feels the systems understand the code pretty well.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:12:31] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" He's building a foundation model. So maybe he's like, okay, my foundation model will just understand it as is. So there's like different levels of platform here. But I also, I guess, expected that. I don't know what your opinion is. I always equate it to human understanding.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:12:47] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" And I think there's a level of understanding of your code that is sufficient, that at that point, if you want to write docs, if you want to write tests, if you want to write all of these other things, then okay, you have enough understanding of the code; from here on, it's about your competency in that.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:13:00] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" Yeah, absolutely. 
That leads us nicely on, actually. We talked about this a few times prior to this month, but it came up a number of times during the sessions this month, which is really about what a future developer will look like. And I think there's a couple of pieces here, one talking about the role of a developer today, and how I guess AI is currently impacting or assisting that role today, and perhaps even an overestimation of its capability today, which we'll talk about in a second.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:13:28] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" And then secondly, in future, how do we expect AI to change our jobs, and in what capacity? Maybe we start with Jason first, because he talked a little bit about the impact of AI on our behaviors today. And this notion many people have batted about, which is, it is as good as a junior developer today.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:13:48] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Yeah.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:13:49] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" Now his opinion of course was\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:13:51] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Yeah, that it's not at all. Yeah. I think the quote was something like, none of these are junior developers.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:13:55] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" Yeah. The quote was, AI assistants are evolving, but there's still a leap to make before we could even start thinking about them as autonomous junior developers.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:14:02] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" And there's a lot of talk about hands on keyboard and the need for developers to still have hands on keyboard. 
Because it's just not there, it's not there in that fully automatable sense.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:14:10] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" I find all this quite fascinating. First of all, it's an observation that is about the current state of the art in terms of the technology and, in his defense, he was referring to his own platform as well.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:14:20] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" So he says like, it's not like the models just produced junior developers out of the blue. I liked the use of the word autonomous there. Last time we geeked out a little bit about the term autonomy. I talked to Des at Intercom about that, and autonomy probably is around just sort of, it's a size question.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:14:36] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" So there's some, can you autonomously summarize an email for me? They can do that today, but you don't call that autonomous because it's too small. The task is too small. So I guess there's a little bit of a lack of clarity about what is a junior developer. What is the scope of a task that a junior developer can do that is enough to give it that sort of title, autonomy, right?\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:14:57] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" But I thought that was really interesting. First of all, it led to a bunch of this conversation around understanding the code as well. He implied, I don't think he said that explicitly, that it would be better able to explain the code to you at a relatively senior level than be able to execute it.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:15:13] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" And I think really a lot of that boils down to autonomy, right? 
And if you think about this in the context of what is the future of a developer, I think it's interesting. And I guess he was still leaning to, in the next few years, we're still thinking about primarily AI as assistants and the developer engaging with them,\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:15:33] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" which I think is a little bit unexciting, but maybe it's a transitionary period. I don't know, how much do you like reviewing tests and code that got generated?\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:15:41] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" It's going to become like the code reviews, right? Whereas, if it's a few lines, then maybe I'll look at it.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:15:46] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" If it's 500 lines, maybe I'll scan it and have no comments about it and just accept it. And I think it's a little bit, it's a bit of a bummer thought to think that we will get stuck in this mode. It's fine, Copilot, Poolside, whatever, all these things, they're going to generate code for us.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:16:02] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" And then we will review that, and you would have written it faster. Fine, maybe we're faster, but as long as it's not autonomous in the sense that we can trust it, then basically our job becomes reviewers, and that's not a fun job.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:16:17] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" And that's the key, isn't it? It's trust.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:16:19] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" It's trust in the ability to say, look, I'm accepting what you're providing me. Yeah. And that's not through my blind trust of your ability to create code. 
It's through testing and assertions that are passing based on that,\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:16:32] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" but once you are able to trust, you're not reviewing so much anymore, or it's not in the usual workflow to dip into the code and start looking at that. Jason mentioned two parts, product, and architects. Now we've batted around architect a little bit in the past, as well as the future of development.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:16:47] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" I think that resonates with you as well.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:16:49] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" It does. Yeah. Absolutely. So I think the hope is that the future of a software developer is not being a code reviewer. I don't think we want that, and so because we don't want that, I don't think it'll stay there.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:16:57] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" And let's assume from an autonomy perspective, we will have gained enough trust to give it that title of autonomy. Then, okay, what does a developer do? And I really liked how Jason phrased that, quite aligned to how I think, which is on one side, you might have some developers that lean product.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:17:12] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" They lean user needs. They lean an understanding of what is it that the software should be able to provide to the user to address the problem. And so I think that's one path. It's probably a bit more well understood right now, and some developers love that, and some do not. 
And so if you're a developer that leans more into the technical aspects, I'm drawn into software development because of the problem solving aspects to it,\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:17:37] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" then you might go more down an architect route. And the architect route is interesting, because architects actually don't touch code much. I don't know, like I'm at this point overhead and quite rusty, and so it's hard for me to comment on what I do today. But as I was more in architect roles, I understood code.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:17:54] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" I cared about code, I could look at code, I could understand the implications of code. My job wasn't really about writing code. Also, the more, and I think we discussed this, the more towards architecture you go, the more kind of naturally polyglot you become. And he leaned into that. And he talked about all sorts of esoteric languages, generating them. I think it's interesting, it's these two paths. If you were drawn into software development because of the creation element, because you wanna solve problems and it's exciting to you that you can write some code and suddenly you can have a system that does X, you might move up the product manager route.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:18:26] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Yeah. Which certain developers do today. 
And if you're drawn into software development more because of the problem solving, then you might move up this architect route that is more about solving problems at a slightly higher resolution, with code being a solved problem, just as we do today with abstractions in languages. You send a request over the network and you don't feel like you are solving a lesser problem because you don't need to actually write the bytes and the bits on the wire.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:18:54] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" Yeah. And I think there are two other points of view here, which are well worth mentioning. First of all, the testing point of view, and secondly, more of the role where we think about it from a larger specs space. The testing point of view is interesting because it was really focusing on the code no longer being the most important artifact that a developer would work on.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:19:14] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" And this is really something that would help gather that trust of generated code. An interesting quote here from Bouke. He said, developers are increasingly test writers, and one of the things that was leaning into is the fact that if developers are focused more on tests, and the tests are creating a specification as to what the application should look like, then the application can be generated. Number one, the tests are the most important artifact in that, because that's your source of truth.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:19:44] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" And yeah, number two, you don't even need to look at the code, because if your tests are good and the tests pass, you don't care how it's implemented, almost. And in our session, in fact, I fell into the trap of, oh, can you show me the code? I'd love to see the implementation. 
And Bouke was like, no, you don't need to look at the code.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:19:57] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" And it was an interesting conversation, and it's more kind of habit to say, I want to see the code. I'm curious as to how it's done. So that was one piece, an interesting piece. And the second piece was Itamar, and when we were talking about the future, he was talking about it being PRD and specification, and from that, the tests can be generated directly from that specification. That kind of leans very much into the AI Native viewpoint of having the specification and things being pulled from that specification, a very different style of development to what we're used to today.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:20:26] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Yeah, absolutely. And I think there's different things that pull at this future role of a developer. One aspect of it is what would the tools make available, right? What can they do? And therefore, what is the job that is left to be done? 
And so, if tools are able to write code, and they're able to write code to a level that you trust, then you have to start thinking about, what's my job? It doesn't matter what I like or don't like doing, what is needed of me?\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:20:55] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" I'd like to think that writing tests, which is just the outcome of the validation, for most people is really no more fun than reviewing code, and so I think it is very functional, it's a way for us to drive that. I think the move towards PRD and specs and those terminologies, similar to the statement on architects, also touches on what could be the next craft.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:21:19] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Because I don't think you're going to get a lot of 18 year olds signing up to university to become test writers or code reviewers, but would they sign up to be product requirements writers, so your problem solving is on the user base? Or would they sign up to become architects?\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:21:33] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" I think those are our real paths. I think we're combining this today; to be able to achieve value from the LLMs, you need to either allow them to write the code and then review it, because you don't trust them yet, and do that less and less,\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:21:48] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" or you need to provide tests that give you confidence. In both cases, what you're doing is you're verifying their work. 
And I think really we're gonna evolve into a place in which it's a more collaborative mode, in which you are trusting that they will do this piece, and therefore you are doing something that is more high level.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:22:05] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" Yeah, another good Jason quote, I wish I was 25 years old again. The next 20 years would be totally different. Two questions, Guy. Question one, do you remember being 25 years old? And question two, it's actually really hard to be someone just coming into the space right now as a developer, whether through university or just being 25 and trying to make your way, because you can't see where you're going to be in 2, 3, 4, 5 years if AI does take over this space. But secondly, it was at least a little bit clear in terms of your trajectory, your path. Having the idea of product and the architect side, it's lovely to be able to view that path.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:22:45] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" What kind of advice would you give to people starting off now? Is it something they should be frightened about? Something they can get ahead in?\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:22:52] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" It's really tough, right? I think the factual part in Jason's quote there is, the next 20 years will be very different.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:23:00] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" I think that's really hard to contest. If you start coding today, or if you're 25, maybe you're a few years into that coding, then your next 20 years are different than if you did that 20 years ago, or whatever, a hundred years ago, like myself. 
But I think whether you wish the first part, the I wish I was, I think that's a bit debatable.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:23:20] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" It's hard to really say what is the advice to build in. I think there's the life advice, which is, you invest in resilience, and be adaptable, and learn to learn, and be able to adjust. But I think also a lot of the core principles are similar to maybe guidance that you would give people around being too enamored with a single language.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:23:40] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" And so to say, really, again, when you're a senior enough developer, I don't care. I initially learned to program in Pascal, back in the dark ages, and then I learned C and C++, and then Java, and then .Net, and then JavaScript. And it's okay. It's all programming, it's all development, and they're all tools to an end.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:23:57] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" And so it's maybe just leaning into what is making you tick, and thinking about code not as an identity, but rather as tools of the craft.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:24:06] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" Yeah, absolutely. I think that's really good advice. We talked about the code, not looking at the code, and I'll mention a couple of quotes and then we'll talk about Jason.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:24:14] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Love that provocation, right? 
Why do you need to look at the code?\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:24:17] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" And it's, it is against what we as developers want to do, right?\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:24:22] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" It's visceral.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:24:22] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" As soon as people see a project, I want to see the code. I want to see the code. So James said, I'm wondering why does that code actually need to exist in a human readable form, if the AI can actually get it right? If the AI has some form or way to express what I'm trying to get to, then why does the code need to be readable?\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:24:41] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" Bouke also said you wouldn't need to look at the code; if the tests are good enough, you have to trust the generated code from the AI. The tests are actually what become paramount. And now Jason had a hard time ingesting that kind of viewpoint; he still thinks there are experts that will need to understand the code.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:24:59] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" Your thoughts?\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:24:59] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Yeah. And I think he even phrased it a little bit as, it's hard for me to foresee that you wouldn't need to know code at all. Again, two interpretations might be here. 
One, which I think is right, is we're having some separation anxiety here, and I think it's absolutely right that if we get that trust, and that if is probably a word to lean in, if we get to that trust, that whatever higher level representation we have is trustworthy, that it would create something that actually provides that, writes the code behind the scenes, or does whatever it is that it wants to do to provide us with the functionality that we requested.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:25:35] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Then why do we need to look at the code? At the same time, a lot of developers today perceive themselves as coders. A lot of the world of software development, of open source, they are all about the source, they're about the code, and people take pride in their craft.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:25:49] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" I think the world of creation has seen this, for instance, with the move to digital, and it probably still is the case where a lot of people feel that drawing with brushes and paints is not the same as drawing digitally. And some people feel the opposite. And some people feel that playing an instrument, that is, actually producing the sounds, is the true music creation, versus anything digital, let alone EDM or electronic sort of sampling.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:26:15] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" So I think it's a new mode of creation and I think it'll take a while. I do want to point out that we still have mainframe developers out there. 
They're paid very well, because technology doesn't die, it sticks around and someone needs to maintain it. But it's hard for me to imagine how a cutting edge developer in five years' time, let alone 10, is really coding much, if anything at all.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:26:39] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" Yeah. Very interesting. And of course, models will improve over time to match that reality. One of the things, Jason obviously at Poolside is looking at creating a specific Gen AI model around coding, and there's a lot of talk around specific models versus generic models and which will win, which are needed.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:26:58] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" And yeah, a great quote which Jason also said was, in a world with infinite resources, the general purpose model is key to AGI. But in reality we face constraints on energy, data, time. How do we balance these limits? So what are your thoughts in terms of whether generic models will get to that stage where they're going to be good enough for coding, versus specific models that are trained for that very purpose,\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:27:23] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" are going to be a league ahead of those generic models.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:27:26] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Yeah, that is the multi billion dollar question, based on the amount of venture investment right now in foundation models. I thought that perspective was very interesting and very valuable. And I don't know, maybe I'm too much of a reductionist here, but I like trying to find the primary pivot points in a bunch of these decisions.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:27:45] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" And I think the key one here is around scaling. 
If you believe that we are going to be able to continue to feed these training processes more and more data, and continue to see, as we have, that if you give them large volumes of data, large volumes of compute, and if you do substantially more than that, then you will get substantially better results,\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:28:09] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" then I think that diminishes the value of dedicated models like Poolside, like robotics models; to an extent, even Itamar talked about their sort of test specific models, although I'm not sure how those are built, we didn't talk as much about that. And so you're going generic. On the other hand, if you think, and there's some rumors around that, that we're actually at a bit of a local maximum in terms of our ability to train, and that the GPT-5 and equivalent models will actually have a hard time and will become more like 4.1, 4.2, and it'll take us a bunch of years, two, three, four years, to figure out how to make some advancement. Maybe power is indeed the obstacle, maybe it's something else, and so we resort to algorithmic changes, things like that. Then it actually is easy to believe that if you took a code only model, and you basically picked, amidst the huge volumes of data, the subset of data that is relevant to coding.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:29:03] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" However, you need to be good at that. Then maybe you produce something that is a lot better, because it's purpose built, and doing the same for different domains. So I think it's really interesting. And I guess as a consumer, I'm happy. 
That there's enough venture money, you know, to try these different paths.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:29:19] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" There are environmental aspects to it, which are not negligible, which is not awesome. As an investor, those are very hard decisions to make, and I guess they're still being made, because if they do prove correct, there are very big amounts to be won. So I think as developers, as tool builders on top of LLMs, it's yet another reason to believe that the LLMs will continue to get better and better at generating code, because we have very capable, very well funded companies trying both paths and others.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:29:51] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" Yeah. Super interesting. It'd be interesting to see how that changes over the years. So that kind of almost wraps us up. But, Guy, we're into September now. What's that? Fall? That's fall now, isn't it? First week of fall. So August really was pretty good for AI news, around a number of startups making various announcements.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:30:10] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" Of those announcements, anything catch your eye?\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:30:12] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Yeah, we had a bit of a flurry of funding announcements in the AI dev space specifically right now. We saw Cursor, which is an AI focused IDE and more, announce a $60 million round. We saw Codeium with an E, which is different than Codium AI, who we had on.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:30:27] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" That's a very kind of unfortunate reality, I think, for both Codium companies. It's not like you name your company Codium and you say, oh, I think many people will do that. 
Anyways, it is what it is. So Codeium with an E has raised a $150 million round at a $1.25 billion post. And maybe a bit more interesting, beyond the sort of VC hype maybe, or like frenzy around this domain, not to say that it's overhyped.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:30:49] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" It's just, there's definitely a lot of hype, is that they publish that they have 700,000 active users. I don't know what active means here. Is this like from the beginning or not? Even if we assume 700,000 users that have used the product at least once, that's substantial, and over a thousand customers.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:31:03] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" And again, I don't know if it's an individual developer. And I guess the last one, that may be sorted by size here, is Magic dev, which is more of a model company. They're somewhat secretive, so I don't know everything around it, but they raised $320 million. And they claim that they effectively can achieve a hundred million token context window for code related things, through some smart summarization or something of that nature. And, first of all, it reinforces the point that there are many different smart companies that are trying to get code generation to work better in many different ways. Definitely reinforces the point on dollars flowing into this world to try this out.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:31:43] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Which is cool. And I guess it shows that it's not a Copilot only world, which is exciting, because I think I'm at fault of that as well, which is, it's easier. It's, hey, AI dev tool, which one would you name? 
And I think there's a strong bias towards Copilot.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:31:59] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" But I'm happy to see a bunch of these companies get some pretty substantial traction in adoption. And I think that would lead to a better dev tooling ecosystem.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:32:08] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" I had a look at some of the Copilot stats from just before this session.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:32:11] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" And actually there's not many numbers, apart from the bigger announcement a while back, which people have maybe heard from Satya. 40 percent of GitHub's growth this year.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:32:20] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" Came from Copilot and the likes.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:32:22] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" Yeah, absolutely, and open source developer platforms now reached a $2 billion run rate.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:32:27] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" Copilot has over 77,000 organizations which are adopting it, a 180 percent increase from the previous year. So yeah, it's a slightly different order, but they're certainly not alone in this space. Comparable numbers and growth from other companies, which is great to see.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:32:40] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" I think Copilot is doing a great job, and GitHub and Microsoft commercially are doing a great job kind of bundling it in and making it available to many people. 
And I think they're helping the ecosystem in that they're driving familiarity and trust, and maybe getting people to find out what they do and what they don't like about Copilot, which opens the door now for a new vendor to come and say, hey, here's a pain.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:33:04] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" It's very hard when you build tools, when you build a startup, a disruptive entity. You're disrupting some pain; you need to go to someone and say, hey, I know you're doing this thing and you really want to do it, but this aspect of it really annoys you, and you're building those out. And when it's a totally brand new practice, yeah, I know you've been using these AI coding assistants for all this time.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:33:25] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" And aren't you annoyed, when really all this time is, I've been using them for five minutes; it's hard to disrupt. And so to an extent they are establishing themselves as an incumbent, as the starting point, which on one hand gives them some ability to reach many users.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:33:42] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" On the other hand, there's definitely a lot of magnifying glasses on all the things that they're not doing correctly, and a whole bunch of startups trying to address those in more or less systematic ways.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:33:53] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" Yeah. 
we'll see what happens, in the next few months as well.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:33:56] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" We'll, we'll talk about anything that we find interesting in the news as well.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:33:58] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" And I would, and I would say I guess if I were a betting man, but also I know of a couple of them, I think September and October will have their own share of, exciting AI dev tool announcements.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:34:08] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" Excellent. We've got a few more sessions, great sessions coming up, including one that I just listened to actually for next week. So I'm looking forward to that one. Tamar, the chief product officer of Glean, so yeah, very interesting session, which we'll be releasing next week and many more for the rest of the month. So stay tuned for other sessions coming up later this month.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:34:27] \",/*#__PURE__*/e(\"strong\",{children:\"Simon Maple:\"}),\" In the meantime, thanks all for listening and tune into the next session.\"]}),/*#__PURE__*/t(\"p\",{children:[\"[00:34:31] \",/*#__PURE__*/e(\"strong\",{children:\"Guy Podjarny:\"}),\" See you there.\"]})]});export const richText4=/*#__PURE__*/t(a.Fragment,{children:[/*#__PURE__*/e(\"h4\",{children:/*#__PURE__*/e(\"strong\",{children:\"Introduction\"})}),/*#__PURE__*/e(\"p\",{children:\"In this episode of the Tessl podcast, Simon Maple is joined by Bouke Nijhuis, the CTO of CINQ, a consultancy company based in Amsterdam. Bouke has a rich background as a Java developer and has been an international speaker since 2019. The focus of this episode is on Test-Driven Development (TDD) and how AI can assist in generating code from tests. 
This discussion will delve into Bouke's experiences, his current projects, and the potential future of AI-assisted development.\"}),/*#__PURE__*/e(\"p\",{children:\"Bouke is not only an experienced technologist but also an influential voice in the tech community. His journey from a Java developer to the CTO of an innovative consultancy has equipped him with a deep understanding of both the technical and strategic aspects of software development. Known for his engaging presentations and practical insights, Bouke has been sharing his knowledge on international stages since 2019, focusing on the intersection of AI and software development. At CINQ, he leads initiatives in Data, DevOps, and Development, driving innovation through AI-enhanced workflows.\"}),/*#__PURE__*/e(\"h4\",{children:/*#__PURE__*/e(\"strong\",{children:\"Bouke's Background and Role at CINQ\"})}),/*#__PURE__*/e(\"p\",{children:\"Bouke Nijhuis introduced himself and his journey from a Java developer to the CTO of CINQ. As he explained, \u201CI started out as a Java developer at CINQ about 10 years ago. Actually, in 10 days, it will be exactly 10 years ago.\u201D CINQ is a consultancy company based in Amsterdam with specializations in Data, DevOps, and Development. Bouke shared that in his role as CTO, he oversees the development unit, focusing on Java, Kotlin, Angular, and React technologies. He also enjoys international speaking engagements, having started in 2019, which allows him to share his knowledge and engage in discussions with the tech community.\"}),/*#__PURE__*/e(\"h4\",{children:/*#__PURE__*/e(\"strong\",{children:\"Introduction to TDD\"})}),/*#__PURE__*/e(\"p\",{children:\"Test-Driven Development (TDD) is a software development approach where tests are written before the code itself. Bouke shared his initial experience with TDD, stating, \u201CI learned about TDD, I would say about 15 years ago, and I think it\u2019s pretty hard to do it the official way. 
I think I\u2019m doing a little bit in the middle.\u201D Despite not being a diehard TDD practitioner, Bouke appreciates TDD for helping him develop faster and better. The benefits of TDD include ensuring code correctness, promoting better design, and providing a safety net during refactoring.\"}),/*#__PURE__*/e(\"h4\",{children:/*#__PURE__*/e(\"strong\",{children:\"AI Coding Assistants in Bouke's Workflow\"})}),/*#__PURE__*/e(\"p\",{children:\"Bouke discussed the various AI coding assistants he uses, such as GitHub Copilot, JetBrains AI Assistant, and ChatGPT. These tools integrate seamlessly into his daily workflow, helping him generate code and identify bugs. Bouke emphasized the value of these assistants: \u201CI use them to just generate code for me. I use the, if I just press enter, it does a proposal, I like that, but I like even better that you can chat with it, and then you can reason about the code and ask it to generate something and to improve upon it.\u201D This interaction is akin to having a virtual pair programmer, making AI assistants an invaluable part of his development process.\"}),/*#__PURE__*/e(\"h4\",{children:/*#__PURE__*/e(\"strong\",{children:\"Generating Tests with AI\"})}),/*#__PURE__*/e(\"p\",{children:\"The process of generating tests using AI assistants involves providing existing code and asking the AI to create corresponding test cases. Bouke highlighted the advantages and limitations of this approach: \u201CI ask it to generate test cases, but that\u2019s only when I already have an implementation.\u201D While AI-generated tests can cover happy paths and some edge cases, it\u2019s crucial to manually review these tests to ensure they are comprehensive and correct. 
This manual validation helps catch any inaccuracies or omissions that the AI might overlook.\"}),/*#__PURE__*/e(\"h4\",{children:/*#__PURE__*/e(\"strong\",{children:\"Generating Code from Tests\"})}),/*#__PURE__*/e(\"p\",{children:\"A particularly intriguing concept discussed in the episode was generating code from human-written tests using AI. Bouke\u2019s research in this area led to his talk \u201CTDD and Gen AI: A Perfect Pairing.\u201D He explained the motivation behind this idea: \u201CCan you also do it the other way around? You as a human, can you write a test? And then can you ask the AI to come up with an implementation?\u201D This exploration resulted in Bouke creating a tool to automate this process, allowing developers to write tests and have AI generate the corresponding code, which can then be validated against the tests.\"}),/*#__PURE__*/e(\"h4\",{children:/*#__PURE__*/e(\"strong\",{children:\"Practical Implementation and Demos\"})}),/*#__PURE__*/e(\"p\",{children:\"Bouke provided a step-by-step description of how his tool works. Starting with simple problems like odd/even tests and prime number generation, the tool automates the process of generating code from tests. Bouke introduced a Maven plugin to handle more complex frameworks like Spring Boot, demonstrating its capabilities with a Hello World example and a more intricate age calculation endpoint. This progression from simple to complex scenarios showcases the tool\u2019s versatility and practical applications in real-world development.\"}),/*#__PURE__*/e(\"h4\",{children:/*#__PURE__*/e(\"strong\",{children:\"Challenges and Solutions\"})}),/*#__PURE__*/e(\"p\",{children:\"Using AI to generate code presents several challenges, such as non-compiling code and incorrect implementations. 
Bouke addressed these issues with a feedback loop mechanism: \u201CIf you would summarize it, I use it like a virtual pair programmer.\u201D This loop involves providing the AI with test results, prompting it to refine the generated code until all tests pass. Comprehensive testing, including performance and security tests, ensures the robustness of the final implementation, highlighting the importance of detailed and thorough testing.\"}),/*#__PURE__*/e(\"h4\",{children:/*#__PURE__*/e(\"strong\",{children:\"The Future of AI and TDD\"})}),/*#__PURE__*/e(\"p\",{children:\"Bouke envisions a future where tests become the primary artifact, and code generation is fully automated. He shared his thoughts: \u201CIf we trust the generated code from the AI, and that means if we trust our test, then the test will obviously be more important. We don\u2019t care about the implementation anymore.\u201D This shift could significantly impact software development practices, emphasizing the quality and comprehensiveness of tests. Ethical considerations and the reliability of AI-generated code will play a crucial role in this evolution, ensuring that AI tools are used responsibly and effectively.\"}),/*#__PURE__*/e(\"h4\",{children:/*#__PURE__*/e(\"strong\",{children:\"Bouke's Experiences with Different AI Models\"})}),/*#__PURE__*/e(\"p\",{children:\"Throughout his research, Bouke experimented with various AI models, both local and cloud-based, such as Llama 3, ChatGPT, and Claude 3.5. He shared his insights into their performance and capabilities: \u201CThe Llama 3 models are the best one for my use case. 
A few months back, I switched to Llama 3 and nowadays I\u2019m using Llama 3.1 and that works really well.\u201D This comparison highlights the strengths and weaknesses of different models, helping developers choose the most suitable tools for their needs.\"}),/*#__PURE__*/e(\"h4\",{children:/*#__PURE__*/e(\"strong\",{children:\"Summary\"})}),/*#__PURE__*/e(\"p\",{children:\"In this episode, Bouke Nijhuis shared his journey from a Java developer to the CTO of CINQ, his experiences with TDD, and the role of AI in enhancing development workflows. Key takeaways for developers include the importance of TDD, integrating AI tools, and staying updated with AI advancements. Bouke\u2019s vision of a future where tests drive code generation underscores the potential of AI-assisted development. Developers are encouraged to experiment with AI tools and adopt TDD practices for better software development, ensuring they remain at the forefront of technological innovation.\"})]});export const richText5=/*#__PURE__*/t(a.Fragment,{children:[/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(o,{href:\"https://github.com/BoukeNijhuis/ai-native-dev-examples\",motionChild:!0,nodeId:\"X7VlByTfx\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(n.a,{children:\"AI Native Dev Examples\"})})}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(o,{href:\"https://github.com/BoukeNijhuis/test-driven-generation\",motionChild:!0,nodeId:\"X7VlByTfx\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(n.a,{children:\"Code Generator JAR Repo\"})})}),/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(o,{href:\"https://github.com/BoukeNijhuis/test-driven-generation-maven-plugin\",motionChild:!0,nodeId:\"X7VlByTfx\",openInNewTab:!1,scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(n.a,{children:\"Maven Plugin Repo\"})})})]});export const 
richText6=/*#__PURE__*/t(a.Fragment,{children:[/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"[00:00:22] Introduction and Welcome\"}),/*#__PURE__*/e(\"br\",{}),\"Simon Maple introduces the episode and guest Bouke Nijhuis.\"]}),/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"[00:00:57] Understanding the Workflow\"}),/*#__PURE__*/e(\"br\",{}),\"Bouke explains the basic workflow of generating code from TDD-style tests.\"]}),/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"[00:05:00] Initial Demonstration\"}),/*#__PURE__*/e(\"br\",{}),\"Bouke demonstrates generating code for an odd-even test using ChatGPT.\"]}),/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"[00:09:52] Tackling More Complex Scenarios\"}),/*#__PURE__*/e(\"br\",{}),\"Bouke showcases generating prime numbers and integrating libraries.\"]}),/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"[00:12:19] Automating the Process\"}),/*#__PURE__*/e(\"br\",{}),\"Introducing the automated tool for generating code and running tests.\"]}),/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"[00:17:28] Improving the Tool with Existing Tests\"}),/*#__PURE__*/e(\"br\",{}),\"Bouke discusses using the tool to improve and refactor existing code.\"]}),/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"[00:24:24] Adding New Features\"}),/*#__PURE__*/e(\"br\",{}),\"Demonstration of adding new features by writing additional tests.\"]}),/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"[00:26:06] Overcoming Challenges with Cloud Models\"}),/*#__PURE__*/e(\"br\",{}),\"Bouke talks about the benefits and challenges of using cloud-based LLMs for more complex tasks.\"]}),/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"[00:32:28] Integrating with Frameworks\"}),/*#__PURE__*/e(\"br\",{}),\"Using a Maven plugin to integrate generated code with Spring 
Boot.\"]}),/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"[00:38:01] Conclusion and Next Steps\"}),/*#__PURE__*/e(\"br\",{}),\"Final thoughts, key takeaways, and resources for further exploration.\"]})]});\nexport const __FramerMetadata__ = {\"exports\":{\"richText\":{\"type\":\"variable\",\"annotations\":{\"framerContractVersion\":\"1\"}},\"richText1\":{\"type\":\"variable\",\"annotations\":{\"framerContractVersion\":\"1\"}},\"richText6\":{\"type\":\"variable\",\"annotations\":{\"framerContractVersion\":\"1\"}},\"richText3\":{\"type\":\"variable\",\"annotations\":{\"framerContractVersion\":\"1\"}},\"richText2\":{\"type\":\"variable\",\"annotations\":{\"framerContractVersion\":\"1\"}},\"richText5\":{\"type\":\"variable\",\"annotations\":{\"framerContractVersion\":\"1\"}},\"richText4\":{\"type\":\"variable\",\"annotations\":{\"framerContractVersion\":\"1\"}},\"__FramerMetadata__\":{\"type\":\"variable\"}}}"],
  "mappings": "iOAAsJ,IAAMA,EAAsBC,EAAIC,EAAS,CAAC,SAAS,CAAcD,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,uPAAuP,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,gCAAgC,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,8EAA8E,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,sXAAsX,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,gJAAgJ,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,sGAAsG,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,qUAAqU,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,0EAA0E,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,uRAAuR,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,0UAA0U,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,oSAAoS,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,gaAAga,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,+UAA+U,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,sTAAsT,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,4cAA4c,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,0IAA0I,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,mVAAmV,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,oPAAoP,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,kMAAkM,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,0VAA0V,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,mZAAmZ,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC
,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,uHAAuH,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,0aAA0a,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,2LAA2L,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,0XAA0X,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,mVAAmV,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,oVAAoV,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,6aAA6a,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,4bAA4b,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,iRAAiR,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,uUAAuU,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,0RAA0R,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,oBAAoB,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,UAAU,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,2FAA2F,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,kWAAkW,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,oXAAoX,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,6RAA6R,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,2KAA2K,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,iWAAiW,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,kGAAkG,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,uUAAuU,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,2TAA2T,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,
CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,qYAAqY,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,0OAA0O,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,wVAAwV,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,4gBAA4gB,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,4KAA4K,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,snBAAsnB,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,qTAAqT,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,2bAA2b,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,6UAA6U,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,wPAAwP,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,yZAAyZ,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,gnBAAgnB,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,2TAA2T,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,gLAAgL,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,kFAAkF,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,WAAW,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,mOAAmO,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,6BAA6B,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,8MAA8M,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,uBAAuB,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,8BAA8B,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,+RAA+R,CAAC,CAAC,EAAeF,EAAE,IAAI,
CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,iDAAiD,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,0LAA0L,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,kTAAkT,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,qTAAqT,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,6WAA6W,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,qUAAqU,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,mdAAmd,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,mCAAmC,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,yUAAyU,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,yUAAyU,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,gOAAgO,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,kTAAkT,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,0UAA0U,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,wiBAAwiB,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,gSAAgS,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,8EAA8E,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,2VAA2V,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,qUAAqU,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,uUAAuU,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,sXAAsX,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,8UAA8U,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,sXAAsX,CAAC,CAAC,EAAeF,
EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,0QAA0Q,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,+BAA+B,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,kEAAkE,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,qSAAqS,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,gYAAgY,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,iSAAiS,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,uYAAuY,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,oCAAoC,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,iWAAiW,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,+TAA+T,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,gXAAgX,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,wOAAwO,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,sTAAsT,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,giBAAgiB,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,OAAO,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,uYAAuY,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,wWAAwW,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,waAAwa,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,icAAic,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,6XAA6X,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,0GAA0G,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,0DAA0D,CAAC,CAAC,
EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,eAAe,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,+FAA+F,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,sTAAsT,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,ylBAAylB,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,8GAA8G,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,+UAA+U,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,6TAA6T,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,+VAA+V,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,mWAAmW,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,+WAA+W,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,sLAAsL,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,mbAAmb,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,wWAAwW,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,gVAAgV,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,8WAA8W,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,+OAA+O,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,qDAAqD,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,mBAAmB,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,4BAA4B,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,qDAAqD,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,+UAA+U,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,4VA
A4V,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,uSAAuS,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,+QAA+Q,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,kDAAkD,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,yYAAyY,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,8OAA8O,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,SAAS,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,qEAAqE,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,yjBAAyjB,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,2GAA2G,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,wEAAwE,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,sTAAsT,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,yQAAyQ,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,oNAAoN,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,4JAA4J,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,kNAAkN,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,qUAAqU,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,gLAAgL,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,iJAAiJ,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,gQAAgQ,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,uFAAuF,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,0QAA0Q,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE
,uQAAuQ,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,0TAA0T,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,wWAAwW,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,yTAAyT,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,oWAAoW,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,2XAA2X,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,+aAA+a,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,2iBAA2iB,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,qTAAqT,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,uVAAuV,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,8VAA8V,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,gVAAgV,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,uHAAuH,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,+YAA+Y,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,2hBAA2hB,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,mCAAmC,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,2YAA2Y,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,wMAAwM,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,wTAAwT,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,6PAA6P,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,+NAA+N,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,iBAAiB,CAAC,EAAE,6DAA6D,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,
eAAe,CAAC,EAAE,8EAA8E,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeC,EAAuBH,EAAIC,EAAS,CAAC,SAAS,CAAcC,EAAE,KAAK,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8eAA8e,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,iCAAiC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,wcAAwc,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,iWAAiW,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,mCAAmC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,qXAAsX,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8WAA8W,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,0CAA0C,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+UAA+U,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,wVAAwV,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,4CAA4C,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,oTAAqT,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,kWAAmW,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,oDAAoD,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,8YAA8Y,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,kVAAkV,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,6BAA6B,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,wUAA0U,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4RAA6R,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,mCAAmC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,iPAAmP,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,6TAA6T,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,uCAAuC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,iUAAiU,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,yVAAyV,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,0CAA0C,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,kOAAkO,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,QAAQ,CAAC,EAAE,gDAAgD,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,SAAS,CAAC,EAAE,4EAA4E,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,WAAW,CAAC,EAAE,4DAA4D,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4JAA4J,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,SAAS,CAAC,CAAC,CAAC,EAAeA,
EAAE,IAAI,CAAC,SAAS,2HAA2H,CAAC,EAAeF,EAAE,KAAK,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,qDAAqD,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,+EAA+E,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,+CAA+C,CAAC,CAAC,CAAC,EAAeA,EAAE,KAAK,CAAC,kBAAkB,IAAI,SAAsBA,EAAE,IAAI,CAAC,SAAS,wEAAwE,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,oEAAoE,CAAC,CAAC,CAAC,CAAC,EAAeE,EAAuBJ,EAAIC,EAAS,CAAC,SAAS,CAAcD,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,qFAAqF,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,CAAC,EAAeA,EAAE,SAAS,CAAC,SAAS,4CAA4C,CAAC,EAAE,wHAAwH,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,CAAC,EAAeA,EAAE,SAAS,CAAC,SAAS,qDAAqD,CAAC,EAAE,8GAA8G,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,CAAC,EAAeA,EAAE,SAAS,CAAC,SAAS,uDAAuD,CAAC,EAAE,8GAA8G,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,KAAK,CAAC,CAAC,EAAeA,EAAE,SAAS,CAAC,SAAS,+DAA+D,CAAC,EAAE,mGAAmG,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,wCAAwC,CAAC,EAAE,0GAA0G,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,8CAA8C,CAAC,EAAE,sFAAsF,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,kDAAkD,CAAC,EAAE,mFAAmF,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,qDAAqD,CAAC,EAAE,mFAAmF,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeG,EAAuBL,EAAIC,EAAS,CAAC,SAAS,CAAcD,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,kFAAkF,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,+DAA+D,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,keAAke,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,kCAAkC,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,mRAAmR,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,+EAA+E,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS
,CAAC,SAAS,cAAc,CAAC,EAAE,gRAAgR,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,qDAAqD,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,2NAA2N,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,gJAAgJ,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,qPAAqP,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,gUAAgU,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,uTAAuT,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,qHAAqH,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,0lBAA0lB,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,wWAAwW,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,iHAAiH,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,iTAAiT,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,uLAAuL,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,kSAAkS,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,sUAAsU,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,4VAA4V,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,6YAA6Y,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,6RAA6R,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,gRAAgR,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,0TAA0T,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,6OAA6O,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,qXAAqX,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CA
AC,EAAE,mXAAmX,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,inBAAinB,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,mRAAmR,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,6TAA6T,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,8UAA8U,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,sWAAsW,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,oTAAoT,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,wUAAwU,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,ofAAof,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,sVAAsV,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,QAAQ,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,+CAA+C,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,kUAAkU,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,sBAAsB,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,2VAA2V,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,0SAA0S,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,4TAA4T,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,8OAA8O,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,kCAAkC,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,6CAA6C,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,2XAA2X,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,keAAke,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,uXAAuX,CAAC,
CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,wSAAwS,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,4TAA4T,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,mTAAmT,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,wfAAwf,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,mTAAmT,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,QAAQ,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,gCAAgC,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,iHAAiH,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,oKAAoK,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,8LAA8L,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,gNAAgN,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,kUAAkU,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,+XAA+X,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,uUAAuU,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,8UAA8U,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,+KAA+K,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,0HAA0H,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,uSAAuS,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,kPAAkP,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,4CAA4C,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,gPAAgP,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,oTAAoT,CAAC,CAAC,EAAeF,EAAE,IAAI,C
AAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,6CAA6C,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,kOAAkO,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,iTAAiT,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,ubAAub,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,yTAAyT,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,ypBAAypB,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,0fAA0f,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,4ZAA4Z,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,yfAAyf,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,6VAA6V,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,wkBAAwkB,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,gdAAgd,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,0XAA0X,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,yRAAyR,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,oPAAoP,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,4UAA4U,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,4oBAA4oB,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,mJAAmJ,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,oJAAoJ,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,8VAA8V,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,gWAAgW,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,wUAAwU,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,
cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,0IAA0I,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,sLAAsL,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,qEAAqE,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,mEAAmE,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,mBAAmB,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,2UAA2U,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,qUAAqU,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,iBAAiB,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,2oBAA2oB,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,uRAAuR,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,kfAAkf,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,+YAA+Y,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,gUAAgU,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,+ZAA+Z,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,0DAA0D,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,uVAAuV,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,2WAA2W,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,o8BAAo8B,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,wTAAwT,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,ygBAAygB,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,wWAAwW,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,mDAAmD,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SA
AS,CAAC,SAAS,eAAe,CAAC,EAAE,oQAAoQ,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,yaAAya,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,8TAA8T,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,srBAAsrB,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,iSAAiS,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,mKAAmK,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,2EAA2E,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,4KAA4K,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,mCAAmC,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,0FAA0F,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,6QAA6Q,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,maAAma,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,0ZAA0Z,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,4SAA4S,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,2NAA2N,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,gEAAgE,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,iFAAiF,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,oMAAoM,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,iYAAiY,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,EAAE,4EAA4E,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAC,cAA2BE,EAAE,SAAS,CAAC,SAAS,eAAe,CAAC,EAAE,iBAAiB,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeI,EAAuBN,EAAIC,EAAS,CAAC,SAAS,CAAcC,EAAE,KAAK,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,cAAc,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+dAA+d,CAA
C,EAAeA,EAAE,IAAI,CAAC,SAAS,mlBAAmlB,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,qCAAqC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,+nBAAqnB,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,qBAAqB,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,wkBAAojB,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,0CAA0C,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,2pBAAipB,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,0BAA0B,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,wjBAAoiB,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,4BAA4B,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,ymBAAglB,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,oCAAoC,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,0hBAAqhB,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,0BAA0B,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,yiBAA+hB,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,0BAA0B,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,4mBAA6lB,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,8CAA8C,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,ugBAAwf,CAAC,EAAeA,EAAE,KAAK,CAAC,SAAsBA,EAAE,SAAS,CAAC,SAAS,SAAS,CAAC,CAAC,CAAC,EAAeA,EAAE,IAAI,CAAC,SAAS,olBAA+kB,CAAC,CAAC,CAAC,CAAC,EAAeK,EAAuBP,EAAIC,EAAS,CAAC,SAAS,CAAcC,EAAE,IAAI,CAAC,SAAsBA,EAAEM,EAAE,CAAC,KAAK,yDAAyD,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBN,EAAEO,EAAE,EAAE,CAAC,SAAS,wBAAwB,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeP,EAAE,IAAI,CAAC,SAAsBA,EAAEM,EAAE,CAAC,KAAK,yDAAyD,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBN,EAAEO,EAAE,EAAE,CAAC,SAAS,yBAAyB,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeP,EAAE,IAAI,CAAC,SAAsBA,EAAEM,EAAE,CAAC,KAAK,sEAAsE,YAAY,GAAG,OAAO,YAAY,aAAa,GAAG,QAAQ,oBAAoB,aAAa,GAAG,SAAsBN,EAAEO,EAAE,EAAE,CAAC,SAAS,mBAAmB,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,EAAeC,EAAuBV,EAAIC,EAAS,CAAC,SAAS,CAAcD,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,qCAAqC,CAAC,EAAeA,EAAE,KAAK,CAAC,CAAC,EAAE,6DAA6D,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,uCAAuC,CAAC,EAAeA,EAAE,KAAK,CAAC,CAAC,EAAE,4EAA4E,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC
,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,kCAAkC,CAAC,EAAeA,EAAE,KAAK,CAAC,CAAC,EAAE,wEAAwE,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,4CAA4C,CAAC,EAAeA,EAAE,KAAK,CAAC,CAAC,EAAE,qEAAqE,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,mCAAmC,CAAC,EAAeA,EAAE,KAAK,CAAC,CAAC,EAAE,uEAAuE,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,mDAAmD,CAAC,EAAeA,EAAE,KAAK,CAAC,CAAC,EAAE,uEAAuE,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,gCAAgC,CAAC,EAAeA,EAAE,KAAK,CAAC,CAAC,EAAE,mEAAmE,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,oDAAoD,CAAC,EAAeA,EAAE,KAAK,CAAC,CAAC,EAAE,iGAAiG,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,wCAAwC,CAAC,EAAeA,EAAE,KAAK,CAAC,CAAC,EAAE,oEAAoE,CAAC,CAAC,EAAeF,EAAE,IAAI,CAAC,SAAS,CAAcE,EAAE,SAAS,CAAC,SAAS,sCAAsC,CAAC,EAAeA,EAAE,KAAK,CAAC,CAAC,EAAE,uEAAuE,CAAC,CAAC,CAAC,CAAC,CAAC,EAC3zyIS,EAAqB,CAAC,QAAU,CAAC,SAAW,CAAC,KAAO,WAAW,YAAc,CAAC,sBAAwB,GAAG,CAAC,EAAE,UAAY,CAAC,KAAO,WAAW,YAAc,CAAC,sBAAwB,GAAG,CAAC,EAAE,UAAY,CAAC,KAAO,WAAW,YAAc,CAAC,sBAAwB,GAAG,CAAC,EAAE,UAAY,CAAC,KAAO,WAAW,YAAc,CAAC,sBAAwB,GAAG,CAAC,EAAE,UAAY,CAAC,KAAO,WAAW,YAAc,CAAC,sBAAwB,GAAG,CAAC,EAAE,UAAY,CAAC,KAAO,WAAW,YAAc,CAAC,sBAAwB,GAAG,CAAC,EAAE,UAAY,CAAC,KAAO,WAAW,YAAc,CAAC,sBAAwB,GAAG,CAAC,EAAE,mBAAqB,CAAC,KAAO,UAAU,CAAC,CAAC",
  "names": ["richText", "u", "x", "p", "richText1", "richText2", "richText3", "richText4", "richText5", "Link", "motion", "richText6", "__FramerMetadata__"]
}
