{"version":3,"file":"X7VlByTfx-22.D_7iA8sG.mjs","names":["o","a"],"sources":["https:/framerusercontent.com/modules/mDkIT313K0Wt5oSoVDyx/YXY020H2Vv5itJBUKbmE/X7VlByTfx-22.js"],"sourcesContent":["import{jsx as e,jsxs as t}from\"react/jsx-runtime\";import{Link as o}from\"framer\";import{motion as a}from\"framer-motion\";import*as n from\"react\";export const richText=/*#__PURE__*/t(n.Fragment,{children:[/*#__PURE__*/e(\"p\",{children:\"**Roxane Fischer:** [00:00:00] The issue is that with generative AI the world is way more open. You can create anything, but because you can create anything, you also need to be like super precise in your instruction. Garbage in, garbage out. What do you put into your prompt? What is the level of details that you give our context?\"}),/*#__PURE__*/e(\"p\",{children:\"it's all about context. And so you're opening yourself to way more mistake or actually not just the right guidelines.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** You're listening to the AI Native Dev brought to you by Tessl\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"hello and welcome to another episode of the AI Native Dev. On today's show we have Roxane Fischer, who's going to talk to us a little bit about how AI can assist us in our DevOps infrastructure as code flows. Roxane, welcome. How are [00:01:00] you?\"}),/*#__PURE__*/e(\"p\",{children:\"**Roxane Fischer:** Hi, nice to meet you. Good on you.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** Yeah, doing well. Thank you.\"}),/*#__PURE__*/e(\"p\",{children:\"You're the co-founder and CEO of Anyshift. And in fact, prior to that, you've done a huge amount of research. And I think we partly crossed paths a little bit before as well with an acquisition from Snyk. 
Tell us a little bit about your past.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Roxane Fischer:** I'm one of the co-founders of Anyshift.\"}),/*#__PURE__*/e(\"p\",{children:\"We are a source graph connected to your cloud, to enhance visibility for DevOps teams in their day-to-day workflow. And so personally, I'm a former AI researcher. I was doing research in FinTech companies and also at Samsung, in vision and financial data. And when I started the entrepreneurial journey, I met my co-founder Stephane, whose previous company CloudSkiff, which created driftctl, was acquired by Snyk.\"}),/*#__PURE__*/e(\"p\",{children:\"So that's how the worlds crossed. Yeah. And here we are, we started Anyshift a couple of months ago, trying to bridge this gap between DevOps and AI.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** Yeah, amazing. And it's interesting when we think about AI [00:02:00] and how AI dev tools can help our workflows and help our development.\"}),/*#__PURE__*/e(\"p\",{children:\"We automatically think very quickly about how we can write code faster, how we can review code and all those kinds of things. But there's a ton more, perhaps in the DevOps space and others, that is going to be very helpful to us over time. First of all, how much do you think people are focusing on code completion and coding assistants almost too much, when AI can actually help us in various other places of the CI/CD pipeline, of DevOps and things like that? Are we over-rotating on coding assistants and under-utilizing AI in various other places in the pipeline?\"}),/*#__PURE__*/e(\"p\",{children:\"**Roxane Fischer:** It's a good question. I think first of all, Gen AI is great and makes all of our lives like 10x, 100x faster. Yeah. 
And we should use it, because we're entering a new generation of code generation and of the capabilities we can create, but still there are some issues with that.\"}),/*#__PURE__*/e(\"p\",{children:\"As you said, you have so many more lines of code [00:03:00] which are created now, in an exponential way, which creates issues about legacy: you can create legacy code way faster. And we have a limited number of reviewers to actually review the code which was generated, which is actually even more crucial and difficult with infrastructure as code, where it's complicated to test your infra, like the layers of tests for your infra.\"}),/*#__PURE__*/e(\"p\",{children:\"It's not as mature; it's more complicated to do than for applications. And the other point is that with so many lines of code which are now created, you also have a sense of ownership which is less strong for different developer teams. You don't even remember sometimes the code that you have created.\"}),/*#__PURE__*/e(\"p\",{children:\"So those are new challenges. And also, if I can complete your question and add something about that: it's particular also for infrastructure as code. It can help SRE and DevOps teams engineering the infra, because compared to other frameworks such as Python or Java, infrastructure as [00:04:00] code such as Terraform is relatively new.\"}),/*#__PURE__*/e(\"p\",{children:\"Terraform is 10 years old. When you go on GitHub, you only have a couple of thousand public modules of Terraform. The amount of code that was used to actually train LLMs, those large models that do this sort of completion, those generative models, those assets are quite small compared to other frameworks. One of the issues is that nobody really wants to put their infra in the clear on GitHub. It's too sensitive. 
And so the generation of infrastructure as code frameworks won't be as good as for other ones.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** So is generative AI, the way LLMs are trained, needing huge amounts of data, for example, is that the right way we should be thinking about AI in the infrastructure as code, in the DevOps space?\"}),/*#__PURE__*/e(\"p\",{children:\"**Roxane Fischer:** The way your models are trained is that actually they're going to train on a large amount of data to understand patterns. [00:05:00] And you will understand a large amount of patterns. The more data you have, the better you are at understanding similarities, correlations between different frameworks.\"}),/*#__PURE__*/e(\"p\",{children:\"One example that I like to take is that you can create some code, but potentially, because your amount of data won't be big enough, you won't get the best type of code generation that you could expect. So one of the few sandbox environments that we did at Anyshift would be: okay, let's generate a VPC peering between two different VPCs. And we are going to ask GPT to create it. And it's gonna do it. It's gonna work. Because it's still really good. And it still helps a lot in your day to day. But when you do that, if you don't give enough context, you will have two hard-coded values for the VPCs.\"}),/*#__PURE__*/e(\"p\",{children:\"And also you will miss some additional information, metadata. The auto-accept value is missing, and the tags also, which doesn't help to create quality content that follows the policies of your company: how you should actually ensure an infra with tags that are consistent, to avoid mistakes in the future.\"}),/*#__PURE__*/e(\"p\",{children:\"For instance, with those hard-coded values you're opening yourself to future outages, with hard-coded values and dependencies. 
And so some context is still missing in terms of the quality of the content that you're generating.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** And I guess that's where you as a user need to provide the right levels of information to the LLM, not just to make sure it provides something accurate, but actually relevant to what you're trying to build.\"}),/*#__PURE__*/e(\"p\",{children:\"And I think that's probably where, I would guess, that interaction with the LLM as a chat would likely work as well. Tell us a little bit about something called Synthesis AI. That's something that I know that you're very interested in using alongside generative AI.\"}),/*#__PURE__*/e(\"p\",{children:\"**Roxane Fischer:** Yes. So, super interesting. And just to make the difference and the split between the two: pretty much the same technology is used behind them, but [00:07:00] two words to actually point out two ways of using those LLMs. You can describe Synthesis AI as how you actually synthesize information.\"}),/*#__PURE__*/e(\"p\",{children:\"So as inputs to your LLM, you're going to take logs and also metadata, to actually understand something and give some insights. I want to do a faster root cause analysis, an automatic root cause analysis of an issue. And so this Synthesis will actually help me to go through thousands and hundreds of logs. Whereas for generative AI, you also have an input, but rather than finding some insight, you want to create something.\"}),/*#__PURE__*/e(\"p\",{children:\"You want to create some code. You are going to ask in your prompt: create a special configuration for me. The issue is that with generative AI, the world is way more open. You can create anything, but because you can create anything, you also need to be super precise in your instructions. Garbage in, garbage out.\"}),/*#__PURE__*/e(\"p\",{children:\"What do you put into your prompt? 
What is the level of detail that you give as context? [00:08:00] It's all about context. And so you're opening yourself to way more mistakes, or actually not the right guidelines. Synthesis AI, on the contrary, is something that we believe is more mature, in the way that you're taking a lot of information, a huge amount of data.\"}),/*#__PURE__*/e(\"p\",{children:\"And you want to find the patterns: where can I find this needle in the haystack? And this is mature because it's the same technology, LLMs, that are going to actually find or create some insights, those patterns, but Synthesis is way more framed. You are going to find information which is already somewhere.\"}),/*#__PURE__*/e(\"p\",{children:\"Whereas generative still lacks some context, the context we've been talking about.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** Yeah, just to reflect that back: you use the Synthesis AI to effectively take in huge amounts of data and boil that down almost to insights, and then pass those insights over to the generative AI to effectively create something based on those insights. And that presumably avoids the problem of [00:09:00] then passing huge amounts of context to the LLM, where it may or may not use that context wisely, given, we know, LLMs aren't great with large amounts of context. How does the Synthesis AI deal with large amounts of context?\"}),/*#__PURE__*/e(\"p\",{children:\"**Roxane Fischer:** The way you're going to do it is that, as you said, you have different flows. You can use Synthesis AI, find insights, and then take them as input to generate some code. Or you can go the other way, where the window of context can be the output of Synthesis AI or something else.\"}),/*#__PURE__*/e(\"p\",{children:\"The way it works, let's say Synthesis is just a word that I use to make the distinction. It was actually referred to in different articles. 
But, to say that you can consider two different pillars for using those LLMs. Synthesis would be used exactly the same way as generative, using for instance GPT, but instead of using a prompt with a guideline to create some code, your prompt would be to say: I'm going to take as input all my logs. Now search [00:10:00] inside them, find me the different correlations, connections, potential issues about one specific configuration among all of them. One thing which is interesting about that is how you can link it. And actually it's the same principle as RAG, retrieval-augmented generation, which is used in AI to leverage content information and then find some information in this content.\"}),/*#__PURE__*/e(\"p\",{children:\"In the same way that I'm speaking about Synthesis AI integrating some logs and finding some patterns, one of the technologies that a lot of people use nowadays to answer complex questions over a lot of input data is, before actually asking those questions, to do an entire processing of this amount of data.\"}),/*#__PURE__*/e(\"p\",{children:\"So it can be logs, it can be something else, it can be documents, anything that you are going to encode. You are [00:11:00] going to encompass information into latent spaces, which are used in AI to encode information, and then query it. And so this entire process of RAG is going to be done after doing this processing, to create, faster, from a huge amount of information.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** And with that large amount of information, we've talked a few times in the past about determinism, and non-deterministic output from the LLMs. In terms of being able to have that deterministic mapping, I guess, to the RAG, talk us through what's available there for us.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Roxane Fischer:** Yeah. Determinism is an interesting topic. 
Because Gen AI is probabilistic. When you call an LLM, it's going to give you an answer, but with a probability of the answer. So you're never 100 percent sure that it's going to be the same answer, by nature of those models. A deterministic algorithm, on the contrary, will [00:12:00] always give you the same answer if you give the same input. You need both to actually generate quality content, depending on the use case, of course. So imagine that you have an infrastructure with thousands of cloud resources, so many configurations.\"}),/*#__PURE__*/e(\"p\",{children:\"Having a deterministic context that you actually know about is as important as also having this LLM, Large Language Model, which will give you some insight about it or generate some code about it. What's interesting about your infrastructure, depending on what you do, is that your cloud resources, Kubernetes, etc., at the end, it's going to be a graph.\"}),/*#__PURE__*/e(\"p\",{children:\"It's the way that resources are connected. You have a VPC with a subnet within it, and you have IAMs and everything. All those resources are interconnected, and you can actually represent them as a graph of interconnected nodes. If I take back the example about this VPC peering, the way you understand that everything is [00:13:00] related, you need to represent it as a graph.\"}),/*#__PURE__*/e(\"p\",{children:\"And if you want to generate some content or get insight about it, so imagine I want to generate the best-practice code for this VPC peering with no hard-coded values, I need to have the context of where the other resources that I'm going to refer to, those dependencies, reside in this graph, right?\"}),/*#__PURE__*/e(\"p\",{children:\"And so I need to have this context, this deterministic context, about my own infra or my own content, my own documents, anything, to then generate quality content or get the right insight about it.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"Yep. Yep. 
Oh, very interesting. Okay. So it's almost like the determinism is built in there.\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** I guess you're adding the outcomes you would expect, almost, and having AI pick a path effectively to provide that determinism. Is that fair?\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Roxane Fischer:** Yes. Because otherwise, it's all about context. If you don't give this context, it can do anything.\"}),/*#__PURE__*/e(\"p\",{children:\"It's so powerful [00:14:00] that it can generate anything. It can create some Python code, infrastructure as code, documents, all within the same model. You can have more specialized, smaller models, but still super powerful. If you want to have a precise answer to complex queries, you need to give precise context.\"}),/*#__PURE__*/e(\"p\",{children:\"And so this context, either you put it in your prompt, I want this and that, but then you need to actually rephrase everything, which would be the equivalent of coding it yourself, or you need to plug in as inputs this deterministic content, your own knowledge, like your enterprise knowledge, to actually be able to create this very precise, quality content.\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** Yeah, really interesting. And I think we're flowing from the today of how people can use AI, to looking forward at how we expect people in the DevOps space to use it. Let's go through a couple of examples, a couple of flows that we would expect folks who are listening to very much resonate with. 
Why don't we start with the idea [00:15:00] that perhaps there's an issue or something like that in our deployment or our infrastructure somewhere. We would have a ton of data through our logs, and we'll be doing a lot of our root cause analysis trying to identify where the main issue came from. What tools would you say developers and ops folks should be looking at or considering to be able to analyze, or rather even use the AI to analyze, that kind of level of data that we know we can get from production systems?\"}),/*#__PURE__*/e(\"p\",{children:\"**Roxane Fischer:** It really depends on which type of teams you're speaking of, because at the end, the world is open. And so it depends on what you integrate as inputs. The more info you have, the better. You have many startups, and we're also building it, that are working on that.\"}),/*#__PURE__*/e(\"p\",{children:\"And I guess you are too, in terms of actually how you take so much input data to analyze it. And the more, the better: logs, metadata, clouds, APIs, Kubernetes, everything you have, even customer support. I have an application which is laggy [00:16:00] at my front end. How do I correlate the signal?\"}),/*#__PURE__*/e(\"p\",{children:\"So it's a text, a complaint, with some logs about latency. And how do I take all this heterogeneous data, use it as inputs to my models, and be able to find back these insights, this correlation? And I guess, for me, it's one of the biggest strengths of these models. They're amazing at actually integrating heterogeneous data.\"}),/*#__PURE__*/e(\"p\",{children:\"And this is why this Synthesis AI principle, for me, is really important, and there are different tools which are really good at doing this root cause analysis, especially good at actually analyzing logs. And so it depends on all the different information you can get. 
But it's, at the end, a lot of work in terms of infrastructure: what do you integrate in your pipeline?\"}),/*#__PURE__*/e(\"p\",{children:\"So integration, but once you have integrated everything, how are you able to dump everything, encode it, and actually be able to query it?\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** Any tools you'd call out there in terms of the root cause analysis or the [00:17:00] triage space, looking through log lines?\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Roxane Fischer:** I've heard recently about cleric.io, never used it. But I think it's good. And you have also all the big players, such as Datadog, that are now leveraging AI to read logs.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"Yeah.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"And you have also many startups doing that on Kubernetes.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** Yeah. Yeah. Cool. If we take an almost organizational question: when we think about how, over the years with DevOps, we're trying to get developers closer to their production environments and trying to get them more involved in that overall deployment, understanding the infrastructure and so forth.\"}),/*#__PURE__*/e(\"p\",{children:\"Do you feel like there's an issue in and around ownership, where the more we expect developers to use AI in creating their infrastructure and creating their deployment artifacts, they actually lose that ownership or lose that connection with the final production? Because in some sense they're using AI to [00:18:00] push production away, or push the ops, leave the ops to the AI. Is that a problem, or is that not necessarily one?\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Roxane Fischer:** Yeah, I find it super interesting, because I think it's a problem, but also it comes with the benefits of AI. At Anyshift, more than 50 percent of our code is generated by Claude. 
And it's about what you make out of it, what you use as a prompt, and you're able to actually code faster.\"}),/*#__PURE__*/e(\"p\",{children:\"But the thing is, because of that, because you're so much more efficient, how can you remember everything you have done? I don't have a clear answer to that. I'm myself trying to figure it out. I think we need more safety nets. We have, for instance, another example about generating infrastructure as code content: imagine your prompt is really bad, or your model is not super good.\"}),/*#__PURE__*/e(\"p\",{children:\"And you are going to create some open ports to everything, so, super bad practice. Because it's probabilistic and you're not 100 percent sure about the code you're going to [00:19:00] create, you still need those safety nets, Checkov, Snyk IaC, to actually be sure that you don't rely 100 percent on something which is still non-deterministic, and we can speak about potential security flaws that it can also create.\"}),/*#__PURE__*/e(\"p\",{children:\"So how to always have different ways of actually safeguarding yourself from attacks or from bad configuration. And in terms of ownership, how to actually enhance the ownership of the different people generating content. It's not obvious, because you're getting more efficient.\"}),/*#__PURE__*/e(\"p\",{children:\"You still need to actually own your code. And you need to remember what you have pushed and why you have done it. If you don't even remember which line of code you have generated, that's an issue.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** Yeah, no, absolutely. Let's now take a little look going forward. We obviously mentioned using RAG a lot more.\"}),/*#__PURE__*/e(\"p\",{children:\"I'm really interested there in terms of the determinism that you can pull out using that. 
What do you feel are some of the future trends in this [00:20:00] space, particularly around your work with Anyshift?\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Roxane Fischer:** We sincerely believe that the future of AI slash DevOps resides in two pillars.\"}),/*#__PURE__*/e(\"p\",{children:\"AI, but also the deterministic parts, the more infrastructure parts. And you need both to have this context plus the power of AI. And so, because your infrastructure is a graph, your cloud resources, everything that is interconnected, the definitions and the different providers, DNS, et cetera, that are linked to it.\"}),/*#__PURE__*/e(\"p\",{children:\"We believe that, to make the best of it, to generate content for DevOps teams, and also to remediate some issues, we need to create and work on both those pillars: this deterministic part, this graph, and then the query and the remediation with AI. In terms of the graph, and I believe this is where it's important to have some knowledge about your space, is how you're going to create it.\"}),/*#__PURE__*/e(\"p\",{children:\"The schema of your graph is important. [00:21:00] The different dependencies between your nodes and your edges to represent some cloud resources. It can be done in many ways: you can connect a VPC with a subnet with an edge. But you can also connect them because they have similar tags. For instance, data team, I don't know.\"}),/*#__PURE__*/e(\"p\",{children:\"So the schema of this graph, how you connect components and what you put inside it, is super important, because it's also how you're going to query it, how you're going to retrieve information. Because to actually do this remediation and code generation for SRE teams, you need to first create this context, this graph, this schema, and then do the query.\"}),/*#__PURE__*/e(\"p\",{children:\"And how do you query it? 
I want to get all the information on all the dependencies of this specific EC2 in the last 24 hours, and the team who owns it that has created a change. To be able to query and to have an answer to those specific, global or local queries, you need to have a schema that is actually adaptable to those [00:22:00] queries.\"}),/*#__PURE__*/e(\"p\",{children:\"How to actually go fetch information in the fastest and most accurate way.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** And so you pair those two then. Is that something you use as a schema to almost validate, or is that something that you then pass directly into the LLM to say, look, here's some information about the data or about the configurations?\"}),/*#__PURE__*/e(\"p\",{children:\"You use this to be able to query, use this to be able to understand. That was good.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Roxane Fischer:** That's very interesting, because the schema we define ourselves, because this is where we believe the value of having some knowledge about how infrastructure works is. So how do you connect things, and how are you going to query them afterward?\"}),/*#__PURE__*/e(\"p\",{children:\"One of the fun parts is that, to actually enhance development and make it faster, we use AI also internally to populate this graph. When the schema is in place, how do you actually leverage AI to parse some data and to populate your graph?\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** Awesome. So tell us a little bit about Anyshift and your mission, and the space that it's in.\"}),/*#__PURE__*/e(\"p\",{children:\"**Roxane Fischer:** [00:23:00] So as you understood, we want to give back some visibility to SRE teams, to answer key questions, simple questions that are still hard to answer nowadays. 
Such as: where is this resource defined in my code base, who owns it, what are its dependencies, between cloud resources, Kubernetes ones, DNS, data providers, et cetera.\"}),/*#__PURE__*/e(\"p\",{children:\"And the way we do it is that we are going to create this digital twin of your infrastructure. And from that, you will have something similar to a resource catalog, being able to actually query your graph to answer those questions. We begin with something which is a very shift-left approach, something very similar also to Snyk or other tools: how to actually prevent issues before they happen.\"}),/*#__PURE__*/e(\"p\",{children:\"Or at least give more information and visibility on your change before you deploy it. And so we are going to integrate within your pull request to actually, once you make a [00:24:00] change, query this graph, query this map of dependencies, and give you more context about what you have done. Okay, you have changed this module, it's going to affect other resources, other repositories, should you really do it?\"}),/*#__PURE__*/e(\"p\",{children:\"And so the AI part that we're going to leverage is actually on the educational content. So something similar to the Synthesis AI we were speaking about. We have all this information: you have your graph, you have all the dependencies, which are complex, and your teams are less aware of the entire context of your infrastructure.\"}),/*#__PURE__*/e(\"p\",{children:\"And so how do you use AI to actually explain what happened? And, based on this context that we provide, also do the explainability.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** How do people have a play with Anyshift? Is it available today? 
Can people get involved?\"}),/*#__PURE__*/e(\"p\",{children:\"**Roxane Fischer:** So the way we work is that now you can actually already test Anyshift on the platform if you subscribe, and what we are [00:25:00] building first would be this deterministic part of your graph, so giving all of these dependencies, the impact zone of your change.\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** I think you said 50 percent of the code in Anyshift is written by Claude. Is that right?\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"Yeah.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"Okay. Tell us a little bit about that then. Because, you know, it's interesting. Sometimes, I think Google or someone recently said 25 percent of the code that they write is written by AI. I'd love to hear a little bit more about the 50 percent of code that's written by AI. How are you using AI internally at Anyshift today? Is it mostly on the code creation, or a bit of everything through the pipeline?\"}),/*#__PURE__*/e(\"p\",{children:\"**Roxane Fischer:** Actually, both. First of all, we are in Go, and so all of our developers aren't necessarily experts in Go, and LLMs are amazing translators into any kind of language. So when you ask for something, you need, at the end, to know exactly what you want. And your LLM will create 80 percent of what you need, [00:26:00] and you will then need to be able to do the review, which actually accelerates the development a lot.\"}),/*#__PURE__*/e(\"p\",{children:\"But this is what all developers do nowadays. The other part, in a more conscious way, is to actually automatically use LLMs to parse some code and some content to then generate a graph.\"}),/*#__PURE__*/e(\"p\",{children:\"Which is something which is a little bit more specific. Yeah.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** Yeah. Interesting. How have you found using it? 
Do you have best practices, or what advice would you give to people who are trying to add AI capabilities into their workflows today?\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Roxane Fischer:** I think my conclusion would be that, it's like when you do a recurrence: you have test case number one, test case number two, and then you can do the N plus one, the for loop. You need to understand very precisely what you want to do.\"}),/*#__PURE__*/e(\"p\",{children:\"Because then you will understand the edge cases, and then you will be able to actually ask your model exactly what you need in the best way. But it's easy to go straight [00:27:00] away to your model, ask something, and think it's going to do magic. And sometimes it does, but if you want to do something at scale, be super aware first of the edge cases, and cases number one, number two.\"}),/*#__PURE__*/e(\"p\",{children:\"What you want, what's going to be the output, and then you understand the patterns. Let's go.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** Yeah, amazing. So still in a very attended fashion.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Roxane Fischer:** I guess, but everything is improving. So I guess potentially this comment will not be relevant in a few months. Yeah. Yeah. Yeah.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** Amazing. Roxane, thank you very much for the session.\"}),/*#__PURE__*/e(\"p\",{children:\"And I'm looking forward to the demo of Anyshift. So for those who are interested in that, please do check out the YouTube channel and you'll see the YouTube video of Anyshift in action as well. So thank you very much, Roxane, and thanks everyone for tuning in, and we'll see you again next time. Thank you.\"}),/*#__PURE__*/e(\"p\",{children:\"[00:28:00] Thanks for tuning in. 
Join us next time on the AI Native Dev brought to you by Tessl.\"})]});export const richText1=/*#__PURE__*/t(n.Fragment,{children:[/*#__PURE__*/e(\"h4\",{children:/*#__PURE__*/e(\"strong\",{children:\"Introduction\"})}),/*#__PURE__*/e(\"p\",{children:\"Welcome to this month's special Thanksgiving edition of the AI Native Dev podcast, brought to you by Tessl. Hosted by Simon Maple and Guy Podjarny, this episode delves into the heart of the AI and tech community, exploring themes of gratitude, innovation, and the evolving landscape of software development. Guy Podjarny, a renowned figure in the tech world, is noted for founding Snyk and his previous CTO role at Akamai's Web Experience division. His extensive expertise in web performance and security makes him a trusted voice in development and security convergence. This episode features insights from Mathias Biilmann, CEO of Netlify, DevOps pioneer Patrick Debois, Notion's Simon Last, and Eric from StackBlitz. Together, they tackle the pressing issues of AI's impact on the open web, the latest AI tools, and the transformative role of AI in software development.\"}),/*#__PURE__*/e(\"h4\",{children:/*#__PURE__*/e(\"strong\",{children:\"Thankfulness and Community Spirit\"})}),/*#__PURE__*/e(\"p\",{children:\"In the spirit of Thanksgiving, Simon and Guy express their gratitude for the vibrant tech community. Simon reflects, \\\"I'm always in awe of the technical community in which we work in,\\\" highlighting the collective intelligence that drives innovation. Guy shares his appreciation for the Tessl journey and the amazing people involved, emphasizing, \\\"It's amazing and it's a humbling and it's fun in the day to day.\\\" This section underscores the importance of collaboration and diversity in AI development, celebrating the unique contributions of each community member.\"}),/*#__PURE__*/e(\"p\",{children:\"The narrative unfolds with stories of collaboration where diverse minds converge to solve complex problems. 
This diversity not only fuels creativity but also fosters an environment where new ideas can flourish. Both hosts share anecdotes of how community-driven initiatives have led to breakthroughs in technology, reiterating the power of collective thought in advancing AI and software development.\"}),/*#__PURE__*/e(\"h4\",{children:/*#__PURE__*/e(\"strong\",{children:\"AI Native DevCon Highlights\"})}),/*#__PURE__*/e(\"p\",{children:\"The AI Native DevCon marked a significant milestone as the first developer conference dedicated to AI-native development. With over a thousand developers in attendance, the event showcased groundbreaking talks and sessions. Guy Podjarny's keynote on the evolution of software development with AI was a highlight, presenting new opportunities to rethink traditional methodologies. Simon commended the conference's success, noting, \\\"It was great to have that many people coming together, talking and discussing and listening to some amazing speakers.\\\"\"}),/*#__PURE__*/e(\"p\",{children:\"The conference was a melting pot of ideas where developers shared cutting-edge techniques in AI. From leveraging AI to enhance user experiences to integrating AI in development workflows, the sessions provided a rich tapestry of knowledge. Attendees were encouraged to engage with speakers, fostering an interactive environment that mirrored the collaborative spirit of the tech community. This section serves as a testament to the ever-evolving nature of AI and its profound impact on the development landscape.\"}),/*#__PURE__*/e(\"h4\",{children:/*#__PURE__*/e(\"strong\",{children:\"The Open Web and AI\"})}),/*#__PURE__*/e(\"p\",{children:'Mathias Biilmann raised a crucial question: Does AI threaten the open web? In his discussion, he highlighted the challenges and opportunities AI presents for web development, stressing the need to maintain an open ecosystem. 
Guy echoed these concerns, stating, \"When AI comes along, it makes it even messier,\" but also acknowledged the beauty in the web\\'s diversity and creativity. The conversation underscored the importance of balancing innovation with openness to ensure the web remains a thriving platform for all.'}),/*#__PURE__*/e(\"p\",{children:\"The dialogue delved into the intricacy of preserving an open web amidst the rise of AI-driven applications. Mathias pointed out the potential for AI to both disrupt and enhance the web, urging developers to consider the ethical implications of their innovations. This segment emphasizes the delicate balance between embracing technological advancements and safeguarding the open nature of the web, advocating for responsible AI integration that respects the web's foundational principles.\"}),/*#__PURE__*/e(\"h4\",{children:/*#__PURE__*/e(\"strong\",{children:\"Innovative AI Tools and Platforms\"})}),/*#__PURE__*/e(\"p\",{children:'Patrick Debois brought a fresh perspective with his unorthodox exploration of AI-assisted coding techniques. He shared anecdotes of experimenting with alternative input methods, such as gestures and voice commands, demonstrating the diverse possibilities AI tools offer. Simon remarked on the community\\'s role in embracing this diversity, saying, \"Everyone is in a different place and everyone prefers something that fits well for them.\" This section celebrates the variety of tools available, emphasizing that different solutions can coexist to meet varied developer needs.'}),/*#__PURE__*/e(\"p\",{children:\"Patrick's experiments highlight the versatility of AI as a tool for enhancing the development process. By exploring unconventional methods of interaction, he showcased the adaptability of AI in catering to individual preferences and workflows. 
The discussion highlighted the community's openness to innovation, encouraging developers to explore and integrate diverse tools that align with their unique needs and styles, fostering a culture of inclusivity and experimentation in AI development.\"}),/*#__PURE__*/e(\"h4\",{children:/*#__PURE__*/e(\"strong\",{children:\"AI as a User\"})}),/*#__PURE__*/e(\"p\",{children:\"The concept of AI as a user, discussed by Mathias Biilmann and Eric from StackBlitz, opens new avenues for understanding AI's interaction with web applications. Mathias noted, \\\"There's an AI training model platform system somewhere that as a user has chosen to take this application and deploy it.\\\" This section explores how AI acts as a consumer, making decisions and utilizing services, and what this means for developers in terms of adapting their offerings to cater to AI-driven interactions.\"}),/*#__PURE__*/e(\"p\",{children:\"This paradigm shift challenges developers to rethink how they design applications, considering AI not just as a tool but as an active participant in the ecosystem. It prompts a reevaluation of user interfaces and service architectures to accommodate AI-driven interactions. This discussion encourages developers to innovate with AI in mind, creating platforms that are not only user-friendly for humans but also optimized for AI engagement, paving the way for a new era of AI-inclusive development.\"}),/*#__PURE__*/e(\"h4\",{children:/*#__PURE__*/e(\"strong\",{children:\"Fine-Tuning LLMs: Insights from Notion\"})}),/*#__PURE__*/e(\"p\",{children:'Simon Last provided valuable insights into the practicality and challenges of fine-tuning language models. He cautioned against over-investment, explaining, \"Fine-tuning can be a negative indicator...it implies some cluelessness.\" Instead, leveraging out-of-the-box models offers more flexibility and adaptability. 
This discussion encourages developers to weigh the benefits and drawbacks of fine-tuning in their AI strategies.'}),/*#__PURE__*/e(\"p\",{children:\"The conversation explored the nuances of fine-tuning, highlighting the potential pitfalls and advantages. Simon advocated for a strategic approach, suggesting that developers should focus on understanding the core capabilities of pre-trained models before diving into customization. By doing so, they can harness the strengths of LLMs while avoiding unnecessary complexity, ultimately leading to more efficient and effective AI implementations.\"}),/*#__PURE__*/e(\"h4\",{children:/*#__PURE__*/e(\"strong\",{children:\"Context Windows and Attention\"})}),/*#__PURE__*/e(\"p\",{children:'Simon Last also addressed the limitations of large context windows due to limited attention spans in LLMs. He shared, \"Attention is still limited,\" emphasizing the need to manage constraints and instructions effectively. Practical tips for optimizing LLM interactions include prioritizing important instructions at the beginning of prompts to ensure they are given due attention.'}),/*#__PURE__*/e(\"p\",{children:\"This section delves into the intricacies of working with LLMs, offering practical advice for developers to optimize their interactions. Simon's insights into context management underscore the importance of strategic prompt design, encouraging developers to be deliberate in their instruction placement. By understanding the limitations of attention spans, developers can craft more effective prompts that maximize the potential of LLMs, paving the way for more accurate and reliable AI outputs.\"}),/*#__PURE__*/e(\"h4\",{children:/*#__PURE__*/e(\"strong\",{children:\"The Role of AI Engineers\"})}),/*#__PURE__*/e(\"p\",{children:'The evolving role of AI engineers was another key topic. 
Simon Last suggested that a deep machine learning background might not always be beneficial, stating, \"Sometimes you actually have dispositions that you need to unlearn.\" Instead, agility and iterative development are crucial for building AI-powered applications. This insight highlights the need for adaptability and a willingness to embrace new methodologies in the rapidly changing AI landscape.'}),/*#__PURE__*/e(\"p\",{children:\"The conversation emphasized the dynamic nature of AI engineering, advocating for a flexible mindset that prioritizes innovation over rigid adherence to traditional methodologies. Simon's perspective encourages aspiring AI engineers to cultivate a growth mindset, embracing new tools and techniques that align with the evolving demands of AI development. This section serves as a call to action for the tech community to foster a culture of continuous learning and adaptation, ensuring that AI engineers remain at the forefront of technological advancements.\"}),/*#__PURE__*/e(\"h4\",{children:/*#__PURE__*/e(\"strong\",{children:\"Summary\"})}),/*#__PURE__*/e(\"p\",{children:\"This episode of the AI Native Dev podcast offers a wealth of insights into the dynamic world of AI development. Key takeaways include the importance of community and diversity, the potential and challenges of AI tools, and the evolving roles within the tech ecosystem. Listeners are encouraged to explore additional resources on Tessl's YouTube channel, featuring talks from AI Native DevCon. As AI technologies continue to evolve, they present exciting opportunities for developers to innovate and create impactful solutions. 
This episode serves as a reminder of the transformative power of AI and the responsibility of developers to harness this technology ethically and creatively.\"})]});export const richText2=/*#__PURE__*/e(n.Fragment,{children:/*#__PURE__*/t(\"p\",{children:[\"[00:00:00] Introduction and Thanksgiving Greetings\",/*#__PURE__*/e(\"br\",{}),/*#__PURE__*/e(\"br\",{}),\"[00:01:00] Thankfulness and Community Spirit\",/*#__PURE__*/e(\"br\",{}),/*#__PURE__*/e(\"br\",{}),\"[00:03:00] AI Native DevCon Highlights\",/*#__PURE__*/e(\"br\",{}),/*#__PURE__*/e(\"br\",{}),\"[00:06:00] The Open Web and AI with Mathias Biilmann\",/*#__PURE__*/e(\"br\",{}),/*#__PURE__*/e(\"br\",{}),\"[00:09:00] Innovative AI Tools with Patrick Debois\",/*#__PURE__*/e(\"br\",{}),/*#__PURE__*/e(\"br\",{}),\"[00:14:00] AI as a User with Eric from StackBlitz\",/*#__PURE__*/e(\"br\",{}),/*#__PURE__*/e(\"br\",{}),\"[00:18:00] Fine-Tuning LLMs: Insights from Notion's Simon Last\",/*#__PURE__*/e(\"br\",{}),/*#__PURE__*/e(\"br\",{}),\"[00:23:00] Context Windows and Attention\",/*#__PURE__*/e(\"br\",{}),/*#__PURE__*/e(\"br\",{}),\"[00:28:00] The Role of AI Engineers\",/*#__PURE__*/e(\"br\",{}),/*#__PURE__*/e(\"br\",{}),\"[00:32:00] Summary and Closing Remarks\"]})});export const richText3=/*#__PURE__*/t(n.Fragment,{children:[/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** [00:00:00] You're listening to the AI Native Dev brought to you by Tessl\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"Hello everyone and a big welcome again to another live monthly roundup. My name is Simon Maple, joining me today.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Guy Podjarny:** Guy Podjarny. Yeah, doing another live episode. We haven't learned our lesson yet. Why do we do it to ourselves, eh?\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** It's November 28th, which this year is Thanksgiving. 
First of all, a massive happy Thanksgiving to the folks in the US and those celebrating.\"}),/*#__PURE__*/e(\"p\",{children:\"**Guy Podjarny:** Yeah, this somewhat weird holiday, but happy Thanksgiving folks. Yeah.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** I suppose we should start off with maybe what we are thankful for.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Guy Podjarny:** Yeah, that's a good idea. I don't know, do you wanna kick off? What do you, what are you thankful for?\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** So actually do you know what, we'll talk a little bit about AI Native DevCon later.\"}),/*#__PURE__*/e(\"p\",{children:\"One of the things [00:01:00] I'm always in awe of is the technical community in which we work in. And one of the things I love is when ideas come around, the community really comes together and thinks together as one brain and provides different angles and different discussions.\"}),/*#__PURE__*/e(\"p\",{children:\"And with the new community that we'll talk a little bit about later, it's been amazing to see so many people join that and contribute to that. So community, technical community and I guess community in general, but right now technical community, I'm super thankful for that.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Guy Podjarny:** Yeah, no, that's amazing. And that fully relates to that element. I guess on my end, beyond the 125 million in funding, thank you. Thank you. Yeah. Dear investors. Beyond that, probably what really jumps to mind is a bit more serious, which is the ceasefire that we have between Israel and Hezbollah, which hopefully will stop a whole bunch of suffering that has been happening for a while, in the midst of an otherwise pretty dreary situation over there.\"}),/*#__PURE__*/e(\"p\",{children:\"It's nice to [00:02:00] have a moment of something that feels like a step in the right direction. So quite thankful for that. 
But yeah, I think generally thankful for the Tessl journey. It's so much fun to be building and to have amazing people, even including present company, in it. I was gesturing at you there, clearly wasn't thinking fast enough, but I really am thankful for the amazing people that joined the journey, that believe in the journey, and are just kind of building together on it.\"}),/*#__PURE__*/e(\"p\",{children:\"It's amazing and it's a humbling and it's fun in the day to day.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** Amazing.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Guy Podjarny:** Cool.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** Thank you. Thank you for that. So this month on the podcast, we've been busy again. We've had Mathias Biilmann, who's been talking about one of the big questions, which we'll dive into a little bit: does AI threaten the open web?\"}),/*#__PURE__*/e(\"p\",{children:\"And some really interesting discussions around the challenges and opportunities that AI can provide us with the open web, from Matt Biilmann, Netlify's CEO and co-founder, of course. Patrick Debois had a second session, but this time not talking about DevOps, rather talking about it from a development angle, so talking about how he's been playing with various coding tools but in a slightly unorthodox [00:03:00] way.\"}),/*#__PURE__*/e(\"p\",{children:\"So rather than just using them as they're supposed to be used, seeing how we can take them to the next level. So that was pretty cool.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Guy Podjarny:** Always unorthodox thinking from Patrick.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** Yeah. You can always expect Patrick to think one level different, which is amazing. It opens up so many opportunities.\"}),/*#__PURE__*/e(\"p\",{children:\"And then we of course had a session in the studio around the corner, which was entirely about an amazing funding round that we had. 
And how that enables us to build an AI native developer platform. And then you spoke with Simon Last, which has been confusing for the last two weeks when we've had so many Simons, like on the same email thread, like\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Guy Podjarny:** Simon, are you replying to that email?\"}),/*#__PURE__*/e(\"p\",{children:\"Oh, hold on. Simon Maple is not on that thread. It's only Simon Last.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** I immediately thought, oh, this is something I've dropped, as so often, but every now and then it is nice.\"}),/*#__PURE__*/e(\"p\",{children:\"**Guy Podjarny:** An amazing guest, though. There's so much sort of real world insight around building Notion AI; really appreciated him taking the time and sharing some of those learnings.\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** Yeah. Let's talk about some news. Before we jump into a deeper dive into those sessions, the first was one that you added, which was David Singleton's new AI agent operating system, slash dev slash agent. [00:04:00] Sure. Yeah.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Guy Podjarny:** So it's been recently announced. I'd heard it through the VC grapevine before, which is that David Singleton and a bunch of other sort of amazing people, many of whom are original Stripe leaders, have announced a new company.\"}),/*#__PURE__*/e(\"p\",{children:\"I think they call it slash dev slash agent, which is an allusion to slash dev slash payment, which I think was Stripe's original name. So it's a nice homage to that. The website is the S D S A. Like everything, it's a little bit nebulous in precisely what it is that they're doing. 
I don't know, like I'm familiar with at least one more company that is saying something like this, or that is talking about a future and reimagining UIs and the like.\"}),/*#__PURE__*/e(\"p\",{children:\"I think it's very promising because of the amazing team that is built around it. And in another conversation that I've had, a group conversation with David, he talks about how it's a combination of a new UI paradigm and system level services to orchestrate across the agents.\"}),/*#__PURE__*/e(\"p\",{children:\"It's the right developer SDK and tool chain, and it's a two sided marketplace around how people create agents and consume them. So it sounds promising and interesting. I think [00:05:00] what's interesting to me from a news perspective, given there are probably 1,000 other AI companies that were founded and launched during November as well, is the caliber of the team, which is quite impressive. And I guess I'm on the lookout for people that try to imagine something that is really further out. And so they say agent app creation, agent UI, like the way humans interact with them. What are the privacy models for that?\"}),/*#__PURE__*/e(\"p\",{children:\"All of those are just like substantially different ways to address it. And it seems like they are anchored in the future. So I found that interesting and I look forward to tracking what it is that they build and how they advance. And a lot of that team has also built a lot in Android.\"}),/*#__PURE__*/e(\"p\",{children:\"And so they've had an opportunity to rethink operating systems for a while. Yeah.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** Interesting. Interesting. Second set of news that's here is the new Google Gemini XP model.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Guy Podjarny:** Yep. Yep. The new Gemini there. I think it's interesting. It's a new model from Gemini. 
I'll admit that, with all the funding round and all that, I haven't had a chance to play with it personally as much.\"}),/*#__PURE__*/e(\"p\",{children:\"But it made headlines. Mostly because in a bunch of the benchmarks, it won first place, made the top mark, oftentimes by a [00:06:00] decent margin, and has done so even compared to o1.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** Yeah.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Guy Podjarny:** And it's interesting, if you're not familiar with o1, o1 is very much the reasoning model and it's really capable, but it's also slow and it's quite expensive and it thinks through things in some sort of mysterious behind the scenes way. It feels like an interesting, agentic model, and Gemini comes along in what feels like a much more typical LLM fashion, if that can be said for something.\"}),/*#__PURE__*/e(\"p\",{children:\"In a legacy, traditional fashion, it just gives answers. And those answers actually compare to that reasoning model. So I think it's interesting to think about that distinction of reasoning versus just training and having the neural networks. And I read some place someone talking about intuition, and it's almost like reasoning versus intuition is when you do have the answer immediately, but you don't necessarily know why; it's your subconscious kind of in action trying to figure out the jumps in your neural networks, maybe, that gets you to the result.\"}),/*#__PURE__*/e(\"p\",{children:\"And so someone, it wasn't a term that I've heard the Gemini team use. But [00:07:00] someone talked about reasoning versus intuition, which I thought was like an interesting maybe analogy, maybe a little bit unmorphosizing. I always come back word thinking about a lens as a, you took a risk.\"}),/*#__PURE__*/e(\"p\",{children:\"You took a risk. It did pan out. Didn't pan out. Yeah. I don't know. 
You have a history with mispronunciations, Simon.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** I've been known to mispronunciate my words quite a lot. Yeah.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Guy Podjarny:** It's interesting to see with Gemini that the battle continues. You see, really, at the end of the day, mostly Google, Anthropic and OpenAI compete for the top spots, with Llama always staying close to the top. But mostly I feel that getting results that are similar to the reasoning model without a reasoning delay is something that might shake things up.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** Yeah. Interesting. Tessl also had a couple of, not necessarily announcements, but we've been busy in the news as well.\"}),/*#__PURE__*/e(\"p\",{children:\"Obviously the funding announcement, which we've already had a full podcast session on, so we won't go too deep into it here, but just the highlight,\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Guy Podjarny:** we got some money. We're being long-term.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** There we go. There we go. And of course with that, we have the shiny new Tessl website. So feel free to have a look at that.\"}),/*#__PURE__*/e(\"p\",{children:\"**Guy Podjarny:** [00:08:00] Shoutout to Rachel on the team, who with a bunch of support has built this great new website in a relatively short timeline.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"Yeah, absolutely. Yeah. Last week as well, 21st of November, we had AI Native DevCon, which the amazing Sammy Hepburn helped us with, again on another short timeline, but an AI native developer conference. I think this is the first dedicated AI native developer conference. So it's great to have that. And there were over a thousand developers coming together, talking and discussing and listening to some amazing speakers. There's this one guy called Guypo who kicked it off. 
I don't know if you know him. But talking about it, I'm sure people are sick of listening to it. I know. I have sad news for you. It might happen a couple more times.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** If you haven't listened to that, I very much recommend it. I think it's just the way you showed that progression, and we'll talk about it a little bit later when we think about tools, but the progression or the journey in terms of the challenges that we have with traditional software development and how AI gives us that opportunity to do things differently and to actually help, or rather change, the way we build software.\"}),/*#__PURE__*/e(\"p\",{children:\"So [00:09:00] definitely have a look at that, because I think it gives that higher level change of the way of working based on the new norm that we have now, with AI being able to do an amount of stuff under the covers. So I think that was very interesting.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Guy Podjarny:** It was interesting. I find oftentimes when you have to give these talks, you have to really distill a bunch of the messages into something that is more manageable, that people can actually understand and find interesting.\"}),/*#__PURE__*/e(\"p\",{children:\"And so I think AI native development is elaborate, and you've probably heard me describe it in various ways already. But it really is about how do we narrow it down and how do we compare it to the history of software development, simplifying those terms. So it was a lot of work to put together a keynote, and I hope folks have enjoyed it or found it interesting.\"}),/*#__PURE__*/e(\"p\",{children:\"But it is also like just an important step in the journey of just forcing it to be, hey, keep it simple. What are the problems with code centric development today? How does it map to some problems that were solved in the past? 
And where is it going from here?\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** So I always think, now stepping away from the topic, that generally when we have to [00:10:00] present something or write something down about our way of thinking, it really does allow us to step back and think about things in a clearer way, to be able to more concisely write something down or present something.\"}),/*#__PURE__*/e(\"p\",{children:\"So it's a great thing to do generally.\"}),/*#__PURE__*/e(\"p\",{children:\"**Guy Podjarny:** Distill your learnings, really force it. And I think sometimes people refer to it as dumbing it down, and I really don't like that terminology, because it's not dumbing it down. It's distilling it.\"}),/*#__PURE__*/e(\"p\",{children:\"It's saying, what is the core of it? And then if you distill it and you build these core principles, you can think bigger, because now you have these kind of good foundations. They're well defined. You don't have to unravel them every time and you can build on top of it. So\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** there are so many things like that generally. I'm doing something, I'm coding or something.\"}),/*#__PURE__*/e(\"p\",{children:\"And I'm like, yeah, I know how to do this. And then I realize I have to present on it. And it's, oh, actually there's a ton of things I haven't actually looked at, I haven't thought through.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"Yeah. 
Just let's go into the deep dive.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Guy Podjarny:** Yeah, maybe actually before we dive into the topics: I guess over time you get a little bit comfortable with the podcast, yeah, and this sort of new muscle and these new conversations. It's always fun to talk to smart people, but it's nice to also loosen up a little bit and talk about things [00:11:00] that are not there.\"}),/*#__PURE__*/e(\"p\",{children:\"What was your sort of favorite piece that was maybe not as much the substance of the episodes this month?\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** So there was one thing in Patrick's session that I really enjoyed, which was how you can effectively use AI-assisted tooling through different UIs.\"}),/*#__PURE__*/e(\"p\",{children:\"So he was looking at Cursor and he'd done a number of things, including gestures and including voice commands and things like that, to put in a greater level of input without having to type at a keyboard. And the reason he went to gestures over voice was because he was trying to code using voice commands as well as typing at the keyboard.\"}),/*#__PURE__*/e(\"p\",{children:\"His wife comes in and starts talking to him. And it's when real life really kicks in and you think, oh, yeah, that's why we use keyboards. Cause there's no outside interference with keyboards. His wife coming in and talking to him started messing up his coding and interfering with what was actually being coded.\"}),/*#__PURE__*/e(\"p\",{children:\"So he then goes straight to gestures, right? He can put one finger up or two fingers up as he's [00:12:00] typing something to be able to do it in a particular mode. 
So that was a nice kind of like life realization moment of, okay, this extra input is challenging.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Guy Podjarny:** That's hilarious.\"}),/*#__PURE__*/e(\"p\",{children:\"We actually try at home sometimes at dinner to do ChatGPT voice mode. And when we have a question, it's something like, hey, instead of asking Alexa, which we have in the kitchen, let's open up ChatGPT voice mode. And we tried asking it something that we've just been debating.\"}),/*#__PURE__*/e(\"p\",{children:\"And so everybody has an opinion on it, and it totally cannot handle the multiple speakers at the same time. And so yeah, there's definitely some evolution needed, a little bit of having these things that have voice recognition know how to identify when it is that they're getting instructions and when it is casual conversation.\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** I thought you were going to say Alexa and ChatGPT were starting to have a dialogue. I haven't tried that yet, yeah. How about yours? What was yours then?\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Guy Podjarny:** I probably most enjoyed the music analogy that I've had with Matt Biilmann. He was a music journalist, I think, before joining in. So it was just a fun sort of start to it.\"}),/*#__PURE__*/e(\"p\",{children:\"It's, hey, tell us a little bit about what that means. And we got into talking about music and AI. And it ended up being this theme throughout the episode of comparing [00:13:00] software development and AI to music and AI, which I really liked, because I always think about software development as a creative role. You're creating; it's somewhere between engineering, we call it software engineering, but I really think of it a lot as creative, and as something where you take a blank slate, you take something that doesn't exist, and you just modify it per your imagination with these virtual tools that we have. 
So I really liked that. It wasn't intentional, but I was pleased with how the analogies worked, because oftentimes I think of them as something that is inspiring one another. I think about software creation over time and how it is affected by how music has been created, or art, or others, and vice versa. So I found that really fun. I also enjoyed the height difference between you and Patrick in the episode. Just pointing it out, it made it engaging.\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** There we go. Then it achieved something. That's amazing. Yeah. Yeah. Awesome. So yeah, what did we start off with then? One thing that I've been thinking about recently, particularly around the conference, actually, is that there are a number of today's tools that people will give [00:14:00] practical advice over. Patrick was talking about a lot of them.\"}),/*#__PURE__*/e(\"p\",{children:\"We had sessions from both of you. We had Vassell speaking as well. There are a number of different tools that are doing some amazing stuff. And I hear too much in the community these days, or in the news, oh, this thing's going to completely take over Cursor, or Cursor is going to win, or Bolt's going to win.\"}),/*#__PURE__*/e(\"p\",{children:\"It's the new thing and stuff like that. And one of the things that I actually love about the community, getting back to what I'm thankful for, one of the things I love about the community is the diversity in that. And everyone is in a different place and everyone prefers something that fits well for them.\"}),/*#__PURE__*/e(\"p\",{children:\"So I think when you look at all the tools that are out there, I don't like the talk of is Java dead, that type of talk. And when we think about the journey of AI native, where we are today in the more AI assisted land. 
It's interesting for me to think, okay, where are we today?\"}),/*#__PURE__*/e(\"p\",{children:\"And what's going to actually get the majority of that usage today versus in one year, two years, for example, thinking about, are people going to lean into that AI native more and [00:15:00] so forth. And it makes me think that actually, we have spec driven, we have code driven, we have prompt driven coding, and actually all of these are going to be used.\"}),/*#__PURE__*/e(\"p\",{children:\"There is no black and white kind of one or the other. All of these are going to be used, and it's the case of people being on that journey, and everyone's going to be on that journey in different places. I like to think of this as similar to DevOps really, in that people are gonna be in different stages and all of these tools could actually coexist and the community can be in different tools at the same time. There is no right one for everyone; everyone will have a tool that's right for them.\"}),/*#__PURE__*/e(\"p\",{children:\"Yeah, what do you think in terms of the journey? With these tools that are taking very different approaches, do you think there's a place for all of them to exist in? Or yeah,\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Guy Podjarny:** I very much hope so. I really love the diversity as well. And I like the composability as well. Many of these are things that you can pull together. And I think that's software development today. And so some of it is about fit for purpose. It's quite likely that a different stack will be needed to build a pacemaker than the latest and hottest mobile game. It [00:16:00] makes sense that there might be different approaches but they would probably overlap.\"}),/*#__PURE__*/e(\"p\",{children:\"There will be pieces that you want to use in both. And you want something that is composable to say I need this piece from here. I need this level of adaptability versus predictability. 
I need this level of sort of whatever speed versus cost. I'm a little bit more cutting edge and I want to try something that's brand new versus not.\"}),/*#__PURE__*/e(\"p\",{children:\"Maybe it's clients and hardware requirements, like all these different things alongside just preferences and ages, and you build an application with a stack that comes out today and maybe in three years' time, or in the case of JavaScript frameworks, five minutes' time, there might be something entirely new that is out there.\"}),/*#__PURE__*/e(\"p\",{children:\"And you've already built your application on the previous stack, so you need bigger gaps to be able to look to it. And so I think that's a part of the beauty. It's messy but it's a part of the beauty when we navigate the unknown. And this touches a bit on some of the stuff that I've been discussing with Matt Biilmann, which is really the kind of the dissonance or the difference between the closed ecosystem and open ecosystem. If you don't mind me peeling a little bit into that, you think today about the world and you [00:17:00] can look at the web versus mobile, and the web is this open ecosystem and it's a mess. It's an entire mess. You have all these JavaScript frameworks. You have all these languages. You have really a wide variety. Technology never dies. You need to support all these ancient technologies for a good while. Like, how long did it take us to get rid of IE6?\"}),/*#__PURE__*/e(\"p\",{children:\"But it's also the source of its creativity. You tried it. It brought to life SaaS and games and it doesn't have a moderator. There's nothing there that says you're not allowed to build an application in this fashion or create that user experience. 
And at the end of the day, different worlds, different users want different things from it and different innovators and builders can own their own niches, both in terms of what is being built.\"}),/*#__PURE__*/e(\"p\",{children:\"What is the output for the user? What is the process of creation? And that's amazing, right? That's beautiful. If you think about mobile, we have these much more closed, controlled, opinionated environments. We really mostly have iPhone and Android. These are two ecosystems. They're substantial.\"}),/*#__PURE__*/e(\"p\",{children:\"They're both very powerful. You can build in them. Android is a bit more open than iPhone, but not all that open. And they act as [00:18:00] gatekeepers. If you want to know what AI would be like in mobile phones, it depends on what iPhone and Android decide, right? If you want a different type of sensor and interaction, the barrier of entry is so high; these new devices that try to come to market on it generally don't really have much of a shot, again, maybe at best in Android because there's a variety of Android based devices. And I think software development today is more like the web. The web is part of the same thing, and I find that beautiful. I find that messy. And when AI comes along, it makes it even messier. And it allows us to experiment and try more things. But those things are, it's harder indeed. You might have fashions and hey, this is like the hottest thing ever, because there's more free competition, free room for creation and multiple winners can come out.\"}),/*#__PURE__*/e(\"p\",{children:\"Yeah. And if you contrast that to closed systems, and maybe it's mobile, maybe it's some sort of more closed development environments that exist today, I think the concern is that with AI, the other way to embrace AI is to actually embrace it into places in which they're more [00:19:00] closed ecosystems.\"}),/*#__PURE__*/e(\"p\",{children:\"They're more opinionated. They now understand everything. 
There's like a lot of power to be had in understanding your code base, understanding all the business needs and information around it, controlling all of the pipeline and being opinionated about how software is going to be built and deployed and operated and sold.\"}),/*#__PURE__*/e(\"p\",{children:\"And so within those environments, it's easier to absorb the chaos that is AI. It's easier to think, okay, it's chaotic in how it creates, but it creates things into a more confined space. And I see the appeal and I think we should enjoy the appeal of having AI boost these kind of more opinionated systems.\"}),/*#__PURE__*/e(\"p\",{children:\"But what I worry is that almost if they get too strong, if we lean too much into it, we'll find ourselves with two or three platforms that have almost like the sort of superpower of their ability to provide that breadth of capability, and software development will be dependent on whatever it is; innovation in software development and development tools, changing it, will depend on really how they moderate it. And I contrast that to the web, in [00:20:00] which it's more about composable pieces that you pull together and it's a bit more chaotic and it's a bit more effort, because extensibility and composability come at the expense of simplicity. And so it's a bit more chaotic, but it allows more players to play and kind of form more pictures that work.\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** One of the questions that kind of came out of the Matt Biilmann session then was, is the open web threatened by AI?\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Guy Podjarny:** Yeah, I think that's the sort of a version of all of this, and Matt was making good points to say he's concerned about that specifically for the web.\"}),/*#__PURE__*/e(\"p\",{children:\"Yeah, no, I think it's hard to detach that from Matt's specific kind of a commercial reality. 
Netlify is very much a composable web type platform and it allows for connecting all sorts of things. And if you contrast it to maybe something like a Vercel or, I don't know, maybe like a Shopify, then those environments are somewhat competitive.\"}),/*#__PURE__*/e(\"p\",{children:\"They're not entirely these things composed, right? They can collaborate, but oftentimes those are more closed environments. They allow easier ways to [00:21:00] create a subset of the web's applications. And so he was concerned that just as they get almost so good at it, that we lean into these benevolent dictatorships, that allow us to tap into it, but we lose the ability to create the different components.\"}),/*#__PURE__*/e(\"p\",{children:\"So I don't know, it's like a slightly nebulous concern right now. And I oftentimes feel, I don't know, am I just a doomsayer by saying this? And I don't know if there are very immediate concrete things. I think when I think about what can Tessl do to help in this context, which is the area where maybe there's some kind of a control that we can apply, is, I think there's an advantage to having intermediate representations of the decision.\"}),/*#__PURE__*/e(\"p\",{children:\"So the more the way you interact with the AI is, hey, this brilliant alien mind, can I tell you something? And then you will just get stuff done. And as long as the result is correct, I have no need to engage with your interim decisions. Then the more dependent you are on it, right? Like at this point you flip the light switch and a light turns on and you really have no [00:22:00] idea what happened in the middle.\"}),/*#__PURE__*/e(\"p\",{children:\"The more we require or invest in, from a community perspective, having explainability and having interim artifacts that are standardized that people can work from and explore, then the less locked in you are and the more collaborative 
the ecosystem can be, because people can pick things off from different places.\"}),/*#__PURE__*/e(\"p\",{children:\"They can optimize an interim artifact. So I don't know, there's a chance of that sort of old school software development thinking on it. And I guess I'm thinking out loud about it and expressing a concern. I don't know that this is a kind of a doomed path that we're certainly on.\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** Yeah. Yeah. Very interesting, that opinion. Let's talk a little bit about, in fact, so this came up in the session with Simon and Matt, which is talking about AI as a user. So now, rather than using AI as a tool, AI actually almost effectively consuming various parts of the web or other parts of applications and so forth.\"}),/*#__PURE__*/e(\"p\",{children:\"It was also mentioned in the bolt.new session with Eric as well from [00:23:00] StackBlitz, AI as a user. And that's an interesting concept and one that I don't think too many people are catering for.\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Guy Podjarny:** Yeah. Yeah. And I think probably this might be one of the top insights for me from this month, because I think a month ago, I don't think I would have used that term.\"}),/*#__PURE__*/e(\"p\",{children:\"And now I think about it actually all this time; maybe there's a little bit of, once Matt opened my eyes to that notion in the prep call that we had even ahead of the podcast, then I now see it everywhere. And it's interesting, suddenly you think, okay, hold on.\"}),/*#__PURE__*/e(\"p\",{children:\"When you even go to your code assistant, you hit tab and it introduces an open source library for you. How did it choose which one to use, right? Or in Bolt.new, it creates an application. I'm sure GitHub Spark and all the others do the same. And it chose to deploy it. How did it choose where to deploy it?\"}),/*#__PURE__*/e(\"p\",{children:\"How did it know what is possible and what is not possible? 
And so all of those are actually versions of AI as a user. There is an AI training model platform system somewhere that, as a user, has chosen to take this application and deploy it in, say, Netlify and continue [00:24:00] on from there.\"}),/*#__PURE__*/e(\"p\",{children:\"There's an open source library that was mentioned in some places and documented in some fashion, and the code generator chose to use it over choosing something else. So if you're the provider of these things, how do you encourage that? Do you want to encourage that? Typically the answer is yes.\"}),/*#__PURE__*/e(\"p\",{children:\"Netlify did a really interesting thing, which is they have a feature that was unrelated to AI at all, which is you can deploy something to a website without authentication and then claim that website after you deployed it, as an ease-of-use, friction-reducing element. And that is actually one of the things that made AI deployers default to deploying on Netlify, because they don't need a user. They can just do that and then give it to their users, and if the user likes it, the human can claim the site. So I thought that was really interesting. And I think something that we will need to deal with more and more.\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** Yeah, and it brings me back to think a little bit about the Armon session as well, where he talked a little bit about how AI can fill in various parts of a request that you have. What should it fill in? [00:25:00] When should it make decisions? And when should it require the user to actually put that input?\"}),/*#__PURE__*/e(\"p\",{children:\"Because actually, it's meaningful to the user to actually make a requirement there or suggest something. And it's actually a little bit similar to that, but rather than thinking about how it's creating a file or an artifact, it's rather doing it one step higher. 
So maybe how it should deploy this or what decisions it should make about which vendors to use to deploy this, how it makes a decision as to what library I want to choose.\"}),/*#__PURE__*/e(\"p\",{children:\"Maybe I care about it. Maybe I don't care about it, but it's all about that user then providing what they need to do and then allowing the AI to fill in the gaps. It's a very similar kind of style problem, but one level higher,\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Guy Podjarny:** Yeah, I agree. And I think there's almost like two versions of it as well, like in most kind of aggregations of content.\"}),/*#__PURE__*/e(\"p\",{children:\"One is AI uses the same signals that humans do. And so the question is really what's in its training data. Is your documentation indexed by the LLM so it would know about it? Is your project, is the proof of usage or usefulness there? Does the [00:26:00] LLM have what it needs to have to be able to successfully use your platform?\"}),/*#__PURE__*/e(\"p\",{children:\"And what happens when you have new versions and all of that. So it's interesting. Those are similar to what you would want for human developers. Like with Google, you just want it indexed. And the other question is, are people going to start gaming it? If you're a new, whatever, like deployment platform and you want these apps to deploy with you, is there a way for you to provide information and slightly, like with SEO, over rotate them?\"}),/*#__PURE__*/e(\"p\",{children:\"To using them, would they create content farms, would the LLMs need to get smarter to avoid that? All the way to maybe even thinking about something intentional. Is there a nonhuman specification, a structure, like a different way to inform LLMs about, hey, if we're going to use.\"}),/*#__PURE__*/e(\"p\",{children:\"I was going to stick to Netlify over here, right? If you're going to use Netlify, here's a manual. So I think it's interesting. 
It even came up in the conversation with Notion when we talked about how you aggregate data. They aggregate data from different places, not just for code, right?\"}),/*#__PURE__*/e(\"p\",{children:\"Aggregate data from different places. They choose some authority. What [00:27:00] signals would content include in it over time, thinking about AI as a consumer? But it's also very applicable in AI dev.\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** Yeah, I really liked that session actually with Simon. This is the Notion co-founder. Yeah.\"}),/*#__PURE__*/e(\"p\",{children:\"Super smart guy. His jumper. It didn't look itchy, though. I must say, for those who haven't seen the video. I think\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Guy Podjarny:** He's a very technical guy. And I think he is much more about the substance. He's not a marketing personality.\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** See, that's what I see. I look at that and I think, oh, that looks, it looks comfy, but itchy.\"}),/*#__PURE__*/e(\"p\",{children:\"Yeah. Now here, there are a few really insightful things actually from that session. So let's talk about fine tuning, because this is something that people have invested heavily in, sometimes built their companies around. And it sounds like, potentially from what Simon was saying, maybe over invested in, because Simon was questioning the point of fine tuning an LLM based on the fact that actually, give it another six months or whatever, and a model's gonna come out that will actually debunk your fine tuning on a previous version or actually go [00:28:00] beyond what it can do today anyway.\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Guy Podjarny:** Yeah.\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** Tell us a little bit about fine tuning first. 
What's the difference between fine tuning?\"}),/*#__PURE__*/e(\"p\",{children:\"Yeah, and\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Guy Podjarny:** I think that it's a slightly, I guess, ill-defined term, or maybe it is defined well in terms of the science of it, but people use it for different purposes. So first of all, within systems, within models, there's a lot of conversations about pre-training and post-training.\"}),/*#__PURE__*/e(\"p\",{children:\"Pre-training is just gathering the data; just, there's lots to it, right? But a lot of it is the data that got gathered in and how do you understand it, convert it to all these billion parameters and assemble attention to it, et cetera. Inference is when the actual model runs to execute it. Post-training of the models is really trying to make sense of that knowledge and convert it into behavior, how you want it to behave. ChatGPT required post-training and over time reinforcement learning to be able to learn which answers are correct or not. So a lot of those things happen within the models, and post-training is a domain in which generally the perception is there's going to be more and more opportunity in it.\"}),/*#__PURE__*/e(\"p\",{children:\"In fact, many people feel that pre-training at the moment is actually a little bit less differentiated. [00:29:00] The extra promise over there is not that grand, maybe just like the scale of data and the prestige; the opportunities to innovate and to really do something substantial are in post-training.\"}),/*#__PURE__*/e(\"p\",{children:\"Then reasoning is like a whole new field. 
So this is the setup. I think within that domain, I don't think Simon was really challenging whether post-training within models will be valuable. Within that, some of these platforms, OpenAI and others, allow you to fine tune, which is like a version on top of that, that eventually also changes the weights in some fashion, but oftentimes what it translates to is you give it a bunch of correct and incorrect answers.\"}),/*#__PURE__*/e(\"p\",{children:\"Hey, here's an example of a good case. Here's an example of a bad answer. And you can give it a large volume of that. And when you do that, generally the computation, right, like the inference, when you then run a question that makes an LLM call, it would sometimes take longer and you will usually pay more, because you've asked the platform to do more for you.\"}),/*#__PURE__*/e(\"p\",{children:\"And I think what he's been describing is that in concept, it's very [00:30:00] promising. So if you manage to curate a bunch of good examples or bad examples to train the system, can you use that and then have the system magically produce the right answer? Calibrate the neurons so that it can produce the right answer.\"}),/*#__PURE__*/e(\"p\",{children:\"And he was pointing out that, one, it doesn't work as well, but also it's really hard to work with it, to debug. Like you came along, you gave it a bunch of information and it's now producing an answer. It produced the incorrect answer. What do you do now? How do you handle that?\"}),/*#__PURE__*/e(\"p\",{children:\"Or you add in another example to your test case and suddenly the answers are different. What do you do now? And so it's just impractical to use it. He hasn't found it as valuable. 
And he went as far as saying that he finds fine tuning to almost be a negative indicator when he talks to startups.\"}),/*#__PURE__*/e(\"p\",{children:\"If they say, hey, the reason we're going to succeed is because we're going to fine tune, it implies some cluelessness. I don't know that I have a very firm opinion on whether fine tuning is or isn't successful. I feel [00:31:00] like I'm deferring here to real world experience from him.\"}),/*#__PURE__*/e(\"p\",{children:\"I totally relate to the difficulty to debug, and maybe I should contrast the alternative to that, which is if you have a bunch of those good examples, you can put a few of them in a few-shot prompt or into the system prompt and things like that. And you can use others in an agentic process that looks at the result and says, is this correct?\"}),/*#__PURE__*/e(\"p\",{children:\"Is this not? Can I change it? And if you do those, if you contrast the debugging capability, there is much more that you can do here. You've identified a case, here you've identified this new problem case; you just have more control. You're relying less on magic, not that dissimilar to the composability comments that I made before.\"}),/*#__PURE__*/e(\"p\",{children:\"Yeah. So it's interesting. What was beautiful about the insights from Simon was that they're based in real world reality versus the promise, the sort of the marketing. And he's massively excited by AI. So don't take any of this as, he's skeptical about it.\"}),/*#__PURE__*/e(\"p\",{children:\"Yeah. It's quite the opposite. Yeah. But it's just pointing out what is real and what is not. And yeah, he was saying, if you don't fine tune, if you use the models as they are, it's easier for you to bounce around between models as they evolve.\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** Yeah. And of course, [00:32:00] Notion AI is probably one of the best examples out there about the power of depth of AI.\"}),/*#__PURE__*/e(\"p\",{children:\"Yeah. 
In a production-style application, yeah.\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Guy Podjarny:** There's a backstory to it as well that he didn't share on the podcast, which is that Notion made an attempt to really deeply fine tune a model. I think it was based on Anthropic, I'm not sure. And really invested a lot of time in it and really also didn't see sufficient results. And so they ended up using the core models. And the core models, as I understand it, this is a little bit of hearsay, take it with a grain of salt, but over time the models just caught up; at the same time that they were putting in that effort to fine tune, a new one would come along and it was just like 80 percent of that had already been achieved.\"}),/*#__PURE__*/e(\"p\",{children:\"And so I think part of it is also just based on that.\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** Yeah. And what's the opportunity cost? What could you have had all of this development team doing? Exactly. Yeah. That would continue.\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Guy Podjarny:** And if you contrast that to like a critic agent model in which, okay, maybe what happens now is you put that critic, it says, did you give the wrong answer?\"}),/*#__PURE__*/e(\"p\",{children:\"And maybe now you see that the wrong answer is not given. It gets the right answers. So you have visibility to that over time. Maybe you even remove that critic because you don't need that [00:33:00] anymore. Yeah. So it's interesting.\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** Couple of other very interesting things that Simon mentioned or myth busted about big context windows.\"}),/*#__PURE__*/e(\"p\",{children:\"Number one. And we actually had a really good, Guy Eisenkot had a really interesting session about context and ordering, another really good talk to go and watch. Yeah. But yeah, talking about attention, or limited attention, causing problems with that. 
And I know that was one that caught your eye.\"}),/*#__PURE__*/e(\"p\",{children:\"**Guy Podjarny:** Yeah. And that was, I think, a little bit straightforward. I just mostly loved his phrasing. So we talked about his system prompts and asked him, like, how big is your system prompt? It's pretty big. It's getting pretty big. It's big enough that they doubt the value of adding another line.\"}),/*#__PURE__*/e(\"p\",{children:\"Like they came across another problem. Can they now introduce that into the system prompt? It might not notice it. And I think in the first year plus of the conversation, all the conversation was around context windows. And his point, which I agree with, is we're past that; now the context windows are pretty big.\"}),/*#__PURE__*/e(\"p\",{children:\"But what that hides is the fact that attention is still limited. And so you can tell it a million things, but it can't [00:34:00] pay attention to all the million. And so you think you informed the LLM about whatever, some new instruction, or new edge case of if this, then that, or behave like this.\"}),/*#__PURE__*/e(\"p\",{children:\"And in practice, attention, simply put, is focusing; within everything you told it, what matters when a prompt comes in is actually a lot more limited. So that's the bottleneck today. So I would say that is very aligned with both what we've been experiencing, but also with what I've been hearing from others, and the best practice at the moment seems to be this.\"}),/*#__PURE__*/e(\"p\",{children:\"Keep the instructions at the top, keep them contained, and past a certain magical point, it really becomes just data. 
So things that are explicitly looked up, and so you can provide like limited text and then whatever big files and big volumes of data, maybe like code bases and things like that.\"}),/*#__PURE__*/e(\"p\",{children:\"And it can handle those reasonably well, but the instruction, the attention to instructions, is much more limited at the top, and you need to be careful in managing them. And be based on evaluations, which we know are finicky, to figure out [00:35:00] whether a new instruction did or didn't make an impact, versus faith in the attention.\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** That's really interesting actually, because it makes me think of a blog that is actually going to be going out hopefully next week. Macy, one of our community engineers, has been investigating and writing a little bit about how constraints in a prompt, depending on where you position them in a prompt, can have an impact.\"}),/*#__PURE__*/e(\"p\",{children:\"And it's amazing. I still think this, prompt engineering and things like that, and the amount of context and how that's delivered; prompts and context are two of the easiest things, the cheapest things that we can actually change. But it's still such an art that people aren't as familiar with.\"}),/*#__PURE__*/e(\"p\",{children:\"Yeah, I think there's a ton of tips like this and we should be writing more about this in the community, about how everyone can level up with just a few simple tricks or a few little things that people should be doing. And Macy's going to be talking a little bit about how you move that constraint to the top.\"}),/*#__PURE__*/e(\"p\",{children:\"It's actually going to respect that far more than if you add it to the end, because the engine's already done a lot of work before looking at that constraint and adding that constraint into\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Guy Podjarny:** a ton of tips like that.\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"Yeah, very good. 
And as Caleb Sima actually said in an [00:36:00] earlier episode, the data plane and the control plane here are mixed together.\"}),/*#__PURE__*/e(\"p\",{children:\"And so if this was an ordered list in a structured format, it would feel very natural, like hey, earlier instructions get more weight. But because the data and the instructions are all mixed together, it's so easy to mix them up and send some instructions and some data,\"}),/*#__PURE__*/e(\"p\",{children:\"and that's just not the right way to address an LLM. And the lines are blurry, and they also change from model to model and from model version to model version on it.\"}),/*#__PURE__*/e(\"p\",{children:\"So that's part of what makes building these applications challenging.\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** And the third one, before we almost wrap up there, the third one. His insight was a deep ML background could actually be a negative trait for an AI engineer. First question, Guy: what's an AI engineer?\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Guy Podjarny:** First question, I think maybe it's like my paraphrasing a bit, I'm not sure if Simon said the AI engineer title. This was very much in context on it.\"}),/*#__PURE__*/e(\"p\",{children:\"And [00:37:00] again, I loved the spicy thought when I asked him: hey, as you're hiring people into your sort of specialized team that is building AI products, which is what I called an AI engineer, people that know how to build on top of the LLMs, which we talked, by the way, a lot about how that is complicated.\"}),/*#__PURE__*/e(\"p\",{children:\"So if you hire into that team, how important do you think machine learning background is? And he said most of the team doesn't really have an ML background. Some people have, and the problem is that LLMs are actually reasonably different to traditional ML. 
And so if people come with enough ML background, pre-LLM ML background, which is, LLMs are like a couple of years old for practically everybody in the world, maybe there's a few that have another year of opportunity if they were inside, then sometimes you actually have dispositions that you need to unlearn. You have biases and methods, and sometimes you might be slow because you're not used to having this monstrous capability within your easy access, right at your fingertips.\"}),/*#__PURE__*/e(\"p\",{children:\"And as a result of that, it feels like it actually [00:38:00] holds them back. And eventually he described the best people he has as people that are just very good at iterating, very good at moving fast. Which I found once again to be a very DevOps-y type principle; it reminded me of what Armon from HashiCorp said in the previous episode, around the conclusion we got to there, which is if you're actually the best at DevOps, at continuous deployment, at instrumentation and all that, then you're actually able to best use that information to train your system and evolve it.\"}),/*#__PURE__*/e(\"p\",{children:\"So it does feel like this ability to be agile, to be dynamic, is very aligned to how you build with LLMs: to be really purposeful about what it is that you want to build, but to be ready and willing and able to roll with the punches and to adjust when things don't work. And to do that because the products are so unpredictable.\"}),/*#__PURE__*/e(\"p\",{children:\"Yeah. Yeah. Which is almost counter to how you would run research.\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** Yeah. Which is interesting. Almost having that, like relying on that mechanical sympathy, enough of what's under the covers and how things should work. Yes. 
That'll get you so far, but it's that level of iteration on top of that.\"}),/*#__PURE__*/e(\"p\",{children:\"**Guy Podjarny:** Research [00:39:00] as a field as a whole, and of course I'm generalizing here, is about a slightly more thorough thesis and some sort of a hypothesis. And then how are you going to go about doing it and trying to be thorough in assessing, where does it go?\"}),/*#__PURE__*/e(\"p\",{children:\"And some fields, of course, in the LLM world, very much require that to be able to evolve the models. But when you're building on top of it, it's an interesting thought that I think has good points, which is you actually have to be careful to say, are you too fond of or too used to these kind of ML methodologies that are more methodical, that actually will get in the way of your agility and maybe acceptance of some of the new power, if you will, of the LLM?\"}),/*#__PURE__*/e(\"p\",{children:\"Which sometimes needs slightly less scientific methods to make the most of. So I thought it was really interesting. At this point, we touched a few topics here, but you really should listen to the episode and hear it straight from the source on it. He has a lot more to say and different ways to phrase it, which are far better than my butchering them here.\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** All of those episodes are on [00:40:00] Apple Podcasts or on Spotify, et cetera. Feel free to have a listen and subscribe wherever you are, and you'll be first to hear about all our new episodes going forward as well. That pretty much wraps up this live episode, Guypo. Yeah,\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Guy Podjarny:** I think, thanks for tuning in.\"}),/*#__PURE__*/e(\"p\",{children:\"And I will maybe make one more mention, which is we have actually all of the talks from AI Native DevCon on our YouTube. That's right.\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** Yeah. 
Everything's on the Tessl YouTube. Yeah, go to YouTube, do a search for Tessl and you'll see all the talks within their own playlist.\"}),/*#__PURE__*/e(\"p\",{children:\"You can have a look through there, as well as all of our podcast episodes. So there are two different playlists that you can choose from.\"}),/*#__PURE__*/e(\"p\",{children:\"**Guy Podjarny:** So check them out. A lot of information produced by smart people. And then also us, yeah. Not all of them are winners. Some of them have to be better than others.\"}),/*#__PURE__*/e(\"p\",{children:\"I wanted to do something to help the others shine, but you can catch all those talks on the YouTube channel, and a lot of great tools and really smart perspectives on where the future of AI dev and AI native dev is going. So check them out.\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** So happy Thanksgiving again to those in the U.S.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Guy Podjarny:** Happy Thanksgiving.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** And thanks all [00:41:00] for tuning in and we'll see you on the next episode.\\xa0\"}),/*#__PURE__*/e(\"p\",{children:\"**Guy Podjarny:** Indeed. See you then. Bye.\"}),/*#__PURE__*/e(\"p\",{children:\"**Simon Maple:** Thanks for tuning in. Join us next time on the AI Native Dev brought to you by Tessl.\"})]});export const richText4=/*#__PURE__*/t(n.Fragment,{children:[/*#__PURE__*/e(\"h4\",{children:/*#__PURE__*/e(\"strong\",{children:\"The Birth of Notion AI\"})}),/*#__PURE__*/e(\"p\",{children:'The journey of Notion AI began in October 2022 when the team gained early access to GPT-4. Simon Last described this as a turning point, stating, \"Playing with GPT-4 was the trigger for me. Oh my God, this thing is actually really useful now.\" This realization led to the immediate development of Notion\\'s first AI product, an AI writing assistant launched in February 2023. 
The assistant can write, edit, and insert text, offering various pre-packaged prompts for improving writing.'}),/*#__PURE__*/e(\"p\",{children:\"This initial foray into AI was not just about incorporating a trendy technology but about fundamentally transforming how users interact with digital content. The AI writing assistant was designed to be intuitive, allowing users to seamlessly integrate it into their existing workflows. This integration was made possible by understanding the core needs of users—efficiency, accuracy, and ease of use. Simon Last emphasized that the AI's role was to complement human creativity, not replace it, by taking over repetitive tasks and allowing users to focus on more strategic activities.\"}),/*#__PURE__*/e(\"h4\",{children:/*#__PURE__*/e(\"strong\",{children:\"Key AI Products Introduced by Notion\"})}),/*#__PURE__*/e(\"p\",{children:'Notion introduced three main AI products: AI Writing Assistant, AI Autofill, and Q&A. The AI Writing Assistant allows users to write and edit text with ease, using custom prompts. Simon Last explained, \"There were prepackaged actions and then also you could just type whatever you wanted.\" The AI Autofill feature enables users to fill out database columns using AI-generated prompts, useful for summarizing or translating content. The Q&A product, launched in November 2023, indexes all of Notion, using embeddings to facilitate a chat bot where users can ask questions. Simon Last noted, \"We built an embedding index over all of notion, and then you could ask questions and it\\'s a chat bot.\"'}),/*#__PURE__*/e(\"p\",{children:\"Each of these products serves a distinct but complementary purpose in the Notion ecosystem. The AI Writing Assistant boosts productivity by streamlining content creation, while AI Autofill automates data entry processes, reducing manual workload. 
The Q&A feature leverages advanced natural language processing to provide accurate and contextually relevant answers, making information retrieval faster and more intuitive. Together, these products exemplify Notion's holistic approach to integrating AI across its platform, enhancing its utility and user experience.\"}),/*#__PURE__*/e(\"h4\",{children:/*#__PURE__*/e(\"strong\",{children:\"Organizational Structure and Team Dynamics\"})}),/*#__PURE__*/e(\"p\",{children:'The AI development at Notion started with a small, agile \"tiger team,\" which Simon Last believes is crucial for rapid innovation. \"It\\'s good to have a small group of people that can move really fast,\" he stated. As the AI efforts expanded, the team grew to about 20 people, organized into subgroups focusing on indexing, UX, and modeling. Despite challenges in democratizing AI across teams, Simon Last emphasized the importance of enabling more teams to work with AI.'}),/*#__PURE__*/e(\"p\",{children:\"This approach not only fosters innovation but also promotes a culture of collaboration and knowledge sharing. By embedding AI specialists within various teams, Notion ensures that AI expertise permeates the organization, leading to more cohesive and integrated product development. Simon Last highlighted the importance of maintaining a balance between centralized AI expertise and distributed innovation, allowing all teams to leverage AI while benefiting from a shared foundation of knowledge and resources.\"}),/*#__PURE__*/e(\"h4\",{children:/*#__PURE__*/e(\"strong\",{children:\"Evaluation and Testing of AI Capabilities\"})}),/*#__PURE__*/e(\"p\",{children:'Evaluating AI products is challenging, with Simon Last highlighting the need for a repeatable evaluation system. Notion focuses on robust logging and dataset creation to track and address failures. 
Simon Last explained the importance of deterministic evaluations: \"For the situations that you test, you need to make sure that those work and they don\\'t regress.\" This approach allows Notion to continuously improve their AI capabilities.'}),/*#__PURE__*/e(\"p\",{children:\"The evaluation process involves rigorous testing, using both synthetic and real-world data to simulate various scenarios. This iterative method ensures that AI models are not only accurate but also resilient to changes and capable of adapting to new data inputs. By prioritizing empirical testing and data-driven insights, Notion can refine its AI models, enhancing their reliability and performance over time.\"}),/*#__PURE__*/e(\"h4\",{children:/*#__PURE__*/e(\"strong\",{children:\"The Role of Fine-Tuning and Model Selection\"})}),/*#__PURE__*/e(\"p\",{children:'Simon Last shared insights into the complexities of fine-tuning models, noting that it often complicates the development process. \"You\\'re making your job like a hundred times harder,\" he remarked. Instead, Notion prefers in-context learning and leveraging the latest models to maintain product stability while adapting to new technological advancements.'}),/*#__PURE__*/e(\"p\",{children:\"Rather than relying heavily on custom-trained models, Notion opts to use the most advanced models available, integrating them into its products in a way that aligns with user needs. This strategy allows Notion to stay at the forefront of AI innovation without the overhead of extensive model training and maintenance. Simon Last emphasized the importance of flexibility and adaptability in AI development, recognizing that the landscape is rapidly evolving.\"}),/*#__PURE__*/e(\"h4\",{children:/*#__PURE__*/e(\"strong\",{children:\"Building Trust in AI\"})}),/*#__PURE__*/e(\"p\",{children:'Trust is a critical aspect of AI adoption. Notion employs strategies such as user verification and transparent actions to build trust. 
Simon Last stated, \"We show you this little pop up and the default is no, but you can opt in to sharing data with us.\" Citations and visualizations in the Q&A product also help users verify answers, fostering trust in the AI\\'s outputs.'}),/*#__PURE__*/e(\"p\",{children:\"Transparency is key to trust, and Notion is committed to clear communication with its users about how their data is used and protected. By providing users with the option to opt-in for data sharing, Notion respects user privacy while still gathering valuable insights for improvement. Additionally, visual cues and citations in AI-generated content allow users to assess the reliability of the information provided, enhancing their confidence in the tool's accuracy.\"}),/*#__PURE__*/e(\"h4\",{children:/*#__PURE__*/e(\"strong\",{children:\"The Future Vision for Notion AI\"})}),/*#__PURE__*/e(\"p\",{children:'Looking ahead, Simon Last envisions AI automating tedious tasks, allowing humans to focus on higher-level work. He sees AI as a primitive, integral to Notion\\'s mission of enabling custom software creation. \"Our goal as a company is to try to break the pattern of these like rigid vertical SaaS tools,\" he shared. This vision positions Notion AI to significantly impact knowledge work, elevating productivity and innovation.'}),/*#__PURE__*/e(\"p\",{children:\"As AI continues to evolve, Notion aims to harness its potential to transform how individuals and organizations manage information. By automating routine processes, Notion empowers users to allocate their time and energy towards more strategic, creative pursuits. 
This shift not only enhances productivity but also fosters a more dynamic and adaptable work environment, where AI serves as a powerful ally in achieving business goals.\"})]});export const richText5=/*#__PURE__*/e(n.Fragment,{children:/*#__PURE__*/e(\"p\",{children:/*#__PURE__*/e(o,{href:\"https://www.notion.so/product/ai\",motionChild:!0,nodeId:\"X7VlByTfx\",openInNewTab:!1,relValues:[],scopeId:\"contentManagement\",smoothScroll:!1,children:/*#__PURE__*/e(a.a,{children:\"Notion AI\"})})})});export const richText6=/*#__PURE__*/e(n.Fragment,{children:/*#__PURE__*/t(\"p\",{children:[/*#__PURE__*/e(\"strong\",{children:\"[00:00:00]\\xa0\"}),\"Introduction to the Podcast and Guest\",/*#__PURE__*/t(\"strong\",{children:[/*#__PURE__*/e(\"br\",{}),/*#__PURE__*/e(\"br\",{}),\"[00:01:00]\\xa0\"]}),\"The Birth of Notion AI\",/*#__PURE__*/t(\"strong\",{children:[/*#__PURE__*/e(\"br\",{}),/*#__PURE__*/e(\"br\",{}),\"[00:04:00]\\xa0\"]}),\"Key AI Products Introduced by Notion\",/*#__PURE__*/t(\"strong\",{children:[/*#__PURE__*/e(\"br\",{}),/*#__PURE__*/e(\"br\",{}),\"[00:07:00]\\xa0\"]}),\"Organizational Structure and Team Dynamics\",/*#__PURE__*/t(\"strong\",{children:[/*#__PURE__*/e(\"br\",{}),/*#__PURE__*/e(\"br\",{}),\"[00:09:00]\\xa0\"]}),\"Evaluation and Testing of AI Capabilities\",/*#__PURE__*/e(\"br\",{}),/*#__PURE__*/t(\"strong\",{children:[/*#__PURE__*/e(\"br\",{}),\"[00:13:00]\\xa0\"]}),\"The Role of Fine-Tuning and Model Selection\",/*#__PURE__*/e(\"br\",{}),/*#__PURE__*/t(\"strong\",{children:[/*#__PURE__*/e(\"br\",{}),\"[00:16:00]\\xa0\"]}),\"Building Trust in AI\",/*#__PURE__*/e(\"br\",{}),/*#__PURE__*/t(\"strong\",{children:[/*#__PURE__*/e(\"br\",{}),\"[00:21:00]\\xa0\"]}),\"The Future Vision for Notion AI\",/*#__PURE__*/e(\"br\",{}),/*#__PURE__*/t(\"strong\",{children:[/*#__PURE__*/e(\"br\",{}),\"[00:41:00]\\xa0\"]}),\"Discussion on AI in Software 
Development\",/*#__PURE__*/e(\"br\",{}),/*#__PURE__*/t(\"strong\",{children:[/*#__PURE__*/e(\"br\",{}),\"[00:51:00]\\xa0\"]}),\"Conclusion and Closing Remarks\"]})});\nexport const __FramerMetadata__ = {\"exports\":{\"richText1\":{\"type\":\"variable\",\"annotations\":{\"framerContractVersion\":\"1\"}},\"richText6\":{\"type\":\"variable\",\"annotations\":{\"framerContractVersion\":\"1\"}},\"richText2\":{\"type\":\"variable\",\"annotations\":{\"framerContractVersion\":\"1\"}},\"richText4\":{\"type\":\"variable\",\"annotations\":{\"framerContractVersion\":\"1\"}},\"richText5\":{\"type\":\"variable\",\"annotations\":{\"framerContractVersion\":\"1\"}},\"richText\":{\"type\":\"variable\",\"annotations\":{\"framerContractVersion\":\"1\"}},\"richText3\":{\"type\":\"variable\",\"annotations\":{\"framerContractVersion\":\"1\"}},\"__FramerMetadata__\":{\"type\":\"variable\"}}}"],"mappings":"qVACa,AADb,GAAkD,IAA8B,IAAuC,IAAwB,CAAa,EAAsB,EAAA,EAAa,CAAC,SAAS,CAAc,EAAE,IAAI,CAAC,SAAS,8UAA+U,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,2HAA4H,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,oFAAqF,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,2PAA4P,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,4DAA6D,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,+CAAgD,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,wPAAyP,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,4DAA6D,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,oaAAqa,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,0JAA2J,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,mKAAoK,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,spBAAupB,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,2SAA4S,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,0cAA2c,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,gUAAiU,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,+VAAgW,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,6hBAA8hB,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,uNAAwN,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,+UAAgV,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,+nBAAgoB,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,wWAAyW,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,sOAAuO,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,gQAAiQ,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,iRAAkR,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,wUAAyU,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,+cAAgd,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,gUAAiU,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,6WAA8W,EAA
C,CAAc,EAAE,IAAI,CAAC,SAAS,4TAA6T,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,6FAA8F,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,ijBAAkjB,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,iTAAkT,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,03BAA23B,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,yUAA0U,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,mYAAoY,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,iTAAkT,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,qmBAAsmB,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,mWAAoW,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,qXAAsX,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,qTAAsT,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,qNAAsN,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,yGAA0G,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,0KAA2K,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,yHAA0H,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,yUAA0U,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,qXAAsX,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,k3BAAm3B,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,wSAAyS,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,0TAA2T,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,qUAAsU,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,mZAAoZ,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,gJAAiJ,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,oJAAqJ,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,yMAA0M,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,WAAY,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,8EAA+E,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,8UAA+U,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,ucAAwc,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,sVAAuV,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,0YAA2Y,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,saAAua,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,4SAA6S,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,0MAA2M,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,6IAA8I,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,mOAAoO,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,qGAAsG,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,iWAAkW,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,iaAAka,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,sUAAuU,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,mWAAoW,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,4VAA6V,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,2FAA4F,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,yPAA0P,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,yFAA0F,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,oQAAqQ,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,+QAAgR,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,qIAAsI,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,0XAA2X,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,kaAAma,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,2bAA4b,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,sUAAuU,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,uJAAwJ
,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,2GAA4G,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,0TAA2T,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,+GAAgH,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,WAAY,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,kZAAmZ,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,4bAA6b,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,8NAA+N,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,mEAAoE,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,2MAA4M,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,wQAAyQ,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,0XAA2X,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,mGAAoG,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,+EAAgF,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,gKAAiK,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,wEAAyE,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,wWAAyW,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,kGAAmG,EAAC,AAAC,CAAC,EAAC,CAAc,EAAuB,EAAA,EAAa,CAAC,SAAS,CAAc,EAAE,KAAK,CAAC,SAAsB,EAAE,SAAS,CAAC,SAAS,cAAe,EAAC,AAAC,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,22BAA42B,EAAC,CAAc,EAAE,KAAK,CAAC,SAAsB,EAAE,SAAS,CAAC,SAAS,mCAAoC,EAAC,AAAC,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,ujBAA4jB,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,kZAAmZ,EAAC,CAAc,EAAE,KAAK,CAAC,SAAsB,EAAE,SAAS,CAAC,SAAS,6BAA8B,EAAC,AAAC,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,uiBAA0iB,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,kgBAAmgB,EAAC,CAAc,EAAE,KAAK,CAAC,SAAsB,EAAE,SAAS,CAAC,SAAS,qBAAsB,EAAC,AAAC,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,wgBAA0gB,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,0eAA2e,EAAC,CAAc,EAAE,KAAK,CAAC,SAAsB,EAAE,SAAS,CAAC,SAAS,mCAAoC,EAAC,AAAC,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,gkBAAkkB,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,+eAAgf,EAAC,CAAc,EAAE,KAAK,CAAC,SAAsB,EAAE,SAAS,CAAC,SAAS,cAAe,EAAC,AAAC,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,kfAAqf,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,ofAAqf,EAAC,CAAc,EAAE,KAAK,CAAC,SAAsB,EAAE,SAAS,CAAC,SAAS,wCAAyC,EAAC,AAAC,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,6aAA8a,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,8bAA+b,EAAC,CAAc,EAAE,KAAK,CAAC,SAAsB,EAAE,SAAS,CAAC,SAAS,+BAAgC,EAAC,AAAC,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,6XAA8X,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,gfAAif,EAAC,CAAc,EAAE,KAAK,CAAC,SAAsB,EAAE,SAAS,CAAC,SAAS,0BAA2B,EAAC,AAAC,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,ycAA0c,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,+iBAAgjB,EAAC,CAAc,EAAE,KAAK,CAAC,SAAsB,EAAE,SAAS,CAAC,SAAS,SAAU,EAAC,AAAC,EAAC,CAAc
,EAAE,IAAI,CAAC,SAAS,8qBAA+qB,EAAC,AAAC,CAAC,EAAC,CAAc,EAAuB,EAAA,EAAa,CAAC,SAAsB,EAAE,IAAI,CAAC,SAAS,CAAC,qDAAkE,EAAE,KAAK,CAAE,EAAC,CAAc,EAAE,KAAK,CAAE,EAAC,CAAC,+CAA4D,EAAE,KAAK,CAAE,EAAC,CAAc,EAAE,KAAK,CAAE,EAAC,CAAC,yCAAsD,EAAE,KAAK,CAAE,EAAC,CAAc,EAAE,KAAK,CAAE,EAAC,CAAC,uDAAoE,EAAE,KAAK,CAAE,EAAC,CAAc,EAAE,KAAK,CAAE,EAAC,CAAC,qDAAkE,EAAE,KAAK,CAAE,EAAC,CAAc,EAAE,KAAK,CAAE,EAAC,CAAC,oDAAiE,EAAE,KAAK,CAAE,EAAC,CAAc,EAAE,KAAK,CAAE,EAAC,CAAC,iEAA8E,EAAE,KAAK,CAAE,EAAC,CAAc,EAAE,KAAK,CAAE,EAAC,CAAC,2CAAwD,EAAE,KAAK,CAAE,EAAC,CAAc,EAAE,KAAK,CAAE,EAAC,CAAC,sCAAmD,EAAE,KAAK,CAAE,EAAC,CAAc,EAAE,KAAK,CAAE,EAAC,CAAC,wCAAyC,CAAC,EAAC,AAAC,EAAC,CAAc,EAAuB,EAAA,EAAa,CAAC,SAAS,CAAc,EAAE,IAAI,CAAC,SAAS,+FAAgG,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,uHAAwH,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,4IAA6I,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,gKAAiK,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,8FAA+F,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,qFAAsF,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,8HAA+H,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,sGAAuG,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,wRAAyR,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,0RAA2R,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,yfAA0f,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,mfAAof,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,2EAA4E,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,+BAAgC,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,6BAA8B,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,yPAA0P,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,mbAAob,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,4IAA6I,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,gEAAiE,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,8IAA+I,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,0WAA2W,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,0DAA2D,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,2EAA4E,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,oIAAqI,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,4LAA6L,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,wPAAyP,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,qQAAsQ,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,0cAA2c,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,8RAA+R,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,wnBAAynB,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,oTAAqT,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,uFAAwF,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,yHAA0H,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,mOAAoO,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,iMAAkM,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,
4BAA6B,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,iaAAka,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,uiBAAwiB,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,sSAAuS,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,6HAA8H,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,oFAAqF,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,+WAAgX,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,wIAAyI,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,sKAAuK,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,gEAAiE,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,+IAAgJ,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,+JAAgK,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,moBAAooB,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,0bAA2b,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,2RAA4R,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,kPAAmP,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,2WAA4W,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,yQAA0Q,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,0WAA2W,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,wCAAyC,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,2NAA4N,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,sRAAuR,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,oGAAqG,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,oMAAqM,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,6CAA8C,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,0XAA2X,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,yHAA0H,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,mPAAoP,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,sVAAuV,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,oWAAqW,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,+RAAgS,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,qCAAsC,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,kRAAmR,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,oUAAqU,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,gLAAiL,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,wNAAyN,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,gmCAAimC,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,0aAA2a,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,6TAA8T,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,oTAAqT,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,uTAAwT,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,uWAAwW,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,0fAA2f,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,qLAAsL,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,4dAA6d,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,gVAAiV,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,oUAAqU,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,m5BAAo5B,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,scAAuc,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,ySAA0S,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,s+BAAu+B,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,iTAAkT,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,8UAA+U,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,0TAA2T,EAAC,CAAc,EAAE,IAAI,CA
AC,SAAS,6vBAA8vB,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,qIAAsI,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,2KAA4K,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,qVAAsV,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,8ZAA+Z,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,wbAAyb,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,saAAua,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,kUAAmU,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,0RAA2R,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,sVAAuV,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,2MAA4M,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,0LAA2L,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,sQAAuQ,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,4TAA6T,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,uSAAwS,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,wSAAyS,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,0mBAA2mB,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,kVAAmV,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,mbAAob,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,yOAA0O,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,6IAA8I,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,qVAAsV,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,gcAAic,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,2RAA4R,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,oSAAqS,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,+MAAgN,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,qHAAsH,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,0HAA2H,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,wIAAyI,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,iHAAkH,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,ulBAAwlB,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,6BAA8B,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,2GAA4G,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,eAAgB,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,kUAAmU,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,gtBAAitB,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,2TAA4T,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,ieAAke,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,qXAAsX,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,uUAAwU,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,oSAAqS,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,gTAAiT,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,gSAAiS,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,iVAAkV,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,0VAA2V,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,iQAAkQ,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,yPAA0P,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,2IAA4I,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,qDAAsD,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,2oBAA4oB,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,2DAA4D,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,4JAA6J,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,+KAAgL,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,iPAAkP,EAAC,CAAc,EAAE,
IAAI,CAAC,SAAS,yHAA0H,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,0SAA2S,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,sTAAuT,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,sUAAuU,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,+SAAgT,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,0XAA2X,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,8SAA+S,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,uVAAwV,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,qVAAsV,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,6SAA8S,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,uTAAwT,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,iMAAkM,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,gDAAiD,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,kJAAmJ,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,8RAA+R,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,sKAAuK,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,oEAAqE,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,+NAAgO,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,yKAA0K,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,gUAAiU,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,muBAAouB,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,wkBAAykB,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,sUAAuU,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,wEAAyE,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,sPAAuP,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,oSAAqS,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,odAAqd,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,yXAA0X,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,oSAAqS,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,kDAAmD,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,kJAAmJ,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,8JAA+J,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,gJAAiJ,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,8LAA+L,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,iPAAkP,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,wEAAyE,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,2CAA4C,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,qGAAsG,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,8CAA+C,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,wGAAyG,EAAC,AAAC,CAAC,EAAC,CAAc,EAAuB,EAAA,EAAa,CAAC,SAAS,CAAc,EAAE,KAAK,CAAC,SAAsB,EAAE,SAAS,CAAC,SAAS,wBAAyB,EAAC,AAAC,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,qeAAue,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,ykBAA0kB,EAAC,CAAc,EAAE,KAAK,CAAC,SAAsB,EAAE,SAAS,CAAC,SAAS,sCAAuC,EAAC,AAAC,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,urBAAyrB,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,sjBAAujB,EAAC,CAAc,EAAE,KAAK,CAAC,SAAsB,EAAE,SAAS,CAAC,SAAS,4CAA6C,EAAC,AAAC,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,sdAAwd,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,+fAAggB,EAAC,CAAc,EAAE,KAAK,CAAC,SAAsB,EAAE,SAAS,CAAC,SAAS,2
CAA4C,EAAC,AAAC,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,sbAAwb,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,4ZAA6Z,EAAC,CAAc,EAAE,KAAK,CAAC,SAAsB,EAAE,SAAS,CAAC,SAAS,6CAA8C,EAAC,AAAC,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,mWAAqW,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,2cAA4c,EAAC,CAAc,EAAE,KAAK,CAAC,SAAsB,EAAE,SAAS,CAAC,SAAS,sBAAuB,EAAC,AAAC,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,oXAAsX,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,odAAqd,EAAC,CAAc,EAAE,KAAK,CAAC,SAAsB,EAAE,SAAS,CAAC,SAAS,iCAAkC,EAAC,AAAC,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,yaAA2a,EAAC,CAAc,EAAE,IAAI,CAAC,SAAS,kbAAmb,EAAC,AAAC,CAAC,EAAC,CAAc,EAAuB,EAAA,EAAa,CAAC,SAAsB,EAAE,IAAI,CAAC,SAAsB,EAAEA,EAAE,CAAC,KAAK,mCAAmC,aAAa,EAAE,OAAO,YAAY,cAAc,EAAE,UAAU,CAAE,EAAC,QAAQ,oBAAoB,cAAc,EAAE,SAAsB,EAAEC,EAAE,EAAE,CAAC,SAAS,WAAY,EAAC,AAAC,EAAC,AAAC,EAAC,AAAC,EAAC,CAAc,EAAuB,EAAA,EAAa,CAAC,SAAsB,EAAE,IAAI,CAAC,SAAS,CAAc,EAAE,SAAS,CAAC,SAAS,gBAAiB,EAAC,CAAC,wCAAqD,EAAE,SAAS,CAAC,SAAS,CAAc,EAAE,KAAK,CAAE,EAAC,CAAc,EAAE,KAAK,CAAE,EAAC,CAAC,gBAAiB,CAAC,EAAC,CAAC,yBAAsC,EAAE,SAAS,CAAC,SAAS,CAAc,EAAE,KAAK,CAAE,EAAC,CAAc,EAAE,KAAK,CAAE,EAAC,CAAC,gBAAiB,CAAC,EAAC,CAAC,uCAAoD,EAAE,SAAS,CAAC,SAAS,CAAc,EAAE,KAAK,CAAE,EAAC,CAAc,EAAE,KAAK,CAAE,EAAC,CAAC,gBAAiB,CAAC,EAAC,CAAC,6CAA0D,EAAE,SAAS,CAAC,SAAS,CAAc,EAAE,KAAK,CAAE,EAAC,CAAc,EAAE,KAAK,CAAE,EAAC,CAAC,gBAAiB,CAAC,EAAC,CAAC,4CAAyD,EAAE,KAAK,CAAE,EAAC,CAAc,EAAE,SAAS,CAAC,SAAS,CAAc,EAAE,KAAK,CAAE,EAAC,CAAC,gBAAiB,CAAC,EAAC,CAAC,8CAA2D,EAAE,KAAK,CAAE,EAAC,CAAc,EAAE,SAAS,CAAC,SAAS,CAAc,EAAE,KAAK,CAAE,EAAC,CAAC,gBAAiB,CAAC,EAAC,CAAC,uBAAoC,EAAE,KAAK,CAAE,EAAC,CAAc,EAAE,SAAS,CAAC,SAAS,CAAc,EAAE,KAAK,CAAE,EAAC,CAAC,gBAAiB,CAAC,EAAC,CAAC,kCAA+C,EAAE,KAAK,CAAE,EAAC,CAAc,EAAE,SAAS,CAAC,SAAS,CAAc,EAAE,KAAK,CAAE,EAAC,CAAC,gBAAiB,CAAC,EAAC,CAAC,2CAAwD,EAAE,KAAK,CAAE,EAAC,CAAc,EAAE,SAAS,CAAC,SAAS,CAAc,EAAE,KAAK,CAAE,EAAC,CAAC,gBAAiB,CAAC,EAAC,CAAC,gCAAiC,CAAC,EAAC,AAAC,EAAC,CACtg1G,EAAqB,CAAC,QAAU,CAAC,UAAY,CAAC,KAAO,WAAW,YAAc,CAAC,sBAAwB,GAAI,CAAC,EAAC,UAAY,CAAC,KAAO,WAAW,YAAc,CAAC,sBAAwB,GAAI,CAAC,EAAC,UAAY,CAAC,KAAO,WAAW,YAAc,CAAC,sBAAwB,GAAI,CAAC,EAAC,UA
AY,CAAC,KAAO,WAAW,YAAc,CAAC,sBAAwB,GAAI,CAAC,EAAC,UAAY,CAAC,KAAO,WAAW,YAAc,CAAC,sBAAwB,GAAI,CAAC,EAAC,SAAW,CAAC,KAAO,WAAW,YAAc,CAAC,sBAAwB,GAAI,CAAC,EAAC,UAAY,CAAC,KAAO,WAAW,YAAc,CAAC,sBAAwB,GAAI,CAAC,EAAC,mBAAqB,CAAC,KAAO,UAAW,CAAC,CAAC"}