
Iterate.ai’s Interplay platform accelerates AI-enabled app creation

Overview

Companies looking to create and deploy AI-enabled apps quickly can avoid lengthy and costly development cycles by working with Iterate.ai's Interplay platform. The low-code solution provides a complete runtime environment built specifically to accelerate the performance of AI models. Product teams can "iterate" ideas into actual applications 17x faster than with traditional development methods. Shomron Jacob, Iterate.ai's head of applied machine learning & platform, demonstrates the key features of Interplay.


Transcript

00:00 [This transcript was auto-generated.] Welcome to DEMO, the show where companies come in and show us their latest products and services. Today, I'm joined by Shomron Jacob, head of applied machine learning and platform at Iterate.ai. Welcome to the show.
Thanks for having me. I'm excited to be here.
And so what are you going to show us? You're going to show us an enterprise AI application platform, drag-and-drop, low-code AI for enterprises, but you have a cooler name?
Yes, we call it Interplay. But you defined it well; it's an enterprise platform.
Okay. And so who is this designed for? Generally, you know, is this for anybody in the company or a specific subset?
We actually designed it for all the C-level folks. We want them to come in, try out an idea they've had, and just see if it works, and then take it to market overnight. It's designed for them.
And what problem is it generally solving for a lot of these teams?
So it's solving everything and anything. We come in where you have an idea and you want to prototype it quickly before you go to your executives to talk about it and show it. That process usually takes two to four weeks, right? But with our platform, we let you do it in two to three days. That's what we are solving.
Yeah, time to market?
And what would a company do if they didn't have this? It would be a longer, drawn-out process? You'd have to have code developers and all these other combinations?
Yes, absolutely, you got it right. You'd have to go the traditional route: figure it out, integrate it together, make it work, and it takes longer and costs money. Mostly people shut it down because it would require some form of funding. But with the platform, you can really hook it up together and prove it out.
Okay, so let's jump right into the demo here. Show us what you've got, and I may jump in with some other questions. Please show us the demo.
No problem. Great.
So, like you see on your screen, this is our platform, a proven enterprise platform. Like I talked about, we build and ship apps many times faster. Before I get into the demo, I'll tell you we are industry agnostic, we work across every industry; we are device agnostic; we can work cloud agnostic as well; and we are more or less technology agnostic. So we can work with AI, with IoT, with software, with back-end integrations, SMEs, everything together, correct. If you see on your bottom left, we have currently deployed across 4,100 edge deployments, and this involves our partnership with Intel, Qualcomm, and all that good stuff. Okay, let's quickly jump into our platform. This is what it looks like. It's a drag-and-drop platform, which basically lets you prototype something really quickly, and then once you're satisfied with the product, you can go to market overnight as well. It's built in a way where you can go to production without really thinking too much about it. And like I just said, we are platform agnostic and cloud agnostic, so you can figure out what suits you best and just deploy that, correct.
So I'll just show you that it is drag and drop, alright?
If you see on your left, these are our nodes, correct. Each node is organized in its own category. So let's say you are looking into AI nodes: you can just select the AI nodes, and now we are going to look into language model nodes. And let's say you want a data loader. You just drag this guy, drop it, and then you start connecting it, correct? That's all you're doing. That's why I said this is targeted toward the C-level folks: you can really quickly say, I have an idea, I think it'll work, and just drag these nodes and build a workflow for it. More or less, sometimes you just need a configurable model where you can put some settings in, and it'll slide right out of the gate.
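The drag-and-drop flow described here, pick a node, drop it, connect it, maps naturally onto a pipeline of connected processing steps. As a rough illustration only, not Interplay's actual API, the node names and `Workflow` class below are invented stand-ins:

```python
# A minimal sketch of the node-and-workflow idea described above.
# Interplay's real node API is not public; everything here is illustrative.

def data_loader(_):
    """Stand-in 'Data Loader' node: emits raw documents."""
    return ["invoice.pdf", "claim.txt"]

def language_model(docs):
    """Stand-in 'Language Model' node: annotates each document."""
    return [{"file": d, "summary": f"summary of {d}"} for d in docs]

class Workflow:
    """Nodes wired left to right, like dragging and connecting them on a canvas."""
    def __init__(self, *nodes):
        self.nodes = nodes

    def run(self, payload=None):
        # Each node's output becomes the next node's input.
        for node in self.nodes:
            payload = node(payload)
        return payload

flow = Workflow(data_loader, language_model)
result = flow.run()
# result[0] == {"file": "invoice.pdf", "summary": "summary of invoice.pdf"}
```

Each workflow is just an ordered chain, so several of them can run side by side without sharing state, which echoes the "workflows run in parallel" point made later in the demo.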
What was this platform before generative AI came around? You've just added some generative AI additions to it? So it was a low-code, no-code platform before?
Correct, it's been like this for the last six years for us. The generative capability that you see on your top left, we have now made device agnostic and cloud agnostic, so it can run on a GPU or on a CPU. Before generative AI came in, this was not really a big problem; people were not really aware of it. But ever since language models came out, we have clients who come in and say, we want to make it run on the CPU versus the GPU, right? So that has been one of the biggest additions this year. What you see on the top are flows, which is what we call workflows, correct. Each workflow is designed to achieve a certain task, and you can have as many workflows as you like. So you can have a workflow that says: bring in the documents from a bucket, then run some generative features on them, then extract some information, then process my claims. All these workflows run in parallel to each other, so you never worry about, what happens if that one dies while this one is still running, will it affect my processes? Nothing like that; each is a separate container altogether, right? So on your screen, if you see here, we are just basically creating a vector database. One of the biggest things that blew up when generative AI came out was the vector database; everyone started looking into it. So if you see here, we are creating a project, downloading a language model and a data loader, and here we are basically initiating the project and building a vector database. It's that simple, correct? Before generative AI came out, this used to be a daunting task, because you would have to code each piece separately and then join the dots. Now, with our low-code technology, you can do this in 30 minutes. And it's very customizable.
You can deploy whatever type of models you want, you can define your configurators, you can define your database model; all that stuff is very configurable, you can just pick and choose and go. This is the powerful part of our platform, and why a lot of our clients are using it: multiple teams can work together on multiple problems, connect it all together, and go live.
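The "create a project, load documents, build a vector database" workflow described above can be sketched in plain Python. This toy version uses bag-of-words vectors and cosine similarity as simplified stand-ins for a real embedding model and vector store; none of it reflects Interplay internals:

```python
# Toy vector database: index documents as word-count vectors,
# then rank them by cosine similarity against a query.
import math
from collections import Counter

def embed(text):
    """Stand-in embedding: a bag-of-words vector (word -> count)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

class VectorDB:
    """Initiate a 'project', load documents, query by similarity."""
    def __init__(self):
        self.docs = []  # (text, vector) pairs

    def add(self, text):  # the 'Data Loader' step
        self.docs.append((text, embed(text)))

    def search(self, query, k=1):
        qv = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(qv, d[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

db = VectorDB()
db.add("claims processing guidelines for auto insurance")
db.add("employee onboarding checklist")
best = db.search("how do I process an insurance claim?")
# best == ["claims processing guidelines for auto insurance"]
```

A production version would swap `embed` for a learned embedding model and `VectorDB` for a purpose-built store, but the shape of the workflow, load, embed, index, query, is the same one the demo assembles from nodes.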
So you said you're building this for a prototype or an idea that someone had. After you've done all this, would you need to customize it with custom code and throw it to the developers, or could you deploy the final package from here?
Good question. You can actually go live right from here. You can say, I like what I've done, let's go live, and it comes back with a production-line setup and goes live from right there. All you have to figure out is the cloud or the device you want to release it on, plus some configuration of your domain names, websites, URLs, and all that good stuff. Otherwise, it's ready to go.
All right. And you've got some examples of how people have been using it, correct?
Yes. Okay.
So I'll quickly show you two examples. One is one of our clients, Ulta Beauty. They use this entire platform to power their search; their teams have built these workflows like I talked about here, and the search runs on them. So if you go to ulta.com and start looking for products at the top, the entire search through the entire catalog, probably over a million products, is powered through us in real time. It actually comes through our platform: we do the search, we connect to multiple databases, deploy multiple models, and then we figure out the closest match and return it. They are also using our platform to build generative AI capabilities; they have released a chatbot with over 200 intents that handles customer requests in real time: shipping issues, product issues, payment issues, everything. This client has been working with us for the last six years, and if you go to our website, you will see one of their executives talking about why they picked us over a lot of other competitors. One of the things she talks about is that we take 75% less effort with 20 times less memory, which matters in today's world: the compute required is so big, the funding required behind it is so big, the infrastructure required to support it is so big, that not everyone can do it. So our platform has gained popularity because you can do it with 20 times less memory at 75% less effort.
It's not just the speed, it's also the cost savings, correct?
Yeah.
And your second example is, I think, called GenPilot, correct?
Yes.
So GenPilot is our proprietary product built on top of our low-code platform. Think of GenPilot as a product which can do multiple things in parallel, correct. The very first thing you see when you come in is a dashboard of the different users on the platform: why are they using it, and what are they doing in it? The top two things you see in app usage here are a service pilot and a document search, correct. Let's get into document search, which is basically a way for you to talk to any document across your computer, device, cloud, anything. Here, we have loaded tons of documents in real time, and I don't even know what they are; actually, that's the truth, I don't know what they are. So what I'm going to do is just say, generate a summary for me, so I can at least figure out what this is all about. This is very important, because a lot of our clients have a million documents sitting in a folder, and they want to figure out the four or five documents that can get them the answer they're looking for. That's where this product comes in: it'll actually go find those documents, get you the answer, and pinpoint it, correct. We don't just throw the answer out; we actually tell you where it's coming from. We'll put out the source for it: we'll tell you it's on this page, in this document, in this paragraph, this person said it. So we bring out the exact content you're looking for, correct, combined across different sources. You can also talk to it, so you can say, this is the document and I want to ask questions, correct. And once you have figured out the document, you can say, okay, you know what, I have another question; can you tell me the answer, because I don't want to read a 100-page document. So this is a very popular use case, for two reasons.
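The source-attribution behavior described here, returning the answer together with the document and page it came from, can be sketched as retrieval over chunks that carry provenance metadata. The chunking and word-overlap scoring below are simplified stand-ins for whatever GenPilot actually does:

```python
# Sketch of document search that keeps provenance with every hit,
# so results can cite "this document, this page" rather than just text.

def split_into_chunks(doc_name, pages):
    """Index each page separately so a hit can cite document + page."""
    return [{"doc": doc_name, "page": i + 1, "text": t}
            for i, t in enumerate(pages)]

def search_with_sources(chunks, query):
    """Score chunks by shared words; return answers with their sources."""
    q = set(query.lower().split())
    scored = []
    for c in chunks:
        overlap = len(q & set(c["text"].lower().split()))
        if overlap:
            scored.append((overlap, c))
    scored.sort(key=lambda s: s[0], reverse=True)
    return [{"answer": c["text"], "source": f'{c["doc"]}, page {c["page"]}'}
            for _, c in scored]

chunks = split_into_chunks("handbook.pdf",
                           ["vacation policy: 20 days per year",
                            "expense reports are due monthly"])
hits = search_with_sources(chunks, "how many vacation days per year")
# hits[0]["source"] == "handbook.pdf, page 1"
```

The key design point matches what the demo emphasizes: provenance is attached at indexing time, so the answer never loses track of where it came from.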
If you see on the top, this is where we let people decide whether they want to use a private language model, or bring in an open-source or a closed-source one. So here, if you see, we are using GPT-4, which is a public cloud model, but we are also using Mistral, which is private, correct. And you can combine multiple of them together: you can have an OpenAI model talk to a Google model and talk to your own open-source model that you have developed. All three can talk to each other.
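Mixing a public-cloud model with a private one behind a single interface, as described here, can be sketched with a small router. The model classes below are placeholders, not real GPT-4 or Mistral clients, and the routing scheme is an assumption for illustration:

```python
# Sketch of routing requests across mixed models behind one interface.

class PublicCloudModel:
    """Placeholder for a public-cloud model such as GPT-4."""
    name = "gpt-4 (public cloud)"
    def generate(self, prompt):
        return f"[{self.name}] answer to: {prompt}"

class PrivateModel:
    """Placeholder for a privately hosted model such as Mistral."""
    name = "mistral (private)"
    def generate(self, prompt):
        return f"[{self.name}] answer to: {prompt}"

class ModelRouter:
    """Route each request to a registered model by key; models can
    also be chained so one model's output feeds the next."""
    def __init__(self, **models):
        self.models = models

    def ask(self, key, prompt):
        return self.models[key].generate(prompt)

    def chain(self, keys, prompt):
        for key in keys:
            prompt = self.models[key].generate(prompt)
        return prompt

router = ModelRouter(public=PublicCloudModel(), private=PrivateModel())
reply = router.ask("private", "summarize this claim")
chained = router.chain(["public", "private"], "summarize this claim")
```

Because every model sits behind the same `generate` interface, "all three can talk to each other" reduces to chaining: each model's output becomes the next model's prompt.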
So you're not pointing them to a specific model or a specific cloud provider either?
Correct, correct.
So you can actually, and this is the powerful part here, combine different models together in one workflow, correct, and you can have all of them talk to each other. In today's world, all of that is kind of hard: you can talk to OpenAI, good, but the minute you want to talk to Bison, which is owned by Google, now it's a problem. How do you connect those two? Right. So that's one of the reasons. And the second is obviously the choice to have multiple databases, okay? You can connect multiple databases from different processes, different sections, different teams. And this becomes very important, because as an executive, when you're looking for certain answers before going into a big meeting, you don't want to call someone who reports to you, who then figures it out with the IT team, right? It's a long process. You can actually tap into this and figure it out.
So if a company is interested in exploring this, how long would it take to set up the platform with a company?
So our workflows are pre-built; you don't really have to do anything. If you want to give it a shot, you can do it in under 24 hours. If you want to customize it, it usually takes people less than a week. But most people who come in take it and go as it is, because it's 90 percent of what they want, correct. Only a few things here and there they will want to change.
So you log in, create an account, give them a credit card, and walk away with your app. And again, there's a lot of other information, a lot of other features that you have. So where can people go for more details on this product?
Yes, good question. You can actually visit our website, like I was talking about, and in our Interplay use cases you can look at what people are using it for. This particular product is being used in the financial industry. We're using it with Intel and Qualcomm as well. It's being used in our APAC region and Singapore. So everyone has a different use case, but the baseline remains the same.
That's cool. And you can read about everything there. So that's Iterate.ai, correct?
Yeah.
All right, Shomron, thanks again, and thanks for the demo.
Thank you so much, Keith. Thanks for having me again.