Ep 275: Be prepared to ChatGPT your competition before they ChatGPT you

Harnessing the Power of Large Language Models to Outsmart Competitors

In the current competitive business world, artificial intelligence (AI) and large language models (LLMs) are revolutionizing the way enterprises function. Not only do these advancements streamline operations, they also offer an edge over competitors. Businesses that have been quick to integrate these technologies have seen significant growth in terms of customer satisfaction and productivity.

Democratization and Productization of LLMs

Despite the popularization of access to large language models, productizing them at scale comes with complexities. However, it's crucial for businesses to understand and adopt this transformative technology. Neglecting to integrate these innovations into systems and operations may put businesses at risk of being disrupted by competitors who have embraced this technology.

Balancing Fluency and Accuracy

When using AI, the distinction between fluency and accuracy often creates confusion. Most users lean toward fluency, but relying on it alone can be misleading and lead to undesired results. It's therefore crucial to balance fluency and accuracy, a concept explained through a 4-quadrant graph outlining different AI use cases.
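The two dimensions of the framework can be made concrete with a small sketch. This is illustrative only, not from the episode; the use-case names and the numeric scores are invented for demonstration:

```python
# Illustrative sketch of the fluency-vs-accuracy framework.
# The scores below are invented for demonstration purposes.

def quadrant(fluency_need: float, accuracy_need: float) -> str:
    """Map a use case's needs (0-1 scales) to a quadrant of the 2x2 grid."""
    f = "high fluency" if fluency_need >= 0.5 else "low fluency"
    a = "high accuracy" if accuracy_need >= 0.5 else "low accuracy"
    return f"{f} / {a}"

# Hypothetical use cases scored as (fluency_need, accuracy_need).
use_cases = {
    "writing a children's book": (0.9, 0.2),   # story matters, no "right" answer
    "earnings lookup":           (0.1, 0.95),  # one correct answer, no story
    "drafting a business memo":  (0.7, 0.7),   # draft: story plus human review
}

for name, (f, a) in use_cases.items():
    print(f"{name}: {quadrant(f, a)}")
```

The point of the grid is that high-fluency/low-accuracy use cases are the safest starting point, while high-accuracy ones carry the real risk.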

AI Use Cases

The use of AI isn’t restricted to certain sectors or operations; there are various business areas where AI can offer competitive advantages. These range from enhancing creativity and productivity to improving online search capabilities.

Control and Customization: The Future of AI Models

As businesses leverage AI technologies, it's essential to maintain control over unique and proprietary elements, including data, security, privacy aspects, and to guide the direction of AI model development. Enterprises may also find it beneficial to customize AI models for specific domains, potentially leading to better accuracy and lower costs.

Tackling Data Security and Privacy Concerns

While integrating AI and LLMs, businesses often express concerns over data security and privacy. This can be addressed effectively by using enterprise-grade versions of AI models or creating custom, domain-specific models.

Harnessing Data to Stay Ahead in the Competition

In the current digital age, leveraging data is no longer an option but a necessity for survival. Businesses must embrace AI and utilize their data to avoid facing disruption at the hands of competitors.

Embracing and Understanding Technology for Growth

As AI continues to dominate, understanding its application and importance becomes a strategic necessity for business growth. This involves staying abreast of the latest advancements, learning new skills, and implementing AI-centric business models.

Choosing the Right LLM for Your Business

LLMs are powerful, but it’s important to acknowledge their limitations regarding accuracy, cost, and copyright control. Understanding how to choose the right model specific to individual tasks and ensuring its effective operation is key.

The Power of AI in Business

The importance of incorporating AI and LLMs in business cannot be overstated. From improving productivity to boosting customer satisfaction, these technologies are set to shape the future of enterprise. With careful exploration, understanding, and application, the possibilities are truly limitless.

Topics Covered in This Episode

1. Large Language Models (LLMs) and Business Competitiveness
2. Understanding LLMs for Small to Medium-Sized Businesses
3. Use Cases and Misconceptions of AI
4. Data Security and Privacy

Podcast Transcript

Jordan Wilson [00:00:01]:
If you're not using AI to get ahead, there's a good chance that your competitors are. It's something that I say a lot here on the Everyday AI Show that, you know, hey. It's no longer 2023. There's no longer time to be experimenting. If you're not implementing generative AI in 2024, you are asking for your competition to pass you, or maybe you're asking for your competition to ChatGPT you. Right? So I'm extremely excited for today's conversation, where we're saying literally this: be prepared to ChatGPT your competition before your competition ChatGPTs you. So, extremely excited for today's show. Before we bring on today's guest, just as a reminder, as always, if you're listening, whether you're in the car, on the treadmill, walking your dog, wherever you are, make sure afterwards or right now to go to youreverydayai.com.

Jordan Wilson [00:00:55]:
Sign up for our free daily newsletter. I already know today's conversation is going to be full of so much insightful information where you're gonna wanna be taking notes. Don't worry. We got that. It's gonna be in the newsletter that we're recapping. And, hey, as a reminder, I'll still be in the comments live answering any questions you have. Today's, you know, we're debuting this show live, but it's technically prerecorded. So, with that, I hope you're excited.

Jordan Wilson [00:01:20]:
I am. And please help me welcome, let's bring on our guest for today. There we go. We have Barak Turovsky, who is the VP of AI at Cisco. Barak, thank you so much for joining the Everyday AI Show.

Barak Turovsky [00:01:33]:
Hi, Jordan. It's a it's a pleasure to be here. I'm very excited.

Jordan Wilson [00:01:37]:
Alright. So, you know, I'm sure mostly everyone in the world knows Cisco, you know, a Fortune 100 company, one of the biggest companies in kind of the tech and cybersecurity space and technology in general. Right? But, Barak, tell us a little bit about what you do in your role of VP of AI at Cisco.

Barak Turovsky [00:01:58]:
Yeah. I want to start a little bit with my background and talk about how I got into that space. So I kind of have this funny way to introduce myself, that I worked on, quote, unquote, those esoteric things like AI and large language models long before they became the hottest thing on earth. On a more serious note, to a large extent my claim to fame was that I was leading the first product that productized LLMs at scale. It was called Google Translate. I spent 10 years at Google leading the languages AI product team. In 2015, 2016, we basically did multiple technological breakthroughs across the software side and the hardware side to be able to run what's called deep neural networks on a huge corpus of data, which is a prerequisite for what we now call LLMs. That then transformed into the Transformer research paper that, as you know, is basically the baseline for ChatGPT and for everything else.

Barak Turovsky [00:02:58]:
Ironically enough, most of the researchers had worked with us on translation in some shape or form. If you read those papers, they mostly talk about translation as a use case because, obviously, that was the use case that at that time was most exciting. Then I worked on productizing. The first productization of Transformer technology was called BERT, Bidirectional Encoder Representations from Transformers, also done at Google. We used it initially for Google Search, and that created a huge jump in search quality and in understanding user intent of queries. And finally, we worked on products like Google Ads, where we added a multibillion-dollar lift in Google revenue based on better AI-based targeting, Google Cloud, etcetera. In addition to that, in between Google and Cisco, I led product engineering and AI teams as chief product and technology officer at a large late-stage startup focused on computer vision AI. And now I'm at Cisco, working on applying cutting-edge technology to the networking domain.

Jordan Wilson [00:04:00]:
Wow. So, you know, maybe Barak won't say this outright, but if any of that went over your head, just know he is, I'd say, a leading expert. Right? So more than 25 years in artificial intelligence and related fields and, you know, working on AI teams at Google. You know, with that, Barak, I'm curious, you know, because, yes, a lot of people maybe don't understand. Yeah. AI has been around for many decades. Right? It's not new. But what is new is kind of, you know, what you talked about: now these Transformers and the GPT technology and large language models, you know, now being available and accessible to everyone.

Jordan Wilson [00:04:41]:
So I'm curious, from your vantage point, especially when we're out there talking to business leaders: can you talk a little bit about how impactful large language models are, even to someone like yourself who has decades of experience in the space?

Barak Turovsky [00:05:00]:
Yeah. As I mentioned, I consider myself incredibly lucky because, technically, we are in the second wave of AI, or maybe you can even call it the second hype of AI, because the first wave of AI, as I mentioned, was 2015, 2016, when Google Translate showed that you can actually run deep neural networks, now what they call large language models, on a huge corpus of data. We actually believed that would be a tech revolution. I actually encourage people to read an article called The Great A.I. Awakening from The New York Times Magazine that you can probably even find in the podcast description. It's a really good article on the history of AI. But I think that hype was pretty short-lived, because very quickly, a lot of enterprises understood it's pretty expensive and actually needs an amazing amount of talent to do it, I think.

Barak Turovsky [00:05:44]:
And that's why a lot of the AI innovation was limited to companies like Google, Microsoft, Meta, etcetera, because they had a concentration of talent, compute, etcetera. What I believe ChatGPT did is build a very clever UI on top of a lot of amazing technologies that Google and others developed, and democratize access to it. Right? And now a lot of people suddenly discover the beauty of LLMs. One thing I would caution everyone is that it democratized access to try it, but productizing it at scale is still, well, it's way easier than before. It's not rocket science at all, like nuclear bomb development as it was before, but it's still pretty complex. And it's complex because almost every technology product has this 80/20 product principle: you spend 80% of the effort on the 20% of functionality that actually makes the product work well.

Barak Turovsky [00:06:32]:
But as I said, this technology makes a lot of use cases way better than they were before.

Jordan Wilson [00:06:40]:
And speaking of use cases, I'm excited to dive into that here, and we're gonna be talking about that a little bit. But first, I do wanna set the stage a little bit for even the premise of this episode, or even the title. Right? So I started off the show, Barak, by saying, hey. 2024, if you're not implementing generative AI and if your business isn't using large language models already, you might be in for an uphill battle. What are your thoughts on that before we dive into use cases and this concept of you gotta ChatGPT your competition before they ChatGPT you?

Barak Turovsky [00:07:14]:
Yeah. So we'll talk about the use cases. Obviously, not every use case will be immediately available or served by LLMs, but there are many use cases and many areas where the technology is mature enough that it could be productized well. So if you start with a generic statement: any sizable business with a sizable number of customers, I believe, must connect their internal knowledge sources, databases, and internal communication channels across emails and chats and speech to be served through large language models. If they do it, I think they will see a pretty significant increase both in customer satisfaction and productivity. But if they don't, I believe within 3 to 5 years, they will be at a very increased risk of being disrupted by their existing competition or newcomers that will be able to offer a way better level of customer interactions at a fraction of the cost. And I think it's something that needs to be taken very seriously.
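The pattern Turovsky describes, serving internal knowledge sources through an LLM, is commonly implemented as retrieval-augmented generation: retrieve the relevant internal documents, then hand them to the model as context. A minimal sketch, with a toy word-overlap scorer standing in for a real vector store, and hypothetical document names and contents invented for illustration:

```python
# Minimal retrieval-augmented generation sketch (illustrative only).
# A real system would use embeddings and a vector store; a toy
# word-overlap score stands in for semantic retrieval here.
import re

def tokens(text: str) -> set[str]:
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query: str, docs: dict[str, str], k: int = 1) -> list[str]:
    """Return the k document names sharing the most words with the query."""
    return sorted(docs, key=lambda n: len(tokens(query) & tokens(docs[n])),
                  reverse=True)[:k]

def build_prompt(query: str, docs: dict[str, str]) -> str:
    """Assemble the context-plus-question prompt an LLM would receive."""
    context = "\n".join(docs[n] for n in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

internal_docs = {  # hypothetical internal knowledge sources
    "returns_policy": "Customers may return products within 30 days of purchase.",
    "vpn_setup": "To set up the VPN, install the client and sign in with SSO.",
}

print(build_prompt("How many days do customers have to return a product?", internal_docs))
```

The assembled prompt grounds the model's fluent answer in the business's own data, which is the customer-satisfaction and productivity win described above.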

Jordan Wilson [00:08:16]:
Yeah. And, you know, kind of beside the point, but kind of related is, you know, at least my take on this. I think that, you know, especially small and medium-sized business owners don't fully understand generative AI and don't fully understand large language models. I tell people, you know, kind of the lines between large language models and traditional Internet search seem to be blurring. Right? So when you talk about Google's SGE, you know, bringing more of a large language model type search into Google, you know, Perplexity, you know, we just saw a new model update, GPT-4o, from OpenAI today. Should business owners even start to think of the two as kind of the same? Right? Just like you wouldn't, you know, put together a big presentation without using the Internet, should they be putting together a big presentation or a big sales deck without using a large language model?

Barak Turovsky [00:09:10]:
Yeah. It's obviously use case dependent. I would argue that for some simple use cases, it might be less important. But, definitely, when you look at a scalable business with a sizable number of customer interactions, I think that's where it becomes way more important. Again, as I say, on the local level, for medium to small businesses, it also depends on your competition. But, yes, even on a small level, if your competition is starting to use it, if they create a small chatbot that shows, you know, in an interactive way what products and services you offer, that by itself could be very powerful and start taking away share of your business.

Jordan Wilson [00:09:47]:
Yeah. Absolutely. So let's go ahead and jump into some of these use cases here. So for our livestream audience, this is probably gonna be pretty straightforward. But for our podcast audience, we essentially have, and we're gonna put this, so check the show notes for a link where you can go take a look at this. But we have kind of these 4 quadrants here of different use cases for AI. So everything from, you know, low accuracy and low fluency in the lower left-hand corner, cascading up into the right to high fluency and high accuracy. So, you know, Barak, walk us through a little bit about this kind of graph that we have here.

Jordan Wilson [00:10:28]:
And I'm also curious, where did the idea for this come from? Because I love it. It's something very easy to visualize, you know, use cases for generative AI.

Barak Turovsky [00:10:38]:
Yeah. It was actually very funny. The idea came from the fact that when ChatGPT launched, I was on vacation in Hawaii. And then there was, I don't know if you remember, a classical market panic, or market excitement, that Google is finally disrupted and Microsoft will take all the share because they now have OpenAI. So literally every investor in the world wanted to talk to me to understand, because they had no clue what it is: is it true or not? Should we short Google now? What should we do? And I would basically explain to people that, actually, I believe search is not the best use case to start with for LLMs. And people listened to me and said, oh, wow. And I was basically explaining to them that the biggest misconception is actually the confusion between fluency and accuracy.

Barak Turovsky [00:11:20]:
And people were like, oh, wow. All those investors told me, you should totally publish this because it's so helpful. And I was actually very proud that within 3 months, people looked and said, oh, Google's share is not really moving, and Microsoft is not disrupting. And don't get me wrong, Microsoft will make a lot of money in other places like cloud. But, definitely, they're not disrupting search, and that's probably partly explained by the framework. So I basically came up with a framework that a lot of people find very helpful. And I wanted to dispel some of the misconceptions, because I believe the biggest misconception in this overexcitement about LLMs is that people, especially people who have not been in the industry for 10 years like myself, suddenly discover that a machine can produce content that is, in many cases, better than an average human produces.

Barak Turovsky [00:12:01]:
We're talking about what I call fluency. The polish, the confidence of ChatGPT answering questions is pretty amazing, and it's a major technological advancement built on top of a clever UI plus all this Transformer technology that was developed by Google and others. But I think what people tend to forget is that high fluency, creating a very compelling, polished, charismatic story, doesn't mean providing correct, accurate information. It's a very different dimension. It could be extremely polished, extremely confident, and still say complete crap. And my example, my closest human analogy to this behavior: an LLM at the end of the day answers a question by guessing the next word or next sentence. Right? So think about a person who can confidently and, in a very polished and charismatic manner, talk literally about any topic in the world, but they're prewired to give you an answer. If you ask them a question they don't know, they will never say I don't know.

Barak Turovsky [00:13:05]:
They will make up stuff on the spot. And because they're so good at it, their delivery is so polished, you will actually believe what they say. And it's very dangerous. The human analogy could be a con artist who does it on purpose to defraud you. And that's why con artists are so successful: because in many cases that fluency, that delivery, is amazing, especially in a topic you don't understand. Or it could be a very successful entrepreneur. A good example is Steve Jobs, who basically, you know, had a reality distortion field and saw things that others, you know? There might be less good examples, I don't know, like FTX, where you speak and you really believe in what you do, but it's actually not necessarily true.

Barak Turovsky [00:13:41]:
Right? So in all those examples, at the end of the day, the results are not necessarily great. That's why it's really important, when you think about the use cases, to look at these two dimensions, not one, and not be influenced only by the fluency. And, finally, I believe another thing that I represent by colors in my 2 by 2 grid is to also be very realistic: what are the consequences, or what is the risk, of getting the answer wrong? Because if it's a low-stakes use case, it's probably okay-ish, and I will give an example. For those who can see it, or can look at my framework afterwards in the notes: in the low accuracy, high fluency quadrant, we have a bunch of green bubbles, which I call low-stakes use cases. These are roughly creativity or productivity use cases, from writing a science fiction book or a children's book to composing music. The beauty of it: there is no objectively right or wrong answer here. It's all about the story. That's a perfect use case, where fluency is so important.

Barak Turovsky [00:14:39]:
If you look on the other side, in the high accuracy, low fluency quadrant, you have a bunch of red bubbles. Those are the use cases that, roughly, let's call them Google Search plus. Things like, hey, what is the earnings growth of Google over the last 10 years? You just need an answer. No story. Right? Or, I want to buy a dishwasher. Tell me which one to buy. Like, give me a recommendation.

Barak Turovsky [00:15:05]:
Or, I'm going to Paris and I want to get a recommendation for a hotel. You do need a story, you need an explanation, but if the story doesn't come with accurate data, that doesn't help. The product is useless. Right? And then in between, you actually have very interesting use cases that I call productivity enhancement use cases. So for use cases like writing a business memo or email, or creating a business presentation, or writing a review, the beauty of it, it's not that it's not important. It's an important use case.

Barak Turovsky [00:15:31]:
The beauty of it: you start with a draft. And for a draft, you need both a story and pretty high accuracy. And the beauty of it, if it's a draft, is that unless the people are really dumb, they will not send it automatically or publish it automatically. They will use it as a draft. And the division of work between machine and human would be: machine, please give me a good story. Because many of us, Jordan, you're probably an exception, you're probably good with stories, but most people have really good mastery over the facts yet struggle to create a story. It takes effort and time. If you help them create a good story from the facts, it's a huge productivity boost, a huge value creation for a lot of people. And if you create the UI right and explain to them, hey.

Barak Turovsky [00:16:12]:
It might be wrong. It might be inaccurate. Make sure you double-check and adjust if needed. That's to me a very compelling product to start with. I would not start with high-stakes use cases, where if you make a mistake, it's actually bad. Right? And also use cases like search, because we are talking about enormous scale here. Right? You're creating millions of presentations, billions of search queries.

Barak Turovsky [00:16:34]:
In search, you cannot really put a person behind every query to validate it. And if you ask users to validate it, that's effectively what search does today. So I would recommend people to start with use cases that are much more grounded in the value creation of creating a good story, versus get me the one right answer from the beginning. So that's basically the gist of this. What is more important for the use case? The more the fluency is important, the more the story is important, the better a fit the use case is for LLMs, at least in the short or medium term.

Jordan Wilson [00:17:07]:
Yeah. And this is, again, this is one of those, if you're on the podcast, we always appreciate your support, but you gotta come watch the video of this or, you know, make sure to check out the graphic that we'll include in the show notes, because I really think that really does help better understand the framework for using generative AI. Right? And know when it's just okay. Is this just maybe, you know, high fluency? Because, Barak, just because something is highly fluent, right, just because a large language model can spit out a bunch of content, doesn't necessarily mean that that might be the best use case just because it is a use case. So speaking of use cases, let's go ahead and talk here. So now we have some more illustrations here on the same graphic, but talking about different actual use cases. So walk us through here about some of these, you know, specific use cases, and then we'll kind of dive into that.

Barak Turovsky [00:18:03]:
Yeah. So I mentioned it a little bit, but here in this graphic, it's the same 2 by 2, but I just created 2 clouds, so to speak. So, basically, I noticed a very interesting trend. Wait, no, I was actually going back. Yeah. Here. Yeah.

Barak Turovsky [00:18:16]:
So the trend here is that what I called the area of creator and workplace productivity use cases is actually a very good fit for LLMs because, as I say, either there is no right or wrong answer, so accuracy, to some extent, doesn't really matter, or you cannot really measure it objectively. But that's for industries like entertainment, etcetera. Those are very specific industries where I'm not an expert, so I don't want to talk too much about it, even though I partnered with someone and wrote an article about it; to me, it was like a testing ground to understand whether my framework applies. But if you think about some of the yellow bubbles on the visual, and even some of the green bubbles, like writing a business memo or email or creating business presentations, that's to me the crux of the use cases that can start to become useful in the enterprise setting. And then you can focus on the main families of use cases that, in my opinion, are relevant in the short to medium term.

Barak Turovsky [00:19:11]:
One is entertainment, which I mentioned. A perfect use case, even though it has other dynamics. You know, it's a highly litigious industry. There are copyright protections and all kinds of stuff. But technically, it is a perfect use case: no right or wrong answer, story is important. The next one is what I call workforce productivity, and I think on the high level, it's 2 big buckets, but they are kind of interrelated.

Barak Turovsky [00:19:34]:
The first one, I call them roughly customer-facing interactions. What it means is anything where you have frequent interaction with the customer, where story and facts are both important. That's use cases like customer support, technical support, sales, service, etcetera. The other use cases, in many cases in highly technical domains, for example in a networking domain like Cisco's, are related to coding. Coding, by the way, is a very good use case for LLMs because to some extent, it's a language. There is a reason those models are called large language models, but that's human language. Coding is a language that was created artificially, and it's much more structured. So LLMs are very good at dealing with that.

Barak Turovsky [00:20:18]:
And in some use cases, you actually combine those two, because in many cases, you need to run code to resolve some customer problem. Right? So that's a very important use case. And finally, the last one, I call it education slash professional certification. As you probably know, ChatGPT passes bar exams, medical exams, science exams with flying colors. One of the reasons for that is that it's actually very good at choosing from a closed set of answers, because it's very good at reasoning about the slight nuances in questions. That's what those exams usually do. It's not an open-ended question. It's actually choosing from a set of answers.

Barak Turovsky [00:20:57]:
And there are multiple interesting use cases related to professional certification or education that I think will be very relevant to LLMs. So that's on a high level. Again, it's not an exhaustive list of use cases, but that's kind of how I basically look at the framework and then try to translate it into specific families of use cases.

Jordan Wilson [00:21:17]:
Yeah. And like I've said multiple times, you've gotta just be able to take a look at this. So we're gonna include this in our newsletter as well and break it down a little bit more. But, Barak, let's talk a little bit, because I think now any business leader, any decision maker, you know, heard what you just said, and if they weren't already, you know, all on board with using large language models for certain use cases, I'm sure they are now. Right? Like, you can't just listen to someone, you know, with 2 and a half decades of experience break it down this simply and still say, I think my business is gonna pass on this whole large language model thing. Right? So maybe let's talk about, you know, how you actually choose the right model for the right task, because I know there's no, you know, blanket answer for that.

Jordan Wilson [00:22:06]:
But how do you go about it? Right? So you see all these stats and these, you know, McKinsey studies that say, oh, you know, large language models are gonna, you know, automate up to 80% of knowledge work, and, you know, everyone's scrambling to figure it out. How do you find the right model for the right task, and how do you go about making sure that it's actually working for your company?

Barak Turovsky [00:22:27]:
Yeah. So I believe there is also a bit of hype over hype, or maybe, you know, misconception. And, again, it's totally understandable. First of all, a little bit of terminology: there are no small language models here. They're all large. Right? There is, like, large, humongous, and gargantuan, or something of that sort. Right? So those humongous or gargantuan large language models like Gemini or GPT-4 are pretty amazing, and they can cover a wide range of use cases. I mean, to some extent, any use case.

Barak Turovsky [00:22:59]:
Right? But I encourage everyone to understand there are limitations on accuracy. Right? And there are other limitations. One limitation is cost. They are expensive. And, yes, it's very easy and frankly cheap to try it out. But when you start scaling it for a big use case, it becomes pretty expensive. Right? The second one is control, and maybe I mentioned it a little bit on the entertainment side. It's a highly copyright-protected industry.

Barak Turovsky [00:23:20]:
Right? Like, there is huge importance for many enterprises on maintaining control over the unique elements when utilizing AI models. Unique elements could be proprietary training data, like in entertainment. Right? The unique components could be privacy, security, etcetera. And, frankly, I think for many enterprises, it's also important to maintain maximum control over the destiny of the model, because in some cases, you might want to work in a domain that is not a priority for ChatGPT or Gemini, and then you need to wait. Right? And finally, the last one is domain specificity. You basically need to pay more to customize the model, and you would still use a humongous model for a task that may be very specific. Because of those 3 things, I think it's very important to be grounded and realistic and understand: in some cases, a humongous foundational generic model works well. But in some cases, if you have a relatively constrained use case and you want to achieve better accuracy with lower cost, you might actually go and try to create, through what's called fine-tuning, a custom domain-specific model.

Barak Turovsky [00:24:34]:
And the good news about it: until a year ago, obviously, doing it on your own required a lot of talent, a lot of compute. But now some companies like Mistral or Meta have actually created those pretrained large language models that are pretty good for generic knowledge. You can take one and fine-tune it if you have your own proprietary data, and actually create a potentially better combination of cost versus domain specificity, and maybe even more control. And as I said, it's very use case specific. For some use cases, a generic model works. For some use cases, a domain-specific model works better. But it's very important to understand there is no one-size-fits-all here; it's not that generic models work for every use case.

Jordan Wilson [00:25:16]:
Speaking of size and models, this is something I'm curious about. And, you know, it's not often I get to talk to someone with this type of experience. You know, my viewpoint, this is from, you know, an outsider that doesn't know a lot, but, you know, you mentioned, you know, Mistral and Meta's Llama. Right? And it seems like the models that they're coming out with, at least, are getting smaller. Right? You know, we talked about, you know, hey, there's small models, large models, and then these gargantuan models. Yes. Are we gonna see a future where models are actually going to become smaller, and that might actually help drive down costs and drive up, you know, use cases across the business spectrum, just as the fine-tuning process becomes better and compute becomes more affordable?

Jordan Wilson [00:26:05]:
Is that a trend that we're gonna see, smaller models for more specific use cases?

Barak Turovsky [00:26:10]:
Yes. So, first of all, I think, just to repeat, nothing is small. It's all large. Come on. But, yes, if you want to use numbers, the smaller today, or "large," is, like, 7,000,000,000 parameters. Right? The humongous is maybe 50 or 60,000,000,000, and the gargantuan is, like, 500,000,000,000 parameters. Nothing is small. But, again, as I say, a lot of current experience comes from previous experience.
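
For a back-of-the-envelope sense of what those parameter counts mean in practice, here is a sketch of the GPU memory needed just to hold the weights at common precisions. The bytes-per-parameter figures are standard for fp16 and int8; treating 1 GB as 10^9 bytes is a simplification, and the sketch ignores activations, optimizer state, and KV caches.

```python
# Approximate weight storage for the "small" (7B), "humongous" (~60B),
# and "gargantuan" (~500B) model sizes mentioned above.

def weight_memory_gb(n_params: float, bytes_per_param: float) -> float:
    """Approximate weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return n_params * bytes_per_param / 1e9

MODEL_SIZES = {"small": 7e9, "humongous": 60e9, "gargantuan": 500e9}

for name, n in MODEL_SIZES.items():
    fp16 = weight_memory_gb(n, 2)  # 16-bit floats: 2 bytes per parameter
    int8 = weight_memory_gb(n, 1)  # 8-bit quantized: 1 byte per parameter
    print(f"{name:10s} fp16 ~{fp16:6.0f} GB   int8 ~{int8:6.0f} GB")
```

Even at int8, a 500B-parameter model needs hundreds of gigabytes just for weights, which is a large part of why serving cost scales so sharply with model size.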

Barak Turovsky [00:26:35]:
For example, Google Translate. Many years ago, in 2016, we had a model of 6,000,000,000 parameters that was gargantuan at that time. Like, there was nothing like it at the time. And we actually had a problem at that time. You know, cost was not such a big problem for Google, but we had a problem on performance. Our problem was that those models, with that technology and the level of GPUs at that time, were 100x slower than our production system to serve, what's called inference. That actually led Google to do 2 things: a, to develop their own custom hardware, TPUs. That's why Google is so experienced in that space.

Barak Turovsky [00:27:09]:
Believe it or not, when the TPU was developed, the first use case was Google Translate. But the second thing, we invested in technology that artificially shrinks the model. It's called distillation, where we basically, effectively did the trade-off: okay, let's shrink the model, and it will give us better latency. Yes, we will lose some in quality, but it will not be a significant loss.
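
The distillation trade-off described here can be sketched in miniature: the small "student" model is trained to match the large "teacher" model's temperature-softened output distribution. The logit values below are made-up illustrations; only the loss formulation follows the standard technique.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; a higher temperature gives softer targets."""
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy between softened teacher and student distributions.
    Minimizing this pushes the student toward the teacher's behavior."""
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return -sum(ti * math.log(si) for ti, si in zip(t, s))

teacher = [4.0, 1.0, 0.5]  # hypothetical teacher logits for one example
student = [3.0, 1.5, 0.5]  # student is close, but not identical
print(distillation_loss(teacher, student))
```

The loss is smallest when the student exactly reproduces the teacher's softened distribution, which is the sense in which the shrunk model "loses some quality, but not a significant loss."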

Barak Turovsky [00:27:26]:
And then maybe we'll kind of custom train or fine-tune and get back that loss. So I think a lot of the things that we did at that time could now be applied at enormous scale here. As I say, it's all use case specific. If you take this 6,000,000,000-parameter model, a relatively small model just for the sake of argument, and you have sizable proprietary data, and maybe even create a program with high-quality human data, you might actually get to a relatively high quality bar with a fraction of the cost. Right? In some cases, you cannot do that, if you don't have enough data or if you don't have the talent to do it. You might need to use a bigger model. So I think it's a range of possibilities, but I think it's very important to understand it's not only generic models plus RAG, or retrieval-augmented generation, that people think will solve all problems.

Barak Turovsky [00:28:16]:
Frankly, it can make it even more expensive, because now you send a lot of context to this huge model. Right? It's good for NVIDIA, it's good for OpenAI, but it's not necessarily good for enterprises. So it requires you to define the business problem well. Meaning, like, what is the quality bar? What are the margins of your business? Can you afford to use those models? You obviously don't want to use, you know, a nuclear bomb to kill a fly. Right? So it's important to understand and use the wide range of tools at your disposal to serve the use case.
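
The cost point about sending lots of context to a huge model can be made concrete with toy numbers. All prices and token counts below are invented placeholders, not real vendor rates; only the structure of the calculation matters.

```python
# RAG stuffs retrieved documents into every prompt, so a large generic
# model processes far more input tokens per query than a fine-tuned
# domain model with a short prompt would.

def monthly_input_cost(queries_per_month, tokens_per_query, price_per_1k_tokens):
    """Input-token cost for a month of queries."""
    return queries_per_month * tokens_per_query / 1000 * price_per_1k_tokens

QUERIES = 1_000_000  # hypothetical monthly query volume

# Big generic model + RAG: ~4,000 retrieved-context tokens per query
# at a hypothetical premium rate per 1,000 tokens.
rag_cost = monthly_input_cost(QUERIES, 4_000, 0.01)

# Fine-tuned small model: short prompt at a cheaper hypothetical rate.
small_cost = monthly_input_cost(QUERIES, 300, 0.001)

print(f"RAG on generic model:   ${rag_cost:,.0f}/month")
print(f"Fine-tuned small model: ${small_cost:,.0f}/month")
```

The exact numbers are fiction, but the shape of the result is the speaker's point: context-heavy prompting on a giant model multiplies both token volume and per-token price, so a constrained use case can be orders of magnitude cheaper on a small customized model.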

Jordan Wilson [00:28:50]:
Yeah. That's a great analogy there. Right? And I'm just wondering the amount of compute that's gone out the window to, you know, write 50 versions of a haiku or something just for fun. You know? So, you did say something in there, Barak, about data. Right? And that's how we kind of started the show, is how most companies are gonna have to start leveraging their data and bringing their data into large language models. Yet, you know, maybe the tide is turning a little bit, but it did seem, especially in the first year or 2 of this popularization, so to speak, of large language models, that so many people and companies and business leaders were concerned about the data. They go from banning large language models to now, all of a sudden, building their own and fine-tuning them. But in your unique position as the VP of AI at Cisco, one of the largest security companies in the world, what's your thought on companies that maybe are still hesitant to put any of their data into an enterprise-grade, security-vetted large language model?

Barak Turovsky [00:29:58]:
Yeah. So I think it needs to be practical. I think the most important message that I want to bring back to how we started, and I apologize that I'm using the fear factor, is that if you don't do it, somebody will disrupt you. So you need to start with that: you have to do it. Now, there are ways to do it. Right? It doesn't mean you need to just give all your data to OpenAI.

Barak Turovsky [00:30:19]:
That's why sometimes it's worth creating a custom model and investing in some small amount of high-quality talent to develop those models. If you're worried about control, if you're worried about privacy, if you're worried about security, invest in your own custom, domain-specific models. There's also another range of possibilities. For example, if you're worried about the security of OpenAI, you can use an enterprise-grade version of OpenAI through Azure. You can use Gemini through GCP. You can use Anthropic through, you know, AWS. That's the beauty of a cloud-based provider that can provide you a way better level of security and privacy. But, again, it depends on the industry, how important the data is, how proprietary you consider the data.

Barak Turovsky [00:31:00]:
Are you okay if this data will potentially be used to sell to your competitors? Those are all questions that are very important to address. But the good news is that there is a range of possibilities for how to address it, especially for enterprises. So they definitely need to think about it. Right? In my opinion, just saying you'll ban it outright, that's not tenable. If you have a large number of customers or customer interactions, I think you're setting yourself up for being disrupted. Right?

Jordan Wilson [00:31:27]:
Alright. So, Barak, this has been a conversation where my hands are hurting because I've been typing so many notes, because we've talked about your very useful framework on the best ways to leverage AI, or when to use AI for certain situations. We've talked about different use cases, you know, from entertainment to customer-facing interactions. And then we talked about how to find, or when to use, the right model for the right use case. So, as we wrap up, maybe what's your best piece of advice for someone that is out there now? Again, not saying we're going the fear factor route, but maybe someone is now a little scared, saying, you know, we've been hesitant, and I'm scared that some of my competitors are gonna be doing this. What is your one takeaway piece of specific advice for those people in order for them to really leverage generative AI?

Barak Turovsky [00:32:20]:
Yeah. So I think it's: embrace the technology, understand the technology. It's generally your friend. I really like the example of Steve Jobs when he was talking about technology. He basically said, if you run a competition on speed between all types of species on Earth, humans without any technology will be, you know, in, like, the bottom 25th percentile. Yes, faster than, you know, a turtle, but way, way slower than a cheetah. Right? But a human on a bike, a human on a plane, a human in a car will be way faster than a cheetah.

Barak Turovsky [00:32:52]:
So I believe that's a very good analogy for technology, but it requires you to learn new skills. Right? If you don't learn how to drive or ride a bike, you will be left behind. Right? So you need to look at this technology and understand it's your friend, it's your strategic potential to take your business to the next level, but you need to invest in understanding it. You need to invest in upskilling yourself and upskilling your organization to be able to meet this. Yes, it's a very exciting but also frightening challenge, but we need to embrace technology. You know, planes and fire and trains were all scary initially, but we embraced them and learned how to manage them. And that's what we need to do here too.

Jordan Wilson [00:33:32]:
Love to hear it. There's nothing more that I love than ending an episode with a beautiful analogy like that. Wow. This was a good one. So, Barak, thank you so much for joining the Everyday AI Show and sharing all of your insights. We really appreciate your time.

Barak Turovsky [00:33:48]:
Thank you. It was a pleasure.

Jordan Wilson [00:33:50]:
And, hey, as a reminder, everyone, yeah, that was a lot of great insights from an industry leader, someone that's been doing this thing at a high level for decades. So, you know, like I said, maybe you weren't able to watch live as we were going through this diagram of use cases. So don't worry. Make sure, if you haven't already, to go to youreverydayai.com and sign up for our free daily newsletter. We will be recapping everything there, as well as leaving links to more resources that are really gonna help you make sure that you don't get ChatGPT'd by your competition. Thank you for joining us, and we hope to see you back next time for more Everyday AI. Thanks, y'all.
