Ep 294: Why the Future of AI Will Be Built by Non-Technical Domain Experts

Recognizing the Importance of AI and Domain Knowledge

However swiftly we hurtle into a technology-saturated future, the pivotal role of domain knowledge remains constant. This is especially true in AI application development, where applications must be differentiated to be truly effective. It is here that nontechnical domain experts, drawing on extensive real-world knowledge and experience, can build AI products that are far from generic: products tailored to specific needs and contexts.

The Power of Prompt Engineering

Prompt engineering, or tuning the inputs of an AI system, is a skill that does not require the technical expertise of a machine learning engineer. On the contrary, it thrives on communication skills and a feel for how language models respond. Much like learning to ride a bike, prompt engineering takes practice, patience, and a willingness to iterate, traits that nontechnical domain experts are often prized for.

AI and Innovation

As with all spheres of technology, the landscape of AI is evolving. What is especially noteworthy today is that giving instructions to AI applications has become a task squarely in the realm of domain experts. Large language models have dominated the AI conversation, and there is speculation that the field will shift toward smaller models that make better use of the specific data domain experts provide.

Preparing for AI Advancement

As large language models become more conversational and better at understanding context, it becomes imperative to prepare for this shift. Whether for personal use or for building AI applications, the way we prompt and interact with these models will need to adapt. When building AI applications, a modular approach is recommended: one that does not rely too heavily on any single existing model but instead leverages the strengths of several for the best results.

The Role of Computational Thinking

While AI has its strengths, human expertise, especially computational thinking, remains essential. Breaking an issue down into computational steps, then applying logic, math, and code to resolve it, is a skill set necessary for understanding how large language models work.

Navigating The Commoditization Risk

As domain knowledge becomes indispensable to the AI revolution, one possible drawback is the potential commoditization of this knowledge by AI itself. However, an informed approach and investment in skills development can equip nontechnical domain experts to navigate these waters. Involvement in building AI applications and validating outputs only increases their relevance and indispensability in this technological renaissance.

In conclusion, the future of AI relies heavily on contributions from non-technical domain experts. Their unique skills and domain knowledge enhance the development of AI applications, making them more accurate, effective, and user-friendly. These experts, with their deep understanding of the fields they specialize in, play an instrumental role in steering the growth and direction of AI technology.

Despite the many shifts and changes we may see in the future of AI, the integral contribution of nontechnical domain experts will undoubtedly remain constant. Their presence ensures the human touch in a technology-dominated world.

Topics Covered in This Episode

1. Importance of prompt engineering
2. Role of non-technical domain experts
3. Evolving landscape of AI technology
4. Future concerns and preparations

Podcast Transcript

Jordan Wilson [00:00:17]:
A common misconception about prompting and prompt engineering is you have to be a machine learning expert. You have to be an engineer, and that's maybe not necessarily the right take on it. Right? And even when we talk about the future of building AI products, it might not be those machine learning experts per se that are the ones pushing this to the finish line and beyond. It might just be your domain experts, the experts in the field. Alright? I've got hot takes on this one, but I've got a guest, today, and we're gonna be talking about why the future of AI will actually be built by nontechnical domain experts. I'm excited for today's conversation. And, hey, this is it. Welcome to Everyday AI.

Jordan Wilson [00:01:02]:
What's going on y'all? My name is Jordan, and I am the host of Everyday AI, and this is for you. We are your daily livestream podcast and free daily newsletter helping everyday people learn and leverage generative AI to grow their companies and to grow their careers. So if that sounds like you, thank you. Thank you for tuning in. If you're on the podcast, as always, check out your show notes. We'll have a lot more information on today's show as well as a link so you can go read our newsletter. Alright. So one of the things we do every day is we recap the AI news.

Jordan Wilson [00:01:29]:
So if you haven't already, please go to youreverydayai.com and sign up for that free daily newsletter. Alright. So let's get started with what's happening in the world of AI news. So first and foremost, we have Pope Francis advocating for ethical AI at the G7 summit in Italy. So Pope Francis addressed G7 leaders at their annual gathering in southern Italy, focusing on the need for stronger guardrails on AI to ensure ethical development and use. The Argentinian pope emphasized the importance of AI being more aligned with human values like compassion, mercy, morality, and forgiveness to prevent unchecked risks. So Francis called for an international treaty to regulate AI development, echoing worries about AI safety, potential bioweapons creation, and disinformation spreading. Alright.

Jordan Wilson [00:02:26]:
Next. Well, OpenAI has added a pretty big name to its board of directors. So retired general Paul Nakasone, the former head of the NSA, was appointed to OpenAI's board of directors. So Nakasone is to join OpenAI's safety and security committee, also focusing on AI's role in cybersecurity. OpenAI is obviously aiming to strengthen its cybersecurity profile through AI to detect and respond to threats quickly. His expertise will likely guide OpenAI in ensuring AI benefits all of humanity. That is their mission. Right? So OpenAI's board now includes Nakasone, CEO Sam Altman, and other notable figures in the tech industry.

Jordan Wilson [00:03:11]:
Last but not least, Microsoft is shipping its new Copilot Plus PCs next week without a key AI feature. So Microsoft is delaying the release of their new Recall feature after receiving way too much backlash from privacy advocates and security experts. So Recall, which essentially takes nonstop screenshots of a user's actions on their computer and then allows them to tap into that, well, it's not gonna be available initially to Windows Insiders or buyers of Copilot Plus PCs. So Recall reportedly was developed in secret and not tested publicly with Windows Insiders, according to reports. This move comes after Microsoft president Brad Smith testified in front of the US Congress Thursday over AI and security concerns. So, in that testimony, Brad Smith said that Microsoft is committing to adopting all recommendations made by the Cyber Safety Review Board, investing in cybersecurity initiatives, adding more security engineers to the team, and ensuring that security is a top priority for all aspects of the company. Wow. Apparently, every single piece of AI news today is about security.

Jordan Wilson [00:04:23]:
Alright. So, there's always more, so make sure to go to youreverydayai.com. Sign up for our free daily newsletter where we will be recapping all of that news and more. But today, we're here to talk about the future of AI and how it'll be built by nontechnical domain experts. I can't wait for this one. Please help me welcome to the show. There we go. Our guest for today is Jared Zoneraich, who is the founder of PromptLayer.

Jordan Wilson [00:04:48]:
Jared, thank you for joining the Everyday AI Show.

Jared Zoneraich [00:04:51]:
Thank you for having me. Excited for it.

Jordan Wilson [00:04:54]:
Alright. Yeah. This is a good one. I'm excited about this. But, before we dive in, Jared, just tell us a little bit about yourself and PromptLayer.

Jared Zoneraich [00:05:02]:
Yeah. Totally. So PromptLayer, we're a platform for prompt engineering. And, maybe we could talk about what prompt engineering means later, but, basically, we are helping teams build real AI applications with domain knowledge, and we're a small we're a small company. We're based in New York, about, like, 5 or 6 people now. And, yeah, it's a it's a fun time.

Jordan Wilson [00:05:26]:
Yeah. Love to see it. Hey. Prompt engineering. Can't wait. This is one of my favorite topics. And, hey, to our livestream audience joining us, Woozy and Brian and, someone, from Florida and Tara. Thank you.

Jordan Wilson [00:05:38]:
But, yeah, please get your questions in for Jared. What do you want to know about the future of AI prompt engineering? It's Friday. Sometimes we get a little wild on Friday, so get your questions in. But let's just talk about that, Jared. What the heck is prompt engineering? I say it is both fake and the most real thing ever. But, I mean, what is prompt engineering for those that maybe aren't, aware? Yeah.

Jared Zoneraich [00:05:57]:
I'll give you my definition. Tell me tell me if you agree on this one, Jordan. But, basically, how I define it is there when you're building an AI system, you have a lot of inputs that go in and you have an output. That's all I define it as, tuning what inputs are to get the output. So that includes the prompt, which is probably the biggest part of this, but it also includes what model are you running, what, hyperparameters are you sending to the model, what order are you sending it in, all that fun stuff. So basically, it's just what do you send to the AI? You're engineering those inputs input engineering.
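Jared's definition, engineering every input that goes to the model rather than just the prompt text, can be sketched in a few lines of Python. The field names, model names, and parameter values below are illustrative stand-ins, not any particular vendor's API.

```python
# A minimal sketch of "input engineering": the same question, packaged as
# two different input bundles (prompt, model choice, hyperparameters, order).

def build_request(prompt, model, temperature=0.7, system=None):
    """Bundle everything sent to the model. Prompt engineering is
    tuning this whole dict, not just the prompt string."""
    messages = []
    if system:
        # The system message, if any, is sent first; ordering is itself an input.
        messages.append({"role": "system", "content": system})
    messages.append({"role": "user", "content": prompt})
    return {"model": model, "temperature": temperature, "messages": messages}

# Two candidate input bundles for the same task:
a = build_request("Summarize this contract clause.",
                  model="small-model", temperature=0.0)
b = build_request("Summarize this contract clause.",
                  model="large-model", temperature=0.3,
                  system="You are a careful legal assistant.")
```

Comparing the outputs these two bundles produce, and iterating on them, is the day-to-day work Jared is describing.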

Jordan Wilson [00:06:36]:
Yeah. I I love that. And, you know, Jared, kinda like my my take on it and, I I think the easiest way to describe it is, like, Midjourney. Right? So if you look at the the early days of Midjourney and AI image platform, you really had to speak to it almost in code to get anything out of it. Right? And now you can speak to it like a human. Right? So to me, prompt engineering is is almost the the conversation that the end user has with an AI platform to get the most out of it. And that that looks very different, right, depending on what platform. But, you know, I'm I'm curious, though.

Jordan Wilson [00:07:04]:
Let's let's just get to the bulk or or or the, the crux of this conversation about this concept of actually nontechnical domain experts being the key in the future. Jared, what's what's your take on that and what the heck does that even mean?

Jared Zoneraich [00:07:19]:
Totally. Totally. Totally. So I so I think at the core of this point that, that I guess we're gonna talk about here is is the question of how do you build an AI product that succeeds, and how do you build an AI product that differentiates? Where where in the world, you know, there's a there's a common, derogatory word people call start up stuff, ChatGPT wrapper. And the the the big question is how how do you build an AI application? How do you build the AI product that is different than a ChatGPT wrapper? And my my take on it at least is that domain knowledge is the way you do it, domain expertise. So, I maybe an example is the best way to illustrate this. If you're building an AI like, a legal application, a legal AI app, the the engineers building the application I'm an engineer. If I'm building a legal AI, I don't know if the contract it's spitting out is correct.

Jared Zoneraich [00:08:15]:
I don't know what the correct answer to a question about a contract is. I don't I don't understand anything about the law. And, the what this means is you have to have someone who does understand on your team. You do need these we call them domain experts. You could call them subject matter experts. You could call them whatever you want. But, in my opinion, these are the people who are gonna be behind AI applications in the future. All this little, like, prompt tuning and, as you were saying, this Midjourney language you have to speak to the model in, that I think is all approaching 0.

Jared Zoneraich [00:08:45]:
That's all going away. It's getting easier and easier to build these systems. And the one thing that is gonna remain is how do you actually impart the the task to the AI? How do you tell your AI application what you do? Like, what, I think there's there's another there's another tangential concept here, which is, like, is prompt engineering going away totally, and, like, how are you gonna build these applications when it when it does? And I think one core part that is related here is that at the end at the end of the day, okay. So you're building a travel assistant AI, and you say, book me a flight from New York to Chicago. There's a lot of different correct answers to that. There's a red eye, there's a 3-leg flight, and at the end of the day, you always need to define the right task. They call it a loss function in ML, call it whatever you want, but someone needs to define the task and that's where we go back to the domain domain expert. Someone has to define how you're actually solving the problem.

Jared Zoneraich [00:09:50]:
And, yeah, that hopefully that makes sense.
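Jared's point that someone has to define the task can be made concrete with a toy scoring function for his flight example. The fields, weights, and preferences below are invented for illustration; the point is that the domain expert, not the model, encodes what counts as the right answer.

```python
# Many flights are "valid" answers; the domain expert's preferences
# decide which one is *correct*. A lower score means a better match.

def score_flight(flight, prefer_nonstop=True, max_price=500):
    score = flight["price"]
    # Layovers are heavily penalized if the traveler prefers nonstop.
    score += flight["stops"] * (200 if prefer_nonstop else 50)
    if flight["price"] > max_price:
        score += 1000  # effectively rule out over-budget flights
    return score

flights = [
    {"id": "red-eye", "price": 120, "stops": 0},
    {"id": "three-leg", "price": 90, "stops": 2},
    {"id": "direct-noon", "price": 260, "stops": 0},
]
best = min(flights, key=score_flight)
```

Change the weights and a different flight wins, which is exactly the "define the right task" problem: the cheapest option is not the best one once the expert's preferences are encoded.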

Jordan Wilson [00:09:54]:
No. Yeah. And and I love it, and I'm I'm just gonna spit out some hot takes here and and get your thoughts, Jared. So, you know, I feel with with large language models, you know, especially in 2024, there's been this emphasis on RAG. Right? You know, kind of bringing in your your own data. And and and I don't know. I think the future of large language models are gonna be small models. I think it's gonna be, you know, maybe where now someone's relying on, you know, 90% of a, you know, trillion parameter large language model with 10% of, you know, their domain expert data.

Jordan Wilson [00:10:29]:
I think in the future, it's gonna be the opposite. I think it's gonna be 90% of their own, kind of first party company data is what I call it with 10% of the large language model. Is that wild to think or are domain experts and the knowledge that that they have just too important for that not to be the future? Hey. This is Jordan, the host of Everyday AI. I've spent more than a thousand hours inside ChatGPT, and I'm sharing all of my secrets in our free Prime Prompt Polish ChatGPT course that's only available to loyal listeners like you. Here's what Lindy, who works as an educational consultant, said about the PPP course.

AI [00:11:13]:
I couldn't figure out why I wasn't getting the results from ChatGPT that I needed and wanted. And after taking the PPP course, I now realized that I was not priming correctly. So I will be heading back into ChatGPT right now to practice my priming, prompting, and polishing.

Jordan Wilson [00:11:32]:
Everyone's prompting wrong, and the PPP course fixes that. If you want access, go to podpp.com. Again, that's podpp.com. Sign up for the free course and start putting ChatGPT to work for you.

Jared Zoneraich [00:11:50]:
So I don't I don't think they're mutually exclusive. I, I guess I think for well, I'll start with saying, first, I think this take of everyone's gonna be using small models, 50-50. I could totally see it. I could also see a world where, you know, OpenAI, Anthropic, Google, all the all the big models are just so good that you kinda just use them for most of your things and you tweak do these last mile tweaks, which we're doing today, it's all last mile tweaks. It's prompt engineering, it's fine tuning, it's RAG. I I I guess how it relates to domain experts and how how that'll exist in these world in this world, if we go down the route of everyone's using small models, maybe they're using Llama, maybe who knows what they're using. But I don't think it changes anything because, like, if we go back to the definition, prompt engineering, it's also choosing which model to use. And there's at the end of the day at the end of the day, you're getting an output from an AI and someone needs to know if the output's correct or not.

Jared Zoneraich [00:12:54]:
And there's a lot of different options you get especially if and I think this is the most powerful thing about language models is that you can solve problems that don't have a ground truth solution, for example, conversations. And someone needs to someone needs to tailor that voice. Someone needs to tailor what it is and choosing small models, maybe using small models in conjunction with big models. I think it's almost not relevant to the argument of who is gonna be building these systems.

Jordan Wilson [00:13:26]:
I've I've more on that, but I wanna get to this question here from, Yogesh, former guest. Thanks, Yogesh, for the question, asking, do you see prompt engineering, becoming a core skill set that everyone needs, or is it going to be specialized experts who offer their knowledge to others? Love this one. What's what's the answer there, Jared?

Jared Zoneraich [00:13:46]:
Yeah. Yeah. I like this a lot too. So, there's a term I read a long time ago. It might have been Stephen Wolfram who said it first, called computational thinking. And that's what I think is the real skill set, here. And I think that's a skill set. I think that's already almost a core skill set in most knowledge work people do today, which is just kind of, can you think algorithmically? Can you think in terms of, like it might be.

Jared Zoneraich [00:14:15]:
You could call it logic. There's a lot there's a lot of ways to describe this, but, can you can you reason about how to do something in a computational engine from a computer, from a language model, something like that, and can you build this algorithm? So having said that, yeah. I that I think is a core skill set. I think the skill set of prompt engineering is kind of just a mix of communication and, communication and this computational thinking. So, yeah, you'll probably need it. I think the this in your question, specialized experts versus prompt engineering, I think that's one and the same. I think the specialized experts are gonna be the ones who need to understand how these systems work because they're gonna be using a platform and I I see it as simple as in the future, and and we wanna build PromptLayer into this, like, a platform where you should only like, the only relevant parts for building an AI system should be this specialized expert coming in and just telling the AI what it needs to know. That's it.

Jared Zoneraich [00:15:16]:
I mean, all this other stuff we're doing, fine tuning, whatever, this is all could be made easier in the future and can be like, as I said, like approach 0 in terms of complexity. The real irreducible part of this is the specialized knowledge and how you impart that to a system.

Jordan Wilson [00:15:33]:
You know, one thing one thing I I I talked about with someone recently is someone who is kind of kind of scared of of AI. Right? They understood how large language models were useful, but they looked at their own job and their own role, and they said, well, this seems like a a large language model could do the majority of my role. You you know, and I kinda said, well, hey. You have the data. Right? It is it is your decision making. It is your domain expertise that ultimately matters the most. So, Jared, like, I'm curious for because I'm sure there's a lot of people, especially people listening or or watching the show today who might feel that way. Right? Who might feel, hey.

Jordan Wilson [00:16:15]:
I'm a subject matter expert, and, you know, if what's you know, is is my knowledge going to become commoditized? So for those people who are nontechnical domain experts, where should they be investing their time in in terms of learning new skills and and how it how to find, ways to still push their companies or departments forward?

Jared Zoneraich [00:16:38]:
Yeah. Yeah. No. That's a great question. And kind of like tangentially, who who are the big beneficiaries of the AI revolution, and how do you become one of them? I I I think these type of technologies, and these type of let's call it, like, technological revolutions, they they raise, they raise the tide of productivity to a new level for certain people. So, like, and skill. So, like, a calculator lets you do math at a higher level. Photoshop lets you do art at a higher level.

Jared Zoneraich [00:17:10]:
Cars let you move at a higher level. I think LLMs let you do conversation and knowledge sharing at a much higher level. And who's gonna be who's gonna be the maestros of this, and who's gonna who's gonna take advantage of it? And like you're saying, yeah. Say say you, are realizing 80% of your job could be automated. Kinda scary, but that means also someone in that field is gonna be, not it's not 80% more productive, but whatever that, whatever that fraction is, I can't think right now. But, what they're they're gonna be an order of magnitude more productive, and I think the skill set you need is just what I'm like, communication and, and this algorithmic thinking. So use ChatGPT. And if you're good at it, start using it more.

Jared Zoneraich [00:17:58]:
And, check out the GPTs if you're not technical. I think that's I really believe like, I think that's a direction we're gonna be moving in the future. I don't know if OpenAI's implementation is the right thing right now. Not it doesn't seem like it really picked up, but I have certain GPTs, as they're called, for, like, writing docs and stuff like that. And, just get a feel for the technology is probably the best thing to do.

Jordan Wilson [00:18:24]:
Yeah. You you know, speaking of, you know, nontechnical domain experts, right, which I think is so many people. You you know? Where or, you know, how can companies kind of leverage these people, you know, for companies that are investing in their own, you know, whether their own fine-tuned large language models, whether it's, you know, software companies and they're investing in their own products and services around AI, how can these nontechnical domain experts, help set those products apart?

Jared Zoneraich [00:19:00]:
Totally. Totally. That's a great question. So I think we let's, I think we can look at let's look at, like, the AI applications that exist today and maybe talk through those and that that that should elaborate here. So there's, like, a group of AI applications that are that I use to code, for example. So, like, Cursor is a great one. Obviously, Copilot. All these, like, AI tools that just make, like, software engineering an order of magnitude better.

Jared Zoneraich [00:19:30]:
These, I think, are a confusing example to look at because the people who made them are the experts, because they're made by engineers for engineers. So so let's ignore all those for now. And the other subset of AI applications are kind of like the Harveys of the world. Hebbia, for example, for financial for financial companies or Character AI for, like, personas, all of these applications rely on some, like, bringing AI to a specific vertical. And, I think if you're building an AI product today, especially one not in software, in software it's a little bit easier because everybody on the team knows what it's supposed to do. But if you're building an AI application in a different vertical, there's 2 ways to do it. And one way is to rely on engineers and kind of, like, do the best you can. But the way to really build a good product and stand out is to actually have the domain experts in charge of the project.

Jared Zoneraich [00:20:30]:
So I can give some more examples. So there's this company, ParentLab. Very cool company. We've worked with them for a while. They are a coaching app, like a parent parental coaching app, and they basically the person who's in charge of the AI product or I don't know in charge, but in charge of prompt engineering the AI product is, an educator. She actually never touched code in her life. She was a teacher for 16 years, did real estate, but she understands, like, communication, and she understands how to talk to people. And it's incredibly important for them to get yeah. This is it.

Jared Zoneraich [00:21:12]:
It's incredibly important for them to get the voice right of the product and for the product to make an actual connection with these parents. So that's a that's that's a big example of, like, if you if you wanna build a really good product, and this is a very unique product, you you need to have someone you need to bring in that domain knowledge as your competitive edge. And there's so many other examples. There's, Gorgias is another great example. It's a, it's like it's the number one help desk for Shopify for Shopify stores. And they wanna make customer service really good, and they're trying to automate a lot of it and save a lot of money for merchants. And they're building this AI product and the people so at that company, we work with both the ML team and the prompt engineers. And the prompt engineers are largely customer support specialists who are then coming in and saying, hey. This responded in the wrong way.

Jared Zoneraich [00:22:07]:
This promised too much. We don't want it to do this much. And, yeah, they're really building, like, the cutting edge, customer support, chatbot. And the way they're doing this is by leveraging these people who actually know what the responses should be as opposed to people who know how to use Git version control. You know?

Jordan Wilson [00:22:29]:
You know, Jared, I'm I'm curious, you know, getting getting more to this concept of, you know, just prompt engineering and those people who because I don't know ever. Right? I'm not like I haven't been around for forever, but I've been working, you know, full time since I was a teenager. So, you know, more than 20 years. And I can't remember any other time at least in my, kind of working history that there was such it seemed like such a rush to adapt to a new technology. Right? It wasn't really like this with the Internet. It wasn't really like this with cloud. It wasn't like this with mobile, but with specifically generative AI and large language models, I don't think there's ever been such a disparity between kind of the the haves and the have nots in terms of the skill set. But but for those people who are still nontechnical, right, when we talked about prompt engineering, I think it's just this scary term.

Jordan Wilson [00:23:23]:
So let's just break it down. How can a nontechnical person, be good at prompt engineering? Like, what makes a good prompt engineer?

Jared Zoneraich [00:23:32]:
Yeah. Yeah. It's, it it is. It's it is very interesting. It's it's creating a new world of, like, there's, like, a gap of technology where, like, innovation has happened so fast, and now there's there's a lot to do to catch up to it. But, how to catch up, how to be a good prompt engineer? A lot of it yeah. A lot of it boils down to this communication skill of are you good at articulating ideas in words? Are you good at articulate like, understanding it's it's a language large language models are a tool for language, for processing language and outputting language. And that's a big skill here.

Jared Zoneraich [00:24:12]:
And, honestly, familiarity with how it works. I think it's it's really, like it's closer to the skill of hackers and makers and that sort of thing as opposed to just raw engineers. A lot of times, like, people who are good at prompt engineering, you'll see it split in an engineering team. Half the people are good at it, half aren't, and it is a completely different skill set. And to get good at it, I'm not sure the best way to get good at it, honestly. I think, I think find out if you're, like, decent at it and then just start doing it a lot. But to go from 0 to 1, honestly, I'm I'm not sure. I'm not sure.

Jared Zoneraich [00:24:51]:
Just maybe maybe using ChatGPT more and more.

Jordan Wilson [00:24:55]:
Yeah. Yeah. I agree. It's one of those things. It's like you can read and watch YouTube videos about riding a bike and, you know, talk to people who are experts at riding a bike, but until you do it, right, and until you start to fall a little bit and see why you fall, where you fell, etcetera. But yeah, I just think it's it's it's being curious and it's having conversations and it's, it's iterative work. Right? Because I think people when they think of just the word prompt, they think of, like, a long, long prompt. Right? And they say, oh, I it has to be very formal and it has to be structured and in JSON or or YAML, etcetera.

Jordan Wilson [00:25:29]:
And it's like, no. It's have a conversation with a model. Right? You know, especially when we see, you know, people like, you know, Sam Altman saying, you know, hey. The future of large language models are gonna be much more conversational. They're going to be much better at at picking up on, kind of like in like human intuition. With that in mind, you know, how how would you suggest people to, suggest to people to prepare for, you know, large language models to come? I mean, is it is it best to just, you know, go in and just keep prompting and until you fail? I mean, what's the best way to improve on those skills and learn to ride that bike?

Jared Zoneraich [00:26:08]:
Yeah. Totally. And I think, I think there's 2 ways to answer this question. There's the, there's are you how are you doing the prompting? Are you building an application? Are you building an AI application and you're doing prompting to build this application? Or are you just doing it for yourself? So may I start with doing it for yourself? That's a simpler question. Kinda just it's it's not up to you. You know? The models that come out are gonna come out. But in general, all the I guess when ChatGPT first came out, there was a lot of buzz of, oh, if you're nicer to the model, you'll get a better answer because Stack Overflow questions that are nicer got better answers or something like that. That was probably true, but it's I doubt it's even true today, and a lot of these tips and tricks, not even worth learning too heavily because they're gonna go away exactly like you like you said earlier.

Jared Zoneraich [00:27:02]:
Conversational is the way to go. Models are really good at fixing problems. So then I guess, like, the second way to answer it is how do you prepare for new models if you're building an AI application and how do you not index too heavily on what exists today because you wanna build these things in a model-agnostic way. GPT-4o came out. People had to change their stuff. A lot of our customers actually had to change it back from GPT-4o because it wasn't working for their stuff. So the way I I think and we call it we actually call this concept prompt routers, which is build things in a modular way. And instead of relying on one prompt to do everything and relying on this AGI-like prompt that is just an autonomous agent running in the background, build a build a flowchart, build a state machine where each prompt is doing something specific that you can test modularly, then it's super easy.

Jared Zoneraich [00:27:53]:
Swap out the model, swap out the new model, see if it's faster, see if it's cheaper, see if it works better, and you could build these unit tests. And I I think, like, rigor rigor is the answer for the second group, in my opinion.
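The prompt-router idea Jared describes could be sketched like this: each step is a small, separately testable unit, and swapping models is a one-line config change. The step names, model names, and the call_llm stub below are all hypothetical, not PromptLayer's actual API.

```python
# A stand-in for a real model call; it returns a canned string so the
# routing logic itself can be unit-tested offline, with no API access.
def call_llm(model, prompt):
    return f"[{model}] {prompt}"

# Each step in the flowchart has its own model and prompt template.
STEPS = {
    "classify":  {"model": "fast-model", "template": "Classify this request: {q}"},
    "summarize": {"model": "big-model",  "template": "Summarize: {q}"},
}

def route(step, question):
    """Send the question through one specific, modular step."""
    cfg = STEPS[step]
    return call_llm(cfg["model"], cfg["template"].format(q=question))

# Swapping in a new model is a config change, and each step can then be
# re-tested on its own: is it faster, cheaper, still correct?
STEPS["summarize"]["model"] = "new-cheaper-model"
out = route("summarize", "the meeting notes")
```

Because each step is isolated, the unit tests Jared mentions can assert on one step at a time instead of on the behavior of one giant end-to-end prompt.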

Jordan Wilson [00:28:06]:
Yeah. A a a great question here, following up from earlier from Cecilia, but I think it's, important to talk about. So she's asking and saying, can you give a more specific example of computational thinking and what is the best way to develop it? Yeah. That's that's that's a great question here. But, yeah, what's what is that, Jared? What is a good example of that?

Jared Zoneraich [00:28:27]:
Yes, totally. For a good example of computational thinking, let's go back to that example I gave earlier: booking a flight between New York and Chicago. When you hear that question, the way to think about it computationally is to ask, what are the exact steps I need to solve? Funny enough, this is how people say you should be prompting with chain of thought — so it's kind of chain of thought for humans. The first step is asking, what are my requirements? What does this person need? What are the options? In other words, computational thinking is how you break down a problem into discrete computational steps. And the best way to develop this skill is logic, math, coding. Coding is honestly probably the best way to learn how to think computationally.

Jared Zoneraich [00:29:24]:
It's not necessary, but the whole concept of coding is doing a problem in discrete computational steps.
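The flight-booking decomposition Jared walks through can be written down literally as a list of discrete steps. The particular steps and function name here are illustrative — the point is the decomposition itself, not any specific code:

```python
# "Chain of thought for humans": break one fuzzy task into discrete,
# checkable steps. The step list below is an assumed example, not a
# complete booking workflow.

def plan_flight_booking(origin: str, destination: str) -> list[str]:
    return [
        f"1. Gather requirements (dates, budget, airports near {origin} and {destination})",
        "2. Enumerate flight options that satisfy those requirements",
        "3. Rank the options (price, duration, layovers)",
        "4. Select the top option and confirm the details",
        "5. Book it and verify the confirmation",
    ]

plan = plan_flight_booking("New York", "Chicago")
for step in plan:
    print(step)
```

Each numbered line could also become its own prompt in the modular pipeline discussed earlier, which is why computational thinking and prompt design reinforce each other.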

Jordan Wilson [00:29:38]:
Yeah, that's great — even just thinking chain of thought, right? I think a lot of times, when it comes to getting the most out of a model, it's just thinking of those important steps. Sometimes you have to give an outline, or work backwards from the solution you think you need, and kind of provide those stepping stones to help a model get there, and have a conversation. So Jared, we've covered a lot today. We've talked about the future of AI models and how nontechnical subject matter experts are the key. We've talked about how the tech revolution raises the tide of productivity.

Jordan Wilson [00:30:17]:
We've covered a lot, but as we wrap up, what is your one most important takeaway for people to best understand how the future of AI is going to be built by nontechnical domain experts?

Jared Zoneraich [00:30:34]:
Putting me on the spot with one important takeaway. I guess the one thing I think people should understand is that the differentiator of good AI applications is bringing experts into the loop who actually know whether the outputs are correct or not. And obviously, I'd say that — it's what we do at PromptLayer. There are alternate takes on it, and plenty of products that do disagree with me. But yeah, that's what I'd say: bring experts into the loop, because you need to build an AI application system with people who know if the outputs are correct or not.

Jordan Wilson [00:31:18]:
Yeah, that's huge. There's still always room for smart humans in the loop. So if you're a nontechnical domain expert like so many of us, don't worry — we still need you. Alright, we covered a lot today, and we're gonna be recapping it as always in our daily newsletter.

Jordan Wilson [00:31:36]:
Jared, thank you so much for joining the Everyday AI Show. We really appreciate your time and insights.

Jared Zoneraich [00:31:42]:
Thank you for having me, Jordan. This is fun.

Jordan Wilson [00:31:44]:
Alright, and hey, as a reminder, we covered a lot. If this was helpful and you're listening on the podcast, please drop us a review and subscribe. If you're listening on LinkedIn, thanks y'all — tag someone who needs to hear this, and repost it if it was valuable. But most importantly, go to youreverydayai.com. We're gonna be recapping today's conversation as well as going over what you need to know to keep up and stay ahead in the world of AI.

Jordan Wilson [00:32:09]:
Thank you for tuning in, and we hope to see you back for more Everyday AI. Thanks y'all.

Gain Extra Insights With Our Newsletter

Sign up for our newsletter to get more in-depth content on AI