Ep 242: AI Tools to Supercharge Research

Harnessing the Power of AI: Elevating Research for Business Growth

The landscape of scientific and business research has been transformed by artificial intelligence. By aggregating and distilling scientific information into easily comprehensible summaries, AI has made research more accessible than ever before. Advanced AI tools, including ChatGPT, are becoming indispensable allies in the search for insights, saving valuable time and making knowledge readily available.

However, with this democratization come concerns about the reliability and integrity of findings. The issues of research fraud, the potential misinterpretation of existing information, and the challenges of reproducing and validating AI-generated results cannot be overlooked. Leveraging AI thus requires striking a balance between the accessibility of research and the maintenance of its integrity and reliability.

AI and the Challenges of Conducting Research

The use of AI in research comes with its own unique set of obstacles. When the stakes are high, especially in business decision-making, the importance of assessing the reliability of AI tools cannot be stressed enough. Using AI to streamline business operations and decisions is invaluable, but caution must be exercised to avoid pitfalls.

AI tools like SciSpace, Elicit, and SciWriter can be robust aids for obtaining reliable answers and sourcing information. To maximize their utility, however, understanding their functionality and potential limitations is fundamental. It's advisable to begin with a few AI tools relevant to specific research needs and pain points, and to build proficiency gradually.

The Role of AI in Writing and Summarizing

In the realm of business, the significance of clear and effective communication cannot be overstated. Here, too, AI offers an efficient and convenient solution. Tools like SciWriter help manage the writing process and overcome the usual challenges of producing compelling, comprehensive reports.

One must also remember the importance of attributing original sources in the AI-driven sphere of writing and research. Verifying sources with tools like Consensus, Scholar AI, and Perplexity is pivotal for maintaining credibility and ensuring the integrity of the information presented.

The Future of AI in Research

AI shows promise for a future where curated content is valued, and the demand for reliable, verifiable information in the market is met efficiently. Harnessing the power of AI in the business context could lead to exponential growth by empowering decision-makers with actionable insights.

But as we move into this AI-centric era, it remains essential to remember that while AI is a mighty tool, human intervention for interpreting, validating, and contextualizing findings is irreplaceable. Researchers, business owners, and decision-makers all share a collective responsibility to uphold research integrity.

In conclusion, the growing reliance on AI for business growth brings with it a unique mix of opportunities and challenges. Being informed and judicious in our approach to these AI resources will enable us to reap the benefits without compromise. Thus, the future of research lies not in AI alone, but in an intelligent blend of AI and human intellect.

Topics Covered in This Episode

1. Importance of Research Accessibility and Transparency
2. Utilizing AI Tools in Research and Writing
3. Integrity and Reliability of AI-Generated Content
4. Responsible Use of AI and Source Attribution
5. Problems of AI-Generated Content in Research


Podcast Transcript

Jordan Wilson [00:00:17]:
How is AI being used in academic and scientific research? And with AI, can we all be researchers? Should we be? And what are the AI tools that we should all be using that can really help supercharge that journey? Alright. We're gonna be talking about that today and more on everyday AI. Welcome. Thanks for joining. My name's Jordan Wilson. I'm the host of everyday AI. We're a daily live stream, podcast, and free daily newsletter helping everyday people like you and me not just learn generative AI, but how we can all actually leverage it to grow our companies and to grow our careers. So if you're joining us on the podcast, thank you.

Jordan Wilson [00:00:57]:
You know, as always make sure to check your show notes and go to youreverydayai.com. We'll have a recap of today's show in our newsletter that goes out a couple hours after our livestream here. And if you are joining us on the livestream, like Tara joining us from Nashville or Brian joining us from Minnesota, let me know. What questions do you have about academic research? Alright. But before we get into that, let's first do as we do every single day. Go over what's going on in the world of AI news. Alright. So Replit has introduced CodeRepair, an AI coding assistant powered by real time coding data.

Jordan Wilson [00:01:34]:
So Replit has just unveiled CodeRepair, the world's 1st low latency program repair AI agent. So this AI agent is informed by Replit's unique data developer intuition and is designed to automatically fix code in the background. So, the program uses real world use cases to enhance its ability to repair code efficiently. And with code repair, developers can save time and improve productivity by having their code automatically fixed without manual intervention. Pretty big news from Replit. If you haven't heard of Replit, even for non developers like myself, I always use Replit to just go try things and to deploy them. So, it should be essentially, I put a lot of bad code into Replit, so I'm excited to see, how this new coding assistant can can fix it. Alright.

Jordan Wilson [00:02:18]:
Our next piece of AI news, Stability AI has released a new audio tool called Stable Audio 2.0. So with Stable Audio 2.0, it's a new, update from Stability AI that allows users to generate music tracks up to 3 minutes long at a higher, quality as well just based on AI prompts. So put in text, get a up to 3 minute song. Pretty cool. Right? So it also has a feature that allows users to manipulate any audio sample using text based prompts. So, the tool has a content recognition filter to ensure compliance with copyright laws as well. The company, if you listen to the show, it's faced some controversy recently with its previous AI models training on copyrighted material, leading to the resignation of the company's VP of audio, and I believe their their CEO just, just left as well last week. Speaking of that, our last piece of AI news for the day, musicians are banding together to fight that very thing, to fight the potential impact of AI on the music industry.

Jordan Wilson [00:03:20]:
So a group of over 200 musicians have signed an open letter calling for protections against the use of artificial intelligence to mimic human artists' voices and their likenesses in the music industry. Some notable names, to sign this kind of open letter include Nicki Minaj, Stevie Wonder, Billie Eilish, and a lot others. So the letter demands that the technology companies pledge to not develop AI tools that undermine or replace songwriters and artists. Seems like that's already, well well ahead. Right? It seems like that's already happening. But they're wanting more responsible use of AI technology that could benefit the industry, but concerns over copyright infringement and labor rights remain. And, obviously, with all these new tools, there's been an increased debate over the use of artist likeness after their death as well with AI tools raising ethical debates. Well, those debates aren't gonna go away, but we'll always be talking about them, here on Everyday AI.

Jordan Wilson [00:04:17]:
But you didn't tune in today to talk about music, you tuned in today to learn about how AI is really changing, just just the the game of of research. Right? Because it's it's ever evolving and, you know, you don't don't worry. You don't have to listen to me blab on about it. We actually have an expert, today to come on and help us understand this a little better. So please help me welcome to the show Avi Staiman, the founder of SciWriter AI. Avi, thank you for joining the Everyday AI Show.

Avi Staiman [00:04:45]:
Thanks so much, Jordan. Great to be back second time. So hopefully that means the first one wasn't too bad. And, looking at those pictures at the introduction, I'm looking for an AI tool which can make us look like we did 10 years ago when we took those photos. If you know of it if you come across anything, let me know.

Jordan Wilson [00:05:00]:
Yeah. Gosh. It's that's a good point. Right? Like, should I just be should should I have, like, an AI filter that just makes me look 10 years younger, on on live video at all times? Maybe. But but, Avi, real quick, give us give us an an overview of kind of what you do, at SciWriter AI.

Avi Staiman [00:05:16]:
Yeah. That's a great question. Thanks so much. So, SciWriter AI is an attempt to make the writing process for academic researchers, much more simple, straightforward, and streamlined. So, imagine your typical researcher, you may know some that, you know, likes doing science, likes spending time in the lab, you know, but may not love actually writing up their results and having to communicate and document them. That's super critical, for the act for the scientific literature and for the academic record that there's actually a recording, and documentation of the results of their study. So and but but no one is, you know, especially when it comes to the sciences, no one's looking at it as a, you know, literature prize for writing. So when I'm asking the question of it, is there a way to, semi automate through AI that process of from the lab to the paper without stealing really valuable time and resources from the researcher.

Avi Staiman [00:06:14]:
And if we keep in mind that the average researcher, the average paper. Okay? Jordan, let me ask you this. How much do you think the average paper costs taxpayers? Just one paper. One study. Well, the fact that

Jordan Wilson [00:06:25]:
you're asking me means it's either very cheap or very expensive. So I'm gonna go with the latter, Avi, and I'm gonna say, I don't know, $5,000? $5,000. Who knows?

Avi Staiman [00:06:33]:
Alright. Add on a number of zeros. It's half a million dollars for... Oh. Okay. Yeah. It's a pretty wild number. Now when I say paper, that includes, obviously, the infrastructure and the lab and the staff that you need, but science is expensive. Right? It requires very specific tooling.

Avi Staiman [00:06:47]:
So, we want to maximize I mean, every single one of us are right? Our taxpayer dollars is going to fund science, and I think that's a good thing. But we wanna make sure that we're, you know, maximizing that investment, and that we're actually letting the scientists stay in the lab. And they spend a lot of time now through a very inefficient process writing and revising articles.

Jordan Wilson [00:07:07]:
So that's that's great, and this is really what we're gonna be diving into today. And, you know, as a reminder to our our livestream audience here, what do you want to know about this intersection of AI and research? Now is your time to get your questions in. But I I wanna start high level here, Avi, before we get into these tools that can, you know, these AI tools that can help supercharge the research process. But why does research or why should research still matter to everyone in a day and age when seems like information is at our fingertips?

Avi Staiman [00:07:36]:
Right. I mean, you know, it's, some would say that science is under attack. Right? In terms of, you know, do we believe in in science? But for those who do, and I and I count myself as as as among them, I think it's really critical and important to realize that part of the skepticism around science was that for traditionally it was a very closed box. Okay? So everyone's heard of the ivory tower where the, you know, the scientists go and do their magic and then we're just supposed to trust them. And I think we live in a different world the same way we don't just trust our doctors, we actually double check their work. And, you know, so long as we're doing that responsibly, that makes a lot of sense. And I think what is really fascinating about what AI has done in a very short period of time to research is it's democratized right now. So back in the day, you would go to one doctor, maybe you would go for a second opinion if you could afford it.

Avi Staiman [00:08:33]:
And then you kinda just have to take their word for it and and and and do that. Nowadays, I think the first people the first thing people are doing is they're starting to Google. Now the problem when you Google is most likely you're gonna come across a long slew of really dense, heavy, boring academic articles that for someone who's been in the industry for 15 years, I don't know how to tell you what it means. What AI has been a game changer for, and then how do

Jordan Wilson [00:08:57]:
I know not only that.

Avi Staiman [00:08:57]:
How do I know which ones are important? Which ones are are are really peer reviewed? Which ones are leading in the field? That's kind of a, a, again, a mystery. Maybe it's not a mystery for the researchers in that specific area, but for me as a as a patient or as a family member, I really wanna know that. So there are some really great tools now that do a a number of things. First of all, they distill a lot of different papers and a lot of different scientific information into lay summaries, into ways that we can easily digest and, absorb that content. And me and you can say, oh, I get I get what they have. I understand what the potential treatments plans are. I understand what potential drawbacks of those plans are. So that's just one example of how research and making an AI accessibility to research is really critical and changes the way that we perceive, our interactions and our the way that we engage with, the world around us, really.

Jordan Wilson [00:09:50]:
Yeah. And and you bring up so many, great points there, Avi. And I love the the concept of just democratizing research. Right? Because I even thought to myself, you know, a week or 2 ago, I'm reading, quote, unquote, reading research papers now. Right? Because I can talk with these papers, you know, using, you know, ChatGPT or Perplexity or Consensus or something like that. But can you talk a little bit about there and, you know, about how now all of these AI tools can really turn anyone, maybe not into a researcher, but someone that can benefit, you know, from being able to actually have a conversation with a paper. How does that work and how does that change how we all consume information?

Avi Staiman [00:10:33]:
Yeah. I mean, I think we all we're all familiar with ChatGPT. Its pluses and its minuses. And I think that, you know, probably your audience knows that the training data, that it ingested for GPT was a a big potpourri of things. Right? Everything from the New York Times and some peer review journals all the way to, you know, Reddit and other, you know, maybe less authoritative information. And I think that the question we need to ask ourselves is when we're building when we want reputable reliable information or when we're building business cases that really need to be based on facts. So I think we have a really wonderful resource in the scholarly literature to build out those models. Even if they're in finance or they're in business or they're in law, we wanna go based on what what is the the the, you know, the the the confident content that's most verified or that's highest value or highest quality.

Avi Staiman [00:11:30]:
And I think that the peer review process that research goes through, makes it as such. Right? Similar to like the way the New York Times would be for journalism. It goes through a lot of reviews and edits and fact checks. So we wanna be basing, I think our ideas, assumptions, and use cases and businesses based on, you know, reviewed content, not based on polluted or diluted, content. So I am a big fan of ChatGPT. I use it all the time. But I actually think if we're thinking about actual business use cases in, you know, areas where reliability and, trust in the sources is critical, then we have to sort of turn to Consensus or to Perplexity. And like you said, it allows, you know, me and you to really get answers that are based on, you know, materials that were inaccessible just prior.

Jordan Wilson [00:12:19]:
So, you know, I think that there's multiple ways that we can look at how a how AI is impacting just the, you know, academic scientific research, industry. But we've kind of been talking about, you know, on the back end. Right? After the the research has already been published and we're trying to make sense of it all. How is it being used on the front end? Right? Because presumably, you know, someone like me would hope that, you know, all research papers. Yeah. These research papers that cost $500,000 of of taxpayer money. You know, you would hope that there's some original content in there. Right? But at what point should we be worried about, hey.

Jordan Wilson [00:12:56]:
Is AI being used too much for original research papers? Is is there a downside to that?

Avi Staiman [00:13:03]:
Yeah. It's a big mess in our industry right now. Right? Around, like, research because we've got this weird love hate relationship in the, scholarly world with AI. On the one hand, I mean, AI companies have already come up with incredibly complex chemical, compounds, for example, that would never have been able to be discovered by humans because it requires the synthesization of so much data, or even running, you know, data analysis, over with synthetic data that just would not have been possible, doing it, you know, in the traditional way. And all of a sudden that's been a game changer. On the other hand, it makes issues of research fraud all that more prevalent and all that more problematic. So let me explain. You know, some of you may have seen recently a few cases of very senior, researchers, even presidents, the president of Stanford University, the president of Harvard University actually had to resign because of issues with that with research integrity.

Avi Staiman [00:14:01]:
Right? With the, you know, reliability and trustworthiness of their research. And I won't get into the specific details now about each one is its own case, but bad actors or, researchers who are trying to cut corners and even good researchers who maybe are not naive to how these AI tools work may just take outputs from LLMs and say, okay. You know, I trust this. It did a data analysis for me. Well, here's the results. Well, can we trust that data analysis? How do we, is there a way to reproduce it? Right? Today it might give me one answer and tomorrow it might give me a totally different answer. So there's this really I think, you know, there's excitement because like we said before, it can get me back to the lab in less time and make my work more efficient, take off some of those frustrating tasks that I don't wanna be doing. On the other hand, if we use it without looking or without being careful, then I think we run into a problem.

Avi Staiman [00:14:52]:
So there was a there was a a a a funny yet sad case that, a few weeks ago of a rat. I don't know if you saw this, Jordan, but there was a there was a peer reviewed article, journal research article that had a rat with a giant testicles, in it, and it was supposed to be an anatomic display description of the rat and it was just an AI, you know, baloney image that made it into a journal, and that made it to The World News. I just saw it was on Steve Stephen Colbert was making some funny jokes about it the other night. Like, it's funny but it's also quite sad and scary because, okay, that's a very obvious case, where science failed or the scientific process didn't work properly. But how many cases are there that are more subtle that we don't necessarily notice? And can we trust when we're reading, you know, these summaries, can we say with a full heart that, yeah, I can rely on what's being output here? Or are we worried that, you know, this push towards what's known as publish or perish? Right? There were always need to be publishing articles that's gonna push researchers to cut corners or to falsify data or to generate text which isn't really based on legitimate studies and research. So I think it's, you know, we have to, on the one hand, push forward with our experimentation, but on your other hand, always be asking ourselves, does this pass the barometer for research integrity, or does it need to be checked at the door? And maybe we need to wait until it's been tried in other use cases first.

Jordan Wilson [00:16:22]:
Yeah. And, you know, it's a good point. I'm gonna I'm gonna bring it up here. I don't know.

Jordan Wilson [00:17:28]:
Like, do we have to put, like, a rated r, you you know, thing on, today's livestream? You know? So here's what I'm talking about. The, the rat with the the giant testicle that actually somehow made it into, actual academic research. So that's interesting. But, you know, Avi, you bring up a good point here. How does the role of of humans, change with AI being, you know, relied heavily on every step of the process. How should, you know, humans, specifically those working in and around academic research, how should they be changing their outlook, changing their role to make sure whatever does get published isn't, obnoxiously large rat testicle?

Avi Staiman [00:18:09]:
Yeah. I mean, there's a lot of actors kinda along the way. Right? So there's the researchers that are doing the research. There are the universities that they're kind of, like, you know, the boss. In a certain aspect, the researchers have a lot of independence. So the research universities need to kinda take responsibility and say, we need to make sure that, you know, at Purdue, at Illinois, at Harvard, we are stamping giving our stamp of approval that the researchers are doing legitimate research. And then there's the research publishers. Right? They're the people who are actually putting out the content.

Avi Staiman [00:18:36]:
They also have a responsibility. But as is often the case in these situations, because there are multiple bodies that are responsible, sometimes things fall through the cracks or or there's expectations that the other one is going to kinda take care of, this problem. So it starts with, like, I think researchers educating themselves about the, you know, possibilities of AI, but also the the potential downsides of AI. And then it's up for us as a society and as, you know, research funders, research universities, to to come along and say, no. We're gonna make sure that the research that's being, you know, published or the research that's coming out of our institutions is of the highest standard. And in different fields, that means different things. Right? So if you're a historian, right, making sure that you're going to primary sources is gonna be key and not just relying on, let's say, the LLM. On the other hand, if you're a scientist making sure that you're, you know, that you're double and triple checking your protocols, that's great.

Avi Staiman [00:19:32]:
That that that's gonna be really important. But for example, a a protocol. So that's a, let's say, a a formula that I go through in order to design a a research study. Well, who's to say that us as humans, we have a better, ability, to design a research study or to say which chemicals I need to put in, you know, or which testing environments than an LLM? At the very least, I think that it's every researcher's responsibility to use it as a springboard, to use it as a as a as an idea factory to, bounce off of. We may decide that, you know what, we're actually gonna reject the LLM's suggestions or we're gonna continue prompting it in different directions because of things that we know. But I think that whenever it comes to large datasets, it's really gonna be important for us to have the humility to say, you know, we're we're we're we're gonna include AI here. The the bigger question I think becomes, what is our error rate that we're okay with? Right? And this is the same question that we ask when it comes to self driving cars. Right? Are we okay with 1 out of every 100,000 people, you know, getting into an accident as a result of a self driving car.

Avi Staiman [00:20:35]:
Well, how does that relate to 1 of every 10,000 when we're not using that technology? And that's more bigger philosophical societal questions that I think, you know, we need to become more comfortable with the fact that it's not gonna be perfect, but then again, neither are we humans. So let's just be a little bit have the humility to recognize that as well.

Jordan Wilson [00:20:53]:
Yeah. And speaking of, you you know, accuracy when it comes to using large language models, a great question here from Tara. So thanks for this one, Tara. So she's asking how can we assure accurate attribution to original sources in the AI driven realm of writing and research, thereby giving proper credit where it's due. How how does this work? How should this work, Avi?

Avi Staiman [00:21:16]:
Yeah. That's a really good question, and that's why I think that people that scientists have reacted quite strongly and sometimes negatively to GPT because of the lack of sources or because of, falsified sources. And that's why I'm a big proponent of, you know, some of the tools you mentioned before, Consensus. Scholar AI is a great add on to GPT, so you can actually use it within ChatGPT. And all you need to do is then when you're search when you're querying or when you're prompting and you're doing your prompt engineering, you actually get results that are that are sourced, which is really great, and and and Perplexity does the same same thing. And I think that's really important because, you know, we need to make sure that we can always trace it back to the source. And most of the time, we need to actually go back and read that source. There's not really any shortcuts to, you know, to in the meantime, at least that I know of, to, fact checking, you know, these LLMs.

Avi Staiman [00:22:07]:
So, I think that, you know, the the whole scientific record is based on attribution in a proper way. Right? I'm here is what basically, a a scientific paper is saying, here are the studies that have been done previously. Here's a gap in the literature. Here's a question that we haven't been able to answer yet. I'm going to try and come and fill up that gap, but I need to be able to connect it to what came before me in order to move forward. I can't just do research kind of out of thin air. So that's a really yeah. That's an important, sticking point.

Jordan Wilson [00:22:37]:
So, you you know, we you you kinda just mentioned there, I think a lot of, you know, great AI tools that the everyday person can use. So, you know, ChatGPT luckily did, you know, announce, earlier this week about improving its citations. I think it's, you know, obviously a work in process. But, you know, we talked about even if you are using, you know, a large language model like ChatGPT, its ability to to work with Consensus, Scholar AI, you know, we mentioned Perplexity. Is there anything else, Avi, whether it's everyday person or even people who are more in academic and scientific research? Is there any other tool, that that is is going to kind of better ensure because you can't get it a 100%. Right? Yeah. Is there any other, you know, tool that can better ensure accuracy whether it's for for reading, writing, summarizing, etcetera?

Avi Staiman [00:23:23]:
Yeah. There's 2 tools that I I personally like using a lot when it comes to if I have a question and I just wanna, you know, get a reliable answer and a sourced answer. And one of those is, SciSpace. And SciSpace really does a great job because what I can do is I can get, first of all, I can get a what I call a meta answer. So an answer that summarizes all the literature and tells me kinda what the bottom line is. But I can also look at individual studies, and it'll summarize those as well. And within those studies, I can summarize different parts of the article. So let's say I wanna understand what methods they used in this article.

Avi Staiman [00:23:54]:
Well, I could summarize that. Or if in another article, I wanna understand what their conclusions and results were. Well, I can analyze that. And that's a really so that kind of becomes more granular and detailed, and it's a dynamic way for me to summarize. So I'm a big fan of SciSpace. The other one that I like to mention, is Elicit. Similar functionality, elicit.org, and Elicit is, again, a tool where I can ask a question. So maybe, Jordan, you just wanna, like, click on one of those questions that's coming up there so people can get a a feel for it.

Avi Staiman [00:24:23]:
It might be cool, and actually get, like, science based answers to the question that I'm asking. So all of a sudden I've got, so you see here there's, like, all these papers. Right? And the insights that there's a TLDR in case I don't wanna read it. But here I have a one paragraph summary that's sourced. Right? Which is a a reliable answer. Okay? Or at least somewhat reliable answer. Right? More reliable than you get in GPT. So already there, I think it addresses some of the major problems that we see in ChatGPT already addressed here.

Avi Staiman [00:24:48]:
Now what'll be interesting is if what you're saying is right, which I assume it is, that, ChatGPT is gonna start doing this, does that, you know, make these kind of tools, already kinda outdated? But I think there's a lot that scientists can do with, some of these tools that really make them quite powerful. So that's on the that's on the, you know, kinda scientific search end. And then in terms of what I'm building on, you know, SciWriter, is really has a lot more to do with scientists who are writing their research, but it's a it's no one is born knowing how to write a scientific paper. Put it that way. Oh, that's not good that it's, coming up coming up with a with a with an issue. Okay. I'll have to take a look at that. But, you know I typed

Jordan Wilson [00:25:26]:
it in wrong there, Avi. Sorry.

Avi Staiman [00:25:27]:
Oh, okay. Alright. I got it. You're giving me a bit of a heart attack there. Maybe I, you know, maybe our website's down. Anyway, so but I think what happens there you go. Alright. That's that's that's it.

Avi Staiman [00:25:38]:
Yeah. So it's really it's really trying to overcome that that blank page and the struggle that comes along with writing that academic paper. And by the way, that starts early in your thesis. Right? It can continue on to if you're a doctoral student, post doctoral student. No one's born knowing how to write in that specific genre. It's a new genre that you need to kinda learn on the fly. Universities don't do a great job of teaching academic writing. So this is almost like a 24/7, you know, assistant pilot.

Avi Staiman [00:26:03]:
Almost like imagine having your own private writing coach available to you whenever, however. And that's kinda what we're trying to build over at SciWriter.

Jordan Wilson [00:26:11]:
So so maybe, you know, I'm curious, Avi. You know, you're you're very well versed, in this kind of intersection of of AI and research. So let's say someone for the first time is is hearing about this and and they're excited. Maybe, I don't know, they're in college or they're they're working to, you know, transition in their career and and learn something. What's some good first steps for people that they can use AI in a responsible way and make sure that they can learn things maybe a little better, a little faster, and and and making sure that, oh, it's I'm I'm I'm not relying on a bunch of hallucinations.

Avi Staiman [00:26:46]:
Yeah. That's a great question. And I'm sorry, this is shameless self promotion, but I've just taken the bull by the horns in this field and kinda tried to address these questions. So I ran a course called AI Toolup Tuesdays. It's an entirely free course. It covers 24 different tools, each one of them really, you know, very relevant for research and for researchers, and I split them up thematically. So you've got research writing, how to create illustrations for research, you know, research discovery.

Avi Staiman [00:27:15]:
And what I recommend is, I don't recommend anyone try all 24 of these tools. I think you're gonna get overwhelmed. You're gonna say, forget it, I can't handle AI, it's not gonna work. But what I do think can happen is you can pick out 2 or 3 tools where you're like, alright, this is interesting to me. I'm gonna try this out.

Avi Staiman [00:27:30]:
I'm gonna play around with it. I'm gonna see if it works. Think about what your current pain points are. What are your problems? What makes it hard for you to actually do that research? And that's the way, you know, I think it's just like in bite sized pieces, you can kinda, you know, handle that. I'm also doing, you know, a boot camp for research institutions and for universities in a more formal way that's, you know, a little bit more structured, actually coming in. But anyone can access AI Toolup Tuesdays and just sign up and just, you know, kinda get a taste of these tools. Because what I think is really exciting about our little, you know, kinda corner of this big AI market is that I think people are gonna start finding us more and more. Because I think that at the end of the day, a real business that respects itself, or a real, you know, entrepreneurial venture, is gonna say, I don't want a polluted, you know, LLM that I'm drawing from.

Avi Staiman [00:28:22]:
I want an LLM with reliable, verifiable information. So that's why I actually think it's interesting. Because on the one hand, it'll be easier to publish and maybe publishing will be more democratized. On the other hand, the value of this curated content, I think, is only gonna increase over time.

Jordan Wilson [00:28:37]:
Yeah. And, you know, getting back to what we talked about a little bit earlier, Avi, you know, this kind of funny example of how this giant AI generated photo of a rat with a giant testicle somehow made its way into the academic research world. You know, I think for every one of those that gets caught, there's probably, I don't know, dozens or hundreds or thousands of, you know, academic research papers that maybe aren't a 100% true because of AI. Is there a problem when we look into the future, now that these things are online and, you know, the next version of all these large language models are gonna gobble this up? Is there a danger, you know, that maybe AI early on was used in irresponsible ways before a lot of these tools existed, and that now is gonna be gobbled up and presented as fact in the future? Is that a problem? And if so, how do we address that?

Avi Staiman [00:29:33]:
In one answer, yes, it is definitely a problem. But, put it this way, I don't wanna overreact. What I mean by that is that there actually were some interesting articles. This is a very hot topic in our industry right now. And there were some articles doing some early studies, and it seems like it's a problem, but it's also not, you know, taking over. I wanna be really careful so that people don't all of a sudden have the feeling, well, we can't trust science anymore, so forget about that.

Avi Staiman [00:30:02]:
No. That's really not it. And I think that we need to be aware that it's an issue, come up with tools to identify these issues, and then, you know, address it. But I don't think that we're gonna be able to root it out entirely. You know, I'll give you one example where it's being done pretty well, and that's in the case of images. So there are a few companies, one's called Proofig, another one called ImageTwin. They're specific to scientific research, so probably not super interesting to your audience. But they will be able to go through an image and see if the image has been doctored, or if it's been duplicated, or if it's been copied from somewhere else.

Avi Staiman [00:30:37]:
And that's the kind of tool where a human being would need to know all the images in the world. They would never be able to do it. And I think it's somewhere where actually the problem preexisted AI, and AI is part of the solution, not part of the problem. So, you know, what I wanna stress is, yes, we need to always think critically. I guess that's the bottom line. Right? Anytime you read it, even if you see a research article, don't take it as, you know, God's word to Moses on the, you know, on the hill. Like, you always need to be thinking about, does this make sense? Are there confirming studies? Meaning, are there other researchers that agree with this? And also do a search online. There's an amazing website called PubPeer where the entire goal of PubPeer is to critique published articles.

Avi Staiman [00:31:17]:
So what that means is someone will publish an article and say, here's the study I found. And then there's other science sleuths that come along and they're like, this doesn't look right. Right? There's something here that's off. The data doesn't make sense. You know, and that's where some of these stories come to light from some of these higher profile, you know, heads of universities that were doing things that maybe weren't exactly, you know, as they should be. So I think, you know, we should respect the academic literature. We should consume it, but we should never, like, relieve ourselves of the critical thinking and, you know, the confirmation studies that we're gonna wanna see in order to, you know, actually make sure that it gets into play. One last story that I wanted to kinda share with you, that I think really drives home how valuable and important AI is in the scientific realm, is a story back from the 1970s, about a professor, Tu Youyou.

Avi Staiman [00:32:11]:
She was a researcher in China, in a lab that studied, you know, plants and cures for disease. And she came across this component of a plant called artemisinin, which, she realized over time, could cure malaria. Right? And malaria is one of the most deadly diseases, killing millions of people a year in Africa. And she wrote up her results, she wrote up her study, and she published it in Chinese. Now, I don't know why she published it in Chinese, but no one noticed it. Maybe a few researchers read it. So essentially, we had a cure for malaria for many years, and there you go. And we didn't do anything about it.

Avi Staiman [00:33:00]:
Like, millions of people died unnecessarily. And I can only think that if she had found this in the age of AI, where we could easily search for research articles and understand them regardless of what language they're in, and she could have published it in English easily, or in 20 different languages simultaneously, we could have saved millions of people. So while I think that we need to put up barriers to AI, or safeguards, to make sure that it's not, you know, overrunning the research literature, I also am concerned about what happens when we don't make those tools available or we ban them. There have been scientific journals that said, nope, no AI use. That's problematic in its own way.

Avi Staiman [00:33:38]:
So I just think it's, you know, kinda walking the rational middle path, where we're careful, but we're also encouraging people to do experimentation. And as you see here, right up on the screen, she got a Nobel Prize. Right? Well deserved. Unfortunately, it took us years to realize that she was deserving of such.

Jordan Wilson [00:33:57]:
So, Avi, we've covered so much in today's show. You know, we've talked about how AI can democratize research. We went over a lot of those tools, many I hadn't even heard of, that can, you know, really supercharge your research process. But maybe, what's the one takeaway that you want people to leave with today, whether they are working in the research world or they're trying to take advantage of AI to better learn and understand research papers? What is your one best takeaway for all these people to kinda live at that intersection, you know, learning new things with AI in a responsible way?

Avi Staiman [00:34:34]:
Yeah. I think that the main takeaway is that we all do research. Right? Anyone who has a business has at some point done some sort of market research. Or if you're looking to buy a car, right, you do research. And, you know, as we discussed, if you're looking to, you know, understand a family member's illness, we all do research. Now, when I say do, it doesn't necessarily mean that we're sitting in the lab, each and every one of us, but it does mean that we need to understand it. Right? Our whole lives are built on the premise of, I get into the car, I'm trusting the researchers that have built this.

Avi Staiman [00:35:05]:
Right? And the technicians, that it's gonna work and it's not gonna explode on me. But I think we realize that we all need to educate ourselves more and more as time goes along, because there are so many competing narratives, and the only way to do it is really for us to understand and to realize. So what I would encourage people to do is, next time you have a question or next time you have a decision and you're like, alright, let's try GPT or let's try an LLM, stop for a minute and ask yourself, is this question important? Right? Is the answer going to impact my business decisions, my life decisions? Right? And if it's not just, well, write me a fun poem for, you know, April Fools' Day, if it's something more meaningful than that, or more important, or the stakes are higher, take the time not just to throw it into GPT, but also to try some of these other tools. Because I think that many of them are built by scientists who said, GPT is great as a platform, but it's not built for, you know, really reliable, verified information and content, which should be the basis, at least in my opinion, of all the decision making, or anything that's meaningful in life. And therefore, you know, be aware of where the content is digested, you know, what access we have to the literature, and educate yourself.

Avi Staiman [00:36:19]:
Right? You know, before you go to that doctor, find out what the most recent treatment, you know, plans are. It'll take you 10 minutes. It maybe used to take you 5 hours, and it was like, no, forget it, I'll just trust what my doctor says. Now it takes you 5, 10 minutes to understand. Okay. Here's a, b, c, d.

Avi Staiman [00:36:33]:
Here's how science treats this illness. And I just think that, you know, we should have the humility to know that just because we get these, you know, answers doesn't make us the, you know, the arbiters of truth. On the other hand, we'll have educated ourselves in a way that's meaningful and that impacts our lives in a real way.

Jordan Wilson [00:36:51]:
Wow. I think that's such great advice. You know, you broke it down there for us, not just, you know, how to grow your business, but how it can help us in life. I hope everyone that tuned in today got a lot out of today's conversation. Avi, thank you so much for joining the Everyday AI Show. We really appreciate it.

Avi Staiman [00:37:11]:
Thanks so much, Jordan. It's a pleasure. And, you know, next time, that'll make a hat trick. So, you know, you'll let me

Jordan Wilson [00:37:18]:
know. There we go. And hey, there was a lot. Yeah, we name dropped a lot of different tools, studies, etcetera. Don't worry. Well, first, if this was helpful, please consider giving us a review on Spotify, Apple Podcasts, etcetera, or sharing this with your friends. You know? I think there's a lot of people that need to hear this message on, you know, responsible ways that, yeah, AI can kind of supercharge your research and how you even read and understand information. So make sure to go to youreverydayai.com.

Jordan Wilson [00:37:48]:
Sign up for that free daily newsletter where we'll be recapping this fact filled show. Thank you for joining us. Hope to see you back tomorrow and every day for more Everyday AI. Thanks, y'all.
