Your AI Roadmap
Your AI Roadmap the podcast is on a mission to decrease fluffy HYPE and talk to the people actually building AI. Anyone can build in AI. Including you.
Whether you’re terrified or excited, there’s been no better time than today to dive in! Now is the time to be curious and future-proof your career and ... ultimately your income.
This podcast isn't about white dudes patting themselves on the back, this is about you and me and ALL the paths into cool projects around the world!
What's next on your AI Roadmap? Let's figure it out together. You ready? This is Your AI Roadmap the Podcast.
Check out the companion book at YourAIRoadmap.com
Building an Inclusive AI with John Pasmore of Latimer.ai, "BlackGPT"
John Pasmore (he/him), CEO & Founder of Latimer.ai, collaborates with Joan to explore the importance of an inclusive LLM. Latimer.ai, an AI-driven B2B SaaS platform, enhances the representation of Black & Brown histories & cultures in LLMs. The goal is to provide a more accurate & bias-free reflection of diverse cultures & histories.
John shares his journey of founding Latimer.ai, highlighting the need to address the lack of diverse representation in current LLMs & emphasizing the significance of inclusivity in AI. Committed to authenticity, factual data sourcing, & ethical data use, Latimer.ai leads in creating AI technologies that encompass a broader spectrum of human history & culture, paving the way for tech that is inclusive of all people.
Note: Both Latimer.ai & ChatGPT were utilized to create these show notes.
John Pasmore Quotes
🚀 "We've built what we believe is a premier inclusive large language model trained with additional content on Black & Brown histories & cultures to reflect accurately & without bias."
💡 "Latimer talks about 'formerly enslaved people.’ It's a slightly different use of language, but it frames the people differently."
📚 "I think ultimately, Black & Brown people should own their own history.”
Resources
- Latimer.ai
- New York Amsterdam News
- deeplearning.ai - for those interested in learning more about AI & deep learning.
- Coursera - Resources for AI, coding, & more, including by Andrew Ng
- Tonya Lewis Lee's documentary "Aftershock" (Hulu) discusses healthcare disparities faced by Black & Brown women in the U.S.
- Ruth Health partners w/ women on the journey through pregnancy & beyond. Disclaimer: Joan is an investor in this startup
Connect
- Connect with John on LinkedIn
- Latimer.ai: Check out the product for free today
Learn More
YouTube! Watch the episode live @YourAIRoadmap
Connect with Joan on LinkedIn! Let her know you listen
✨📘 Buy Wiley Book: Your AI Roadmap: Actions to Expand Your Career, Money, and Joy
Who is Joan?
Ranked the #4 Voice AI Influencer, Dr. Joan Palmiter Bajorek is the CEO of Clarity AI, Founder of Women in Voice, & Host of Your AI Roadmap. With a decade in software & AI, she has worked at Nuance, VERSA Agency, & OneReach.ai in data & analysis, product, & digital transformation. She's an investor & technical advisor to startups & enterprises. A CES & VentureBeat speaker & Harvard Business Review published author, she has a PhD & is based in Seattle.
♥️ Love the podcast? We so appreciate your rave reviews, 5 star ratings, and subscribing by hitting the "+" sign! Be sure to send an episode to a friend 😊
Hi, my name is Joan Palmiter Bajorek. I'm on a mission to decrease fluffy hype and talk about the people actually building in AI. Anyone can build in AI, including you. Whether you're terrified or excited, there's been no better time than today to dive in. Now is the time to be curious and future-proof your career, and ultimately, your income. This podcast isn't about white dudes patting themselves on the back. This is about you and me, and all the paths into cool projects around the world. So what's next on your AI roadmap? Let's figure it out together. You ready? This is Your AI Roadmap, the podcast. Hello, hello. Hello, how are you? Well, how about you? Good, good. Quick intro. My name is John Pasmore. I'm the CEO and founder of Latimer.ai. Awesome, well, so thrilled to have you on. Can you tell us a little bit more about what y'all are doing at Latimer.ai? Sure. We've built what we believe is a premier inclusive large language model. What does inclusive mean when we're talking about large language models? We mean that we've built a large language model and trained it with additional content on Black and Brown histories and Black and Brown culture, because we really wanted to reflect, accurately and without bias, the history and culture of Black and Brown people. That's awesome. Yeah. Well, and I'd love to, I want to unpack with you more about machine learning and large language models and how you all are differentiating yourselves, which you were just talking about. Could you tell us kind of nuts and bolts, like what the product looks like today? What are its uses? Yeah, we launched about three weeks ago into beta, you know, where we've built essentially what we call a RAG model, or what the industry calls a RAG model, which means that there's kind of two models: there's our data, and then there's a foundation model that sits underneath that. In our case, that foundation model is ChatGPT. It could be Gemini in the future; it could be Mistral in the future.
But we've launched with ChatGPT, or we may, at some point, let the consumer choose which model they'd like to be the foundation model. And we've just been training it and getting really great training data. About four weeks ago, we announced a partnership with the New York Amsterdam News, which is the second oldest Black newspaper in the United States; takes us back to 1926 with their data, plus a ton of other data, to make sure that we have all of the information that we want. And the use cases are pretty wide. We work with HBCUs, Historically Black Colleges and Universities. Miles College in Alabama was the first one on board, where the president, Bobbie Knight, recommended to her students, hey, this might be a great tool for you to use if you're doing research. Obviously, we don't want anybody to cheat using large language models. But hey, this is a great tool because it reflects your history and culture maybe a little more accurately than ChatGPT. So that continues to expand. We have a relationship with Morgan State and many, many more HBCUs coming. And then we think that just generally, you know, Temple University, where we have our Content Czar, Professor Molefi Asante, we expect Temple to use it, and many, many other parents and young people that are doing their homework with language models. And then businesses as well. Businesses have a couple of different use cases, whether it's creating marketing materials and they want their marketing materials to be authentic to the audience, or sound authentic to the audience, and maybe, you know, Black and Brown people use, you know, slightly different language. So we're not necessarily talking about Ebonics per se, but hey, people talk differently, and models can mimic that, or take existing content and put it in a more kind of authentic dialogue. Absolutely. Well, I've been lucky enough to be one of the beta users. Is it available to the public now? Oh, okay, awesome.
Well, and I opened it up and it looks a lot like ChatGPT, good branding. You know, it ran really smoothly. Can you give us an example of, if someone were using just, you know, OpenAI's ChatGPT versus Latimer.ai, some of the differences they might see in the output? You mentioned kind of tone and the different data behind it, but can you give us a flavor of what that might be? You know, there's so many examples. We had a gentleman who has an ancestor who had a farm on the Underground Railroad. And he posed a question about that property to both ChatGPT and to Latimer. And he was pleasantly surprised that while ChatGPT talked about runaway slaves, Latimer talks about formerly enslaved people. So it's a slightly different use of language, but it maybe kind of frames the people differently. Their identity is not necessarily slaves. They were enslaved people. And that's something that we want to retain. It shows up in a lot of different ways, not with just that term, but in other ways, just to describe people and their history. Absolutely. Well, I think that goes back to the data set you're using, and 1926. Can you tell us more about what collection of data, how you're collecting the data? I would love to learn more. Yeah, I mean, there's hundreds or thousands of pieces of data, and those could be anything from public domain documents. So it could be from the public domain, it could be a copyrighted document that's older than 99 years, but it could also be documents about actions from the US Congress, or acts, or court cases. So all of those things are relevant to us, and textbooks, for instance, are a big source for us as well. Dissertations. And what we're really focused on is creating a factual database. So we're not scraping comments from the open internet. The foundation models already have that. That's not something that we intend to do or that we want to do. We really just want to populate our database with factual data or factual sources of information.
And then we're looking at, in the future, how to make sure when you get a response that if there is a specific source of attribution, we can share that as well. So that's an additional feature that's coming. That's awesome. That's one of my favorite features of Perplexity. I recently asked, like, what about this? And they're like, here are the references, A, B, and C. I was like, ooh, that can be really, really helpful. Well, and who is collecting the data, or who's doing that? I guess, is it a manual effort? I think at this point for us, it is fairly manual, because we do want to, we have a small team. Professor Asante at Temple University maybe points us in a particular direction, whether it's an author, whether it's a data source, or whether it's, you know, a library that can be a resource for us. We're talking to many universities about ingesting, you know, complete libraries. And we'll see where those go. But we do want to make sure that, as we're kind of creating this foundation of data, everything that we're getting, number one, is attributed properly. So you have the data itself, but you have pretty robust metadata that describes the source and the content. We want to make sure all of that's uniform. Again, we're just trying to make sure that it's all either licensed or, if we can't license it directly, that we would put it in another group of things that we would like to license. But everything is very carefully sourced at this moment. I love that. I'm hearing clean data. I'm hearing organized data. I'm hearing, you know, ethical data sourcing. That's awesome. Yeah. I think you mentioned earlier about kind of the RAG model, or kind of thinking about different types of machine learning to use on that. Is that correct? Well, I think we'll stay with the RAG model for quite some time. We have to be flexible in the sense that the technology continues to evolve. And if there's better technology, we would switch to that.
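The uniform, carefully attributed metadata John describes could look something like this minimal sketch. The field names and license categories here are purely illustrative assumptions, not Latimer's actual schema:

```python
from dataclasses import dataclass

@dataclass
class SourceRecord:
    """One curated document plus uniform metadata.

    A toy schema, not Latimer's real one: every ingested piece of data
    carries its attribution and licensing status alongside the text.
    """
    text: str
    title: str
    author: str
    year: int
    license_status: str  # hypothetical categories: "public_domain", "licensed", "pending"

def is_usable(record: SourceRecord) -> bool:
    # Only properly attributed, cleared sources enter the factual store;
    # anything else waits in the "would like to license" group.
    return record.license_status in {"public_domain", "licensed"}
```

The point of the sketch is just that uniform metadata makes the licensing question a mechanical check rather than a per-document judgment call.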
At this point, we know that in the near future, we're not creating a foundation model. So we would always work with one of the foundation models in some sense, whether or not our database grows to the point where it could become a foundation model. You do see some small, 7 billion parameter types of foundation models now, and they seem to be pretty impressive. That's awesome. Yeah. Well, and some of our listeners are going to be like, yeah, yeah, RAG models, parameters, sure. Others will be like, RAG? Like, tell me more. Would you mind breaking down what a RAG model is and kind of the choice behind it right now? Retrieval-augmented generation. Again, it's really the combination of two models. So you have, as opposed to, let's say, just ChatGPT, you have this other data store that sits above it. So when you write a query, what happens is the model searches the new data source and says, hey, this person just asked about X. So if your question is about, hey, I need to write a program in the language of Java, and it has to do this, when it does a search, it's going to say, oh, this is not a topic that is in our data. That's in the foundation model. I'm just going to go to the foundation model, perhaps change a little bit of the language if I'm going to describe what this is. But it's not truly a Latimer question. So that's kind of the beauty of the RAG model: it can do everything that ChatGPT does. But if it does say, oh, this is data, somebody just asked who built the pyramids, or what was the Emancipation Proclamation, that is in our database, so it can kind of live in this new set of data. And since we've spent so much time making sure that it's factual, it can respond from our data. That's awesome. Cool. We have all these efforts, whether it's public libraries or governors who want to restrict what people can have access to, what people can read. So another part of our effort is to make sure that data is kind of immutable.
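The routing John walks through, searching the curated store first and falling back to the foundation model when the topic isn't covered, can be sketched roughly like this. It is a toy illustration under assumed interfaces (the embedding vectors, similarity threshold, and `foundation_model` callable are all hypothetical), not Latimer's actual implementation:

```python
def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_vec, store, threshold=0.75):
    """Return curated passages similar enough to the query, best first.

    `store` is a list of (embedding, passage_text) pairs; the threshold
    decides whether a question counts as "a Latimer question" at all.
    """
    scored = [(cosine_sim(query_vec, vec), text) for vec, text in store]
    return [text for score, text in sorted(scored, reverse=True) if score >= threshold]

def answer(query_vec, query_text, store, foundation_model):
    """RAG routing: ground in curated data if relevant, else pass through."""
    passages = retrieve(query_vec, store)
    if passages:
        # Topic is covered by the curated, vetted data: ground the
        # foundation model in those sources.
        context = "\n".join(passages)
        prompt = f"Answer using only these sources:\n{context}\n\nQuestion: {query_text}"
    else:
        # Not in the curated data (e.g. a coding question): let the
        # foundation model answer on its own.
        prompt = query_text
    return foundation_model(prompt)
```

Real systems typically use a vector database and learned embeddings rather than raw lists, but the control flow, retrieve first, then decide whether to inject context, is the core of what "RAG" means here.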
If it's factual history, there shouldn't be an individual or a library that can change accuracy or change how we perceive historical facts. So, you know, we feel that we're kind of putting something beyond the reach of a lot of these efforts that we're seeing today. Absolutely. Well, and in a wild world that we live in where, you know, books get banned in certain parts of the United States, and we'll have an international audience for this one, so they might laugh at these Americans with their wild practices. But I think it's an interesting way to think about how we want to honor our history, and thoughtfulness around inclusion. As you've been building this, what are some surprises that have come up during the founding and inception of the company? What are the surprises? Maybe a bit of a surprise is, anytime somebody, you know, some people call us the "BlackGPT," some people are like, well, why would you want a separate version? I thought that was, you know, kind of obvious. In my own history as an adult, let's say, after getting a business degree, in about 2008, 2009, I went back to Columbia University to get a computer science degree. And in basically any curriculum in the United States, in college, you take the history of Western art. When you really think about that, well, there's art from Asia, there's art from Africa, that in many cases kind of predates the Greeks or the Romans. So why are we looking at history through that particular lens? But we do. And that's essentially the lens that in many ways is reflected in the large language models: they have this Western bias, you could say. You know, if you asked what's the most foundational document, I was reading something the other day where a gentleman felt that the Iliad, basically, that period, was the beginning of Western thought. And I was like, wow, the pyramids were built a couple of thousand years before the Iliad was written.
So a lot of things had to happen in those 2,000 years. And they're pretty bright people who built the pyramids. So I think that's kind of the surprise, is that not everybody really looks at the overall history of the things that happened in present-day India thousands of years ago, or in China thousands of years ago, or in Africa. And how does it all fit together, as opposed to this, you know, kind of linear view that colleges still want to teach about the history of Western civilization? Yeah, so kind of the surprise that people need education in this respect. I'm just surprised that people don't realize that that's a fairly restrictive view of history. Absolutely. What are some learnings you've had? I asked about surprises; learnings along the way. It was initially easier to talk to data resources about, hey, we want to ingest your documents into a large language model. And I think that, number one, the scraping that went on with the bigger large language models and then the subsequent lawsuits, most notably the New York Times', kind of led people that have data or content to believe, and probably rightly so, that, hey, content is kind of a gold mine in this new large language model or new AI landscape. And we're just going to hold off, or we're going to sit on our hands till we figure out what it's really worth, because right now it's really difficult to understand what is the value of that content. Absolutely. So you feel like there might be some more reticence than there was a few years ago? Even a year ago. I think it's clear that we really just don't have a value. There's no easy way to put a value on a piece of content, especially, let's say, one dissertation. What is the value of that? And it's very hard, especially at the foundation level, where you're dealing with a neural network, to say, hey, was that particular piece of data referenced in making a response?
So maybe if you could say, you know, on average, ChatGPT uses this dissertation, you know, 100 or 1,000 times a day, then it might be easy to say what the value is. But when you have a neural network, you can't really quantify it; it becomes really difficult. Absolutely. Having written a dissertation myself, the value of a dissertation, it's interesting to think about. But I think also the kind of discrete data points versus, I'm sure you've seen these papers too, about how large do models need to be? What is actually helpful? Do things get obfuscated? Or just, yeah, as you mentioned, millions to billions of parameters, these different factors are needed or helpful. I think we are experiencing trying to figure that out in real time, what is necessary or helpful. It's happening in front of our eyes. Maybe by the time this comes out, we'll already have figured it out, but I seriously doubt it. Are there other pieces of jargon or kind of education that you think people are upskilling in during this time that would be helpful to unpack or really think about in a broader sense? You know, the landscape is moving so quickly. You had a moment, and maybe it's still happening, where folks are training themselves in kind of prompt engineering. And, you know, for now, I think that's a great topic to really understand if you're spending a lot of time with large language models and you want to get a different perspective from your answer, or you want it to consider different information in a response. So at this point, it almost reminds me of, like, SEO: it's kind of an art and a science. And, you know, the folks that are good at it can really be very, very good at it. Absolutely. I'm lucky enough to do some trainings on that. And it's a weird skill, to be able to ask for what you need and get the right outputs. I wonder whether that will be a far smoother user experience in the next 12 months.
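Prompt engineering of the kind John compares to SEO often comes down to structuring the request explicitly: stating the perspective you want considered and the information you want weighed. A toy sketch (the template and function are hypothetical, just to make the idea concrete):

```python
def build_prompt(question, perspective=None, sources=None):
    """Assemble a structured prompt; each optional part steers the answer.

    Illustrative only: real prompt templates vary by model and task,
    but the pattern of perspective + sources + question is common.
    """
    parts = []
    if perspective:
        # Asking for a different perspective, as John describes.
        parts.append(f"Answer from the perspective of {perspective}.")
    if sources:
        # Asking the model to consider specific information.
        parts.append("Consider these sources: " + "; ".join(sources))
    parts.append(f"Question: {question}")
    return "\n".join(parts)
```

The same question with and without the optional parts can yield very different answers, which is why, like SEO, small structural choices reward practice.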
OK, where do you see your part of the AI field heading? Let's say we're kind of an inclusive LLM, and that's kind of our space. I see it growing. I see that potentially we could provide answers to even other large language models. We launch our API in about three weeks. So I see that we can interface a lot with some of the larger language models where perhaps they haven't focused on this area. They don't have the same data set that we have, and what we have could be incremental to what they have. Number one, and also just as a demonstration of saying, hey, we're going to be as inclusive as possible to different perspectives, and therefore we're going to work with Latimer. So we hope there's more of that. But, you know, we do say AI for everyone, because we think that everybody benefits by having the most accurate view of history possible. And by accurate, I mean also inclusive of other perspectives, and not just the, you know, history of Western civilization, the super myopic, narrow version of history. Absolutely. I'd love to hear, you know, this is called Your AI Roadmap, about your career, how you got to where you are. I've been in the tech space for a bit. I had my first venture where I was a founder in 2008, and we went out for funding; Voyages TV was the company. We raised $10 million in venture funds. And I was considered a non-technical founder at that point, and didn't really like the moniker, number one. But also, it came up. We spent a couple million dollars on the website itself. I don't think we really, you really don't know if that's money well spent, put it that way. So at that point, I went back to school, which was somewhat painful and long, you know, taking night classes, or actually day classes, there's no night classes there at Columbia; this was a bachelor's degree. I went back and spent the time to really learn programming, essentially, and CS, and understand how computers think and how smart they are.
And it was fascinating to learn that the professors at Columbia, at that point at least, felt that computers could check your code for accuracy, but they weren't able to write code. And there was very little belief that anytime soon they would have what was considered a reasoning capability to write code. I think there's still a pretty robust conversation on whether or not ChatGPT is reasoning when it writes code, or it just has seen so much code that it can kind of copy an output, almost whatever you need. But maybe that's the same, in a cognitive sense, with human thinking: if you memorize something, or if you memorize a lot of things, you're able to kind of move it around and create new pieces of code. So, you know, that was my history. I was surprised when ChatGPT first came out that it could write code. I knew that was a big kind of landmark development. And then I was disappointed when I saw that there was, at first, rampant bias within the responses. And I initially thought that, hey, that seems like, you know, certainly a technological problem that could be solved. And given their resources, it seemed to me that it was going to be solved fairly quickly, but then it really wasn't. So to me, that presented an opportunity, not only for bias, but just, again, in the overall kind of inclusiveness of what I was seeing was the data set, or the focus of what little we know about building those data sets. You still have, just as a university sets out what their course selection is, you know, a certain bias there. So I think the same bias applies to a large language model, in that it's going to have, almost like a person, a certain perspective of what's important. Is Middle Eastern history as important as what happened in the United Kingdom? In US history, maybe not, depending on your bias. If you took maybe more of a global view, you'd have a different perspective.
And that's kind of what we wanted to bring to the dance, so to speak. So we built this. It's been really interesting. We've gotten a very good reception, certainly from the public. We've gotten a lot of interest from a lot of organizations, from agencies, from brands, from technologists. So it's been really exciting. That's awesome. I'd love to go back just really quickly, because I think you exited your first company quite successfully. Yeah. So, and you're ready to be a founder again? Or, some people are serial founders and others less so. Right. I mean, I have been involved in several other companies. I'm a partner with Spike Lee's wife in an organic vitamin business. I was a partner at TRS Capital, which is a family office with Bob Sires. But again, just kind of watching the AI revolution, so to speak. You know, I'd spent a lot of years in college and a lot of time coding, at work and at home for my homework. And it just seemed like there was a very clear opportunity that wasn't being addressed. And it's very significant. I do think also, people are on X and other social media, basically asking Google or Meta or OpenAI to be more inclusive or to correct these inaccuracies. And I think ultimately, Black and Brown people should own their own history, so that we're not necessarily asking, hey, Google, can you get this right? They will, I'm sure, get it right eventually. But we should have ownership of it. And we can do it ourselves. The technology is not out of reach. So we were able to build it. Yeah, that's awesome. And I think, especially as I progress in my career here in tech, it's shocking to me that people, or the general public, seem to forget that there are humans building these things. As you talk about different tech companies that build large language models, they get to set their agendas of what product features come out.
And with the type of resources that you see in the news, they certainly have opportunities to have teams specifically working on your... project, or, like, one could say, like, couldn't they just build an inclusive team? Anyway, but it's wonderful that you see the problem and are executing towards it. Yeah, I think there's some folks out there, whether it's the New York Amsterdam News or whether it's a library at a historically Black college or university. I think that while everybody wants to work with, let's say, the five major tech companies, I think the idea of working with a startup that is diverse in terms of its ownership is attractive as well. And there's things that we can do, and there's things that we can do faster, in many cases, than a large company. I mean, we're not going to spend months in legal looking at something. We have great lawyers, but they're fast, because we're a startup. So there's just certain things that we can get done faster. Yeah, well, and also there's something about, like, the, at least my own perspective is that libraries have been somewhat undervalued in the last few years. And it's fascinating to hear you talk about, like, suddenly they're sitting on these gold mines, proverbially, like, thinking about what is the value of these artifacts, the value of these texts from 1926. It's a really interesting way to think about who has the authority over those different data sets, and, you know... I think that's kind of the gray area, actually. Because even if you're a famous author and your estate has bequeathed your writing to a university, does that mean that the university has the right to train a large language model on that? Or does it mean just that the students have the right to go and read it when they go to the library? So it's, in many cases, it wasn't, you know, clearly defined, because nobody knew that this was going to come
about. So we have a lot of gray areas, which is kind of slowing some of the data acquisition down across the board, because nobody really knows. So if Penguin publishes a book, did the author retain the rights? Does the publisher have the rights? Who can make that agreement? Absolutely, well, and I don't think, yeah, they weren't planning for scraping and different types of tools and deep learning. Yeah. And then you never know; there might be action on top, in terms of what the federal government decides. You know, I think somebody could make the argument that having large language models have access to this data is of national interest, so it supersedes the copyright owner's interest in it, in ways that... You look at the last big similar issue, not the same, but certainly similar, which was Napster and then Spotify with music, in that they figured out a way to manage the rights. But I don't think there was any kind of national security issue around music in the same way that now, when you look at some of the things that the CEOs are saying to Congress or the Senate, they're worried about: what is a foreign entity doing in this AI space? And if they have unfettered access to creating data sets, and we have a very difficult road to accessing data, what does that do to our competitiveness? Absolutely. But it also makes me think kind of about royalties, or about how the music industry payouts changed based on streaming, and how do authors get a tiny little sliver if there's an API call that leverages their dissertation or book or otherwise. I wouldn't be surprised if that's the way that things kind of fall out. Well, I think the lawsuit with Sarah Silverman is really pushing the boundaries: a language model was scraping her comedy routines, or something to this effect. But really, yeah, pushing the legislation and defining those things. I'm curious, though, you went back to get a CS degree, or computer science degree.
What was your path into tech? Or did you have any degrees before that? Really business. I had a business degree, State University of New York. I always considered myself kind of a business guy with tech leanings. Even in the media space, the production of media, whether in the magazine business or the TV business, there's a kind of significant use of new technology whenever there's an ability to do that. Generally cost-saving, right? So those industries automated, and it was fascinating. And I think everybody now carries a pretty powerful piece of computing in their pocket with their phone. So it just seems like a natural extension that now we're building kind of these super smart entities, and we're calling them ChatGPT or Latimer or what have you. So, to me, it was just kind of a natural extension of what I wanted to do next. A large language model is a funny combination of media and search and, you know, whatever you want to call Alexa and Siri, kind of your personal assistant, or, you know, I don't know, but we're kind of merging those things into one entity, one platform, and it's moving very rapidly. Totally. When it used to be, just a few years ago, voice AI, conversational AI, now into this multimodal or integrated system, I honestly believe it might just simplify down to data. Again, let's see what happens. Are they going to eat each other? Will AI eat software? Anyway, I think terms are mostly irrelevant in my mind, or de-jargoning things. But it's understanding how we can leverage these tools, what is their power. Do people understand what's in their hands, or otherwise? We talked a little bit earlier about kind of healthcare and opportunities in that domain. Would you mind sharing a little bit? I mean, we've talked about, like, how people could use it for tasks and creating code, but other use cases beyond that? Yeah, especially within diverse communities.
My partner in the organic vitamin business, Tonya Lewis Lee, created a documentary on the journey of Black and Brown women in the United States as they're having children, and the statistics are terrible. One of the contributing factors to that is how care providers interact with women that are having babies. And that can be something as simple as a woman saying she's in pain, and for whatever reason, a nurse or a doctor maybe hears it but doesn't act on it. And so you could have an intelligent assistant, let's say, that's listening to that conversation and understands that the doctor, or nurse, or whoever it is, the care provider, heard the patient say that they were in pain, but then doesn't see any pain remediation in the follow-up, or in the next hour, half hour, 15 minutes, whatever it needs to be. So that's an area where maybe the AI could prompt somebody or instruct somebody that, hey, this patient has said X, and you need to take an action against that. I mean, there's countless examples along those lines, where whether the person is heard, or how they're addressed, could create additional stress for the person. And you're already in a stressful scenario if you're in a hospital; additional stress can have a negative impact on your outcome. So maybe having these machines, these technologies, listen to the interaction, it can perhaps prompt or help somebody better communicate with patients, and that could have a significant impact on patient outcomes. So there's tons of examples like that. We know also that Black and Brown people don't participate in certain things like clinical trials, based on our history with some of those types of efforts.
But ultimately that has a negative impact because, hey, this drug wasn't tested on enough African-Americans, and maybe the drug has an interaction that wasn't noted properly for folks with diabetes or high blood pressure or other conditions the African-American community may have a higher-than-normal propensity for. So the drug is approved, but then has a bigger impact on African-Americans because it wasn't tested properly. That's something that maybe AI could help correct for.

Yeah, that's awesome. You brought up so many cool, important points. I think the most famous case I know from the news is Serena Williams, who even with her notoriety and wealth really had a terrible childbirth experience, to my understanding. I'm a small investor in Ruth Health, which works specifically on pre-, during, and postnatal support. It's based out of New York, which has one of the biggest disparities in the nation between Black and white maternal health. When people are not represented in a data set, for really obvious trust reasons that go back a long time, you unfortunately propagate that bias through drug trials and the administration of those drugs. I've worked on a few medical experiences before, and it would be so awesome if we saw triaging like this; it seems like such an obvious augmentation to the things being built. People we've talked to in the medical space have mentioned it could be tremendously, tremendously helpful.

And Tonya Lewis Lee's documentary, Aftershock, is worth watching. It's eye-opening, so I would suggest you watch it. I think it's on Hulu.

Awesome. Yeah, hopefully a lot of our listeners will know the resources we're talking about, and if not, they can do some digging themselves. So people might be hearing this and thinking, whoa, John is doing the coolest work. How would you recommend they get into the field? How might they start or pivot into AI?
What guidance might you offer people?

There's just a tremendous amount of resources you can use now. Andrew Ng, a co-founder of Coursera, created deeplearning.ai, which is an amazing set of video resources; if you take your time with them, they go very deep on artificial intelligence. There are tons of resources if you want to learn coding. Actually, nowadays, if you just want to have a conversation with Latimer or ChatGPT or Gemini, it'll take you through it. You can just ask it a question as if it's a person: what would be the first thing I should learn about coding? It'll tell you, and then you can ask, well, can you give me some practice exercises on that? Just by speaking to it, it can become a very, very long conversation, and both with Latimer and other platforms you can save those conversations, come back to them, and pick them up. It's really fairly amazing.

It's almost like scaffolded educational support just for computer science. Do you believe people need computer science degrees right now to go into the AI field?

I don't necessarily think so. There are other areas of AI, like ethics or justice or how it's going to be applied. And certainly we want to have artists creating art. I don't think everybody needs to focus on AI or computer science. It's helpful in the same way that education generally gives folks an introduction, or in some cases a deep dive, into math: it's helpful to understand how computers think and what you can do with them as a tool. But not everybody needs to pursue that professionally, for sure.

Well, how about people who want to be entrepreneurs like you, who say, hey, I see this problem that's not being addressed, and I want to attack it? What recommendations might you have for entrepreneurs?
Again, I think it's a tool, in the same way that you don't need to know how an internal combustion engine works to drive an automobile. So I think it's similar: it's probably helpful the more you know, but people have different skill sets. Some people are just great managers; some people just have great creativity. You have things like Midjourney, where you certainly don't need to know anything about computer science to use it, maybe just how to use Discord, and you can create really beautiful artwork. That's going to continue to evolve. So I would just follow whatever you're passionate about.

And are there specific internships or certificates? You recommended deeplearning.ai, which is an awesome resource. Are there other resources you might point people towards?

I mean, Coursera, which I guess is a more general learning platform, is one I've spent a significant amount of time using. And even YouTube; you can find somebody you like, whose approach to the material is comfortable for you. There's so much out there that your imagination is really your only limit.

Ooh, I love that. Any last advice or takeaways you might want the listener to have?

I guess if people are watching this, they're already paying attention to artificial intelligence, and I'd certainly recommend that to anybody who's not paying attention to it. I kind of compare it to crypto, where we saw tremendous, tremendous interest, and I think AI is several orders of magnitude bigger. It can change an awful lot: it can change the jobs landscape, it can change education and how we communicate with each other. So I would suggest, yeah, definitely pay attention and use it. Just try one of the platforms, if not Latimer then something else, and have a conversation.

Absolutely, yeah. I've certainly heard the question: is this just like NFTs? Will this disappear in a little while?
And in my experience of the last five, ten years, I would say no. It's just been growing and growing, and then, certainly, OpenAI opened the floodgates. Now's the time to dive in.

Well, if people want to learn more about Latimer.ai and you, where should they go?

We're on Instagram @Latimer.ai. Come to the website, Latimer.ai. I spend a good deal of time talking to other professionals on LinkedIn, John Pasmore. So I'm pretty easy to find.

We will definitely link those things in the show notes. Thank you, John, so much for your time. It was a pleasure talking to you.

Yeah, great to talk to you as well.

Oh gosh, was that fun! Did you enjoy that episode as much as I did? Be sure to check out our show notes for this episode, which have tons of links and resources, our guest bio, et cetera. Go check them out. If you're ready to dive in and personalize your AI journey, download the free Your AI Roadmap workbook at yourairoadmap.com/workbook. Or maybe you work at a company and you're thinking, hey, we want to grow in data and AI, and I'd love to work with you. Please schedule an intro sync with me at Clarity AI at hireclarity.ai. We'd love to talk to you about it. My team builds custom AI solutions, digital twins, optimizations, data, fun stuff for small and medium-sized businesses. Our price points start at five, six, seven, eight figures, depending on your needs, your time scales, et cetera. If you liked the podcast, please support us: rate, review, subscribe, send it to a friend, DM your boss, follow wherever you get your podcasts. I certainly learned something new, and I hope you did too. Next episode drops soon. Can't wait to hear from another amazing expert building in AI. Talk to you soon!