Your AI Roadmap

Emotional AI: Building Better Human Connections with Rana Gujral, Behavioral Signals

Dr. Joan Palmiter Bajorek / Rana Gujral Season 1 Episode 5

Rana Gujral, CEO of Behavioral Signals, joins Joan to discuss his company's groundbreaking work in emotional AI. Behavioral Signals extracts nuanced emotional, behavioral, and mental states from conversational tone, building blocks it sees as essential for developing Artificial General Intelligence (AGI). Its technology identifies emotions and behaviors from voice signals without using speech recognition, making it language-agnostic. Gujral's journey from corporate leadership to startup success highlights his commitment to leveraging AI to enhance human interactions and to staying current in this fast-evolving field.

Rana Gujral Quotes
✨ “We can get into your state of mind, your emotional behavioral cognition construct simply by your tone. It's magical.” 
🤝 “AI is now helping us make better, more human-like connections with other fellow humans.”

Resources
🚀 Join Your AI Bootcamp! Code COHORT1 expires June 4: https://yourairoadmap.com/career
📚 Substack: https://substack.com
💻 GitHub CEO says Copilot will write 80% of code “sooner than later”: https://www.freethink.com/robots-ai/github-copilot

About Rana Gujral
Rana Gujral is an entrepreneur and CEO of Behavioral Signals, a cognitive AI company building essential building blocks of AGI with its acoustics-based deep learning technology. A thought leader in AI, Rana often leads keynote sessions at industry events such as LEAP, DeepFest, the World Government Summit, VOICE Summit, TNW, Collision, and The Web Summit. His bylines are featured in publications such as Fast Company, Inc., and Entrepreneur, and he is a contributing columnist at TechCrunch and Forbes. Rana has been recognized among the ‘Top 40 Voice AI Influencers to Follow’ by SoundHound, as ‘Entrepreneur of the Month’ by CIO Magazine, among the ‘Top 10 Entrepreneurs to Follow’ by Huffington Post, and as an ‘AI Entrepreneur to Watch’ by Inc. Magazine. In 2022, CEO Monthly listed him among the “Most Influential CEOs.”

Connect with Rana Gujral
LinkedIn: https://www.linkedin.com/in/ranagujral/
Personal Website: https://www.ranagujral.com/
Behavioral Signals: https://beh

Support the show

Learn More

YouTube! Watch the episode live @YourAIRoadmap
Connect with Joan on LinkedIn! Let her know you listen

✨📘 Buy Wiley Book: Your AI Roadmap: Actions to Expand Your Career, Money, and Joy

Who is Joan?

Ranked the #4 Voice AI Influencer, Dr. Joan Palmiter Bajorek is the CEO of Clarity AI, Founder of Women in Voice, & Host of Your AI Roadmap. With a decade in software & AI, she has worked at Nuance, VERSA Agency, & OneReach.ai in data & analysis, product, & digital transformation. She is an investor & technical advisor to startups & enterprises. A CES & VentureBeat speaker & Harvard Business Review published author, she has a PhD & is based in Seattle.

♥️ Love the podcast? We so appreciate your rave reviews, 5 star ratings, and subscribing by hitting the "+" sign! Be sure to send an episode to a friend 😊

Hey folks, this is Your AI Bootcamp, and Joan on the mic. I'm talking to you today about the bootcamp, heck yeah. It's gonna be an amazing opportunity to do an intensive deep dive into your personal brand and the steps to get you to the next stage of your AI career journey, using AI and weekly coaching, templates, frameworks, guidance, and an amazing group of people that you're gonna be going through the cohort with. Check it out: yourairoadmap.com/career.

Hi, my name is Joan Palmiter Bajorek. I'm on a mission to decrease fluffy hype and talk about the people actually building in AI. Anyone can build in AI, including you. Whether you're terrified or excited, there's been no better time than today to dive in. Now is the time to be curious and future-proof your career, and ultimately, your income. This podcast isn't about white dudes patting themselves on the back. This is about you and me, and all the paths into cool projects around the world. So what's next on your AI roadmap? Let's figure it out together. You ready? This is Your AI Roadmap, the podcast.

Hey folks, this is Joan. I wanted to talk to you a little bit before the episode. This is a really cool one about voice technology. I met Rana a few years ago and really appreciated how he was translating work from R&D, from the research and development labs, directly to business outcomes. He's a savvy businessman. He really thinks about the numbers, fiscal returns, in a very data-driven way. But also, as you'll hear, they throw out the words; they focus on other pieces of voice technology, which is such an unusual perspective in the field. As someone who comes from speech technology, from a "let's learn every word perfectly in every different language" mindset, I really just love learning from other people and other perspectives. And I think you're gonna love this episode. So, you ready? Let's dive in.

Hello hello!

Hey, how's it going?

All right. Good. Glad to have you on the podcast. Thanks for joining us.

I'm glad to be here, and thanks for having me.

Totally. Well, can we start with one of those, "my name is blah and I work at blah"?

Yeah, we can do that. My name is Rana Gujral and I am the CEO of Behavioral Signals. And we're having a lot of fun at Behavioral Signals.

Yeah, what do you all do at Behavioral Signals?

So we're doing a few different things, but the way I see it, we're contributing and building essential building blocks of AGI in some meaningful, differentiated ways. To better describe what we do: we're focused on various emotional, behavioral, and mental constructs. These are states and frames of mind. It's essentially taking a specific modality and digging deeper into it, and our focus area from day one, and I'd say our claim to fame, has been cracking some of these early models that extract these signals from the tone, which is the nonverbal cues in a conversation: the pitch and tonal variance, intonations, prosody, among other things.

So to better describe our technology: in a real live conversation, just like we're having right now, and it could be a multi-party conversation, we're extracting signals, primarily focused on the tone. And the signals that we extract can roughly be put in three different buckets. First, there's the bucket of emotions: anger, happiness, sadness, and a whole range of other emotions we can tap into. Then there's the bucket of behaviors. This is engagement, empathy, politeness, et cetera.
And then the last bucket is probably the most interesting, because this is a collection of advanced classifiers that are tapping into more advanced and complex mental states. This is where you're looking at, say, someone's level of experience, or their level of satisfaction with the other party, live in the moment. So think of it as a live NPS score. But also other things such as stress, distress, duress, control. Control is really fun and interesting: you're saying something, but you don't want to say it, you're being made to say it; you could have a gun to your head. Can you detect that? And other things such as intent markers, predicting an intent to do an action or an intent not to do an action. So really predicting how a person is going to act based on what they're saying, in a defined use case, obviously.

So that's really the core technology. What we do with that technology is two different things; we have two primarily parallel focus areas. One is building advanced tools to improve the experience in call centers, and we can talk about what that product is and how we do it. But we also, in parallel, work with government agencies. In-Q-Tel, which is actually the CIA's VC firm, is one of our investors. And with that, we're involved in some very interesting use cases that are mostly centered around national security, law enforcement, among other things, essentially leveraging the same building blocks of understanding human cognition and behavior and solving for other things. So those are the two parallel focus areas: we operate in the defense sector, but we also have very unique products, in many ways first of their kind, in the call center space.

Very cool. Gosh, okay, you're just dropping nuggets. That's amazing. Well, I want to make sure, we'll talk about different pieces of your technology, but just breaking it down for someone who's listening: they might say, okay, it sounds like your technology understands if someone is angry or something, and how we can make that actionable. Is that a summary, or how might you say it?

That is possible with a technology like this. Well, first off, emotions, behaviors, and cognition are all very diverse and involved spaces in their own right. But if you take one of those areas, emotions: the technical word for emotions would be "affect." That's what we would call it in academia. And it in itself is very, very complex, because as humans we express emotions in multiple different modalities, and by modalities I mean form factors, in some ways. You could express emotions by what you're saying, the actual words that you're choosing to use, or by your facial expressions or your body language. And sometimes by not saying anything, you're expressing emotions. And also very much so by your tone. They're all useful in their own ways, but the one big aspect that had eluded this industry was the tone. It's the hardest to do, and it also has the biggest prize, because it's very visceral; it's very hard to fake or hide. You could say something and mean five different things, so if you're relying on just the spoken word, you're not going to get an accurate assessment.
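To make the tone-only idea concrete, here is a minimal, hypothetical sketch of extracting the kinds of nonverbal cues Rana mentions (pitch, tonal variance, energy, pauses) from an audio clip. It uses the open-source librosa library and is a toy illustration, not Behavioral Signals' models; in a real system, features like these would feed trained deep classifiers for the three buckets of signals described above.

```python
# Toy sketch only: a guess at tone-feature extraction, not Behavioral Signals' code.
import numpy as np
import librosa  # open-source audio analysis library

def prosodic_features(path: str) -> dict:
    """Summarize nonverbal cues (pitch, tonal variance, energy, pauses) in a clip."""
    y, sr = librosa.load(path, sr=16000)           # mono waveform at 16 kHz
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)  # per-frame pitch estimate (Hz)
    rms = librosa.feature.rms(y=y)[0]              # per-frame energy
    return {
        "pitch_mean": float(np.mean(f0)),          # overall pitch level
        "pitch_var": float(np.var(f0)),            # tonal variance / intonation range
        "energy_mean": float(np.mean(rms)),
        "pause_ratio": float(np.mean(rms < 0.01)), # crude proxy for pauses/silence
    }

# features = prosodic_features("call.wav")
# A real pipeline would feed these (and dozens more) into classifiers for the
# emotion, behavior, and mental-state buckets.
```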
But you're also trying to get at what the person is trying to project, to really understand the psyche of the person and how they're actually feeling. And you can do those things by tapping into these other modalities, such as the intonations, prosody, tone of voice. So when you're looking at anger, anger is just one of the emotions, right? You can tap into that. And anger is actually one of the easy ones. Then there are ones that are not so easy.

Well, I'm a linguist, got a PhD in this field. Can you break down a little bit, for other people, what tone means, whether there are tonal languages, and just a little explanation of that?

So tone is really, when we're speaking, how the voice is ebbing and flowing. There's a lot in the wavelengths, in terms of how the voice resonates, and those intonations. And then it's also how fast we speak and how many pauses we take. That gives a unique flavor to each utterance that we make. And none of that is directly linked to what we're saying, because you could be saying whatever and still have that variability in your tone, in your prosody levels, in your pauses, in the speed of how you're delivering it.

And a very interesting fact is that our core technology can function just as well on any language data set without any modifications. It's truly language-agnostic, in the sense that these models that are extracting these signals, the emotions, behaviors, and the other complex signals I just described, are not limited to the specific data set the model was trained on or understands. That really is the case for typical speech recognition systems, the voice-to-text and ASR-based emotion recognition systems: you have to build a model for English, and that model is useless if you're going to apply it to Chinese. You have to build a model for Chinese.

Now, the way it works for intonations is, when you're looking at marking a specific signal, let's say you're trying to detect anger in someone's voice, the way that typically works is that there's a baseline created for a specific data set, and then there are points of inflection from that baseline that you can measure. And those points of inflection are very specific to the signals that you're trying to measure. So, for example, you throw those signals on a chart and see where they land on that map, and when they land at a specific point, you know it's anger. That's really how you measure anger.

Now, what happens between a tonal language and a non-tonal language, for example some of the Asian languages, tonal languages, where you're speaking in higher-pitched tones, versus some other languages with softer undertones, is that the baseline shifts. And you can recalibrate that baseline for a new data set with very small amounts of data. Once you have done that, those signals drop at the same points of inflection. It's really magical. It's almost as if whatever code we're running on as humans, it's the same code; the base files are the same.
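Here is one plausible reading of that baseline-and-inflection description in code. Everything below is an assumption for illustration: per-language baselines are fit on a small calibration sample, utterance features are expressed as deviations from the baseline, and a single shared classifier could then be reused across languages.

```python
# Hypothetical illustration of baseline recalibration; not the actual models.
import numpy as np

def fit_baseline(calib_features: np.ndarray):
    """Estimate a per-language baseline (mean, std per feature) from a small sample."""
    return calib_features.mean(axis=0), calib_features.std(axis=0) + 1e-8

def deviations(features: np.ndarray, baseline) -> np.ndarray:
    """Express an utterance as deviations ('points of inflection') from the baseline."""
    mean, std = baseline
    return (features - mean) / std

rng = np.random.default_rng(0)
# Fake calibration sets: a higher-pitched tonal language simply shifts the baseline.
lang_a = fit_baseline(rng.normal([180, 20, 0.10], 1.0, size=(50, 3)))
lang_b = fit_baseline(rng.normal([240, 25, 0.10], 1.0, size=(50, 3)))

utterance = np.array([260, 40, 0.02])  # hypothetical 'angry' feature vector
# After normalization, the same downstream classifier can score both languages.
print(deviations(utterance, lang_a), deviations(utterance, lang_b))
```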
And you can sort of see this: we have a lot of differences in our cultures and how we speak, in the languages, and it looks very, very different to the untrained eye. But when you go behind the veil and look at how the engine is working, and now you can detect that using AI, I mean, a lot of what we're doing using AI is also understanding how the human brain works. That's a whole different conversation we can have. So you look at that and you're like, okay, well, you recalibrated the baseline for a Chinese data set, and the signals are dropping at the same points of inflection. Is that a coincidence? I don't think so. But that also enables this capability to be truly language-agnostic. The spoken word does not matter. In fact, we don't process the spoken word at all. We do not use ASR or speech-to-text in any of our models. What you're saying is not relevant; it doesn't give us any information that we require. We can get into your state of mind, your emotional behavioral cognition construct, simply by your tone. It's magical.

People are listening to this and they're like, no words? The words don't matter? I think a lot of people might be very surprised to hear that. But I think what's really helpful is that we can build these types of tools, right? You're very sophisticated; you've been working at this company for several years now, I think. And I'm sure you can't tell us some of the CIA use cases, maybe, but broadly, can you give me an example of how this is actionable or helpful to different entities, like a government?

Yeah, so on the government side, obviously, the interest areas are around different use cases. A capability like this allows you a deeper insight into a human's psyche, state of mind, various behavioral and emotional states, both the macro states and the micro states: what is your state in the moment right now versus who you are as a person, and other things that play from that. So there are a lot of use cases in the area of benchmarking the behavioral dynamic of a human that you're interacting with over a period of time. Once you've benchmarked it, once you've understood it based on your frequent interactions with that person, you can then monitor for unique or concerning variances from that benchmark. So that's one use case. You're also looking at various aspects of detecting intent to harm, whether that be a level of trustworthiness, or an intent to defraud, or something else. Those use cases become very relevant in law enforcement.

And also deepfakes. We were one of the first companies to build a live deepfake detector, which can handle both a situation where someone is creating a deepfake to imitate a known person, someone well known, famous, and one where it's just a voice clip that doesn't belong to anybody important or anyone you know, but it's not real. In both cases, we can detect deepfakes from audio. Those are big problems, actually big problems today, but for the most part it's not really in the mainstream yet in terms of the concern bucket. It will be soon, because the technology is there and it's being misused already, and you need a counter to that. So there is a whole variety of use cases that play into the defense ecosystem aside from what I mentioned.

Now, what we do on the call center side is something super interesting.
Our technology allows us to create what we call a conversational bioprint, a behavioral profile of a person based on a previous or recent interaction. It doesn't have to be a long interaction; it could be a short one-to-two-minute audio interaction, and that allows us to create this profile. This profile, or bioprint, is a complex amalgam of about 75 to 100 attributes that range from how fast or slow the person speaks to various emotional and behavioral states that are unique to that person. Put it all together, and it represents how that person converses with another person. That's what we call the conversational bioprint.

There are a few things to remember about this bioprint. One, it's fairly comprehensive. It gives deep information about your conversational dynamic, which is not necessarily your state of mind in the moment; it's your dynamic based on the person you are overall, like a macro assessment. Second, it's unique to you, just as a fingerprint is unique to you, which means no two bioprints are identical. Yours is as unique as you, right? And that's an interesting thing which wasn't really understood previously. And the third part is that, because it's unique to you, it's not a question of whether it's a good bioprint or a bad bioprint. Is there a good fingerprint? No, a fingerprint is a fingerprint, and it's unique to you. You're either clashing with someone's bioprint or matching someone's bioprint on a daily basis, sometimes multiple times a day, every time you're interacting with someone, like this conversation we're having right now. So there's an aspect of compatibility or non-compatibility that comes into play every time you're speaking or interacting with someone. Most times it doesn't matter, but sometimes it matters a lot, specifically in a business conversation or a call center interaction, where there are specific goals you're trying to hit or an experience you're trying to deliver.

So if you can create these bioprints on the fly for both parties, you can have a list of predefined matches in the system that allows you to pair that person, the incoming call, with their one-to-one match that has been preselected for them. That gives them a better chance of a better outcome and a better experience overall. They're just going to build natural rapport, natural affinity towards each other, because their bioprints are matching, versus being randomly matched to someone who could match or completely clash; you just don't know. We were the first to build a product like that, first to market. And now we're using it to deliver improvements: one, it's a better experience, but it also significantly improves all the KPIs the industry cares about, whether that be increasing revenue or sales, customer satisfaction, faster and better debt collection, or improving first call resolution. All of these things are extremely important for the call center industry, and the technology makes that possible.
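As a thought experiment, the matching step Rana describes could look something like the sketch below: treat each bioprint as a numeric vector of attributes and route the incoming call to the most compatible available agent. The vector size, the similarity measure (cosine similarity here), and the names are all stand-ins; the product's actual scoring is proprietary.

```python
# Minimal sketch of bioprint-based call routing; all details are assumptions.
import numpy as np

def compatibility(caller: np.ndarray, agent: np.ndarray) -> float:
    """Score two bioprint vectors with cosine similarity (1.0 = strongest match)."""
    return float(np.dot(caller, agent) /
                 (np.linalg.norm(caller) * np.linalg.norm(agent)))

def route_call(caller: np.ndarray, agents: dict) -> str:
    """Pair the incoming caller with the best-matching available agent."""
    return max(agents, key=lambda name: compatibility(caller, agents[name]))

rng = np.random.default_rng(7)
agents = {"Tina": rng.normal(size=80), "Tom": rng.normal(size=80)}  # ~80 attributes
caller = rng.normal(size=80)
print(route_call(caller, agents))  # routes to whichever agent matches better
```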
Yeah. Well, I think I remember, on your website or elsewhere, there was a case study related to debt collection that was a huge percentage better. Can you share a little bit about that one, or something similar?

Yeah, so the one you're referring to is an implementation we did at a large public bank in an EU nation; this happened to be actually the largest public bank in that nation. They had a very standard team dealing with their clients who were falling behind on payments or mortgages, et cetera. This particular use case pertained to a call center focused on talking to these people who were falling behind, who were in trouble, and negotiating a loan restructuring engagement, whether that's payments or terms or whatever it is, so that the loan doesn't go completely into default. And that's a complex, also very emotion-filled engagement. This was actually one of our first deployments; at that time, we had barely built this engine and were looking for an opportunity to test it in a real-life deployment.

We did a champion-challenger setup: all the call center agents participated in the test, but the clients were split into a control group and a test group. What that means is, there's a bioprint for every client, but some of the clients would be matched by AI, and some would just be thrown into the random pool of whichever agent picks up, which is business as usual. We knew we could improve things, but we didn't know by how much. And what came out of that implementation was just mind-boggling. There was a net 22% improvement in loan restructuring success by the agents who were matched by AI. In specific numbers, that translates into dollars saved or earned by the bank: almost a million and a half dollars of additional generated revenue per agent per year. They had about 130, 140 agents in that test, so you're looking at $400 million of upside created by this software that is just matching the right people, and nothing else changes. For example, you're not hiring any different people; it's the same people. You're not training them differently; it's the same use case, the same training. In fact, the agents have no idea that anything is different. They're just simply answering the calls as routed to them. And simply by getting the right two people together, versus a random matchup, which is really how the vast majority of the world's call centers are still run, we were able to create that upside for the bank. So it was really, really powerful. It was mind-boggling.
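For readers who want the experimental design spelled out, below is a hypothetical sketch of a champion-challenger test like the one described: clients are randomly split between AI-matched routing and business-as-usual random routing, and the lift in success rate is compared between the two arms. The success probabilities and helper functions are invented for the demo.

```python
# Illustrative champion-challenger harness; numbers and functions are made up.
import random

def run_experiment(clients, success_fn, match_fn, agents, seed=42):
    """Split clients 50/50 into AI-matched vs. randomly routed; compare outcomes."""
    rng = random.Random(seed)
    matched, control = [], []
    for client in clients:
        if rng.random() < 0.5:
            agent = match_fn(client, agents)   # AI picks the best bioprint match
            matched.append(success_fn(client, agent))
        else:
            agent = rng.choice(agents)         # business as usual: random agent
            control.append(success_fn(client, agent))
    rate = lambda xs: sum(xs) / max(len(xs), 1)
    return rate(matched), rate(control), rate(matched) - rate(control)

# Demo with a fabricated effect: matched calls succeed a bit more often.
agents = ["Tina", "Tom"]
outcome = lambda client, agent: random.random() < (0.60 if agent == "Tom" else 0.45)
print(run_experiment(range(10_000), outcome, lambda c, a: "Tom", agents))
# -> (test rate, control rate, lift), e.g. roughly (0.60, 0.52, 0.08)
```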
And it's not surprising, because you know when you're clashing with someone or matching with someone, right? For example, we've all been in two situations, and I'm sure you have, Joan. One would be, let's say you're having a tense, controversial discussion, let's pick politics, with a family member or a friend or someone. You're trying to push your point of view, and let's say you walk out of that discussion without necessarily feeling like you've succeeded; you didn't change the person's mind, or whatever the situation was. It was tense, but you feel really good about the engagement and dialogue. You're like, you know what? I wasn't able to change his or her mind, but I feel good about the conversation.

And the second situation could be, you just met somebody at a party or a business event, and you're not trying to discuss politics; it's nothing controversial, just normal chit-chat. But in like five minutes it's, oh, someone save me, I can't take this. So in the first case, even though it's a complex conversation, your bioprints match, which means you're just having fun. There's natural rapport; it doesn't matter what the outcome is. And in the second one, your bioprints are clashing. It's just the way the other person is speaking; it's not matching yours, right? People who speak very thoughtfully and slowly are not going to be able to have a long conversation with someone who's just flying off the handle. And if you're one of those who speaks fast and thinks fast, you can't stand someone who takes six seconds between every utterance. I'm using a very simple speed example, but there's an emotional and behavioral element in there too. Those things matter on a daily basis, and if you bring the right two people together in a business setting, where you have a choice to do so, you can have a much better impact.

Oh, absolutely. I'm thinking back to different Lyft driver conversations, bad happy hours. I'm getting some flashbacks here. So what you're saying for this bank, though, let me just break it down for folks who are listening, since call centers may or may not be people's domain. I'm calling in and I have to negotiate this very sensitive topic about my mortgage, blah, blah. And I could get matched up with Tina, or I could get matched up with Tom, let's say. And you're saying maybe speech patterns are kind of my, my bio, my speech print? I forgot exactly how you said it.

Bioprint, voice bioprint.

I know I match way better with Tom, for whatever reason. Is it a 22% difference if I get matched up with Tina versus Tom? And Tina and Tom don't even know; they're just getting these calls from people to negotiate. Is that what's going down, roughly?

Roughly. When you're matched with a person whose bioprint you're compatible with, one, you as a client will have a much better experience. If the person on the other line says exactly the same thing and does exactly the same thing as a person you're clashing with, you're still going to feel better dealing with person one than person two. Which is also unfair to person two, but you feel better. You feel respected, you feel engaged, you have a natural rapport and affinity. So that translates into a better experience for you. Second, because the other person, who is also matched well with you, feels the same, you're going to come to a resolution quicker. You're just going to like each other more. Which means, of course, it's good for the business: the first call is good, or the second call is good; you don't have to have the third call or the fourth call, and the business is solved. But also, the agent feels better about their job. You feel better. It's a win-win across the board. And in net numbers, different companies care about different KPIs.
Some are very centered around, hey, we're focused on customer experience, right? So how much can you improve it? Or, we're focused on FCR (first call resolution) or average handle time. And generally what we see is double-digit improvement, anywhere from 12% to 20%, in these KPIs. Now, there are a lot of AI products (they may not all be AI, but they call themselves AI), a lot of software products that claim to improve the dynamic in a call center. And typically the impact is like 1%, 2%, 3%, 5%, and those are big numbers in the industry. So when we came to market with like 12% to 15%, the first pushback from everybody was, okay, this doesn't make sense; it's not true. And so we literally had to do pilots: okay, well, let's measure based on your measurements. And it was mind-boggling. All you're doing is bringing the right people together; you're not changing anything else, so it can sit on top of whatever you're doing.

And the best part is you can put this in any call center. If you think about most non-English-speaking countries, you're actually speaking at least two languages at the same time; you're using English words with your native language, mixing the two together. A lot of those engines that are built on the spoken word just fall apart at that point. And you just don't have high-quality data in every language set, so they're very specific: okay, my system works for English. But the English being spoken in this call center is not really English; it kind of sounds like English, but it's not really English. If it's a tone-of-voice capability, it just works across the board, which is another cool advantage.

Absolutely. And I think what I'm hearing also is that this AI enablement helps humans have a better human interaction, like this model of AI augmenting humanity. So cool, so cool.

AI is now helping us make better, more human-like connections with other fellow humans. So the goal here is not to create an AI that pretends to be a human, where you're trying to build a relationship with this thing that you think is a human, but it's not a human. No, it's helping humans be more human, and build more empathetic and more interesting connections with each other, because we're unique. And if you're going to have things that are unique, you're going to have aspects of compatibility. You're just not going to be equally compatible with everybody else. We know that, right? Everybody knows that. So how is that different in a business setting? It's the same thing.

Absolutely. Business is all about relationships, I've found. Okay, I want to eventually get to your background and how you got here, but when you think about this technology, where do you see the future of this side of the field heading?

So I think, one, we're keeping an eye on the goal that we have ahead of ourselves, especially the folks in the AI space, which is AGI, right? We're all working in small ways towards that, building pieces towards that.
And this capability, or these models, these engines, whatever you want to call them, are essential to eventually getting to the point where you can say you've gotten to AGI. No AI, whether it can crunch numbers or write beautiful prose and poetry or do complex math or even reasoning, can eventually reach AGI if it can't understand emotions, behaviors, cognition, state of mind, like a human brain does. Because the goal of AGI is to do almost everything as well as, if not better than, a human does, and these things are incredibly important. So that's one: it's a very essential piece of that puzzle, without which you can't get to AGI. And we're glad that we're playing a role in that space and have built pieces that are going to be an absolutely essential component of that larger AGI experience.

Second, it is also a very important aspect of an intelligent AI, because the ability to understand emotions and behaviors is a very fundamental part of intelligence. In fact, there's a very famous quote from Marvin Minsky, one of the founding fathers, where someone asked him whether intelligent machines should have the ability to understand or reflect emotions, more from an ethics and morality standpoint, I think that's how the question was asked. And the answer was interesting, because it was: the question is fundamentally flawed. You can't ask whether an intelligent AI should have the ability to understand emotions, because can that AI even be considered intelligent if it doesn't understand emotions? It's an aspect of intelligence, right? If you don't do that, you're just not really intelligent.

But okay, keeping those things aside, there are brilliant experiences for us to have, right? We're interacting with the devices around us, including our cars and other things, using our voice, and the ability to understand my state of mind allows me to have a much better experience. Take something as simple as a voice assistant. A voice assistant can act on commands; it can do the basic NLP and NLU, which is understanding the language, doing a search, and then processing the response back in human speech form. It can do all of that, but it can't do something as simple as hold a conversation. It can't, because holding a conversation is a much higher bar, as simple as that might sound. To hold a conversation, you need to understand the other person's state of mind in the moment; otherwise, you're just rambling on about things that are going to end the conversation very, very quickly. So to hold even a two-minute conversation, you need to understand the emotional and behavioral state of mind, which is what our human brain does effortlessly. If AI makes that possible for a non-human, whether that be a voice assistant, a car, or a point-of-sale device, you're just going to have better experiences. You could even build voice assistants that are actually your voice assistant: you could literally use that entity as an assistant, have a whole conversation, and much more complex use cases can come to fruition.
So there are a lot of different things you can do, and emotional and behavioral AI also has use cases and impact in healthcare and in other dynamics of human experience beyond call centers, obviously. A lot of applications become possible, and the industry is already applying them.

Definitely, yeah. Okay, well, let's hear about, you know, this podcast is called Your AI Roadmap, so people can work on where they want to head. What was your career path to getting into what you do now?

Yeah, there were somewhat interesting twists that woke me up to the power of AI. I've been in tech pretty much all my life, and most of my early career was on the corporate side of things, larger public companies where I was running business units and building products, software, hardware, everything else, but largely looking at scaling and growing product lines, among other things, including innovation, for sure. The startup world is obviously very different. I moved out of the corporate world when I was presented with a very unique opportunity to do a turnaround, which I thought was incredibly challenging and was also going to be a lot of fun.

Long story short, this was a very well-established company that had been in business for 25-odd years and had gone from its peak of half a billion in revenue to, at that point, I think around 30 million in revenue, with 300 million of debt, and losing 100 million a year: negative 100 million EBITDA. So it was really bankrupt, and the investors had close to half a billion of investment that they were essentially looking to write off. It was going to go Chapter 11. But there were believers who thought it could be saved, and a team was put together to go make it happen. That's how I left the corporate world, to go work on the turnaround. Long story short, we took it to profitability very quickly, from negative 100 million to plus 12 EBITDA in about two years, roughly a 110 million EBITDA improvement, and eventually to its IPO, at four billion on the public market. So that was a great run.

After that, I really wanted to do a startup. And the startup idea I had was to build a vertical SaaS for an old-school, archaic space and just reinvent that particular sector. So we built a very specific software system for the specialty chemicals market. I had my initial brush with machine learning and AI around that time, but I hadn't looked deeply into it. I met a couple of smart guys who were fresh graduates, and they were marketing themselves as ML engineers. And I was like, okay, we want one of those; do you want to come work with us? And they're like, yeah, sure. They came as a pair and joined our company as ML engineers. We had no idea what to do on the ML side, and we sort of went our merry way. I remember, six months in, these two guys walked into my office and they're like, why are we here? What are we doing? You hired us to build AI; we're just building regular software, the same stuff as everybody else. That's not why we joined this company. I was like, okay, well, what can you do? And they're like, give us a complex problem and maybe we can solve it.
Like, we can focus on that. So we came up with a very interesting use case. There was a unique aspect of the workflow in the software we had, where you had to make a guess at what the price of a commodity would be, for quotations and profitability, and you did it based on industry trends. If your guess is wrong, you're going to lose money, even if you got everything else right. And if the guess is right, you eventually build a product that's profitable. So I figured, can you go predict what the price could be? You have all this data; can you build an AI model? Build a crystal ball we can see into the future with, using AI, using machine learning. And the goal was, maybe you can go as far as three weeks out, or six weeks out; the actual software system required close to 12 weeks. But we didn't know.

So these guys went off, and after a couple of months they came back with something that was working. We looked at it, and it was not too far out, like a week, week and a half into the future, but it was very accurate. They had built these early models that were predicting the price of, say, titanium dioxide on the spot market, not today, but a week from now. And they were delivering accuracy between 80% and 85%, and this was the first stab at it. We were like, wow, if you could get this to maybe six weeks, maybe 10, maybe 12, and the accuracy a little higher... And they built it; we got it. And the rest is history. That product got acquired really quickly off of that. But I was like, okay, we've got to pay attention to this thing. And this was years ago, the 2015, 2016-ish timeframe. For me, it was like, okay, this is what I need to focus on. So Behavioral Signals was the next progression.
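As a rough guess at the shape of that early forecasting model, the sketch below trains a regressor on lagged price history to predict a spot price one week ahead. The data is synthetic and the lag window, horizon, and model choice are all assumptions; the transcript doesn't say what the team actually built.

```python
# Hypothetical sketch of commodity price forecasting from lagged history.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def make_lagged(prices: np.ndarray, n_lags: int = 30, horizon: int = 7):
    """Build (last n_lags days of prices -> price `horizon` days later) pairs."""
    X = [prices[t - n_lags:t] for t in range(n_lags, len(prices) - horizon)]
    y = [prices[t + horizon] for t in range(n_lags, len(prices) - horizon)]
    return np.array(X), np.array(y)

# Fake daily spot-price series standing in for, e.g., titanium dioxide.
prices = 100 + np.cumsum(np.random.default_rng(1).normal(0, 1, 1000))
X, y = make_lagged(prices)
split = int(0.8 * len(X))  # train on the past, evaluate on the most recent stretch
model = GradientBoostingRegressor().fit(X[:split], y[:split])
print("held-out R^2:", round(model.score(X[split:], y[split:]), 3))
```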
Yeah, AI, for the uninitiated, is just a buzzword. And if you're in it and you're looking at it, you're getting surprised by it every day. In fact, some of the things we're doing ourselves at Behavioral Signals are just mind-boggling. We get our own shock moments: how is it working? Why is it working? Let's understand it. It's working, that's great, but the power of what we can get out of this capability is tremendous.

That's awesome. Wow, I didn't know about all your bigger finance stuff in the past. That's awesome. Okay, so what if someone's listening to this and they're like, whoa, I want to get into this, heck yeah, let's go. What are the action steps, the advice you might have for them? What would you say to that person?

Well, look, if you are a technical person, brush up your Java and Python and start fooling around with some of the open-source models that are available, just to get the hang of where you can take it, where you can go. There's a tremendous gap between demand and supply in the market today, so if you're a good ML engineer, you have a career ahead of you, and it's well worth the investment.

If you're an entrepreneur, there are some exciting opportunities. As I said, the industry is working towards a goal, and there are a lot of different pieces that are still unsolved, pieces both on the enablement side and on the technology side. So if you understand what those missing pieces are and you can fill them, if you can bring a solution for some of those, there's a tremendous entrepreneurial return awaiting you, and also satisfaction, because you've actually meaningfully contributed in some way.

I think it's a very exciting time. I'd say we're living through history in a very meaningful way. I'll say this: January '23 to December '23 is a very different world. You may not see it if you're not paying attention, but it's a very different world, and the change is drastic. And we're sitting here in January, and by the end of this year it's going to be a very different world again. With what's happening in this space, the pace is... that's why people are concerned, because it's moving so fast, which is a real concern, of course. But yes, it's a very exciting time; we're living through this transition. If you're looking 10 years out, it's hard to visualize how we would do the things we're doing today, the tools we're using today, how we're living our lives today; a lot of that would just be completely different. And 10 years is going to be here before you know it. It's going to be in our lifetimes, hopefully, right? Definitely yours. I don't think a lot of people get to live through a switch like that, that timeframe. It's definitely bigger than the internet by a factor of many, for sure.

Yeah, I've heard people say something akin to, this is the slowest it's ever going to go. But I think also, as you mentioned, you could become a machine learning engineer, or you could find a problem and really work on that solution, whatever that may be. I've also heard, and I don't know how much I believe in this, my CTO laughs really hard: will there be coders in three to four years, if most of what's on GitHub today is already generative? Something like 80%; I need to actually look at these numbers. Will there be radiologists? Do we need paralegals? There's a question you can ask for about half of the careers that exist today, if not more. I mean, we know we'll need a gardener; we know we'll need a plumber, 30, 40 years from now.

So the funny thing is, most of the impact is on the white-collar jobs, not the blue-collar ones, because robotics is simply lagging tremendously, right? The time when we have a physical robot that does everything like a human is just a ways away. But on the software side? If you're a radiologist, which is one of the most highly paid and most complex fields of medicine, ChatGPT is solving for radiology better than most radiologists do. It's detecting tumors. So do you need a radiologist, then? It's a perfect job for AI.
You just look at lots of data, put all that experience to work, and come back with: here's my assessment of what the problem is. Paralegals, too. I've now created multiple legal entities, a nonprofit, with documentation created purely by ChatGPT, without any modifications. And these are early days. So a lot of these things... coders, as you said, right? If you're just a regular coder, a basic coder doing front end and this and that, well, you're in trouble, unless there's other value you can add.

Right. Well, and even debugging, I've hired a coder, or just this concept of human in the loop: what is the value-add you're providing if I can get code spit out from something else? And my job as well, anybody's job. My CTO and I have been joking about this, and she's like, just because cameras came out doesn't mean painters are no longer relevant. She sees them as different variations of this. But I do think about what's future-proofed, where the human in the loop is. Do our lives need to be 9-to-5s, chained to computers? I think, as you mentioned, we're going to see wild changes in the next three years, amongst other things.

Yeah.

Well, crazy wild. Is there other advice, things you might recommend for folks, resources to check out?

Yeah, I find the most accurate information in this space sits with a very small group of people. There's just a lot of fluff in this space, but there's a small group of people, and I don't want to call them out and embarrass them; I work with some of them in the industry. But it's, I'd say, 50 people tops, in the entire industry, who really know what they're doing and what they're building. Go find out who those people are and follow them. Listen to their podcasts, listen to their talks, read what they're writing on Substack. That's the best way to educate yourself, rather than going after the usual fluffy stuff and the media stuff, because most of it is just regurgitated, or it's inaccurate, or it's out of date. If you're looking at something from 12 months ago, well, it's out of date already, right? That's my best advice for someone who's really technical and asking, how do I keep up? Have a list of 10, 15 people you've vetted and believe know what they're doing. I can give you that list privately, not publicly. I just follow them and listen to what they have to say. They don't all agree with each other, but they're all original ideas and original thoughts, and things are moving really fast. It's a lot of fun.

Yeah, and that reminds me of a phrase I keep hearing: trust is the new data. Those humans that we trust, and I agree with you, they don't all agree, but they at least have a point of view, and they're practitioners in our field. I think that's what's really crucial. Cool. Well, if people want to find you, if they want to learn more, where should they go?

Reach out to me on LinkedIn, or go to our web page if it's a business-related thing. I also have my own personal page; you could just throw a note there.
If it's non-work-related, if you want to just talk about some technology stuff, throw me a note; the email comes to me and we can chat.

We'll have all of those in the show notes. Thank you so much for your time. I really loved talking to you.

Thank you, Joan. Cheers!

Oh gosh, was that fun! Did you enjoy that episode as much as I did? Be sure to check out our show notes for this episode, which have tons of links and resources, our guest's bio, et cetera. Go check it out. If you're ready to dive in and personalize your AI journey, download the free Your AI Roadmap workbook at yourairoadmap.com/workbook. Or maybe you work at a company and you're like, hey, we want to grow in data and AI, and I'd love to work with you. Please schedule an intro and sync with me at Clarity AI at hireclarity.ai. We'd love to talk with you about it. My team builds custom AI solutions, digital twins, optimizations, data, fun stuff for small and medium-sized businesses. Our price points start at five, six, seven, eight figures, depending on your needs, your time scales, et cetera. If you liked the podcast, please support us: rate, review, subscribe, send it to a friend, DM your boss, follow wherever you get your podcasts. I certainly learned something new, and I hope you did too. Next episode drops soon. Can't wait for you to hear another amazing expert building in AI. Talk to you soon!

Welcome to Your AI Bootcamp. This is a four-week intensive bootcamp where we'll be working on goal setting; a professional glow-up; crafting your story so other people hear you better and you get credit for your work; and lastly, a demo day, coffee outreach, and expanding your amazing network to hit your goals faster and be seen. Your AI Bootcamp is where your AI career begins and takes off. Come join: yourairoadmap.com/career.
