podcast

Read AI Founder & CEO David Shim

Posted on May 29, 2024

Listen on Spotify, Apple, Amazon, and Podcast Addict | Watch on YouTube.

Today, Madrona Managing Director Matt McIlwain hosts David Shim, founder and CEO of Read AI. These two have known each other for over two decades, having worked together at Farecast and then Placed, which David founded in 2011. They came together again in 2021, when David started Read AI, which has raised $32 million to date, including its $21 million round led by Goodwater Capital in 2024.

Read AI is in the business of productivity AI. They deliver AI solutions that make meetings, emails, and messages more valuable to teams, with AI-generated summaries, transcripts, and highlights. In this episode, David and Matt examine three themes that run across David’s startup journeys: leveraging emerging technologies, getting engagement from multiple members of your ecosystem in the early days, and learning your way into a business model. They explore the challenges and successes of implementing AI in various categories, the benefits of using AI model cocktails for more accurate and intelligent applications, and where they see the opportunity for founders well beyond productivity AI. It’s a great discussion you won’t want to miss.

This transcript was automatically generated and edited for clarity.

Matt: I’m just delighted to be back again with David Shim. Welcome back, David.

David: Excited to be here. It’s been a while.

Matt: It’s been a while, and you’ve been up to some good stuff. Let’s talk about Read AI. Why don’t you take us to the founding and the genesis?

The Genesis of Read AI

David: I was CEO of Foursquare for about two years, and I left the company. During the time I was leaving, I was in a lot of meetings where I wasn’t making the decisions at the end, because I wanted the team to make them; they were the ones who would inherit those decisions for the next 12 months. I took a step back, and I just started to watch and saw, hey, there are a lot of people in this meeting. What are they doing — and half of them are camera off and on mute. I made this term up called “Ghost Mode,” where there’s no sound, there’s no video — you’re there, but you’re not actually there. I kept noticing it after I left Foursquare and took a couple of months off, bumming around different beaches in Mexico. I was having a lot of video conferencing meetings, and I started to think about this problem again: this is not a good use of time, having this many people in a meeting.

A lot of the time, we know within the first two or three minutes whether a meeting is going to be valuable for us or not. I started to ask: when I’m bored, can I notice who is and isn’t paying attention? I looked at one person’s camera and realized they had glasses on, and the colors in the lenses looked very similar to what I had on my screen — it was ESPN. I looked closer at the reflection in their glasses, and they were on ESPN. So I thought, there’s got to be a machine learning model that can identify when someone is paying attention using visual cues rather than just audio cues. And then, if I combine those two things together, is that something that’s differentiated in market? What I found was that no one was doing it.

There were people doing it for different use cases, but not for video conferencing. That was a really interesting thing to dive into. Then I reached out to someone I know who heads up Zoom: Eric Yuan. I said, “Hey Eric, real quick question.” We knew each other from occasional emails, and I said, “I’m really interested in this concept of engagement in video conferencing meetings. Is this something that you’re working on? And if it is, is this something that I should be thinking about as well?” His response was, “We thought about it before Covid; it was one of the features we were going to invest in, but with Covid, priorities changed. I think you should absolutely go into this category and into this space.” That checked the box: the platform was bought in that this was something valuable, and potentially something they might support.

Matt: There are a couple of things that, if we ground them in Placed, will apply to Read AI. Let’s start with this: what were the emerging technologies at that time where it wasn’t yet obvious how they would be useful for building a next-generation company? How did you approach those emerging technologies?

Leveraging Emerging Technologies: The Journey of Placed and Read AI

David: When we started Placed in 2011, smartphones were just starting to get into the mainstream. The iPhone 2 had just been released, and Android hadn’t been released yet, or only the dev version had. People didn’t know exactly what to do other than, hey, there’s Fruit Ninja, there are games you can play, there are calculators, there are flashlights, and those are all great. But that was version one, where people saw some novelty in the applications that were available but didn’t know what to do next.

Where I started with Placed was the thesis: can you actually get unique data out of the smartphone that you couldn’t previously get out of a computer? It was this concept of could you measure the physical world in the same way that you do the digital world, and was that smartphone going to be that persistent cookie? For the longest time, it wasn’t possible. I had engineers going to the iPhone and Android conferences to ask, “Can you get GPS-level location data in the background with software?” The answer for a while was, “No, that isn’t possible.” It was only around 2011 that all those things fell into place and it became possible.

Matt: You had the mobile phone that was becoming increasingly common. You had location services and GPS data. I remember one of the challenges early on was being able to ping the app, the Placed app that you had often enough that you could get that data without burning up the battery life of the phone. That was a fun adventure, huh?

David: A hundred percent. I had my cousin, who was 19 or 20 at the time, use 12 different phone models we had bought for battery-life optimization. He walked around downtown Seattle and went to different places, inside and outside, so that we could infer the best model to last throughout the day without impacting the user, while still getting enough fidelity and context to infer whether someone went into a store or just walked by it.

Matt: Let’s talk about Read AI. There were some interesting technological and even some societal “why nows” there. This was early in the Covid era: video conferencing exploded. The cloud has come a long way — applied machine learning and all the different kinds of models. Talk us through some of the technologies that enabled Read AI that were, at a minimum, super-powered versions of things you were using a decade before to start Placed, and maybe some others too.

David: I think the bottoms-up approach to video conferencing was something we hadn’t seen until Covid came along. That accelerated things; you’ll see in all the research reports now that five years of adoption was pulled forward within the first two or three quarters of Covid because everybody went remote. You started to see that people were using the best solution available to them. It wasn’t a top-down approach of “you have to use this solution,” like BlueJeans; a lot of us used to have the equipment for BlueJeans video conferencing. Now it was: you’re remote, what is the easiest solution to use? It was Zoom. Then you saw Google Meet come in, and Microsoft Teams come in.

Now you saw this bottoms-up approach where people were adopting video conferencing as the default way of interacting with someone. You don’t go in and say, “Hey, I’m going to fly out to see you.” Now the default is, “Hey, let’s set up a call.” And if you say, “Let’s set up a call,” it’s not a conference call number; I can’t remember the last time I had a conference call. It always defaults to video conferencing, and that adoption was pulled forward by five years. That brought up the demand in market, where people were used to saying, “I’ve now chosen a platform. I am able to use this platform on my own for essentially every single meeting that I have.”

That’s very much like the iPhone and the Android devices that came out, where at first people were like, okay, this is kind of interesting. Then the apps started to catch on, and people started to adopt them, and the businesses weren’t ready; the enterprises weren’t ready for the smartphone. You started seeing people install apps, connect their email, and connect their calendar, and companies didn’t have policies in place. That was actually a good thing for driving adoption, because there was nothing blocking it. The flood of usage was so strong, and I believe we’re seeing the same thing with Read AI and AI: the same level of adoption as smartphones. I’d even say more so from a mainstream perspective, because the cost is so minimal. It’s no longer signing a one-year or two-year subscription to get a smartphone and then signing up for a data plan; it’s signing up for something that’s completely free or maybe $15 or $20 a month.

Matt: Let’s go all the way back to Placed. You had to learn your way into the business model. I think you had a first assumption, and then you evolved because you listened to the market and you listened to the customers. Tell us about that evolution.

The Evolution of Business Models at Placed & Read AI

David: That was a hard one. Madrona was great for this one. I believed that location analytics was going to be a multi-billion dollar industry back in 2011-2012. I think, ultimately, it did become that; it just took a little bit longer. But the use case that I had was not the right use case. The use case I stuck with for about 12 months was that I could pick any business in the United States and give you a trend of foot traffic across time. You could start to see trends like Black Friday: where did people shop, and where did they shop next? Really cool data.

We got to the point that we were in the Wall Street Journal and the New York Times; it was not a problem getting press. It was also not a problem getting C-level or VP-level meetings, because they had never seen the data before. They’re like, “Oh, you can tell me where people go after they visit my competitor? Okay, this is really interesting. I want to look at that data.” Or, do you get gas first or groceries first when you go on a trip? We were able to answer those questions, but the downside was that there wasn’t a use case for that data. They would come in and say, “This is a great meeting, we love it. Can you give me a little bit more data?” We’d send the data over and they’re like, “All right, thanks.” And we’re like, “Hey, do you want to buy anything?” And the answer was, “No. We’re out, peace out, we’re gone.” They didn’t know what to use it for.

The use case ultimately came from the customer coming to us, and the customer wasn’t the end consumer. Surprisingly, it wasn’t the enterprise clients we were directly talking with; it was the mobile ad networks and the mobile publishers. They had come to us and said, “Hey, installing games is the ecosystem when it comes to mobile apps today, but we’re trying to get more dollars from brick-and-mortar retailers, because we believe people are in the physical world and you want to be able to close that loop.” They said, “No one trusts our data, because we’re already selling them the advertising.” Generally in market, you don’t trust the person selling you the advertising to tell you that it works. That’s changed a little bit today.

But they said, “We know that you have the cleanest data out there. Can you intersect our ads with your panel’s store visits and actually attribute it: did someone hear an ad for Target, and then did they actually go to a brick-and-mortar Target location three days later?” For the longest time, I said, “No. I believe we’re a pure-play analytics company, and we’re not going to do any advertising.” Then you and Scott and the Madrona team, for the first six to 12 months, were very much like, “If they’re willing to pay you money for this, maybe you should try it.”

Matt: Maybe you should see what customers who are willing to pay you money would actually do. The rest is history; it’s a very, very well-built, successful company. Let’s talk about Read AI. You’ve got these technology changes, these societal changes, and then you had to get engagement. How did you think about that, and ultimately about getting alignment with different parties, not just the consumer, but also making it work reasonably well with Zoom and these other platforms?

Engagement and Distribution: Partnering with Platforms

David: On the engagement side, we took the approach of working with the platforms. They have the control at the end of the day, and they’re the ones that can also get you distribution. With a lot of startups, distribution is a problem: you can have a great product, but if people can’t find it or install it, that becomes a problem. We had great partners at Zoom that said, “Hey, we’re launching this program called Essential Apps, and what Essential Apps does is put your app in front of 10 million users on a daily basis; they will see it on their right-hand sidebar.” That was an incredible opportunity, and we were like, “Absolutely, let’s get it done.”

And the same thing came along this past year with Google Add-Ons, where Google said, “We are going to introduce apps or add-ons into Google Meet. We would like you to be one of six apps that are featured in that app store.” We’ve been featured over the last three or four months, and that’s driven significant adoption, and Teams has been similar in terms of the promotion that they’ve given us.

Those platforms and that discovery have enabled us to get a lot of traction. The thing I would say is that I made a very similar decision with Placed, but this time I made it a lot faster. With Placed, I was wrong the first time: I said standalone location analytics, understanding where people go in the physical world without combining it with anything else, was the use case, and it wasn’t. The use case was attribution. With engagement, the initial use case was, “Hey, in a real-time call, when I’m talking with Matt, and he’s a venture capitalist, I want to know when he’s disengaged, because when he’s disengaged I can try to recover in that meeting and say, ‘I know this slide’s not very interesting. Let me go to the next one, Matt.'”

The problem was that once people started to use it, they didn’t know what to do with it. They saw this chart, and it would go up and down. As it went down, they started to get more nervous: “Well, what’s going on? How do I actually recover from this?” There wasn’t a knowledge base to pull from to say, “Oh, when engagement drops, you should do this.” So there was a lot of education involved in that process. We found a lot of users, and there were certain use cases we did really well for, especially large groups and presentations, but the stickiness wasn’t there. Where we found the stickiness was asking: what can we combine with our ability to measure engagement and sentiment in real time, based on a multimodal approach of audio and video?

How can we take that really unique data? These are proprietary models that we’ve built with tons of training data. How can we apply that to something else to make it even better? What it came down to was transcripts. There were companies that had been doing transcripts for the last 10 years: some charged per minute, some had bots join the calls, and some were built into the platform. What you started to see was that they were starting to do some summarization, partially due to OpenAI and partially due to their own models, because people were asking for it: “I don’t read a twelve-page transcript after a call, but I would love to see a summary, and I would love to share that summary with other folks.”

We took the approach of: this is interesting, but everyone can do this. This is a wrapper; this is a commodity at the end of the day, where I could take a whole transcript, write a couple of prompts, and get a summary. That wasn’t interesting. So we said, “What do we do that is different?” And we applied the scores that we had: when Matt says this, David reacted this way; when David said this, Matt reacted that way. We created this narration layer that wasn’t available in any transcript, and we played around with it, and we started to see, “Okay, this is incredibly valuable. This materially changes the content of a summary because you’re taking into account the reaction from the audience.”

Matt: I think moving on from trying to assess engagement and sentiment, you then created what some call an instant meeting recap, and not just a superficial one but quite a robust one, because you were using a variety of types of data, video and voice, and a variety of models. How did you think about which models you were going to take off the shelf and which you were going to customize? How would you mix these models together with the data you have to ultimately produce this output of an instant meeting recap?

The Power of Model Cocktails: Enhancing AI Applications

David: Yeah, that’s a good question. Where we really focused was: we are the best when it comes to measuring engagement and sentiment, and we are the best when it comes to layering that engagement and sentiment onto topics, onto sentences, onto content, onto video. Those things were really strong. We then went in and said, “Okay, what is a commodity?” At the end of the day, you’ve got OpenAI (great partners), you’ve got Llama, you’ve got Google, and you’ve got a bunch of other open-source solutions in market. That is a very hard problem to solve, but it is kind of like the S3 bucket: at the end of the day, it is the commodity that everybody will be using at some point, and you’ll choose between the licensed models you prefer or the open-source models.

We said, “Hey, if 90% is in-house models, the last 10% is where I’ve identified the 14 sentences that do the best job of recapping what that meeting was about, here are the action items, and we think this content had 90% engagement while it was being discussed. If you can then load that in and just summarize it down to four sentences, that’s what we’re going to use the third-party solution for.” It wasn’t about figuring out how they can analyze engagement; it was more how do we bring our secret sauce into their system. That really did result in differentiated results. On the “Coke challenge,” we’ve been winning more and more over the last 12 to 18 months, and we’re seeing more traction in terms of market-share adoption. I think the best compliment we’re seeing is that people are starting to copy our features; the legacy incumbents are starting to copy our features.
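The “model cocktail” David describes can be pictured as a two-stage pipeline: in-house models score the transcript for engagement and pick out the most representative sentences, and only that distilled slice goes to a third-party model for the final condensation. Here is a minimal sketch of that idea (the engagement numbers and the `summarize_with_llm` stand-in are hypothetical illustrations, not Read AI’s actual models or API):

```python
# Hypothetical sketch of a two-stage "model cocktail":
# stage 1 uses in-house engagement scores to select key sentences,
# stage 2 hands only that slice to a third-party summarizer.

def select_key_sentences(transcript, engagement, top_k=14):
    """Rank (sentence, score) pairs and keep the top_k most engaging."""
    scored = sorted(zip(transcript, engagement), key=lambda p: p[1], reverse=True)
    return [sentence for sentence, _ in scored[:top_k]]

def summarize_with_llm(sentences, max_sentences=4):
    """Stand-in for a third-party LLM call; here it just joins the top lines."""
    return " ".join(sentences[:max_sentences])

transcript = [
    "We agreed to launch the beta in June.",
    "Someone mentioned the weather.",
    "Matt will send the deck to the PR team.",
    "There was a long pause.",
]
engagement = [0.92, 0.10, 0.85, 0.05]  # engagement score per sentence, 0-1

key = select_key_sentences(transcript, engagement, top_k=2)
print(summarize_with_llm(key))  # only the high-engagement sentences survive
```

The design point is the division of labor: the proprietary scoring decides *what* is worth summarizing, and the commodity model only compresses that pre-selected content.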

Matt: Always a great source of compliments when people are copying you. Let’s go back maybe 12-14 months. You’ve learned a bunch of things. You’ve got this product and this ability to deliver these instant meeting recaps. It’s what I like to call an intelligent application. How did you start to get momentum around customers? At the time, I think you had very little revenue, and today, you’ve got tens of thousands of paying customers — incredible momentum in the business. How did you get from essentially zero revenue at the beginning of last year to the great growth you’ve had over the last 12-15 months?

The Momentum of Read AI

David: That was a bit of a journey. In 2022 we had very little traction, like I already talked about. We had a lot of interest, we had a lot of users, but ultimately it wasn’t what we’ve seen in the last 12 to 15 months, and to be upfront, it was an iterative process. We had the summaries that launched, and we got some traction there, but then people started to come in and say, “Hey, can you do video?” At first, we were like, “Ah, we don’t know if we want to do video.” We did video, and then we started to tune it into three different concepts. The full video recording everybody has, but that’s table stakes. Then we went in and said, “Hey, highlight reel.” Think of it like ESPN for your meeting, where we can identify the most interesting moments by looking at the reaction.

If you think about it this way: if you’ve watched an episode of Seinfeld, you can go to the laugh track, and if you look at the 30 seconds before the laugh track, that does a really good job of capturing what people are interested in and what people find funny. We started to build this highlight reel, but then we also took content into account. Now you’ve got the content plus the reaction, and that creates a robust highlight reel. We did something very similar to create a 30-second trailer. The idea here was that customers were asking us for video, so we enabled it. The funny thing that we didn’t do, that others did, was roll out transcription; we waited until the summer. We said that is table stakes; everybody has transcription. It’s a commodity service at the end of the day. Yes, you could do it better than other folks, but everyone has it. It’s available in-platform at the push of a button.

We said, “We don’t want to deliver another copycat product. What we want to do is be the best when it comes to meeting summaries, the intelligent recaps, the key questions, the action items, the highlight reels.” Then we started to say, “Okay, we’ve got all this, this is great, but how do we activate against it?” That’s a little bit of the advertising and attribution background that I had: how do you activate against it? Interesting data is interesting data, but if you’re able to activate it, it becomes valuable.

So we started to test things like email distribution and Slack distribution where we pushed out the summaries to where people consume the information. We didn’t need to be that central hub for reading the reports. We’re going to send it to wherever you’re used to reading it, and that actually started to gain more traction where people said, “Oh, this is great. After the meeting is done, I get a recap.” Or, “Hey, this is a recurring meeting and you’re going to send me the notes an hour before, so now I can actually review the notes because I forgot why we were going to even do this call.”

Matt: I love that feature. It goes back to what I think was your original inspiration here: really trying to make me personally more productive and the teams that I meet with collectively more productive. It’s this awesome combination of productivity and action, and I think something you like to call connected intelligence. Talk a little bit more about this concept of connected intelligence.

Read AI: The Future of AI Productivity & Intelligent Agents

David: It really plays in with intelligent apps. With intelligent apps, you’ve got that marketplace set up, but when you go to connected intelligence, that’s saying: how do I connect those individual apps so they talk with one another? If I have a meeting about a new product launch, that meeting will generate a report today, and it’ll send it out to everyone, and that’s great, and there’s some actionability there. But what if that meeting then talked to an email that was sent out with follow-ups that said, here’s the deck associated with it, here’s some proposed wording for the specific product launch and timelines? If those two can talk together, it creates a better summary, it creates a better list of action items, and now the action items are updated, where it could go in and say, hey, David was going to send this deck over to the PR team.

Did David actually send it over? Well, the email is going to tell the meeting: yes, David did send that over; check that off the list. That is a completed deliverable. Now the follow-up in the email is: hey, the PR team is going to provide edits associated with this, and it hasn’t been delivered yet; we know, because we’re connected to email. These entities, at the end of the day, are able to talk with one another, and they act as your team. We tried the concept of a team really early on, where it joins your meetings and does things for you. That was the early version.
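The hand-off David is describing, where an action item from a meeting gets checked off automatically once the email entity confirms delivery, can be pictured as a simple reconciliation step. This is an illustrative sketch only; the field names and the substring-matching rule are hypothetical, not Read AI’s actual system:

```python
# Hypothetical sketch: reconcile meeting action items against sent email,
# marking an item done when a matching email confirms it was delivered.

def reconcile(action_items, sent_emails):
    """Mark an action item complete if any sent email mentions its artifact."""
    for item in action_items:
        item["done"] = any(item["artifact"] in email["subject"] for email in sent_emails)
    return action_items

action_items = [
    {"owner": "David", "artifact": "launch deck", "done": False},
    {"owner": "PR team", "artifact": "press edits", "done": False},
]
sent_emails = [{"from": "David", "subject": "Here is the launch deck"}]

for item in reconcile(action_items, sent_emails):
    status = "complete" if item["done"] else "outstanding"
    print(f"{item['owner']}: {item['artifact']} is {status}")
```

In this toy version, David’s deck is marked complete because a matching email exists, while the PR team’s edits stay outstanding — the same check-off behavior the two connected “entities” perform.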

What we’re seeing now with the prevalence of AI is that you can make each one of these things an entity, and these entities can independently talk with one another and deliver content just in time. You’ve got a meeting coming up, and you had an action item for the pre-read; now we’ll look at your Slack messages and your Gmail messages and say, “Hey, these things haven’t been delivered yet, or the client’s going to ask you these three questions. Keep this in mind.” That’s going to create a much more productive interaction at the end of the day: a shorter, more productive interaction.

Matt: No, I love this. It’s kind of going both to some of the things that you’re already doing and also some of the vision of where you’re going with the company. There are all these business processes and all these workflows, and they’re increasingly digitally based. As you point out, it is interconnected between email or Slack or Zoom calls or whatever it might be. I like to think of them as sort of these crossflows. They’re workflows and processes that cut across different applications that I live in. And effectively all those things are different forms of applications. So maybe say a little bit more about where you see this world going and Read AI’s role in it around this vision of connected intelligence.

David: We’ve got two similar visions, and we expect to get there in the next year. One is, let’s say you’re a project manager, and you have a number of different meetings that occur. You have a number of interactions that happen via email. You’ve got internal follow-ups within Slack and Teams, and then you’re updating Asana and Jira tickets. All of those things today are manual. You have to go in and connect the dots: now I’m going to look at the meeting report and look at what was delivered to the client; now I’m going to think about whether this file actually got delivered; then I’m going to go into Asana and check some things off, and then go to the Jira ticket. Those are a lot of steps, and they’re steps that don’t require critical thinking. All the information is there. That connected intelligence is there.

Where Read AI is going is that we’re going to update all of that for you. If you’re a product or project manager, all of that mundane busy work, the moving around of papers, is taken care of. You don’t need to worry about it; that ticket is automatically updated. Then Jira is going to send an update to the people watching that ticket, and it’s going to say, “Hey, this got completed. This is the next step,” because that was discussed in one of those entities, one of those connected apps. You actually inform what the status is.

A bigger problem that comes up is sellers and Salesforce and HubSpot. I remember leading revenue at Foursquare: I could not get my sellers to update Salesforce. I would threaten them and say, “Hey, you’re not going to get paid unless you update it.”

At the end of the day, it didn’t matter how many times I said it, and I was CEO at the time; people were busy. They don’t have time to do it. They’re going to prioritize internally and say, I’m going to close deals; I’m not going to update the books. Where Read AI comes in is that we’re going to update that Salesforce opportunity. We’re going to increase the probability of close based on the information that’s available. By doing that, you’re enabling your sellers to do what they do best and go into market. That’s what Read AI does on the back end: it makes sure everything is up to date, and the entities are talking with one another to say, “Hey, seller, you might want to go ping that client, because it’s been three weeks, normally your cadence is to ping every three weeks, and we see this is missing.”

Exploring Product-Led-Growth and Market Expansion

Matt: I love those prompts and nudges that allow the individual to be more personally productive. I think that’s been one of the great attributes of Read AI. It’s really been a bottoms-up, product-led growth kind of motion. What’s cool, of course, is that I’m using it all the time, and people are like, “Oh, what’s that?” And I get to tell them about it. There’s a neat embedded virality. But how have you thought about PLG, and what have you been learning about how to be successful as a product-led growth company?

David: The approach that we took is a little outside the norm. When you’re a startup, you usually want to focus in on one segment. When we started Read AI, we took a broader approach with support from Madrona; Matt was the one to say, “Hey, we want to go broader.” We wanted to be mass market when it comes to engagement and sentiment and wherever we apply it, because this is a new technology, and we don’t know where it’s going to be used. It took us a while to figure out that product-market fit, but by being broad, we were able to see use cases that we would never have gotten the ability to experience or get feedback on.

I’ll give one example that’s a little more individual and less of a big market, but really impactful. We have a user with dementia, and they reached out to us and said, “This has changed my life, because now when I meet with my family, Read AI joins the calls and takes the notes, and before I meet with my family again online, in Zoom or in Google Meet, I can look at the notes and remember what we discussed.” They actually had us connect with their caregiver to make sure the account was paid for, up to date, and fully set up, because they weren’t able to do that themselves. But they said, “This has changed my interaction with my family.” That was awesome. That wasn’t a market we were going after, but that was great.

We see off-label use cases too. For example, we have government agencies, state agencies, and treasury departments in certain countries that are using Read AI. A lot of times, their usage is bottoms up: they just saw it, and they’re like, oh, this would help me out. What we found is that the bottoms-up approach finds new use cases. One agency that I won’t name has clients, which we’ll call patients, and they go see these patients out in the field.

The old way to do that was they would go to the meeting with a tape recorder, record it, take notes, interact with that person, and then go to the next client, and the next, and the next. A lot of the time, they would spend about one day a week writing the notes and putting them into their patient and client management solution. That was a lot of work. Then we introduced a feature where you could upload your audio and video. They started to record on their phones, upload the audio from the interaction with the client, generate the meeting notes, summaries, action items, and key questions, and just cut and paste that into their patient management solution.

They loved it. One person said, “I do not know how to write a report. I did not learn this in college. This is not what I specialized in. Now that I can use Read AI, I can interact with the clients, which is what I want to do. This is my job, and Read AI will take care of it and upload all that information.” They said this is phenomenal. Then they started to use our tagging feature: “Hey, we’re going to tag individual clients, and now we can see how things are progressing across time, because we don’t just summarize a single meeting, but a series of meetings. Are things improving? Did we answer this question that came up last week? They wanted to know what was going on with this; did we actually deliver an update on that?” Those are things where, a lot of times, we get lost in the noise with the amount of work that we have. Read AI is able to make sure no one is lost in the noise and none of those action items are lost.

Matt: That’s fantastic. I remember back to some of the earlier days of cloud collaboration and how Smartsheet, one of the companies we backed, gosh, 16 or 17 years ago, started out in a very horizontal set of use cases like you’re describing. I think it helps to have a big disruption, whether it was cloud back then or, now, all these capabilities and all these different kinds of models you can use in applied AI to build connected intelligence. I think that’s part of why you can start more horizontally at this point in time and let the market teach you and tell you about different kinds of use cases.

Horizontal v. Vertical

David: The level of understanding is key, especially in this early market because if we had gone too narrow, we would’ve missed out on these opportunities. I can tell you that 30% of our traffic is international. Outside of the US, that traffic is predominantly centralized in South America and Africa. If you said when I first started Read AI, would 30% of my traffic come from South America and Africa, I’d say, “No, that’s not the market that I would expect to adopt AI very quickly and go in and use it in their day-to-day.” What we’re finding is the adoption has been phenomenal where we’re covering 30%, 40% of a university student base where they’re starting to use it and adopt it.

At our peak a couple of weeks ago, 2% of the population of South Africa was using Read AI, not necessarily as the host of the meeting, but Read AI was on a call that had participants in that meeting. Those things get me really jazzed up to say, “Wow, this is something where AI is not just about the enterprise.” There is a clear enterprise opportunity, but it’s about how you help people from a PLG perspective. How do you actually deliver ROI to the oil driller in Nigeria who has to write a report and send it back to China? That’s an actual use case, and they’re using it.

Matt: Wow, amazing set of use cases there. Embedded in that is the ability to do this across languages, and there are all kinds of exciting things that you’ve done and are working on. One of my wrap-up questions is: what are the challenges for what I’ll call a gen-native company like yours, particularly relative to the incumbents, the companies trying to enhance their existing software applications with generative AI capabilities? How do you think about gen-native versus gen-enhanced?

Gen-native v. Gen-enhanced

David: On the gen-enhanced side, if you asked about Read AI’s competitors, a lot of people would say, is it Copilot? Is it Zoom AI Companion? Is it Google Duet AI? For us, it’s not really the case. Those incumbents are educating the market. I’ve been in a market where I had to educate everyone, and that is a very expensive thing to do. These incumbents are educating the market about the value proposition, and people are using it. Of the free users, 80% are going to say, “This is great, this is good enough.” Then there is the audience that’s a little older, like me. If you remember Microsoft Works, that was the cheap version of Microsoft Office. Works was $99; Office was $300. A lot of people used Works, and as they used it they said, “Oh, this is actually pretty good. But when I need to do a pivot table, okay, I need to upgrade to that next version.”

What we’re seeing is there’s this whole new base of users that understand AI and the value, and they’re going in and saying, “I need more. I need specialization. I need cross-platform compatibility where half of our users use Zoom and some other solution, or use Google and some other solution, or Teams and some other solution.” From that standpoint, it has been great to actually get the incumbents to adopt this technology and evangelize it.

What you’re going to see is the Smartsheets of the world come in when it comes to AI. You’re going to see the Tableaus of the world, where there’s an incredibly large market to be had. I think it’s just the start, and this is where the consumer and horizontal play is really big: we are seeing that AI provides value even without an enterprise involved. If you can take that level of education, accelerate it, and show the value of one step above for $15, $20, or $30 a month, that’s a slam dunk. We’re seeing that level of adoption today.

Importance of Model Cocktails

Matt: You’ve got this whole set of cross-platform capabilities. I’ve also been really impressed with the way you’ve used different kinds of models, some that you’ve fine-tuned yourself, and others where you’ve leveraged something like OpenAI, and how you’ve brought those together to get the transcription you were talking about before. I like to call it model cocktails: you’ve mixed a bunch of models together to create these amazing instant meeting recaps and, increasingly, these kinds of connected intelligence workflows.

David: That will be key, because if you rely on only one single model, you become a prompt-engineering company at the end of the day. We’ve seen some competitors, great competitors who of course use our solution, go deep into, “Hey, do you want to pay a little bit more for GPT-4 versus GPT-3.5 versus GPT-3?” For us, that just highlights that you’re too dependent on that one solution. You’re not differentiating, you’re not adding enough value; you’re just exposing the underlying technology that you’re utilizing. It’s been really important to say, “Let’s use a mix of models.”

Transcription is a good example from a language-model perspective. There are some really good models out there, open source as well as paid, but if you rely on only a single one, you have no check on it. If you’re able to layer two on top of each other, you can ask, “How do I stop some of the hallucination that comes up where certain words are totally incorrect?” If you score model one against model two and the variance can’t be more than X, you can start to identify points where a model hallucinates a little bit or goes off the rails. Those are the checks and balances you get when you have multiple models running. Then you bring in your own proprietary model on top of that and ask, “Okay, what other secret sauce can I put into the mix?” I think that is where the market is going to go, more than a standalone model.
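The cross-checking idea David describes can be sketched in a few lines. This is a minimal illustration, not Read AI’s implementation: the two “models” are stand-in transcript strings, and the 0.8 agreement threshold is an arbitrary illustrative choice standing in for the variance bound he mentions.

```python
# Compare the outputs of two transcription models segment by segment and
# flag segments where they diverge enough that a hallucinated or garbled
# word is likely. Model calls are replaced by plain strings here.
from difflib import SequenceMatcher

def agreement(text_a: str, text_b: str) -> float:
    """Word-level similarity between two transcripts of the same segment (0.0-1.0)."""
    return SequenceMatcher(None, text_a.lower().split(), text_b.lower().split()).ratio()

def flag_divergent_segments(model_a_segments, model_b_segments, threshold=0.8):
    """Return indices of segments where the two models disagree beyond the threshold."""
    return [
        i for i, (a, b) in enumerate(zip(model_a_segments, model_b_segments))
        if agreement(a, b) < threshold
    ]

# Toy example: the models agree on segment 0 but diverge on segment 1,
# so segment 1 is flagged for a second look (e.g. by a third model or a human).
a = ["the quarterly report is due friday", "we should loop in the design team"]
b = ["the quarterly report is due friday", "we should loop in a desert team"]
print(flag_divergent_segments(a, b))  # -> [1]
```

In practice the flagged segments would be re-scored by another model or surfaced with lower confidence, which is the “checks and balances” effect of running multiple models.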

Matt: I totally agree with you. Even more generally, apart from your specific area of personal and team productivity, this is going to be a big year around these intelligent applications and applied AI. Where do you see some of those big areas of opportunity outside of your particular category? What are things that you’re excited about in this world of intelligent applications?

Opportunities for Founders

David: From an education standpoint, I think there’s a lot of opportunity. I’ve been talking with a few teachers at different grade levels, and some of them don’t even know OpenAI exists. Some of them are starting to say, “Hey, I’ve heard about this. I think my kids are probably using this, but I don’t have a POV there.” I think there’s an ability to provide personalized, scalable education that’s customized to the student. I’m excited about that as an opportunity, especially as an uncle, to say, “Hey, where you’re strong, we can make adjustments, and where you’re not as strong, we can provide a little bit more focus and the hand-holding that the school system might not be able to provide at any given point in time.” That’s really interesting to me.

When it comes to productivity AI, which is our space, I think there are some really interesting opportunities around things we do every single day, like email. Email could be so much better. There’s the concept of a context window, and these context windows are getting larger and larger. If you have intelligent apps with connected intelligence, those context windows aren’t limited to just email; you can start to bring in other things. The ability to bring in different data sets is going to surface some interesting learnings.

Matt: I love both of those points. In the education domain, there are so many opportunities to be helpful to teachers and more personalized to students. As you point out, we just had Llama 3 announced, and no doubt GPT-5 is just around the corner. Things like larger context windows are going to make the capabilities even more robust. It’s going to be an exciting time ahead. You’ve been an awesome founder and CEO to work with, you’ve got an amazing team, and I’m looking forward to the journey of building Read AI into realizing its fullest potential. Thanks for joining me here today.

David: Absolutely. Matt, you and the team at Madrona have been phenomenal champions, especially for Read AI, and for Placed back when we were just an idea and the market was just starting to form. If you’re listening, this is the opportunity: it’s a great time to build a company. There’s never been a better time, and it’s never been faster or easier to build and scale.

Matt: Well, let’s go build. Thanks very much, David. Enjoyed having you.

David: All right. Thanks, Matt. Appreciate it.
