Writer Co-founder and CEO May Habib

Posted on March 6, 2024

Listen on Spotify, Apple, Amazon, and Podcast Addict | Watch on YouTube.

Today, Vivek Ramaswami hosts May Habib, co-founder and CEO of Writer, a 2023 IA40 winner. Writer is a full-stack generative AI platform helping enterprises unlock their critical business data to deliver optimized results tailored to each individual organization. In this episode, May shares her perspective on founding a GenAI company before the ChatGPT craze, building an enterprise-grade product and go-to-market motion with 200% NRR, and whether RAG and vector DBs even have a role in the enterprise. This is a must-listen for anyone out there building in AI.

This transcript was automatically generated and edited for clarity.

Vivek: So, May, thank you so much for joining us. To kick us off, we’d love to hear a little bit more about the founding story and your founding journey behind Writer. And speaking of which, you founded Writer before the current LLM hype, before ChatGPT, in what we like to call the pre-ChatGPT era, and so much has changed since then. So what is Writer today? What is the product? What are the main value propositions for the customers that you’re serving? We’ll get into the enterprise customers, this great list that you have, but maybe just talk a little bit about what Writer is actually doing and how you’re solving these problems for the customers you’re working with today.

May: Writer is a full-stack generative AI platform. Most of the market sells folks either API access to raw large language models or productivity assistants, and there’s lots of room inside of companies to use both solutions in different use cases. For team productivity and shared workflows, there’s a real need to hook up LLMs to unstructured data, and to do it in ways where there are real guardrails around the accuracy risks, the hallucination risks, the brand risks, and the compliance risks in regulated industries. And for time to value when you’re building things like synthesis apps, digital assistant apps, and insight apps, there’s a real need to capture end-user feedback.

Putting all of that in a single solution, we have drastically sped up the time to market on folks spinning up really accurate, ready-to-scale applications, and doing that in a secure way: we can sit inside of a customer’s virtual private cloud. To be able to build that, we’ve had to own every layer of the stack. We built our own large language models. They’re not fine-tuned open source; we’ve built them from scratch. They’re GPT-4 quality, because you’ve got to be state-of-the-art to be competitive. And we’ve hooked up those LLMs to the tooling that companies need to be able to ship production-ready stuff quickly.

The importance of building your own AI models

Vivek: You talk about how Writer has built its own models, and this has been a big part of Writer’s moat. It’s something we’ll touch on, but one thing we continue to see as so important is being able to own your stack, given how many different technologies we see competing on either end. In terms of building your own models, what did that entail for how you thought about building this company from day one? And how would you describe what Writer has done around building models, and this movement from large models to small models?

May: It’s really hard to control an end-user experience in generative AI if you don’t have control of the model. And given that we had chosen the enterprise lane, uptime matters, inference matters, cost at scale matters, and accuracy really matters. We deemed those things early on to be pretty hard to do if we were going to be dependent on other folks’ models. And so we made the strategic decision to focus on text: multimodal ingestion, text production. We can read a chart and tell you what somebody’s, I don’t know, blood sedimentation rate is because we read it somewhere on the chart, and we can analyze an ad and tell you if it’s compliant with brand guidelines, but we’re not producing imagery. With that focus on multimodal ingestion and text and insight production, we made a strategic call almost a couple of years ago that we’re going to continue to invest in remaining state-of-the-art. Today, our models range from the Palmyra-X general model to our financial services model to our medical model, and they are GPT-4 zero-shot equivalent.

When you pair that with folks’ data, it’s pretty magnificent. Unlike other powerful, state-of-the-art models, this is a 72-billion-parameter model that can actually sit inside somebody’s private cloud and not require a ton of resources. A whole host of things have allowed us to be state-of-the-art and still relatively small. That’s still a massive, internet-scale model, but everything from the number of tokens the models have seen to just how we have trained them has helped us be super-efficient.

Those are all decisions that stem from that first strategic one: really important problems are going to have to be connected to data that folks don’t want to leave their cloud. To do that, we’d have to be in there, and it would have to be a model that could be efficient, so we weren’t going to have a bunch of different skills in one. That’s why we’ve got 18 different models, with similar types of training data, not too dissimilar, but the skills that they are built for are different.

The role of vector databases and RAG in enterprise AI

Vivek: One point you made here makes me think of a LinkedIn post you recently wrote, illuminating in many ways. You talked about unstructured data and where Writer can go using its models. You sit inside an enterprise and take advantage of the enterprise’s data, which is so important. This is something we hear a lot from companies: they need to be able to use their own data securely and efficiently when feeding it into these models. We’re hearing a lot about RAG, Retrieval-Augmented Generation, and a lot about vector databases, and a number of them have caught fire. We’re seeing a lot get funded, and I’m sure a number of the founders who listen to this podcast have either used or played with a vector DB. You have an interesting perspective on RAG and vector DBs, especially from the enterprise perspective. Please share a little bit about the post you wrote and the perspective that you have on this tech.

May: I don’t want to be like the anti-vector DB founder. What we do is an output of just the experiences that we’ve had. If embeddings plus a vector DB were the right approach for dynamic, messy, really scaled unstructured data in the enterprise, we’d be doing that, but at scale, it didn’t lead to outcomes that our customers thought were any good. For a 50-document application, say a digital assistant where folks are querying across 100 or 200 pages across a couple of things, the vector DB embedding approach is fine. But that’s not what most folks’ data looks like. If you’re building a digital assistant for nurses who are accessing a decade-long medical history against policies for a specific patient, against best practice, against government regulation on treatment, against what the pharmaceutical company is saying about the list of drugs that they’re on, you just don’t get the right answers when you are trying to chunk passages and pass them through a prompt into a model.
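To make the pattern May is critiquing concrete, here is a minimal sketch of the chunk-embed-retrieve pipeline in Python. Everything in it is illustrative and assumed, the stubbed embed() most of all; this is not Writer’s stack or any particular vendor’s API.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Stand-in for a real embedding model; deterministic per string.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

def chunk(doc: str, size: int = 500) -> list[str]:
    # Naive fixed-size chunking: the step that severs cross-document context.
    return [doc[i:i + size] for i in range(0, len(doc), size)]

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query: str, docs: list[str], k: int = 3) -> list[str]:
    # Embed every chunk, rank by similarity to the query, keep the top k.
    chunks = [c for d in docs for c in chunk(d)]
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(embed(c), q), reverse=True)[:k]

# The winning chunks are pasted into the prompt; anything that depends on
# relationships across chunks is never seen by the model.
context = retrieve("What treatments conflict with this patient's medications?",
                   ["...decade-long medical record...", "...treatment policy..."])
prompt = "Answer using only this context:\n" + "\n---\n".join(context)
print(prompt)
```

The failure mode shows up in the last step: the model only ever sees the top-scoring fragments, so an answer that depends on connecting facts scattered across many documents has no single chunk to be retrieved from.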

When you’re able to take a graph-based approach, you get so much more detail. Folks associate words like ontologies with old-school approaches to knowledge management, but especially in the industries we focus on, regulated markets like healthcare and financial services, those have really served organizations well in the age of generative AI, because they’ve been huge sources of data that let us parse through their content much more efficiently and help folks get good answers. When people don’t have knowledge graphs built already, we’ve trained a separate LLM, one that’s seen billions of tokens, a skilled LLM that actually builds up those relationships for them.
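Here is a toy sketch of why a graph of explicit relationships behaves differently. The triples and entity names are invented for illustration, and a real system would extract them with a model, as May describes, rather than by hand:

```python
from collections import defaultdict

# Facts stored as (subject, relation, object) triples, the basic shape of a
# knowledge graph. All entities below are invented for illustration.
triples = [
    ("patient_42", "takes", "warfarin"),
    ("patient_42", "has_condition", "atrial_fibrillation"),
    ("warfarin", "interacts_with", "ibuprofen"),
    ("ibuprofen", "restricted_by", "hospital_policy_7"),
    ("hospital_policy_7", "cites", "government_regulation_12"),
]

graph = defaultdict(list)
for subj, rel, obj in triples:
    graph[subj].append((rel, obj))

def expand(entity: str, depth: int = 4) -> list[tuple[str, str, str]]:
    # Walk outgoing edges up to `depth` hops, collecting every connected fact.
    facts, frontier = [], [entity]
    for _ in range(depth):
        next_frontier = []
        for node in frontier:
            for rel, obj in graph[node]:
                facts.append((node, rel, obj))
                next_frontier.append(obj)
        frontier = next_frontier
    return facts

# Four hops from the patient chain drug -> interaction -> policy -> regulation.
for fact in expand("patient_42"):
    print(fact)
```

Because the relationships are first-class, a query can follow the chain from patient to drug to policy to regulation, exactly the kind of multi-source answer that similarity search over isolated chunks struggles to reassemble.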

Vivek: You were saying that you don’t want to be the anti-vector DB founder, and I don’t think it’s anti-vector DB; I think the point is that chunking and vector DBs work for specific use cases. What was interesting about your post was that, hey, from the enterprise perspective, you need a lot more context than what chunking can provide. This is helpful because many of the founders or companies working in narrow areas don’t often see the enterprise perspective, where all of this context matters versus some chunking. You probably got some interesting responses.

May: Both customers and other folks were like, “Thank you so much. I sent that to my boss. Thank God.” I’m a non-technical person, so when I explain things to myself, I try to share them with other non-technical folks so that they can also understand them, and that actually helps technical people explain things to other technical people.

We got a lot of folks reaching out with thanks. Now, of course, our customers know this already. Still, we’re in a market-maturation phase, where the underlying techniques, algorithms, and technologies matter to people because they seek to understand. In a couple of years, when this is a much more mature market, people will be talking solution language. Nobody buys Salesforce and is like, so what DB is under there? What are you using? Can I bring my own? But we’re not there in generative AI. And I think that’s a good thing, because you go way deeper into the conversation.

People are much better at understanding natural limitations. Nobody is signing up for things that just aren’t possible. The other side of this conversation being so technical is that there are people who don’t know as much as they would like to and are now afraid to ask questions. We’re seeing that a little bit, especially in the professional services market, where folks need to come across as experts because they’ve got AI in all their marketing now, so it’s much harder to have real, grounded conversations.

Navigating the challenges of enterprise sales in AI

Vivek: The commercial side is interesting because there are so many startups in AI, and so many technical products, technical founders, and companies, but not many of them have actually broken into commercial sales. Even fewer have broken into enterprise sales. I know Writer has customers like L’Oreal and Deloitte and a number of others, some of which we don’t really associate with being first movers in tech, and especially first movers in AI. So maybe you can take us through a little bit of how Writer approaches the commercial aspect of things in terms of getting all of these AI solutions into the hands of enterprise users. Take us through the first few customers that Writer was able to get and how you broke in. What was the story behind that?

May: Our first company sold to big companies. In the machine translation and localization era, Waseem and I sold to large enterprises. We started off selling to startups, and then, I can’t remember, someone introduced us to Visa, and we were like, oh, that’s an interesting set of problems. That was probably Qordoba circa early 2016. And so for three solid years, we penetrated enterprises with a machine translation solution that hooked up to GitHub repos, and it was great. We just learned a ton about how companies work, and it really did give us this cross-functional bird’s-eye view of a bunch of processes, because when you think about a company going into a new market, they take a whole slice of their business cross-functionally, and that now has to operate in that language. And once you’re in kind of $100 million cost-takeout land, it is really hard to go back to anything else.

Our mid-market deals are six figures, and it’s because of the impact that you can have. Now, it does mean that it’s harder to hire, so yes, we’re under 200 people. I’d love to be 400 people. But we’re interviewing so many folks, dozens and dozens for every open role because you really have to have a beginner’s mindset and just this real curiosity to understand the customer’s business. No customer right now in generative AI wants to have somebody who’s learning on the job on the other side of the phone. And the thing is, in generative AI, we’re all learning on the job because this is such a dynamic space, technology is moving so fast, the capabilities are coming so fast. Even we were surprised at how quickly we got to just real high quality. We launched 32 languages in December, GA in Jan, and it was like, whoa, I really thought it would be a year before we were at that level of quality.

All to say, we need people who can go really deep. Enterprise sales requires everybody in the company to speak customer, and not generic customer: if you’re talking to CPG, it’s a different conversation than in retail, and a different conversation in insurance. It means really understanding how our customers see success. And it’s not this use case or that use case; that’s a real underutilization of AI when you think about it that way. It’s about what business outcomes they’re trying to achieve, and really, not just tying it to getting the deal done, but actually making that happen faster than it would without you. That’s what the whole company has to be committed to.

Hiring for GenAI

Vivek: How do you find that profile? Technology is moving so fast that we’re not experts; many of us are learning on the job and learning as things come through. At the same time, you have to find a terrific sales leader or an AE, someone who not only understands AI and the product but understands and can speak to enterprises. So hiring is difficult. How do you find that person, or are there certain qualities or experiences you look for that you think are the best fit for the sales culture and group that you’re building at Writer?

May: I would start with the customer success culture, because it was hard to get to the right profile there. We believe in hiring incredibly smart, curious, fast-moving, high-clock-speed people. And we’re all going to learn what we need to learn together. So there’s no like, oh, that was the wrong profile, fire everybody, and let’s hire the new profile. We don’t do that. What I mean by profile is what we need folks to do. And, of course, over time, you refine the hiring profile so you don’t have to interview so many people to get to that right set of experiences and characteristics. On the post-sales side, we’re looking for end-to-end owners. In customer success, it can be easy for folks to be happy that they got their renewal, or that they’re over the DAU/MAU ratio they need to be at, just going through a check-the-box exercise. We have a 200% NRR business, and it’s going to be a 400% NRR business soon.
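For readers newer to the metric: net revenue retention (NRR) measures how much revenue from an existing customer cohort grows or shrinks over a year, expansion included, so anything above 100% means existing customers spend more over time. A quick sketch of the standard calculation, with invented numbers rather than Writer’s:

```python
# Net revenue retention (NRR) over a trailing 12 months, per the standard
# definition. Every dollar figure here is invented for illustration.
starting_arr = 1_000_000  # ARR from this customer cohort a year ago
expansion    =   400_000  # upsells and seat growth within the cohort
contraction  =    50_000  # downgrades
churn        =    30_000  # revenue from customers who left entirely

nrr = (starting_arr + expansion - contraction - churn) / starting_arr
print(f"NRR: {nrr:.0%}")  # -> NRR: 132%; a 200% NRR cohort doubles its spend
```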

And that doesn’t happen unless you are maniacally focused on business outcomes. This is a no-excuse culture, and it’s necessary in generative AI because the gates come up all over. Matrixed organizations are the enemy of generative AI, because how do you scale anything? The whole point of this transformation is that intelligence will now scale, and most organizations are not set up for that. As a CSM, you have to navigate that with our set of champions and our set of technical owners inside of the customer, and that just requires real insistence, persistence, and business acumen from a relationship builder who’s also a generative AI expert. So it’s a lot. And then on the sales side, it’s definitely the generative AI expertise, combined with the hyperscaler salespeople’s swagger. And yet we don’t hire from the hyperscalers.

We interviewed a bunch of folks, but at a hyperscaler it’s like a guaranteed budget item and a guaranteed seven-figure salary for those salespeople. Obviously, the brands are humongous, and the events budgets are humongous, so it just hasn’t been the right profile. We have loved the swagger. When you can talk digital transformation and you’re not stuttering over those words, there’s just a confidence that comes across. Interviewing lots of different profiles has helped us come up with ours, and it is growth mindset and humility, but real curiosity that does end up in a place of true expertise and knowledge about the customer’s business and the verticals we have bet on; that verticalization is going to extend all the way into the product as well soon. My guess is most folks who are building go-to-market orgs in generative AI companies are doing more in a year than other companies do in five, because our buyer is changing and the process is changing. It’s a lot of work streams.

Vivek: It’s a lot. And I think I heard you drop the number 200% NRR or something in there. I want to make sure I heard that correctly because that’s really incredible. And so hats off to the team that’s-

May: 209.

Vivek: That’s basically the top 1% of the 1%. It’s interesting to contrast that with other AI companies we’ve seen in the last 12 or 18 months. We’ve all heard stories of others, probably not enterprise-focused GenAI products, where the term GenAI wrapper has been thrown around, and a lot of them have focused more on consumer use cases. They’ve had incredible early growth, and then in the last six to 12 months, we’ve also seen a pretty rapid decrease or a lot of churn. Is that something you all had thought about early on in terms of the focus of Writer? Did you think about that early as a founder trying to see what worked?

Creating a generative AI platform

May: Around ChatGPT time, I think there was a fundamental realization among our team, and we wrote a memo about it to everybody and sent it to our investors, that really high-quality, consumer-grade multimodal was going to be free. It was going to go scorched earth. That was clear, and it has come to pass. The other truths we thought would manifest, which we wrote about 15 months ago, were every system of record coming up with AI adjacencies, and the personal productivity market being eaten up by Microsoft. And so for us, what that really meant was: how do we build a moat that lasts years while we deepen and expand the capabilities of the generative AI platform? What was already happening in the product, multifunctional usage right after somebody had come on, we were basically able to use to position as horizontal from the get-go.

That got us into a much more senior conversation, and we worked to buttress the ability of our generative AI platform to be hybrid cloud. We’ve got this concept of headless AI, where the apps that you build can be in any cloud, the LLM can be anywhere, and the whole thing can be in single-tenant or customer-managed clouds, which has taken 18 months to come together. We will double down on enterprise, security, and state-of-the-art models. That’s what we’re going to do, and we’re going to do it in a way that makes it possible for folks to host the solution. I think even those companies have reinvented themselves, and a lot of respect for them. But the difference is that in a world of hyperscalers and scorched earth, where all the great things OpenAI is doing are super innovative and every other startup is trying to get a piece, the bar for differentiation went way up 15 months ago for everybody.

Vivek: Hats off on navigating the last 15 to 18 months the way that you and the team have, because it’s incredible to see compared to a lot of the other companies, on both the big side and the small side, incumbents and startups that are all challenging different parts of the stack. Two questions for you that are more founder-related. Let’s start with the first one: what is a challenge that came up unexpectedly, call it in the last six months, that you had to navigate, and how did you navigate it?

May: More than half the company today wasn’t here six months ago. Six months ago, we had just closed the Series B. And I think in the last few months, it’s been just this transition from running the company to leading the company, if that makes sense. Then working with the entire executive team around naming the behaviors, the inclinations, the types of decisions that we wanted everybody to be making and to be empowered to make, and then running a really transparent org where information gets to everybody pretty quickly.

We’ve got a very competitive market that’s really dynamic, that is also new. Signal-to-noise has to be super high or else everybody would end up spending 80% of their day reading the news and being on Twitter. We needed folks to make the right decisions based on the latest insights, the latest things customers and prospects were telling us, the latest things we were hearing, latest things product was working on, and all those loops had to be super tight.

Vivek: Execution and speed matter, especially in this space.

May: Yes, and execution while learning. I think it’s easier if you’re like, all right, Q1 OKRs, awesome, see you at the end of Q1. But this is a really dynamic space, and the hyperscalers are militant. This is really competitive.

Vivek: All right, last one for you, what’s the worst advice you’ve ever gotten as a founder?

May: The worst advice that I have taken was early in the Qordoba days: hiring VPs before we were ready. It felt like a constant state of rebuilding some function or other on the executive team. That’s such a drain. We have an amazing executive team; we’ve got strengths, we’ve got weaknesses, and we’re going to learn together. This is the team. And it’s why we now take so long to fill every gap. We’ve still got a head of international, a CFO, and a CMO to hire, and we’re going to take our time and find the right fit. But those were hard-won lessons. The advice that we got recently that we didn’t take was to not build our own models. And I’m really glad we didn’t take that advice.

Vivek: I was wondering if that might come up, because you’re right: we see so much activity around saying, hey, build on top of GPT, build on top of this open-source model. It works for some sets of companies, but as you say, thinking about technology and IP moats from the very beginning is only going to help you in the long run. Well, thank you so much, May. Congrats on all of the success at Writer so far. I’m sure the journey’s only beginning for you, and we’re excited to see where this goes.

May: Thank you so much, Vivek. For folks listening, we’re hiring for literally everything you might imagine. So I’m May@writer, if this is interesting.

Vivek: Perfect. Thanks so much.
