Podcast

Snorkel Co-founder and CEO Alex Ratner

Posted on June 17, 2022

In this episode of Founded and Funded, we spotlight Intelligent Application 40 winner Snorkel AI. Managing Director Tim Porter not only talks with Snorkel Co-founder and CEO Alex Ratner all about data-centric AI and programmatic data labeling and development, but they also dive into the importance of culture — especially now — and how to take advantage of what Alex calls "one of the most historic opportunities for growth in AI."

This transcript was automatically generated and edited for clarity.

Coral: Welcome to Founded and Funded. This is Coral Garnick Ducken, Digital Editor here at Madrona Venture Group. Today, Managing Director Tim Porter talks to Snorkel Co-founder and CEO Alex Ratner all about data-centric AI and programmatic data labeling — the two core hypotheses Snorkel was founded around. The research behind Snorkel started out as what Alex calls an "afternoon project" in 2015, but it quickly became so much more than that and officially spun out of the lab in 2019. Since then, the company has raised a total of $135 million to continue its focus on easing the burden required to label and manage the data necessary for AI and ML models to work and to extend its Snorkel Flow platform into an entire data-centric programmatic workflow for enterprises. Machine learning models have never been so powerful, automated, or accessible as they are today, but we are still in the early innings of what they can do. IA40 companies are solving issues across the AI/ML stack, but it all starts with clean data. Snorkel has built an incredible platform that taps into human knowledge and dramatically speeds up the data labeling that is necessary for the rest of the pipeline to work. Alex says that even the largest organizations in the world are blocked from using AI when it takes a person or a team months of manual effort to label data every time a model needs to be built or updated. But that's where Snorkel and its Snorkel Flow platform come in. It should be no surprise that Snorkel was one of our 2021 Intelligent Application 40 winners, so I'll go ahead and hand it over to Tim to dive into all of this and so much more with Alex. Take it away, guys.

Tim: Well, it is a real pleasure to be here with Alex Ratner, professor of computer science at the Paul G. Allen School of Computer Science and Engineering at the University of Washington, but, even more relevantly, the Co-founder and CEO of Snorkel AI. Congratulations on being recognized as one of the top 40 most innovative and high-potential companies in the ML/AI space broadly, as voted by a large panel of VCs that are active in the space — none of whom could vote for their own portfolio companies. So, congratulations on that, Alex, and thank you so much for being here today.

Alex: Tim, thank you so much for having me, and we're obviously incredibly excited and, more importantly, humbled by the honor. And I'll note I'm not the professor yet. I'm an assistant professor, so there's still a ways to go. But obviously, I'm very excited about the work that goes into both that on the academic side and Snorkel the company, around what we call data-centric AI.

Tim: A VC once again, slightly over-promoting. I apologize for misspeaking on the title. But being based in Seattle and having a lot of connections with the University of Washington, we're thrilled that, you know, over time that'll be a home for you and you're already doing a lot of things to impact the school there. But let's talk about Snorkel. It's been a company that I've followed for a long time. You and I have known each other for a number of years, Alex. But maybe we could start out by just telling our audience what exactly is Snorkel AI and what problems are you solving for customers?  

Alex: We developed a platform called Snorkel Flow, and it's one of the first — we call it a data-centric and programmatic development platform for AI. I think a lot of people know that AI today involves data, and it has centrally for quite some time. But what we really do is support this new reality that a lot of the success or failure in building and deploying AI applications has to do with the data they learn from. And not just any data, but carefully labeled data — what's called training data — to teach them to do something.

So, we work with all kinds of customers — the top five U.S. banks processing everything from loan documents to customer complaints and conversations in their chatbots, all the way to medical images, network data, all sorts of stuff where basically users are trying to build machine learning models or AI applications that learn to predict or label something. And they do this by training on, or learning from, tons of data that's been labeled with the correct answer. If you look back 5+ years to when we started the project originally at the Stanford AI Lab, if you went out into the field and asked a practitioner, what are you spending your time on? What are you throwing team hours at? It would be all about the machine learning model or the AI application — building some bespoke machine learning model architecture to handle chest x-rays or loan documents or conversational intents. And the data was an afterthought, or it was something that someone else prepared and labeled. You downloaded it from something like Kaggle or ImageNet, and then you started your machine learning. This is what we now call model-centric development, where the data is something exogenous to the process that happens beforehand, from someone else, and you're just iterating on your model. Fast forward a couple of very exciting years in the ML/AI space — a lot of that model development is now, for a staggeringly large range of problems, almost push-button — a couple of lines of code, thanks to some of the great companies and vendors and open-source contributions out there. And it's more powerful and more automated than ever before. But there's always a trade-off. And the trade-off is that these new approaches are much more data hungry. So, the buck has shifted from model development to data labeling and development. And so, what Snorkel does is try to automate that data labeling and development process — make it more programmatic, like software development — writing code, pushing buttons to label and develop your data — and solve that thing that's often the bottleneck in AI today. This is complementary to the model development, which is often much more push-button, or a line or two of open-source code. And this is based on techniques for this kind of data-centric programmatic development that we've developed over the last six and a half, seven years at places including now UW, but also originally back at the Stanford AI Lab, and co-developed and deployed with lots of different tech companies, government agencies, healthcare systems, et cetera.

Tim: This move from model-centric to data-centric — I really like how you frame that. We see that across companies, and tying it back to this notion of intelligent applications — data is really remaking how all applications are made, bringing new data to bear and providing new insights. I think most people also realize that ML is only as good as the training data that you bring to the problem, and this really helps speed and improve that. How the heck do you do it, though? It sounds a little bit like magic — instead of having a person or a subject matter expert label this data, you're able to do it with code. Two pretty hot areas that are talked about in the field — weak supervision and generative models — are two important building blocks in how you make that happen. Maybe you could spend a minute just explaining a little bit about the nuts and bolts and how you do this thing called programmatic labeling.

Alex: Yeah. So, I'll start with the first thing that you said about magic. One of the things I like to anchor on first in demos and customer presentations is that this is decidedly not aiming to be push-button automagic. It's still a human-in-the-loop process, but it's one that we aim to make look more like software development than just clicking one data point at a time to label it. Imagine you're trying to train a model to triage a customer complaint at a bank and maybe flag it as urgent or not urgent, or maybe flag it with a specific regulation that it should be reviewed against. The traditional legacy approach that a lot of machine learning progress is based on is you'd have a bunch of people sit down and click through customer complaints, one at a time, marking each with the correct label. And that's what your model would learn from. And in some ways, you know, what we've been working on is as much about the gross inefficiencies in that process as it is about the clever algorithmic, theoretical, and systems work that we do.

One way to think of it from that perspective is, if you have a subject matter expert sitting there who knows about all of these regulations and is reading these customer complaints, they probably know certain things they're looking for. They have a bunch of domain expertise. They're looking for certain phrases, certain keywords, certain metadata patterns, etc. Why can't you just have them tell that to the model? In Snorkel Flow, they do exactly that. And some of it's through no-code UI techniques, some of it's through heavily auto-suggested, auto-generated techniques where they can even explain something they're looking for and automatically label data that way. So, there's lots of acceleration and automation, but at the core, it's a domain expert using domain knowledge, heuristics, and programs to label the data, versus clicking one data point at a time.
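To make that concrete, here is a minimal sketch of what a labeling function can look like, using the open-source Snorkel library rather than the Snorkel Flow platform itself; the complaint texts, keywords, and urgency labels are hypothetical stand-ins for the bank example above.

```python
# A minimal sketch of programmatic labeling with the open-source Snorkel library.
# The keywords and the urgent / not-urgent labels are illustrative assumptions,
# not an actual bank's heuristics.
import pandas as pd
from snorkel.labeling import labeling_function, PandasLFApplier

ABSTAIN, NOT_URGENT, URGENT = -1, 0, 1

@labeling_function()
def lf_urgent_keywords(x):
    # Domain heuristic: certain phrases usually signal an urgent complaint.
    keywords = ("immediately", "fraud", "unauthorized")
    return URGENT if any(k in x.text.lower() for k in keywords) else ABSTAIN

@labeling_function()
def lf_routine_request(x):
    # Domain heuristic: routine account-maintenance language is usually not urgent.
    return NOT_URGENT if "change my address" in x.text.lower() else ABSTAIN

df_train = pd.DataFrame({"text": [
    "There is an unauthorized charge on my card, please fix this immediately.",
    "I would like to change my address on file.",
]})

# Apply every labeling function to every example. The result is a label matrix:
# one row per data point, one column per labeling function, -1 meaning "abstain".
applier = PandasLFApplier(lfs=[lf_urgent_keywords, lf_routine_request])
L_train = applier.apply(df=df_train)
```

Each heuristic only has to be roughly right and can abstain everywhere else; the denoising step Alex describes below is what turns these noisy, overlapping votes into usable training labels.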

We have customers who will take six months of manual labeling and collapse it into a couple of hours of this kind of programmatic labeling and development. The magnitude of the problem here is that a lot of these projects just don't get tackled. Take that example of customer complaints: you have data that's very private, data that requires specific expertise, and data that's always changing as input data changes and regulations change. We had a series of papers with Google and YouTube on how they were throwing away hundreds of thousands of labels a week before they started deploying Snorkel tech, because even for a company with those kinds of resources, it wasn't scalable to label and re-label every time something changed. So, we're used to a lot of the tip-of-the-iceberg problems, where we're using machine learning to solve very standard problems — is it a cat or a dog, a stop sign or a pedestrian, a positive or negative restaurant review? But there's this whole iceberg under the surface of enterprise and organizational problems that are much, much more difficult to label at scale in this old way.

Tim: It's fascinating. And just to double-click on this point that it's not magic and you still need humans and subject matter experts — you want to start out with a set of ground-truth labels that maybe comes from a human, but then you can use your technology to extrapolate from that to a much larger set of data much, much more rapidly. I think it's also clever how you use other organizational signals to try to come up with those labels in an accurate way.

Alex: Yeah. I like the parallel to software development. You don't write down a bunch of zeros and ones every single time you want to compile a new program — you reuse assets, you use a higher level of abstraction, this higher-level knowledge. And similarly, that's what we're trying to do here. A lot of times, when people try to apply AI in some enterprise setting, the recommendation they get is — okay, great, you built all these things before, you have knowledge bases, you have legacy rules or heuristics, you have experts internally who have all this rich knowledge, you have models, you have all this other stuff — throw it out and start labeling data from scratch, every single time you want to train a new model in a new setting. And what we're saying instead is: no — use all that information. Use those organizational resources, whether it's in a subject matter expert's head — an underwriter, a legal analyst, a government analyst, a network technician, a clinician — and use other models, heuristics, and legacy systems to teach or bootstrap your machine learning model. And the cool thing is you can actually do this without any ground-truth labeled data, although it's often helpful to have a little bit of that.

You'd asked about weak supervision and generative models. A lot of the original work was diving into the algorithmic and theoretical aspects of this problem: okay, if we shift the paradigm from labeling data points one by one and assuming that they're perfectly accurate, that they're ground truth — which is, by the way, a very faulty assumption already — to now having some programs, we call them labeling functions, that are radically more efficient, more auditable, more reusable, more adaptable, but also going to be messier, because they're heuristics. They're not going to be perfectly accurate. They're not going to perfectly cover all the diversity of data out there. They might conflict with each other and have all kinds of other messy aspects. How do you, ideally with formal guarantees, clean and de-noise and integrate those into a training set you can use? That's, in fact, what we've spent half a decade working on: theoretically grounded techniques for using generative models and other approaches to figure out which of these labeling functions to up-weight or down-weight, and how to de-noise and clean them. So, you can take this much more efficient, direct, but somewhat messier input and use it to train high-performance machine learning models.
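As a rough sketch of that denoising step, again with the open-source Snorkel library rather than Snorkel Flow: the LabelModel estimates how much to trust each labeling function from their agreements and disagreements alone, with no ground-truth labels, and combines the votes into probabilistic training labels. The toy label matrix below is hypothetical; in practice it comes from applying labeling functions as in the previous sketch.

```python
# A minimal sketch of de-noising labeling-function votes with the open-source
# Snorkel LabelModel. L_train is a toy label matrix: one row per data point,
# one column per labeling function, with -1 meaning "abstain".
import numpy as np
from snorkel.labeling.model import LabelModel

L_train = np.array([
    [ 1, -1,  1],   # two functions vote "urgent" (1), one abstains
    [ 0,  0, -1],   # two functions vote "not urgent" (0)
    [ 1,  0, -1],   # conflicting votes: the label model has to arbitrate
    [-1,  0,  0],
])

# The label model estimates each labeling function's accuracy (which votes to
# up-weight or down-weight) from the pattern of agreements and disagreements,
# then combines the votes into cleaned, probabilistic labels.
label_model = LabelModel(cardinality=2, verbose=False)
label_model.fit(L_train, n_epochs=500, seed=123)

probs_train = label_model.predict_proba(L_train)  # shape (n, 2); rows sum to 1
preds_train = label_model.predict(L_train)        # hard (most-likely) labels

# These de-noised labels then serve as the training set for a downstream
# machine learning model, which can generalize beyond the hand-written heuristics.
```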

Tim: Are there certain classes of problems or certain types of data or applications that this is best fit for?  

Alex: So, we've applied it to everything from self-driving to genomics to machine reading and beyond, but I think some rules of thumb, in terms of our inbound filtering and our outbound targeting, are around where we think it will provide the biggest delta above other approaches.

One, first of all, is that while we handle structured data, especially messy structured data, we also have a lot of focus on unstructured data. You hear about all these advances in machine learning and AI — all these new deep-learning or representation-learning models that are super powerful, but super data hungry. A lot of them provide the biggest deltas on unstructured data — think text, image, video, network data, PDFs, websites, etc. All this very messy, long-tail data. Bigger data, more data-hungry models, and more need for data-centric AI development. That's often where we sit.

Another rule of thumb is how expensive and difficult it is to actually label and relabel and maintain these big training datasets. If you're talking about something like a stop sign versus a pedestrian — stop signs don't change much, so you just label it once and maybe you can get by with a legacy manual approach there. But if you look at that iceberg under the surface of problems we tackle, think about very private data, very expertise-intensive data — financial, insurance, medical, government, and most industries, honestly; anything with user data, a network, or technology — and also settings where your data is changing and your objectives are changing, so you have to be constantly relabeling. When you have these aspects, suddenly the cost of just throwing people at the problem to kind of click, click, click, you know, a week or a month or longer at a time per model becomes infeasible, literally, for the world's most well-resourced ML teams. And that's where we like to step in to ease that bottleneck.

Tim: You mentioned moving toward even more data-hungry, large deep learning models. To extend that thought, there are these very large-scale transformer models — foundation models — that have gotten a lot of coverage, and then the ability they provide to maybe do one-shot or zero-shot learning on certain problems, or to refine or train them with a smaller set of data for your specific use case. Do you see that as an overall large trend? And does that create more need for programmatic labeling?

Alex: I think it's an extremely exciting trend, although it definitely will take a while to percolate into the enterprise for reasons we can go into — everything from efficient deployment to governance and auditability. But just talking about the tech trend for a second, it's something that we're very excited about. We had a recent webinar and a paper from my co-founder Chris's lab at Stanford on combining these foundation or large language models with weaker programmatic supervision. And then we had another paper we just posted about using zero-shot learning on top of the large language models to automate some of these data-centric labeling and development techniques. So, we're very excited about a whole host of complementary intersection points. And we already support basic pre-trained models of this class and these large language models in our deployed platform. In a nutshell, how I paint it is that a lot of us are now calling them foundation models because they serve as a great foundation for building or training or fine-tuning custom models or applications on top of them.

They do still fall into the general body of transfer learning techniques, and I'll stick to a very basic and old intuition: you get what you train on, right? So, these large language models — let's say they're trained on something like web data. When you actually want to use them to handle geological mining reports or clinical trial documents or loan documents — you know, the list goes on — they don't just magically work out of the box. The zero- or few-shot techniques don't just suddenly solve the problem. You still need to label a bunch of data for the specific data and task at hand to get utility out of them. So, they're very nicely complementary. But there are a lot of very cool, yet somewhat cherry-picked, examples of what they do out there. They still need a lot of additional work to get them to be production level for enterprise use cases. So, there's a nice complement there that we're very excited about.
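One simple way to picture that complementarity: a large pre-trained model can be wrapped as just another noisy labeling function, so its votes get weighted and de-noised alongside the hand-written heuristics. The sketch below assumes the Hugging Face transformers zero-shot pipeline plus the open-source Snorkel library; the model choice, candidate labels, and confidence threshold are illustrative, and this is not the specific recipe from the papers Alex mentions.

```python
# Sketch: using a generic zero-shot model as one more (noisy) labeling function,
# so its votes are de-noised alongside hand-written heuristics. The model name,
# labels, and threshold here are assumptions for illustration only.
from snorkel.labeling import labeling_function
from transformers import pipeline

ABSTAIN, NOT_URGENT, URGENT = -1, 0, 1

# A general-purpose zero-shot classifier pre-trained on broad, web-scale data.
zero_shot = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

@labeling_function()
def lf_zero_shot_urgency(x):
    result = zero_shot(x.text, candidate_labels=["urgent complaint", "routine request"])
    top_label, top_score = result["labels"][0], result["scores"][0]
    if top_score < 0.8:   # low confidence on out-of-domain text: abstain, don't guess
        return ABSTAIN
    return URGENT if top_label == "urgent complaint" else NOT_URGENT
```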

Tim: You get what you train on — it continues to be a truism. Hey, you mentioned the Stanford AI Lab and your partnership with your Co-founder Chris Ré — maybe take us back to the founding story here, Alex. You put a lot of time into this before launching a company. You know, I feel like over the last few years there's been this rush to start the company — venture dollars are plentiful, so get while the getting's good — and you definitely took a longer path to get to the point of saying, "Hey, we're ready to commercialize this."

Alex: Well, it all started with a massive con from my advisor and Co-founder Chris — he suggested this as an "afternoon project."  

Tim: I know there's some lore that this started as some math on a whiteboard, right?

Alex: Yeah, it was math on a whiteboard, and then there was a Jupyter notebook. We were teaching an intro course, and we had just refactored it over the summer to center around Jupyter Notebooks, which was a really cool idea that also led to one of the most traumatic office hour sessions I ever had, where everyone in some massive intro course at Stanford came asking how they could install Jupyter Notebooks on every device imaginable — you know, "My Tesla screen doesn't support Jupyter Notebooks, please help me ASAP" — a five-alarm fire drill. I still remember that. It was a confluence of trends. One trend that was coming in was we had been working on all these systems for things that were more in the model-centric world — feature engineering, model development, joint inference at scale, all these things that are super cool — but we were seeing this trend kind of hitting us in the face from our users, who were, you know, biomedical data scientists, geologists, all kinds of data scientists, saying, "Hey, this is all great, but we're starting to use these deep learning models and these other models, and really our pain point is labeling the data. So could you help us with that?" Like everyone else back then — this is 2015 — we said, "That's not our problem, that's someone else's problem. We do machine learning. So we'll keep helping you with the fancy models." Eventually, after being smacked in the face by this enough times, we said, "Hey, there's actually something here." Everyone is getting stuck on the data and the data labeling and the data curation. Maybe we should look at that. So that was step one. Then we started thinking about more clever ways, looking at old techniques that did some of this, and looking at what our users were doing. Some of our users were getting very creative, hacking together ways to heuristically, ad hoc, label data. We said, okay, first of all, this is so painful that people are doing ungodly contortions just to hack together training sets. So, we said, "Okay, we've got to shift to this data-centric, versus model-centric, realm, because there's something here."

And then, number two, we started asking, okay, how can we support this a bit more? So, we had this idea that we would come up with this kind of Jupyter Notebook where domain experts could quickly dump in some heuristics of how they were labeling data, and we would try to turn that into labeled training data. And that was the "afternoon project" that then spiraled wildly out of control, because it just led to all these interesting problems of, okay, well, how do you solicit information from the subject matter expert? How do you then clean it — because even a subject matter expert is still going to give you rules of thumb that are only somewhat accurate? How do we clean that and model it — that's this weak supervision idea — so that it's clean enough to train a model? Then how do we build this broader iterative development loop that involves this kind of programming of data and then training models, and then getting feedback on where to develop and debug next? That was how it all spun up. And then to your question of why we were so slow — which is a fair one.

Tim: It's a hard problem.  

Alex: Yeah, well, honestly, I mean, we were and are very invested in this problem and in the kind of pathway that we've been charting with this, you know, data-centric direction. And we're always anchored on where's the best place for us specifically to center this effort. And for many years — I'm super biased, but we couldn't ask for a better place than academia and the purview we had at Stanford. We had office hours weekly. We had everyone from major consulting companies to bioinformaticians to legal scholars coming by and trying to use these techniques. We had purview to work on getting the core theory and ideas. And we started to put some ideas out there, some code out there. We started to get some pull, started to get a bunch of people who were trying to get me to drop out of the Ph.D. program with a pre-seed or seed round — I had no idea what the terms meant back then, so it was all nonsense to me. But we thought that we had core problems to work out.

And then, four and a half years in, we started looking at what our users were telling us. And they were telling us things like, "Hey, maybe instead of working on another theorem, which is cool and all, you could help us solve the UI problems or the platform problems or the data management problems or the deployment problems or the feedback and error analysis guidance problems." When that started happening, you know, we started poking our heads up and decided, hey, it had entered the next phase. We were actually moving to a vehicle where we could put together a different set of people with different skill sets to really build a product and a platform and engage more deeply with customers. That was the next phase. So that was when we finally spun out.

Tim: I love that you were pulled by customers and customer-centric in making those decisions. It seems like you nailed the timing for when the market was ready and started to need these solutions on a bigger scale. But there's another piece that you just hit on that I wanted to ask you more about. You know, we've talked a lot about the labeling aspect, and that's certainly the core of the solution that you provide, but Snorkel Flow is a broader framework. Maybe talk a little bit about how that whole loop is important for Snorkel Flow.

Alex: Kind of our whole point, both in research and product, has always been that it can't just be about labeling. When you think of labeling as this separate step in a vacuum, that's where you get these very unscalable and impractical model-centric-only setups.

The idea of data-centric development is that labeling and developing your data — so not just labeling, but sampling, slicing, augmenting, all these things that people do as modern data operations — is your primary development tool, not just to get data ready for models, but to adapt and improve models over time. And so, you have to have that whole loop, otherwise you're flying blind and you're not really completing this idea of data-centric guided development. So, our platform today starts with looking at your data — sampling, labeling more broadly, developing, slicing, augmenting, etc. But then it includes a full AutoML suite, mostly just to give very rapid feedback: where am I successfully training a model, and where do I need to go next to continue this data-centric development? And then you can export the model from our platform. We actually support broader multimodal applications. You can, if you want, just pull out the training data and train your own external models — we're very open — but the core workflow has to include a model in the loop. And it's more feasible than ever before to do that, given all the great modeling technology that's out there in the open source these days.
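To ground that loop, here is a hypothetical end-to-end sketch assembled from open-source pieces (the Snorkel library plus scikit-learn) rather than Snorkel Flow itself; the data, labeling functions, tiny dev set, and simple logistic-regression end model are all toy stand-ins for the platform's labeling, AutoML, and error-analysis steps.

```python
# A hypothetical sketch of the data-centric loop: programmatically label, de-noise,
# train a quick end model for feedback, inspect, then go back and improve the data.
# Built from open-source pieces (Snorkel + scikit-learn), not Snorkel Flow itself.
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from snorkel.labeling import (LFAnalysis, PandasLFApplier,
                              filter_unlabeled_dataframe, labeling_function)
from snorkel.labeling.model import LabelModel

ABSTAIN, NOT_URGENT, URGENT = -1, 0, 1

@labeling_function()
def lf_urgent_keywords(x):
    return URGENT if "immediately" in x.text.lower() else ABSTAIN

@labeling_function()
def lf_fraud_terms(x):
    return URGENT if any(k in x.text.lower() for k in ("unauthorized", "stole")) else ABSTAIN

@labeling_function()
def lf_routine_request(x):
    return NOT_URGENT if "address" in x.text.lower() else ABSTAIN

lfs = [lf_urgent_keywords, lf_fraud_terms, lf_routine_request]

df_train = pd.DataFrame({"text": [
    "Please fix this unauthorized charge immediately.",
    "I would like to change my address on file.",
    "My card was declined at the store yesterday.",
    "Can you update my address before my statement goes out?",
]})
df_dev = pd.DataFrame({"text": ["Someone stole my card, help immediately!",
                                "Please update my mailing address."]})
y_dev = [URGENT, NOT_URGENT]  # a small hand-labeled dev set, used only for feedback

# 1) Label the data programmatically, inspect the labeling functions, de-noise.
applier = PandasLFApplier(lfs=lfs)
L_train = applier.apply(df=df_train)
print(LFAnalysis(L=L_train, lfs=lfs).lf_summary())  # coverage / overlap / conflict stats

label_model = LabelModel(cardinality=2, verbose=False)
label_model.fit(L_train, n_epochs=500, seed=123)
probs_train = label_model.predict_proba(L_train)

# Points no labeling function covered are dropped here; in the loop, they are
# exactly the slices you go back and write new labeling functions for.
df_covered, probs_covered = filter_unlabeled_dataframe(X=df_train, y=probs_train, L=L_train)
y_train = probs_covered.argmax(axis=1)

# 2) Train a quick end model on the programmatic labels for rapid feedback.
vectorizer = CountVectorizer()
end_model = LogisticRegression().fit(vectorizer.fit_transform(df_covered.text), y_train)

# 3) Check where the model is still weak on held-out data, then return to step 1
#    and add or refine labeling functions for those slices. Repeat.
print("dev accuracy:", end_model.score(vectorizer.transform(df_dev.text), y_dev))
```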

Tim: One interesting observation about this space called MLOps now is — I feel like, and sometimes joke, that companies that start out providing one important piece of functionality across this pipeline, for lack of a better term, whether it's labeling or a feature store or deployment, all want to be end-to-end. And I think you just gave some good reasons why: in this data-centric world, you need to be able to close the loop from watching how an application or a model is performing and tie that all the way back to iterating on what's happening with your labeled data. So that's a good reason. But it also seems there's a little bit of just, you know, startup imperialism — that you want to be end-to-end and provide all these pieces.

On the other hand, I think you talk about plugging in other frameworks, other deployment mechanisms, other infrastructure management. It seems like you give customers the choice of, "Hey, you can use Snorkel end-to-end, or plug in your best-of-breed for different pieces." Is that the way you talk to customers about it? And is there a common way that customers tend to engage, or is it really across the board?

Alex: So, maybe three comments that are Snorkel-specific, and then I want to go back to that awesome phrase, startup imperialism. For us, first of all, the core definition of what we've been working on very publicly for over half a decade is this idea of data-centric development, which involves labeling. That's one of several key interfaces — it's one of the ways you can program your model. But it's part of this broader loop that involves a set of development activities and feedback from models, as you said. So that's part of what we've always been supporting and aiming to support. A second thing that's specific to us is that we're often approaching, as I was talking about before, a lot of zero-to-one-type settings where you didn't have a very sophisticated modeling stack because you were blocked on the data. You're not predicting, say, customer churn, where you already have the labels and you're predicting column 20 from columns 1 through 19 — you have just a pile of documents or a pile of chest x-rays or a pile of network flows. And because it's the zero-to-one stage, there's often more of a pull to actually get to an end-to-end solution when you do go from zero to one.

And then the third point is just to touch on what you mentioned, which is giving customers optionality. Our goal is to support a workflow that we've been working to define over the last six or seven years. But how you integrate different pieces into that workflow is something that we're extremely open to. We have a Python SDK that maps kind of one-to-one throughout the whole process to make that really easy. And I think that's critical if you want to play in this space. On the one hand — I'm super biased — I think the most exciting technologies and projects will have an opinion on a workflow that is more expansive than just one little layer. But I think that workflow has to integrate with the space that's out there.

It's an interesting question about startup imperialism and starting off with kind of one slice and then moving toward end-to-end. I think for a lot of folks in the space, there is also just a lot more pull to fill gaps than people may realize. I think if you just skim blog posts and academic papers, you would get a vastly different sense of AI maturity in the enterprise and the market than is actually the case. So, people think we have this very complex, blog-post-defined stack in every enterprise, but because of these problems — data is one of them, and there are others around deployment and risk management, etc. — we're a lot earlier, I think, than many others realize. And so, often companies get pulled because there's actually a bigger gap to fill than people realize.

Tim: That's a great point. And I want to talk more about that. So, you start helping a customer with problem X — maybe it's, you know, the labeling issue here — and they're actually asking you, "Hey, we're using you for this. We don't have good solutions for these other pieces. Now help us deploy, now help us monitor. Okay, now help us close the loop." But that's a customer-pull piece more than it is a high-level architecture strategy decision.

Alex: Yeah. And there's a lot more of this pull in enterprise AI than people realize because there's a lot less maturity than people realize just because there's just so much to do. I think that one of the big challenges from a design perspective is where you draw the line so that you can really focus on what you're uniquely best at. And we try our best to navigate that. We expand to cover this data-centric loop. We often push customers off and try to help them with reference architectures or connectors for pieces that we don't think we have a special sauce around or that we shouldn't spread into.  

Tim: So, on this level of enterprise maturity, we have a thesis that we're really at the beginning of a major wave of ML in production. Over these last several years, we've been coming out of a period of intense experimentation at enterprises — lots of innovation groups working on ML, working on models, seeing where they can build insights, and trying to get their data pipelines together. Cutting-edge companies certainly have been doing ML for years, and in those sophisticated examples, maybe there's been an exponential increase in the number of models in production, not just getting them into production for the first time. But the net effect is we're at the beginning of a pretty big bow wave of ML in production, both for internal applications as well as the external applications that might be your company's products.

So, is that what you're seeing? Like where are we in terms of the innings here?  

Alex: I think we're in the early innings, and I think it's exciting because I don't believe we're in the early innings for lack of extreme concentrations of talent in the enterprise. There are historic levels of access to a lot of the core machine learning techniques, like the models, out there in the open source — more than ever before. And so, you've got all the right ingredients: money has been put down, and there are extremely talented data science and AI/ML engineering teams. You've got a flood of open-source tooling, especially around the models, in the market. But you still have these significant blockers and headwinds that I think enterprises are really just starting to solve. Obviously, the one that we're anchored around is the data. So, I think for that reason, and for everything else that enterprises are very reasonably and responsibly trying to approach carefully — governance, auditability and interpretability, risk management, and deployment — we're still in the early innings. And you see this kind of shift from the science-project phase to the real production phase happening. And it's a really exciting time to be in this space.

Tim: What industry verticals are you having the most success in or focusing on the most, and does that map to this maturity that you're talking about?

Alex: Our technology and our platform support a very broad set of problems — if you look at our publications and literature, we've done anything from self-driving to genomics to machine reading to many other things — but we focus on templatizing around certain core, very horizontal applications. Today we work with highly sophisticated data science teams, and often with the kind of subject matter experts who have the domain knowledge about the problem, in large enterprises across all sorts of verticals. We have a lot of customers — top five, top 10 U.S. banks and others in finance, insurance, biotech and pharma, healthcare, telecom, the government side, and a range of others. So, it's really these cross-cutting applications — things like dealing with unstructured data and classifying, extracting, and performing other modeling tasks over it — that we then templatize and target per vertical, where there are these great, highly sophisticated data science teams that are blocked on the data.

Tim: I've used the term MLOps a few times in this conversation to describe the space, and I noticed you have not. I wonder if you like that categorization. We had another podcast in this series recently where Clem from Hugging Face and Luis from OctoML hypothesized that in a few years there will be no such thing as MLOps — it's just DevOps, and the problems that you have around machine learning deployment and management will be the same as for any other application.

Do you like this category name MLOps, and do you think it has a future as its own thing or does it all converge?  

Alex: I would've liked to be in the room for that debate. I don't know if I'll do justice to what that discussion covered. Okay, well, why haven't I used MLOps? I think it's growing to become a very expansive term, and I don't have anything against it — I just try to keep what we do a little bit more curtailed. I think there are many ways in which MLOps will remain its own thing, and should. I mean, there's a big difference between code that is directly defined versus, essentially, code or programs that derive from large statistical aggregates over massive data sets. That is just fundamentally different in terms of how you build them, how you audit them, how you govern them, how you think about them.

Even the academic methods are very different — think more like formal analysis versus something closer to statistical-physics types of analysis. So, I think there have to be parts that are different, but I think, at the same time, there have to be many ways in which MLOps becomes closer to traditional DevOps and traditional software development. Obviously, that's part of what we're trying to do with data. We're not going to get rid of all of the messy, unique properties of large data sets, but we can at least treat the way they're labeled and managed as more of a code asset and take a more DevOps stance versus this kind of manual activity. So, I guess, in summary, I'm a big believer in pushing MLOps closer to DevOps, and we're, in some sense, doing that at Snorkel, but I also think there are going to remain some aspects that just have to be unique and different, even as they get more standardized, commoditized, and drift closer to DevOps.

Tim: Great points. Great framing. I completely agree with that. Let me switch gears a little bit. I was rereading your website, Alex, and on the About Us page, you had your obligatory description of what the company does and then a "rooted in research" point — we covered that and the cool beginnings at Stanford. And then I was struck that the next big part was about culture. How would you describe the culture at Snorkel? Are you a completely distributed company at this point? How have you continued to build the culture over these last few years, which have been tumultuous, to say the least, with everything going on in the world?

Alex: So, by culture, do you mean what code linter do we use? What's our favorite Slack emoji pack?

Tim: Um, among other things, yes.  

Alex: I'm kidding. It's obviously one of the most important questions — or the most important question — even divorced from the very unique situation we've been in over the last couple of years. It'll sound somewhat vague and cheesy, but one of the most important things — which starts with how we try to recruit and then goes into what we try to enforce and normalize — is this idea that you can have extremely kind, empathetic, friendly people who are also very hard-charging and type A and obsessive about what they build and do, and that you don't need to have one or the other. I think you can find people to work with in any context who are very fun and very kind but maybe won't push as aggressively as you need to in the startup world. Finding people who can do both is the special thing. We always try to look for that intersection.

Of course, there are other extremely important things about building an inclusive, constructive, and positive environment. A lot of it is — again, back to cheesy comments — about the balance of trying to always be extremely positive and supportive, but also normalizing criticism and editorial input as much as possible as a positive, not a negative.

Tim: Are you fully distributed at this point? Is there an office-centric part of this? I'm sure everyone's hybrid to some degree — how does Snorkel work?  

Alex: Yeah. So, we just soft-reopened the Redwood City office for the parts of our team that are there, and we have some parts of our go-to-market team in New York and distributed. We're trying to navigate that in a way that's responsive to what people want to do. We do plan to have some hybrid component and some in-person component. This is kind of an amateur hypothesis, but just from observations over the last couple of years, I think you can do a really good job — and in some ways an even more efficient job — of maintaining one-on-one relationships and small pods over virtual. But you face headwinds for cross-functional interactions and the broader social fabric. It's really hard to schedule a five-minute Zoom meeting on someone's calendar for the kind of bump-into-each-other-at-the-water-cooler or walk-by-your-office-and-overhear moments. There's a good essay that gets passed around to a lot of intro grad students when you start a Ph.D. program, called "You and Your Research." There was one statement I remember there saying that the people who always left their doors closed seemed to be much more efficient but never really got anything done. So, I think there's some aspect of that — you can be much more efficient with everything just back-to-back Zoom calls, and we want to keep some aspects of that, but you also lose some of that creativity, cross-functional interaction, and, of course, social interaction. So, we're going to try our best to navigate a path where we can capture the best of both. And that will be some form of hybrid that we're still figuring out with our team.

Tim: Makes sense. And by the way, we love cheesy comments. I think some of those that might seem cheesy are the things that stick with people. Is there one ritual that you've established over the last few years that just works well for Snorkel that's worth sharing?  

Alex: We started doing these things we call "Whatever You Want" — "WW" — at the beginning of all hands. We used to do it more than weekly at the beginning of the pandemic, but we do it weekly now. It's just a retitling of "Show and Tell" — a couple of slides about any topic you want. It's a nice way to get to meet people who you're not getting to bump into in the hallway and hear a little bit about some aspect of their life — a hobby, where they're from, a recent trip they went on. We did a series on failed past startups. So, just little snippets, and it adds a little bit more of the other dimensions to people beyond the purely professional interaction. So that's one thing that we've liked.

Tim: In this world of hybrid or remote work, using the All Hands effectively becomes really, really important. I did a panel at our CFO conference here with three chief people officers, and the chief people officer from SeekOut had a different but somewhat similar answer to what you just said. They said at their All Hands, they always kick it off with an opportunity for people to celebrate each other — which is something you said was core to your culture too, to be celebratory of each other but still hard-charging. I think those little rituals mean a ton, especially in this world that we've been living in.

Alex: Puns are very important also. One of the things I'm most excited about is that I recently had a second child, and I was informed by the team that I'm now allowed to make two dad jokes per day. So that's been double the fun.

Tim: I have two kids also, and dad jokes and bad puns are right down my alley. So, there's a lot happening in the technology markets, and the public markets have corrected or repriced. You raised this awesome $85 million round last August — that was great timing. I'm sure you have a lot of cash in the bank. Your business also clearly is going well. What is the posture that you and your management team and board are talking about? Is it sort of, let's keep accelerating here as fast as we can bear? Is there a little bit of a "Hey, things are good now, but we're not sure about coming quarters, so maybe we don't want to hire quite as quickly as we originally planned"? What's your posture between sort of the gas pedal and the brake as we go into the back half of this year? I know no one has a crystal ball, but that's a top conversation with all the companies that I'm working with.

Alex: It's certainly an interesting time. Seeing some of it as a return to sanity is obviously, I think, a positive for the space. Those of us who work in AI, especially, are always wary of over-hype leading to winters. I think for us in particular, as you mentioned, we had recently raised a round. Once you raise a bunch of cash in succession, you can either kind of go off the deep end, or you can instill good cultural habits and practices and grow up a little bit as a company. We were always planning to do the latter and grow up a little bit. Obviously, the most important thing is being responsive to our customers, and we see just the same level of demand — and even more so — for a lot of the projects that we try to anchor around with customers, projects that are about increasing efficiencies and adding massive business value. And so, we're still charging ahead at full speed. But we do think it's a good reminder to be mature as a company and value efficiency and have that kind of culture and cadence. And I think it's also a good reminder for the AI space to really — again, this is a little biased because we've been trying to do this from the beginning — focus on the business value rather than the science projects. We spend a lot of effort in our product and in building our go-to-market motion, trying to align with those teams and projects and budgets that are going to deliver a meaningful impact that's robust. And so, I think it's a good validation of that approach.

Tim: Very wise and very consistent with what we're trying to counsel our companies — don't stop being aggressive, but efficiency ultimately also matters. And really inspect the new investments that you're making, because you may want to err on the side of making the runway last even longer.

Alex: Yeah. I mean, we don't want to slow down during one of the most historic opportunities for growth in AI, but I think you can keep going aggressively forward while also taking a nice reminder about the importance of building good, scalable practices, culture, etc.  

Tim: Hear, hear! So, I'd be remiss not to ask: is there a company or two that you think are particularly cool or innovative in the field of ML broadly, whether it's an enabling company or a finished application?

Alex: I may not sound too original because the names already came up, but we're big fans of Hugging Face and OctoML as representatives of those other areas of the ecosystem. What they're doing is very exciting — the evolution around models and around infrastructure — and the fact that those companies exist and those technologies are at the stage of maturity they are is what makes data-centric AI development such a thing.

Tim: I'm sure we could do a whole separate podcast on learnings and tips and advice, but any tips for, maybe, the technical founder? Your best piece of advice that you've gotten on this journey — anything come to mind that you always think of first?

Alex: This is a little specific to data science and AI/ML, but gravitate toward real customer problems and real customer pain. Don't obsess over fitting into the perfect stack diagram or, you know, matching the perfect paradigm of scalability right away — go to where there are real problems, real data, real use cases, and learn from that.

Tim: Terrific — being customer-obsessed, customer-focused, is the most important thing. So, I've got to tell you, maybe as we wrap up here — I'm sitting here, the audience can't see, with my Snorkel T-shirt on. I'm a little bit of a Snorkel fanboy. A few years ago, some website or magazine interviewed me and asked, what is a company you're not an investor in that you're most excited about? And I said, Snorkel. And Alex rewarded me with a box of swag, so I have this T-shirt to show for it. But the other piece that you don't know, Alex, is that there was a pair of socks that you sent me with the very fun Snorkel logo. And the socks were a bit too small for me. My daughter saw them sitting on my desk at my home office, and they became her favorite pair of socks. She plays a lot of basketball. She's in seventh grade. And I just want you to know that in the seventh-grade girls' hoops leagues of Seattle, programmatic data labeling is being represented well with some flashy footwear.

So, thanks for that.  

Alex: I think that's going to be one of our biggest growth markets. We're playing the long game here. So, I'm both incredibly humbled and incredibly appreciative because that's going to be some great long-term value.

Tim: This is terrific. Thank you so much for your time. Congrats on everything you're building at Snorkel. Thanks for the insights for other entrepreneurs and customers who are building in this world of machine learning and intelligent applications. And hopefully, we can do this again sometime.  

Alex: Tim, thank you so much. And this was awesome.

Coral: Thank you for joining us for this IA40 spotlight episode of Founded and Funded. If you'd like to learn more about Snorkel, they can be found at Snorkel.ai. To learn more about the IA40, please visit IA40.com. Thanks again for joining us, and tune in in a couple of weeks for our next episode of Founded and Funded. We'll be spotlighting another IA40 winner next month.
