Podcast

Hugging Face CEO Clem Delangue and OctoML CEO Luis Ceze

Posted on May 4, 2022

This week on Founded and Funded, we spotlight our next IA40 winners – Hugging Face and OctoML. Managing Director Matt McIlwain talked to Hugging Face Co-founder and CEO Clem Delangue and OctoML Co-founder and CEO Luis Ceze all about foundation models, diving deep into the importance of detecting biases in the data used to train models, as well as the importance of transparency and the ability for researchers to share their models. They discuss open source, business models, and the role of cloud providers, and debate DevOps versus MLOps, something Luis feels particularly passionate about. Clem even explains how large models are to machine learning what Formula 1 is to the car industry.

This transcript was automatically generated and edited for clarity.

Coral: Welcome to Founded and Funded. This is Coral Garnick Ducken, Digital Editor here at Madrona Venture Group. And this week we're spotlighting two 2021 IA40 winners. Today Madrona Managing Director Matt McIlwain is talking with Clem Delangue, Co-founder and CEO of Hugging Face, and Luis Ceze, Co-founder and CEO of OctoML. Both of these companies were selected as top-40 intelligent applications by over 50 judges across 40 venture capital firms. Intelligent applications require enabling layers, and we're delighted to have Clem and Luis on today to talk more about the enabling companies they co-founded, which can work in tandem and are both rooted in open source.

Hugging Face is an AI community and platform for ML models and datasets that was founded in 2016 and has raised $65 million, and OctoML is an ML model deployment platform that automatically optimizes and deploys models into production on any cloud or edge hardware. OctoML spun out of the University of Washington and is one of Madrona's portfolio companies. Founded in 2019, OctoML has raised $133 million to date.

I'll hand it over to Matt to dive into foundation models, the importance of detecting biases in the data used to train models, as well as the importance of transparency and the ability for researchers to share their models. And of course, how large models are to machine learning what Formula 1 is to the car industry. But I'll let Clem explain that one. So, I'll hand it over to Matt.

Matt: Hello, this is Matt McIlwain. I'm one of the Managing Directors at Madrona Venture Group. So, let's dive in with these two amazing founders and CEOs. I want to start with a topic that's important not only historically in software, but certainly relevant in some new and different ways in the context of intelligent applications, and that is open source. Luis, I know your company, OctoML, plays on top of the open-source work that you and your team built with TVM. How do you think about the distinction between OctoML's role versus TVM's?

Luis: Just to be clear, the OctoML platform is really an automation platform that takes machine learning models to production. That involves automating the engineering required to take your model, tune it for the right hardware, make the right choices across the other pieces of the ecosystem, and then wrap it up into a stable interface that can be deployed in the cloud and at the edge.

And TVM is a piece of that, but TVM is a very sophisticated tool that is usable by, I would say, machine learning engineers in general. So, the platform automates that and makes it accessible to a much broader set of skill sets, a much broader set of users, and it also pairs TVM with other components of the ecosystem. For example, deciding when you should use a certain hardware-specific library is something that we automate as well. What we want in the end is to enable the folks and teams deploying machine learning models to treat ML models as if they were any other piece of software. So you don't have to worry about how you're going to tune and package a model for a specific deployment scenario. You have to think about that very carefully today with ML deployment. We want to automate that away and make it fully transparent and automatic.

So why did we make Apache TVM open source? One of the things that TVM solves is what we call the matrix from hell. If you have a bunch of models and a bunch of hardware targets, and you're mapping any model onto any hardware, that requires dealing with a lot of diversity, right? What better way to deal with the diversity of these combinations of models and hardware than having a community that is incentivized to do it? For model creators and framework developers, using TVM gives them more reach to hardware. So, creating this incentive, with folks participating and putting all hands on deck to build this diverse infrastructure, is a perfect match for open source. So TVM is, and will always be, open source, and we're very grateful for that.
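
To make the "matrix from hell" concrete, here is a minimal sketch of compiling one model for one hardware target with the open-source Apache TVM Relay API. The ONNX file name, input name, and input shape are placeholders; swapping the `target` string covers another cell of the model-by-hardware matrix.

```python
# Minimal sketch: compile one model for one hardware target with Apache TVM.
# "model.onnx", the input name, and the shape are placeholders for your own model.
import onnx
import tvm
from tvm import relay

onnx_model = onnx.load("model.onnx")
mod, params = relay.frontend.from_onnx(onnx_model, shape={"input": (1, 3, 224, 224)})

# Pick a hardware target; swap "llvm" for e.g. "cuda" to cover a different
# cell of the model-by-hardware matrix.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

lib.export_library("compiled_model.so")  # deployable artifact for that target
```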

Matt: Clem, frame for us a little bit how you've thought about open source, and how you've thought about it in the context of your marketplace.

Clem: Basically, at Hugging Face, we believe that machine learning is the technology trend of the decade, that it's becoming the default way of building technology. If you look at it like that, you realize it's not going to be the product of one single company; it's really going to take the collaboration of hundreds of different companies to achieve that. So that's why we've always taken a very open-source, collaborative, platform approach to machine learning.

A little bit like what GitHub did for software, meaning becoming this repository of code, this place where software engineers collaborate, version their code, and share their code with the world. We've seen that there was value, thanks to the usage of our platform, in doing something similar but for machine learning artifacts — so for models and data sets. What we've seen is that by building a platform, by being community first, we've unlocked, for the now 10,000 companies using us, the ability to build machine learning better than what they were doing before.
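
For readers who haven't used the platform, here is roughly what pulling a community-shared model from the Hub looks like: a minimal sketch using the `transformers` library, with one popular checkpoint id as an example.

```python
# Minimal sketch: load and run a model shared on the Hugging Face Hub.
# The checkpoint id below is just one example of a community-shared model.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
print(classifier("Open-source collaboration makes machine learning better."))
# -> [{'label': 'POSITIVE', 'score': ...}]
```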

Matt: So, Clem, that's really interesting. Maybe just to build on that last point. When people are trying to use these models, there is often some kind of underlying software that's involved with the building, the training, the leveraging of the model. There's also datasets — some that are open public data sets, some that are not. So, in that context, how do you all work with both the software and the data set elements that are more or less open in terms of leveraging your platform? 

Clem: Yeah. So, something that we were pretty convinced about since we started working on this platform three years ago is that for it to work and really empower companies to build machine learning, it had to be extensible, modular, and open. We don't believe in this idea of providing one off-the-shelf API for machine learning — like having one company doing machine learning and the rest of the world not doing machine learning. It can be useful for a subset of companies, but the truth is, at the end of the day, most companies out there will want to build machine learning. So, you need to give them tools that fit their use cases, that fit their existing infrastructure, and that can be integrated with the parts of the stack they already have.

So, for example, on private versus public: we give companies the choice of which parts they want to be private and which parts they want to be public, and what's interesting is that it usually evolves over time in the machine learning life cycle. At the beginning of a machine learning project, what you want to do is maybe train a new model on public data sets, because they're already available and already formatted the right way for your task. That gets you to a minimum-viable-product model really fast. Then once you've validated that it could be included in your product, you can maybe switch to private data sources and train a model that you're going to keep only for your company and keep private. Maybe you use that for one year, two years, and then you're like, okay, now I've used it a lot and we're comfortable sharing it with the world, and you move your model into the public domain to contribute to the whole field. It's really interesting to see the timeline on these things and how the lines between public and private are probably much blurrier than we might think looking at it from the outside.
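
A rough sketch of that life cycle, assuming the `datasets` and `transformers` libraries: start from a public dataset and a public checkpoint to reach a minimum viable model, then point the same code at private resources later.

```python
# Minimal sketch of the public-to-private workflow described above.
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# 1. MVP stage: a public dataset and a public pretrained checkpoint.
dataset = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

# 2. Later stages: the same calls can point at a private dataset or a private
#    model repo (e.g. load_dataset("my-org/private-data", use_auth_token=True))
#    without changing the surrounding training code.
```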

Matt: That's super interesting. At one level, that delineates between the public data sources that presumably people are free to use and the private data sources, which might have some proprietary usage rights and permissions. Maybe one other level in there is: I want to know what data was used in my model. So, there's this data lineage piece. How do you help people with that topic?

Clem: So, we have a bunch of tools. We have a tool called the data measurement tool that is very useful for trying to detect biases in your data, which is a very important topic for us.

We have someone called Dr. Margaret Mitchell, who co-created and co-led the machine learning ethics team at Google in the past, and who created something called Model Cards, which are now adapted to data too and which are a way to bring more transparency into the data. For me, that's actually even more important on the data side than on the model side, because if you look today at a lot of the NLP models, for example, if you look at BERT, it's incredibly biased, right? Take a simple example: you ask the model to predict the next word when you say "Clem's job is" or "Sofia's job is." You'll see that the word that is predicted is very different depending on whether the first name is male or female. On the woman's side, the first prediction of a BERT model is "prostitute," which is incredibly offensive and incredibly biased. So, it's really important, I feel, that today in our field we just acknowledge that, that we don't try to sweep it under the rug, and that we build transparency tools and bias mitigation tools so we're able to take that into account and make sure we use this technology the right way.
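
The kind of probe Clem describes is easy to reproduce with a fill-mask pipeline. A minimal sketch follows; the exact top predictions will vary with the checkpoint, and the names are just illustrative.

```python
# Minimal sketch: probing a masked language model for gendered differences
# in its top predictions.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

for name in ["Clem", "Sofia"]:
    predictions = fill(f"{name}'s job is a [MASK].")
    print(name, [p["token_str"] for p in predictions[:3]])
```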

Matt: Yeah, that's incredibly powerful, and it helps illustrate that beyond the first set of challenges of building machine learning models, there are these second- and third-order challenges that are going to be hard to tackle for a long time to come but that are important, as you point out, to put on the table, acknowledge, and work on.

Luis, I'm curious: you referenced this data engineer as your initial customer. Can you tell us a little bit about what you're learning about the state of these customers and who this data engineer is? Who else might be the key decision-makers and users? Let's even put aside paying for your stuff, and just think about wanting to use it.

Luis: I wouldn't necessarily call them data engineers. It's more like ML engineers or ML infrastructure engineers. Those are the folks who think about how to deploy machine learning models today. But what we want is for any software developer to be able to deploy machine learning models and use their existing DevOps infrastructure and existing DevOps people, right? We are learning a bunch of things from them. First is that it's just incredibly manual. There's something that we call the handoff problem: going from a model created by data scientists, or whoever creates that model, to something that's deployable today involves many steps that are done by humans.

For example, turning a model into code is one step that's done by hand. Then after that, just figuring out how you're going to run it and where you're going to run it is something that requires a lot of experience with system software tools. If you're going to deploy on Nvidia, you have to use a certain set of tools. If you're going to deploy on Intel CPUs, you're going to have to use another set of tools.

That's done by different people, and different companies and different customers have different names for them. Some of those are sophisticated DevOps engineers. Some companies call them machine learning infrastructure engineers, and as the maturity of ML deployment increases in these companies, I'm sure there will be a common name across them. But honestly, if you talk to 10 customers, you're going to hear more than 10 names for those people.

Matt: Is this the same entry point for you, Clem? 

Clem: Yeah. What's interesting to me, and I was thinking about this the other day, is that if we want to make machine learning the default way of building technology, like software 2.0 in a way, it's interesting to look at how software became democratized. If you think about software maybe 15 or 20 years ago, and who was building software, you realize that software got adopted really fast, but if there was one thing that was limiting, it was how to train a software engineer. Because it's hard: taking someone who was a consultant before, or was working in finance, and training them to become a software engineer is hard work. It's not something they're going to do really fast. What's beautiful with machine learning is that this wave of education of software engineers almost created the foundation to go much faster on machine learning, because turning a software engineer into someone who can do machine learning is much faster. For example, with the Hugging Face course, which takes a few hours to complete, we see software engineers starting the course and, at the end of it, being able to start building machine learning products, which is pretty amazing. So when you think about the future of machine learning and the rate of adoption, one of the reasons I'm super optimistic is that I think it's not crazy to think that, maybe in four or five years, we might have more people able to build machine learning than there are software engineers today. I don't really know what we're going to call them. Maybe they're still going to be called software engineers. Maybe they're going to be called machine learning engineers? Maybe they'll have another name.

Luis: Maybe just application engineers. Because applications all have intelligent components, it should just be application engineers, right?

So, Matt, I have a bunch of questions for Clem too. Let me know when we can ask each other questions here.

Matt: Let me ask one question of you and then you can go. You've shared with me a few times that you think this whole construct of MLOps, which I guess arguably today is the cousin of DevOps, is just going to go away. And maybe this gets back to the question of what we're going to call the people. It doesn't really matter; maybe they're all application engineers over time. Do you see MLOps and DevOps merging, or is MLOps just automated away? What's your vision around that, Luis?

Luis: To be very clear for the rest of the audience here: creating models, or arriving at a model that does something useful for you, is very distinct from how we've been writing software so far. And to Clem's point, he put it very well. That part, I don't know what name it has, but I do not include it in MLOps. By MLOps, I mean: once you have a model, how do you put it into operation and manage it? Whenever I look at that part closely today, it involves turning a machine learning model into a deployment artifact, integrating the machine learning model and its deployment with the regular application life cycle, like CI/CD and so on, and even monitoring a machine learning model once it's in deployment. All of that, people call MLOps. If we did it right and enabled a machine learning model to be treated like any other software module, you should be able to use the existing CI/CD infrastructure. You should use the existing DevOps people. You should even use your existing ways of collecting data on things in deployment, like what Datadog does, and then put views and interpretation on top of that.

So, our view here is that if we do all of this right, then once you have a model, you turn it into an artifact that the existing DevOps infrastructure can deal with. In that view, I would say that MLOps shouldn't be called anything other than DevOps, because you have a model that you can treat as if it were any other piece of software. So that's our vision.
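
To make that end state concrete, here is a minimal sketch, assuming FastAPI and a placeholder `predict` function, of a model wrapped as an ordinary HTTP microservice so the existing container, CI/CD, and monitoring stack can treat it like any other service.

```python
# Minimal sketch: serve a packaged model artifact as a plain HTTP microservice.
# `predict` is a placeholder for whatever deployable artifact the packaging
# step produced.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class PredictRequest(BaseModel):
    text: str

def predict(text: str) -> str:
    # Placeholder inference call against the packaged model artifact.
    return "positive"

@app.post("/predict")
def serve(request: PredictRequest) -> dict:
    # From here on, standard DevOps applies: build an image, ship it through
    # CI/CD, and monitor it like any other service.
    return {"label": predict(request.text)}
```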

Matt: Clem do you agree with this vision? 

Clem: Yeah, yeah — I think it is very accurate. 

Matt: Good. Luis, what were you going to ask Clem?  

Luis: First, what makes some models wildly popular? Out of these tens of thousands of models, I'm sure there's a very bimodal distribution there. Do you see any patterns in what makes models especially popular with the general audience?

Clem: It's a tough question. I think it varies wildly based on where the company is in its machine learning life cycle. When they start with machine learning, they tend to use the most popular, more generic kinds of models. They're going to start with BERT, with DistilBERT, for example, for NLP, and then move toward more sophisticated, sometimes more specialized models for their use cases, and sometimes even train their own models. So, it's very much a mix of what problem a model solves, how easily it solves the problem, and how big the model is. Obviously, a big chunk of your work at OctoML is to make the scaling of these models cheaper for companies that run billions of inferences. It's all of that, plus one layer that we really created that wasn't there before, which is the sort of social or peer validation.

And that's what you find on GitHub. It's hard to assess the quality of a repository if you don't have things like the number of stars, the number of forks, the number of contributors. So that's what we also provide at Hugging Face for models and data sets, where you can start to see: oh, has this model been liked a lot? Who's contributing to this model? Is it evolving? Things like that. That also, I think, provides a critical way to pick models, right? Based on what your peers and what the community have been using.
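
Those signals are also queryable programmatically. A minimal sketch, assuming a recent `huggingface_hub` client, listing models by one popularity signal (downloads):

```python
# Minimal sketch: list popular Hub models by a community signal (downloads).
from huggingface_hub import HfApi

api = HfApi()
for model in api.list_models(sort="downloads", direction=-1, limit=5):
    print(model.modelId)
```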

Luis: Yeah, that makes sense. Peer validation is incredibly powerful. I want to touch on another topic quickly and then I'll pass the token back. You mentioned public data versus private data. That was a really interesting discussion that I think parallels really well with the trends in foundational models, where you can actually train a giant foundational model on public data and then go and refine it with private data. Of course, there's some risk of bias, and we need to manage that. But I'd love to hear your thoughts on where you see the trends in making the creation of, or even the access to, foundational models wide enough to have many users refining on top of them. We keep hearing about some of these models costing a crazy amount of money to train. Of course, folks are going to want to see a return on that.

Clem: Yeah. I mean, for us, transparency and the ability for researchers to share their work is incredibly important for these researchers, but also for the field in general. I think that's what powered the progress of the machine learning field in the past five years. And you're starting to see today some organizations deciding not to release models, which to me is something negative happening in our field, and something we should try to mitigate, because we do believe that some of these models are so powerful that they shouldn't be left only in the hands of a couple of very large organizations.

In the science field, there's always been this trend and this ability to release research for the whole field to have access to it and be able to, for example, mitigate biases, create counter-powers, and limit the negative effects it can have. To me, it's incredibly important that researchers are still able to share their models and share their data sets publicly for the whole field to really benefit from them. Maybe just to complement that, at Hugging Face we've led an initiative called BigScience, which is gathering almost a thousand researchers from all over the world, some from the biggest organizations, some more academic, across more than 250 institutions, to ethically and publicly train the largest language model out there. It's really exciting because you can really follow the training in the open.

Luis: I've been following that. It's fantastic to see.

Clem: I like to joke sometimes that very large models are to machine learning what Formula 1 is to the car industry, in the sense that the two main things they do are: first, they're good branding. They're good PR, they're good marketing, the same way Formula 1 is. And second, they push the limits of what you're able to do so you get some learning. The truth is, you and I, when we go to work, we're not going to drive a Formula 1 car, because it's not practical and it's too expensive. So that's not what we're going to be using. And not all car manufacturers need to get into Formula 1; Tesla, for example, is not doing Formula 1.

Matt: I'm going to have to ask you about Charles Leclerc then. Because I have a feeling you might be a big fan.   

Clem: Yeah, absolutely. But if you think about large language models that way, and you realize that the biggest thing is the learning you get by pushing everything to the extremes, then it creates even more value to do it in the open. And that's basically what BigScience is: doing this whole process of training a very large language model in the open so that everyone can take advantage of the learning from it. If you go on the website, if you check on GitHub, you can see all the learning in terms of, oh, it failed because of that, it worked because of that, we tweaked this and completely changed the learning rate, and things like that. That's what's super exciting about it: it's building some sort of artifact for the whole science community, for the whole machine learning community, to learn from and get better at doing these things.

Luis: I like the parallel a lot. One of the parallels I like to think about as well is that training these giant models should be equivalent to building a large scientific instrument, say the Hubble Telescope. We spent a few billion dollars to put it in space, and a lot of people can use it. On the commercial side, you build a giant machine and give people some time on it to go and do things. I see the parallel: like any huge engineering effort that's done upfront to enable future uses, this is the computational equivalent, where you have a giant amount of computation whose result is an asset that should be shared. So, in a way, that makes sense.

Matt: What I'm trying to get my head around, not to extend this analogy too much, is that every team has to build its own car, and they don't tell you everything they're doing to make it the fastest car on the track. So, what's the right layer or layers of abstraction here? With OpenAI and GPT-3, there are some things you can work with and play with, like prompt engineering, but some things are, let's call it, more of a black box. What has been additive about OpenAI's efforts? And maybe touch a little bit on what projects like BigScience do differently and why they're also needed, to put it that way.

Clem: I think different layers of abstraction are needed by different kinds of companies and solve different use cases. Providing an off-the-shelf API for machine learning is needed for companies that are not really able to do machine learning, who just need to call an API to get a prediction. It's almost the equivalent of a Wix or a Squarespace for technology, right? People who are not able to build software or write code are going to use a no-code interface to build their websites. And it's the same thing here, I think. Some use cases are better served by an off-the-shelf API and not doing any machine learning yourself. For others, you need to be able to see the layers of the model, to be able to train things and understand things for it to work. So, I think it really depends on the use case and the type of company you're talking to. For example, the largest open-source language models are on Hugging Face, like the models from EleutherAI, the biggest T5 models. And they have some usage, but it's not massive, to be honest, even if they're a fraction of the size of the ones that are not public. So at the end of the day, again, it's Formula 1: there are a couple of cars that a couple of drivers are building, but most of the things happening today are actually happening in much smaller models. From what I see, and I don't know if Luis is seeing the same thing, even Codex, for example, the one that is actually used in production, is much, much smaller than the big number that's claimed in terms of model size. I don't know. Luis, are you seeing the same thing?
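
To illustrate the "just call an API" end of that spectrum, here is a minimal sketch of hitting a hosted inference endpoint over plain HTTP, using the Hugging Face Inference API with a placeholder token and one example model, in contrast to the earlier examples where you download and run the model yourself.

```python
# Minimal sketch: the "off-the-shelf API" layer of abstraction.
# The token is a placeholder; gpt2 is just one example of a hosted model.
import requests

API_URL = "https://api-inference.huggingface.co/models/gpt2"
headers = {"Authorization": "Bearer hf_xxx"}  # placeholder token

response = requests.post(API_URL, headers=headers, json={"inputs": "Machine learning is"})
print(response.json())
```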

Luis: Yeah, a similar thing happens even in private companies, right? They develop their large models in private and then go and specialize them: they have their own foundational models, specialize them for a specific use case, and deploy something that's typically much smaller and much more appropriate for broad deployment. I think what would be interesting to see, in the spirit of building communities around this and having people refine on top of large-scale models, is creating broader incentives for folks to actually go and pay the high computational cost of training these models. Once they make them available, is there a way for them to share in some of the upside that people get by refining those models for specific use cases? Again, to repeat what I said before, I see these giant piles of computation involved in training these models as producing an asset that can be used in a number of ways.

Matt: That's actually a great segue into business models. So, I take a pre-trained model that's in the Hugging Face marketplace, and I decide to use it and adapt it for my own purposes. How does that work from a business model perspective?

Clem: So, I think the business models of open source and platforms are always similar at a high level, in the sense that there's some sort of freemium model, where most of the companies using your product are not paying most of the time, and that creates your top of the funnel. For us, it's 10,000 companies using us for free. Then a smaller percentage of the companies using your platform pay for additional premium features or capabilities. What we've seen is that there are definitely some companies that are obviously very willing to pay because they have specific constraints. When you think about enterprise, especially in regulated industries like banking or healthcare, they have specific constraints that make them willing to pay for help with them. So that's one way we monetize today. The other way is around infrastructure, because obviously infrastructure is important for machine learning. What we're seeing at Hugging Face is that we're almost becoming some sort of gateway for it, in the sense that because companies are starting from the model hub, taking their models and then making decisions from there, we can act somehow as a gateway for compute and infrastructure. It is definitely very much early days, right? Most of our focus has really been on adoption, which I think is what's making us unique. But I think there is a growing consensus that as machine learning becomes key for so many companies, machine learning tool providers are going to be able to build big businesses, especially if they have a lot of usage.

Matt: And Luis, similarly, you've got a lot of demand and interest for your SaaS offering, as you call it. Maybe tell us a little bit more about that and what you're seeing in terms of early usage and thoughts about business model.  

Luis: Yeah, absolutely. We call it the OctoML platform. It's model in, deployable container out. It's a simple model: people pay to use it, and the pricing is a function of the number of model-hardware pairs and the size of the deployment. What customers are really paying for, first, is automation. We're often replacing what humans are doing when taking models to deployment, and turning it into either using our web interface or an API call. Imagine, instead of having an engineering team where data scientists say, here's a model, and then the deployment folks say, okay, give me the container to deploy it, we put an API on that and run it automatically. It's a different motion than what Clem just described, because the open-source users of TVM are folks that are more sophisticated; they're using TVM directly. Some of them want to use the platform because they want more automation. For example, they don't want to have to set up a fleet of devices to do tuning on. They don't have to go and collect the data sets to feed TVM for it to do its machine learning things; all of that is just turnkey. And we have what I call outer-loop automation, where you can give us a set of models and a set of hardware targets, and we solve the matrix from hell for you automatically. So there's a huge difference between using TVM directly and the experience the platform provides; in that case, it's very clear. And the platform is a commercial product folks have to pay to use.
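
OctoML's actual interface isn't spelled out in this conversation, so the sketch below is purely hypothetical; the endpoint, routes, and field names are invented only to illustrate the "model in, deployable container out" motion Luis describes.

```python
# Purely hypothetical sketch of a "model in, container out" API call.
# The endpoint, routes, and field names are invented for illustration only.
import requests

BASE = "https://api.octoml.example/v1"           # hypothetical endpoint
headers = {"Authorization": "Bearer <api-key>"}  # placeholder credential

# 1. Upload the model ("model in").
with open("model.onnx", "rb") as f:
    model = requests.post(f"{BASE}/models", headers=headers, files={"file": f}).json()

# 2. Request an optimized package for a hardware target ("container out").
package = requests.post(
    f"{BASE}/packages",
    headers=headers,
    json={"model_id": model["id"], "hardware": "aws-c5.xlarge"},
).json()
print(package["container_image"])  # a registry URI the DevOps stack can deploy
```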

Clem: I'd be interested, Luis, to hear how you see your relationship with the cloud providers. Is it mostly as potential customers, partners, competitors? How do you see them?

Luis: Oh, great question. And it's a good segue here too. I see them as potential customers and partners, less so as competitors, and I'll elaborate, even though there are some specific points that might seem to contradict that. First of all, some cloud providers happen to have popular applications that they run on their own cloud, and these applications use machine learning. In that case, they're customers — I call that "sell to."

But the bigger opportunity that I see here is "sell with." What all cloud vendors care about is driving usage of their clouds. And the way you drive usage of their cloud is to make it very easy for users to get machine learning models, which use a lot of computation, onto their cloud. What our service provides is turning models into highly optimized containers that can be moved around across different instances, and the cloud vendors like that because it drives up utilization of their cloud.

So, in that case, we're not seeing resistance. In fact, we're seeing a lot of encouragement to work with cloud vendors as partners. So that covers selling to and selling with. Now, of course, one of these cloud vendors has a service that also builds on TVM: Amazon has something called SageMaker Neo, which is an early offering that uses TVM to compile models to run on the Amazon cloud. We see our service as differentiated in a number of ways. First, there's some technical differentiation in how we do the tuning of the model to make the most out of the hardware target, by using our machine-learning-for-machine-learning magic. But more broadly, I would say the key reason there's no real competition here is that we support all cloud vendors. The one thing a cloud vendor can't be is another cloud vendor at the same time. So, the fact that we sit on top of all these cloud vendors is a huge selling point that I feel makes the competition not really relevant.

Matt: What I think is really interesting here is the question of what the right abstraction layers are going to be to deliver value in the future. What kinds of application areas are most exciting to you both for the future?

Clem: What I'm super excited about, obviously, is that transformers are starting to make their way from NLP, from text, to all the other machine learning domains. If you look at computer vision, you're starting to see vision transformers. If you look at speech, you're seeing wav2vec. You're starting to see things in time series; Uber announced that they're now using transformers to do time series for their ETAs, right? You're starting to see it in biology and chemistry, basically taking over all the science benchmarks. So it's really exciting, not only because I feel like the other fields are going to get accelerated as fast as the NLP field did, but also because I think you're going to be able to build much greater bridges between all these domains, which is going to be extremely impactful for final use cases. Let's say, for example, you think about fraud detection, which is a very important topic for a lot of companies, especially financial companies. Before, the domains were very siloed and separated, so you were doing it mostly with time series, right? Predictions on events and things like that. But now, if everything is powered by transformers, you can actually do a little bit of time series, but also NLP, because obviously fraud is also predicted by the kind of text that someone, or a system, trying to commit fraud is sending you. And so you're starting to see these frontiers between domains getting blurrier and blurrier. In fact, I'm not even sure these different domains will really exist in a few years, or whether it's just going to be all machine learning, all transformers, just with different inputs, right? A text input, an audio input, an image input, a video input, a numbers input. And that's probably the most exciting thing I've seen in the past few months on Hugging Face. Now we're seeing a lot of adoption for computer vision models, speech models, time series models, recommender systems. So, I'm super excited about that and the kinds of use cases it's going to unlock.
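
As one small example of that spread beyond text, the same pipeline abstraction shown earlier also covers vision and speech checkpoints on the Hub; the model ids below are just two well-known examples, and the file names are placeholders.

```python
# Minimal sketch: the same pipeline API applied outside NLP.
from transformers import pipeline

# Vision transformer for image classification ("cat.jpg" is a placeholder image).
vision = pipeline("image-classification", model="google/vit-base-patch16-224")
print(vision("cat.jpg")[:3])

# wav2vec 2.0 for speech recognition ("clip.wav" is a placeholder audio file).
speech = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")
print(speech("clip.wav"))
```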

Luis: I feel like it's pretty clear today that almost every single interesting application has multiple machine learning models in it as an integral part, and they're naturally multimodal as well. There are language models together with computer vision models and time series models. I think the right abstraction here would be that you declare what your ensemble of models is and give it to the infrastructure, and the infrastructure automatically decides where and what should run, and that includes mobile and cloud, right?

Almost every single application has something that's closer to the end user and a cloud counterpart, and even knowing what should run on the edge and what should run in the cloud should be done automatically by the infrastructure. For us to get there requires a level of automation that is not quite there yet. For example, when you give it a set of models, deciding whether a given model should be split in two, where part of it runs in the cloud and part runs on the edge. So that's where I think the abstraction should be. You should not worry about where things are running and how. That should be fully automatic.

Now, on what is an exciting application: this is going to be more personal, and Matt, that's probably not going to be a surprise to you. I think there are so many exciting applications in life sciences. It's inherently multimodal, from using commodity sensors in smartphones to make diagnostic decisions (there is a lot of interesting progress there, using microphones to measure lung capacity, for example, or using cameras for early skin cancer diagnosis, and things like that) all the way to much larger-scale computations and everything that's going on in deep genomics. Applying modern machine learning models to giant genomic datasets is something I find extremely exciting, and not surprisingly, a lot of those use transformers as well. I'm also very excited about what Clem said. It's fantastic to see what Hugging Face has been doing and the diversity of use cases transformer models apply to. To bring it a little closer to the actual application, I feel like life science is the one that inherently puts everything together into a very high-value and meaningful application for human health.

Clem: And something I wanted to add, because it's easy to miss if you're not following closely: already today, if you think about your day, most of it is spent in machine learning. And that is something new you have to realize, because maybe two or three years ago, there was some over-hype about AI, right? Everyone was talking about AI, but there were not really a lot of final use cases. Today, that's not the case anymore. If you think about your day: you do a Google search, it's machine learning-powered. You write an email, autocomplete is machine learning-powered. You order an Uber, your ETA is machine learning-powered. You go on Zoom or this podcast, noise canceling and background removal are machine learning. You go on a social network, your feed is machine learning-powered. So already today, you're spending most of your day in machine learning, which obviously is extremely exciting.

Matt: Yeah, it kind of leads to a question: what's the technology that's going to be the greatest source of disruption and innovation in the next five to 10 years?

Clem: So, for me, it might not be a technology in itself, but I'm really excited about everything decentralized, and not just in the crypto or blockchain sense. For example, at Hugging Face, we're trying to build a very decentralized organization, in the sense that decision-making is done everywhere in the organization in a very bottom-up way rather than top-down. And I'm really excited about applying this notion of decentralization. I think it's going to fundamentally change the way we build technology.

Luis: For me, it is impacted by AI too, but it's molecular-level manipulation. It's just everywhere. You saw Nvidia's announcement of 4-nanometer transistor technology; soon we're going to see 2 nanometers, so we're quickly getting to the molecular scale there. That applies to manufacturing electronics, but then, going back to life sciences, our ability to design, synthesize, and read things at the molecular scale is something that's already here today. Just think about DNA sequencing. You can read individual pieces of DNA with extreme accuracy, in large part because of AI algorithms that decode very noisy data. So our ability to read individual molecules is here, and so is the ability to synthesize them.

So, I hope I'm not being confusing by putting these two things together. I think, in the end, being able to manipulate things at the molecular scale has a deep impact on how we build computers, because computers ultimately depend on how you put the right molecules together, and the same thing applies to living systems. In the end, we're all composed of molecules, and being able to engineer and synthesize the right ones has profound impacts on life. So that's my favorite one, yeah.

Matt: I don't know how I can bring us back down after that. Basically, to synthesize it: the journey from atoms and physics, to bits and computing, to bases and biology, the intersections of those worlds, and what's going to happen in the future as a result.

I know you and I are both passionate about that, and no doubt, from what Clem is saying, he is too, bringing in this point about decentralization as well, and how that changes the way we can work and learn and discover together. Very exciting. Hey, is there a company in this intelligent application world, maybe more up at the application level as opposed to the enabling level where both of your companies are playing today, that you really admire and think a lot of? Maybe it's because of some of these cultural attributes around decentralization, Clem, or maybe because of the problem they're trying to solve, that you'd say, wow, that's one of the coolest private, innovative, intelligent application companies?

Clem: I recently talked to Patricia from Private AI, which to me is doing something really exciting, because initially it sounds like a boring topic in a way: PII detection, detecting personal information in, for example, your data or your data sets. But I think it's incredibly important to understand better what's in your data and what's in your model in terms of problems, right?

Is there personal information that you don't want to share? Are there biases? I think of it as being much more about building technology with values, rather than thinking that you're just a tool that doesn't have values and that the harm comes from people using your tool. I think it's a very big switch that we're seeing happen now, with companies and organizations having to be very intentional about the product decisions they take, to make sure they reflect their values and the values they want to broadcast.

Luis: One company that I think is doing really cool intelligent applications is a company called RunwayML. It's the ability to manipulate media in a very easy way using machine learning, which is really cool. For example, you can very easily edit videos in pretty profound ways that would have been incredibly manual and hard in the past. Turning that into something that's point-and-click is pretty exciting. It also comes from the ability to train large models to generate visual content. So that's one of them.

Matt: Let me bring us to kind of a wrap-up with a question around your own entrepreneurial journeys. We have a lot of folks listening who are starting, or thinking about starting, companies. If you could share with us one, or perhaps two, of the most important lessons, things that you've learned or wish you had known going into the entrepreneurship journey, that might be helpful for others. I think that would be tremendously valuable to our listeners.

Clem: It's a tough question, because I think the beauty of entrepreneurship is that you can really own your uniqueness and build a company that plays to your strengths and doesn't care about your weaknesses. So, I think there are as many journeys as there are startups, right? But if I had to keep it very general, I would say for me the biggest learning was to take steps just one at a time. You don't really know what's going to happen in five years or in three years. So just deal with the now, take time to enjoy your journey and enjoy where you are, because, and I don't know if it's the same for Luis, you obviously look back at the first few years, and at the time you felt like you were struggling, but at the end of the day it was fun. Then, yeah, obviously, trust yourself as a founder. You'll get millions of pieces of advice, usually conflicting. For me, it's been a good learning to trust myself and go with my gut, and usually it pays off.

Luis: It's hard to top that, but I will say, for me personally, coming from academia, it's been fantastic to see a different form of impact. As a professor, you can have impact by writing papers that people read and that can change fields, or by training students who go and do their own thing and become professors and so on. But then there's building a company out of research that started at a university, and all the ways of impact that come from actually putting products in people's hands. As you know, Matt, there's massive survivor bias here, but one of the lessons I've learned is that picking people you genuinely like to work with is incredibly important. Being supported, being able to count on the people around you, and feeling like there's a very trusting relationship with the folks you work closely with is just essential in building a company. I'm sure it's true in many other things in life as well, but I'm extremely grateful to be surrounded by people that I deeply trust. I have no worries about showing weaknesses or having to always be right. I think it's great when you can say, you know what, I did this wrong, I'm going to fix it. It's much better to admit you're wrong and fix it quickly than to insist on being right. But a funny thing that I've learned yet again is that we overestimate what we can do in the short term, but we underestimate what we can do in the long run. When putting plans together, we all have these ambitious things, like we're going to get this done in the next two months, and you almost always get that wrong because you overestimate it. But then when you think about a plan that is a few years out, like a couple of years, you almost always undershoot, right?

I keep seeing this time and again, and it's something that I think affects how you think about building your company and putting plans together, especially when things are moving fast. It matters a lot. So put a lot of thought into your plans, and write things down a lot.

Matt: Well, you've heard this from me before, Luis, but Clem, I love what you said too, because it's true: the customer and the founder are almost always right, and the VC is often wrong. But they're trying hard. We try hard! Well, gosh, I've just so enjoyed getting a chance to listen to both of you and ask a few questions, and I'm excited to see where this world of enabling technologies like Hugging Face and OctoML, and the underlying capabilities around them, goes in the future, and what that portends for the future of intelligent applications that bring all this together and really can, I think, transform the world. I think you're probably both right that in the future, we're not going to think about DevOps and MLOps, or about one kind of app and another kind of app. We're just going to have this notion of application engineering. But there are lots of problems to solve along that journey. So thank you so much for spending time with us. Congratulations again on being winners in the inaugural intelligent applications class.

And we look forward to seeing all the progress in the future for both your companies. 

Clem: Thanks so much.  

Coral: Thank you for joining us for this IA40 spotlight episode of Founded and Funded. If you'd like to learn more about Hugging Face, they can be found at HuggingFace.co. To learn more about OctoML, visit OctoML.ai. And, of course, to learn more about the IA40, please visit IA40.com. Thanks again for joining us, and tune in in a couple of weeks for Founded and Funded's next spotlight episode on another IA40 winner.
