Kolena, a startup building tools to test, benchmark and validate the performance of AI models, today announced that it raised $15 million in a funding round led by Lobby Capital with participation from SignalFire and Bloomberg Beta.
The new cash brings Kolena’s total raised to $21 million, and will be put toward growing the company’s research team, partnering with regulatory bodies and expanding Kolena’s sales and marketing efforts, co-founder and CEO Mohamed Elgendy told TechCrunch in an email interview.
“The use cases for AI are enormous, but AI lacks trust from both builders and the public,” Elgendy said. “This technology must be rolled out in a way that makes digital experiences better, not worse. The genie isn’t going back in the bottle, but as an industry we can make sure we make the right wishes.”
Elgendy launched Kolena in 2021 with Andrew Shi and Gordon Hart, with whom he’d worked for around six years at AI divisions within companies including Amazon, Palantir, Rakuten and Synapse. Through Kolena, the trio sought to build a “model quality framework” that delivered unit testing and end-to-end testing for models in a customizable, enterprise-friendly package.
“First and foremost, we wanted to provide a new framework for model quality — not just a tool that simplifies current approaches,” Elgendy said. “Kolena makes it possible to continuously run scenario-level or unit tests. It also provides end-to-end testing of the entire AI and machine learning product, not just sub-components.”
To this end, Kolena can surface insights that identify gaps in AI model test data coverage, Elgendy says. The platform also incorporates risk management features that track the risks associated with deploying a given AI system (or systems, as the case may be). Using Kolena’s UI, users can create test cases to evaluate a model’s performance, see potential reasons a model is underperforming and compare it against other models.
“With Kolena, teams can manage and run tests for specific scenarios that the AI product will have to deal with, rather than applying a blanket ‘aggregate’ metric like an accuracy score, which can obscure the details of a model’s performance,” Elgendy said. “For example, a model with 95% accuracy in detecting cars isn’t necessarily better than one with 89% accuracy. Each has its own strengths and weaknesses — e.g. detecting cars in varying weather conditions or occlusion levels, spotting a car’s orientation, etc.”
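The scenario-slicing idea Elgendy describes can be illustrated with a short Python sketch. The helper and data below are hypothetical, not Kolena’s actual API: the point is simply that an aggregate accuracy score can look healthy while one scenario slice performs poorly.

```python
from collections import defaultdict

def scenario_accuracy(results):
    # results: list of (scenario_tag, was_prediction_correct) pairs
    buckets = defaultdict(list)
    for scenario, correct in results:
        buckets[scenario].append(correct)
    return {s: sum(hits) / len(hits) for s, hits in buckets.items()}

# Hypothetical car-detection results sliced by weather scenario.
results = (
    [("clear", True)] * 90 + [("clear", False)] * 10 +
    [("rain", True)] * 4 + [("rain", False)] * 6
)
per_scenario = scenario_accuracy(results)
overall = sum(correct for _, correct in results) / len(results)
# overall is 94/110 (about 0.85), but the "rain" slice is only 0.4 —
# a weakness the single aggregate number hides.
```

Here the model would clear an 85% aggregate-accuracy bar while failing more than half the time in rain — exactly the kind of gap scenario-level tests are meant to expose.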
If Kolena works as advertised, it could indeed be useful for the data scientists who spend lots of time building models to power AI apps.
According to one survey, AI engineers report devoting only 20% of their time to analyzing and developing models, with the rest going to sourcing and cleaning the data used to train them. Another report finds that, due to the challenges in developing accurate, performant models, only about 54% of models ultimately move from pilot to production.
But there are other players building tools to test, monitor and validate models. Beyond incumbents like Amazon, Google and Microsoft, a wealth of startups are piloting novel approaches to measuring the accuracy of models before — and after — they go into production.
Prolific recently raised $32 million for its platform to train and stress-test AI models using a crowdsourced network of testers. Robust Intelligence and Deepchecks, meanwhile, are creating their own toolsets for businesses to prevent AI models from failing — and to continuously validate them. And Bobidi is rewarding developers for testing companies’ AI models.
But Elgendy argues that Kolena’s platform is one of the few that allows customers to take “full control” over the data types, evaluation logic and other components that make up an AI model test. He also emphasizes Kolena’s approach to privacy, which eliminates the need for customers to upload their data or models to the platform; Kolena only stores model test results for future benchmarking, which can be deleted upon request.
“Minimizing risk from an AI and machine learning system requires rigorous testing before deployment, yet enterprises don’t have strong tooling or processes around model validation,” Elgendy said. “Ad-hoc model testing is the norm today, and unfortunately, so are failed machine learning proofs of concept. Kolena focuses on comprehensive and thorough model evaluation. We give machine learning managers, product managers and executives unparalleled visibility into a model’s test coverage and product-specific functional requirements, allowing them to effectively influence product quality from the start.”
San Francisco-based Kolena, which has 28 full-time employees, wouldn’t share the number of customers it’s currently working with. But Elgendy said that the company’s taking a “selective approach” to partnering with “mission-critical” companies for now, and plans to roll out team bundles for mid-sized organizations and early-stage AI startups in Q2 2024.