
Distributional wants to develop software to reduce AI risk



Companies are increasingly curious about AI and the ways in which it can be used to (potentially) boost productivity. But they’re also wary of the risks. In a recent Workday survey, enterprises cited the timeliness and reliability of the underlying data, potential bias, and security and privacy as the top barriers to AI implementation.

Sensing a business opportunity, Scott Clark, who previously co-founded the AI training and experimentation platform SigOpt (which was acquired by Intel in 2020), set out to build what he describes as “software that makes AI safe, reliable and secure.” Clark launched a company, Distributional, to get the initial version of this software off the ground, with the goal of scaling and standardizing tests to different AI use cases.

“Distributional is building the modern enterprise platform for AI testing and evaluation,” Clark told TechCrunch in an email interview. “As the power of AI applications grows, so does the risk of harm. Our platform is built for AI product teams to proactively and continuously identify, understand and address AI risk before it harms their customers in production.”

Clark was inspired to launch Distributional after encountering AI-related challenges at Intel post-SigOpt acquisition. While overseeing a team as Intel’s VP and GM of AI and high-performance compute, he found it nearly impossible to ensure that high-quality AI testing was taking place on a regular cadence.

“The lessons I drew from my convergence of experiences pointed to the need for AI testing and evaluation,” Clark continued. “Whether from hallucinations, instability, inaccuracy, integration or dozens of other potential challenges, teams often struggle to identify, understand and address AI risk through testing. Proper AI testing requires depth and distributional understanding, which is a hard problem to solve.”
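The article doesn’t spell out what that kind of testing looks like in practice. As a rough illustration only, a “distributional” check might compare how an evaluation metric is distributed across a baseline run and a candidate run, rather than comparing single aggregate scores. The sketch below uses SciPy’s two-sample Kolmogorov–Smirnov test on hypothetical relevance scores; the function name, threshold and data are invented for illustration and do not reflect Distributional’s actual methodology.

```python
# Illustrative sketch only: compare the distribution of a scored metric
# (e.g., answer relevance over a fixed prompt set) between a baseline model
# run and a candidate run, flagging drift with a two-sample KS test.
# Names, thresholds and data are hypothetical.
from scipy.stats import ks_2samp


def detect_metric_drift(baseline_scores, candidate_scores, alpha=0.05):
    """Return (drifted, statistic, p_value) for the two score distributions."""
    statistic, p_value = ks_2samp(baseline_scores, candidate_scores)
    return p_value < alpha, statistic, p_value


# Example: relevance scores (0-1) collected from two runs over the same prompts.
baseline = [0.91, 0.88, 0.93, 0.90, 0.87, 0.92, 0.89, 0.94]
candidate = [0.85, 0.79, 0.88, 0.82, 0.77, 0.84, 0.80, 0.86]

drifted, stat, p = detect_metric_drift(baseline, candidate)
print(f"drift detected: {drifted} (KS statistic={stat:.3f}, p={p:.4f})")
```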

Distributional’s core product aims to detect and diagnose AI “harm” from large language models (à la OpenAI’s ChatGPT) and other types of AI models, attempting to semi-automatically suss out what, how and where to test models. The software offers organizations a “complete” view of AI risk, Clark says, in a pre-production environment that’s akin to a sandbox.

“Most teams choose to assume model behavior risk, and accept that models will have issues,” Clark said. “Some may try ad-hoc manual testing to find these issues, which is resource-intensive, disorganized, and inherently incomplete. Others may try to passively catch these issues with passive monitoring tools after AI is in production … [That’s why] our platform includes an extensible testing framework to continuously test and analyze stability and robustness, a configurable testing dashboard to visualize and understand test results, and an intelligent test suite to design, prioritize and generate the right combination of tests.”
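Clark doesn’t describe how those components are implemented, but the kind of stability check he alludes to can be approximated in a few lines: call a model repeatedly on the same input and fail the test if the answers disagree too often. The sketch below is purely illustrative; call_model is a stand-in for a real model API, and the run count and agreement threshold are arbitrary assumptions rather than anything Distributional has disclosed.

```python
# Hypothetical stability check in the spirit of continuous AI testing:
# query the same (placeholder) model repeatedly on one prompt and fail
# if the answers disagree too often. call_model is a stand-in, not a
# real API; thresholds and names are invented for illustration.
import random
from collections import Counter


def call_model(prompt: str) -> str:
    # Placeholder for a real LLM call (e.g., an HTTP request to a hosted model).
    return random.choice(["Paris", "Paris", "Paris", "paris"])


def stability_check(prompt: str, runs: int = 20, min_agreement: float = 0.9) -> bool:
    """Return True if the most common answer covers at least min_agreement of runs."""
    answers = [call_model(prompt).strip().lower() for _ in range(runs)]
    top_answer, top_count = Counter(answers).most_common(1)[0]
    agreement = top_count / runs
    print(f"top answer '{top_answer}' appeared in {agreement:.0%} of runs")
    return agreement >= min_agreement


if __name__ == "__main__":
    assert stability_check("What is the capital of France?"), "unstable output"
```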

Now, Clark was vague on the details of how this all works — and the broad outlines of Distributional’s platform for that matter. It’s very early days, he said in his defense; Distributional is still in the process of co-designing the product with enterprise partners.

So given that Distributional is pre-revenue, pre-launch and without paying customers to speak of, how can it hope to compete against the AI testing and evaluation platforms already on the market? There are plenty, after all, including Kolena, Prolific, Giskard and Patronus — many of which are well-funded. And as if the competition weren’t intense enough, tech giants like Google Cloud, AWS and Azure offer model evaluation tools as well.

Clark believes Distributional is differentiated by its software’s enterprise bent. “From day one, we’re building software capable of meeting the data privacy, scalability and complexity requirements of large enterprises in both unregulated and highly regulated industries,” he said. “The types of enterprises with whom we are designing our product have requirements that extend beyond existing offerings available in the market, which tend to be individual developer focused tools.”

If all goes according to plan, Distributional will start generating revenue sometime next year once its platform launches in general availability and a few of its design partners convert to paid customers. In the meantime, the startup’s raising capital from VCs; Distributional today announced that it closed an $11 million seed round led by Andreessen Horowitz’s Martin Casado with participation from Operator Stack, Point72 Ventures, SV Angel, Two Sigma and angel investors.

“We hope to usher in a virtuous cycle for our customers,” Clark said. “With better testing, teams will have more confidence deploying AI in their applications. As they deploy more AI, they will see its impact grow exponentially. And as they see this impact scale, they will apply it to more complex and meaningful problems, which in turn will need even more testing to ensure it is safe, reliable, and secure.”


Rinsu Ann Easo
