October 18, 2022

Stable Diffusion Creator Stability AI Raises $101M

Stability AI, the company behind the digital art tool Stable Diffusion, has announced a $101 million seed funding round.

The funding round was led by Coatue, Lightspeed Venture Partners, and O’Shaughnessy Ventures LLC. Bloomberg reports the company has reached a $1 billion valuation. According to a release, Stability AI will use the funding to accelerate the development of open AI models for image, language, audio, video, 3D, and more, for consumer and enterprise use cases globally.

Launched in August, Stable Diffusion is an open-source text-to-image generator similar to OpenAI’s DALL-E. The engine is the result of a collaboration between Stability AI, RunwayML, Heidelberg University researchers, and the EleutherAI and LAION (Large-scale Artificial Intelligence Open Network) research groups. The generator was trained on LAION 5B, a 250-terabyte dataset containing 5.6 billion images scraped from the internet. Stable Diffusion has been downloaded and licensed by over 200,000 developers, according to the company.
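For a sense of what the open-source release looks like in practice, the sketch below shows one common way to run the publicly released weights with Hugging Face’s diffusers library. The checkpoint name, prompt, and hardware assumptions are illustrative and are not part of Stability AI’s announcement.

# A minimal sketch of text-to-image generation with the openly released
# Stable Diffusion weights, using the Hugging Face diffusers library.
# Running this requires accepting the model license on the Hugging Face Hub
# and a CUDA GPU with enough memory for half-precision inference.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",  # v1 checkpoint released in August 2022
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Generate a single image from a text prompt and save it to disk.
image = pipe("an astronaut riding a horse, digital art").images[0]
image.save("astronaut.png")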

Stable Diffusion has been the subject of recent controversy because the full code of its AI model was released publicly, which has enabled the creation of images that are sometimes violent, racially biased, or pornographic. Additionally, because the model was trained on copyrighted material scraped from the internet, there may be intellectual property implications.

Stable Diffusion’s terms of service forbid the use of the platform for lewd or sexual material, hateful or violent imagery, personal information, and copyrighted material. There are also keyword filters in place for the official version of Stable Diffusion, but the company admits they need improvement.

This image was created with Stable Diffusion by Discord member DriteWhake for a Pick of the Week contest held on the Stable Diffusion Discord server. Source: Stable Diffusion

“We have developed an AI-based Safety Classifier included by default in the overall software package. This understands concepts and other factors in generations to remove outputs that may not be desired by the model user. The parameters of this can be readily adjusted and we welcome input from the community on how to improve this. Image generation models are powerful, but still need to improve to understand how to represent what we want better,” Stability AI said in a blog post addressing the controversy.

While the consumer versions of Stable Diffusion contain these safeguards, they are purportedly easy to bypass. Once the software is downloaded to an individual computer, there are no enforceable technical constraints. Ars Technica reported there are already private Discord servers dedicated to pornographic output from the model.
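As a concrete illustration of the Safety Classifier Stability AI describes, and of why it only binds hosted or default installs, the sketch below uses the diffusers implementation of Stable Diffusion, where the checker is a separate pipeline component that flags generations after the fact. The checkpoint name is illustrative.

# A hedged sketch of the default safety checker in the diffusers
# implementation of Stable Diffusion. The checker classifies finished
# images and blanks out any it flags, which is why it constrains hosted
# services but not a local copy, where the component can simply be omitted.
from diffusers import StableDiffusionPipeline

# Default load: the safety checker is attached automatically.
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

result = pipe("a landscape photograph at dusk")
print(result.nsfw_content_detected)  # one boolean per generated image
result.images[0].save("landscape.png")  # flagged images come back as black placeholders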

Stability AI CEO Emad Mostaque has said he believes people are inherently good while emphasizing personal responsibility. He says the platform’s openness will encourage innovation.

“AI promises to solve some of humanity’s biggest challenges. But we will only realize this potential if the technology is open and accessible to all,” said Mostaque. “Stability AI puts the power back into the hands of developer communities and opens the door for ground-breaking new applications. An independent entity in this space supporting these communities can create real value and change.”

Related Items:

Do We Need to Redefine Ethics for AI?

Europe’s New AI Act Puts Ethics In the Spotlight

Fighting Harmful Bias in AI/ML with a Lifelong Approach to Ethics Training
