Last month, we started previewing DALL·E 2 to a limited number of trusted users to learn about the technology’s capabilities and limitations.
Since then, we’ve been working with our users to actively incorporate the lessons we learn. As of today:
- Our users have collectively created over 3 million images with DALL·E.
- We’ve enhanced our safety system, improving the text filters and tuning the automated detection & response system for content policy violations.
- Less than 0.05% of downloaded or publicly shared images were flagged as potentially violating our content policy. About 30% of those flagged images were confirmed by human reviewers to be policy violations, leading to an account deactivation.
- As we work to understand and address the biases that DALL·E has inherited from its training data, we’ve asked early users not to share photorealistic generations that include faces and to flag problematic generations. We believe this has been effective in limiting potential harm, and we plan to continue the practice in the current phase.
Learning from real-world use is an essential part of our commitment to developing and deploying AI responsibly, so we’re starting to widen access to users who joined our waitlist, slowly but steadily.
We intend to onboard up to 1,000 people every week as we iterate on our safety system and require all users to abide by our content policy. We hope to increase the rate at which we onboard new users as we learn more and gain confidence in our safety system. We’re inspired by what our users have created with DALL·E so far, and excited to see what new users will create.
In the meantime, you can get a preview of these creations on our Instagram account: @openaidalle.