Big Tech is notoriously bad at catching, labeling, and removing harmful content. In theory, new advances in AI should improve our ability to do this. In practice, AI isn’t very good at interpreting nuance and context. And most automated content moderation systems were trained on English data, meaning they don’t perform well with other languages.
The recent emergence of generative AI and large language models like ChatGPT means that content moderation is likely to become even harder.
Whether generative AI ends up being more harmful or helpful to the online information sphere largely hinges on one thing: AI-generated content detection and labeling. Read the full story.
—Tate Ryan-Mosley
Tate’s story is from The Technocrat, her weekly newsletter giving you the inside track on all things power in Silicon Valley. Sign up to receive it in your inbox every Friday.
If you’re interested in generative AI, why not check out:
+ How to spot AI-generated text. The internet is increasingly awash with text written by AI software. We need new tools to detect it. Read the full story.
+ The inside story of how ChatGPT was built from the people who made it. Read our exclusive conversations with the key players behind the AI cultural phenomenon.
+ Google is throwing generative AI at everything. But experts say that releasing these models into the wild before fixing their flaws could prove extremely risky for the company. Read the full story.
