Deeper Learning
Catching harmful content in the age of AI
In the last 10 years, Big Tech has become really good at some things: language, prediction, personalization, archiving, text parsing, and data crunching. But it's still surprisingly bad at catching, labeling, and removing harmful content. One need only recall the spread of conspiracy theories about elections and vaccines in the United States over the past two years to understand the real-world damage this causes. The ease of using generative AI could turbocharge the creation of more harmful online content. People are already using AI language models to create fake news websites.
But could AI help with content moderation? The newest large language models are much better at interpreting text than previous AI systems. In theory, they could be used to boost automated content moderation. Read more from Tate Ryan-Mosley in her weekly newsletter, The Technocrat.
Bits and Bytes
Scientists used AI to find a drug that could fight drug-resistant infections
Researchers at MIT and McMaster University developed an AI algorithm that allowed them to find a new antibiotic to kill a type of bacteria responsible for many drug-resistant infections that are common in hospitals. This is an exciting development that shows how AI can accelerate and aid scientific discovery. (MIT News)
Sam Altman warns that OpenAI could quit Europe over AI rules
At an event in London last week, the CEO said OpenAI could “cease operating” in the EU if it cannot comply with the upcoming AI Act. Altman said his company found a lot to criticize in how the AI Act was worded, and that there were “technical limits to what’s possible.” This is likely an empty threat. I’ve heard Big Tech say this many times before about one rule or another. Most of the time, the risk of missing out on revenue in the world’s second-largest trading bloc is too big, and they figure something out. The obvious caveat here is that many companies have chosen not to operate, or to have a restrained presence, in China. But that’s also a very different situation. (Time)
Predators are already exploiting AI tools to generate child sexual abuse material
The National Center for Missing and Exploited Children has warned that predators are using generative AI systems to create and share fake child sexual abuse material. With powerful generative models being rolled out with safeguards that are inadequate and easy to hack, it was only a matter of time before we saw cases like this. (Bloomberg)
Tech layoffs have ravaged AI ethics teams
This is a nice overview of the drastic cuts Meta, Amazon, Alphabet, and Twitter have all made to their teams focused on internet trust and safety as well as AI ethics. Meta, for example, ended a fact-checking project that had taken half a year to build. While companies are racing to roll out powerful AI models in their products, executives like to boast that their tech development is safe and ethical. But it’s clear that Big Tech views teams devoted to these issues as expensive and expendable. (CNBC)