The Download: conspiracy-debunking chatbots, and fact-checking AI

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Chatbots can persuade people to stop believing in conspiracy theories

The internet has made it easier than ever before to encounter and spread conspiracy theories. And while some are harmless, others can be deeply damaging, sowing discord and even leading to unnecessary deaths.

Now, researchers believe they’ve uncovered a new tool for combating false conspiracy theories: AI chatbots. Researchers from MIT Sloan and Cornell University found that chatting about a conspiracy theory with a large language model (LLM) reduced people’s belief in it by about 20%, even among participants who claimed that their beliefs were important to their identity.

The findings could represent an important step forward in how we engage with and educate people who espouse baseless theories. Read the full story.

—Rhiannon Williams

Google’s new tool lets large language models fact-check their responses

The news: Google is releasing a tool called DataGemma that it hopes will help to reduce problems caused by AI ‘hallucinating,’ or making incorrect claims. It uses two methods to help large language models fact-check their responses against reliable data and cite their sources more transparently to users.

What’s next: If it works as hoped, it could be a real boon for Google’s plan to embed AI deeper into its search engine. But it comes with several caveats. Read the full story.

—James O’Donnell

Neuroscientists and designers are using this huge laboratory to make buildings better

Have you ever found yourself lost in a building that felt impossible to navigate? Thoughtful building design should center on the people who will be using those buildings. But that’s no mean feat.

A design that works for some people might not work for others. People have different minds and bodies, and varying wants and needs. So how can we factor them all in?

To answer that question, neuroscientists and designers are joining forces at a vast laboratory in East London, one that allows researchers to build simulated worlds. Read the full story.

—Jessica Hamzelou

This story is from The Checkup, our weekly biotech and health newsletter. Sign up to receive it in your inbox every Thursday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 OpenAI has released an AI model with ‘reasoning’ capabilities
It claims it’s a step toward its broader goal of human-like artificial intelligence. (The Verge)
+ It could prove particularly useful for coders and math tutors. (NYT $)
+ Why does AI being good at math matter? (MIT Technology Review)

2 Microsoft wants to lead the way in climate innovation
While simultaneously selling AI to fossil fuel companies. (The Atlantic $)
+ Google, Amazon and the problem with Big Tech’s climate claims. (MIT Technology Review)

3 The FDA has approved Apple’s AirPods as hearing aids
Just two years after the body first approved over-the-counter aids. (WP $)
+ It could fundamentally shift how people access hearing-enhancing devices. (The Verge)

4 Parents aren’t using Meta’s child safety controls
So claims Nick Clegg, the company’s global affairs chief. (The Guardian)
+ Many tech execs restrict their own children’s exposure to technology. (The Atlantic $)

5 How AI is turbocharging legal action
Especially when it comes to mass litigation. (FT $)

6 Low-income Americans were targeted by false ads for free cash
Some victims had their health insurance plans changed without their consent. (WSJ $)

7 Inside the stratospheric rise of the ‘medicinal’ beverage
Promising us everything from glowier skin to increased energy. (Vox)

8 Japan’s police force is heading online
Cybercrime is booming, as criminal activity in the real world drops. (Bloomberg $)

9 AI can replicate your late loved ones’ handwriting ✍
For some, it’s a touching reminder of someone they loved. (Ars Technica)
+ Technology that lets us “speak” to our dead relatives has arrived. Are we ready? (MIT Technology Review)

10 Crypto creators are resorting to dangerous stunts for attention
Don’t try this at home. (Wired $)

Quote of the day

“You can’t have a conversation with James the AI bot. He’s not going to show up at events.”

—A former reporter for Garden Island, a local newspaper in Hawaii, dismisses the company’s decision to invest in new AI-generated presenters for its website, Wired reports.

The big story

AI hype is built on high test scores. Those tests are flawed.

August 2023

In the past few years, multiple researchers have claimed to show that large language models can pass cognitive tests designed for humans, from working through problems step by step to guessing what other people are thinking.

These kinds of results are feeding a hype machine predicting that these machines will soon come for white-collar jobs. But there’s a problem: there’s little agreement on what those results really mean. Read the full story.
 
—William Douglas Heaven

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ It’s almost time for Chinese mooncake madness to celebrate the Moon Festival! 🥮
+ Pearl the Wonder Horse isn’t just a therapy animal; she’s also an accomplished keyboardist.
+ We love you Peter Dinklage!
+ Money for Nothing sounds even better on a lute.
