Roboticists believe that, using new AI techniques, they'll unlock more capable robots that can move freely through unfamiliar environments and tackle challenges they've never seen before.
But something is standing in the way: lack of access to the kinds of data used to train robots so they can interact with the physical world. That data is far harder to come by than the data used to train the most advanced AI models, and its scarcity is one of the main things currently holding back progress in robotics.
As a result, leading companies and labs are in fierce competition to find new and better ways to gather the data they need. It's led them down unusual paths, like using robotic arms to flip pancakes for hours on end. And they're running into the same kinds of privacy, ethics, and copyright issues as their counterparts in the world of AI. Read the full story.
—James O’Donnell
My deepfake shows how valuable our data is in the age of AI
—Melissa Heikkilä
Deepfakes are getting good. Like, really good. Earlier this month I went to a studio in East London to get myself digitally cloned by the AI video startup Synthesia. They made a hyperrealistic deepfake that looked and sounded just like me, with realistic intonation. The end result was mind-blowing. It could easily fool someone who doesn't know me well.
Synthesia has managed to create AI avatars that are remarkably humanlike after just one year of tinkering with the latest generation of generative AI. It's equally exciting and daunting to think about where this technology is headed. But these avatars raise a big question: What happens to our data once we submit it to AI companies? Read the full story.
This story is from The Algorithm, our weekly AI newsletter. Sign up to receive it in your inbox every Monday.