But researchers I’ve spoken with over the past few months say the 2024 US presidential election will be the first with widespread use of micro-influencers who don’t usually post about politics and have built small, specific, highly engaged audiences, often composed primarily of one particular demographic. In Wisconsin, for instance, such a micro-influencer campaign may have contributed to record voter turnout for the state supreme court election last year. This strategy allows campaigns to plug into a specific group of people through a messenger they already trust. In addition to posting for money, influencers also help campaigns understand their audience and platforms.
This new messaging strategy appears to operate in a bit of a legal gray area. Currently, there aren’t clear rules on how influencers must disclose paid posts and indirect promotional material (like, say, if an influencer posts about going to a campaign event but the post itself isn’t sponsored). The Federal Election Commission has drafted guidance, which several groups have urged it to adopt.
While most of the sources I’ve spoken with have talked about the growth of this trend in the US, it’s also happening in other countries. Wired wrote a great story back in November about the impact of influencers on India’s election.
Digital censorship
Crackdowns on speech by political actors are of course not new, but this activity is on the rise, and its increased precision and frequency is a result of technology-enabled surveillance, online targeting, and state control of online domains. The latest internet freedom report from Freedom House showed that generative AI is now aiding censorship, and authoritarian governments are increasing their control of internet infrastructure. Blackouts, too, are on the rise.
In just one example, recent reporting by the Financial Times shows that the current Turkish government is tightening internet censorship ahead of elections in March by directing internet service providers to limit access to private networks.
More broadly, digital censorship is going to be a critical human rights concern and a core weapon in the wars of the future. Take, for example, Iran’s extreme censorship during the protests of 2022, or the ongoing partial internet blackout in Ethiopia.
I’d urge you to keep a close eye on these three technological forces throughout the new year, and I’ll be doing the same, albeit from afar!
On a personal note, this is my last Technocrat at MIT Technology Review, as I’ll be leaving to pursue opportunities outside of journalism. I’ve loved having a home in your inboxes over the past year and am humbled by the trust you’ve given me to cover stories of immense importance, like how police are surveilling Black Lives Matter protesters, the ways technology is changing beauty standards for young women, and why government technology is so hard to get right.
Stories about how technology is changing our nations and our communities have never been more important, so please keep reading my colleagues at MIT Technology Review, who will continue to cover these topics with expertise, balance, and rigor. I’d also encourage you to sign up for our other newsletters: The Algorithm on AI, The Spark on climate, The Checkup on biotech, and China Report on all things tech and China.
What I’m reading this week
- OpenAI has removed its ban on military use of its AI tools, according to this great report by Hayden Field at CNBC. The move comes as the company begins work with the Department of Defense on AI.
- Many of the world’s biggest and brightest are in Davos this week for the World Economic Forum, and Cat Zakrzewski says the talk of the town is AI safety. I really enjoyed her insider look in The Washington Post at the tech policy concerns that are top of mind.
- Researchers from Indiana University Bloomington have found that OpenAI and other large language models power some malicious websites and services, such as tools that generate malware and phishing emails. I found this write-up from Prithvi Iyer in Tech Policy Press really insightful!
What I learned this week
Google’s DeepMind has created an AI system that is very good at geometry, a historically hard subject for artificial intelligence. My colleague June Kim wrote that the new system, called AlphaGeometry, “combines a language model with a type of AI called a symbolic engine, which uses symbols and logical rules to make deductions.” She says the system is “a significant step toward machines with more human-like reasoning skills.”
