But researchers I’ve spoken with in recent months say the 2024 U.S. presidential election will be the first with widespread use of micro-influencers who do not typically post about politics and who have built a small, specific, and highly engaged audience, often comprised primarily of a particular demographic. In Wisconsin, for example, such a micro-influencer campaign may have contributed to record turnout in last year’s state Supreme Court election. This strategy allows campaigns to connect with a specific group of people through a messenger they already trust. In addition to posting for money, influencers also help campaigns understand their audience and platforms.
This new messaging strategy appears to be operating in a legal gray area. Currently, there are no clear rules on how influencers should disclose paid posts and indirect promotional material (for example, if an influencer posts about attending a campaign event but the post itself is not sponsored). The Federal Election Commission has drafted guidelines that several groups have urged it to adopt.
Although most sources I spoke with talked about the growth of this trend in the United States, it is also happening in other countries. Wired published a great story in November on the impact of influencers on elections in India.
Digital censorship
The suppression of political actors’ speech is of course not new, but this activity is increasing, and its greater precision and frequency are the result of technological surveillance, online targeting, and state control of online domains. Freedom House’s latest internet freedom report showed that generative AI is now contributing to censorship, and that authoritarian governments are increasing their control over internet infrastructure. Internet shutdowns are also on the rise.
In just one example, a recent report from the Financial Times shows that the current Turkish government is increasing internet censorship ahead of the March elections by ordering internet service providers to limit access to virtual private networks.
More broadly, digital censorship will become a crucial human rights issue and an essential weapon in the wars of the future. Take, for example, Iran’s extreme censorship during the 2022 protests, or the ongoing partial internet shutdown in Ethiopia.
I invite you to keep a close eye on these three technological forces throughout the new year, and I will do the same, albeit from afar!
On a personal note, this is my last edition of The Technocrat for MIT Technology Review, as I will be leaving to pursue opportunities outside of journalism. I’ve loved having a place in your inboxes over the past year and am humbled by the trust you’ve placed in me to cover stories of immense importance, like how the police monitor Black Lives Matter protesters, the ways technology is changing beauty standards for young girls, and why government technology is so difficult to get right.
Stories about how technology is changing our countries and communities have never been more important, so keep reading my colleagues at MIT Technology Review, who will continue to address these subjects with expertise, balance, and rigor. I also encourage you to subscribe to our other newsletters: The Algorithm on AI, The Spark on climate, The Checkup on biotechnology, and China Report on everything related to technology and China.
What I’m reading this week
- OpenAI has lifted its ban on military use of its AI tools, according to this superb report by Hayden Field at CNBC. The move comes as the company begins working with the Department of Defense on AI.
- Many of the world’s most powerful and brightest people are gathering in Davos this week for the World Economic Forum, and Cat Zakrzewski says the main topic of conversation is AI safety. I really enjoyed The Washington Post’s insider look at the technology policy concerns topping the agenda.
- Researchers at Indiana University Bloomington have found that OpenAI and other large language models power some malicious websites and services, such as malware-generating tools and phishing emails. I found this piece by Prithvi Iyer in Tech Policy Press really insightful!
What I learned this week
Google DeepMind has created an AI system that performs well in geometry, a historically difficult area for artificial intelligence. My colleague June Kim wrote that the new system, called AlphaGeometry, “combines a language model with a type of AI called a symbolic engine, which uses symbols and logical rules to make inferences.” She says the system is “an important step toward machines with reasoning capabilities closer to those of humans.”