Twitter's threat to the 2024 election is coming from inside the house
Elon Musk's social media platform is empowering the worst of the worst. It's not an accident.
Personal note from Aaron: My father’s funeral was on June 24, and I’m taking a step back from work for a couple weeks to grieve and clear my head. (ICYMI, you can read my tribute to my dad here and his obituary here.) I appreciate your support during this difficult time. You’ll be seeing less of my writing in the newsletter through this week, but thankfully I have lots of good stuff lined up from brilliant contributors like Thor, Noah Berlatsky, Lisa Needham, and Liz Dye.
Since Elon Musk took over Twitter, content moderation has been weaponized against the left. Purveyors of misinformation have been empowered, including directly by Musk, who began his tenure as the platform’s owner by spreading homophobic lies about Paul Pelosi.
This is good news for bad faith actors, but grim stuff for the rest of us. Musk is degrading his platform at a time when it’s becoming much easier to generate realistic deepfakes of people saying or doing things they never said or did. We’ve already started to see political deepfakes spread on Twitter, and it’s likely to get worse as technology advances and people test the limits of what they can get away with. Text-to-video tools are already in the works and will soon make it possible for anyone to produce a convincing deepfake.
The prospect of technology being warped to deceive voters alarms Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights. Public Notice contributor Thor Benson spoke with Barrett, who’s an expert on social media’s impact on democracy. A transcript of their conversation, lightly edited for clarity and length, follows.
Thor Benson
It’s going to get even easier to make deepfakes once text-to-video tools become as accessible as text-to-image tools are now. Does AI’s role in politics become more of a risk when the tools are this user-friendly?
Paul Barrett
That’s the primary way that it’s a danger. Generative AI is not the source of disinformation. We’ve had disinformation forever — long predating the internet or social media — but the effect of technology is to make it easier to generate and then disseminate various types of misleading content. We saw a now-notorious demonstration of that in the 2016 presidential election.
ChatGPT and other apps built on generative AI are going to provide a stimulus and a method to people who otherwise might not be quite clever enough to figure out how to disseminate disinformation and deepfakes. It’ll make it easy for them. You don’t need any technical expertise. It’s now quite easy to set up a totally phony account on Twitter and get it verified for $8 a month, and then you’re off to the races.
The kind of effort we saw in 2016 that required hundreds of Russian operatives in a physical place in St. Petersburg can now be done by a few individuals. I think we’re in for a very confusing and tumultuous campaign season where it’ll be very difficult for a lot of people to tell what’s real and what’s not.
Thor Benson
How concerned are you about the spread of deepfakes on social media ahead of the 2024 election?