It’s still January, so it’s not too late for New Year’s resolutions, right? Those of us who use social media platforms might need to make one about that.
It’s the beginning of a presidential election year in the U.S., and, as many have noted, a particularly chaotic one. Elections will also be held this year in at least 63 other countries around the world. Needless to say, the consequences of all of those decisions will ricochet across borders.
Social media platforms proudly boast of their reach across borders, too, and their ability to connect people, including voters. They speak less loudly about some of the negative implications of such reach, including the spread of misinformation and disinformation that can sway people’s votes, undermining democratic governance in the process.
This election year, some of the major online platforms have announced that they will return to running political ads, even as they limit (and fail to effectively enforce) their own policies around such ads and other types of content; others have changed their verification policies so that “verified” accounts are now some of the biggest spreaders of misinformation. Social media users therefore have to take on more responsibility. Specifically, we have a responsibility to share any election-related information more carefully and more slowly — even if that means we share less of it.
A year after the public launch of ChatGPT and various text, image, and voice-generating AI models, some experts predict a coming wave of AI-generated disinformation. No, the Pope never wore a puffy jacket, but remember that image? That example might seem innocuous; consider, then, the fact that in at least one country, state-run TV has shown propaganda clips featuring AI avatars purporting to be hosts of “a seemingly Western news broadcast” — clips that had initially been uploaded to YouTube.
Others argue that misinformation and disinformation don’t need an AI boost: they can be generated by old-fashioned means like misrepresenting what real images show, cherry-picking information, or simply making shocking claims without any evidence to back them up.
Most people are not misinformation experts, though, and might not be familiar with terms like “deepfakes,” “data voids,” and “availability cascades” (let alone possible interventions like “prebunking”). We are all, however, potential participants in the spread of disinformation. This is where the New Year’s resolution comes in.
As misinformation researchers like Kate Starbird have argued, we all tend to fall for stories or images that confirm our pre-existing views — and the purveyors of disinformation know that. They will generate messages that seem to demonstrate exactly what some people believe, usually in some novel or shocking way, and then funnel those messages at those particular people — who will then spread them through their own networks, adding their own credibility to the posts in the process.
It’s not “them,” though — it’s “us.” We are all likely to share misinformation or disinformation if it aligns with our take on a topic. As Starbird puts it, “Perhaps the most dangerous misconception is that disinformation targets only the unsavvy or uneducated, that it works only on ‘others.’” Getting rid of that misconception is key.
It’s true that New Year’s resolutions often fall by the wayside as the year becomes less new and we get used to writing that different digit in various places. Maybe a resolution to not share as quickly, to wait for confirmation, to re-post less of the stuff we’re not sure about, will dissipate, too. Then again, given the stakes and the chaos, maybe we can put reminders on our calendars to renew this resolution again and again, in the spring and summer and fall, and make it a full election-year resolution.
Irina Raicu is the director of the Internet Ethics program at Santa Clara University’s Markkula Center for Applied Ethics.