Advancements in artificial intelligence could help bad actors influence political discourse ahead of the midterm elections, a former North Carolina cybersecurity official warns.
Improvements in AI technology are making it easier for people to create bot-controlled social media accounts, says Torry Crass, a cybersecurity analyst who previously worked for North Carolina’s Department of Information Technology and state Board of Elections. Those bots post about controversial political subjects in hopes of influencing North Carolina voters, Crass said Wednesday during a virtual presentation on the issue.
“There is really a kind of transformation that has taken place in the last couple of years related to bots, as well as it pertaining to mis- and disinformation,” Crass said during the event, hosted by the Catawba College Center for North Carolina Politics and Public Service.
“As we get closer to the election, I would expect to see the generated AI content go significantly higher than what it is today,” he said.
The November elections will determine who controls Congress, state legislatures and more. North Carolina’s U.S. Senate race between Democrat Roy Cooper and Republican Michael Whatley is expected to be among the tightest in the nation.
Federal laws don’t prohibit the use of AI to create misleading audio or video in political ads. And AI use by political groups has increased in recent years. People have used AI in subtle ways, such as to generate video of military jets flying over a candidate’s campaign events. People have also used it to impersonate a candidate’s likeness or voice.
People have also created social media accounts with the sole purpose of generating outrage that influences a voter’s opinion of a certain candidate or political party.
In previous elections, Crass said, bot-controlled social media accounts featured flaws that were easy to detect. Those accounts might feature only one photo of the alleged social media user or inconsistencies in the user’s biographical information. Now, AI can create authentic-looking images and populate social media accounts with more consistent details.
Still, Crass said, voters can identify bot-operated social media accounts by finding certain clues:
- If a social media account was created mere months before an election, that could indicate an intent to influence voters.
- If an account’s posts are nearly identical to posts by other accounts, it might be operated by a bot.
- If an account regularly posts content at night when most people are sleeping, it might be operated by a bot or by someone from another country.
- If an account only posts political content — and doesn’t produce posts about the user’s personal life or other topics — it might be a bot.
“That is indicative of how AI functions, because it’s purpose-built,” Crass said.
“Each of these things is a breadcrumb and a red flag,” he said. “It doesn’t necessarily mean something is malicious or bad, but when you start seeing these red flags, and you start adding them up, it starts becoming very clear that something more is going on.”
