Concerns grow that AI-generated letters to lawmakers may skew our politics



NEW YORK — Artificial intelligence technology continues to develop at a rapid pace, and with the emergence of AI-powered language models, such as ChatGPT, concerns are developing about the potential spread of misinformation.

Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems, and it’s becoming more prevalent in U.S. society day by day.  A new Cornell University study shows politicians from across the country couldn’t tell the difference between constituency letters written by humans versus letters written by AI. 

Sarah Kreps, director of Cornell’s Tech Policy Institute, co-led the recent study.  

“In a representative democracy, elected leaders are representing the people, and so what do they, how do they do that? They do that by listening to their constituents,” Kreps said. 

Kreps shared examples of two constituency letters that were sent out — one written by a human and the other composed by an older version of ChatGPT. Both were on the topic of gun control. 

In the study, 7,132 legislators were sent emails, and 30% replied. Their responses to the AI-generated and human-written emails were nearly identical, according to the study. 


“So instead of that legislative agenda being set by real citizens, now that’s being set by a malicious actor, or a malicious country, a country with malicious aims to make it seem like there’s public support for something when there actually isn’t,” Kreps said. 

Kreps said the study was prompted by Russian agents’ use of bots during the 2016 election in an attempt to impose certain ideals onto the American electorate via social media. 

Justin Hendrix is the CEO and editor of Tech Policy Press. He told CBS2’s Zinnia Maldonado that as AI becomes more sophisticated, its ability to distort democracy is becoming alarming. 

“What happens, not only when activists or interest groups are trying to use these automated technologies to influence legislators, but also when legislators are using them to respond to constituents?” he said. 

Another question: how can someone spot the difference between something written by a person and something written by AI? 

“You can see once in a while on certain things it gets it wrong or there will be little grammatical errors,” Kreps explained. 

“Some odd irregularities around gender, certain details. Evidence that perhaps the constituent was not from the same geography as the lawmaker. But you can’t rely on the fact that those are always going to be there. It’s going to be very difficult,” Hendrix said.


Maybe even more important is what lies ahead. 

“We have to ask ourselves as well, what happens when the internet is awash in content generated by machines?” Hendrix said. 

Source: CBS
