We Need to Stop Freaking Out About AI Deepfakes

The photos are evocative. Former President Donald Trump is yelling, writhing, fighting as he’s detained by police. A swarm of officers surrounds him. His youngest wife and eldest son scream in protest. He’s in a mist—is that pepper spray?—as he charges across the pavement.
The photos are also … off. The pepper spray emerges, ex nihilo, from behind Trump’s head and in front of his chest. Behind him, a storefront sign says “WORTRKE.” In one image, a cop’s arm is outside its empty sleeve. In another, Trump has only half a torso. The officers’ badges are all gibberish. “PIULIECE” reads a cop’s hat behind a grotesque Melania Trump-like creature from the uncanny valley.
All of this, you see, is fake. The photos are not photos at all but deepfakes, the work of generative AI. They’re a digital unreality created by Midjourney, a program similar to the better-known DALL-E 2 image generator and GPT-4 chatbot. And, for American politics, they’re a portent of things to come.
That’s not necessarily as scary as it may sound. There will be an adjustment period, and the next few years will be uniquely vulnerable to AI-linked confusion and manipulation in political discourse online. But in the longer term, while generative AI almost certainly won’t make our politics any better, it probably won’t make things meaningfully worse, because humans have already made them thoroughly bad.
The near-term risk is twofold. Part of it is about a single man: Trump. His behavior is uniquely outlandish; he has a long record of proven deception around matters large and small; he generates an immediate emotive response in tens of millions of Americans; and he is very difficult to ignore.
That combination makes Trump unmatched as a target for plausible deepfakes. Take these arrest images: They don’t stand up to a second’s serious scrutiny. The garbled words are a giveaway even if you somehow fail to notice the Gumby poses and not-quite-human faces.
But the concept itself isn’t immediately dismissible, is it? Trump is reportedly fixated on the possibility of doing a perp walk in cuffs, and if he wants to make a scene, a few anguished expressions from Your Favorite Martyr would be a good start. The same concept doesn’t and can’t work as well for any other figure of remotely similar prominence, including Trump’s own imitators and would-be successors in the GOP.
The other near-term risk is generational. The canniness of “digital natives” is routinely overblown—plenty of young people believe plenty of internet nonsense—but research suggests age is a real factor in the spread of misinformation online. In fact, per a 2019 study published in Science Advances, it’s among the most important factors.
During the 2016 election, “[m]ore than one in 10, or 11.3 percent, of people over age 65 shared links [on Facebook] from a fake news site, while only 3 percent of those age 18 to 29 did so,” the researchers wrote at The Washington Post.
“These gaps between old and young hold up even after controlling for partisanship and ideology,” they found. “No other demographic characteristic we examined—gender, income, education—had any consistent relationship with the likelihood of sharing fake news.” (Incidentally, though institutional distrust and brokenism are relevant factors, too, Republicans are a bit older than Democrats, and studies have found a higher rate of misinformation sharing on the right.)
This difference isn’t something inherent to older or younger generations. It’s just a matter of familiarity with internet culture—an accident of birth. The longer generative AI is with us, then, even as the technology improves, the more we’ll develop that familiarity with its output. We’ll become more accustomed to noticing signs of deception, to subconsciously realizing a piece of content is somehow artificial and untrustworthy.
Or, at least, we’ll develop those instincts of skepticism if we want them. Many won’t.
Ironically, that unfortunate reality is why I don’t share the fears expressed in a New York Times report this week on the prospect of politically biased AI. The risk of partisan “chatbots [making] ‘information bubbles on steroids’ because people might come to trust them as the ‘ultimate sources of truth’” strikes me as overblown.
Our political information environment is already high in quantity and wildly uneven in quality. AI content generation will lower the barrier of effort it takes to add lies to that mix, but not by much. People are gullible and tribalistic already. Misinformation can even spread by accident. It doesn’t need intelligence, let alone artificial intelligence, to get going.
Moreover, acceptance of fabricated content isn’t typically tied to how well-written or well-designed it is. The pixelated Minions memes propagating garbage “facts” on Facebook aren’t exactly a high-effort product. If anything, it might be easier to realize you were fooled by a fake Trump arrest image than by whatever lie or half-truth those memes tell. After all, Trump will soon appear in public unscathed by the violent arrest that never happened. Untold millions of old-fashioned memes will be shared, believed, and never debunked.
So it’s not that chatbots won’t be biased and image generators won’t be used to deceive. They will, on both counts. But we don’t need AI to lie to each other. We don’t need politicized chatbots to have information bubbles on steroids. And anyone who thinks a chatbot is the ultimate source of truth wouldn’t have been a discerning political thinker even in a pre-digital age.
Source: The Daily Beast
