Date: 21/04/2024 19:56:39
From: ScarlettaPimpernella
ID: 2147025
Subject: Identifying Political Deepfakes in Social Media Using AI

Identifying Political Deepfakes in Social Media Using AI

https://www.truemedia.org/

For nearly 30 years, Oren Etzioni was among the most optimistic of artificial intelligence researchers.

But in 2019 Dr. Etzioni, a University of Washington professor and founding chief executive of the Allen Institute for A.I., became one of the first researchers to warn that a new breed of A.I. would accelerate the spread of disinformation online. And by the middle of last year, he said, he was distressed that A.I.-generated deepfakes would swing a major election. He founded a nonprofit organization, TrueMedia.org, in January, hoping to fight that threat.

On Tuesday, the organization released free tools for identifying digital disinformation, with a plan to put them in the hands of journalists, fact checkers and anyone else trying to figure out what is real online.

Oren Etzioni, right, the founder of the Allen Institute for A.I., started TrueMedia.org to detect doctored images, audio and video.

The tools, available from the TrueMedia.org website to anyone approved by the group, are designed to detect fake and doctored images, audio and video. They review links to media files and quickly determine whether they should be trusted.

Dr. Etzioni sees these tools as an improvement over the patchwork defense currently being used to detect misleading or deceptive A.I. content. But in a year when billions of people worldwide are set to vote in elections, he paints a bleak picture of what lies ahead.

“I’m terrified,” he said. “There is a very good chance we are going to see a tsunami of misinformation.”

In just the first few months of the year, A.I. technologies helped create fake voice calls from President Biden, fake Taylor Swift images and audio ads, and an entire fake interview that seemed to show a Ukrainian official claiming responsibility for a terrorist attack in Moscow. Detecting such disinformation is already difficult — and the tech industry continues to release increasingly powerful A.I. systems that will generate increasingly convincing deepfakes and make detection even harder.

Many artificial intelligence researchers warn that the threat is gathering steam. Last month, more than a thousand people — including Dr. Etzioni and several other prominent A.I. researchers — signed an open letter calling for laws that would make the developers and distributors of A.I. audio and visual services liable if their technology was easily used to create harmful deepfakes.

At an event in New York hosted by Columbia University on Thursday, Hillary Clinton, the former secretary of state, interviewed Eric Schmidt, the former chief executive of Google, who warned that videos, even fake ones, could “drive voting behavior, human behavior, moods, everything.”

“I don’t think we’re ready,” Mr. Schmidt said. “This problem is going to get much worse over the next few years. Maybe or maybe not by November, but certainly in the next cycle.”

The tech industry is well aware of the threat. Even as companies race to advance generative A.I. systems, they are scrambling to limit the damage that these technologies can do. Anthropic, Google, Meta and OpenAI have all announced plans to limit or label election-related uses of their artificial intelligence services. In February, 20 tech companies — including Amazon, Microsoft, TikTok and X — signed a voluntary pledge to prevent deceptive A.I. content from disrupting voting.

That could be a challenge. Companies often release their technologies as “open source” software, meaning anyone is free to use and modify them without restriction. Experts say technology used to create deepfakes — the result of enormous investment by many of the world’s largest companies — will always outpace technology designed to detect disinformation.

Last week, during an interview with The New York Times, Dr. Etzioni showed how easy it is to create a deepfake. Using a service from a sister nonprofit organization, CivAI, which draws on A.I. tools readily available on the internet to demonstrate the dangers of these technologies, he instantly created photos of himself in prison — somewhere he has never been.

“When you see yourself being faked, it is extra scary,” he said.

Later, he generated a deepfake of himself in a hospital bed — the kind of image he thinks could swing an election if it is applied to Mr. Biden or former President Donald J. Trump just before the election.

TrueMedia’s tools are designed to detect forgeries like these. More than a dozen start-ups offer similar technology.

But Dr. Etzioni, while remarking on the effectiveness of his group’s tool, said no detectors were perfect because they were driven by probabilities. Deepfake detection services have been fooled by images of kissing robots and giant Neanderthals, raising concerns that such tools could further damage society’s trust in facts and evidence.

When Dr. Etzioni fed TrueMedia’s tools a known deepfake of Mr. Trump sitting on a stoop with a group of young Black men, they labeled it “highly suspicious” — their highest level of confidence. When he uploaded another known deepfake of Mr. Trump with blood on his fingers, they were “uncertain” whether it was real or fake.

“Even using the best tools, you can’t be sure,” he said.

The Federal Communications Commission recently outlawed A.I.-generated robocalls. Some companies, including OpenAI and Meta, are now labeling A.I.-generated images with watermarks. And researchers are exploring additional ways of separating the real from the fake.

Since Dr. Etzioni created TrueMedia.org, OpenAI has unveiled two new technologies that promise to make his job even harder. One can recreate a person’s voice from a 15-second recording. Another can generate full-motion videos that look like something plucked from a Hollywood movie. OpenAI is not yet sharing these tools with the public, as it works to understand the potential dangers.

(The Times has sued OpenAI and its partner, Microsoft, on claims of copyright infringement involving artificial intelligence systems that generate text.)

Ultimately, Dr. Etzioni said, fighting the problem will require cooperation among government regulators, the companies creating A.I. technologies and the tech giants that control the web browsers and social media networks where disinformation is spread. He said, though, that the likelihood of that happening before the fall elections was slim.

“We are trying to give people the best technical assessment of what is in front of them,” he said. “They still need to decide if it is real.”

‘Terrified’ A.I. researcher hopes to thwart election deepfakes

I lost the URL.

Date: 21/04/2024 21:29:46
From: SCIENCE
ID: 2147038
Subject: re: Identifying Political Deepfakes in Social Media Using AI

https://www.nytimes.com/2024/04/02/technology/an-ai-researcher-takes-on-election-deepfakes.html
