

Deepfakes: What needs to be done next?

Sources:
MIT Technology Review: https://www.technologyreview.com/s/613846/a-new-deepfake-detection-tool-should-keep-world-leaders-safefor-now/
CNN video: https://edition.cnn.com/videos/business/2019/06/11/deepfake-videos-2020-election.cnn/video/playlists/business-news/
The Guardian, 2018 (Oscar Schwartz): "Technology can make it look as if anyone has said or done anything. Is it the next wave of (mis)information warfare?" https://www.theguardian.com/profile/oscar-schwartz

Deepfakes: What needs to be done next?

On Thursday, the US House of Representatives held its first dedicated hearing on deepfakes, the class of synthetic media generated by AI. As if on cue, two high-profile reports of deepfakes on social media surfaced in the news. The first was of a forged video of Mark Zuckerberg, among other famous figures, created by artists as part of an exhibition to raise awareness about data privacy. The second was a report from AP about how a spy likely used a non-existent face on LinkedIn to infiltrate the Washington political scene.

Deepfakes are arguably still not mainstream, but they are here already. The technology has advanced at a rapid pace, and the amount of data required to fake a video has dropped dramatically. “Many of the ways that people would consider using deepfakes—to attack journalists, to imply corruption by politicians, to manipulate evidence—are clearly evolutions of existing problems,” says Sam Gregory, the program director of the human rights non-profit Witness, “so we should expect people to try on the latest ways to do those effectively.”

They really don’t have to be good to be damagingly deceptive, either. In fact, faked videos don’t have to be deepfakes at all. The recent doctored video of Nancy Pelosi, which was merely slowed down to make her appear drunk, is an example of a “cheapfake” that can also get out of hand. Preparing for the era of deepfakes, therefore, is also about addressing our current state of fake news and misinformation.

During the hearing, the House members and experts present discussed the current state of the technology, what regulators might do, and possible methods of retaliation against foreign governments should they use deepfakes to threaten national security or disrupt elections. The focus was largely on the upcoming 2020 elections, but the discussion also touched upon the impact of deepfakes on journalists, particularly female journalists, and other vulnerable populations online. “My overall impression was that it was—with a couple of exceptions—genuinely thoughtful,” says Jack Clark, OpenAI’s policy director, who was among the experts who testified. “The members there were asking what I felt were quite reasonable and detailed questions.”

So where do we go from here? A Witness report and a bill on deepfakes, introduced in parallel by Representative Yvette Clarke, offered similar recommendations for the path forward: companies and researchers who produce tools for deepfakes must also invest in countermeasures; social-media and search companies should invest in and integrate manipulation-detection features directly into their platforms; and regulators should not just focus on politicians but also consider vulnerable populations and international communities. Clark has another: the government should develop ways of measuring the state of the technology by engaging directly with the scientific literature. It would help them pre-empt the issues much earlier the next time around, he says. “I do think we could’ve had this conversation two years ago.”

Deepfakes have got Congress panicking. This is what it needs to do.
With the election approaching, lawmakers are facing up to the fact that they need to do something about the explosion in manipulated media.
By Karen Hao, June 12, 2019

The recent rapid spread of a doctored video of Nancy Pelosi has frightened lawmakers in Washington.
The video—edited to make her appear drunk—is just one of a number of examples in the last year of manipulated media making it into mainstream public discourse. In January, a different doctored video targeting President Donald Trump ended up airing on Seattle television. This week, an AI-generated video of Mark Zuckerberg was uploaded to Instagram. (Facebook has promised not to take it down.)

With the 2020 US election looming, the US Congress has grown increasingly concerned that the quick and easy ability to forge media could make election campaigns vulnerable to targeting by foreign operatives and compromise voter trust. In response, the House of Representatives will hold its first dedicated hearing tomorrow on deepfakes, the class of synthetic media generated by AI. In parallel, Representative Yvette Clarke will introduce a bill on the same subject. A new research report released by a non-profit this week also highlights a strategy for coping when deepfakes and other doctored media proliferate.

It’s not the first time US policymakers have sought to take action on this issue. In December of 2018, Senator Ben Sasse introduced a different bill attempting to prohibit malicious deepfakes. Senator Marco Rubio has also repeatedly sounded the alarm on the technology over the years. But it is the first time we have seen such a concerted effort from US lawmakers.

The deepfake bill

The draft bill, a product of several months of discussion with computer scientists, disinformation experts, and human rights advocates, will include three provisions. The first would require companies and researchers who create tools that can be used to make deepfakes to automatically add watermarks to forged creations. The second would require social-media companies to build better manipulation detection directly into their platforms. Finally, the third provision would create sanctions, like fines or even jail time, to punish offenders for creating malicious deepfakes that harm individuals or threaten national security. In particular, it would attempt to introduce a new mechanism for legal recourse if people’s reputations are damaged by synthetic media.

“This issue doesn’t just affect politicians,” says Mutale Nkonde, a fellow at the Data & Society Research Institute and an advisor on the bill. “Deepfake videos are much more likely to be deployed against women, minorities, people from the LGBT community, poor people. And those people aren’t going to have the resources to fight back against reputational risks.”

The goal of introducing the bill is not to pass it through Congress as is, says Nkonde. Instead it is meant to spark a more nuanced conversation about how to deal with the issue in law by proposing specific recommendations that can be critiqued and refined. “What we’re really looking to do is enter into the congressional record the idea of audio-visual manipulation being unacceptable,” she says.

The current state of deepfakes

By coincidence, the human rights nonprofit Witness released a new research report this week documenting the current state of deepfake technology. Deepfakes are currently not mainstream: they still require specialized skills to produce, and they often leave artifacts within the video, like glitches and pixelation, that make the forgery obvious. But the technology has advanced at a rapid pace, and the amount of data required to fake a video has dropped dramatically.
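The bill's first provision, automatic watermarking of whatever a deepfake tool produces, is easiest to picture in its weakest form: the generation tool stamps a machine-readable provenance tag into the file it saves. The sketch below is only an illustration of that idea, not anything the bill or any vendor actually specifies; it uses the Pillow imaging library, and the function names, metadata fields, and file names are all hypothetical.

```python
# Illustrative sketch only (not the bill's text and not any vendor's API):
# the weakest form of "automatic watermarking" is stamping machine-readable
# provenance metadata into a generated image at save time. Uses the Pillow
# imaging library; all function names, field names, and file names here are
# hypothetical.
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def save_with_provenance(image: Image.Image, out_path: str, tool_name: str) -> None:
    """Write a PNG that carries a machine-readable 'synthetic media' tag."""
    meta = PngInfo()
    meta.add_text("synthetic", "true")      # flag the output as generated
    meta.add_text("generator", tool_name)   # record which tool produced it
    image.save(out_path, "PNG", pnginfo=meta)


def read_provenance(path: str) -> dict:
    """Return whatever provenance text chunks the PNG still carries."""
    return dict(Image.open(path).text)


if __name__ == "__main__":
    frame = Image.new("RGB", (256, 256))    # stand-in for a generated frame
    save_with_provenance(frame, "output.png", "hypothetical-face-swap-tool")
    print(read_provenance("output.png"))    # {'synthetic': 'true', 'generator': ...}
```

A tag like this is trivially stripped by re-saving the file; robust watermarking and the detection tooling discussed below are much harder problems.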
Two weeks ago, Samsung demonstrated that it was possible to create an entire video out of a single photo; this week university and industry researchers demoed a new tool that allows users to edit someone’s words by typing what they want the subject to say. It’s thus only a matter of time before deepfakes proliferate, says Sam Gregory, the program director of Witness. “Many of the ways that people would consider using deepfakes—to attack journalists, to imply corruption by politicians, to manipulate evidence—are clearly evolutions of existing problems, so we should expect people to try on the latest ways to do those effectively,” he says.

The report outlines a strategy for how to prepare for such an impending future. Many of the recommendations and much of the supporting evidence also align with the proposals that will appear in the House bill.

The report found that current investments by researchers and tech companies into deepfake generation far outweigh those into deepfake detection. Adobe, for example, has produced many tools to make media alterations easier, including a recent feature for removing objects in videos; it has not, however, provided a foil to them. The result is a mismatch between the real-world nature of media manipulation and the tools available to fight it. “If you’re creating a tool for synthesis or forgery that is seamless to the human eye or the human ear, you should be creating tools that are specifically designed to detect that forgery,” says Gregory. The question is how to get toolmakers to redress that imbalance.

Like the House bill, the report also recommends that social-media and search companies do a better job of integrating manipulation-detection capabilities into their platforms. Facebook could invest in object-removal detection, for example, to counter Adobe’s feature as well as other rogue editing techniques. It should then clearly label videos and images in users’ newsfeeds to call out when they have been edited in ways invisible to the human eye. Google, as another example, should invest in reverse video search to help journalists and viewers quickly pinpoint the original source of a clip.

Beyond Congress

Despite the close alignment of the report with the draft bill, Gregory cautions that the US Congress should think twice about passing laws on deepfakes anytime soon. “It’s early to be regulating deepfakes and synthetic media,” he says, though he makes exceptions for very narrow applications, such as their use for producing non-consensual sexual imagery. “I don’t think we have a good enough sense of how societies and platforms will handle deepfakes and synthetic media to set regulations in place,” he adds.

Gregory worries that the current discussion in Washington could lead to decisions that have negative repercussions later. US regulations could heavily shape what other countries do, for example. And it’s easy to see how in countries with more authoritarian governments, politician-protecting regulations could be used to justify the takedown of any content that’s controversial or criticizes political leaders.

Nkonde agrees that Congress should take a measured and thoughtful approach to the issue, and consider more than just its impact on politics. “I’m really hoping they will talk [during the hearing] about how many people this technology impacts,” she says, “and the psychological impact of not being able to believe what you can see and hear.”

About the author: Karen Hao is the artificial intelligence reporter for MIT Technology Review.
In particular, she covers the ethics and social impact of the technology as well as its applications for social good. She also writes the AI newsletter, the Algorithm, which thoughtfully examines the field’s latest news and research. Prior to joining the publication, she was a reporter and data scientist at Quartz and an application engineer at the first start-up to spin out of Google X.

Deepfakes may be a useful tool for spies
A spy may have used an AI-generated face to deceive and connect with targets on social media.

The news: A LinkedIn profile under the name Katie Jones has been identified by the AP as a likely front for AI-enabled espionage. The persona is networked with several high-profile figures in Washington, including a deputy assistant secretary of state, a senior aide to a senator, and an economist being considered for a seat on the Federal Reserve. But what’s most fascinating is the profile image: it demonstrates all the hallmarks of a deepfake, according to several experts who reviewed it.

Easy target: LinkedIn has long been a magnet for spies because it gives easy access to people in powerful circles. Agents will routinely send out tens of thousands of connection requests, pretending to be different people. Only last month, a retired CIA officer was sentenced to 20 years in prison for leaking classified information to a Chinese agent who made contact by posing as a recruiter on the platform.

Weak defense: So why did “Katie Jones” take advantage of AI? Because it removes an important line of defense for detecting impostors: doing a reverse image search on the profile photo. It’s yet another way that deepfakes are eroding our trust in truth as they rapidly advance into the mainstream.
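The “weak defense” point can be made concrete. A reverse image search works, at bottom, by looking for near-duplicates of a photo in a large index of images already seen on the web; a face freshly generated by a GAN has no such near-duplicate, so the search comes back empty and the check tells you nothing. The sketch below is a minimal, illustrative version of that matching step using perceptual hashes; it assumes the third-party Pillow and imagehash packages, the directory and file names and the distance threshold are hypothetical, and it is not how LinkedIn or any search engine actually implements the lookup.

```python
# Illustrative sketch only: the matching step behind a reverse image search,
# done with perceptual hashes. A reused or stolen photo tends to have a near
# match in an index of previously seen images; a freshly GAN-generated face
# has none, so the check returns nothing. Assumes the third-party Pillow and
# imagehash packages; directory names, file names, and the distance threshold
# are hypothetical.
from pathlib import Path

from PIL import Image
import imagehash


def build_index(photo_dir: str) -> dict:
    """Hash every known photo so later queries can be checked for near matches."""
    return {p.name: imagehash.phash(Image.open(p)) for p in Path(photo_dir).glob("*.jpg")}


def find_near_matches(profile_photo: str, index: dict, max_distance: int = 8) -> list:
    """Return names of known photos whose perceptual hash is close to the query's."""
    query = imagehash.phash(Image.open(profile_photo))
    return [name for name, h in index.items() if query - h <= max_distance]


if __name__ == "__main__":
    known = build_index("known_photos/")                 # hypothetical reference set
    hits = find_near_matches("katie_jones_profile.jpg", known)
    # A one-off synthetic face typically yields an empty list, which looks the
    # same as a genuinely private photo and so defeats this line of defense.
    print(hits or "no matches found")
```

In other words, the check can only confirm that a photo has been seen before; a one-off synthetic face produces an empty result that is indistinguishable from a genuinely private photo, which is the gap “Katie Jones” appears to have exploited.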