FAKE: manipulation and digital ethics | LUCILA VILELA

The rapid development of communication and information technologies has changed the way photographs, videos, links, and points of view are created, recorded, stored, and shared, blurring the line between fiction and reality, between true and false.

Fake news, deepfakes, fake profiles, and fake clicks fill the territory of the internet. In the online universe, images and information are malleable and therefore easily manipulated, at once seductive and frightening.

Deepfakes are one example: a technology that uses artificial intelligence to transfer faces and voices onto other people's bodies. These montages are realistic and easily believed, and they raise debates about freedom of expression as well as moral damage.

Deepfakes are often humorous: comic videos that place one person's face or voice on another's body, creating unusual situations, such as the videos by the Brazilian Bruno Sartori that satirize political figures and celebrities.

In the video "Animation of images using artificial intelligence," Sartori offers a sample of how deepfakes are created.

Bruno Sartori: Animation of images using artificial intelligence

The video, based on the article First Order Motion Model for Image Animation by Siarohin et al., presents a detailed explanation of how the artificial intelligence system works.

First Order Model for Image Animation
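
For readers who want to see how such an animation is produced in practice, the sketch below follows the authors' publicly released demo code. It is a minimal sketch only: it assumes the repository's demo module (with its load_checkpoints and make_animation helpers) is importable and that a pretrained checkpoint such as vox-cpk.pth.tar has been downloaded separately; all file names and paths are illustrative.

```python
# Minimal sketch of driving a still portrait with a video, following the
# publicly released demo of the first-order-model repository (Siarohin et al.).
# Assumptions: the repo's demo.py is importable and a pretrained checkpoint
# (e.g. vox-cpk.pth.tar) has been downloaded separately.
import imageio
from skimage.transform import resize

from demo import load_checkpoints, make_animation  # helpers from the authors' repo

# Source face (a single photograph) and driving video (the actor's performance).
source_image = resize(imageio.imread("source.png"), (256, 256))[..., :3]
reader = imageio.get_reader("driving.mp4")
driving_video = [resize(frame, (256, 256))[..., :3] for frame in reader]

# Load the keypoint detector and generator trained on the VoxCeleb face dataset.
generator, kp_detector = load_checkpoints(
    config_path="config/vox-256.yaml",
    checkpoint_path="vox-cpk.pth.tar",
)

# Transfer the motion of the driving video onto the source face.
predictions = make_animation(
    source_image, driving_video, generator, kp_detector, relative=True
)

# Save the result; each predicted frame is a float image in [0, 1].
imageio.mimsave(
    "result.mp4", [(frame * 255).astype("uint8") for frame in predictions], fps=30
)
```

A single photograph and a short driving video are enough: the keypoint detector tracks the motion in the driving video, and the generator re-renders the source face following that motion.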

However, this type of montage has also been put to more harmful purposes, raising a series of ethical and moral issues.

The term deepfake (deep learning + fake) appeared in 2017, when a Reddit user adopted "deepfake" as a pseudonym to post a pornographic video involving the faces of the actresses Emma Watson and Gal Gadot. The fake montage raised issues of invasion and violation of privacy.

Given the interest in this kind of artifice, many adult-content sites have been incorporating deepfake technology. Whether such scenes are false or genuine seems not to interfere with the pleasure of those who consume them, and this is the argument the porn industry uses to defend the technology. Beyond the fusion of identities, clients can also order scenes of sexual relations with their favorite celebrities. The problem is that using someone's image without authorization can have serious consequences in terms of moral damages.

Another use of deepfakes that poses a serious ethical threat is the video that attributes false speeches to political figures.

These videos can mislead many people who are not yet familiar with this language.

An example is the video in which former President Barack Obama delivers an absurd speech. With an artificial intelligence system, it was possible to map an actor's expressions onto Obama's face, while the voice was altered by software capable of reproducing a timbre identical to the former president's.

In other words, with artificial intelligence it is possible to make anyone say anything, including things that person would never say. And the result looks believable.

The American actor and director Jordan Peele, who made the Obama deepfake, used the image of the former president precisely to warn about the dangers of deepfakes. The Obama who appears in the video is actually Jordan Peele, transformed by artificial intelligence; all the information is false. Yet even with this explanation, the video has confused many people.

The video was subtitled in Portuguese and posted on YouTube by BuzzFeed Brasil under the title "You won't believe what Obama says in this video!". Uploaded without any explanation of its content, and with the alarmist appeal typical of fake news strategies, it generated a great deal of controversy, as the comments show.

Deepfake Barack Obama:

The comments left on this video in Brazil illustrate the need for digital literacy to develop this kind of critical reading.

Every new technology has a useful side and a harmful one. The ethical conscience of each person with access to these tools is decisive if humanity is to advance socially. It is common knowledge, however, that this ethics is not always respected, and it falls to people themselves to judge the truthfulness of the information circulating in the slippery digital environment.

Deepfake Donald Trump:

Recognizing a deepfake is not simple. Some videos are crude fakes, easy to identify, but others are extremely refined and realistic, and may be imperceptible to the naked eye.

As deepfakes improve, technologies to detect fake content are being developed in parallel.

New technique for detecting deepfake videos:
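
The linked video presents one such technique. As a purely generic illustration of how many detectors are built (and not the specific method shown in that video), the sketch below follows a common pattern: a small convolutional network scores individual frames as real or fake, and the scores are averaged over the clip. The network, its shapes, and the dataset mentioned in the comments are assumptions for the example; a usable detector would have to be trained on labeled real and fake footage.

```python
# Generic illustration of frame-level deepfake detection (not the specific
# technique from the linked video): a small CNN scores each frame as fake or
# real, and the scores are averaged over the clip. Untrained weights are used
# here only to show the data flow; a real detector would be trained on a
# labeled dataset such as FaceForensics++.
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    """Binary classifier over single 3x256x256 frames."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # one logit per frame: > 0 means "fake"

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        x = self.features(frames).flatten(1)
        return self.head(x).squeeze(1)

def clip_fake_probability(model: nn.Module, clip: torch.Tensor) -> float:
    """Average the per-frame fake probabilities over a clip of shape (T, 3, H, W)."""
    with torch.no_grad():
        per_frame = torch.sigmoid(model(clip))
    return per_frame.mean().item()

if __name__ == "__main__":
    model = FrameClassifier().eval()
    dummy_clip = torch.rand(8, 3, 256, 256)  # 8 random frames stand in for a video
    print(f"estimated probability of being fake: {clip_fake_probability(model, dummy_clip):.2f}")
```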

 

Apps like Impressions can popularize deepfakes on social media, allowing anyone to create deepfake content without effort.

"Show off your impersonation skills, recreate classic scenes, create new memes, and celebrity face swaps. Share them with your friends and family via social media", says the app.

App Impressions:

More realistic deepfakes require deep learning basics, access to the algorithms, and a good graphics editor.

But while the race between advancing technology and fake-content detectors runs on, lawmaking drags behind. So all we are left with is ethical judgment and research-based verification.

If in the last elections the dissemination of fake news contributed to the results, with deepfakes the problem could get worse. Political strategies are often unscrupulous in deceiving voters.

The documentary The Great Hack, by Karim Amer and Jehane Noujaim (Netflix, 2019), exposed the business of fake news dissemination used to influence the 2016 election results in the United States. It reveals the Cambridge Analytica scandal, in which data from millions of Facebook users was illegally used to steer individuals' political choices.

The same strategy influenced the 2018 presidential elections in Brazil, on a different platform: if in the United States fake news spread through Facebook, in Brazil it spread through WhatsApp. The strategy was the old method of emotional manipulation, based on inciting hatred and fear.

Trailer: The Great Hack (Netflix, 2019)

The biggest problem with fake news, however, is not the likelihood of deception but the fact that, for many people, it does not matter whether the information is false, as long as it is compatible with their opinions. In other words, if a news item or video matches what a person already thinks, they pass it on without worrying whether it is true or false; they pass it on as if it were an argument for what they believe. What follows is a spread of personal opinions disguised as true news.

The Argentinean cartoonist Daniel Paz illustrated this very clearly: "How is it possible that the news is false, if it is exactly what I think?"

These news, Daniel Paz, 2020.

Hacking and the manipulation of internet users can reach extreme levels, as the Polish film The Hater, by director Jan Komasa (Netflix, 2020), shows.

The Hater tells the story of a hacker who resents being expelled from university for plagiarism; plagiarism also marks his speech, as he copies lines from other characters. Working for an agency that specializes in manipulating public opinion on social media, the protagonist, Tomasz Giemsa (Maciej Musialowski), infiltrates an electoral campaign to defeat a candidate for mayor of Warsaw who advocates respect for gender and racial diversity.

The looks of a polite young blond man hide a hacker with the profile of a psychosocial stalker and a mastery of emotional manipulation tactics. In the narrative, Tomasz engineers a plan in which actions in the virtual world interfere with real events, culminating in an overwhelming tragedy.

Trailer: The Hater (Netflix, 2020)

Professor and researcher Giselle Beiguelman warns that this kind of behavior is becoming a common strategy of an ultra-authoritarian technopopulism, one that takes advantage of online resources and of breaches in the guidelines of major social networks and platforms such as Twitter, Facebook, Instagram, and YouTube.

The Hater touches on a problem that the researcher Cynthia M. Wong warned about in the documentary The Social Dilemma (Netflix, 2020): "some of the most troubling implications of governments and other bad actors weaponizing social media is that it has led to real, offline harm".

The Social Dilemma gathers testimonies from specialists on power and manipulation in social media. According to Tristan Harris, a former Google design ethicist and co-founder of the Center for Humane Technology, "persuasive technology is just sort of design intentionally applied to the extreme, where we really want to modify someone's behavior".

Harris adds: "There's an entire discipline and field called 'growth hacking': teams of engineers whose job is to hack people's psychology so they can get more growth, more user sign-ups, more engagement".

The computer scientist Jaron Lanier observes that "we've created a world in which online connection has become primary, especially for younger generations. So we've created an entire global generation of people who are raised within a context where the very meaning of communication, the very meaning of culture, is manipulation".

Trailer: The Social Dilemma (Netflix, 2020)

Information spread through false profiles, with the participation of bots and of professionals who encourage violence, culminates in disasters. Creating false profiles and faces does not require any sophistication; you just need to know where to find them.

Such profiles can be obtained from sites like This person does not exist and Fake name generator, where algorithms produce combinations of faces and identities that do not exist (a minimal sketch of this kind of recombination follows the links below).

This person does not exist  (https://thispersondoesnotexist.com/)

Fake name generator  (https://www.fakenamegenerator.com/)
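
To make concrete how little sophistication an "unreal combination" requires, here is a minimal sketch of the idea behind a fake identity generator: it simply recombines items drawn from lists at random. The names, fields, and lists below are invented for the example and bear no relation to the actual sites linked above, which rely on much larger databases and, in the case of This person does not exist, on a generative neural network that synthesizes the faces themselves.

```python
# Minimal sketch of the idea behind a fake identity generator: random
# recombination of items from lists. The sample data below is invented for
# illustration; real services draw on much larger databases.
import random

FIRST_NAMES = ["Ana", "Bruno", "Carla", "Diego", "Elena", "Fabio"]
LAST_NAMES = ["Silva", "Rossi", "Gomez", "Costa", "Moreau", "Klein"]
CITIES = ["Quito", "Warsaw", "Recife", "Lisbon", "Turin"]

def fake_profile() -> dict:
    """Return one randomly assembled identity that belongs to no one."""
    first = random.choice(FIRST_NAMES)
    last = random.choice(LAST_NAMES)
    return {
        "name": f"{first} {last}",
        "email": f"{first.lower()}.{last.lower()}{random.randint(1, 999)}@example.com",
        "city": random.choice(CITIES),
        "birth_year": random.randint(1960, 2004),
    }

if __name__ == "__main__":
    for _ in range(3):
        print(fake_profile())
```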

The use of false identities, political tricks, photographic manipulations, and distortions of speech is not new in human history. As the researcher Renée DiResta puts it, "it's not that highly motivated propagandists haven't existed before. It's that the platforms make it possible to spread manipulative narratives with phenomenal ease, and without very much money".

What changes are the tools, and the time it takes to cognitively grasp the new languages. The rumor or lie that was once passed from mouth to mouth began to materialize with the advent of techniques of reproducibility. False news had earlier peaks during the French Revolution and in eighteenth-century London, when the rise of the press and the circulation of pamphlets, lampoons, and newspapers helped spread lies that influenced political decisions.

The photographic image, too, has been manipulated since its appearance. At the beginning of the 20th century, the Ecuadorian photographer José Domingo Laso (1870-1927) used to scrape glass plates to eliminate the indigenous people who appeared in his photographs of the streets of Quito. In place of the resulting smudges he inserted white dresses and wide-brimmed hats, contributing to the white, elitist look desired by the repulsive hygienist politics of the time.

Photograph by José Domingo Laso, Teatro Sucre, published in the book Quito a la vista, 1911. Detail.

It is also well known that Stalin had people removed from his photographs once they were no longer allies. Trotsky, for example, was erased from several photographs in which he stood beside Stalin, as their political differences grew more severe.

Original photo: Nikolai Yezhov at the right of Joseph Stalin. Photo edited under the Stalinist regime, with Yezhov deleted (1937).

The practice of manipulating photos has always been common and has never ceased to be used in political strategies. Countless lies run through historical documents.

In Brazil, one example is the photograph of the journalist Wladimir Herzog. During the military regime it was presented in the newspapers as if he had committed suicide, when in fact he had been murdered.

In 1987, an advertisement for the newspaper Folha de São Paulo showed how it is possible to "tell a lot of lies by telling only the truth". The ad reveals the power of the source of information, which can not only create false news but also produce a false truth simply by changing the way the story is told.

Folha de São Paulo advertisement, directed by Andrés Bukowinski (1987)

For a long time, the documentary character of photographs and videos was taken for granted. Documentary value, however, cannot be attributed to the technical medium alone; it is the sources that give the material its credibility.

The falsification of images and videos with a documentary look is more and more convincing. On the internet, information jumps from one site to another without control, removed from its original context and stripped of its original source. Thus false information disguised as news spreads at speed. As José Luis Pardo observes, digital technology has increased our capacity to deceive ourselves.

Deepfakes had already appeared in major film productions, but those required investments of millions of dollars and months of work; now, with artificial intelligence, the cost is almost nothing and the result nearly immediate.

Disney's studios, for example, are far ahead in research to improve the technique and resolution of deepfakes. The development of the technology contributes to work and research in the visual arts and is being incorporated into the entertainment industry, cinema, film restoration, and artistic production.

Video restoration with artificial intelligence

It is therefore necessary to know how to take advantage of the positive aspects of technological advances. The philosopher Pierre Lévy argues that the Internet is not responsible for the contemporary wave of populism and ultraconservative governments: in the 1930s and 1940s the Internet did not exist, and we had an absolutely terrible wave of totalitarian regimes. Propaganda has existed since the beginning of politics. So one cannot blame the Internet, or any other technical medium, and avoid facing the political developments themselves. People must assume their political responsibilities; it is the human element that is truly responsible.

Thinking about social media, Tristan Harris observes that "we've moved away from having a tools-based technology environment to an addiction- and manipulation-based technology environment. That's what's changed. Social media isn't a tool that's just waiting to be used. It has its own goals, and it has its own means of pursuing them by using your psychology against you".

To live with these devices, however, we need to understand this new mode of operation and be conscious of how they work. Rules and laws also need to catch up quickly.

For the political scientist Peter Warren Singer, legislation should not ban deepfakes outright, since they can be used for good and for evil, to entertain and to deceive. He supports a "Blade Runner rule": not banning, for the sake of free speech, but guaranteeing people's right to know when they are interacting with a robot. Laws, Singer points out, are essential for that.

But until such laws exist, we can only trust the character and ethical convictions of those who master these tools. Fake is now a reality, and an image must be understood in its condition as an image.

As noted above, the biggest problem to be faced is the practice of passing on false information without caring whether it is true, as long as it matches one's personal opinions.

In his article "A study of the concept of digital ethics", Piercarlo Maggiolini observes a resulting lack of awareness in the use of new technologies. For the author, "the 'technical' complexity requires an appropriate professional ethics of the agents involved, because the technical risks of malfunctioning are very high".

At the dawn of digital culture, in 1964, Alice Hilton, founder of the Cybercultural Research Institute, had already recognized the need for ethical conduct in the new era of automation:

"To create the age of cyberculture requires something far more difficult than scientific discoveries and technical inventions. We must reexamine our moral values and our ethical concepts and the deeply ingrained notions to which we give lip service. And we must understand the difference between the moral values of mankind and the ethos of a society."

With the spread of information and communication technologies, ethical reflection based on a principle of collective responsibility is crucial. Maggiolini understands that "the new Digital Ethics cannot be a purely individual ethic, but must be above all a collective, public and professional ethic".

Especially at election time, when discourse runs hot, it is necessary to hold back the urge to share on social media and take a few minutes to verify information. Misinformation spread by dishonest methods leads to governmental disasters.

Caution in processing information and educating the population in how to handle content are essential for understanding the universe of the Internet.

Browsing is imprecise.

 

 

*Translated by Janaina Santanna

***

Bibliography:

BARBOSA, Mariana (org.) Pós-verdade e Fake News. Rio de Janeiro: Cobogó, 2019.

BEIGUELMAN, Giselle; LEMOS, Ronaldo. Arte e política em tempos de deepfakes. Revista Zum. 29/07/2020.
https://www.youtube.com/watch?v=oJBVmg9lMeA

DIRESTA, Renée. Interview in the documentary The Social Dilemma (Netflix, USA, 2020).

FILHO, Marco Antonio. Arqueologia do apagamento. Porto Alegre: Cadernos de fotografia, 2019.
http://cadernos.festfoto.art.br/arqueologia-do-apagamento/

HARRIS, Tristan. Interview in the documentary The Social Dilemma (Netflix, USA, 2020).

HILTON, Alice. An ethos for the age of cyberculture. Proceedings of the Spring Joint Computer Conference, 1964.
https://is.gd/xtzePY

LANIER, Jaron. Interview in the documentary The Social Dilemma (Netflix, USA, 2020).

LÉVY, Pierre. Cibercultura. São Paulo: ed.34, 1999.

LÉVY, Pierre. Internet e responsabilidade política. Fronteiras do Pensamento, 2020. Available at:
https://www.youtube.com/watch?v=OxmfPH4YJIQ

MAGGIOLINI, Piercarlo. Um aprofundamento para o conceito de ética digital. Revista de Administração de Empresas, vol. 54, n. 5, 2014.
Available at: https://is.gd/VXBYbo

PARDO, José Luis. Las verdades del Photoshop. El País, 2015.
https://is.gd/lkhExb

SARTORI, Bruno. Tonight CNN. 21/07/2020
Available at: https://www.youtube.com/watch?v=eauV-zHNqW

SINGER, Peter Warren. Interview with Mariana Barbosa. In: BARBOSA, Mariana (org.) Pós-verdade e Fake News. Rio de Janeiro: Cobogó, 2019, p. 104.

WONG, Cynthia M. Interview in the documentary The Social Dilemma (Netflix, USA, 2020).

Films:

AMER, Karim; NOUJAIM, Jehane. (Documentary) The Great Hack. Directed by Karim Amer and Jehane Noujaim. Netflix, USA, 2019. 2h19m.

KOMASA, Jan. (Film) The Hater. Directed by Jan Komasa. Netflix, Poland, 2020. 2h15m.

ORLOWSKI, Jeff. (Documentary) The Social Dilemma. Directed by Jeff Orlowski. Netflix, USA, 2020. 1h33m.

Links and Websites:

BUKOWINSKI, Andrés. Folha de São Paulo commercial, 1987.
https://www.youtube.com/watch?v=bZaYeiptmd4

SARTORI, Bruno. YouTube channel:
https://www.youtube.com/channel/UCaiGLmKrcve_cipRElqSnqA

Deepfake Obama:
https://www.youtube.com/watch?v=y5PZ5MNJSDg&t=5s

First Order Model for Image Animation:
https://www.youtube.com/watch?v=mUfJOQKdtAk&t=4s

Impressions side by side comparison
https://www.youtube.com/watch?time_continue=60&v=NVluHwj6Lmo&feature=emb_logo

New technique for detecting deepfake videos
https://www.youtube.com/watch?v=51uHNgmnLWI

Fake profiles
https://www.fakenamegenerator.com/

Video restoration with artificial intelligence

https://www.youtube.com/watch?v=xHpqABfu63A

This person does not exist
https://thispersondoesnotexist.com/