
Disrupting Deepfakes: Adversarial Attacks Against Conditional Image Translation Networks and Facial Manipulation Systems

By Nataniel Ruiz, Sarah Adel Bargal, and S. Sclaroff

in Documents, Papers
1 min read

Face modification systems based on deep learning have become increasingly powerful and accessible. Given images of a person’s face, such systems can generate new images of that same person under different expressions and poses. Some systems can also modify targeted attributes such as hair color or age. Manipulated images and videos of this kind have been coined deepfakes.

In order to prevent a malicious user from generating modified images of a person without their consent, we tackle the new problem of generating adversarial attacks against such image translation systems: attacks that disrupt the resulting output image. We call this problem disrupting deepfakes.
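The abstract frames disruption as crafting a small input perturbation that maximizes the distortion of the translated output. Below is a minimal PGD/I-FGSM-style sketch of that idea in PyTorch; the stand-in generator G, the MSE distortion loss, and the eps/step/iters values are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def disrupt(G, x, eps=0.05, step=0.01, iters=40):
    """Return a disrupted input x + delta whose translation G(x + delta)
    differs maximally from the clean translation G(x)."""
    with torch.no_grad():
        y_clean = G(x)                          # reference output on the clean face
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(iters):
        loss = torch.nn.functional.mse_loss(G(x + delta), y_clean)
        loss.backward()                         # gradient of output distortion w.r.t. delta
        with torch.no_grad():
            delta += step * delta.grad.sign()   # I-FGSM ascent step
            delta.clamp_(-eps, eps)             # keep the perturbation imperceptible
        delta.grad.zero_()
    return (x + delta).detach()

# Toy usage with a stand-in "translation network" (a single conv layer):
G = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)
x = torch.rand(1, 3, 64, 64)
x_disrupted = disrupt(G, x)
```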

Most image translation architectures are generative models conditioned on an attribute (e.g., put a smile on this person’s face). We are the first to propose and successfully apply (1) class-transferable adversarial attacks that generalize across different classes, meaning the attacker needs no knowledge of the conditioning class, and (2) adversarial training for generative adversarial networks (GANs) as a first step towards robust image translation networks. Finally, in gray-box scenarios, blurring can mount a successful defense against disruption; we therefore present a spread-spectrum adversarial attack that evades blur defenses. Our open-source code can be found at this https URL.
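A hedged sketch of the spread-spectrum idea mentioned above: at every iteration the attack steps through a randomly sampled blur, so the perturbation that emerges disrupts the generator regardless of which single blur the defender applies at test time. The Gaussian blurs, kernel sizes, and loss here are illustrative assumptions; the paper's attack may cycle through defenses differently.

```python
import random
import torch
import torchvision.transforms.functional as TF

def spread_spectrum_disrupt(G, x, eps=0.05, step=0.01, iters=80,
                            kernel_sizes=(3, 5, 7, 9)):
    """Craft a perturbation that still disrupts G after an unknown blur."""
    with torch.no_grad():
        y_clean = G(x)                          # clean translation as reference
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(iters):
        k = random.choice(kernel_sizes)         # sample one candidate blur defense
        blurred = TF.gaussian_blur(x + delta, kernel_size=k)
        loss = torch.nn.functional.mse_loss(G(blurred), y_clean)
        loss.backward()                         # gradient flows through the blur
        with torch.no_grad():
            delta += step * delta.grad.sign()
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    return (x + delta).detach()
```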

GET REPORT

Tags: Deepfake, Deepfake-documents, Deeptrace