
Scamming and Deepfakes: The Rise of Cyber-crime

Fraud has always existed; only in recent years has technology made it increasingly difficult to detect. Laws and policies have introduced stricter punishments, yet offenders often find loopholes within them. Today, skilled hackers and scammers exploit people's trust, often through email or messaging, using persuasive language and presentation. Unsuspecting individuals click on links or enter codes, unknowingly granting con artists access to personal passwords, credit card numbers, and other sensitive information.


Image: deepfakes being used via facial replication

Moreover, scammers mimic the voices of bank personnel or government officials, inducing anxiety and prompting immediate payment under false pretenses. By the time the victim realizes the deception, the transaction is already complete and the extent of the damage is unclear. Sometimes the truth remains concealed indefinitely, leaving victims unaware of the significant sums they unwittingly transferred to unrelated organizations.


Adding to these challenges are deepfake technologies, which produce highly realistic videos and images depicting individuals engaging in actions or uttering words they never did. While earlier iterations of deepfakes exhibited noticeable flaws, advancements have rendered them increasingly difficult to detect. This sophistication underscores the urgency of implementing strict controls and regulations on deepfake technology.

 

Deepfake technology has been around for quite some time. It gained wider attention within the academic community in 2016 when Justus Thies and his colleagues presented their research on real-time face capture and manipulation at the Conference on Computer Vision and Pattern Recognition. This technology enables one individual (A) to manipulate the facial expressions of another individual (B) in a recording, resulting in realistic-looking videos where person B appears to mimic the expressions of person A.


The technology gained more prominence in 2017 when it became popular on Reddit, following posts by a user under the name "deepfakes." This user shared various videos in which the faces of famous female actors were inserted into scenes taken from pornographic material. Several months later, an app surfaced on the platform that allowed users to create their own face-swapped videos.


These two methods for creating deepfake videos differ in their approach. The first method involves facial re-enactment, where the visual representation of the targeted individual's face mimics and mirrors the facial expressions of another person. In contrast, the second method overlays the face of the targeted individual onto another, essentially acting as a mask but seamlessly blending to the point where it's difficult to distinguish between real and fake.
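To make the second approach concrete, below is a minimal, hedged sketch of the naive "cut a face out and blend it over another face" idea using OpenCV. It is not a real deepfake pipeline (those rely on learned encoders/decoders or GANs rather than simple cloning), and the file names source.jpg and target.jpg are illustrative assumptions.

```python
# Toy illustration of the "overlay" idea behind face-swap deepfakes:
# detect a face in a source image, then blend it over a face detected
# in a target image. Real deepfake pipelines learn the mapping with
# neural networks; this sketch only shows the naive cut-and-blend concept.
# Assumes: OpenCV installed (pip install opencv-python) and two
# hypothetical images, source.jpg and target.jpg, each with one clear face.

import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def largest_face(image):
    """Return the bounding box (x, y, w, h) of the largest detected face."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        raise ValueError("No face found")
    return max(faces, key=lambda f: f[2] * f[3])

source = cv2.imread("source.jpg")   # face to paste (hypothetical file)
target = cv2.imread("target.jpg")   # scene receiving the face (hypothetical file)

sx, sy, sw, sh = largest_face(source)
tx, ty, tw, th = largest_face(target)

# Resize the source face patch to the size of the target face.
face_patch = cv2.resize(source[sy:sy + sh, sx:sx + sw], (tw, th))

# Poisson (seamless) cloning hides the seam around the pasted patch --
# the "mask that blends in" effect described above.
mask = np.full(face_patch.shape, 255, dtype=np.uint8)
center = (tx + tw // 2, ty + th // 2)
swapped = cv2.seamlessClone(face_patch, target, mask, center, cv2.NORMAL_CLONE)

cv2.imwrite("swapped.jpg", swapped)
```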


Despite their differences, both techniques share a key similarity: the resulting material appears authentic in appearance and sound but does not correspond to the actual actions or words of the individual. It is crafted purely to deceive the masses.



A well-known example of a deepfake video is one where Barack Obama purportedly calls Donald Trump a "total and complete dipshit" (BuzzFeed/YouTube, 2018). Initially, the video shows Obama making a series of out-of-character statements; partway through, it transitions to a split-screen shot, revealing that the remarks were voiced not by the former US president but by actor, comedian, and director Jordan Peele. The video concludes with warnings about the potential misuse of deepfake technology, highlighting the importance of exercising caution when trusting internet sources.

The predominant use of deepfake technology has been for pornographic purposes. Research conducted by the cybersecurity company Deeptrace, published in October 2019, found that 96% of deepfake videos online were pornographic, replicating many of the scenarios discussed earlier in this article. This technology is frequently employed to fabricate graphic content, often depicting celebrities in compromising situations. Furthermore, individuals have utilized deepfake applications to create pornographic videos featuring acquaintances, friends, or classmates.


Deepfake videos are also capable of generating fabricated content depicting known politicians or candidates making racist or sexist comments. Leveraging Generative Adversarial Networks (GANs), these videos can attribute convincing actions and remarks to real individuals, making them difficult to refute even in a legal context.
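For readers unfamiliar with the term, the sketch below shows the core adversarial loop that GANs are built on: a generator tries to produce samples that a discriminator cannot tell apart from real data. It is a toy, hedged illustration on a one-dimensional Gaussian rather than face images, and assumes only that PyTorch is installed; real deepfake generators are vastly larger but follow the same dynamic.

```python
# Minimal sketch of the generator-vs-discriminator training loop behind
# GANs, using a toy 1-D Gaussian instead of face images. The generator
# learns to produce samples the discriminator cannot distinguish from
# "real" data. Assumes PyTorch is installed.

import torch
import torch.nn as nn

torch.manual_seed(0)
NOISE_DIM, BATCH = 8, 64

# Generator: noise -> fake sample. Discriminator: sample -> "is it real?" logit.
G = nn.Sequential(nn.Linear(NOISE_DIM, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch():
    # "Real" data: samples from N(4, 1.5), standing in for genuine images.
    return 4.0 + 1.5 * torch.randn(BATCH, 1)

for step in range(2000):
    # Train the discriminator: label real samples 1, generated samples 0.
    real = real_batch()
    fake = G(torch.randn(BATCH, NOISE_DIM)).detach()
    d_loss = bce(D(real), torch.ones(BATCH, 1)) + bce(D(fake), torch.zeros(BATCH, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train the generator: try to make the discriminator call its output "real".
    fake = G(torch.randn(BATCH, NOISE_DIM))
    g_loss = bce(D(fake), torch.ones(BATCH, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

    if step % 500 == 0:
        print(f"step {step}: d_loss={d_loss.item():.3f}, g_loss={g_loss.item():.3f}")

# After training, generated samples should cluster near the real mean of 4.
print("mean of generated samples:", G(torch.randn(1000, NOISE_DIM)).mean().item())
```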


Instances of criminal exploitation involving deepfake technology have been reported, such as a case in the United Arab Emirates in 2021, where perpetrators allegedly cloned a company director's voice to orchestrate a $35 million heist. The bank manager believed in good faith that the transaction was legitimate, only to discover later that scammers had used deepfake technology to replicate the director's voice and create the illusion of genuine communication.

 

The problems continue to escalate steadily due to the widespread adoption and endorsement of this technology by the general public. AI and its applications have permeated various aspects of social media, including video content on platforms like Instagram, Facebook, Twitter, and TikTok. Additionally, researchers anticipate its significant utilization in targeted military and intelligence operations.

 



The only effective means of breaking this web of deception are digital literacy and stringent regulations governing cybercrime. While rules and laws exist, they often fall short in addressing and prosecuting these fraudulent practices. We find ourselves in an era where the cunning only grow more adept at evading legal repercussions, which demands swift and decisive action: the transactions are completed within minutes, leaving authorities only a brief window in which to act.


Other methods include:

  • Educating the general public to improve their digital literacy and awareness: informing people about these issues and discouraging them from clicking suspicious links or visiting websites promoted in dubious messages (a rough sketch of such a link check appears after this list).

  • Developing systems capable of tracking and documenting digital assets, although concerns about privacy breaches and other related issues may hinder or limit this approach.

  • Encouraging journalists and intelligence analysts to verify information thoroughly before publishing it in articles.  
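As promised in the first bullet, here is a small, hedged sketch of the kind of rule-of-thumb checks digital-literacy training encourages people to apply to links before clicking. The heuristics, the suspicious-TLD and shortener lists, and the example URLs are illustrative assumptions, not a production phishing filter; real tools combine reputation databases, machine learning, and certificate checks.

```python
# Rough sketch of simple heuristics that can flag suspicious links of the
# kind phishing messages rely on. The rules and example URLs below are
# illustrative assumptions, not a production phishing filter.

from urllib.parse import urlparse
import re

SUSPICIOUS_TLDS = {"zip", "xyz", "top"}       # illustrative list
URL_SHORTENERS = {"bit.ly", "tinyurl.com"}    # illustrative list

def suspicion_signals(url: str) -> list[str]:
    """Return human-readable reasons why a URL looks risky (may be empty)."""
    signals = []
    parsed = urlparse(url if "//" in url else "http://" + url)
    host = parsed.hostname or ""

    if parsed.scheme == "http":
        signals.append("not using HTTPS")
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
        signals.append("raw IP address instead of a domain name")
    if host.startswith("xn--") or ".xn--" in host:
        signals.append("punycode host (possible lookalike characters)")
    if host.count(".") >= 3:
        signals.append("many subdomains (brand name may be buried)")
    if host.split(".")[-1] in SUSPICIOUS_TLDS:
        signals.append("top-level domain often abused in spam")
    if host in URL_SHORTENERS:
        signals.append("link shortener hides the real destination")
    if "@" in parsed.netloc:
        signals.append("'@' in the URL can disguise the real host")
    return signals

if __name__ == "__main__":
    for link in ["http://192.168.10.5/verify-account",
                 "https://secure-login.bank.example.accounts.xyz/reset",
                 "https://www.example.com/help"]:
        flags = suspicion_signals(link)
        verdict = "SUSPICIOUS" if flags else "no obvious red flags"
        print(f"{link}\n  -> {verdict}: {', '.join(flags)}")
```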

The route to strong cyber-security comes with many hurdles: cyber-fraud is developing just as rapidly, deepfakes are only its most recent addition, and privacy and other variables must be weighed as well. It is time to up the game and level the playing field.
