The new blackmail in Iraq: AI and the exploitation of women


Shafaq News

Artificial intelligence (AI),
celebrated worldwide as a driver of innovation, is increasingly being turned
into a weapon of abuse. Across Iraq, women have reported being targeted by
fabricated videos, cloned voices, and doctored images that spread rapidly
across digital platforms.

Once confined to sophisticated
laboratories, these tools are now available as free or low‑cost smartphone
apps, placing powerful means of harm within reach of anyone intent on abuse.

AI’s Dark Side

In Iraq, these risks are intensified
by limited digital literacy. The 2024 Iraqi census showed that more than 32
million Iraqis — around 73 percent of the population — are online, with women
representing nearly 48 percent of users. Teenagers and young adults dominate
this digital landscape, with over 60 percent of internet users under 25, which
increases their exposure to online exploitation and manipulation.

Technology specialist Othman Ahmed
Akram told Shafaq News that Iraq’s sudden exposure to the internet,
social media, and digital technologies produced a shock greater than many
users could absorb.

“Useful tools were quickly twisted
into harmful ones,” he added.

Read more: Internet in Iraq: Snail-speed service, high costs, and digital divide

Blackmail Goes AI

Perpetrators in Iraq use voice
cloning to produce fake phone calls or audio clips that mimic confessions or
financial demands. Even short audio samples are enough to generate convincing
speech, while chatbots and social engineering techniques often build trust over
days or weeks to extract sensitive information.

Open-source intelligence (OSINT)
adds another layer to these abuses, enabling offenders to collect old
photographs, family details, and other personal data. These materials are then
used to fabricate conversations or screenshots that appear genuine, giving
false narratives an air of credibility.

Social media platforms have become
primary conduits for such digital manipulation. Facebook, TikTok, and Instagram
dominate Iraq’s online landscape, and many cases of harassment or extortion
begin there. The Ministry of Interior’s Community Department reported receiving
more than 13,000 complaints of digital harassment and blackmail in 2024, many
involving altered or falsified content.

However, the real number is believed
to be far higher, as victims often avoid reporting incidents due to fear of
stigma or retaliation.

The Hidden Scars

Last week, the al-Rusafa Criminal
Court in Baghdad sentenced a man to six years in prison for threatening to
publish fabricated, AI-generated images of a young woman.

The conviction relied on Article
430/1 of the Iraqi Penal Code, alongside Articles 47, 48, and 49, which cover
threats and complicity. Judges highlighted the limitations of Iraq’s current
laws: no specific provision yet addresses crimes facilitated by artificial
intelligence, so such cases must be prosecuted under existing statutes on
fraud, defamation, or extortion.

These legal gaps are compounded by
cultural and social norms. In a society where family honor is closely tied to
women’s behavior, activists warn that victims may face honor-based violence or
social isolation, often leaving them little choice but to comply with extortion
demands.

The severity of the situation
becomes clear through real-life experiences. One young woman in Baghdad,
speaking anonymously to Shafaq News, recounted being threatened with exposure
unless she paid money, despite the images being fake. “I could not sleep. I
thought my life was over. Even if the pictures were false, people would believe
them. How can I explain otherwise?” she recalled.

Read more: Beyond Oil: Digital work is forging Iraq’s new economic identity

The psychological impact is
significant. The Iraqi Women Network’s 2024 report noted that victims of
digital blackmail frequently experience anxiety, depression, and suicidal
thoughts. Many go so far as to cut off their online presence entirely, abandoning
social media and digital communication to protect themselves. Victims may also
lose access to education or employment when families respond harshly to online
rumors or manipulated content.

Update the Law Now!

Addressing AI-driven abuse requires
legal systems to catch up with technology. Globally, the European Union’s
Artificial Intelligence Act imposes strict transparency obligations on
deepfakes, mandating clear labeling and accountability.

In the US, several states have
introduced bills criminalizing the non-consensual creation and distribution of
intimate deepfakes, while the UK has strengthened its Online Safety Act to
outlaw the sharing of fabricated sexual images without consent.

In Iraq, the judiciary relies on the
Penal Code, particularly Articles 430 and 433 on threats, defamation, and
extortion. Experts note these laws are outdated for AI crimes. “The law was
written in the mid-20th century, at a time when the internet and AI did not
exist. Applying old categories to new crimes leaves loopholes,” legal advisor
Furat Al-Azzawi highlighted.

Iraqi civil society groups have also
called for a specific cybercrime law that addresses AI-driven abuse and online
harassment against women. Digital literacy campaigns, especially those
targeting teenage girls, are recommended as well to reduce vulnerability.

Read more: Iraq’s Gen Z: Caught between a digital future and fragile realities

Universities and schools could
integrate training on online safety, while the judiciary would benefit from
technical expertise to properly evaluate AI-facilitated crimes.

Experts warn that without urgent
reform, Iraq’s women remain highly exposed to AI-driven abuse. As artificial
intelligence reshapes both opportunity and danger, the gap between
technological misuse and legal accountability grows wider, leaving victims at
risk while perpetrators operate with relative impunity.

Written and edited by Shafaq News
staff.

