Copied identities: What went wrong with Albania’s AI minister?
Albanian actress Anila Bisha poses with the AI-generated chatbot named "Diella," Tirana, Albania, Feb. 19, 2026. (AFP Photo)

When human faces, voices and identities are used for AI without consent, innovation risks becoming theft



Albania wanted to showcase the future. Instead, it may have created Europe’s most awkward legal test case for the artificial intelligence era.

Last September, Prime Minister Edi Rama’s government introduced “Diella,” an AI avatar presented as Albania’s first AI-powered minister overseeing public procurement. It was framed as bold innovation for a small Balkan nation, proof that Albania was modern, reform-driven and ready for the European Union. But Diella had a human face. And that face belonged to Albanian actress Anila Bisha.

Bisha is now suing the government over her right to control her own voice and likeness. She says she never agreed to become a political symbol. After trying repeatedly to contact authorities once Diella was unveiled, she turned to the courts.

Bisha claims she had agreed to lend her image and voice to a digital assistant on the government’s e-Albania services platform, a routine arrangement. What she says she did not agree to was becoming the face and voice of a virtual “minister,” featured in official communications and presented as part of the government’s reform agenda. The distinction is not technical. It is fundamental. A service chatbot is one thing. A state-branded AI official is another.

Her complaint argues that her biometric data, facial likeness and vocal patterns were repurposed beyond the scope of her original consent. Under Article 5(1)(b) of the EU’s General Data Protection Regulation, personal data must be collected for “specified, explicit and legitimate purposes” and cannot be further processed in ways incompatible with those purposes. Consent must be freely given, specific and informed. It is not open-ended permission for future political use.

Bisha is reportedly seeking about 1 million euros ($1.16 million) in damages and an immediate halt to the continued use of her likeness.

An AI avatar is not a static image. It can generate new speech and new performances indefinitely. A face and voice become code. Replicable. Transferable. Permanent. For all the talk of machine intelligence, successful avatars still depend on human likeness. They work because they feel real. Because they resemble someone. Because they carry the subtle cues of humanity, a familiar tone, a believable expression. Strip that away, and the illusion collapses.

AI may be synthetic, but its power is borrowed.

Around the world, similar disputes are emerging. Voice actors in the United States have sued companies for cloning their voices. Indian courts have blocked deepfake misuse of public figures’ images. In China, judges have ruled that AI-generated voice replication can violate personal rights. Even global tech companies have faced backlash for using voices that sounded too close to real actors. But Albania’s case is different.

Elsewhere, it has largely been private companies testing the limits. In Albania, it is the state accused of repurposing a citizen’s identity for political messaging. When a government adopts AI to embody reform, it carries the authority of public power. If that embodiment rests on a real person’s biometric identity without clear consent, innovation turns into appropriation.

The European Union has just enacted the AI Act, promising risk-based regulation and stronger safeguards. Albania, as a candidate country, is expected to align with those standards. But regulation means little if the most basic boundary, control over one’s own face and voice, can be blurred in practice.

This lawsuit exposes a blind spot in the global AI debate. Policymakers focus on bias, surveillance and safety. Far less attention is paid to synthetic identity appropriation, the quiet absorption of human likeness into digital systems.

If courts fail to draw clear lines, they may normalize a dangerous idea: that once you agree to appear in one digital format, your identity can be endlessly repurposed.

Today, it is an AI minister. Tomorrow, it could be political ads, corporate campaigns or state messaging in contexts the person behind the likeness never imagined.

AI may be efficient. It may be modern. It may even help governments function better. But it still relies on us.

If our voices, our faces and our expressions can be copied, scaled and redeployed without clear limits, then the real risk is not technological failure. It is the erosion of personal autonomy in a world where identity can be multiplied at the click of a button.

The future of AI will not be defined only by code. It will be defined by whether human beings remain owners of their own likeness. Because if we cannot control our own faces and voices, we will not control the digital future either.