
From trendy toys to tools of political manipulation: Rapid advancement of AI avatars

Politics
TASS
2026/05/05 - 03:00 · 504 views
© Alexander Shcherbak/TASS

As you read this text, thousands of non-existent people are selling clothes, raising funds, and campaigning for politicians.

The emergence of AI avatars and virtual influencers is the logical evolution of filters, AR masks, and generative graphics: we have moved from light retouching to entirely synthetic faces. Within just a few years, this technology has spawned a distinct market, followed by unprecedented risks of fraud, exploitation, political propaganda, and long-term psychological harm to society. In this article, together with international experts from the Global Fact-Checking Network (GFCN), TASS breaks down exactly how the digital-clone market operates, which markers can still help spot the clones, and why they are becoming a primary weapon for cybercriminals and political spin doctors.
One of the first landmark cases of a digital avatar achieving commercial success was the virtual blogger Lil Miquela, created in 2016. She was positioned as a Brazilian-American girl managed by an American creative agency. Despite her deliberately robotic appearance, for a long time it was unclear whether a real person stood behind the account. It perfectly mimicked standard influencer activity and only revealed its true origins in 2019. The virtual model has collaborated with real brands, and her social media accounts boast millions of followers.

This is not the only example of the technology's successful application. In Japan, a virtual-human AI company created the model Imma in 2018, who also found success and has collaborated with famous brands.

Beyond pure influencer personas, AI avatars are also being built for highly specific tasks. They do more than play virtual humans: they are actively used to run organizational accounts, in advertising, and in the blogs of real individuals. In one experiment in China, avatars pre-trained on real influencers' material hosted livestreams in their place. Today, a significant share of promotional content and reviews is generated with the help of such avatars.

Analysts from international research firms predict explosive scaling of the market for this technology. Reports estimate a market value ranging from $47.5 billion to $308.3 billion by 2034. Virtual characters let the retail and entertainment sectors combine a controllable image, low reputational risk, and 24/7 availability.

As Alexey Parfun, a Russian GFCN expert on AI technologies, noted, the rapid development of the AI avatar industry is outpacing legislation. "An entirely new legal market is forming – the rights to digital copies of people. Who owns a virtual influencer: the agency, the brand, or the model's developer?
Neither intellectual property nor copyright law is yet equipped to accurately define an AI blogger as an asset," he said of the legal aspects of using the avatars.

The evolution of AI and the emergence of avatar-generating platforms have significantly lowered the barrier to entry. A user uploads text, selects an avatar, and the system creates a realistic video. Some websites offer to create an AI influencer within a minute, while other services can then run its social media presence autonomously, completely bypassing human moderators.

Nevertheless, recognizing synthetic media is still possible, for now. "The main visual markers of AI are unnatural, overly rhythmic blinking and a narrow range of facial expressions," Parfun explained. "Moreover, generative models often produce a 'plastic' skin smoothness without natural pores, and static lighting that fails to react to head movements."

Synthetic speech gives itself away through its "sterility." "It lacks natural pauses and random emphases. The algorithm doesn't understand anatomy, so it occasionally generates physiologically impossible sounds," Parfun underlined. "But in six months to a year, even professionals will stop noticing the difference. The creators of fakes are always one step ahead of the detectors because they target specific vulnerabilities, while the defense is forced to patch everything at once."

Synthetic influencers are already being deployed for malicious purposes. Alexandre Guerreiro, a GFCN expert from Portugal, pointed to the legal framework: "Everything related to image is understood, within the EU context, as 'personal data' [article 4(1) of GDPR]. There is also the 'right to erasure' [article 17], according to which everyone has the right to demand that any AI model trained specifically on that person's likeness be 'untrained'.
The concept of 'legitimate interest' is rarely a valid legal basis for processing biometric data for commercial avatar creation without explicit consent."

Scammers go further still, generating fake documents based on massive data leaks, such as the incident involving the French National Agency for Secured Documents (ANTS), which affected 11.7 million accounts. Cybersecurity specialists are convinced that this data will be used to create forged identity documents.

Prabesh Subedi, a GFCN cybersecurity expert from Nepal, explained the mechanics of bypassing banks' KYC (know-your-customer) systems: "Utilizing AI tools the scammers generate synthetic videos, audios, faces and IDs that mimic real human and utilize them in verification process. Applications such as virtual cameras help them to set a real-like environment. Using such fake documents to open bank accounts online help them to transfer money from high-jacked accounts. Widely accessible tools <…> have sophisticated ability to help ill intention of scammers."

When such an avatar is used for phishing or bogus fundraising, the risks increase exponentially. Unlike email or SMS, live audio or video interaction happens instantly, leaving the victim little room for critical thinking, which amplifies the chances of being scammed, Subedi noted.

Regulatory attempts are lagging. Guerreiro added that labeling addresses only the transparency side of the problem, not the consent side. "A label – 'AI-generated' tag – does not fully negate the reputational or emotional impact if the viewer has already cognitively processed the image as 'real' <...>. Malicious actors can bypass automated 'deepfake' filters, increasing the chances for the public to think 'if there is no label, it has to be real'. [We face] a threat to the 'epistemic security' <...> of our society," the expert said.

Political deepfakes are often created for simple profit.
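The "overly rhythmic blinking" marker that Parfun describes lends itself to a simple numeric check. The sketch below is illustrative only, and assumes blink timestamps have already been extracted from a video by some upstream eye-tracking step; the 0.3 cutoff is an invented threshold, not a validated forensic standard.

```python
import statistics

def blink_rhythm_score(blink_times):
    """Coefficient of variation (stdev / mean) of inter-blink intervals.

    Human blinking is irregular, so its intervals vary widely;
    a near-zero score suggests a metronome-like, synthetic pattern.
    """
    intervals = [b - a for a, b in zip(blink_times, blink_times[1:])]
    if len(intervals) < 2:
        raise ValueError("need at least three blink timestamps")
    return statistics.stdev(intervals) / statistics.mean(intervals)

# An avatar blinking every 4.0 seconds scores exactly zero ...
synthetic = [0.0, 4.0, 8.0, 12.0, 16.0, 20.0]
# ... while human-like, irregular timestamps score well above 0.3.
human = [0.0, 2.1, 7.5, 9.0, 14.8, 16.2]

print(blink_rhythm_score(synthetic))  # 0.0
print(blink_rhythm_score(human))
```

As Parfun warns, such single-marker heuristics decay quickly: once generators add jitter to blink timing, this particular check stops working, which is exactly the detector-versus-forger arms race he describes.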
Wired reported on a student from India who had created the AI persona Emily Hart, a supporter of Donald Trump and the MAGA conservative movement. The politically polarized persona quickly secured reach and monetization on social media, earning its creator thousands of dollars. The man tried launching several AI influencers, but Emily's liberal counterpart and a neutral persona made no money at all.

Fauzan Al-Rasyid, an Indonesian GFCN political technology expert, explains the success of such strategies: "Engagement algorithms do not reward informative content, they reward emotionally activated users. Anger and fear are the most reliable fuels. <...> A synthetic persona designed to embody everything a target audience finds desirable, without any messy human contradictions, is essentially propaganda that has been A/B tested by the platform itself. That's not a side effect of the system. It's the system working as designed."

The influence of AI on politics is becoming institutionalized. In the UK, the AI Steve project even ran for parliament. "AI Steve got 179 votes <...> finishing dead last. On paper, a humiliating failure. <...> [But] the real threat isn't an AI on the ballot. It's a real politician using AI to become a synthetic ideal, saying what each demographic wants to hear, in real time, without contradiction or fatigue," Al-Rasyid said.

According to the Indonesian expert, a majority of Europeans surveyed in 2021 said they would be open to AI replacing some politicians, not out of enthusiasm for machines, but because trust in human politicians had collapsed. "That's the gap this technology walks through," he concluded.

Today, social networks are flooded with fake-influencer accounts praising political factions. For now, the sheer scale of such campaigns is driven by cheapness rather than generation quality.
But the rapid advancement of commercial neural networks suggests that visually flawless AI avatars are already opening a new chapter in the history of political manipulation and misinformation.
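Al-Rasyid's point that an engagement-optimized feed effectively A/B tests propaganda for its creators can be illustrated with a toy simulation. Everything here is invented for illustration (the persona names, the per-impression engagement rates, the greedy-with-exploration selection rule); it is not any platform's actual ranking code, only a sketch of the feedback loop he describes.

```python
import random

random.seed(7)

# Hypothetical per-impression engagement rates for three persona variants.
engagement_rate = {"polarized": 0.12, "neutral": 0.03, "opposite": 0.04}
clicks = {name: 0 for name in engagement_rate}
shown = {name: 1 for name in engagement_rate}  # one warm-up impression each

for _ in range(5000):
    if random.random() < 0.1:
        # Occasional exploration: show a random variant.
        pick = random.choice(list(engagement_rate))
    else:
        # Greedy choice: promote the variant with the best observed ratio.
        pick = max(clicks, key=lambda n: clicks[n] / shown[n])
    shown[pick] += 1
    if random.random() < engagement_rate[pick]:
        clicks[pick] += 1

# The emotionally charged variant ends up with the vast majority
# of impressions; the neutral and opposing variants are starved.
print(shown)
```

The feed never "decides" to push polarizing content; it simply keeps promoting whatever draws engagement, which is why the Emily Hart persona earned money while her liberal and neutral counterparts did not.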