Last month, Meta Platforms, Inc. announced that it was working on a set of ethical guidelines for virtual influencers – animated, typically computer-generated characters designed to attract attention on social media. When Facebook, Inc. renamed itself Meta late last year, it heralded a pivot towards the “metaverse.” Even Meta admits the metaverse does not really exist yet. And while the building blocks of a persistent, immersive virtual reality for everything from business to play are still being assembled, virtual influencers are already online – including on platforms like Meta-owned Instagram – and many are surprisingly convincing.
Given its recent history, it is worth pondering whether Meta is really the right entity to be setting the ethical standards for virtual influencers and the metaverse more broadly.
Who (or what) are virtual influencers?
Meta’s announcement on January 12 notes the “rising phenomenon” of synthetic media – an umbrella term for images, video, voice, or text generated by computerized technology, typically using artificial intelligence (“AI”) or automation. Many virtual influencers incorporate elements of synthetic media in their design – ranging from completely digitally rendered bodies to human models that are digitally masked with characters’ facial features.
At both ends of the spectrum, this process still relies heavily on human labor and input – from art direction for photo shoots to writing captions for social media. Like Meta’s vision of the metaverse, influencers that are entirely generated and powered by AI are a largely futuristic fantasy. But even in their current form, virtual influencers are of real value to Meta, both as attractions for their existing platforms and as avatars in the metaverse.
After all, interest in virtual influencers has rapidly expanded over the past five years, attracting huge audiences on social media and partnerships with major brands, including Audi, Bose, Calvin Klein, Samsung, and Chinese e-commerce giant Alibaba’s TMall platform. And a competitive industry specializing in the production, management and promotion of virtual influencers has already sprung up, although it remains largely unregulated.
So far, India is the only country to address virtual influencers in connection with national advertising standards, requiring brands to “disclose to consumers that they are not interacting with a real human being” when posting sponsored content.
There is a need for ethical guidelines in the space in order to help producers and their brand partners navigate this new terrain, and more importantly, to help users understand the content they are engaging with. Meta has warned that “synthetic media has the potential for both good and harm,” listing “representation and cultural appropriation” as among the specific areas of concern.
Indeed, despite their relatively short lifespan to date, virtual influencers already have a history of “overt racialization” and misrepresentation, raising ethical questions. But it is far from clear whether Meta’s proposed guidelines will adequately address these questions.
Becky Owen, head of creator innovation and solutions at Meta Creative Shop, said the planned ethical framework “will help our brand partners and AI creators explore what is possible, likely and desirable, and what is not.” This seeming emphasis on technological possibilities and brand partners’ desires leads to an inevitable impression that Meta is once again conflating commercial potential with ethical practice.
By its own count, Meta’s platforms already host more than 200 virtual influencers. But virtual influencers exist elsewhere too: they do viral dance challenges on TikTok, upload vlogs to YouTube, and post life updates on Chinese platform Weibo. They appear “offline” at malls in Beijing and Singapore, on 3D billboards in Tokyo, and star in television commercials.
Gamekeeper, or poacher?
This brings us back to the question of whether Meta is the right company to set the ground rules for this emerging space. The company’s history is tarred by claims of unethical behavior, from Facebook’s questionable beginnings in Mark Zuckerberg’s Harvard dorm room (as depicted in The Social Network) to large-scale privacy failings demonstrated in the Cambridge Analytica scandal. Fast forward to February 2021, and Facebook showed how far it was willing to go to defend its interests, when it briefly banned all news content on Facebook in Australia to force the federal government to water down the Australian News Media Bargaining Code. Last year also saw former Facebook executive Frances Haugen very publicly turn whistleblower, sharing a trove of internal documents with journalists and politicians. These so-called “Facebook Papers” raised numerous concerns about the company’s conduct and ethics, including the revelation that Facebook’s own internal research showed Instagram can harm young people’s mental health, even leading to suicide.
Today, Meta is fighting antitrust litigation in the U.S. that aims to curb the company’s alleged monopoly, potentially by compelling it to sell key acquisitions including Instagram and WhatsApp. Meanwhile, the social media giant is scrambling to integrate messaging across Facebook Messenger, Instagram, and WhatsApp, effectively making the three apps different interfaces for a shared back end that Meta will doubtless argue cannot feasibly be separated, whatever the outcome of the current litigation.
Given this back story, Meta is likely not the ideal choice as ethical guardian of the metaverse. The already-extensive distribution of virtual influencers across platforms and markets highlights the need for ethical guidelines that go beyond the interests of one company – especially a company that stands to gain so much from the impending spectacle.
Tama Leaver is a Professor of Internet Studies at Curtin University. Rachel Berryman is a PhD Candidate in Internet Studies at Curtin University. (This article was initially published by The Conversation.)