Considerations for Regulating the Metaverse: New Models for Content, Commerce, and Data

As regulators catch up to Web2, human behavior and technological change continue to drive evolution in metaverse experiences. How can companies and regulators plan for a more distributed, immersive, and tangible internet?


Don’t count the metaverse out. The grand vision of hyperscale metaverse experiences bringing potentially billions of users together into vast 3D worlds, enabled and managed by Web3 protocols like blockchains and cryptocurrencies, may indeed face technical and organizational challenges as it takes shape. But the currents moving people and businesses into more digital and embodied interactions could continue to swell into a sea change. Regardless of how people interface or which services they use to do so, the concept of the metaverse is an important way to understand how more of the world is migrating onto the internet and how digital systems are connecting more deeply with the physical. As businesses grapple with the opportunities, regulators are considering the implications.

Data from Deloitte’s 2023 Digital Media Trends survey shows that digital “places” are a very real part of users’ lives. Around one-third of US respondents consider online experiences to be meaningful replacements for in-person experiences, and half of all Gen Zs and millennials surveyed agree. Among these younger generations, 48% agree that they spend more time interacting with others on social media than in the physical world, and 40% say they socialize more in video games. These generations are much more engaged with multiplayer video games that support self-expression, social connection, and immersion—and they’re spending money to personalize their avatars with digital goods.

Regulators are still considering the challenges of the Web2 era: endless content, data collection, privacy, and increasingly complex digital economies. But are current regulations prepared to address the challenges and harness the opportunities presented by hyperscale metaverse experiences? Are businesses ready for the challenges of content and conduct, privacy, and trade when people don digital avatars to express themselves, socialize, and transact, casting large data shadows across vast networks? What are some of the implications when glances, gestures, and even speech are fully digitized, recorded, and stored, when identities can be distributed and obfuscated, and when source and provenance are even more uncertain thanks to generative AI? How can ownership of digital things, dominion over virtual territory, and currency that comprises a bundle of crypto tokens be defined?

These questions may not have answers yet, but business leaders and regulators should be asking them. Technology and media companies are committing billions of dollars to develop metaverse technologies in a race to capture consumer and business demographics. Investors are eyeing trillions in value. The hype may have died down, but the deeper currents are still building.

In this article, we explore how regulators are working to tackle the challenges of hyperscale social media and content creation—issues such as content moderation, conduct, privacy, copyright and fair use, and tax and trade—and then consider how metaverse experiences could amplify these challenges and necessitate new regulatory considerations.

Why is the metaverse different?

However it’s defined, the metaverse is a place. It approximates the physical world and typically acts as if it were solid. It has physics, and its users have bodies through which they can act, interacting with objects, others, and the virtual world. Social video games illustrate this “placeness” and the embodiment of avatars in interactive environments. Today’s most popular social video games host tens of millions of players, with all the attendant behaviors such large and diverse groups entail.

If a metaverse enables potentially tens of millions of users to interact together outside the confines of a specific game, how might this amplify the challenges of moderation, behavior, harassment, and misinformation? If a user says something in a metaverse space, is that the same as posting content? Who owns that content, and who is liable? Is an avatar an individual whose speech is protected? What constitutes abuse? Are the rules defined by the location of the users or the location of the service provider, or both? And where is the metaverse, anyway?

This lack of location and the abstraction of humans into avatars and things into data objects challenge companies and regulators to think differently about existing categories of law in metaverse experiences. A European Parliament briefing, for example, outlines the potential opportunities and risks for competition, liability, and intellectual property. South Korean lawmakers stated that metaverse use cases raise many issues around data governance, privacy, and user safety. The Japanese Ministry of Economy, Trade and Industry (METI) has established a “Web 3.0 Policy Office” to connect finance and tax departments with media, sports, fashion, and other industries to foster economic interoperability in the metaverse. Such efforts now exist alongside broad Web2 regulatory frameworks.

Yet more questions likely need to be asked to prepare for the metaverse and the regulations it may require. Do legal definitions of content apply to 3D objects and assemblies? Will metaverse experiences make data capture much more powerful? How should we view the emergence of AI influencers and digital twins of celebrities? What are the tax implications when identity and location may be hidden by encrypted blockchains? Amid so much continuous disruption, regulators are often caught between enforcing existing laws, determining where those laws might apply to metaverses, and deciding when entirely new rulesets may be needed.

The regulatory focus: Content, conduct, privacy, and trade

By analyzing current regulations, the human behaviors that drive their evolution, and the novel implications that metaverse interaction models can create, business leaders have an opportunity to inform their strategies and implementations. Staying ahead of domestic and global regulatory developments can help mitigate risk and define the scope of innovative experiences. These considerations could enable a more equitable and safer user environment in the metaverse, drive public adoption of the technologies, and help inform future policy developments.

When anticipating how regulators may seek to manage hyperscale metaverse experiences, companies should consider how they’re currently addressing content, conduct, privacy, and trade issues such as tax and finance.

Content, conduct, and safety

Providers and regulators are now contending with the consequences of digitizing human behaviors into hyperscale social environments: the leading social media services that span continents and gather hundreds of millions (and even billions) of users together. Social media, user-generated content services, and multiplayer video games are working to monitor conduct and moderate content. They aim to protect their brands and support positive experiences for their users while establishing responses for when content or conduct causes problems.

These services are used by billions of people across many cultures, normative spaces, and legal jurisdictions. Yet they are often still navigating which kinds of content and conduct require moderation. Depending on how it’s created and distributed, content can undermine ownership rights, as with copyright violations. Depending on the views it expresses and the images it contains, content can be illegal under the law, like hate speech, or offensive by the varying standards of different cultures. Conduct such as harassment or bullying can cause harm and undermine the safety of these experiences. In the US, Section 230 of the Communications Decency Act generally provides online platforms with immunity from liability based on third-party content. Many lawmakers are pushing for this immunity to be scaled back.

In the European Union, the Digital Services Act (DSA) established a regulatory regime that requires platforms to manage the risks that illegal and harmful content and activity may pose to users and society. Aimed primarily at services that allow users to share content (including social media platforms), the DSA seeks to increase online safety, drive accountability, and create transparency on how content and behavior are managed on platforms.

While the EU has put in place a regulatory framework across its member states, there is limited federal law in the United States, where individual states are legislating piecemeal. A challenge in creating social media regulation is the need not only to protect against harmful content but also to protect free speech. California’s AB 587, for instance, requires social networks to post their content moderation policies and describe their processes for flagging and reporting problematic content like hate speech, racism, extremism, dis- and misinformation, harassment, and political interference.

There seems to be little global consensus on how to regulate human behaviors in social experiences. How such rules might play out in metaverse spaces is even more fraught: codes of conduct are suggested, but enforceable rules are scarce. Where are the lines of self-expression through avatars? What about virtual violence? How should copyright apply to avatars and digital clothing, or to deepfakes and generative AI content? In the absence of regulation, many of these issues are being tested and legal positions defined via lawsuits. For instance, fair use and the copyright status of generative AI works are being debated in courts in multiple jurisdictions. The EU aims to address the risks of AI-generated content in its Digital Services Act and draft AI Act, requiring those in scope to classify levels of risk and enable labeling of deepfake and inauthentic content, for example.

These issues may become more problematic in the presence of younger users. The UK’s Age-appropriate Design Code (for privacy) and pending Online Safety Bill (for content), along with the EU’s Better Internet for Children (BIK+) program, are guidelines seeking to clarify how services should protect younger users, including recommendations for enabling “strong” privacy by default, turning off location tracking, and not prompting children for personal information. Providers are asked to assure the age of users, protect their data, and shield them from harmful and illegal content. The US state of Utah recently enacted requirements for social networks to secure parental consent before any child account is created and to set curfews on child accounts, preventing access between 10:30 PM and 6:30 AM. In 2019, and with further amendments in 2021, China limited the amount of time minors can play video games, aiming to deter “internet addiction.” In more immersive experiences, will younger users find it difficult to limit their time online? And whose responsibility is it to set those limits?

Critical considerations

When negative consequences are allowed to continue unchecked, regulators can be compelled toward strong responses that can limit growth and innovation. As regulations come into effect, tech companies should be proactive in embedding protections into their current policies and implementations and adopt leading practices that help support positive outcomes for users and society.

Protection by default, trust by design: Providers should enable protections by default, with straightforward and easy-to-manage user controls and policies for content and conduct. Restricting unsafe search results for younger users, blocking younger users from appearing in searches, disabling direct messaging from unknown accounts, and establishing new accounts as private until configured otherwise are steps that can help companies demonstrate compliance and dedication to user safety. Some providers may also consider how to selectively “mute” other avatars, keep avatars at a distance, spin off safer metaverse spaces designed for children or teens, and follow a minimal data collection policy for younger users.
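To make this concrete, here is a minimal sketch of what protection-by-default provisioning might look like in code, assuming a hypothetical account service; every field name and threshold below is illustrative, not any platform’s actual API:

```python
from dataclasses import dataclass

# Illustrative defaults for a newly created account; all field names
# and values are hypothetical, chosen to mirror the practices above.
@dataclass
class SafetyDefaults:
    profile_private: bool = True          # new accounts start private
    discoverable_in_search: bool = False  # minors excluded from search
    safe_search: bool = True              # unsafe results filtered out
    dms_from_unknown_accounts: bool = False
    location_tracking: bool = False
    avatar_personal_space_m: float = 1.2  # minimum avatar distance
    data_collection: str = "minimal"      # collect only what is required

def provision_account(age: int) -> SafetyDefaults:
    """Apply the strictest defaults to users under 18; adults may
    relax individual protections after account creation."""
    defaults = SafetyDefaults()
    if age >= 18:
        defaults.discoverable_in_search = True
        defaults.data_collection = "standard"
    return defaults

print(provision_account(14))
```

The design choice worth noting is that safety is the starting state: users opt out of protections rather than opting in, which is the posture codes like the UK’s Age-appropriate Design Code recommend.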

Real-time content moderation: Moderating such enormous amounts of content and conduct can be very difficult and costly, and metaverse experiences can be harder still. But as potential harms become evident, regulators can impose greater punitive measures on service providers. Providers should be paying attention to AI and large language models (LLMs) that may be better able to moderate at scale. Such tools may help mitigate harms, avoid legal challenges, and foster enjoyable experiences for the majority.
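As a rough illustration of how an LLM-backed moderation pipeline might be wired together, the sketch below uses a stub in place of the model call; the labels, scores, and threshold are assumptions, not a specific vendor’s interface:

```python
def classify(utterance: str) -> dict:
    """Placeholder for an LLM call returning per-label scores in [0, 1].
    A production system would send the utterance (or a transcript of
    in-world speech) to a hosted model and parse its structured output."""
    score = 0.9 if "<slur>" in utterance else 0.01
    return {"hate_speech": score, "harassment": 0.01, "self_harm": 0.01}

def moderate(utterance: str, block_threshold: float = 0.8) -> str:
    # Take the highest-scoring harm label and act on it.
    scores = classify(utterance)
    label, top = max(scores.items(), key=lambda kv: kv[1])
    if top >= block_threshold:
        return f"blocked ({label}); queued for human review"
    return "allowed"

print(moderate("hello fellow avatars"))  # -> allowed
```

In practice the stub would be replaced with a hosted model call, and borderline scores would be routed to human reviewers rather than decided automatically.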

Risk analysis: Metaverse innovations may inadvertently create new ways for bad actors to engage in harmful conduct and exert influence. It is unclear how issues like bullying, harassment, stalking, grooming, and hate speech may manifest in metaverse environments. Companies should consider scenario modeling to identify new risks and mitigation strategies, then communicate them to users and regulators to build awareness.

Global regulators put people at the center

So far, the EU has played a leading role in advancing regulations focused on consumer protections, with a stated “people-first paradigm” that aims to emphasize the safety of individual users. Among these are the EU’s General Data Protection Regulation (GDPR) and Digital Services Act (DSA). These regulations apply across member countries and carry hefty fines for companies found in violation, including for non-EU businesses that operate there. They could be applied to metaverse use cases.

On the content and conduct front, a comprehensive Online Safety Bill is advancing through UK Parliament. It includes rules that online platforms must follow to protect children from harmful content and to identify and delete “illegal” content. Lawmakers have indicated that they deem these rules “future-proof” and applicable in metaverse environments, which may create a significant challenge for platforms that don’t authenticate users’ ages.

In the US, federal frameworks tend to regulate individual industries (finance and healthcare, for instance), whereas consumer protection rules often vary by state. Some US states are adopting rules modeled after those in the EU for protection of digital privacy. A federal-level “Kids Online Safety Act,” the goal of which is to protect younger users from social content that may be considered harmful, is under consideration in the Senate and the subject of robust debate.

Privacy

In the Web2 era, many tech companies and social media platforms have built their businesses around collecting and monetizing user data. However, there are concerns that consumers may not understand which kinds of data are being collected and how, the difference between personally identifiable information (PII) and anonymized data, how data is being stored and protected, and how it is being used. Tracking users can infringe on privacy rights, and resellers can, if not regulated, provide data to third parties with very different uses in mind. Data breaches have put consumers at risk and enabled additional targeting, persuasion, and abuse. Given the scale of data in the digital economy and the potential risks to consumers and providers, some governments have shifted to establishing much stronger legal and regulatory frameworks around personal data and privacy.

The EU’s GDPR requires any company that collects and processes personal information of EU residents to provide transparent privacy notices to users, obtain their explicit consent to collect and use their information, and implement data-protection measures. The proposed ePrivacy Regulation would govern the use of cookies, the confidentiality of communications, and electronic marketing, setting rules around tracking technologies and marketing practices.

In the US, data is regulated by industry, and consumer privacy laws vary from state to state, which may create complications for companies whose business crosses state borders. Several individual states have data privacy regulations in effect as of this writing. California’s Consumer Privacy Act (CCPA) has similarities to the GDPR, such as the rights to access and delete personal information and to opt out of its sale. Many of the state-level data privacy rules apply primarily to businesses operating in those states, regarding data of residents in those states, and they do not govern data transfers.

Many of these rules may apply to the metaverse, but a world of much more embodied and digitized interactions—where gestures, facial expressions, and conversations are captured and stored, for example—could amplify these challenges and create new ones. A Web 3.0–enabled metaverse could obfuscate user identity through encrypted blockchains or make it difficult to locate data trackers. Metaverse use cases could create new kinds of personal information, like sentiments and emotions collected from virtual reality headsets, and 3D user-generated content co-created with generative AI. Synthetic influencers and celebrity digital twins are already challenging licensing models and even notions of personhood.

Critical considerations

To help mitigate risks and anticipate future conditions, companies should view their data collection and usage through a regulatory lens, implementing privacy protection measures during development, not merely when problems arise. In effect, data should be considered an asset that is also a potential liability requiring risk management.

Operate across borders: Jurisdiction-based privacy laws can be difficult to apply when users in many locations interact with entities in multiple countries. Blockchains and other encrypted distributed ledgers can make ascertaining jurisdiction easier—or more difficult. A decentralized framework for digital identity may enable users to “travel” across environments with their unique identity, information, and ownership of goods, but companies would then likely need compatible interfaces that recognize and support each user. Tech companies may consider sharing their operational challenges with regulators to frame common standards and harmonize approaches.
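A minimal sketch of what such a portable identity record could look like, loosely inspired by W3C decentralized identifier (DID) concepts; the issuance flow and field names are assumptions for illustration:

```python
import hashlib
import json

def issue_identity(public_key: str, display_name: str) -> dict:
    # Derive a stable identifier from the user's public key. The
    # "did:example:" prefix is a placeholder method, not a registered
    # production namespace.
    did = "did:example:" + hashlib.sha256(public_key.encode()).hexdigest()[:16]
    return {
        "id": did,
        "displayName": display_name,
        "publicKey": public_key,
        "ownedAssets": [],  # references to goods carried across worlds
    }

print(json.dumps(issue_identity("pk-abc123", "WanderingAvatar"), indent=2))
```

A receiving platform would still need an interface that resolves the identifier and verifies ownership claims before admitting the avatar and its goods, which is where the compatibility burden described above arises.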

Privacy by design: Operating from a privacy-first perspective may help companies stay ahead of rules around consent, transparency, and data protection. Instead of finding pockets of customer information and attempting to lock them down, companies should consider a holistic and scalable data management approach. Solutions such as public data vaults and secure peer-to-peer storage architectures could remove some of the burdens of data security and compliance from companies.

Let customers decide: Establishing policies with metaverse-specific language and empowering users to decide how their data is collected and used can help reduce liability and hedge against emerging regulations. Companies should approach this effort strategically rather than responding ad hoc to the uneven patchwork of regulatory regimes. For data-sharing and advertising, financial incentives between consumers and companies may encourage users to share more freely and give them a stronger role in the success of providers. Customers can also be given more tools to monitor and even modify their data on company platforms.

Tax and finance

Businesses that operate in or transact through Web2 services have been navigating a patchwork of shifting tax jurisdictions, raising many questions about indirect tax, the definition of digital goods and services, and how to handle multiple jurisdictions based on the location of the buyer, the seller, the service, and the benefit derived. On the runway from Web2 to Web3 and metaverse experiences, key considerations include establishing identity and location, extending the definition of goods to include digital things, and addressing the challenges of cryptocurrencies.

In Europe, digital service taxes are levied by countries in the Organisation for Economic Co-operation and Development (OECD) on digital activities such as the sale of online advertisements or customer data, and on companies that maintain a digital presence in a given jurisdiction. For example, when a company collects user data, applies analytics, and serves ads, the profits are taxed based on the company’s location as well as the user’s location where the profit-generating event occurred. Value-added tax (VAT) rates may also apply, based on the location of the supply for physical goods and on the user’s location for digital goods. However, there is no consensus yet on whether, for example, events organized in virtual spaces or exchanges of virtual currency for virtual goods are subject to VAT. The debate hinges on how to define cryptocurrencies and virtual assets. The OECD offers guidance for transactions involving crypto assets. The EU’s Markets in Crypto-Assets Regulation (MiCA) provides consumer protections and imposes stricter requirements on crypto exchanges.
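As a simplified illustration of the location-based VAT logic described above, assuming a hypothetical rate table and helper function:

```python
# Illustrative VAT determination following the rules sketched above:
# digital goods are taxed at the buyer's location, physical goods at
# the place of supply. Rates shown are for illustration only.
VAT_RATES = {"DE": 0.19, "FR": 0.20, "IE": 0.23}

def vat_rate(good_type: str, supply_country: str, buyer_country: str) -> float:
    country = buyer_country if good_type == "digital" else supply_country
    return VAT_RATES[country]

# A digital good supplied from Ireland to a German buyer uses the German rate.
print(vat_rate("digital", supply_country="IE", buyer_country="DE"))  # 0.19
```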

Developments such as the OECD guidance and MiCA demonstrate traceability and oversight considerations. The parties involved in a transaction should be identified, the flow of assets should be defined, and this information should travel with the transaction and be retained by both sides. Depending on their implementation, cryptocurrencies can make this easier or harder.
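A minimal sketch of what a traceability record meeting these criteria might contain; the schema is an assumption for illustration, not a regulatory standard:

```python
import hashlib
import json
from datetime import datetime, timezone

def build_transfer_record(seller_id: str, buyer_id: str,
                          asset: str, amount: float, currency: str) -> dict:
    # Identify both parties and define the asset flow, as described above.
    record = {
        "seller": seller_id,
        "buyer": buyer_id,
        "asset": asset,
        "amount": amount,
        "currency": currency,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    # A digest lets either side later prove the record was not altered.
    record["digest"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    return record

print(build_transfer_record("seller-42", "buyer-7",
                            "virtual-parcel-9", 120.0, "EUR"))
```

Both parties retaining an identical, digest-stamped copy is one way the “travel with the transaction” requirement could be met in practice.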

In the US, the Securities and Exchange Commission has brought cryptocurrency exchanges to court for alleged fraudulent behaviors and has asserted that the majority of cryptocurrencies are securities rather than commodities. The distinction between securities and commodities can determine how they are taxed and regulated. To add more transparency to online transactions and deter criminal activity, the INFORM Consumers Act, which took effect in June 2023, requires online marketplaces to collect information about “high-volume” sellers, block those that don’t provide the information, and provide means for consumers to report problems.

In metaverse environments, identification and tracking may be more difficult, as different parties have different levels of access to user identity, information, and location. Many companies have experimented with bundles of real and virtual assets to encourage metaverse engagement and nurture brand loyalty. But regulators could struggle to define these assets and may apply different rulesets. For hyperscale metaverse services and the economies they support, specifying effective taxation and auditing may present unforeseen challenges, but companies have a chance to help cocreate those specifications.

Critical considerations

Work by analogy: In countries where rules around digital goods and virtual transactions are not clearly defined, companies and regulators should collaborate on analogies that map onto existing rules: for example, whether nonfungible tokens (NFTs) are collectibles, or whether purchases executed with cryptocurrency are subject to capital gains taxes. These analogies could help in conversations with regulators and auditors.
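For instance, under the analogy that spending cryptocurrency is a taxable disposal, the arithmetic might look like the following sketch; the figures and the 15% rate are placeholders, not tax advice:

```python
# Illustrative only: if spending crypto is treated as a taxable
# disposal, the gain is the spot value at disposal minus the original
# cost basis. The 15% rate is an assumed placeholder.
cost_basis = 100.00         # USD paid when the tokens were acquired
value_at_disposal = 150.00  # USD value when spent on a virtual good
capital_gain = value_at_disposal - cost_basis  # 50.00
tax_owed = capital_gain * 0.15                 # 7.50 at the assumed rate
print(f"gain: ${capital_gain:.2f}, tax: ${tax_owed:.2f}")
```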

Expand tax data reporting capabilities: When transactions cross jurisdictions with different tax and reporting requirements, companies may need systems and processes to capture data, retain it securely, and produce it as needed. However, many back-end systems are not set up to capture, track, and report the information necessary to determine appropriate taxation and comply with transparency rules. Some legislative bodies are considering real-time reporting, which would require companies to grant authorities direct access to transaction data. It may be wise to include tax and finance professionals in product development discussions to help ensure that new models conform to applicable rulesets.

Leverage blockchain: Smart contracts, ledgers, and other blockchain technologies represent a potential solution for tracking and storing transaction information in Web2 and Web3 environments. Once interoperability and scalability challenges are solved, real-time information exchange between platforms and authorities may enable companies to improve compliance and reporting.
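To show why chained ledgers are attractive for this purpose, here is a toy append-only ledger in which each entry commits to its predecessor; it is a sketch of the tamper-evidence property only, not a production blockchain:

```python
import hashlib
import json

class Ledger:
    """Toy append-only ledger: each entry embeds the hash of its
    predecessor, so altering any past entry breaks the chain."""

    def __init__(self):
        self.entries = []

    def append(self, payload: dict) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"payload": payload, "prev": prev_hash}
        # Hash is computed over the payload plus the previous hash.
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)
        return body

ledger = Ledger()
ledger.append({"tx": "buyer-7 pays seller-42 120 EUR for virtual-parcel-9"})
ledger.append({"tx": "VAT on the sale reported to the tax authority"})
print(ledger.entries[-1]["hash"])
```

An auditor holding only the final hash can detect any retroactive edit, which is the property that makes real-time reporting to authorities plausible once the scaling questions are answered.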

Do the work today for a more successful tomorrow

The rise of the social web has mobilized billions of people onto leading providers, bringing with them many of the expectations and behaviors of the physical world. The success of these services has underlined their value while revealing the very real consequences and side effects that attend their mass adoption. The popularity of social media and now social gaming reflects deeper currents in how people are migrating to digital experiences, further blurring the lines between the physical and the virtual. The vision of “the metaverse” is a way to understand where these currents seem to be leading us. If, in 2005, we had understood the trajectory of early social media, would we have done things differently?

This seeming inevitability is why we should take the metaverse seriously, and why companies and regulators should be working together to establish the guardrails needed for robust and safe metaverse experiences and opportunities: to moderate content and guide conduct, prioritize privacy and safety, and define metaverse economies clearly. Doing the work today can secure better user experiences, help businesses innovate, and drive value with significantly less risk.

This article originally appeared on Deloitte.