
How to address digital safety in the metaverse


As buzz around the metaverse increases, many are raising concerns about the potential risks in an environment where the boundaries between the physical and virtual worlds continue to blur.

The metaverse is a virtual reality world characterized by a three-dimensional, multi-sensory experience (as compared to the current two-dimensional internet – text and images on flat screens). According to some experts, the closest thing to the metaverse today can be seen in games like Fortnite and experiences on Roblox.


Constructing ecosystems of trust

A critical consideration is the need to construct trusted ecosystems within the technologies developed for the metaverse. Building these trusted ecosystems means embedding algorithms, structures, frameworks, regulations and policies into hardware and software development cycles, so that safety, privacy and security are addressed within the DNA of the technology.

How data is shared within virtual worlds will need to be considered more carefully to ensure privacy. A second dimension of privacy in the metaverse's development is eliminating biases that could lead to a non-inclusive or malicious adaptation of the real world. Engaging in the metaverse will rely on an integrated set of emerging technologies, which calls for a thorough, global, open-box security validation of the protection these environments provide against breaches of confidentiality, integrity and other aspects of security.

These ecosystems of trust will contribute to a stable, inclusive and purposeful virtual and immersive existence.

How might these risks unfold in the metaverse?

To understand how risks to safety could become more prevalent in the metaverse, it helps to share a key construct of this digital future: “Central to the concept of the metaverse is the idea that virtual, 3D environments that are accessible and interactive in real time will become the transformative medium for human engagement. If they are to become practical, these environments will be dependent on widespread adoption of extended reality.”

Even if it is not a fully immersive existence, many people are likely to spend more time blending offline and virtual interactions, moving towards a mixed reality (MR). Privacy and security breaches are pathways that can compromise the safety of interactions and of users. For example, this could take the form of someone masquerading as a medical doctor to gain access to surgical theatre technology for digitally performed surgeries.

A good sense of the potential risks can be found in existing applications that create “virtual worlds”, such as many gaming platforms. Significant safety challenges have already presented themselves in these environments. For example, recreations of the 2019 Christchurch mosque shooting aimed at very young children have been found multiple times on the Roblox platform, despite significant efforts on the company’s part to stem the tide of such content.

Violent extremist and terrorist content isn’t the only harm in such virtual worlds. Recently, an employee using Facebook’s Oculus Quest VR headset experienced a racist tirade that lasted several minutes while playing Rec Room, and was unable to identify or report the user. Groping has also emerged as a problem in the metaverse.

Where do we stand on digital risks today?

Taking a step back and looking at the current digital context, the risks of harm are already growing. According to the latest Global Threat Assessment report by the WeProtect Global Alliance, 1 in 3 (34%) respondents to its Economist Impact global survey were asked to do something sexually explicit online that they were uncomfortable with during childhood. In addition, the Internet Watch Foundation saw a 77% rise in child ‘self-generated’ sexual material from 2019 to 2020.

How to address digital safety in the metaverse


Even before COVID-19, more than half of girls and young women had experienced online abuse, according to a global poll last year by the Web Foundation, an organization co-founded by the inventor of the web, Tim Berners-Lee. Sharing images, videos or private information without consent, known as doxxing, was the most concerning issue for girls and young women globally, according to the poll. One in four Black Americans and one in ten Hispanic Americans have faced discrimination online as a result of their race or ethnicity, compared with only 3% of white Americans. The risks are already high, especially for vulnerable groups.

"Contributing to the metaverse in a responsible manner will require research, collaboration, and investment in safety as it relates to XR," Antigone Davis, Global Head of Safety at Meta explains. " For example, we are investing in controls that allow users to manage and report problematic content and conduct as well as safety tooling designed for immersive experiences. But we cannot do this alone. In order to address safety in a comprehensive way as the metaverse emerges, we need to partner with others in government, industry, academia and civil society."

This matters significantly, given that digital risks in the metaverse will feel more real because of how our brains interpret immersive experience; Mary Anne Franks, president of the Cyber Civil Rights Initiative, noted in her paper on virtual and augmented reality that research indicates abuse in VR is “far more traumatic than in other digital worlds.”

How might the risks be exacerbated in the metaverse?

There are numerous ways that current risks could be exacerbated in the metaverse. Firstly, depending on how these digital spaces are governed, there are risks of unwanted contact in a more intrusive, multimodal environment. Today, if someone we don’t know or don’t want to engage with reaches out by messaging, friending or otherwise trying to contact us on platforms such as Instagram or Facebook, their reach is mostly limited to text-based messages, photos and emojis.

However, imagine an unwanted individual being able to enter someone’s virtual space and “get up close” to that person in the metaverse. Without robust mechanisms to report, prevent and act on this in real time, such conduct could go unchecked. And with haptic technology, the risk that harms in the metaverse will feel more “real” is not far-fetched, given that many companies are working to incorporate touch as an additional sensation in an immersive reality.

For example, haptic gloves under development at many organizations aim to deliver tactile feedback that gives a more precise and realistic feel to any motion. While this can create a better sense of reality and increase connectedness in a virtual environment, it can also be abused by bad actors in ways that may not be fully understood just yet.
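To make the idea of real-time safeguards concrete, here is a minimal, hypothetical sketch of a server-side “personal boundary” check, loosely inspired by features some VR platforms have described publicly. Every name and parameter in it (Avatar, may_approach, the 1.2 m radius) is an illustrative assumption, not any vendor’s API:

```python
from dataclasses import dataclass, field
import math

# Hypothetical personal-boundary radius, loosely inspired by the roughly
# 1.2 m boundary some VR platforms have described publicly.
BOUNDARY_RADIUS_M = 1.2

@dataclass
class Avatar:
    user_id: str
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0
    blocked: set = field(default_factory=set)  # user_ids this avatar has blocked

def may_approach(mover: Avatar, target: Avatar, new_pos: tuple) -> bool:
    """Server-side check run on every position update: reject a movement
    that would place `mover` inside `target`'s personal boundary, and
    reject any approach by a user whom `target` has blocked."""
    if mover.user_id in target.blocked:
        return False
    nx, ny, nz = new_pos
    dist = math.sqrt((nx - target.x) ** 2 + (ny - target.y) ** 2 + (nz - target.z) ** 2)
    return dist >= BOUNDARY_RADIUS_M

# Example: a stranger tries to step within arm's reach of another user.
alice = Avatar("alice")
stranger = Avatar("stranger", x=5.0)
print(may_approach(stranger, alice, (0.5, 0.0, 0.0)))  # False: inside the boundary
print(may_approach(stranger, alice, (3.0, 0.0, 0.0)))  # True: respectful distance
```

Running such a check server-side, on every position update, is the important design choice: a purely client-side boundary could be bypassed by a modified client.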

The harmful content that proliferates all too quickly in our current digital lives may also translate in the metaverse into more graphic, 3D and auditory unwanted content that feels more intrusive and has a greater impact, due to the multisensory nature of the environment in which it is propagated.

The rise of virtual currencies presents another challenge in the proliferation of harmful content and activities online. For example, it is purported that kids are using their avatars to provide lap dances in virtual strip clubs in return for the virtual currency “Robux”. Cryptocurrencies are a popular option for those purchasing Child Sexual Abuse Material (CSAM), as their decentralized control and independence from financial institutions also ensure anonymity, according to a report by ActiveFence.

Given the role that digital currencies are expected to play in the metaverse, the financial incentives and payment structures that lead to the proliferation of harmful content are likely to increase in size and complexity with the move to Web 3.0.

There is an additional risk from the tracking and retention of biometric data, which provides platforms with “a new quality of information that is comprised of your real identity combined with stimuli – indicating what you uniquely may think and like and want”, according to technology and human rights expert Brittan Heller. In her paper Reimagining Reality: Human Rights and Immersive Technology, she coins the term “biometric psychography” and discusses the potential implications of this new kind of data collection for human rights, privacy and self-censorship.

So, what can be done about it?

Many companies, academic and civil society experts, and regulators are advocating for laws and new regulation so that things which aren’t allowed in the real world are similarly criminalized in online spaces. For example, Bumble is pushing to criminalize cyberflashing. Its CEO, Whitney Wolfe Herd, has asked lawmakers: "If indecent exposure is a crime on the streets, then why is it not on your phone or computer?"

Human rights lawyer Akhila Kolisetty said India, Canada, England, Pakistan and Germany were among a small number of countries that have outlawed image-based sexual abuse, where private pictures are shared without consent. Many countries lack laws for emerging forms of digital abuse like “deepfakes”, where a woman’s face can be superimposed onto a porn video and shared on messaging platforms.

Australia’s eSafety Commissioner provides support to those experiencing such abuse, but many other countries lag behind in such mechanisms and regulatory functions. The same applies to protecting kids online. “Our society says we’re going to protect kids in the physical world, but we’ve yet to see that in the same way on the digital side,” said Steven J. Grocki, who leads the child exploitation and obscenity section at the Justice Department. Updating laws to apply in a digital context will be a key component of governing the metaverse.

Hoda Alkhzaimi, Research Assistant Professor of Computer Engineering and Director of the Center of Cyber Security at NYU Abu Dhabi, added that the means of building attack mechanisms on a virtual platform are constantly evolving; this is never a fixed development cycle. We should be mindful of how we build the software and hardware elements of the technology so that they include native security considerations: protecting the integrity of the content developed, the interactions between users within the environment and, holistically, the stability of the presented virtual world. There is no single factor to consider here, as confidentiality, integrity, authenticity, accessibility, privacy and safety all need to be addressed. Attacks on virtual devices have been demonstrated in the past through open-source platforms such as Valve’s OpenVR.

How can we make sure that such attacks do not become a recurring feature of critical virtual infrastructure?
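Part of the answer lies in building verification into the content pipeline itself. The sketch below is a minimal illustration, not any platform’s actual mechanism: it signs a virtual-world asset with a keyed hash (HMAC) so that a client can verify the asset’s integrity and authenticity before loading it. The function names, the shared-key setup and the asset format are assumptions made for illustration; a production system would more likely use per-publisher asymmetric signatures and managed keys.

```python
import hashlib
import hmac
import os

# Hypothetical shared signing key; a real platform would use per-publisher
# asymmetric keys and a key-management service instead.
SIGNING_KEY = os.urandom(32)

def sign_asset(asset_bytes: bytes) -> str:
    """Produce an integrity/authenticity tag for a virtual-world asset."""
    return hmac.new(SIGNING_KEY, asset_bytes, hashlib.sha256).hexdigest()

def verify_asset(asset_bytes: bytes, tag: str) -> bool:
    """Check the tag before the asset is loaded into the environment.
    compare_digest avoids leaking information through timing."""
    expected = hmac.new(SIGNING_KEY, asset_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

# Example: a tampered asset fails verification and is rejected before loading.
asset = b"<3d-model: meeting_room_v1>"
tag = sign_asset(asset)
print(verify_asset(asset, tag))                # True: untampered, safe to load
print(verify_asset(asset + b"tampered", tag))  # False: reject and report
```

The point is not the specific primitive but the placement: checks that run before content enters the environment are one example of the native security considerations Alkhzaimi describes.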

Civil society organizations such as Access Now and EFF are calling for governments and other stakeholders to address human rights in the context of virtual and augmented reality.

The other major area that can be improved is the policies, enforcement and overall moderation mechanisms that platforms adopt.

“VR and AR platforms need specific terms of service for immersive environments, based in how this technology interacts with our brains. We cannot simply apply rules from existing social media to the Metaverse,” says technology and human rights expert Heller. “This is important,” Heller stresses, “because platform governance in digital worlds must regulate behavior, in addition to content.”

Right now, one of the most common forms of governance in virtual worlds is a reactive and punitive form of moderation. This does not prevent harms from occurring in the first place, and consequences can often be circumvented as bad actors become more sophisticated in toeing the line of policies. Finding ways to incentivize better behaviors and perhaps reward positive interactions may need to become a bigger part of a safer digital future, especially given increased safety risks in the metaverse.
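What might incentive-based governance look like in practice? The toy model below is purely speculative and invented for this article: it keeps a per-user reputation ledger in which sustained positive interactions unlock privileges such as hosting events, while a single confirmed violation outweighs many good acts. The event names, weights and threshold are all assumptions.

```python
from collections import defaultdict

# Hypothetical event weights: positive interactions earn trust slowly,
# while a confirmed violation costs far more than one good act earns.
EVENT_WEIGHTS = {
    "helped_new_user": +2,
    "positive_peer_rating": +1,
    "confirmed_harassment_report": -25,
}

class ReputationLedger:
    """Toy model of incentive-based governance: privileges such as
    hosting public events unlock only with sustained good conduct."""

    def __init__(self, privilege_threshold: int = 20):
        self.scores = defaultdict(int)
        self.privilege_threshold = privilege_threshold

    def record(self, user_id: str, event: str) -> None:
        self.scores[user_id] += EVENT_WEIGHTS[event]

    def may_host_events(self, user_id: str) -> bool:
        return self.scores[user_id] >= self.privilege_threshold

ledger = ReputationLedger()
for _ in range(10):
    ledger.record("alice", "helped_new_user")
print(ledger.may_host_events("alice"))   # True: sustained positive conduct
ledger.record("alice", "confirmed_harassment_report")
print(ledger.may_host_events("alice"))   # False: one violation outweighs rewards
```

The asymmetry between rewards and penalties is deliberate: it makes trust slow to earn and quick to lose, which blunts the strategy of staying just inside the rules.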
