In November last year, Nick Clegg, Meta’s head of global affairs, said the company was working on building safety and privacy protections for the metaverse. With Meta already under fire from lawmakers and regulators over privacy concerns and its inability to curb hate speech on its social media platform Facebook, the advent of the metaverse has raised further questions about security. In response, Andrew Bosworth, the man steering the company’s shift towards AR and VR, said that even though supervising how users “speak or behave on a large scale is practically impossible,” Facebook is the company best suited to the task.
Virtual sexual harassment
Facebook announced a USD 50 million fund, the XR Programs and Research Fund, to help develop the metaverse responsibly. Meta stated in a blog post that it would collaborate with Women in Immersive Tech, Africa No Filter, Electric South, and the Organization of American States as part of the initiative. The fund will also finance external research with the University of Hong Kong and the National University of Singapore. Facebook clarified that it would provide only funding, not data, so that the studies remain independent.
The virtual reality online game Horizon Worlds is Meta’s flagship attempt at something close to the company’s vision of the metaverse. The ease of virtual interaction in the game raised alarms after a beta tester alleged that her online avatar had been sexually harassed. On February 4, Meta introduced a ‘personal boundary’ tool for users accessing the Horizon Worlds and Horizon Venues apps through their VR headsets. The tool enforces a distance of four feet between virtual avatars to curb incidents of virtual groping and other abusive behaviour.
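The mechanics of such a boundary are simple to picture: detect when two avatars come within the minimum separation and push the moving avatar back out to the boundary radius. The sketch below is a minimal illustration of that idea, assuming a four-foot (roughly 1.2 m) threshold; the `Avatar`, `violates_boundary` and `clamp_position` names are invented for the example, and this is not Meta’s actual implementation:

```python
from dataclasses import dataclass
import math

# Assumed minimum separation: Meta's Personal Boundary keeps avatars
# roughly four feet (about 1.2 m) apart.
BOUNDARY_METRES = 1.2

@dataclass
class Avatar:
    x: float
    y: float
    z: float

def violates_boundary(a: Avatar, b: Avatar) -> bool:
    """Return True if two avatars are closer than the personal boundary."""
    return math.dist((a.x, a.y, a.z), (b.x, b.y, b.z)) < BOUNDARY_METRES

def clamp_position(mover: Avatar, other: Avatar) -> Avatar:
    """Push a moving avatar back to the boundary radius around `other`."""
    dx, dy, dz = mover.x - other.x, mover.y - other.y, mover.z - other.z
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    if dist >= BOUNDARY_METRES or dist == 0.0:
        return mover  # already outside the boundary (or exactly co-located)
    scale = BOUNDARY_METRES / dist
    return Avatar(other.x + dx * scale,
                  other.y + dy * scale,
                  other.z + dz * scale)
```

A world server would run a check like this on every movement update, so an avatar can never be rendered inside another user’s boundary.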
Identity fraud and theft pose a major risk in the metaverse, making the protection of users’ digital identities vital. The metaverse will contain far more personal information than our Google accounts. Beyond credit card and bank account details, Meta has reportedly been gathering biometric data, including users’ pupil movements and body poses, to build their avatars and serve hyper-targeted advertisements.
The metaverse is also an easy space for advertisers to overwhelm. Given the sensory intensity of the medium, constant video pop-ups, sponsored content, and repetitive ads could be even more intrusive than they are on the web, and critics expect the metaverse to be filled with a barrage of them. After Facebook’s decision to start testing in-headset ads, the company received massive backlash from developers; Bosworth, VP of Facebook Reality Labs, admitted that the criticism was “way too much.”
As the technology advances, a host of more serious problems is expected to surface. Research has shown that virtual attacks can translate into physical ones: by manipulating a VR platform, an attacker could reset the hardware’s physical boundary settings so that a user could, for example, be pushed down a flight of stairs.
As augmented reality arrives on the scene, users could be misdirected into dangerous situations such as robberies. Attacks could even deliberately induce nausea through motion sickness. Kavya Pearlman, founder and CEO of XR Safety Initiative, explained, “We know that people could experience motion sickness in VR. The creator could have intentionally embedded something that, when you click on it, makes you sick.”
To protect users’ data and privacy, companies will need to do more than change policies, Pearlman said. A trusted ecosystem must be created that can build the algorithms, frameworks and regulations needed to address privacy and security issues. Serge Gianchandani, co-founder of MetaMall, a metaverse startup that offers high-end real estate and experiences, said, “We feel that the metaverse can be made very secure with the right choice on tech and protocols. We follow both privacy by design and privacy by default methodology. Wherever it is not necessary, default is masking the user details and allowing the user to configure his privacy settings.”
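The “privacy by default” approach Gianchandani describes — masking user details unless the user explicitly opts in — can be sketched in a few lines. This is a minimal illustration of the principle, not MetaMall’s actual code; the class and field names are invented:

```python
from dataclasses import dataclass, field

@dataclass
class PrivacySettings:
    # Privacy by default: every detail starts masked; the user opts in.
    show_real_name: bool = False
    show_email: bool = False
    show_location: bool = False

@dataclass
class UserProfile:
    real_name: str
    email: str
    location: str
    settings: PrivacySettings = field(default_factory=PrivacySettings)

    def public_view(self) -> dict:
        """Expose only the details the user has explicitly unmasked."""
        s = self.settings
        return {
            "name": self.real_name if s.show_real_name else "***",
            "email": self.email if s.show_email else "***",
            "location": self.location if s.show_location else "***",
        }
```

The key design choice is that the defaults live in the type itself: a freshly created profile leaks nothing, and exposing any field requires a deliberate settings change by the user.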
But Antigone Davis, Meta’s Global Head of Safety, has stated that building a safe metaverse cannot be done by companies alone; they must partner with government, industry, academia and civil society. Policy experts have set down goals that align with the idea of a secure metaverse: determining who has the authority to make policy, fixing current infrastructure issues, better managing and protecting digital identities, and framing trust policies for virtual reality. Many questions remain about what these rules will be, and it will be interesting to see how cybersecurity shapes up to answer them.