The internet is changing, and we’re bracing for yet another phase of digital expansion. Major social platforms are expanding their footprint and offerings, the “metaverse” is no longer a far-off concept from sci-fi novels but the inspiration behind Facebook’s rebrand, and a broader push to democratize augmented reality (AR), virtual reality (VR), and other digital experiences and services promises new opportunities for all.
But the world, and society itself, remain in many ways largely the same. Threats from bad actors that have long existed in the real world now have greater reach and impact as people become more connected online and more reliant on technology. We have not yet developed the robust regulations and monitoring, or the standardized norms and protections, for online activity that exist in real life, and we’re all starting to feel the effects.
As a result, the global focus is sharpening on the rights of internet users and the responsibilities of platforms. The rules established twenty-five years ago are rapidly being replaced, and platform operators and users alike must prepare. This revolution, driven by widespread abuse of critically important services at a time of staggering technological innovation and user growth, will affect liability, freedom of speech, user anonymity, and the future development of the online world.
What we do next matters. The next few years will only heighten the importance of the emerging trust and safety industry in setting norms and best practices. Matching the pace of evolving abuse while keeping up with a new generation of legal requirements will also demand significant technological innovation. “Safety by design” will become paramount for new and existing platforms to sustain success and growth.
Here are four evolving areas to watch as online platforms grapple with new and growing abuse vectors and a new phase of accountability.
2022: Legal Revolution - Towards A Proactive International Baseline
National legislators are taking steps to set new international standards for internet governance. Oliver Dowden, the U.K. Secretary of State responsible for far-reaching new British legislation that requires proactive detection of harmful content, explained its goal: “I am intent on setting online standards the whole world can follow, in a new age of accountability.”
In 2022, a wave of legislation already set in motion in Australia, Europe, India, and North America will begin to land. First steps in the U.S. were taken with Sen. Warner’s 2021 SAFE TECH Act proposal to strip internet companies of some of their liability protections, but elsewhere action is moving much faster. The U.K. will require a proactive approach to user safety: platforms operating in the U.K. market will need to find and remove new child sexual abuse and terrorist content from their servers. Canada is following suit, and the E.U. is considering similar requirements. Together, these markets could place nearly 15% of the world’s internet users off-limits to platforms unwilling to comply. While no automated solution for finding original harmful content is currently available, legislators are attempting to drive innovation in safety rather than regulate within existing parameters.
User-generated content knows few borders online and can be accessed in multiple jurisdictions simultaneously. U.S. companies will need to continuously evolve their policies at home to comply with foreign laws if they are to do business abroad. Proactive standards look set to become an international baseline.
Investigative Research And The Shift To AI
The 2022 revolution will not just be legal; it will also be technological.
The explosion of live audio and video streaming platforms presents a new risk area, made more acute by the repositioning of platform liability for users’ activity. To protect users, this content must be moderated in real time. With hundreds of thousands of hours of content produced every hour, platforms entering 2022 will need to build artificial intelligence (AI) systems quickly and integrate them with human moderation teams. We will need systems as sophisticated as those used by threat actors in order to identify maliciously created content. The challenge is immense.
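As a rough illustration of how AI triage and human review might fit together, consider the following minimal sketch. The classifier, thresholds, and function names are illustrative assumptions, not any platform’s actual pipeline: the point is that high-confidence detections can trigger immediate action while ambiguous frames are escalated to human moderators while the stream is still live.

```python
"""Minimal sketch of a real-time livestream moderation loop.

Everything here is illustrative: score_frame stands in for a real ML
classifier, and the thresholds are invented for the example.
"""
import queue

REVIEW_THRESHOLD = 0.5   # escalate to human moderators above this score
BLOCK_THRESHOLD = 0.95   # interrupt the stream automatically above this

human_review_queue: queue.Queue = queue.Queue()

def score_frame(frame: bytes) -> float:
    """Stand-in for a real model returning P(frame is harmful)."""
    return 0.0  # a production system would run an actual classifier here

def interrupt_stream(stream_id: str) -> None:
    """Stand-in for the platform action that stops a live broadcast."""
    print(f"stream {stream_id} interrupted pending review")

def moderate_stream(stream_id: str, frames) -> None:
    # Score frames as they arrive rather than after the broadcast ends,
    # so harmful content can be caught while the stream is still live.
    for frame in frames:
        risk = score_frame(frame)
        if risk >= BLOCK_THRESHOLD:
            interrupt_stream(stream_id)  # high confidence: act immediately
            return
        if risk >= REVIEW_THRESHOLD:
            # Ambiguous content: route to humans, keeping AI triage and
            # human judgment inside a single moderation workflow.
            human_review_queue.put((stream_id, risk))
```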
Innovations in AR, VR, and immersive games allow users to build new worlds, but we are already seeing abuse by underground groups sharing illicit material. Constantly monitoring these spaces would be time-consuming and costly. Instead, teams have already begun diverting resources toward identifying potentially problematic behavior based on users’ online history. The scale of the issue should propel the adoption of AI capable of searching vast quantities of contextual and nuanced data to identify harmful content.
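A history-based approach might look something like the sketch below. The signals and weights are invented for illustration; production systems would combine far richer contextual features, usually via learned models rather than hand-set weights. The idea is to rank users for review by prior behavior instead of watching every space continuously.

```python
"""Illustrative sketch of history-based risk scoring.

The signals and weights below are invented for this example; real
systems combine far richer contextual features with learned models.
"""
from dataclasses import dataclass

@dataclass
class UserHistory:
    prior_violations: int = 0
    reports_received: int = 0
    account_age_days: int = 365
    flagged_spaces_joined: int = 0  # rooms/worlds already under review

def risk_score(h: UserHistory) -> float:
    """Combine simple behavioral signals into a 0-1 review priority."""
    score = 0.0
    score += min(h.prior_violations * 0.2, 0.6)
    score += min(h.reports_received * 0.05, 0.2)
    score += min(h.flagged_spaces_joined * 0.1, 0.3)
    if h.account_age_days < 30:  # throwaway accounts carry extra risk
        score += 0.1
    return min(score, 1.0)

# Review the highest-risk users first instead of monitoring every space.
users = {"u1": UserHistory(prior_violations=2, account_age_days=10),
         "u2": UserHistory()}
review_order = sorted(users, key=lambda u: risk_score(users[u]), reverse=True)
```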
Agility In Handling New Threats
Yet the scale of the challenge also means that hiring ever more teams will not be enough, nor will AI solve the issue on its own. Internet platforms will remain locked in a constant struggle against new abuses that capture the news cycle, targeting our children, health, and politics.
The challenge of ensuring safety online is as complex as maintaining it in the physical world, and no one has the perfect solution: neither platforms nor regulators. As more users engage online (4.66 billion in 2021), content in hundreds of languages, from diverse countries and political contexts, is constantly being produced. The current whack-a-mole approach, based on user flagging, classifiers, and content moderators, is too slow, has limited effectiveness, and leaves companies vulnerable (a simplified sketch of this reactive pipeline follows below). To protect brand integrity and user experience, platforms in 2022 will strive to become more agile, identifying threats before they cause damage or make the news.
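For concreteness, here is that reactive pipeline in deliberately simplified form. All names and the threshold are illustrative; the structural weakness is visible in the flow itself: nothing enters review until a user flags it or a classifier fires, and a finite human team then works the backlog.

```python
"""Deliberately simplified sketch of the reactive, flag-driven pipeline.

Names and the threshold are illustrative, not any platform's system.
"""
import queue

review_queue: queue.Queue = queue.Queue()

def on_user_flag(content_id: str) -> None:
    # Step 1: harm must first be noticed and reported by a user.
    review_queue.put(content_id)

def on_classifier_hit(content_id: str, score: float) -> None:
    # Step 2: classifiers catch known patterns but miss novel abuse
    # and much of the content produced in hundreds of languages.
    if score > 0.8:  # illustrative threshold
        review_queue.put(content_id)

def moderator_shift() -> None:
    # Step 3: human moderators drain the queue; at internet scale the
    # backlog, and the harm, can grow faster than it is drained.
    while not review_queue.empty():
        content_id = review_queue.get()
        print(f"reviewing {content_id}")
```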
The Future Is Today
The metaverse is an expected iteration of the internet where users will be able to meet, play, shop, and engage with content from multiple platforms simultaneously. But people are already fusing platforms through their actions: millions simultaneously broadcast and watch livestreams across a number of platforms. While these platforms are largely independent and do not share an owner, they do share content, so when a problem arises in one platform’s feed, it affects them all. As this activity grows, we will need enhanced visibility between platforms to protect the internet as one interconnected safe network, undivided by brand, geography, or language.
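One existing model for that cross-platform visibility is industry hash sharing, in which platforms exchange fingerprints of confirmed harmful content rather than the content itself. The sketch below is a simplified illustration of the idea; it uses exact SHA-256 hashing only for brevity, whereas real programs rely on perceptual hashes that survive re-encoding and minor edits.

```python
"""Simplified sketch of cross-platform hash sharing.

When one platform confirms a piece of harmful content, it contributes a
fingerprint so others can match re-uploads without exchanging the
content itself. Exact SHA-256 is used here only for brevity.
"""
import hashlib

shared_hashes: set[str] = set()  # fingerprints contributed by all platforms

def fingerprint(content: bytes) -> str:
    # Exact hashing for illustration; production systems use perceptual
    # hashing so near-duplicates of known content still match.
    return hashlib.sha256(content).hexdigest()

def report_confirmed_harmful(content: bytes) -> None:
    """Called by the platform that first confirms the content is harmful."""
    shared_hashes.add(fingerprint(content))

def is_known_harmful(content: bytes) -> bool:
    """Any participating platform can check new uploads against the list."""
    return fingerprint(content) in shared_hashes
```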
2022 will be the year when technological change meets legal revolution, and the results will define the internet for years to come. We see a multitude of opportunities, but first we must manage the risks. The future is unknown, and new legislative proposals often seem more aspirational than practical. What is clear is that technology platforms will now require a proactive, intelligence-led approach to identify threats and ensure integrity.