Bots, Bias & Bigotry: safe scaling of AI

In the first of our Risk & Innovation series, James Allen examines the barriers to overcome when scaling AI.

Now that we’re well into the fourth Industrial Revolution (also known as Industry 4.0), we expect to see some fundamental shifts in how businesses operate and serve their customers.

Here’s what we see as the three big pillars of Industry 4.0:

  1. Digitisation of product and service offerings
  2. Digitisation and integration of supply / value chains
  3. Digital business models and customer access


The shift toward Industry 4.0 has become more important to many brands, and has accelerated during the Covid crisis as a result of significant changes in supply chain and consumer behaviour.

In fact, a recent McKinsey survey highlighted that 65% of respondents see Industry 4.0 as being more valuable since the pandemic, with the same survey revealing that the top 3 strategic objectives for Industry 4.0 are:

  1. Agility to scale operations up or down in response to market-demand changes (18.4%)
  2. Flexibility to customize products to consumer needs (17.2%)
  3. Increase operational productivity and performance to minimise costs (17.2%)

Yet when the same respondents were asked if they had successfully scaled Industry 4.0 initiatives, only 26% had managed to do so successfully.


According to Rothschild & Co, the market for Industry 4.0 is expected to top 300 billion dollars, and with AI and connectivity projected to reduce manufacturing costs by 20% (or 400 billion dollars), it’s essential that companies find a way to scale safely, at pace.

Artificial Intelligence evolution

AI has been in development for decades, starting with the first computers in the 1940s, with which scientists and mathematicians began to explore the potential for building an electronic brain. In 1950, Alan Turing proposed the “Turing Test”: if a machine could carry on a conversation that was indistinguishable from a conversation with a human being, then it was reasonable to say that the machine was “thinking”. This simplified version of the problem allowed Turing to argue convincingly that a “thinking machine” was at least plausible, and his paper answered all the most common objections to the proposition.

Fast forward many years, and many millions of pounds of research investment, and in 1997 perhaps the first publicly recognised AI computer arrived. It came from IBM in the form of Deep Blue – a chess-playing computer that beat the reigning world chess champion, Garry Kasparov.

But machines like Deep Blue were incredibly complex, extremely expensive, and inaccessible to all but a few large technology companies. In the past few years, however, the interest and opportunity presented by AI within Industry 4.0 has exploded.

This is due to a number of factors:

  • Wider availability of computing and access to cloud environments with large processing power
  • Development of deep learning algorithms
  • Big Data platforms
  • Research progress towards Artificial General Intelligence

AI – learnings and barriers to scale

Whilst many companies see the potential presented by AI, they are also rightly concerned by the risks it presents, as well as the barriers they need to overcome when scaling.

The most common challenges we tend to come across are:

  • Access to specialist skills
  • Cost of processing in cloud environments
  • Inability to demonstrate fairness, lack of bias and integrity of AI algorithms
  • Risk of unintended consequences
  • Regulatory understanding
  • Ability to seamlessly switch between AI-powered processes and regular business processes in the event the AI fails (illustrated in the sketch below)
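
To make that last point concrete, here is a minimal, hypothetical sketch (in Python) of the kind of fallback control it describes: work is routed to the AI model but drops back to the regular business process whenever the model is unavailable or not confident enough. The ai_model and manual_queue interfaces and the threshold value are illustrative assumptions, not a prescribed implementation.

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.85  # illustrative value; tune per use case


@dataclass
class Decision:
    outcome: str
    handled_by: str  # "ai" or "manual"


def handle_case(case, ai_model, manual_queue) -> Decision:
    """Route a case to the AI model, falling back to the regular process if it fails."""
    try:
        prediction, confidence = ai_model.predict(case)  # hypothetical model interface
    except Exception:
        # The AI service failed outright: fall back to the regular business process
        manual_queue.submit(case)
        return Decision(outcome="pending", handled_by="manual")

    if confidence < CONFIDENCE_THRESHOLD:
        # The model is unsure: a human makes the final call
        manual_queue.submit(case)
        return Decision(outcome="pending", handled_by="manual")

    return Decision(outcome=prediction, handled_by="ai")
```

The point of such a control is that the business keeps running, albeit more slowly, even when the AI component misbehaves or is switched off.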

This presents organisations with a real conundrum. AI use raises questions over ethics, safeguards, interpretability and more. It’s only right that organisations probe these issues and take the learnings from those that have gone before them.

Here are a few public examples of where AI has gone wrong:

Footballer or felon

A facial-recognition system identified almost thirty professional American footballers as criminals, including New England Patriots three-time Super Bowl champion Duron Harmon. The software incorrectly matched the athletes to a database of mugshots in a test organized by the Massachusetts chapter of the American Civil Liberties Union (ACLU). Nearly one in six athletes were falsely identified.

CEO gets spoofed

In 2019 the CEO of a UK-based energy firm got a call from his boss at their German parent company, instructing him to transfer €220,000 to a Hungarian supplier. The ‘boss’ said the request was urgent and directed the UK CEO to transfer the money promptly. It turned out the phone call was made by criminals who used AI-based software to mimic the boss’ voice, including the “slight German accent and the melody of his voice,” as reported in the Wall Street Journal. Such AI-powered cyberattacks are a new challenge for companies, as traditional cybersecurity tools designed for keeping hackers off corporate networks can’t identify spoofed voices.

Get me out of here!

US airlines were subject to widespread criticism after their AI-powered pricing systems charged customers up to 10 times the price of a regular ticket as they desperately tried to escape Florida ahead of the arrival of Hurricane Irma. The systems did not have a kill switch. “There are no ethics valves built into the system that prevent an airline from overcharging during a hurricane,” said Christopher Elliott, a consumer advocate and journalist.


Navigating the risks and enabling safe scaling of AI

Webhelp and Gobeyond Partners have developed a comprehensive framework to support the safe scaling of AI, including assessment of risk, key controls, human-centred ethics principles, algorithm management and data handling. This framework includes open source methods that can be used to demonstrate the integrity and explainability of AI algorithms.
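
As an illustration of the kind of open-source explainability method referred to above, here is a minimal sketch using the SHAP library on a public scikit-learn dataset. It is not Webhelp’s proprietary framework; the model, dataset and ranking metric are stand-ins chosen purely to show how per-feature attributions can be surfaced for review.

```python
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# A public dataset standing in for real operational data
X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# TreeExplainer attributes each prediction to per-feature contributions (SHAP values)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Rank features by their mean absolute contribution across the test set
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(X.columns, importance), key=lambda pair: -pair[1]):
    print(f"{name:>10s}: {score:.3f}")
```

Ranking features by their mean absolute SHAP value gives reviewers a first view of whether predictions depend on acceptable factors, which can then feed into more formal fairness and integrity checks.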

Safe scaling of AI

Questions your organisation should consider

Although AI presents a huge opportunity to transform both business operations and customer experience, this is not without risk. Here are some of the long-term strategic questions that we recommend you consider for your organisation:

  • What role does AI have in the working environment and is there such a thing as a post-labour economy? If so, how do we make it fair?
  • How do we eliminate bias in AI?
  • How do we keep AI safe from threats?
  • Is it right to use AI in cyber defence? If so, where is the line?
  • As AI capabilities become more integrated, how do we stay in control of such a complex system?
  • How do we define the humane treatment of AI?


Feel free to get in touch, to see how we can help you safely fulfil your Industry 4.0 ambitions at pace and at scale.


Stricter content moderation policies put pressure on social media platforms

Concerns about content moderation have grown over the last few years, driven by scandals, user pressure, and politics. Subsequently, there has been increasing scrutiny of Content Moderation (CoMo) policies and their enforcement on social media platforms. Today, we are dealing with how world-renowned social media platforms enforce their Content Moderation policies, as opposed to how governments or institutions would like them to (see article). Bear in mind that in most countries, these platforms are not immediately liable for their User-Generated Content (UGC); Section 230 of the Communications Decency Act of 1996 in the United States is a great example of this ‘liability shield’.

When it comes to terrorism or other threats to users, several countries like members of the EU, Brazil and Australia impose a time limit for platforms to delete content after it has been flagged as inappropriate.

With platforms not immediately liable for their User-Generated Content, why are huge corporations enforcing stricter policies, raising censorship concerns? Why don’t they just leave their communities to live freely? They need to act for two main reasons:

  1. To protect their communities from violence, hate speech, racism, harassment and so many more threats to ensure a smooth user experience
  2. To protect individuals on their platforms from being influenced by other users who might spread misleading information or leverage the network to influence their decisions and behaviors

But as you will discover in this article, some platforms endure growing scrutiny from their audiences due to their huge reach, whilst others benefit from different levels of acceptance and use it to convey a particular brand image.

Scrutiny makes social media leaders tighten their CoMo policies

This is especially the case for Facebook and Twitter. Their billions of daily users have the ability to influence mass opinion – far more easily than any other type of media. Following several scandals, trust between these platforms and their users has been damaged (link to WH article). In fact, when questioned in a US Senate hearing last October, the leaders of Twitter, Google, and Facebook were described as “powerful arbiters of truth”, a rather pointed label.

Content Moderation has wide political implications. Last year’s American elections were a major trial for large tech platforms, showing whether they could monitor the peaks of political comments, ads, and other UGC safely and considerately. Numerous examples of Content Moderation can be cited: the ban on political ads on Facebook, and the flagging of Donald Trump’s tweets as misleading or partly false before the former US president was permanently banned from both platforms.

TikTok has also been questioned several times regarding its moderation of political content and, more importantly, regarding near-live suicides, paedophilia, and the increasing presence of guns in videos posted by its users. Beyond the political aspects, the reasons why these types of content should be deleted and kept away from communities are straightforward. When it comes to firearms, however, differing local laws make it even less clear how to moderate the use and depiction of these weapons online.

Logically, the pattern rubs off on smaller players

Most Big Tech giants have now funded Safety Advisory Councils, generally “made up of external experts on child safety, mental health and extremism”, signalling to their communities that they are trying their best to protect them while avoiding censorship and audience attrition.

Due to the attention their bigger peers face, the proposed tighter Content Moderation policies are now reaching smaller players too. Platforms such as Parler advocate free speech and use it to promote their brand image, while welcoming the most extreme far-right supporters, whose comments are heavily moderated on Twitter and other mainstream social networks.

After Parler was dropped by the best-known app stores and hosting services (Amazon, Apple and Google being the main providers) due to its lack of Content Moderation, it was forced to go offline, and its now-former CEO, John Matze, was fired over his push for stronger moderation. There are several other social media platforms claiming to promote free speech (Telegram, Gab), but some have bravely chosen to take on the Content Moderation challenge to avoid Parler’s fate.

Nonetheless, such patterns are already being observed on new and innovative social media, including Substack (a newsletter publishing platform) and the much-discussed Clubhouse (live audio conversations). The former was not expecting such controversy until one of the newsletters it hosts linked IQ to race; the latter poses new questions on how to efficiently moderate live audio feeds.

Mastering Content Moderation policies is the key to success

The scale of emerging social media platforms, as well as their innovative formats and technology, imposes new challenges on Content Moderation, as increased scrutiny from users makes clear. Without the benefit of years of Content Moderation experience, newcomers and smaller players must ensure that their policies are adapted to their targeted communities as well as to their content. Policies that are too permissive or too restrictive become dangerous for their longevity and brand image.

Mastering Content Moderation enforcement is a lever for safeguarding both the welfare of your community and your reputation.



Author

Thomas Japy

Digital Content Services Business Analyst

Contact the author

Innovation

Innovation is necessary, safety is crucial

James Allen, Webhelp’s Chief Risk & Technology Officer, introduces our new series taking a deep dive into risk and innovation.

Risk and Innovation don’t tend to appear in the same sentence very often. Innovation is, of course, essential for businesses aiming to survive and thrive in the 4th Industrial Revolution. But with an increasing weight of regulation, and with data becoming more valuable than oil, how can companies simultaneously innovate while staying ahead of emerging threats?

Here at Webhelp and Gobeyond Partners, our mission is to be leaders and experts in delivering low risk solutions that help our clients to innovate and stay safe.

In this new series, we’ll be providing insight and perspective into some of the key questions that we work to solve in partnership with our clients:

  • AI has huge potential to transform customer experience in my business. But how do I safely move from small scale experimentation to deployment at scale?
  • In a world of increasing regulatory burden, how can I use digital technology to automate compliance activities?
  • My firm is part of a critical national infrastructure, but I have a large amount of legacy applications that provide critical economic functions. How can I accelerate transformation of my business without putting these at risk?
  • My business has made massive investments in digitisation, but my control environment is still analogue. How do I pivot to the new without losing control?
  • How do I attract new skills in digital, analytics and cyber into my Risk function?
  • Now that home working is part of the new norm, how do I deliver leading edge cyber security with world-class colleague experience?


End-to-end collaboration

Encouraging and facilitating collaboration between different teams helps them reach outside their own comfort zones, think differently, and better consider the customer and colleague experience from beginning to end.


Understanding new technology

New developments in technology are the cornerstone to enabling businesses to innovate, operate effectively, and react quickly. But with the adoption of new technology inevitably come new risks, and new challenges. Foremost importance should be placed on understanding these implications, and ensuring the safe deployment of the new technology.


At Webhelp, we work every day to drive innovation, with control by design. We adapt and innovate – in our technology, our mindset and our operational practices – and continue to push the boundaries of what we can do for our clients. But at the heart of everything we do is a focus on delivering low risk solutions, helping our clients to innovate while remaining safe, now and in the future.


In Risk & Innovation, our new series of articles and white papers, we’ll be exploring how emerging technologies can be used to help companies stay safe while maintaining necessary growth and agility. The first in the series, Bots, Bias & Bigotry: Safe scaling of AI, will arrive on Monday March 22, and will address the fairness, privacy and operational safeguards you need to consider when incorporating AI into your operational model.

Interested in more content from our thought-leader James Allen? Check out his earlier pieces on Taking a human centred approach to cyber security and How AI and data analytics can support vulnerable customers.


The significance of digital content on second-hand car platforms

Revealing why consumer content is instrumental in influencing purchase decision-making as well as the uptake, visibility and popularity of brands online.

The trust and safety of users online is crucial in today’s digital world. Technology is disrupting businesses, so ensuring customers are provided with a safe and painless journey requires a balance of technology and human intervention.

This paper looks at some of the pain points in the automotive industry, highlighting typical industry reactions and offering insights into how Webhelp can provide a comprehensive and game-changing solution.

Download insights

social media

Webhelp Ranked Highly Across all Aspects of Social Media by Leading Analyst NelsonHall


Firm announces host of analyst accolades 

Paris, France, 11 February 2021

Webhelp, the leading global customer experience (CX) and business solutions provider, has been recognized by top-ranking industry analyst NelsonHall for its social media capabilities.

The firm was recognized across three core areas: customer care and sales capability; online reputation management capability; and content moderation, trust and safety capability.  

NelsonHall’s Evaluation & Assessment Tool (NEAT), part of a “speed-to-source” initiative, enables strategic sourcing managers to assess vendors’ capabilities to identify the industry’s best performers during the sourcing selection process. The methodology specifically evaluates the quality of players’ abilities in several categories, such as technology and tools, service innovation, geographic footprint, and scalability, amongst others. 

“We are thrilled that NelsonHall has recognized our social media capabilities. Now more than ever, and in an increasingly digital world, businesses need to deliver high-quality and trustworthy customer experience interactions. Webhelp has a diverse range of digitally enabled services, which allow us to support global brands with their social media interactions and reputation and work with social media platforms and marketplaces themselves to support a safer online environment for users. We are very proud of our achievements in this space,” said Webhelp Co-Founder Olivier Duha.  

Ivan Kotzev, NelsonHall CX Services analyst, said:

“Webhelp’s strong performance in social media support and sales is built on a fundament of proprietary technology, channel management experience, and CX consulting capability. Notable is the company’s expertise in lead generation and sales activities on social channels, an increasing priority for brands looking to meet their customers on these channels.” 

Webhelp’s extensive capabilities and growing global footprint continue to be validated by the analyst community, with esteemed U.S.-based analyst, Gartner, naming Webhelp as a Niche Player. This builds on the analyst’s reporting of Webhelp as a Rising Star in 2019/20, as the business further establishes its reputation as an industry disrupter and credible alternative to the more traditional players in the North American market. 

These recent accolades amplify Webhelp’s current positioning by global analyst Everest Group as a Leader in Customer Experience Management (CXM) in its PEAK Matrix® Assessment 2020, as well as a Leader in its CXM in Europe, Middle East, and Africa (EMEA) Services PEAK Matrix, recognizing Webhelp as being particularly strong in terms of both vision and capability. The Everest Group positioning extends to a new report where Webhelp is recognized as a Major Contender in work-from-home solutions amongst other global players.  

Everest Group wrote in its WAHA (Work-at-Home Agent) CXM Services PEAK Matrix Assessment:

“Webhelp is driving digital transformation through cloud adoption, CX consulting, and automation by partnering with technology vendors such as Amazon Connect, MS Azure, and UiPath, utilizing their platforms as per clients requirements.” 



Content Management

Americans distrust tech companies to moderate content online

Where do we draw the line between freedom of speech and allowing misinformation to be broadcasted online?

Content moderation is crucial for social platforms to ensure a trustworthy relationship with their users. Without moderators, billions of social media users would be shown potentially harmful content every day.

Government control – trusting the system

There are many nuances to user-generated content, and there are concerns that governments will take control over the content posted on media platforms, undermining the platforms’ purpose of sharing content freely (within the guidelines).

For example, the U.S. Government moved to ban the social media platform TikTok – which has over 80 million daily users in the U.S. The platform has since won a preliminary injunction that allows the app to continue to be used and downloaded from the U.S. app stores.

This precedent shows that if the government had more control, it would be quick to impose such regulations on these platforms. But that is unlikely to happen, as political figures use social media platforms to connect with their constituents, communicate their views, and advocate for political campaigns.

Free Speech vs Content Moderation?

According to a Gallup and Knight Foundation survey, “55% of Americans say that social media companies are not tough enough, with only 25% saying they get it right”.
For instance, Trump’s behaviour and actions on Facebook, Twitter, and other social platforms allowed harmful propaganda to circulate, influencing political views and undermining election campaigns, and incited violence through the sharing of false and deceptive information with the public – as witnessed during his 2020 election campaign and the more recent events at the US Capitol involving his supporters.

The violent storming of the US Capitol led big tech companies like Twitter and Facebook to suspend Donald Trump due to his alleged role in inciting violence and sharing misinformation, with many other players permanently banning him from their platforms. Parler, which has a significant user base of Donald Trump supporters, was taken off major service providers’ app stores after they accused the platform of failing to police violent content.

After Trump’s 12-hour ban on Twitter was lifted, he continued to violate its policies. Twitter concluded that his tweets during the incident were against its Glorification of Violence policy and left it with no choice but to permanently suspend his account.

Having watched an individual with this level of influence be given multiple chances, users continue to express the view that big tech companies are being taken for a ride and are not doing enough to stop the virality of harmful content. Consequently, people do not trust the platforms’ moderation policies and algorithms to surface authentic, unbiased content efficiently.

Trusting the system

Controversially, US online intermediaries are under no legal obligation to monitor content: “social media companies are under no legal obligation to monitor harmful speech, and governments can’t really make them or compel them to offer things like counter speech without running into First Amendment roadblocks” (Forbes, 2020).

Section 230, part of the Communications Decency Act, protects freedom of expression. In comparison with other countries, the U.S., through Section 230, provides online platforms with immunity from legal reprimand, with few exceptions: as outlined in Section 230(c)(1), “they can avoid liability, and object to regulation as they claim to be editors of speech”. There are many caveats and exceptions – particularly when it comes to interpreting images and videos.

Therefore, when it comes to accountability, this legislation offers limited scope for holding online intermediaries liable for user-generated content on their platforms. It does not establish what counts as tortious speech or harmful or misleading information. Rather, big tech companies are left to outline this in their own policies and to do the right thing by their users.

Moderating content

Early last year, Twitter introduced new labels for Tweets containing “synthetic and manipulated media”; likewise, Facebook created labels that flagged harmful or unverified information.
Although these companies continue to introduce new tools to highlight harmful content, it is important for moderators to have the correct tools and expertise to moderate sensitive content, rather than relying solely on technology to do this. Without the right guidance and principles, misinformation and propaganda will fall through the cracks.
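
A common way to put this principle into practice is a human-in-the-loop triage pattern: an automated classifier scores content, clear-cut violations are actioned automatically, and anything ambiguous is escalated to a trained moderator. The sketch below is illustrative only; the classifier and review_queue interfaces and the threshold values are assumptions, not any platform’s actual policy.

```python
AUTO_REMOVE_THRESHOLD = 0.95   # illustrative thresholds, not real platform policy
HUMAN_REVIEW_THRESHOLD = 0.60


def triage(post, classifier, review_queue) -> str:
    """Return 'removed', 'escalated', or 'published' for a piece of user content."""
    harm_score = classifier.score(post)  # hypothetical model returning a 0..1 risk score

    if harm_score >= AUTO_REMOVE_THRESHOLD:
        return "removed"               # clear-cut violation, removed automatically
    if harm_score >= HUMAN_REVIEW_THRESHOLD:
        review_queue.submit(post)      # ambiguous: a human moderator decides
        return "escalated"
    return "published"                 # low risk, published with no action
```

The thresholds, and the guidance moderators follow when a post is escalated, carry the policy; the technology only does the sorting.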

Learn more about our Digital Services, or contact us to find out more.


Read the 6th edition of our OneShot magazine on Social Engagement

Our 6th edition of the OneShot is here!

Download your OneShot Magazine

Tick tock tick tock…

Time is ticking away – now is the time to start focusing on social engagement.

Social commitment means becoming aware but, above all, taking action and standing up against inequalities.

Taking action can be as simple as following these recipes for being more human, more green, and more equal. Not only are they good for you, but for others too.

Compelling your company to pledge and commit to the fight for social and environmental change, such as tackling the global warming crisis or advancing social justice and equality, is a vital step to take now for a brighter future.

And it all starts with knowledge. So, here’s to your learning with the latest edition of the OneShot.

Dare to be ‘woke’ and be a driving force for change?


legal framework

Legal frameworks of content moderation around the world (Part 3)


Despite its initial goal of curbing fake news and online hate, the NetzDG has unfortunately created a blueprint for internet censorship around the globe.

Turkey
For many years now, freedom of speech and press freedom have been severely restricted in Turkey, which is ranked 154th out of 180 countries in the RSF 2020 World Press Freedom Index (Source: www.rsf.org). In 2018, Turkish courts blocked access to around 3,000 articles highlighting political corruption and human rights violations, adding to a track record of frequently blocking social media platforms such as Facebook, Twitter, and YouTube.

On 29th July, the Turkish parliament enacted a new law that was hastily ushered in without considering input from the opposition or other stakeholders. Once approved by President Erdogan, the law required social media platforms to appoint a local representative in Turkey. However, activists are seriously concerned that the law is designed to further government censorship and surveillance.

Australia
Following the gruesome terror attack on two mosques in Christchurch (New Zealand), which was carried out by an Australian in 2019, a bill amending the Australian criminal code was passed. The amendments hold service providers criminally liable for failure to instantly remove violent content that is shared on their platforms.

Despite similarities with the NetzDG, the main differences are the take-down timeframe and the subject matter of the illegal content. The amendment faced criticism from media companies, which stated that it could lead to censorship of legitimate content due to the incentive it creates to over-screen users. Others called for the government to address the problem at its root – violence and anti-Muslim hatred – rather than holding social media platforms accountable for the manifestation of such problems.

Nigeria
On 5th November 2019, an Anti-Social Media Bill was proposed by the Senate of the Federal Republic of Nigeria to punish the peddling of malicious information. The campaign has been backed by the Northern States Governors’ Forum (NSGF), held with traditional rulers, government officials, and leaders of the National Assembly.

Following the terror at Lekki Toll Gate on the night of 20th October 2020, which turned fatal when police used live ammunition against peaceful #EndSARS protests, the infringement of freedom of speech amidst media censorship continues to oppress fundamental human rights and has been condemned by Amnesty International. The Nigerian Police have since denied this, despite evidence from people streaming live on their social media platforms to document the cruelty. (Source: amnesty.org)

China
With a more sophisticated censorship approach, China’s government blocks websites, IP addresses, and URLs whilst monitoring internet access. Online service providers are expected to authenticate the real names of online users under the Cyber Security Law (CSL), which has been in effect since 1st June 2017. Additionally, the CSL mandates that all network operators closely screen user-generated content and filter out information that existing laws or administrative regulations prohibit from being published or relayed.

Other countries with heavy internet censorship, imposed through restrictions on political media and social media, include Iran, North Korea, Somalia, Ethiopia (amidst political unrest), and some Eastern European countries such as Moldova.

Following the recently concluded U.S. elections, fought against a highly controversial and polarizing incumbent, President Trump has yet to concede. Instead, he has made widespread allegations of voter fraud and raised concerns about the integrity of the process. Social media platforms like Twitter and Facebook continue to struggle to screen out fake and misleading content.

Given the thin line between permitted and prohibited speech, enacting a universal solution to govern content moderation globally is ambitious. When relying on automated decision-making tools, moderation systems are prone to errors. Online platforms are hence forced to weigh the amount of collateral damage that would be deemed “legitimate” against the amount of harmful content that would slip through the cracks. Stronger enforcement means less hate and fake news will be shared, but it also means a greater probability of flagging, for example, activists protesting police brutality or journalists exposing injustice and corruption under those particular governments.

This article is the final part of a series. If you missed the first part, read it here.

Want to discuss the specificities in your country? Get in touch with our experts to find out more.

Talk to us today

legal framework

Legal frameworks of content moderation around the world (Part 2)


Internationally, two documents protect freedom of expression. The first is Article 19 of the Universal Declaration of Human Rights (UDHR), and the second is Article 19 of the International Covenant on Civil and Political Rights (ICCPR). Both recognize free speech and free expression as fundamental human rights and caution against unjustly infringing upon them.

By obliging social media platforms to delete illegal content within 24 hours or otherwise face exorbitant fines, the NetzDG triggered fierce debates and concerns regarding its ramifications for freedom of expression, including:

  • The Streisand effect (detrimental outcomes of censorship)
  • Accidental removal of legal content
  • Privatized law enforcement
  • Unnecessary sanctions
  • Global Internet censorship through authoritarian regimes

At least 13 different countries have enacted or outlined laws that follow the NetzDG model. According to Freedom House’s Freedom on the Net, five of them (Honduras, Venezuela, Vietnam, Russia, and Belarus) are ranked as “not free”, five others (Singapore, Malaysia, the Philippines, Kenya, and India) are ranked as “partly free”, and the remaining three (France, the UK, and Australia) are categorized as “free” (Source: freedomhouse.org). More recently, Turkey was also added to the list, having passed the worst version of the NetzDG, according to the Electronic Frontier Foundation (Source: eff.org).

United States
According to a study conducted last year, 85% of daily active Facebook users live outside the U.S. and Canada, while 80% of YouTube users and 79% of Twitter accounts come mainly from emerging markets such as Brazil, India, and Indonesia. (Source: www.omnicoreagency.com)

While the majority of their users are based outside the country, most of these big tech companies have their headquarters in the United States, so they are essentially governed by U.S. law. The First Amendment of the U.S. Constitution and Section 230 are the two principal legal frameworks regulating online freedom of expression.

In the U.S., the First Amendment prevents the government from infringing on the right to free speech. However, tech companies are not bound by the First Amendment in the same way. Consequently, they can enact codes of conduct and policies that often restrict speech the government could not prohibit under the First Amendment. For instance, Tumblr and Facebook prohibit the publication of graphic nudity on their platforms.

Yet under First Amendment law, such a prohibition by the government would be unconstitutional. And because Section 230 of the Communications Decency Act protects social media networks, website operators, and other intermediaries, they are not held liable for the content generated on their platforms and have been able to thrive.

United Kingdom
To combat harmful content, the U.K. released a White Paper last year setting out multiple requirements: internet companies must keep their platforms safe, can be held accountable for the content published on them, and are liable to pay consequent fines. (Source: assets.publishing.service.gov.uk)

This article is the second part of a series. If you missed the first part, read it here.

Want to discuss the specificities in your country? Get in touch with our experts to find out more.

Talk to us today

legal framework

Legal frameworks of content moderation around the world (Part 1)


Following increased pressure to protect audiences from harmful content, both large and small online platforms that mainly host User-Generated Content have come under intense scrutiny from governments around the globe.
Depending on their size and capacity, online platforms deploy one of two content moderation models to tackle this issue:

  1. Centralized content moderation – using this approach, companies establish a wide range of content policies they apply on a global scale with exceptions carved out to safeguard their compliance with laws in different jurisdictions. These content policies are implemented by centralized moderators who are trained, managed, and directed as such. Facebook and YouTube are examples of big internet platform companies using this model.
  2. Decentralized content moderation – this model tasks users with the responsibility of enforcing the policies themselves. Suited to communities that are diverse by nature, this approach enables platforms like Reddit to give their users a set of global policies that serve as a guiding framework.

Centralized models help companies to promote consistency in the application of content policies, while decentralized models allow more localized, context-specific, and culture-specific moderation to take place, encouraging a diversity of opinions on a platform.

After failed attempts to push social media platforms to self-regulate, the German parliament approved the Network Enforcement Act (NetzDG) on 30th June 2017. Also known as the “hate speech law”, the NetzDG took full effect from 1st January 2018. It directs platforms to delete terrorism, hate speech, and other illegal content within 24 hours of it being flagged on a platform or otherwise risk hefty fines.

While the NetzDG encourages transparency and accountability from social media platforms, it also raises concerns regarding violation of the e-Commerce Directive and of fundamental human rights such as freedom of expression. In a statement sent to the German parliament in 2017, Facebook argued that the NetzDG draft was incompatible with the German constitution, stating, “It would have the effect of transferring responsibility for complex legal decisions from public authorities to private companies”. (Source: businessinsider.com)

Having drawn criticism from a wide array of activists, social networks, politicians, the EU Commission, the UN, and scholars, the NetzDG is a controversial law that should be adopted elsewhere only with a grain of salt. Unintentionally, Germany created a prototype for global online censorship: highly authoritarian states have adapted the NetzDG to manipulate freedom of speech on the internet, pushing their illiberal agendas camouflaged as moderation policies.

Find out more about this topic

This article is part of a series looking at legal frameworks around the world. The series focuses on legal amendments to moderate user-generated content in the following countries: the U.S., the U.K., Turkey, Australia, Nigeria, and China.

Want to discuss the specificities in your country? Get in touch with our experts to find out more.

Talk to us today