
Are Technology Businesses Doing Enough to Combat Misinformation?

Which sector has the biggest influence on misinformation?

The chances are, you thought of the technology industry and the powerful influence that Facebook, Twitter, YouTube and other tech giants have on misinformation and disinformation, including fake news and deepfake videos.

Facebook’s Mark Zuckerberg and Google’s Sundar Pichai are among the company chiefs who have testified before governments wanting to know what steps the tech giants are putting in place to control misinformation, amid public concern over its spread.

Meta, the group behind Facebook, says, “When it comes to fighting false news, one of the most effective approaches is removing the economic incentives for traffickers of misinformation.”

Some of the steps it is taking include:

  • Identifying false news more effectively through its community and third-party fact-checking organizations so that it can limit the spread of disinformation, which, in turn, makes spreading it uneconomical
  • Making it as difficult as possible for people posting false news to buy ads on its platform through strict enforcement of its policies
  • Applying machine learning to assist its response teams in detecting fraud and enforcing its policies against inauthentic spam accounts
  • Updating its detection of fake accounts on Facebook, which makes large-scale spamming much harder

Twitter says, “In the face of misleading information, we aim to create a better-informed world so people can engage in healthy public conversation. We work to mitigate detected threats and also empower customers with credible context on important issues.

“To help enable free expression and conversations, we only intervene if content breaks our rules… Otherwise, we lean on providing you with additional context.”

Here are some of the steps Twitter takes to combat misinformation:

  • People who repeatedly violate its policies may be subject to temporary or permanent suspensions.
  • Depending on the potential for offline harm, Twitter limits amplification of misleading content or removes it from Twitter if offline consequences could be immediate and severe.
  • In other situations, it aims to inform and contextualize by sharing timely information or credible content from third-party sources.

YouTube says, “With billions of people visiting us every day – whether they’re looking to be informed, to catch up on the latest news or to learn more about the topics they care about – we have a responsibility to connect people to high-quality content. So the most important thing we can do is increase the good and decrease the bad. That’s why we address misinformation on our platform based on our ‘4 Rs’ principles: we remove content that violates our policies, reduce recommendations of borderline content, raise up authoritative sources for news and information and reward trusted creators.”

  • YouTube does not allow misleading or deceptive content that poses a serious risk of egregious harm.
  • When it comes to misinformation, it needs a clear set of facts to base its policies on.
  • It enforces its policies consistently using a combination of content reviewers and machine learning to remove content that violates its policies as quickly as possible.

But some governments believe that big technology firms need to do more. The European Union passed the Digital Services Act last month, which allows regulators to force tech giants to take down illegal content, including hate speech and terrorist propaganda, or face fines of up to 6% of their annual revenue.

Under the Digital Services Act, social media platforms have to follow a code of conduct and allow their algorithms to be tested to see how they handle misinformation. If they are found wanting, they will be ordered to change those algorithms. During a major crisis, such as a pandemic or war, extra measures may be introduced.

The UK’s response is the Online Safety Bill, which seeks to impose even larger fines on tech firms that fail to comply, as well as prison terms for business leaders. The regulator, Ofcom, could be authorised to fine companies up to 10% of their yearly global turnover. The bill is currently at the committee stage.

It aims to protect children from harmful content and limit people’s exposure to illegal content, while protecting freedom of speech. Companies would be forced to react more quickly to harmful and illegal content, and rather than accepting tech giants’ own definitions of what is harmful, Parliament would decide.

So who bears responsibility for combating misinformation and are tech firms – both big and small – doing enough?

In Does fake news affect your business?, iResearch Services asked an international mix of 600 business leaders and 1,000 consumers who is most accountable – government, industry or society as a whole. The largest group of consumers surveyed (34%) says the government is responsible, with 28% stating it is a shared responsibility across society and 10% believing businesses should take the lead.

But when we posed the question to business leaders, there was a different result, with 36% saying the responsibility to combat misinformation should be shared by all, 22% looking to the government and 17% to the business world.

Across all sectors surveyed, nine in 10 business leaders (91%) say that technology companies are at the forefront of the fight against misinformation and bear a “complete” or “partially complete” responsibility to combat it. Slightly fewer (80%) believe technology companies are doing enough in that fight.

The survey asked leaders in each sector to rate how well their own industry is doing, and technology leaders have the most confidence that their sector is successfully combating misinformation: one in two (49%) say they are doing enough, compared with 27% in financial services, 21% in marketing and 18% in healthcare.

How do technology businesses and those in other sectors combat misinformation? Most companies rely on building trust internally and externally, setting up and following policies and guidance, and investing in reliable brand messaging and communications, our report suggests.

The key to fighting misinformation lies in establishing credibility and communicating data – hard facts – in a persuasive and engaging manner. Through compelling data storytelling, companies can be seen as trustworthy sources who are then well-placed to speak out against misinformation and disinformation.
