Given digital technology’s natural disregard for national borders, countries the world over have been tightening regulations governing its use. The European Union has been at the forefront of many of these efforts: the General Data Protection Regulation (GDPR), first proposed in 2012, forced companies to rethink how they manage personal data, not just in the EU but around the globe. Since the GDPR’s implementation in 2018, the EU has shown its enforcement teeth in other ways as well, issuing fines against Google for antitrust violations (USD 10 billion) and “abusive” online ad strategies (USD 1.7 billion), and slapping Facebook with penalties for privacy violations (USD 1.63 billion) and for providing misleading information (USD 122 million). But the EU has also come under attack for lax enforcement of the GDPR, which critics say weakens the promise of its privacy protections.

Regardless of efficacy — many believe the GDPR to be a tax on innovation — the EU remains at the forefront of technology policy. Its policies often land first, setting the tone for what will happen elsewhere, particularly in the US. [LINK to US policy piece]

In December 2020, the European Union unveiled two additional regulatory proposals, which collectively take aim at harmful content, anti-competitive practices, data privacy, and fraud. Policy experts view these measures as far more restrictive than the GDPR, and therefore more impactful for all global businesses. As the proposals wind their way through the European Parliament and the governments of the 27 member states, here’s what you need to know about the new proposed laws:

Digital Services Act — This piece of legislation aims to standardize safety rules for online businesses, covering data privacy, behavioral advertising, e-commerce fraud, and illegal content such as hate speech. This broad measure affects large technology platforms as well as any business with online activity in the EU.

Some of the requirements for compliance include:

  • Removing illegal goods, services, or content from online platforms
  • Providing transparency into the algorithms used for recommendations, online advertising, and other services
  • Giving users information and recourse when their content is inadvertently deleted
  • Reporting criminal offenses
  • Cooperating with government inquiries and providing access to key platform data as requested
  • Tightening measures on sellers of illegal goods and services, including ways to track and trace them

Digital Markets Act — Following the EU’s fines against the major platforms mentioned above, these proposed rules double down on the Commission’s intention to rein in “gatekeepers,” to use the regulation’s parlance. The rules govern antitrust and anti-competitive behaviors, such as a platform prioritizing its own products over those of competitors. Platforms may not prohibit consumers from connecting with businesses outside their walls, or block users from uninstalling preinstalled apps or installing apps from third parties.

The penalties are steep: fines of up to 10% of global revenue, plus periodic penalty payments of up to 5% of average daily revenue. The EU has also left the door open to further measures, noting that “additional remedies” may follow investigations into violations.

Yes, but when?

Of course, both rules are still in the proposal stage; they have yet to make their way through the European Parliament and each of the 27 member states’ own governments, and they may see changes along the way. In the absence of existing rules around hate speech and privacy, individual countries have been rolling out their own laws, which will need to be harmonized with any pan-EU rules. In addition, tech companies will be lobbying to shape the rules in their favor, which may influence the final outcome.

Final rules, whatever form they ultimately take, are not expected before 2023 at the earliest, and they won’t take full effect for at least another year after that, while each member state puts a regulatory body in place to enforce the laws. All of which gives companies fair warning to start planning and preparing for the new regulations.