As the US dithers, the EU is setting the global agenda on tech regulation

Karlin Lillington: There’s been considerable US lobbying lately in the hopes of watering down legislation


A busy technology policy week in the European Union has foregrounded the contrasting lawmaking and regulatory thinking of the EU and the US on new technologies, and signalled trouble ahead – for the US.

This week, the European Parliament debated and voted on a major piece of regulation: the Artificial Intelligence (AI) Act, with wording overwhelmingly approved on Wednesday after four years of intense discussion. EU labour ministers also agreed on the general proposal text for the Platform Workers Directive, designed to create stronger protections for gig workers.

In addition, the General Data Protection Regulation (GDPR) and the Irish Data Protection Commission (DPC) made headlines again, only weeks after the DPC issued the largest GDPR fine ever – €1.2 billion, against Meta – and stipulated that the company must halt transfers of user data from the EU to the US within six months.

This week, the DPC’s decision related to AI. Google was blocked from launching Bard, its version of a generative AI chatbot, in the EU after the DPC indicated Google had not adequately informed it of Bard’s potential privacy impacts, nor filed the necessary data protection impact assessment (DPIA).


Given that OpenAI/Microsoft’s ChatGPT – the generative AI internet bot that’s been in the news for months – has already spurred international and European privacy concerns, with EU complaints and a temporary ban in Italy, it boggles the mind that Google didn’t have advance discussions with the DPC to avoid such an obvious setback.

But then, that’s in an aspirational world where companies recognise and address privacy issues upfront rather than waiting to see what a regulator will do. It’s difficult to conceive of any company thinking an AI chatbot wouldn’t raise privacy concerns, especially when, hello, its planned launch is during the week when the entire European Parliament is voting on flagship AI legislation.

Then again, everyone in Ireland knows that just chancing your arm sometimes works, like the well-established tradition of sorting out the planning permission after you’ve already gone ahead and built the extension.

This week’s developments offer a trifecta of existing or proposed legislation irksome to the US. Hence there’s been considerable US lobbying lately in the hopes of watering it down.

And there’s been the inevitable toys-thrown-from-the-pram script, in this case from the chief executive of ChatGPT parent company OpenAI, who stated last month that if the EU regulated AI in ways OpenAI didn’t like, the company might withdraw from the EU. Meta has long used this vague threat too but hey, it’s still here.

The amuse-bouche in advance of these developments was an online briefing on Monday given to European journalists by Nathaniel Fick, US ambassador at large for cyberspace and digital policy (a new position created last year). Fick, himself a tech entrepreneur, presented a generally upbeat view of new technologies, which he said had a “positive power” for society that “outweighs the risks”.

The Q&A brought out more specific views. AI, he said, was a “transformative technology” and he warned of the need to get “the right degree of governance engagement without too much” and he cautioned that the EU might suppress its own developing AI industry if it had lots of regulation and the US did not (maybe, but US companies must also comply with EU regulation). He suggested the four years spent developing the EU’s AI Act meant technologies had likely outstripped existing proposals that would fall further behind in the planned two-year implementation period. What was needed was – oh yes, that US favourite – an initial voluntary code of self-regulation.

We all know how well self-regulation has worked across the tech sector in the US, don’t we? And it’s hardly a negative that the EU has put several years into hashing out the complex issue of legislating for AI, whereas the US is only now considering the thorny topic.

Then there are the years of US failure to enact federal privacy legislation that would give US citizens (and therefore EU citizens) better safeguards against potential surveillance by US agencies. That lack has spurred the Court of Justice of the EU (CJEU) to rule repeatedly that the US does not have an adequate privacy and data protection framework to allow US/EU transfers of data (hence the DPC’s Meta decision).

In response to a question from me, Fick said he felt the DPC’s six-month Meta deadline – which means a US deadline to resolve the transfer issue, as Meta cannot address the lack of a US data protection law or its surveillance permissions – was “quite an aggressive timeline to get to a [US data privacy] consensus here, let alone a transatlantic consensus”.

And yet, it isn’t really a six-month deadline. It’s been a seven-year deadline ignored by US lawmakers and companies, counting from when the GDPR was passed in 2016: the two-year preparation period given to organisations, plus the five years since it came into full effect.

The problem with chancing your arm is you might lose it. Meanwhile, as the US dithers, the EU continues to set the global agenda on tech regulation.