To regulate or not to regulate… The answer is clear
Not to be overdramatic, but governments’ Shakespearian dilemma over how to strike the right balance between a regulatory approach that promotes digital innovation and one that effectively addresses the potentially devastating risks of AI technology is taking on existential dimensions. Mixed messages, legislative flip-flopping and regulatory inconsistencies are all contributing to a sense of uncertainty for both industry and citizens. And while pretty much everyone agrees that the fast pace of technological development is unstoppable, there is a degree of paralysis affecting corporate decisions about the right approach to digital governance. To move forward, paying attention to what is really happening on the data and AI regulatory front, and understanding the mindset and actions of policymakers and regulators beyond the hype, is key. The good news is that the clues are all out there.
Let’s start with the latest regulatory attitudes towards existing laws. The ever-prolific European Data Protection Board (EDPB) has issued a detailed 13-page assessment of the European Commission’s proposed simplification of the requirement for SMEs to maintain a record of processing activities under the GDPR. ChatGPT’s spot-on one-sentence summary of the EDPB’s conclusion is that while it supports simplifying record-keeping obligations for SMEs, any exemptions must be clearly defined and proportionate, and must not compromise data protection rights. What ChatGPT is not saying is that, compared to the immensity of the GDPR, this is like debating the pros and cons of rearranging the deck chairs on the Titanic. Or in other words, don’t expect a radical reform of the European data protection framework any time soon, or even a relatively minor one along the lines of what the UK has done.
Perhaps the most consequential development in European digital regulation right now is the implementation of the AI Act, with its staggered rollout of obligations that started at the beginning of this year and will continue until the summer of 2027. However, partly due to its novelty and partly due to political pressures, there has been some perceived hesitation among policy power players about how to position this hugely ambitious framework. The response from the European Commission has been a balanced blend of reasonableness and robustness. The reasonableness is best evidenced by the openly supportive attitude exhibited by the AI Office towards any provider of general-purpose AI (GPAI) models that is willing to assert its credentials as a responsible AI developer by adhering to the GPAI Code of Practice. But while the Commission may be prepared to be patient and understanding, it is by no means conceding on its commendable quest to ensure that the development and deployment of AI is safe, secure, and above all, compatible with fundamental rights. So, the AI Act is definitely here to stay.
At the same time, amid all the cacophony emanating from the other side of the Atlantic, a picture is emerging of how the US is approaching the AI regulatory landscape. The White House ‘AI Action Plan’ has a lot to unpack, but the Trump Administration’s basic message when it comes to AI regulation is pretty straightforward: remove red tape and onerous regulation. What is less clear-cut is what this means in practice when, at the same time, a number of individual states are actively seeking to pass AI-specific laws, and the Senate voted overwhelmingly against a moratorium that would have imposed a 10-year ban on states enforcing their own AI laws. So in reality, even the US federal government is acknowledging that it should not interfere with states’ rights to pass prudent laws that are not unduly restrictive of innovation.
Where does this leave us? Certainly not in a lawless environment. Digital regulation is evolving in the same way that technology itself is evolving. Policymakers are keen to show their pro-innovation credentials and to call out unnecessarily burdensome laws that deliver little more than paperwork. But the need for responsibility and accountability remains, and regulators’ commitment to doing their job is unlikely to fade. There will be some wavering, and unprincipled policies will be pursued, but on the whole, the direction is set and it is not towards an “undiscovered country”. Digital governance that is attuned to technological opportunity and alert to potential harms is the obvious answer to one of the most existential questions of our time.
This article was first published in Data Protection Leader in July 2025.