The geopolitics of data

Published in Spanish at: https://www.eldiario.es/comunitat-valenciana/opinion/geopolitica-dato_129_12103157.html

Translated with DeepL

Following the inauguration of the new US presidency, the accelerated evolution of the global geopolitical scenario raises as many uncertainties as certainties. One of the issues that should concern us most is digital sovereignty. In a broad sense, it can be understood as our ability to control the technologies that support the digital transformation, the conditions under which data are generated, and the most favourable scenario for their re-use in research, innovation and entrepreneurship aimed at the common good and the generation of wealth. In a global context, this issue concerns not only Europeans but all companies, regardless of their nationality or where they are established, insofar as it is crucial to their business model.

Among the certainties, two are clear and have already begun to materialise in the US. The first is a significant commitment to deregulating technology sectors, especially those involving artificial intelligence. The second, directly related to the first, concerns how the sector's major hyperscalers will apply or comply with the conditions that define the new scenario, given the direct and constant instructions issued daily by the US executive. There is no doubt that the policies proposed by the Trump administration are in direct conflict with the emerging regulatory framework of the European Union. Chief among these, given its importance, is the elimination of any policy aimed at managing risks to democracy and freedoms in online social environments. And while the proliferation of fake news and hate speech is worrying, it should not be the only issue that commands our attention, and it may not even be the most important one.

The US government's announcement of a major round of public investment in artificial intelligence is by no means insignificant. This initiative, together with the deregulation brought about by the withdrawal of Biden's executive order on AI, heralds an ecosystem of technological development in which the United States deepens its model of flexible, deregulated innovation while probably acquiring features typical of the choices that have fuelled the Chinese government's technological geopolitics in recent years. We can therefore expect an acceleration of innovation processes aimed at achieving competitive results through an irresponsible approach: the speed of development and the achievement of a company's predefined goals would take precedence over all other considerations, including the guarantee of human rights.

We are not arguing that the rule of law lacks guarantees in the US. The essential difference lies in a model of legal control that depends to a large extent on compensation for the damage caused. In other words, in the absence of a set of standardised procedures for developing technology, companies operate on the basis of state-of-the-art tools. These are essentially of two types. They can use standardisation mechanisms, such as the ISO standards on artificial intelligence or the risk management framework provided by NIST. In addition, they can apply ethical principles or standards for artificial intelligence, whether developed in-house or provided by organisations such as the OECD.

The difference between the US and the EU in terms of regulation lies in how legal protection is obtained. In the US, the main avenue appears to be going to court and demanding the appropriate liability in cases where the impact of a technology is harmful. This could obviously arise in intellectual property, where there is a perception that, for example, the use of scraping techniques has led to unjust enrichment or the re-use of information without due authorisation; in patent law; or in privacy law, where the rights of individuals are affected by the processing of their information. But whatever the legal basis and process used to demand accountability, it will almost certainly come when the product is already on the market. Then, unless the issue falls under the jurisdiction of the Federal Trade Commission, it will require lengthy proceedings whose costs are not always affordable for the consumer.

There is another major cause for concern. US companies are being told to abandon diversity and affirmative action policies. Let us imagine another scenario. We saw the spectacle of the meeting between the presidents of Ukraine and the United States. Is it possible that the Trump administration will pressure Starlink to reduce or withdraw connectivity support in Ukraine? This could cause serious operational problems for the Ukrainian army and for the normal functioning of society. In our opinion, this is an impossible scenario, but 'impossible' is a word that is losing its meaning by the minute. Should our universities, hospitals, industry and more than one administration be preparing for such a scenario? It is no trivial matter to include in their contingency plans a migration to other environments and/or the verification of the guarantees they have received from their suppliers.

In geopolitical terms, beyond technological dependency, this could mean a loss of European competitiveness, which the Draghi report attributes to European hyper-regulation. It must be stressed that the European Union is committed to a market that is subject to regulation and supervision by a number of independent authorities. This is reinforced by coercion through a framework of sanctions backed by fines, which can be very costly for companies.

And this model is growing. Every single decision related to the creation of datasets is subject not only to the GDPR and the corresponding authorities, but also to those emerging in AI, data governance and healthcare. New administrative bodies are appearing, either to manage the system or to take on administrative powers and the ability to impose sanctions. Moreover, certain technological developments - from electronic medical devices to high-risk artificial intelligence systems, including electronic health records - require validation by a notified body in order to obtain the CE mark that allows them to be placed on the market. The asymmetry is clear. Any North American company with access to private and public funding in its financing rounds will have significant economic muscle and very flexible development processes, and will therefore be able to launch its products in much less time than its competitors in the EU market.

At this point it is clear that the simplest course of action would be deregulation and a level playing field with the US. But this would mean abandoning the core values of our post-World War II democracies. The algorithmic society emerging on the American horizon looks less and less like surveillance capitalism and more and more like digital feudalism. This vision of the world may generate wealth, but it will leave the most vulnerable behind, wipe out the welfare state and force us to abandon deliberative democracy in favour of a state of permanent emotional manipulation. I do not believe that this is what our societies want, at least not consciously.

Surprisingly, we can arrive at the same result by sticking to the regulatory approach that has characterised data protection. Too often we have confused guaranteeing a fundamental right with making it absolute and unquestionable, and we have altered the constitutional position of regulators. This is a harsh statement, but one that can be proven. It is naive to believe that this matter is governed by the GDPR and by decisions that can be reviewed by the courts, because it is not true. The real law is created by soft-law instruments, such as guides, guidelines or legal reports, which are not subject to appeal.

In addition, the development of these instruments is in most cases not the outcome of a process of dialogue and co-creation, but rather of a criterion imposed by the regulator. This usually produces results of two kinds. The most common is to promote a level of excellence so high as to be unattainable. At the other end of the spectrum are recommendations that make the right to privacy prevail at all costs, even at the risk of people's lives if necessary, as COVID showed. The end result is entirely counterproductive: in practice, regulatory risk management is so intense that it cripples the ability to innovate and forces legal teams into a defensive strategy that is unsustainable for organisations. Those with a multinational structure can decide where to locate their developments, leaving the rest with no choice but to emigrate or rent out their talent to the formidable North American firms.

This situation forces us to rethink, in the geopolitical context towards which we are moving, the role of data protection authorities, as well as of artificial intelligence authorities and others yet to emerge. Independence, regulatory control of the market and the defence of fundamental rights are not best served by a serious lack of public policies that encourage the development of digital technologies. A case in point: Guidelines 01/2025 on pseudonymisation are currently under public consultation. The document contains a paragraph, number twenty-two, which seems to aim at objectifying the concept of pseudonymised data. If we read it correctly, the result is obvious: anonymisation of personal data is impossible. In other words, even if we have made a very robust anonymisation effort, even if we have a secure processing space, even if we integrate software intermediation that acts as a barrier against improper use and traces all uses, even if we use cryptographic technologies, the GDPR will apply if the dataset presents a current or future risk of re-identification. Even if the user cannot be re-identified, even if the system prevents external attacks and not a single piece of data can leave it, the data will be treated as pseudonymised personal data, with all the requirements of the GDPR. The result is clear: it will not be possible to generate large datasets in the European Union because, with the exception of the future Regulation of the European Parliament and of the Council on the European Health Data Space, there are no exceptions to the rule of consent or specific authorisations. And where such exceptions exist, they depend on the interpretation of the European Data Protection Board and the data protection authorities themselves, whose strict and restrictive vision is more than evident.
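
To make the stakes concrete, here is a minimal sketch (in Python, with hypothetical names and data) of keyed pseudonymisation of the kind the Guidelines contemplate: the direct identifier is replaced by a keyed hash, yet anyone holding the secret key can recompute the pseudonym and re-link the records. That residual possibility is precisely the 'current or future risk of re-identification' that keeps such data within the scope of the GDPR.

    # Minimal sketch of keyed pseudonymisation (HMAC-SHA256).
    # SECRET_KEY and the record fields are hypothetical examples.
    import hmac
    import hashlib

    SECRET_KEY = b"key-held-by-the-data-controller"

    def pseudonymise(identifier: str) -> str:
        # Replace a direct identifier with a keyed hash.
        return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                        hashlib.sha256).hexdigest()

    record = {"patient_id": "ES-12345678Z", "diagnosis": "J45.0"}
    record["patient_id"] = pseudonymise(record["patient_id"])
    print(record)

    # The identifier no longer appears in the dataset, but whoever holds
    # SECRET_KEY can recompute the same pseudonym and re-link the records:
    # the re-identification risk the Guidelines invoke, which keeps the
    # data pseudonymised (not anonymised) and subject to the GDPR.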

Therefore, accepting that the regulatory framework is binding, and sharing the conviction that it is necessary and essential for guaranteeing fundamental rights, it is clear that the geopolitics of data will be played out in the offices of regulators. Will they understand their role in this new Great Game?
