Here We Go … Again
This issue features a recent publication from my companion Substack newsletter — Mapping Responsible Innovation — which discusses regulatory and market developments relevant to ethical data use and to AI development and deployment. The text below has been supplemented with specific observations on the role of ethics in this regard. I hope you enjoy the piece. If you do, please consider subscribing to Mapping Responsible Innovation.
Happy New Year, Everyone! I hope that it is off to a great start for all of you.
It has been a while since my last publication on Substack. Apologies for the radio silence! It was prompted by an intensive period of advisory work, research and teaching. This period also coincided with escalating calls for deregulation of digital technologies in the European Union (EU). It was worth pausing and taking the time to see how the dust would settle.
The European Commission took on the “deregulation” challenge by tabling, at the end of last year, legislative proposals to simplify various digital regulations, including the EU General Data Protection Regulation (GDPR) and the EU AI Act. Now, with a clearer path forward charted, pieces of the broader picture are starting to fall into place. The overall outlook prompts me to resume my Substack commentaries by remarking:
Here we go … again!
You have probably by now read summaries – or even the whole – of the European Commission’s proposals for simplification (also commonly referred to as the Digital Omnibus Package). If not, excellent overviews are available here (a longer overview) and here (a shorter overview halfway down the webpage).
The European Commission has been frank about its intentions upfront: no deregulation is planned, merely a simplification of certain rules (see, for example, here, here and here). Nevertheless, the Digital Omnibus Package raised hopes and expectations among segments of the European digital ecosystem for a lighter-touch regulatory approach (see, for example, here).
A careful, contextual read of the Omnibus reveals the limited scope of the simplifications:
The proposed amendments are targeted: They align core concepts with recent EU case-law (notably in the data protection domain), or fine-tune existing requirements for data and AI governance. No major overhaul of EU digital regulations is currently on the cards.
The substantive changes are limited and largely concern access to personal and non-personal data for, most notably, scientific research, AI development and use, and other forms of automated decision-making.
Most amendments to the EU AI Act have largely procedural (not substantive) implications: They aim to abolish a registration requirement, to relax certain quality management requirements for high-risk AI systems, and to push back the application timeline for sections of the Act that are yet to become effective (see below). High-risk AI providers must, however, still have robust quality management in place. The bans on prohibited AI applications and the requirements for robust risk assessment and mitigation, data governance, safety, security, and transparency would remain unchanged in letter and spirit.
The proposed delayed application of the EU AI Act to high-risk AI systems is conditional: The requirements for risk and quality management, safety, security, and transparency of high-risk AI may become effective only in December 2027 or 2028 (depending on whether an AI system is stand-alone or a component of a regulated product, respectively). Yet these rules may also kick in earlier if the respective technical standards are adopted later this year or early next year.
But here come the kickers:
Political consensus on these amendments appears hard to reach: Various EU member states and political groups in the European Parliament have already raised concerns about the impact of the proposed changes to the GDPR and the EU AI Act on fundamental rights, transparency and accountability, among others (see, for example, here, here and here). Hence, the proposals appear unlikely to pass in their current shape and form and will more likely prompt revisions.
The window of opportunity for negotiating and agreeing on these revisions is narrow, according to commentators (see, for example, here and here). It is currently unclear whether the proposals (even if revised) will pass before the very rules that they are meant to amend actually become effective.
The market has matured: It is beyond the point of relying on regulations alone to ensure adequate product quality and risk management of data-driven and AI-enabled tools. Large institutional users of such tools (e.g. hospitals) now routinely screen for and require proof of appropriate quality and risk management, robust technical performance, and privacy safeguards. So do research funders when they review and evaluate grant applications for research involving data, AI or other algorithmic and digital solutions.
Ethical and societal concerns feature prominently in project and vendor screening: Funders and large institutional users expect vendors of data- and AI-assisted tools to have a clear understanding, strategy and protocols for identifying and addressing the ethical and social implications of their solutions. Testing for and mitigating bias, unfair treatment, and dehumanizing, misleading or otherwise harmful output are commonly inquired about during project evaluations and vendor vetting. More on this to come in my next post.
What do these developments mean for organizations developing, deploying or using AI and data? Three key takeaways:
AI and data governance should be high up on your strategic and operational agenda. The EU digital regulations are unlikely to budge significantly, and large funders and customers demand demonstrable product quality and risk management for data-driven and AI-enabled solutions.
Start implementing (if you have not yet) the governance structures, processes and protocols necessary to ensure or vet the quality, resilience and safety of the digital tools that you develop or license. For the reasons above and below.
Consider AI and data governance your own product liability insurance and a competitive advantage. Adequate governance will help you identify and resolve technical, operational, legal and ethical risks before they materialize.
The top five questions to ask yourselves / your governance and compliance functions:
How have we integrated AI and data into our strategic and operational plans? For what applications and specific use cases?
Have we properly assessed the associated benefits and risks?
Do we have a dedicated governance / compliance function responsible for AI and data management?
Does this function have the necessary resources and access to counterparts in the organization to assess and mitigate the risks stemming from AI and data?
What policies and processes do we need to have in place for efficient risk management?
More to come in my next Substack commentary. Stay tuned!


