FORUM IP Talks: Patrick Heckeler on challenges when drafting AI patent applications

Patrick, in your experience, how has the global legal landscape adapted to the unique challenges presented by AI patent applications? Could you share examples of jurisdictions that are leading in setting clear standards?

The global legal landscape is actively adapting to the unique challenges presented by AI patent applications. Nearly all jurisdictions have striven to establish rules, particularly concerning the assessment of inventive step and sufficiency of disclosure.

In my view, the European Patent Office – compared to other jurisdictions – has well-defined standards outlined in its Guidelines for Examination. Specifically, section G-II, 3.3.1, deals with the questions of patentability of AI-related inventions. In essence, the EPO applies the established standards for computer-implemented inventions to assess the patentability of AI-related inventions. This ensures predictability and allows practitioners to use a well-established legal framework with which they have been familiar for years.

EPO Guidelines, G-II, 3.3.1

What are the most significant hurdles patent practitioners face when drafting AI-related patents, particularly in terms of defining the invention and meeting disclosure requirements? How do you address these issues?

AI systems are difficult to understand because of their intricate internal workings. These systems, particularly deep learning models, learn from massive datasets, identifying patterns and making predictions through processes that are not fully understood. In other words, many AI systems are “black boxes”: we can see the input and the output, but we cannot explain how the output is actually determined or generated. Because of this internal complexity, it is practically impossible to describe all the internal details of an AI system in a patent application.

This poses a big challenge to patent practitioners because patents must disclose an invention in a way that the skilled person can carry it out or rework it.

To avoid sufficiency-of-disclosure objections, I recommend describing the structure of the model used, for example the number of neurons and layers of a neural network and how they are interconnected. Furthermore, you should disclose at least some training data samples and a suitable training method. Based on this information, the skilled person can build their own training data set and use it to train the model.

Hopfield net

This information enables the skilled person to rebuild an AI system as described in a patent application.
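To make the recommendation above concrete, here is a minimal, purely hypothetical sketch in Python of the three elements a disclosure might cover: the model structure (layer sizes and full connectivity), a few training samples, and a training method (gradient descent on a squared-error loss). All names, numbers, and the toy task are illustrative assumptions, not taken from any actual application.

```python
import numpy as np

# Hypothetical disclosure sketch: structure, sample data, training method.
rng = np.random.default_rng(0)

# Structure: 2 input neurons -> 3 hidden neurons (fully connected,
# sigmoid activation) -> 1 output neuron (sigmoid activation).
W1 = rng.normal(0.0, 1.0, (2, 3))
b1 = np.zeros(3)
W2 = rng.normal(0.0, 1.0, (3, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)       # hidden layer activations
    out = sigmoid(h @ W2 + b2)     # network output
    return h, out

# A few example training samples (toy logical-OR data).
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([[0.0], [1.0], [1.0], [1.0]])

# Training method: full-batch gradient descent on squared error,
# with gradients computed by backpropagation.
lr = 1.0
for _ in range(5000):
    h, out = forward(X)
    d_out = (out - y) * out * (1.0 - out)      # output-layer error term
    d_h = (d_out @ W2.T) * h * (1.0 - h)       # hidden-layer error term
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

_, preds = forward(X)
print("rounded predictions:", np.round(preds).ravel())
```

A real application would of course describe the actual architecture and training regime of the claimed system; the point of such a sketch is only that structure, data, and training procedure together let the skilled person rebuild and retrain the model.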

The evolving nature of AI technologies often raises questions about patent eligibility and inventive step. What key considerations should practitioners keep in mind when arguing for the patentability of AI inventions?

Firstly, it is crucial to demonstrate that the claimed invention involves a technical contribution beyond mere mathematical or abstract concepts. This can be achieved by showing that the AI system solves a technical problem in a novel and non-obvious manner. For example, an AI system that controls a manufacturing process more efficiently or an AI system that optimizes the energy consumption of a smart grid by predicting and adapting to fluctuating energy demands would likely be considered to have a technical effect.

One way to make it plausible that an alleged technical effect is actually achieved is to present test or measurement results. If such results are not available, at least try to explain how the inputs and outputs of the AI system correlate, providing a clear understanding of how the system achieves its intended purpose.

The scope of AI inventions often spans multiple industries. How can practitioners ensure that their patent applications sufficiently cover diverse use cases without risking overly broad claims?

In my view, it is advisable to direct a patent claim to a concrete use case. Otherwise, it can be difficult or even impossible to argue successfully that a technical problem is solved.

To achieve protection for different use cases, include as many example use cases as possible in your application. This allows you to later file divisional applications with claims directed to the different use cases.

AI systems often involve collaborative contributions from multiple stakeholders. How does this complexity impact ownership determination, and what practical steps can companies take to address disputes in this area?

The collaborative nature of AI development significantly impacts ownership determination. AI projects often involve contributions from a diverse range of stakeholders, including data providers, algorithm developers, hardware providers, and researchers. Determining ownership rights in this complex ecosystem can be challenging.

For example, who owns the data used to train the AI model? Does the ownership of the data translate to ownership of the resulting AI system or any intellectual property derived from it? What level of contribution constitutes ownership? Does simply providing data or computational resources grant ownership rights?

Poisoned AI training data

To avoid lengthy legal disputes, the parties should agree contractually, before a joint project starts, on who owns which parts of an invention and who receives the corresponding rights of use.

Divided infringement is a recurring issue in AI patenting. Could you explain this concept in a nutshell and share practical strategies to avoid it during patent drafting?

Divided infringement occurs when multiple parties collectively perform all the steps of a patented method, but no single party performs every step themselves.

As an example, let’s assume that a patent claim comprises steps directed to the training of an AI system as well as steps relating to the use of the trained AI system. In such a case, it is difficult to successfully sue a party for direct patent infringement. This is because the training-related steps may be performed by the manufacturer of the AI system, whereas the steps relating to the use of the AI system may be performed by the customer, i.e., the user of the trained system.

To avoid a divided-infringement scenario, formulate, for each set of steps performed by a different party or entity, a claim directed only to those steps – for example, one claim directed to the training of the model and one directed to the use of the trained model. This way, each claim can be infringed by a single party acting alone.

Patrick, thank you very much for the interview!


About the interviewee:

© Bardehle Pagenberg

Patrick is a German and European Patent Attorney and a Partner with Bardehle Pagenberg in Munich. Having studied computer science, he has in-depth expertise in the patentability of inventions in the field of computer technology. Patrick has excellent knowledge in the areas of computer architecture, operating systems, databases, embedded systems, multimedia technology, virtualization, and network technology. His main areas of practice include patent prosecution, opposition and appeal proceedings, as well as infringement and nullity actions.

Patrick is going to speak at the online course "AI Patent Application Drafting" on 3 April 2025 with Michaela Wegner (Siemens) and Martin Müller (EPO Boards of Appeal) (further information can be found here) and at the online course "Digital Patents: Claim Drafting" on 27 May 2025 (further information can be found here).


Cover: Artificial Intelligence & AI & Machine Learning, by VPNs ‘R Us, no changes were made, CC BY-SA 2.0

Hopfield net: Example Hopfield net with four units, by Zeno Gantner, no changes were made, CC BY-SA 3.0

Poisoned AI training data: AI training data poisoning illustration (Nightshade), by Shawn Shan, Wenxin Ding, Josephine Passananti, Haitao Zheng, Ben Y. Zhao, no changes were made, CC BY 4.0
