
The risks of progress without a plan

The implications of applying AI solutions must be carefully and fully considered before they are implemented, argues Mladen Vukmir of ECTA.


My cousin, a mathematician and programmer, has given artificial intelligence (AI) an endearing name: he calls it artificial stupidity. In doing so he tries to capture its present state, devoid of conscious and ethical elements.

Things are progressing rapidly, however, and AI is developing into an omnipresent and powerful entity impacting many aspects of our everyday professional and private lives and the decisions we make.

Its capability to handle unprecedented amounts of data and, increasingly, its ability to build and read ever more sophisticated correlations do indeed raise its conclusions above past human capabilities in many instances. We have all heard by now that in many specialised, narrower fields of expertise AI is surpassing, or has surpassed, human performance.

So far, this has been to human benefit: as we learn to harness these newly acquired powers, we are gaining improved management capabilities and deepening our insights into various processes.

The sheer speed of the progress, however, does prompt numerous questions about where this growing capability will end and what the consequences will be.

Above all, a question looms about the possibility of AI gaining consciousness and developing its own objectives, if not intentions: a will, in human terms.

Without attempting to answer these questions, which pertain to general AI, this article looks at possible directions of development in the field of IP beyond the AI-enabled tools deployed today.

Risk vs reward

Today's IP systems are antiquated and overly complex, carrying inherent burdens that are then passed on to the economy at large. Many users, except the biggest, sometimes seem overwhelmed by the sheer complexity of the rules and the costs of using the system systematically.

In light of those characteristics of the system, AI tools that can be deployed to sort data in tasks as complex as patent, trademark or design searching and analysis seem worth adopting, notwithstanding the risks inherent in the very core characteristics of AI.

Here, we have in mind issues such as the ‘explainability’ requirement, whereby human users must always be able to understand the conclusions an AI system has reached, a requirement that is arguably particularly important in the legal context.

Another issue of great importance for societies using AI with increasing frequency is bias, whereby the training data used might sway the analysis in an unintended and inaccurate direction. To build trust in AI systems, they must be structured to enable the ‘unlearnability’ of conclusions reached on the basis of data that was corrupted either in its content or in the interpretation of that content.

“What I am advocating here is to take a long, deep, and hard look at the entire IP protection system when these AI technologies are introduced.”

Mladen Vukmir, ECTA

However, we realise that precisely because they have become so unwieldy in their complexity, IP systems are ripe for improvement by machine learning tools. To exaggerate, but only slightly, we might conclude that their only chance of survival lies in the wide application of AI to handle the very large amounts of data that, as we now begin to realise, legal systems are dealing with.

It might be argued that legal systems were never designed to deal with factual patterns as complex, or transactions as numerous, as those that now regularly emerge in contemporary societies.

What I am advocating here is to take a long, deep, and hard look at the entire IP protection system when these AI technologies are introduced. We need to remain focused on developments as they occur in order to understand the process. This is not an easy task for us imperfect humans.

For the purposes of illustration, I will raise here only two of the many issues that lie ahead of us.

Who is responsible?

On the side of users, the current dilemma about whether an AI can be held responsible, or can be considered an inventor, can be dealt with in various ways. While some prefer apportioning liability for its possible errors among its authors, manufacturers, owners, vendors and users, in various combinations, others suggest that the best approach at present is to treat AI in a way similar to animals: we would not consider them responsible, but for certain of their actions we would hold their owners liable.

Of course, for open-source AI offered as a free service, a degree of the risk would shift to its users and similar parties. On the question of inventorship, one of the dilemmas concerns the ownership of such an invention.

Historically, our economies grew on the basis of ownership of creative output, and it would appear intuitive to ask who the owner of an AI invention should be. However, we assign ownership because humans tend to claim it. In the case of AI, are we prepared to accept the idea that AI does not seem to be claiming any ownership right over the inventions it might be making?

Should we structure AI so that it starts claiming ownership? Should we consider granting ownership to the humans who enabled the AI to make inventions they did not themselves make, or should we consider the possibility that we can benefit from inventions not claimed by anyone?

In the context of the broader reshaping of legal systems that lies ahead of us, what role should we contemplate for AI? I doubt that its role will remain limited to the analytical or drafting tools we already see being deployed, regardless of how successful they are at the moment.

Along the same lines, my view is that AI will likely gain a much more fundamental role in organising our societies than we currently perceive.

The legal system in many a jurisdiction is currently bursting at the seams due to its complexity, the inconsistency of decisions, and inherent contradictions, or at least the poor alignment of various unrelated norms that the judiciary must interpret coherently for the law to retain its guiding role.

This seems ever harder to achieve, especially for smaller jurisdictions with fewer resources. Could it be that AI will bring more consistency to judicial decisions than humans are capable of producing?

Even more importantly, how good can AI become at designing, devising and deploying the rules that govern our societies? Do we now have an opportunity, by building ‘ethical’ AIs, to raise the ethical level of our societies so that we consequently need less law to guide us?

Will AI curtail corruption? Will we allow it to do so, even if it proves capable of such a feat? What standards will humans require before relinquishing their decisions on lobbying for and choosing the rules, which too often serve particular interests and contradict one another? What if AI recommends that we abandon property rights in AI inventions? Will we be ready to accept such a conclusion?

“Could it be that AI will bring more consistency to judicial decisions than humans are capable of producing?”

Ethical questions

These are some simple illustrations of the questions ahead of us. Space permitting, we would gladly discuss the role that our present institutions, such as academia, governments, the EU and professional associations including ECTA, have in this process, as well as the current achievements in that field.

As this is not possible here, and bearing in mind that the future development of our societies will be inseparable from the participation of AI, one thing is certain: we must discuss each and every aspect of its functioning early and thoroughly, not succumbing to the path of least resistance that so often leads our societies to suboptimal solutions we then remain stuck with for decades as our business models develop around them.

We came to realise in the initial discussions on ethical AI that ethics are not uniform across all societies, and that we have to deal with significant differences in what a society will consider ethical. A good example is the ‘trolley problem’: some societies attribute greater value to the life of a young person, while others consider the life of an older person more valuable.

Bearing in mind that neither the legal profession nor the technologists, who have the most intimate relationship with this developing technology, are best suited to discussing the ethical aspects of its introduction, we need to increase their education and multiply the opportunities at which the consequences and the best modalities of introducing AI are discussed: frequently, frankly and fundamentally.

Mladen Vukmir is first vice-president of ECTA and the founding partner of Vukmir & Associates, a law firm based in Zagreb. He can be contacted at: www.vukmir.net




