Liability for the Robot’s Own Deed

Anca Florina Mateescu1

Abstract: The research on the proposed theme aims to demonstrate the existence of civil legal liability in the case of agents endowed with artificial intelligence, not only in the case of human beings. Scientists have already carried out several studies in this field, which have produced not only theoretical concepts but also concrete results, namely robots. European legislation has also enshrined rules addressing the legal relationship between robots and humans. The methods we intend to use are the quantitative, logical, sociological and comparative methods. The conclusions of the article capture the importance of recognizing robots as subjects of civil law, corresponding to their degree of understanding and perception of reality, but also of their assuming the consequences of their deeds. This analysis can be useful to university professors, researchers in the field, legal scholars, students and jurists; the list is not exhaustive, as the research is relevant to all those interested in this topic. The novelty of the study consists in formulating ideas based on existing legislation, but also on an opinion poll of professionals. The research will show that robots, too, are liable for their own deed, bringing both arguments and counterarguments.

Keywords: artificial intelligence; subjects of law; civil law; legislation

1. Introduction to the Problem of Liability for One’s Own Deed

In order to address this form of civil liability, we will first consider the definition of tortious civil liability, the notion of law from which the theme of our research derives.

Tortious civil liability is defined by the Civil Code2, which provides that no one may prejudice the legitimate rights or interests of others and that every person is under an obligation to comply with the rules of conduct laid down by law.

Liability also arises, according to the doctrine (Verdeṣ, 2010, p. 14), upon the violation of an obligation established by law, taking the form of a legal relationship of obligations arising from an unlawful and prejudicial act.

The conditions under which tort liability may be incurred are specified in Article 1.357 of the Civil Code and are discussed by some scholars (Colṭan, pp. 4-5).

We will present, therefore: the wrongful act, the guilt, the damage, and the causal link.

The wrongful act is the main factor triggering tortious civil liability. It does not, however, enjoy a legal definition, being only mentioned in the first paragraph of Article 1.357 (Baiaṣ & Chelaru, 2012, p. 1416). The literature has nevertheless been concerned with formulating a definition that conveys its characteristics: “the wrongful act is the operative event of liability, which in extra-contractual matters can be the act of another person and the deed of the things or animals that we have in legal guard”.

Guilt is defined by the legislator, the seat of the matter being Article 16 of the Civil Code3. It is the subjective element of civil liability and can only be found in the person who has committed an injurious act. Even though the legal definition uses notions such as fault and intent, the Civil Code still states, in Article 1.357, paragraph 2, that the person who causes the damage is liable even for the slightest fault. So, although the civil norm seems to be of criminal inspiration, it does not draw as precise a demarcation between fault and intent as the criminal rule does.

The damage does not enjoy a proper legal definition either, but the notion arises within the framework of tort liability, where the law provides for the object of reparation, joint and several liability, the relationship between debtors, the right of recourse, and the extent and forms of reparation.

The damage plays an important role because through it the remedial function of tortious civil liability is fulfilled.

There is therefore the possibility of repairing the damage caused by committing an unlawful act, but there must necessarily be a causal link.

The causal link is the connection between the wrongful act, which is an objective element, and the injury, which results from the wrongful act. If there is no causal link, there can be no tortious civil liability. The causal link must be proved by the person who wishes to bring an action in court concerning tortious civil liability. This condition has been discussed in the doctrine, where several theories exist: the theory of the proximate cause, of the adequate cause, or of the sine qua non condition (Colṭan, p. 6).

This succinct analysis will be the starting point, the benchmark, of our investigation of liability for one’s own deed.

2. Comparison between Subjects with Artificial Intelligence and People – Norms and Their Interpretation

Romanian legislation vs. European legislation

In Romania, the subjects of civil law (Puṣcă, 2006, p. 14) can only be persons. Our law has not enshrined rules governing liability for damage caused by agents with artificial intelligence.

So, in our country, if a robot commits a harmful act, liability will be determined under the rules of the Civil Code applicable to individuals.

To demonstrate that the subjects with artificial intelligence will be responsible for their own deed, we consider it necessary to present their definition.

The Expert Group on AI within the European Commission defines these systems as follows: “Artificial intelligence (AI) systems are software (and possibly also hardware) systems designed by humans that, given a complex objective, act in the physical or digital dimension by perceiving their environment through data collection, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information derived from this data, and deciding the best action to take to achieve the given objective. AI systems can either use symbolic rules or learn a numeric model, and they can also adapt their behavior by analyzing how the environment is affected by their previous actions”.4

We note, therefore, that these “systems” work by perceiving the environment, interpreting the data at their disposal, reasoning and processing the information obtained in order to decide on the best action in the given situation (Puṣcă, p. 3).

Based on this definition, we can appreciate that robots are entities able to assess the consequences of their activities, so we believe that they will be liable for their own deed just like humans, under Article 1.357 of the Civil Code.

In particular, if a robot causes harm to another person5 through an unlawful act committed with guilt, it will be obliged to repair it, just as stated in paragraph 1 of Article 1.357.

The second paragraph of the same article stipulates that the person who caused the damage is liable even for the slightest fault.

At European level, concern for the field of artificial intelligence and for the legal liability of robots is increasingly present.

Thus, the European Parliament’s Report of 27.01.2017 contains illuminating provisions on liability.6

In this regard, we will present two of the most important “General Principles on the Development of Robotics and Artificial Intelligence for Civil Use”7.

The first principle concerns the Commission’s proposal for common definitions at Union level of cyber-physical systems, autonomous systems, smart autonomous robots and their subcategories, taking into account the following characteristics of an intelligent robot: the acquisition of autonomy through sensors and/or by exchanging data with its environment (inter-connectivity) and the analysis of those data; self-learning from experience and by interaction (an optional criterion); at least a minor physical support; the adaptation of its behavior and actions to the environment; and the absence of life in the biological sense.

Based on these criteria, we can say that this subject of law, although not alive from a biological point of view, still has autonomy, analyzes data from the environment, can learn from its experiences, interacts with the environment and can adapt its behavior and actions accordingly.

So, according to the first principle, robots can be held liable for their own deed, as they have autonomy and a perception of the consequences of their actions.

The second principle states that the development of robotics should focus on complementing human capabilities rather than replacing them; the report considers it essential to ensure that, in the development of robotics and AI, humans always retain control over intelligent devices.

As far as we are concerned, we believe that humans and robots should indeed complement each other, and the former should be able to control the work of the latter, without, however, limiting it. Specifically, robots should be held liable for their own action which has created damage, because they are endowed with their own will and are able to realize the outcome of their deeds.

An additional argument showing that agents with artificial intelligence will be very advanced technically in the future is a study published in 2016 by Stanford University8, which expects that by 2030 there will be multiple changes in various fields, such as transportation, home/service robots, healthcare, education, public safety and security, employment and the workplace, and entertainment.

Of all the areas where important changes will occur due to artificial intelligence, the most relevant developments, which will attract the robots’ liability for their own deed, are those concerning health, employment and household appliances.

If we refer to the field of health, given the high degree of responsibility, but also the autonomy, of such a robot, which will be able to treat or act according to its own perception, without being controlled by the human factor, all that remains to be proved is that it has committed an unlawful act, with guilt, even in its slightest form. If these conditions of liability for one’s own deed are met, the robot will be obliged to repair the damage created.

In the situation where an increasing share of the labor force will belong to entities with artificial intelligence, the robots will of course be liable for the damage they have created, like any other employee, to the extent that it is not a case of engaging the principal’s liability for his agent.

As far as household appliances are concerned (e.g. vacuum cleaners whose advanced technology allows the recording and storage of the layout of the house, with all the characteristics of the rooms), they will be civilly liable if they are not operated by the human factor. Specifically, if the state-of-the-art vacuum cleaner operates within its usual technical parameters, being turned on and off at the user’s will, there is no possibility of civil liability of the equipment. However, since this “object” can store information about the house in which we live and cause harm under these conditions, then, depending on its degree of autonomy, it will be able to answer for its own deed. If the possibility of action does not fully belong to the robot, then liability for the injury will fall on the manufacturer of the product in question, or on the user if he has misused it.

3. Artificial Intelligence and Ethics

Ethics can be considered the knowledge of the ethos (of morality), of good and evil (according to Socrates, Plato and Cicero), of happiness, of virtue (Aristotle), of pleasure (Aristippus), of the social ideal.

The emergence of ethics is owed to Socrates, who claimed that it is a distinct branch of knowledge. As a scientific discipline it has existed since the time of Aristotle, who made ethics a science.

Ethics is the scientific study of moral principles, with their links to historical development and represents the totality of the norms of moral conduct10.

The importance of respecting moral principles in relation to the young branch of artificial intelligence science is highlighted by the signing of a document endorsed by Pope Francis, which emphasizes the need for responsibility in this area on the part of organizations, governments and institutions. The “Rome Call for AI Ethics” was signed on 28.02.2020 by Microsoft, IBM, FAO and the Italian government11.

So, the concern to maintain a balance in the development of artificial intelligence technology, subsuming it to moral rules, exists and intensifies.

The same Report of the European Parliament12, which we mentioned earlier, proposes several ethical principles, two of which we will present, precisely to demonstrate the importance of the intertwining of ethics with artificial intelligence.

One of these principles emphasizes the idea that the use of robotics could produce tensions and risks, which should be carefully analyzed from the perspective of human safety, health and security, freedom, privacy and dignity, self-determination, nondiscrimination, and personal data protection.

We consider the European vision right concerning the risk of violating a person’s fundamental rights in the absence of moral limits, which should temper today’s “technological explosion”.

Another principle states that it is essential for the current legal framework of the European Union to be updated and complemented by guiding ethical principles adapted to the complexity of robotics and its social, medical and bioethical implications. A clear, strict and effective indicative ethical framework is needed for the development, design, manufacture, use and modification of robots.

Within this principle, it is proposed to draw up a charter consisting of a code of conduct for robotics engineers, as well as a code of conduct for research ethics committees.

For a clear and effective ethical framework to exist, we believe it is necessary to draw up such a charter, encompassing all the rules and limits applicable to robotics technology. We therefore appreciate the European Union’s approach of delineating the framework for the development of artificial intelligence.

4. Conclusions

We consider that a conclusion acquires more value when accompanied by authoritative opinions in the field. Thus, in conclusion, we chose to present a short opinion poll13, in which recognized specialists answer a question asked by journalists:

“Do robots need their own ethical code?”

Paul-Louis Pröve - Expert in Artificial Intelligence at Lufthansa Industry Solutions:

“Absolutely, an ethical code for robots is needed. But I do not think there’s any need for a new code of ethics. The one that we humans have created for ourselves over thousands of years is particularly good.”

Manuela Lenzen - Scientific journalist, among others, and in the field of artificial intelligence: “No, not an ethical code for robots, but one for researchers, manufacturers and users. Machines without consciousness have no moral principles.”

Stephan Dörner - Editor-in-Chief of t3n, German magazine for the digital future:

“When robots or other machines act autonomously, without user command, we have to ask ourselves who is responsible. This requires an ethical and legal code, which we, as a society, need to debate and adopt in the most transparent way possible. First, here is a question of liability. Maybe someday robots will have consciousness and benefit from rights, but for now, there is no such significant progress.”

Regarding the first answer, we believe that the moral rules people have had for thousands of years are, of course, particularly good, but insufficient for artificial intelligence, which will probably require setting new limits corresponding to robot technologies.

We partially agree with the statement of journalist Manuela Lenzen. Indeed, an ethical code is also required for researchers, manufacturers and users, but artificial intelligence entities also need a set of rules setting out certain limits of action. Even if these “machines” have no consciousness or moral principles, they can perceive reality and realize the effects of their deeds, so they can understand a set of rules applying to them.

The last opinion, that of Stephan Dörner, seems complete to us and we fully subscribe to it. The problem is a complex one and it may seem impossible for robots to be held accountable for their actions, but it is almost certain that the development of artificial intelligence will be spectacular in the future.

The conclusion of our research is that, although it is only at its beginning, the civil legal liability of robots increasingly shows its usefulness and must be recognized as such. As a Latin proverb reminds us: “Times change, and so do we”14. So, humanity needs to create now the rules necessary for technology to integrate harmoniously into our lives.

5. Bibliography

Baiaṣ, F. & Chelaru, E. (2012). Noul Cod civil – comentariu pe articole. Bucharest: C.H. Beck.

Colṭan, T. (n.d.). Răspunderea civilă delictuală în Noul cod civil. Între dezideratul legiuitorului ṣi realităṭile practice/ Tortious civil liability in the New Civil Code. Between the legislator’s desideratum and practical realities. Bucharest.

Puṣcă, A. (2006). Drept civil român – Persoanele fizice ṣi persoanele juridice/ Romanian civil law - Individuals and legal entities. Bucharest: Editura Didactică ṣi Pedagogică Bucharest.

Verdeṣ, E. (2010). Tendinṭe în abordarea teoretică a răspunderii juridice. Privire specială asupra relaṭiei dintre răspunderea civilă delictuală ṣi răspunderea penală/ Trends in the theoretical approach to legal liability. Special look at the relationship between tortious civil liability and criminal liability. Bucharest.

Codul Civil al României, republicat în Monitorul Oficial, Partea I nr. 505 din 15 iulie 2011/ The Civil Code of Romania, republished in the Official Gazette, Part I no. 505 of July 15, 2011, updated.

Puṣcă, A. Legal Aspects on the Implementation of Artificial Intelligence.

Puṣcă, A. Should We Share Rights and Obligations with Artificial Intelligence Robots?

*** Orientări în materie de etică pentru o inteligență artificială fiabilă, elaborat în aprilie 2019, în cadrul Comisiei Europene, de către Grupul de Experți la nivel înalt privind AI (AI HLEG)/ Ethics Guidelines for Trustworthy AI, developed in April 2019 within the European Commission by the High-Level Expert Group on AI (AI HLEG).

*** Raportul din 27.01.2017 al Parlamentului European, conținând recomandări adresate Comisiei referitoare la normele de drept civil privind robotica (2015/2103(INL))/ Report of the European Parliament of 27.01.2017 with recommendations to the Commission on civil law rules on robotics (2015/2103(INL)).

*** One Hundred Year Study on Artificial Intelligence (AI 100).

Morar, M. & Uscov, S. Scurte considerații privind răspunderea Inteligenței Artificiale în România sub unghiul mort al AI/ Brief considerations regarding the responsibility of Artificial Intelligence in Romania from the blind spot of AI.

Bejan, T. Definiṭii ale eticii/ Definitions of ethics.


*** European Commission – White Paper on Artificial Intelligence – A European approach to excellence and trust, COM (2020), Brussels, 19.2.2020.

*** Rome Call for AI Ethics.

1 PhD candidate, Doctoral School of Legal Sciences and International Relations, University of European Studies of Moldova, Address: Strada Ghenadie Iablocikin 2/1, Chișinău 2069, Corresponding author:

2 Article 1349: “(1) Every person has the duty to respect the rules of conduct that the law or custom of the place imposes and not to infringe, through his actions or inactions, the rights or legitimate interests of other persons. (2) The one who, having discernment, violates this duty is responsible for all the damages caused, being obliged to repair them in full. (3) In the specific cases provided by law, a person is obliged to repair the damage caused by the deed of another, by the things or animals under his guard, as well as by the ruin of the building. (4) The liability for damages caused by defective products shall be established by special law”.

3 Article 16: “(1) If it’s provided by law, the person is liable only for his acts committed intentionally or through fault. (2) The deed is committed with intent when the author foresees the result of his deed and either pursues its production through the deed, or, although he does not pursue it, accepts the possibility of producing this result. (3) The deed is committed through guilt when the author either foresees the result of his deed, but does not accept it, considering without reason that it will not occur, or does not foresee the result of the deed, although he had to foresee it. The guilt is serious when the author acted with a negligence or recklessness that not even the most unskilled person would have shown towards his own interests. (4) When the law conditions the legal effects of an act by its culpable commission, the condition is fulfilled also if the act was committed intentionally”.

4 Ethics Guidelines for Trustworthy AI, developed in April 2019 within the European Commission by the High-Level Expert Group on AI (AI HLEG).

5 As far as we are concerned, we appreciate that this can be the case for both a person and another robot.

6 Report of the European Parliament of 27.01.2017, with recommendations to the Commission on civil law rules on robotics (2015/2103(INL)).

7, consulted on 02.06.21.

8 One Hundred Year Study on Artificial Intelligence (AI 100), consulted on 02.06.21.

9 Morar, M. & Uscov, S., Scurte considerații privind răspunderea Inteligenței Artificiale în România sub unghiul mort al AI/ Brief considerations regarding the responsibility of Artificial Intelligence in Romania from the blind spot of AI, consulted on 02.06.21.

10 Bejan, T., Definiṭii ale eticii/ Definitions of ethics, consulted on 02.06.21.

11 Morar, M. & Uscov, S., Scurte considerații privind răspunderea Inteligenței Artificiale în România sub unghiul mort al AI/ Brief considerations regarding the responsibility of Artificial Intelligence in Romania from the blind spot of AI, consulted on 02.06.21.

12 Report of the European Parliament of 27.01.2017, with recommendations to the Commission on civil law rules on robotics (2015/2103(INL)).

13 Opinion poll taken from the website, consulted on 03.06.21.

14 “Tempora mutantur et nos mutamur in illis”.