Asimov's laws of robotics provide rare common ground, a kind of shared set of commandments for all sides of the ethical discourse on artificial intelligence. Practical implementation of these "laws", however, faces fundamental problems, which are studied in this work. In particular, the first "law", on the prevention of harm to humans, does not specify whose harm is meant; the second "law", on the subordination of a robot to a human, does not specify which of the subjects wishing to control the robot should be preferred. Elementary examples show that, given ubiquitous conflicts of interest, both of these abstractly humanistic laws are untenable. Even in the absence of such conflicts, the "prevention of harm" requires a robot to foresee the consequences of actions arbitrarily far into the future, which contradicts the laws of physics and biology. Finally, it is shown that the formulations of all three laws, obviously absurd when applied to inanimate instruments, presuppose that robots have freedom of action, fallaciously endowing them with agency. Such misplaced anthropomorphism, typical of the ongoing discussions of artificial intelligence, is rooted in entrenched linguistic forms. Taken together, the fallacies revealed here form an erroneous conceptual system that threatens social security in the transition to future technological lifestyles.