Information Technologies in Legal Practice: Tele-Law. A Monograph
About the Book
UDC 004:34
BBK 33.81:67.0
И74
Editorial Board:
Ogorodov D. V., Candidate of Legal Sciences, member of the Committee on Artificial Intelligence under the Commission of the Government of the Russian Federation for UNESCO, member of the Expert Council on Improving the Legislative Regulation of Space Activities under the Committee on Economic Policy of the Federation Council of the Federal Assembly of the Russian Federation, legal adviser to the Association of Unmanned Aerial Vehicle Pilots of the Republic of Tatarstan;
Abrosimova E. A., Candidate of Legal Sciences, Associate Professor, Deputy Head and Associate Professor of the S. N. Lebedev Department of Private International and Civil Law at MGIMO University, Director of the Center for Innovative Jurisprudence, Deputy Director of the Information Center of the Hague Conference on Private International Law in Moscow at MGIMO University;
Volkova A. A., Candidate of Legal Sciences, Associate Professor of the S. N. Lebedev Department of Private International and Civil Law at MGIMO University, Deputy Director of the Center for Innovative Jurisprudence of MGIMO University.
Reviewers:
Komlev E. Yu., Candidate of Legal Sciences, Deputy Director for Research of the Law Institute of RUDN University, Associate Professor of the Department of Municipal Law of the Law Institute of RUDN University;
Tatarinov M. K., Candidate of Legal Sciences, Associate Professor of the Department of Criminal Law, Criminal Procedure and Criminalistics at MGIMO University.
Technical editor: E. I. Abrosimova.
The monograph was prepared as part of the research project "Information Technologies in Legal Practice" under the Priority-2030 program. Its aim is to set out professional positions on various aspects of the use of information technologies in legal practice, on the legal regulation of such use, and on the principal difficulties associated with it. The monograph covers contemporary problems of the application of information technologies in jurisprudence, the specifics of the regulation of information technologies in Russia viewed in a comparative perspective, and the principles of regulating relations involving artificial intelligence.
Thanks to the diversity of the problems raised, united by a common theme, the book offers a highly comprehensive view of the issues addressed. It may be useful in teaching courses on the use of information technologies in legal practice and on the legal regulation of relations involving artificial intelligence, as well as in conducting research on related topics.
The legislation is current as of 1 November 2023.
The monograph was prepared under the direction of the Center for Innovative Jurisprudence and N. Yu. Molchakov, Dean of the International Law Faculty of MGIMO University.
© MGIMO University, 2024
© Design. Prospekt LLC, 2024
AUTHORS
Ambrosio A. — partner of the law firm De Berti Jacchia Franchini Forlani (head of the Russian office), lawyer;
Abrosimova E. A. — Deputy Head and Associate Professor of the S. N. Lebedev Department of Private International and Civil Law, Director of the Center for Innovative Jurisprudence, Deputy Director of the Information Center of the Hague Conference on Private International Law in Moscow, MGIMO University; Candidate of Legal Sciences, Associate Professor;
Vvedenskaya A. A. — inspector of the Organizational and Analytical Division of the Directorate for Legal Support and International Cooperation of the Investigative Committee of the Russian Federation;
Volkova A. A. — Associate Professor of the S. N. Lebedev Department of Private International and Civil Law, Deputy Director of the Center for Innovative Jurisprudence, MGIMO University; Candidate of Legal Sciences;
Gimadrislamova O. R. — Associate Professor of the Department of Civil Law, Ufa University of Science and Technology; Candidate of Legal Sciences;
Dmitrieva E. O. — head of the legal service of the Self-Regulatory Interregional Association of Appraisers; expert of the Center for Innovative Jurisprudence, MGIMO University;
Kaminskaya E. I. — Associate Professor of the S. N. Lebedev Department of Private International and Civil Law, MGIMO University; Candidate of Legal Sciences, Associate Professor;
Karpov V. E. — member of the Russian Association for Artificial Intelligence, head of the Robotics Laboratory of the Kurchatov Institute National Research Center; Doctor of Technical Sciences, Associate Professor;
Kostyuk I. V. — legal counsel at CloudPayments, Master of Private Law, expert of the Center for Innovative Jurisprudence, MGIMO University;
Krivelskaya O. V. — Deputy Head and Associate Professor of the Department of Administrative and Financial Law, MGIMO University; Candidate of Legal Sciences;
Minbaleev A. V. — Head of the Department of Information Law and Digital Technologies, Kutafin Moscow State Law University (MSAL); Doctor of Legal Sciences, Associate Professor, expert of the Russian Academy of Sciences;
Ogorodov D. V. — member of the Committee on Artificial Intelligence under the Commission of the Government of the Russian Federation for UNESCO, member of the Expert Council on Improving the Legislative Regulation of Space Activities under the Committee on Economic Policy of the Federation Council of the Russian Federation, legal adviser to the Association of Unmanned Aerial Vehicle Pilots of the Republic of Tatarstan; Candidate of Legal Sciences;
Sagitdinova Z. I. — head of the Career Guidance Office, Associate Professor of the Department of Criminal Law and Procedure, Ufa University of Science and Technology; Candidate of Legal Sciences, Associate Professor;
Safyan E. A. — Senior Lecturer of English Language Department No. 8, MGIMO University;
Sergeycheva N. A. — Senior Lecturer of English Language Department No. 8, MGIMO University;
Sokolova O. V. — Associate Professor of the Department of Administrative and Financial Law, MGIMO University; Candidate of Legal Sciences;
Topadze A. V. — advocate, lecturer of the S. N. Lebedev Department of Private International and Civil Law, MGIMO University; corporate lawyer at the Moscow office of Nextons (ex-Dentons);
Chayka L. N. — Associate Professor of the S. N. Lebedev Department of Private International and Civil Law, MGIMO University; Candidate of Legal Sciences;
Shtodina D. D. — Candidate of Legal Sciences, visiting specialist of the Center for Innovative Jurisprudence, MGIMO University;
Shtodina I. Yu. — Associate Professor of the Department of International Law, MGIMO University; Candidate of Legal Sciences;
Shcherbakov A. A. — Senior Lecturer of the S. N. Lebedev Department of Private International and Civil Law, MGIMO University; Candidate of Legal Sciences;
Yusupova K. I. — Lecturer of English Language Department No. 8, MGIMO University; Master of Laws;
Master's and undergraduate students of the International Law Faculty of MGIMO University: Levkovskiy K. A., Prokofyeva O. V., Lapteva V. V., Meshcheryakov A. V.
Winners of the competition "Legal Aspects of Relations Involving Artificial Intelligence": Karavaeva E. M., Valyaeva V. A., Abramova M. M., Gafarova A. A., Rovnova V. S., Bulochnikov S. Yu., Petrukhina P. I.
INTRODUCTION
In the post-COVID era, the shift of a large share of interactions to remote formats has become a ubiquitous reality, yet effective legislative regulation of such interactions has still not been developed. The position taken by law enforcers during the COVID-19 pandemic was partly forced upon them; it was justified by the circumstances, but it is not fully suited to resolving current issues.
The digital future is already here: a Russian court has recognized an emoji as a valid acceptance[1], a statement of claim can be filed online[2], self-driving cars travel public roads[3], and artificial intelligence[4] and robots[5] take part in medical procedures. The rapid development of digital technologies and the use of AI raises legal and ethical questions, and even frightens some[6]. But regardless of our attitude and our reaction, we must think about accelerating the development of effective regulation for those digital relations to which traditional approaches apply only partially or not at all. Monographs such as this one draw the attention of lawmakers to the most problematic and gap-ridden areas, provide them with scholarly grounding for developing new rules and regulatory approaches, and showcase the most successful solutions found by foreign legislators and courts.
The task of this monograph is to identify the principal difficulties that arise in regulating relations, and resolving disputes, arising from remote legal relationships. Beyond identifying these difficulties, the authors propose balanced ways of resolving them, grounded in the current approaches of the legislator and/or the courts and in the present state of legal technique. The monograph is thus timely and possesses scholarly novelty; it addresses a range of theoretical and practical tasks facing today's researchers and drafters of legal regulation.
So that the monograph would represent the widest possible range of views and opinions, the specialists of the Center for Innovative Jurisprudence of MGIMO University held a series of events that served as an excellent platform for professional discussion of the topics of greatest interest for the monograph.
On 24 April 2023, MGIMO hosted a regular international scientific and practical round table in English devoted to the problems of using artificial intelligence, assessing its legal personality, and allocating liability for its use.
On 21 November 2023, an international scientific and practical conference was held on topical issues of the legal regulation of information technologies. This time the conference focused on the phenomenon of "tele-law". Tele-law has not yet taken shape as a distinct system; it is an umbrella term for the legal regulation of relations involving remote interaction, such as remote provision of services, online arbitration and online courts, distance education, remote work, digital government, telemedicine, and the use of unmanned transport (drones). Because artificial intelligence is often used in carrying out such remote interactions and in ensuring their efficiency and safety, the conference discussion returned to AI as well. It is fair to say that the conference succeeded in articulating the term "tele-law" and defining its normative and substantive content.
Both the April round table and the November conference, and indeed research in this field generally, attracted great interest from scholars representing various universities and from practicing lawyers in different countries. In many universities in Russia and abroad, digital topics are now a fixture of the teaching and research agenda, and practicing lawyers increasingly encounter digital technologies in commercial transactions and everyday consumer matters. Young scholars just beginning their academic careers were also invited to contribute: academic supervisors identified a number of master's and bachelor's students who were offered participation in the monograph on account of their high scholarly ambition and the quality of their written results.
The monograph consists of three sections. The first serves as an overview: its chapters, written in part by foreign specialists, analyze various aspects of the application and legal regulation of information technologies across a range of fields, including education, criminal investigation, dispute resolution, copyright protection, and work with ChatGPT. The purpose of such coverage is to provide the broadest possible, multifaceted survey of the current stage of digitalization across all spheres of legal regulation and human activity. The second section is devoted primarily to innovations in Russia, considered in a comparative perspective so that Russian regulation receives a proper comparative assessment and a forecast of its development. The third section deals entirely with the legal problems of applying artificial intelligence in particular fields and with the allocation of rights and liability in such use. The monograph aims to give a complete picture both of the current state of information technology regulation (its quality, currency, gaps, and unresolved problems) and of the possibilities and areas of application of such technologies to ease the work of the modern lawyer: judge, advocate, in-house counsel, teacher, or law enforcement officer.
[1] Resolution of the Fifteenth Arbitration Court of Appeal of 29 June 2023 No. 15AP-8889/23.
[2] GAS "Pravosudie" system // URL: https://ej.sudrf.ru/ (accessed: 04.11.2023).
[3] Official website of Yandex self-driving taxis. URL: https://sdg.yandex.ru/taxi/yasenevo (accessed: 04.11.2023).
[4] Interfax news service. Artificial intelligence for retinal surgery developed in Samara // URL: https://academia.interfax.ru/ru/news/articles/4531 (accessed: 04.11.2023).
[5] Soon R. H., Yin Z., Dogan M. A. et al. Pangolin-inspired untethered magnetic robot for on-demand biomedical heating applications // Nat Commun 14, 3320 (2023). URL: https://doi.org/10.1038/s41467-023-38689-x.
[6] Open letter calling for a pause in AI development // URL: https://futureoflife.org/open-letter/pause-giant-ai-experiments/ (accessed: 04.11.2023).
Section 1. CONTEMPORARY PROBLEMS OF THE USE OF INFORMATION TECHNOLOGIES IN LEGAL PRACTICE
Ambrosio Armando,
partner of the law firm De Berti Jacchia Franchini Forlani (head of the Russian office), lawyer
THE EUROPEAN LEGISLATION ON ARTIFICIAL INTELLIGENCE: FUTURE PERSPECTIVES AND CHALLENGES
Abstract. The liability landscape concerning artificial intelligence (AI) in the European Union (EU) is significantly influenced by two recently proposed Directives, namely the Product Liability Directive (PLD) and the AI Liability Directive (AILD). While these Directives aim to establish a degree of consistency in terms of liability rules for harm caused by AI, they fall short of fully achieving the EU's objective to provide clear and uniform guidelines for injuries resulting from AI-driven goods and services. Consequently, certain black-box medical AI systems, characterized by their utilization of intricate and opaque reasoning to offer medical decisions and recommendations, give rise to potential liability gaps. Patients may therefore encounter challenges when attempting to hold manufacturers or healthcare providers accountable for injuries incurred from these black-box medical AI systems, irrespective of whether liability is assessed under EU Member States' strict liability or fault-based liability laws. The proposed Directives fail to adequately address these potential liability gaps, thereby posing challenges for manufacturers and healthcare providers in anticipating the liability risks associated with the development and utilization of such potentially beneficial black-box medical AI systems. As a result, it becomes increasingly arduous for these entities to effectively predict and manage the potential legal ramifications arising from the creation and use of these intricate AI systems in the medical field.
Key words: artificial intelligence, risks arising from AI systems, EU acts on AI, risk-based approach, product liability.
In its effort to regulate artificial intelligence (‘AI’) the European Union has taken an innovative approach focusing particularly on the possible risks arising from AI systems, rather than on the legal aspects of the technology. This outlook stems from the conviction that regulating such a rapidly evolving phenomenon may prove to be useless, leading to the adoption of rules that would probably become obsolete soon after their implementation.
On the issue of liability related to the use of AI systems, the European Union has taken a more prudent view, building on the existing rules and trying not to compromise the delicate balance achieved among the different contexts and traditions of the legal systems of the EU Member States.
In this article we will analyze the contents of these newly proposed EU legislative acts on AI, as well as the impact of the different approaches adopted.
A. The AI Act
1. Introduction
On 21 April 2021, the European Commission submitted to the Parliament and the Council a proposal for a regulation laying down harmonized rules on AI (the «AI Act»)7, with a view to promoting the development, use and uptake of secure, trustworthy and ethical AI in the internal market and to enabling AI systems to benefit from the principle of free movement of goods and services within the territory of the European Union8.
The AI Act followed the preliminary insights conducted by the High-Level Expert Group on AI, which resulted, inter alia, in the «Ethics guidelines for trustworthy AI», published on 8 April 2019, as well as the outcome of the public consultation on the «White Paper on Artificial Intelligence» of 19 February 20209.
The objective of the AI Act is to protect health, safety and fundamental rights of individuals without inhibiting the equally important profile of technological innovation, which must, however, be human-centric.
2. Scope of application
The provisions of the AI Act apply if the product is placed on the EU market and, therefore, concern: (a) providers of AI systems within the EU marketplace, regardless of whether they are established in the European Union or in a third country; (b) users of AI systems located within the European Union; (c) providers and users of AI systems located in third countries, if the output produced by the system is used in the European Union10.
From the analysis of the scope of application, it emerges that the AI Act is clearly designed to have extraterritorial reach.
The AI Act, however, does not apply to the public authorities of a third country or to international organizations, if these authorities or organisations use AI systems within the framework of international agreements on law enforcement and judicial cooperation with the European Union or with one or more EU Member States11.
3. Definition
The European legislator is immediately faced with a particularly difficult task, namely to provide a definition of AI.
According to the AI Act, an AI system is software that can, for a given set of human-defined objectives, generate outputs, such as content, predictions, recommendations or decisions, that influence the environments it interacts with12. A specific annex (Annex I) details the techniques and approaches of AI systems so as to ensure legal certainty, and may be amended through delegated acts, outside the legislative process.
The above definition has the ambition of being resistant to future developments and tries to cover all forms of AI and, thus, not only the most recent machine learning ones, but also more traditional systems.
4. The risk-based approach
The approach adopted by the AI Act is based on risk (the so called ‘risk-based approach’), i.e. it is not a question of intervening through upstream control over the technology, but rather of regulating how it is used. The types of risk that the regulation intends to manage are those relating to health, safety and fundamental rights, based on equal treatment between operators, regardless of their origin.
AI risks are classified into four levels, so that the obligations and safeguards for an AI system are proportionate to the level of risk it poses.
At the first level are AI systems of unacceptable risk13, whose marketing and use are prohibited almost absolutely by the AI Act, such as, inter alia, (i) those that use subliminal techniques for the purpose of materially distorting the behaviour of a person in a manner that causes or is likely to cause that person or another person physical or psychological harm; (ii) those which exploit the vulnerabilities of a specific group of persons due to their age or physical or mental disability, in order to materially distort the behaviour of a person belonging to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm; (iii) those which evaluate or rank the trustworthiness of natural persons over a period of time on the basis of their social behaviour or personality traits, with an associated score (so-called 'social scoring'), by public authorities or those acting on their behalf; (iv) the use of 'real-time' remote biometric identification systems in publicly accessible spaces for law enforcement purposes.
At the second level are the high-risk AI systems14, which fall into two categories. The first comprises AI systems that are safety components of products subject to an ex ante conformity assessment by a third party; the second consists of the stand-alone systems listed in Annex III, identified on the basis of criteria such as, among others, the extent of use of the AI application, its intended purpose, the number of people potentially affected, the degree of dependence on its results, and the irreversibility of the harm. To be placed on the market, these products must comply with a number of strict transparency and surveillance requirements.
At the third level are the low-risk systems, which must comply with precise minimum transparency obligations, where the focus is on the user’s awareness of interacting with a machine and consent to its use.
Finally, at the fourth level are the minimum risk systems, which are not subject to any particular obligations under the AI Act, save for compliance with existing legislation and the possibility of ‘self-regulation’ by adhering to voluntary codes of conduct.
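Purely as an illustrative aid, the four-tier structure described above can be summarized as a small lookup; the tier names, the mapping and the function below are hypothetical shorthand for the obligations sketched in this section, not terms defined by the AI Act.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the proposed AI Act (illustrative labels)."""
    UNACCEPTABLE = 1   # prohibited practices (e.g. social scoring)
    HIGH = 2           # Annex III systems and product safety components
    LOW = 3            # minimum transparency obligations
    MINIMAL = 4        # voluntary codes of conduct only

# Hypothetical mapping from tier to the headline regulatory consequence.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "placing on the market prohibited",
    RiskTier.HIGH: "ex ante conformity assessment, registration, monitoring",
    RiskTier.LOW: "inform users they are interacting with an AI system",
    RiskTier.MINIMAL: "no specific obligations; voluntary codes of conduct",
}

def headline_obligation(tier: RiskTier) -> str:
    """Return the headline consequence for a given risk tier."""
    return OBLIGATIONS[tier]
```

The point of the proportionality principle is visible in the mapping itself: the regulatory burden decreases monotonically as the tier number rises.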
5. Burdens in terms of compliance
High-risk AI systems entail multiple obligations that providers must fulfil before they can be placed on the European market and during their life cycle.
Firstly, an ex ante conformity assessment procedure against the requirements of the AI Act is introduced, together with an obligation to register in an EU database specifically created for AI systems. The database will be managed by the European Commission to increase transparency towards the public, to support surveillance, and to strengthen ex post control by the competent authorities.
Secondly, providers will have to equip themselves with an appropriate risk management system, understood as an iterative and continuous verification process that anticipates, assesses and analyses foreseeable risks, based on the analysis of data collected by the post-sale monitoring system. In the same logic of whole-life-cycle control, the results produced by high-risk systems must be verified and tracked throughout the life of the system.
The technical documentation, which must be constantly updated, must also be available before the AI system is placed on the market. In addition, there is the automatic recording of events (so-called log files) that indicate the period of each use of the system (start date and time and end date and time) and identify the individuals involved in verifying the results. The records are kept for the purpose of monitoring the operation of the high-risk AI system, ensuring a level of traceability appropriate to the purpose of the AI system.
Human oversight is guaranteed throughout, implemented by means of human-machine interface tools through which a human can always control (and thus disregard, interrupt or cancel) the activity of the system.
Finally, at the request of a competent national authority, providers will be obliged to demonstrate the conformity of the AI system and to notify it of any serious incident or malfunctioning of the system that may constitute a non-compliance of the EU obligations regarding the protection of fundamental rights.
6. Penalties applicable in case of non-compliance with the AI Act
In the event of non-compliance with the provisions of the AI Act, EU Member States must lay down rules on penalties that are 'effective, proportionate and dissuasive'15.
Penalties are structured as follows: (i) up to EUR 30 million or, if the offender is a company, up to 6% of its total annual worldwide turnover for the preceding financial year, whichever is higher, for engaging in the prohibited practices or for non-compliance with personal data requirements; (ii) up to EUR 20 million or, if the offender is a company, up to 4% of its total annual worldwide turnover for the preceding financial year, whichever is higher, for non-compliance with any other requirement or obligation of the AI Act; (iii) up to EUR 10 million or, if the offender is a company, up to 2% of its total annual worldwide turnover for the preceding financial year, whichever is higher, for supplying incorrect, incomplete or misleading information to the notified bodies and the national competent authorities in response to a request.
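The 'whichever is higher' rule in the penalty tiers above is simple arithmetic and can be sketched as follows; the function name and the simplifying treatment of non-company offenders (fixed cap only) are illustrative assumptions, not provisions of the AI Act.

```python
def maximum_fine(fixed_cap_eur: float, turnover_pct: float,
                 annual_worldwide_turnover_eur: float,
                 is_company: bool) -> float:
    """Upper bound of a fine under the 'whichever is higher' rule.

    For companies, the ceiling is the higher of the fixed cap and the
    given percentage of total annual worldwide turnover for the
    preceding financial year; for other offenders, only the fixed cap
    applies (a simplifying assumption for illustration).
    """
    if not is_company:
        return fixed_cap_eur
    return max(fixed_cap_eur, turnover_pct * annual_worldwide_turnover_eur)

# Tier (i): EUR 30 million or 6% of turnover, whichever is higher.
# For a company with EUR 1 bn turnover, 6% (EUR 60 m) exceeds the cap.
cap = maximum_fine(30_000_000, 0.06, 1_000_000_000, is_company=True)
# cap == 60_000_000
```

For a smaller company (say, EUR 100 million turnover), 6% would be only EUR 6 million, so the EUR 30 million fixed cap governs instead.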
7. Governance
Under the AI Act governance is structured at two levels: European and national.
At the European level, the Artificial Intelligence Board will be established to advise on matters relating to the implementation of the AI Act and to cooperate with the national supervisory authorities and the European Commission16.
At the national level, each EU Member State will establish or designate one or more national supervisory authorities to ensure the application and implementation of the AI Act and to act as market surveillance authorities17. These national supervisory authorities will also represent their country on the Artificial Intelligence Board.
8. Recent amendments introduced by the position of the European Parliament of 14 June 2023
The recent negotiating position approved by the European Parliament does not change the structure of the AI Act. However, it introduces some important changes to the proposal of the European Commission:
i. general principles applicable to all AI systems are introduced, requiring technical robustness and safety, transparency, respect for privacy and data governance, and social and environmental well-being;
ii. an obligation is envisaged for providers and operators of AI systems to adopt measures to ensure that their personnel have an adequate level of knowledge in the subject;
iii. the list of the AI systems classified as unacceptable risk is expanded to include (i) biometric categorization systems based on sensitive characteristics, (ii) predictive policing systems based on subject profiling, location or criminal history, (iii) emotion recognition systems used in law enforcement, border management, workplace and educational institutions, and (iv) non-targeted extraction of biometric data from the internet or CCTV footage to create facial recognition databases;
iv. the category of high-risk AI is also expanded, on account of the potential negative consequences for health, safety, the fundamental rights of individuals or the environment, to include AI systems used to influence voters and the outcome of elections, as well as the recommender systems used by social media platforms;
v. the transparency obligations for AI systems that interact with individuals are further specified. In particular, providers shall ensure that such systems are designed and developed so that the system itself, the provider or the user informs the exposed individual, in a timely, clear and understandable manner, that they are interacting with an AI system, unless this is evident from the circumstances and context of use;
vi. the penalties are tightened. In particular, the use of AI systems prohibited under Article 5 of the AI Act may result in a fine of up to EUR 40 million or, if the offender is a company, up to 7% of its total annual worldwide turnover. If the rules on data governance, transparency and the provision of information to users are not complied with, the penalty is up to EUR 20 million or 4% of total annual worldwide turnover;
vii. finally, so-called regulatory experimentation spaces (regulatory sandboxes) are promoted, in order to test AI systems before they are deployed. In particular, the EU Member States, individually or jointly, shall establish at least one regulatory sandbox for AI at national level, operational at the latest on the day the AI Act enters into force, so that the authorities can guide potential providers towards compliance with the applicable EU legislation while enabling and facilitating the testing and development of innovative AI solutions.
9. Entry into force of the AI Act
The final decision by the European Parliament and the Council, settling the final content of the rules, is expected around mid-2024. In any case, a transition period of two years is envisaged, which should lead to full application of the rules around mid-202618.
B. Liability of AI systems under EU legislation
On 28 September 2022, the European Commission published two proposed Directives addressing the issue of liability for AI, namely a proposed Artificial Intelligence Liability Directive and a proposed Product Liability Directive.
In fact, the European Commission recognizes that the "[c]urrent national liability rules, in particular based on fault, are not suited to handling liability claims for damage caused by AI-enabled products and services"19. In particular, it notes that some cases of AI-caused injury may fall into "compensation gaps" under national law, so that national law may fail to provide victims with a level of liability protection comparable to what they would receive in similar cases not involving AI20.
The European Commission’s concern is that these compensation gaps would ultimately decrease trust in AI and may produce “legal uncertainty” with regard to liability for harm caused by AI. Therefore, the aim of the proposed Directives is to provide a uniform approach at the EU level to ensure liability protection for such injuries21.
1. The AI Liability Directive
The European Commission recognizes that “[t]he specific characteristics of AI, including complexity, autonomy and opacity (the so-called “black box” effect), may make it difficult or prohibitively expensive for victims to identify the liable person and prove the requirements for a successful liability claim22”.
Therefore, on 28 September 2022 the European Commission submitted to the European Parliament and the Council a proposal for a directive (the ‘AI Liability Directive’)23 aimed at adapting the rules on non-contractual civil liability to the AI context.
The AI Liability Directive lays down specific provisions on the proof of damage caused by an AI system and offers new means of protection for the injured party in court proceedings.
Particularly noteworthy is Article 3 of the AI Liability Directive which, although limited to high-risk systems, entrusts the court with the power to order disclosure or preservation of evidence concerning the AI system suspected of having caused harm. If the defendant fails to comply with the court's order, the AI system is presumed to be non-compliant.
Article 4 of the AI Liability Directive then introduces a rebuttable presumption of fault, whereby the national court presumes the existence of a causal link between the defendant’s fault and the output produced by an AI system or the failure of that system to produce an output, if all the following conditions are met: (i) the plaintiff has proved or the court has presumed fault on the part of the defendant, consisting of a failure to comply with a duty of care under EU or national law and directly aimed at avoiding the damage that occurred; (ii) the negligent conduct can reasonably be expected, based on the circumstances of the case, to have affected the output produced/omitted by the AI system; (iii) the plaintiff has proved that the damage was caused by the output produced/omitted by the AI system.
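The cumulative character of the three conditions in Article 4 can be sketched as a simple conjunction; the class and field names below are hypothetical labels for the court's findings, not terminology of the proposed Directive.

```python
from dataclasses import dataclass

@dataclass
class Article4Facts:
    """Findings a national court would make (hypothetical field names)."""
    defendant_fault_established: bool      # breach of a duty of care shown or presumed
    fault_plausibly_affected_output: bool  # conduct could reasonably affect the output
    damage_caused_by_output: bool          # damage traced to the (omitted) output

def causal_link_presumed(facts: Article4Facts) -> bool:
    """The causal link is presumed only if all three conditions hold."""
    return (facts.defendant_fault_established
            and facts.fault_plausibly_affected_output
            and facts.damage_caused_by_output)
```

Because the presumption is rebuttable, a `True` result here would only shift the burden to the defendant; it would not settle liability.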
2. The PL Directive
Along with the AI Liability Directive, on 26 September 2022 a further proposal (the ‘PL Directive’)24 was submitted by the European Commission to the Parliament and the Council aimed at modifying the existing rules on product liability and adapting them to the developments of the digital economy.
The main provisions of the PL Directive can be summarized as follows.
Firstly, the definition of ‘product’ is extended to also include AI systems.
Secondly, the interconnection and machine learning functions of AI systems are also deemed susceptible to ‘defects’, adding them to the list of factors that the courts must take into account when assessing the existence of defects.
Thirdly, the burden of proof remains with the injured party, but presumptions are introduced on the defective nature of the products, on the causal link between defect and damage, or on both.
Finally, equal treatment between EU and non-EU producers is promoted, and consumers who suffer harm caused by unsafe imported products will be able to claim compensation from the importer or the producer’s representative in the European Union.
C. Comparison between the AI Act and the Directives
It is worth mentioning that the AI Act is a proposed regulation, which will be directly applicable in all EU Member States, while the PL Directive and the AI Liability Directive are proposed directives, which must first be transposed into national law25. Once transposed, the two proposed Directives will operate in conjunction with the existing national liability laws of the EU Member States that govern liability for harm caused by AI systems.
Furthermore, while the PL Directive generally prohibits EU Member States from adopting national laws that are either more or less restrictive than those set forth in the PL Directive26, the AI Liability Directive generally allows them to adopt stricter national rules to govern non-contractual liability for AI-caused damage27. It follows that EU Member States will retain a considerable degree of discretion in developing national rules that govern liability for harm caused by AI systems.
D. Conclusions
The benefits and opportunities deriving from the use of AI are evident; however, the technology also brings with it new challenges and risks. Faced with these concerns, in recent years a trend has emerged at the political level in favour of properly regulating AI technology, yet the very nature of the technology makes regulation very complicated, or even futile, given the rapidly changing environment.
The newly proposed legislative acts examined above show the continued efforts and progress of the European Union to govern the AI environment and to adopt a uniform approach to regulating the development and risks deriving from AI technologies in the EU marketplace.
Having said that, it seems that the approach adopted by the EU legislator with the AI Act goes in the right direction, being focused on the possible risks linked to the use of the technology and aiming at defining the legal, ethical and political borders to its use.
As regards the liability arising from AI systems, while the proposed Directives provide some uniform liability rules for harm caused by AI systems, it seems that they fail to fully accomplish the goal set forth by the European Union of providing clarity and uniformity for liability for injuries caused by AI-driven goods and services.
[7] European Commission. Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative Acts (AI Act). COM(2021) 206 final (European Commission, 2021).
[8] Please see recital 5 of the AI Act.
[9] European Commission. White Paper. On Artificial Intelligence — A European approach to excellence and trust. COM(2020) 65 final (European Commission, 2020).
[10] Please see article 2(1) of the AI Act.
[11] Please see article 2(4) of the AI Act.
[12] Article 3(1) of the AI Act provides the following definition of AI: “‘artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”.
[13] Please see article 5 of the AI Act.
[14] Please see article 6 of the AI Act.
[15] Please see article 71 of the AI Act.
[16] Please see article 56 of the AI Act.
[17] Please see article 59 of the AI Act.
[18] Please see article 85(2) of the AI Act.
[19] Please see the Explanatory Memorandum of the AI Liability Directive.
[20] Please see recital 4 of the AI Liability Directive.
[21] Please see recitals 4, 6–8 of the AI Liability Directive.
[22] Please see the Explanatory Memorandum of the AI Liability Directive.
[23] European Commission. Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive) (AILD). COM(2022) 496 final (European Commission, 2022).
[24] European Commission. Proposal for a Directive of the European Parliament and of the Council on liability for defective products (PLD). COM(2022) 495 final (European Commission, 2022).
[25] Please see article 288 of the Consolidated version of the Treaty on the Functioning of the European Union [2012] OJ C326/47 (TFEU) (European Union, 2012).
[26] Please see article 3 of the PL Directive.
[27] Please see article 1(4) and recital 11 of the AI Liability Directive.
Юсупова Камила Ильмировна,
преподаватель кафедры английского языка № 8 МГИМО МИД России, магистр юриспруденции
APPLICATION OF FAIR USE DOCTRINE TO AI ISSUES
Abstract. The article focuses on the application of the fair use doctrine to works created by artificial intelligence. A general overview of the concept of fair use and the applicability of its specific criteria to the regulation of relations involving works created with the help of artificial intelligence is presented. The article summarises the fundamental case law that has shaped the modern approach to the concept of fair use. The main points of view regarding the applicability of the fair use concept to AI-generated content are analysed, taking into account the emerging jurisprudence.
Key words: copyright law, copyright, artificial intelligence, generative artificial intelligence, fair use, copyrightability, subject matter of copyright, author, work of authorship, derivative work, copyright protection.
Copyright law is a sphere which is constantly being challenged by the rise of new technologies. Historically, copyright law was developed to protect literary works, so it has always faced the need to develop adequate approaches to protect newly emerging objects, such as movies, computer software, photography and others. From the vantage point of 2023, it is hard to imagine that copyright law once had to deal with the question of whether photographs could be recognised as copyrightable at all. Surprisingly, the position was initially taken that a photograph was not a protectable work, as it merely captured the reality around it. However, after a certain period of time, this rigid position was abandoned, and subsequently both the practice of various jurisdictions and the Berne Convention recognised photography as a copyrightable work of art28.
Nowadays, with technologies developing even faster, copyright law is constantly facing new challenges. One of the most important of these is the protection of copyright in the use of generative artificial intelligence (“Generative AI”). Generative AI is usually described as software used to create diverse types of content, such as pictures, videos, texts, audio materials and 3D models. By learning and training on pre-existing patterns (which may be derived from copyrighted works available on the Internet and are commonly referred to as “Input Works”), Generative AI uses the information gathered to generate novel and unique works (commonly referred to as “Output Works”).
The main problem is that AI can process content to varying degrees. It can produce a rather abstract result, which is unlikely to infringe on anyone’s copyright, or it can produce a very distinct picture, elements of which can be qualified as copyright infringement. For instance, the latest generation of the Midjourney software allows a user not only to specify an abstract prompt (e.g., “generate a still life with sunflowers in a vase”), but also to refine the prompt up to adding the style of a particular author. Midjourney is already well trained and recognises the styles of various artists, directors, photographers and architects, so a person can in fact request something very specific, referring either to a particular creator or to one of his works (e.g., unlike the example above, the prompt could read: “generate a still life with sunflowers in a vase in the style of Van Gogh”). In the latter case, the likelihood of copyright infringement is much higher. Thus, in some situations Generative AI may substantially transform existing material (and presumably thereby create a derivative work); in other situations it may create a fundamentally new work that contains only stylistic features of other works of authorship (and thereby create a new original work); or it may rework input works so insignificantly that the result would constitute copyright infringement.
As long as there are no particular statutory provisions available to govern AI-oriented copyright law issues, it seems that we need to find some abstract norms and standards that might be suitable for the problem in question.
In view of the aforesaid, it seems appropriate for the author to refer to the doctrine of fair use, which is widely applied in US copyright law. This choice is justified by the sufficient degree of abstractness inherent in the concept of fair use. This feature provides a flexible approach to resolving various intractable copyright issues.
In general, the fair use doctrine can be defined as an exception to copyright aimed at balancing the protection that copyright law grants owners with the promotion of creativity, free expression and education. It means that under particular circumstances a person is allowed to use copyright-protected material without permission from the copyright holder. Under the fair use doctrine, a person may use copyrighted materials for purposes of criticism, comment, news reporting, teaching, scholarship, or research. Fair use is a mixed question of law and fact, so the finding of whether something complies with the fair use standard is case-specific29. It follows that there are no areas where fair use can be presumed. In terms of procedure, fair use is usually described as an affirmative defense, meaning that a defendant in a copyright infringement case has a right to invoke it30. In deciding fair use cases, courts must weigh the following four statutory factors:
1. The purpose and character of the use, including whether it is commercial, transformative and non-expressive.
2. The nature of the used copyrighted work.
3. The amount and substantiality of the used fragment of copyrighted work.
4. The consequences and effect of the use on the potential market for or value of the copyrighted material31.
The purpose of the first factor, sometimes also called “the transformative factor”, is to determine whether the material of a copyrighted work was used to create something completely new or merely copied verbatim. In the case of AI, this is where it is necessary to determine the extent to which the input content has been reprocessed. In reality, determining what is transformative can be quite challenging. In evaluating the purpose and character of use, a court will usually consider whether the work created embodies a new meaning, message or expression. This is evidenced by one of the landmark cases, Campbell v. Acuff-Rose Music, in which the court held that for a use to be transformative, it must add to the original “with a further purpose or different character, altering the first with new expression, meaning, or message”32. Since that case, the transformative test has become arguably the most important criterion in determining the applicability of the fair use concept33. The weight of this criterion plays a decisive role in most disputes, although, of course, it cannot be said that the court will not evaluate other factors as well. Therefore, we can assume that the concept of fair use may potentially be applicable to some works created by AI, if such a work indeed turns out to be transformative, embodying a completely new message and idea in contrast to the original work. Moreover, it is worth noting that courts have further extended the construction of the transformative test. Under this broader construction, the newly transformed work is not required to directly link its meaning or purpose to the original copyrighted work, and may therefore copy even the entire original work if it does so for a completely new purpose34. For example, in Bill Graham Archives v. Dorling Kindersley Ltd, the authors of a biography of a musical group were sued for copyright infringement for reproducing old concert posters of the group in their book. The authors fully reproduced every poster image they used, placing them in chronological order in the biography. Despite the use of the poster images in their entirety, the court upheld the fair use defense because “the purpose in using the copyrighted images at issue is plainly different from the original purpose for which they were created”35. An extended construction of the transformative criterion is highly likely to allow the application of fair use to AI-created works. The main requirement is that such a work must be genuinely new in its meaning, content and purpose.
The second factor deals with the characteristics of the used work. Here, the court will usually assess whether the copied material is published, commercially available, and creative. When interpreting this test, courts will look first to whether the copied work was of a factual nature. If the work is factual (for example, a news article, biography or technical paper), it is likely that the use of such material will be considered fair. On the contrary, if the nature of the work is creative rather than factual, its use will probably be found unfair. In addition, the use of an unpublished work is less likely to be considered fair, and vice versa. Many researchers have noted that assessing the nature of the copyrighted work is often an insignificant part of the fair use analysis, as the outcome is most often determined through the consideration and examination of the remaining three factors36. As for the impact of this test on issues related to works generated by AI, it would be quite difficult to determine what kind of input content was used, given the sheer volume of content an AI system processes on a regular basis.
The third criterion focuses on the amount and substantiality of the copyrighted material used. Under this test, the smaller the amount of copied material, the more likely it is that the copying will be justified as fair use. However, the amount is not always a decisive factor: even if only a small part of the work has been copied, but that part is considered central to the work as a whole, a court is less likely to recognise such use as fair. Application of this test is also closely related to the transformative factor discussed above. As a matter of fact, each dispute is examined on a case-by-case basis. This is why in some contexts, such as use for purposes of criticism or parody, copying the entire work may be permissible. Photographs and works of visual art are often controversial, as the user usually requires the full image, and in some cases this would not qualify as fair use. On the other hand, in one of the landmark cases, Perfect 10 v. Google, Inc., the court held that a “thumbnail” or low-resolution version of an image is a smaller “quantity”. Such a version of an image may be well suited for educational or research purposes and therefore constitute fair use37. It is important to note that in some circumstances the amount of copyrighted material used can be so insignificant (“de minimis”) that the court allows it without even analyzing the applicability of the fair use doctrine. If we attempt to consider, in light of this criterion, the issues surrounding the generation of works through AI, everything seems rather ambiguous. As mentioned earlier, Generative AI can either use a substantial amount of the original author’s work, creating a secondary (derivative) output work, or use a minimal amount of that work, creating an entirely new object. It seems that this test will hardly be decisive in determining the applicability of the fair use concept to works created by AI.
Another fair use factor is the effect of the use of copyrighted material on the potential market. Applying this factor to a particular case, the court will assess whether a use deprives the copyright owner of income or undermines a new or potential market for the copyrighted material. Prior to the dominance of the transformative test, it was the criterion of impact on the potential market that was central to the interpretation and application of the fair use concept. Even today this criterion has not lost its vast importance: courts tend to give it great weight, and in practice it is the factor on which findings of fair use are most often challenged.
Taking into account all relevant factors, several approaches have been formulated on whether or not the fair use doctrine can be applied to the content created by artificial intelligence.
The first approach appears to be rather narrow. Some authors refer to this approach as «fair use minimalism» concept38. Under this perspective, a work created by Generative AI is defined as a derivative one. The adherents of this approach consider that all the output works lack originality and creativity and thus cannot be protected by a fair use principle. The basic idea is that Generative AI is trained on copyrighted material and such training cannot be considered an authorized use. It is argued that to create quality output works, whether texts, images, videos, or musical works, Generative AI must examine and use as many original works as possible, including their central parts. In such circumstances there can be no compliance with the fair use doctrine, thus it is not applicable.
The author finds this approach very restrictive and even rigid to a certain extent. Applying it would imply that the generation of any work is an infringement of the copyright of rightsholders. As we have seen earlier, courts can interpret the fair use criteria quite broadly; here, the presumed interpretation is too narrow. This approach also implies that the author of each original work would have the right to receive compensation for the unauthorized use of his work, which immediately raises the issue of liability. To implement it, it would seem necessary to develop some form of remittance system to compensate the authors of the original works and to determine whether a user who creates the output work should be liable for vicarious copyright infringement39. Such an approach would be highly burdensome both for courts and state authorities and for the development of the creative possibilities of artificial intelligence as a technology, as it fully denies the creative potential of AI and treats all generated works as infringing copyright. It is doubtful whether this approach can be adopted in its purest form.
There is another approach, highly favourable to AI technology providers, that is also called the “fair use maximalism” concept40. Its proponents are artificial intelligence technology providers, and this is not surprising: according to this perspective, any work generated by artificial intelligence is covered by fair use. This is because each generated piece is unique and created through rather profound transformation and modification processes. AI technologies are seen as tools, like a paintbrush, which merely help to create unique works. In this sense, the rights and benefits of the AI providers take priority over the rights of the original copyright owners. While treating AI as a tool is a reasonable idea to a certain extent, it seems utopian and unfair to completely deny the ability of authors of original works to rely on their copyright protection. This perspective seems unbalanced and could lead to many controversies in practice.
In addition to these categorical approaches, we can refer to more balanced points of view, which definitely deserve consideration. The question of the applicability of the fair use doctrine to works created by artificial intelligence is very complex and cannot be resolved unequivocally. Since the concept of fair use is quite broad per se, and the possibility of its application is assessed by courts on a case-by-case basis, a differentiated approach seems the most appropriate. The differentiated approach rests on the notion that every work generated by AI should be examined separately. In this sense, only works generated from diverse sources that do not reproduce the core expression of any source material should satisfy the fair use criteria. According to some authors, “this approach may in some cases support scenarios where most of the input data is owned or controlled by the rights holder, with a small portion of third-party inputs, to ensure diversity but largely retain the style of the owned inputs”41. Such an approach would certainly demand that AI technology providers develop mechanisms to track the amount of usage of certain content, which could become quite useful for authors.
There is no doubt that the approaches described above are purely theoretical assumptions regarding the applicability of the fair use doctrine to AI-generated content. In terms of practice, the field in question is still underdeveloped. However, a potentially landmark case has recently been initiated: Getty Images v. Stability AI, commenced in February 202342. According to the complaint, the plaintiff (Getty Images) claims that the defendant (Stability AI) unlawfully used the plaintiff’s copyrighted images, associated text, and metadata to train its artificial intelligence tool, Stable Diffusion, to convert those materials into new images. The plaintiff states that the defendant infringes its copyrights on an enormous scale and exploits its resources for its commercial benefit. The complaint also indicates that Getty Images’ content is extremely valuable to the datasets used to train Stable Diffusion and that, by using this content, Stability AI commercially competes with Getty Images. The plaintiff claims that Stable Diffusion is able to combine what it has learned to generate artificial images, but only because it was trained on proprietary content belonging to Getty Images. Another of the plaintiff’s arguments is that while Getty Images licenses its content to responsible actors, Stability AI has taken that same content without any permission, depriving Getty Images and its contributors of fair compensation, and without providing adequate protections for the privacy and dignity interests of the individuals depicted43. In light of all the circumstances, it will be quite interesting to see what the court decides and how the fair use doctrine is interpreted in this particular case.
It is worth mentioning that according to the attorneys of the plaintiff, the main aim of the claim is to create a new legal status quo (presumably one with favorable licensing terms for Getty Images) and to make generative models respect intellectual property as it happened with Napster and Spotify platforms44. This case has not yet been resolved, but it has the clear potential to set a crucial precedent for determining the applicability of the concept of fair use and to change the whole copyright landscape in the sphere of AI.
Copyright issues in the use of AI are still in the early stages of development. Of course, legislative regulation and practice cannot keep pace with the technological breakthrough of the present. Many issues are yet to be resolved, and some of them will definitely appear later, with the development and improvement of currently existing technologies. Broad and abstract concepts, such as the fair use doctrine, are particularly useful in governing such issues. But even with the flexibility provided by the fair use concept, many questions are still difficult to answer. In the near future, courts will be hearing more and more cases related to copyright protection in the use of AI. Interpreting the concept of fair use in practice may indeed play an important role in enabling the use of Generative AI technologies around the world. If the United States can develop a consistent approach to regulating copyright issues in the sphere of AI, the generative AI industry will develop harmoniously and consistently with the interests of authors of works. And for other countries around the world, the U. S. experience, especially in applying the Fair Use concept, can provide a starting point and a benchmark for adopting appropriate and adequate regulation that would be consistent with the general spirit of the law in those countries.
References
1. Berne Convention for the Protection of Literary and Artistic Works (as amended on September 28, 1979).
2. U.S. Copyright Act of 1976, 17. U.S.C.
3. Bill Graham Archives v. Dorling Kindersley Ltd., 448 F.3d 605 (2d Cir. 2006).
4. Campbell v. Acuff-Rose Music, Inc., 510 U.S. 569 (1994).
5. Getty Images (US), Inc. v. Stability AI, Inc., 1:23-cv-00135, (D. Del.).
6. The Routledge Companion to Copyright and Creativity in the 21st Century / M. Bogre, N. Wolff (ed.). Routledge, 2020.
7. Legal Guide on Fair Use. Digital Media Law Project (DMLP).
8. Loren L. P. Fair Use: An Affirmative Defense? 90 Wash. Law Rev. 685, 2015.
9. Perfect 10 v. Google, Inc., CV 04-9484 AHM (SHx) (C.D. Cal. Feb. 21, 2006).
10. Rodriguez Maffioli D. Copyright in Generative AI training: Balancing Fair Use through Standardization and Transparency // Available at SSRN 4579322. 2023.
11. Soiffer A., Jain A. Copyright Fair Use Regulatory Approaches in AI Content Generation. Tech Policy Press, 2023.
12. Tomassian L. Transforming the Fair Use Landscape by Defining the Transformative Factor. S. Cal. L. Rev. 90 (2016): 1329.
13. Vincent J. Getty Images is suing the creators of AI art tool Stable Diffusion for scraping its content // The Verge. 2023.
[28] Berne Convention for the Protection of Literary and Artistic Works (as amended on September 28, 1979).
[29] The Routledge Companion to Copyright and Creativity in the 21st Century / M. Bogre, N. Wolff (ed.). Routledge, 2020.
[30] Loren L. P. Fair Use: An Affirmative Defense? 90 Wash. Law Rev. 685, 2015.
[31] 17 U.S. Code § 107 — Limitations on exclusive rights: Fair use.
[32] Campbell v. Acuff-Rose Music, Inc., 510 U.S. 569 (1994).
[33] Tomassian L. Transforming the Fair Use Landscape by Defining the Transformative Factor. S. Cal. L. Rev. 90 (2016): 1329.
[34] Id.
[35] Bill Graham Archives v. Dorling Kindersley Ltd., 448 F.3d 605 (2d Cir. 2006).
[36] Legal Guide on Fair Use. Digital Media Law Project (DMLP).
[37] Perfect 10 v. Google, Inc., CV 04-9484 AHM (SHx) (C.D. Cal. Feb. 21, 2006).
[38] Soiffer A., Jain A. Copyright Fair Use Regulatory Approaches in AI Content Generation. Tech Policy Press, 2023.
[39] Id.
[40] Soiffer A., Jain A. Copyright Fair Use Regulatory Approaches in AI Content Generation. Tech Policy Press, 2023.
[41] Id.
[42] Getty Images (US), Inc. v. Stability AI, Inc., 1:23-cv-00135, (D. Del.).
[43] Id.
[44] Vincent J. Getty Images is suing the creators of AI art tool Stable Diffusion for scraping its content // The Verge. 2023.
Волкова Анна Алексеевна,
старший преподаватель кафедры международного частного и гражданского права им. С. Н. Лебедева, заместитель директора Центра инновационной юриспруденции МГИМО МИД России, кандидат юридических наук
LIMITATIONS TO COPYRIGHT IN THE INFORMATION SOCIETY: SCIENTIFIC AND EDUCATIONAL ASPECTS
Abstract. The article analyzes the possibilities and challenges regarding the application of copyright exceptions and limitations in the information society, particularly in the context of the Internet. Specific emphasis is placed on exceptions in the educational and scientific domains, which are considered essential for promoting knowledge and serving the interests of society. The author adopts a comparative approach, examining regulations from various jurisdictions, including the USA, Germany, and Russia. The objective is to determine whether traditional legal mechanisms can be effectively applied to modern online relationships, given that certain legislation lags behind current trends, such as online libraries and digital education. In conclusion, it is evident that effective exceptions for online libraries are lacking, although exceptions in the educational sphere remain contemporary and relevant in the digital society.
Key words: copyright; exceptions and limitations; online libraries; online education; research exceptions.
Introduction
In the course of modern changes, we all face a large stream of digital information. The lockdown that we faced several years ago made us long for communication and activities online. Many cultural institutions worldwide were able to present media content in the digital sphere. This raises the question of the legal status of such availability.
In general, all types of use can be divided into two categories: authorized and unauthorized. The first covers various license agreements (exclusive, non-exclusive, and CC BY licenses). The second may include both unlawful use and fair dealing clauses. In this paper we will concentrate on the latter subcategory.
Fair dealing relates to situations where, in the absence of the author’s permission, the use of a work can be excused. There are different legal mechanisms for this throughout the world, but the common basis for them is the concept of exceptions and limitations to copyright.
Originally this mechanism was developed for traditional (“offline”) use of content. The COVID-19 pandemic made all of us realize once again how deeply the Internet is embedded in our lives. Due to the pandemic, many activities were transferred to the digital sphere, and they still maintain their digital existence. The present paper attempts to analyze whether existing copyright exceptions meet the modern technological challenges of our society, that is, whether they can be applied to online uses of copyrighted works.
In recent years there has been growing interest in the copyright sphere, probably due to the new challenges and the new level of responsibility facing society.
Most articles concentrate on specific aspects of copyright exceptions. For example, Marketa Trimble outlines the problems that arise specifically with copyright exceptions in transnational relations45. M. D. Papadopoulou examines the educational sphere46.
There are, nevertheless, some more fundamental studies. Armin Talke examines the German approach to libraries and copyright47. Asta Tūbaitė-Stalauskienė outlines the EU provisions on copyright exceptions and limitations, supporting her analysis with ECJ case-law, but her paper covers the period before the implementation of the new 2019 EU Directive48.
There are also numerous copyright policies and copyright notices on the Internet connected with the topic. They are not of a genuinely scientific legal character, but they are practically useful for ordinary users and reflect the urgency of the topic.
This paper relies mainly on the comparative method. The jurisdictions under scrutiny are primarily the USA (as a country that applies the fair use doctrine in addition to a set of exceptions and limitations), Germany (as an example of a national implementation of European legislation) and Russia (as a more conventional jurisdiction where reforms are necessary in some respects).
The paper aims to give a critical view of the legal material as well as to identify the principles underlying specific provisions. Not only do we analyze the existing regulation in the sphere, but we also try to check whether this dated regulation is adapted to the modern realities of a widespread information society.
The research questions are the following. Can traditional copyright exceptions be applied to the digital sphere? Are there any advanced legal exceptions to copyright designed specifically for online relations? Is it time to review copyright laws in line with technological achievements?
It seems that the fields under analysis are at different levels of adaptation to modern times, and, frankly speaking, it is difficult to say which one is ahead and which one lags behind.
The library sphere seems to be an outsider from the point of view of copyright exceptions and limitations. Online libraries as a concept are not generally covered by the exceptions, which means that other mechanisms for distributing content had to be invented. And invented they were: online libraries and online databases now function on a license basis, either an ordinary paid one or one free of charge (the so-called open license). Online libraries are concentrated in the hands of right holders or their agents, namely large publishing houses that can afford the financial burden of online access. Ordinary state libraries rarely have a developed online content environment, as there is not enough funding to pay all the fees to the right holders. Later in the paper we assess the role of technological measures in this struggle for a balance between the interests of society and those of the right holders.
The educational sphere, for its part, still relies on the system of copyright limitations and exceptions, and it is likely to remain covered by these mechanisms in the future, as education is a noble mission and the interests of society as a whole are likely to be taken into consideration. In some jurisdictions online education is already regulated at a decent level; at the very least, quotation and illustration remain possible in the digital environment. Moreover, the use of hyperlinks to a lawfully published work does not constitute use of the work and is therefore not an infringement. The next step is to provide restricting criteria to distinguish safe educational online access from unsafe or open access; fortunately, some approaches already exist in foreign jurisdictions.
Legal basis for copyright exceptions and limitations
The copyright sphere is far from uniform when we compare national laws. There is, of course, the Berne Convention (Berne, 1886)49, but it sets only one mandatory exception, the quotation. Some other exceptions mentioned in the Convention are left to the discretion of states. The great achievement of the act is that it sets out the so-called three-step test for checking whether an exception is justified: an exception must be confined to certain special cases; it must not conflict with the normal exploitation of the work; and it must not unreasonably prejudice the legitimate interests of the author. All national exceptions must meet these criteria.
Other acts addressing copyright exceptions, such as TRIPS (Art. 13)50 and the WIPO Copyright Treaty (WCT)51 (Art. 10), fall outside the scope of this paper.
Even within the EU there is still no absolute uniformity, although the EU has a comparatively long history of dealing with exceptions and limitations. One act worth mentioning is the InfoSoc Directive, which first introduced a limited list of possible exceptions for EU member states52. There then followed a series of directives setting either mandatory lists of exceptions or exhaustive, optional ones. All this resulted in the new Directive 2019/79053, one of the goals of which is to adapt copyright to digital and cross-border environments. Among other things, the Directive addresses online education, licenses for out-of-commerce works and some aspects of the activities of cultural heritage institutions.
Every member state has developed its own copyright legislation, more or less adjusted to the EU directives. The most recent Directive sets an exhaustive but not mandatory list of exceptions, which means that the copyright laws of EU members can vary within this long list. For the purposes of this paper, we address Germany as an advanced example among EU jurisdictions.
Russia and the USA are signatories to the Berne Convention (as is Germany), but they are obviously not bound by EU legislation. Additionally, the United States is a jurisdiction that not only recognizes copyright exceptions but also employs the fair use doctrine. This doctrine permits the use of copyrighted works without the authors’ explicit permission, but its application is determined on a case-by-case basis rather than relying on specific statutory permissions. As a result, we have three jurisdictions with different approaches to the problem.
Libraries and scientific research
In the realm of library operations, copyright exceptions primarily concern copying, both by the library itself and by its patrons, as well as the lending of copies to readers. We shall not discuss copies made for the preservation of material sources in this paper. Of particular interest is the societal role libraries perform within the information society. During certain periods of lockdown, libraries were closed, resulting in the cessation of free access to a vast amount of information. Setting aside entertainment literature, numerous researchers faced challenges in sustaining their professional studies. Furthermore, scientific papers possess a distinct nature compared to conventional “entertainment” literature: they are intended to convey ideas and contribute to “public good” objectives, with moral considerations taking precedence over economic aspects54.
Given these circumstances, one might assume that the distribution of knowledge during unique situations would eliminate barriers and expand the scope of copyright exceptions to encompass the online domain. However, this assumption does not hold true. The critical questions revolve around whether current copyright exceptions are applicable to online libraries, including the permissibility of lending e-books, the feasibility of digitizing physical copies for online access, and the extent to which online visitors are permitted to copy materials, if allowed at all.
Under Russian law, some libraries are allowed to lend copies to readers online, but only as long as these are copies in special formats for visually impaired persons (print-outs in Braille, for example). Ordinary libraries are allowed to lend electronic copies of works only on their premises, with visitors prohibited from copying them further (Art. 1275 of the Russian Civil Code55; the same position is taken by the courts56). However, some electronic resources are still available online, for example, those of the Russian State Library57. Mostly these are resources provided by particular publishing houses, i.e., organized by right holders. In other cases, such websites have to ask for permission for online use, that is, to enter into license agreements. It is worth remembering that this rule applies to copyrighted works and not to those in the public domain.
Regarding the prospect of digitizing works available in their collections, libraries face significant limitations. They are permitted to create isolated (single) copies solely under specific circumstances: when existing copies are exceedingly rare, deteriorated, or defective, or when lending the original copy to visitors might result in its loss or damage. These new copies, which may include digital copies, are ostensibly intended for lending within the library’s premises, as previously mentioned. Consequently, the act of digitization is more of an infrequent occurrence rather than a widespread practice. Furthermore, in most instances, such deteriorated copies are works that have already entered the public domain, allowing them to be potentially shared online.
Scientific purposes are also mentioned by the legislator in the context of exceptions. Libraries are authorized to produce isolated (single) copies at the request of visitors. The law does not specify a particular percentage of permissible reproduction but instead refers to small sections or portions of works. In practice, libraries generally permit visitors to make copies themselves, even though such an allowance is not explicitly stipulated by the legislation.
German libraries are likewise allowed to lend copies on the basis of a copyright exception (§ 17 (2) UrhG)58. The situation with online libraries59 is quite similar: they are not covered by copyright exceptions. Again, this means that libraries have to enter into license agreements with right holders unless the work has passed into the public domain.
The right to digitize content also exists and has recently been discussed in connection with case C-117/13 brought before the CJEU60. It turns out that a library has the right to digitize items from its collections if no existing license agreement with the publisher states otherwise. The resulting digital copies may be lent to visitors at terminals on the premises of the library. The right of visitors to make copies also covers such digital content (including print-outs and copies to a USB stick), and the admissible amount is set quite specifically in German law: one may reproduce up to 10 percent of an average work, or small-scale works in their entirety (§ 60e UrhG). Although these regulations are new, they still fall behind modern needs, as restricting access to the library premises is already obsolete, and the pandemic proved this quite clearly61.
In the USA the power of libraries to lend copies to the public is based not on an exception but on the first sale doctrine: rights to a copy are separate from copyright in the work, so those who lawfully own a copy of a work can share, lend or sell those copies (17 U. S. C. § 109(a))62.
Reproduction, however, is covered by certain exceptions. Under US legislation, libraries are allowed to copy works for the sake of preservation. A noncommercial type of use is an absolute prerequisite for library reproduction exceptions. Even the number of copies is determined: the law allows three copies for the preservation of unpublished works and three copies for the replacement of published works
...