The European Union on the path to regulating AI

The European Commission (EC) aims to establish the European Digital Single Market by mid-2021 and to start competing in earnest with the United States and China. To that end, the EC has just proposed a new outline of its strategy for the development of Artificial Intelligence (AI) in the EU.

From the EU perspective, these actions are supposed to guarantee that European businesses will be able to compete on an equal footing with the two major global players, while at the same time ensuring adherence to European ethical standards.

For the time being, Europe is clearly lagging behind the US and China in the development of AI technology, the global market for which could be worth USD13 trillion in just 10 years. This can be seen in many areas: the value of investments, the number of patents, and the absence of global digital champions comparable to Apple, Amazon, Microsoft, Google, Tencent, and Alibaba. Moreover, European digital technology assets and resources remain scattered and do not add up to the critical mass which would enable the EU to compete on an equal footing with its global rivals.

The work on the establishment of the Digital Single Market has been under way for several years. The project is crucial for achieving global competitiveness, because it will lead to the creation of an extremely attractive digital market, perhaps the largest in the world in terms of value. This, in turn, could stop the migration of technology talent to other countries. In order to achieve this goal, it will be necessary to remove barriers to cross-border data flows as well as to implement 5G technology.

The task of integrating the activities of EU countries in the area of AI is becoming even more urgent in light of Brexit, as the United Kingdom is a leading European hub of AI technology, with more than a thousand companies and research centers, such as the Alan Turing Institute. However, Brexit does not necessarily mean that mutual cooperation on AI projects will end.

China and the United States in the lead

Meanwhile, Europe’s competitors are implementing their own AI development strategies. China has made AI technology a key component of its “Made in China 2025” initiative, whose implementation is progressing in the face of increasingly strong resistance from the US. The US President Donald Trump is trying to curb the expansion of Chinese technology giants through various administrative measures.

The Artificial Intelligence strategy of the United States was announced in February of last year. However, the policy objectives set out in the document are rather broad: the strategy provides for financial support at the federal level, emphasizes the necessity of establishing American standards in the field of AI, and points to the need for universal education in this area, as well as the removal of barriers to the development of AI technology. The shape of the strategy at the implementation stage remains unclear, however, as no specific timetable was provided.

National strategies for the implementation of AI technology have thus far been adopted by more than half of the EU member states. The first such document was adopted in December 2017 in Finland. France, Germany, and Sweden released their own strategies in 2018. Poland announced the guidelines for its own strategy in the document entitled Artificial Intelligence Development Policy, which was published a year ago.

Each EU country has its own specific path for the development of AI. The strategy introduced in Germany covers 12 fields of AI development. Its main objective is to achieve the status of European leader in the field of research through the development of European clusters of innovation, technology transfer, and the creation of incentives for startup founders and investors. Public funds of half a billion euros per year are to be allocated to these tasks.

The French approach focuses on the role of public research, training, innovation, and resources in the areas of health care, environment, transport and mobility, as well as defense and security. The Estonian AI strategy is basically an extension of the country’s existing policy of digital public services and e-government – Estonia is seen as a global leader in these fields. However, the individual efforts of the European Union member states won’t be enough to enable them to compete on an equal footing with the global powers.

European values as a priority in AI regulations

The European AI strategy emphasizes the importance of trust and credibility as the foundations of EU regulations. Simply put, the adopted definition of Artificial Intelligence refers to IT systems that exhibit intelligent behavior by analyzing data, and that are able to make decisions with a certain, even if limited, degree of autonomy.

The initial guidelines concerning the ethical principles to be applied in the development of AI were formulated by the High-Level Expert Group on Artificial Intelligence (AI HLEG) in December 2018. Following extensive consultations within the European AI Alliance, the guidelines were revised and published in April 2019. They set out seven key requirements: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability.

These principles became the basis for the preparation of instructions for developers and the entire IT environment (the Assessment List for Trustworthy Artificial Intelligence, ALTAI). The EC, along with the Member States, also adopted a plan for coordinated cooperation in the development of AI in Europe. The EC emphasized that only joint efforts based on shared values will give Europeans a chance to become global leaders in the field of AI. The objective is primarily to determine the values and principles which should be embedded in the development of AI, rather than to prescribe specific technologies and methods of their development in the individual countries.

There are also some exceptions to this rule. The most important specific policy measure, emphasized by the Vice-President of the EC Margrethe Vestager, is the proposed temporary ban on automated identification systems (e.g. recognition based on facial biometrics) in public spaces, as well as on so-called predictive policing systems, that is, technologies that identify potential criminals.

User-friendly AI

However, the most important EU document concerning AI is the “White Paper on Artificial Intelligence: A European Approach to Excellence and Trust”, published in February 2020. It is supposed to provide the foundations for the European Union’s final strategy on AI. The document sets out measures intended to streamline research and cooperation between Member States, as well as to increase the level of investment in AI and its applications.

The EU member states are supposed to achieve these goals by harnessing the potential of anonymized data in many areas, exchanging it, and opening up access to it. The paper also presents the options for AI regulation which should determine the shape of the future legal provisions relating to all the involved entities, especially in the so-called sensitive sectors (health care, security, justice, public institutions, and others where the risk of discrimination, material damage, or threat to life and health is high). This entails the development of standards for test data, the training of algorithms, the archiving of records, and oversight. Another objective of the solutions developed by the EU is to protect the European research potential and to build a world-class testing environment and an ecosystem of innovation.

The priority role of AI is reflected in the EU budget. As much as EUR9.2bn is to be allocated for digital technologies in the EU budget in the years 2021-2027, which is much more than the amount allocated under the Horizon 2020 program. This, in turn, is supposed to lead to greater involvement of the private sector. As a result, the total value of investment could exceed EUR20bn.

The introduction of the principles of “Trustworthy Artificial Intelligence” also reflects the EC’s desire to influence the shape of global regulations — as has been the case with the General Data Protection Regulation (EU GDPR) — and to gain an advantage in terms of the applicability of European standards for advanced technologies (the so-called “Brussels effect”).

In this way, AI is also supposed to become more user-friendly, which should speed up the adoption of its applications. This also means that large technology companies will have to adhere to these rules within the territory of the EU and will be required to maintain data centers in Europe. In the short term, this will increase their operating costs; however, they will now be dealing with a digital single market, and European companies should be able to compete on equal terms.

Extensive consultations

The consultations on the White Paper lasted until mid-June. They involved more than 1200 stakeholders: governmental and non-governmental institutions, companies, experts, scientists, and citizens, including those from outside the EU. The consultations were carried out using a survey questionnaire which was divided into three sections.

In the first section, “Building an ecosystem of AI excellence in Europe”, 90 per cent of the respondents pointed to the development of digital skills, and 89 per cent to the support for research and innovation in the field of AI, as the key factors for the coordination of the policy pursued by the EU member states. Other issues identified by the respondents as important included the development of infrastructure enabling technology testing (76 per cent), the creation of European data centers (75 per cent), and public-private partnership in research and innovation. The respondents attributed a particular role to the functioning of Digital Innovation Hubs (DIHs), which should support the transfer of know-how to small and medium-sized businesses.

In the “Regulatory options for AI” section, the respondents indicated their concern about the potential risk that AI could pose to fundamental rights (e.g. privacy) and the potentially discriminatory character of the applications of this technology (90 and 87 per cent of answers, respectively). A large percentage of the respondents believe that this issue should be addressed in both new and existing regulations. Such regulations should also apply to biometric identification systems.

In the “Safety and liability implications of AI, IoT and robotics” section, almost 61 per cent of the respondents supported a revision of the existing Product Liability Directive to cover the specific risks and damages resulting from certain AI applications. These include, among others, cyber risks, personal security risks, and mental health risks.

The consultations also involved the participation of financial sector entities, including those from Poland. Almost half of the surveyed companies declared that they wanted to implement AI applications in their businesses due to the numerous benefits this technology provides, also for their customers. The surveyed companies are also aware of the regulatory risk which hinders or discourages the implementation of AI solutions. The respondents representing the financial services sector in Poland believe that these risks could be mitigated through the introduction of appropriate EU-level legislation concerning the applications of AI, in particular relating to the financial sector.

Other important proposals include, among others, enabling experiments in the field of AI applications under the supervision of the regulators, and establishing guidelines for AI systems’ audits and their certification.

The White Paper seeks to strike a delicate balance between the desire to achieve European technological sovereignty and the risk of over-regulation, which would discourage investors and entrepreneurs. The latter expect better access to capital, easier options for scaling businesses based on AI, enhanced access to a broad consumer base, and the ability to compete with global rivals on an equal footing.

The specific solutions and regulations concerning AI will determine the attractiveness of the European Union’s Digital Single Market, whose principles are supposed to enter into force starting from mid-2021.
