Artificial intelligence is here, among us, making our lives easier. Satnav, online advertising, alerts from our bank about suspicious transactions and smart speakers are just a few examples of the widespread use of artificial intelligence algorithms embedded in the tools we love to surround ourselves with.

AI is a scientific discipline that is changing the world, shaping our lives through a two-way relationship: we use AI and, at the same time, AI learns from us, translating complexity into the best available solution, helping us overcome an obstacle or resolve a doubt.

The benefits are numerous: just think of the telematic sensors in technologically advanced black boxes, which collect and process data throughout the vehicle's life cycle and, thanks to artificial intelligence, improve the services already in place and introduce new ones. Efficiency and project development run side by side.

Through machine learning and statistical techniques, AI builds generalisations that let it deal successfully with unknown situations: it learns from our behaviour, finds correlations and provides useful evidence to meet emerging needs. The result of this sophisticated process is highly accurate probability estimates that can be applied in a wide range of fields; in the insurance sector, for example, they bring significant benefits to customers, who receive new profiled services.
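
As a purely illustrative sketch of how such probability estimates can be produced, the snippet below fits a simple statistical model to synthetic driving-behaviour data and then asks it for a claim probability on an unseen case. The features, the data and the model choice are hypothetical assumptions made for this example and are not drawn from UnipolTech's actual systems.

# Illustrative sketch only: hypothetical telematics features and synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical driving-behaviour features per vehicle:
# [average speed (km/h), hard-braking events per 100 km, share of night driving]
X = rng.normal(loc=[60.0, 2.0, 0.2], scale=[15.0, 1.5, 0.1], size=(500, 3))

# Synthetic label (1 = a claim occurred), generated only to make the sketch runnable.
risk = 0.02 * X[:, 0] + 0.8 * X[:, 1] + 2.0 * X[:, 2]
y = (risk + rng.normal(scale=0.5, size=500) > np.median(risk)).astype(int)

# A simple statistical model that generalises from observed behaviour
# to probability estimates for drivers it has never seen.
model = LogisticRegression(max_iter=1000).fit(X, y)

new_driver = np.array([[85.0, 4.5, 0.35]])  # hypothetical new observation
claim_probability = model.predict_proba(new_driver)[0, 1]
print(f"Estimated claim probability: {claim_probability:.2f}")

In practice, models of this kind are trained on far richer telematics data and validated carefully before any profiled service is built on them.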

All of this is a fundamental part of the goals of the Unipol Group, which operates in the mobility ecosystem to offer increasingly flexible and customised services, built on UnipolTech's enabling telematics devices.

Public authorities, municipalities and local public transport will also be able to benefit from AI: just think of the use of data for social purposes, such as monitoring mobility during the spread of the SARS-CoV-2 coronavirus. And that is not all: Open Innovation collaborations with other companies can seize new business opportunities using data processed by artificial intelligence designed by UnipolTech.

We therefore have to make sure that artificial intelligence adheres fully to the norms of social coexistence: a balance between innovative services spanning several sectors and sensible limits on their use, in our individual and collective interest.

First of all, we have to define the playing field and set a perimeter of rules: the European Commission's guidelines on artificial intelligence offer concrete prospects for ensuring the development of lawful and ethical AI systems. In addition, the issue of artificial intelligence is strongly intertwined with the data protection principles of the GDPR (Regulation (EU) 2016/679).

Artificial intelligence functions and services now touch a large number of sectors. Just think of the automotive sector, where AI supports the design of autonomous and connected vehicles, whose performance improves through interconnection with the new technological infrastructures of smart cities. Behind these processes lies a very high level of complexity to manage, that is, choices to be made. And when we talk about choices, we inevitably talk about ethics.

The benefits are clear for all to see, but what about the risks? Consider the protection of personal data and the rights and freedoms of data subjects, particularly where AI systems rely on automated decision-making. At this level of detail, every time an algorithm faces a choice, the question of what is right or wrong arises: all the possible implications of the available options must be weighed, trying to foresee the cases in which the machines might behave incorrectly.


Another problem concerns the possible repercussions on human dignity and the principle of non-discrimination. Just think of 2015, when Google's image recognition system made a very serious mistake by classifying photos of some Black people as 'gorillas'; or of certain automated processes applied to recruitment screening.

In this context, the EU guidelines highlight the need to create a shared strategy to ensure the ethics of artificial intelligence based on a human-centred approach.

The guidelines describe an approach that “aims to ensure that human values are central to how AI systems are developed, deployed, used and monitored, ensuring respect for fundamental rights, including those enshrined in the European Union Treaties and the Charter of Fundamental Rights of the European Union, which share a common foundation rooted in respect for human dignity, in which the human being enjoys a unique and inalienable moral status. This also implies respect for the natural environment and other living beings that are part of the human ecosystem and a sustainable approach that allows future generations to prosper.”

In order to make the guidelines more concrete, a checklist was drawn up:

  • Human agency and oversight: AI systems must be used in accordance with fundamental human rights, ensuring the well-being of the user.
  • Technical robustness and safety: algorithms must be strong and secure enough to counter attacks and illicit operations.
  • Privacy and data governance: data protection starting from the design of AI systems.
  • Transparency: traceability of the systems and of the decision-making processes by which algorithms are designed.
  • Diversity, non-discrimination and fairness: preventing algorithms from being tainted by historical bias, incomplete data and inappropriate governance models (a minimal sketch follows this list).
  • Societal and environmental well-being: as regards the impact on the environment, artificial intelligence must promote sustainable development wherever possible.
  • Accountability: internal or external auditability of systems, with particular reference to those affecting fundamental rights.
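
To make the non-discrimination item above slightly more tangible, the sketch below computes a simple demographic-parity gap between the positive-decision rates of two groups. The decisions, group labels and tolerance threshold are entirely hypothetical; this is one possible check, not a prescribed compliance test from the guidelines.

# Illustrative fairness check: hypothetical decisions and groups, not real data.
import numpy as np

def demographic_parity_gap(decisions: np.ndarray, groups: np.ndarray) -> float:
    """Largest difference in positive-decision rates between any two groups."""
    rates = [decisions[groups == g].mean() for g in np.unique(groups)]
    return float(max(rates) - min(rates))

# Hypothetical automated decisions (1 = approved) and a sensitive attribute.
decisions = np.array([1, 1, 1, 1, 0, 1, 0, 0, 0, 1])
groups = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

gap = demographic_parity_gap(decisions, groups)
print(f"Approval-rate gap between groups: {gap:.2f}")
if gap > 0.1:  # illustrative tolerance only
    print("Warning: these decisions may warrant a review for possible discrimination.")

Checks of this kind are only a starting point: the guidelines' broader requirements of traceability and auditability go well beyond any single metric.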

These are extremely important guidelines and principles, which will have to be clarified and implemented concretely through further coordinated regulatory measures at European and national level, so that we can finally rely on artificial intelligence on a human scale.