Today, technology is advancing rapidly in the legal professions too, in the form of process digitisation and predictive justice. Several disciplines converge on this field, such as Data Science, Artificial Intelligence (AI), Machine Learning (ML), Natural Language Processing (NLP) and statistics, with the aim of representing legal knowledge, identifying correlations, and making predictions about judicial decisions, or about the likelihood of a draft law becoming a concrete measure.

When we talk about the digitisation of justice, we refer to predictive justice: the set of tools based on artificial intelligence that can support the legal and jurisdictional function by analysing large amounts of information in a short time to predict the possible outcome of a judgement. These technologies could offset the slowness of proceedings, assisting lawyers in drafting the defence, prosecutors in building the case for the prosecution, and even judges in reaching a decision on the case.
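To make the idea concrete, here is a minimal and purely illustrative sketch of such a predictor: a text classifier trained on a handful of invented case summaries with invented outcome labels. The data, the labels and the choice of the scikit-learn library are assumptions for illustration, not a description of any real system.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented case summaries and invented outcome labels, for illustration only.
cases = [
    "tenant withheld rent after landlord failed to repair heating",
    "employee dismissed without notice despite positive performance reviews",
    "contractor delivered work two months late with no documented cause",
    "driver rear-ended plaintiff at a red light, liability admitted",
    "claimant missed the statutory filing deadline by three weeks",
    "supplier shipped goods that did not match the agreed specification",
]
outcomes = ["won", "won", "lost", "won", "lost", "won"]

# TF-IDF turns each summary into a word-frequency vector; logistic
# regression then learns which terms correlate with each outcome.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(cases, outcomes)

# Estimate outcome probabilities for a new, unseen case description.
new_case = "employee dismissed after reporting a safety violation"
for label, p in zip(model.classes_, model.predict_proba([new_case])[0]):
    print(f"{label}: {p:.2f}")
```

A real system would need thousands of annotated judgments, careful validation and, as argued throughout this piece, human review of every prediction.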

But some look with concern at a technological shift in the world of the legal professions: on the one hand, because of the ethical problems AI raises when it makes decisions about people's lives and freedoms; on the other, because of the fear that excessive digitalisation of justice could compromise the human factor, sacrificing fundamental qualities such as empathy and the ability to weigh the circumstances of each case.

There is a concern that technological development may entrust such an important role to machines that they no longer need a human counterpart. This would entail the loss of human sensitivity, which remains a fundamental safeguard against possible discriminatory outcomes.


An example? According to the South China Morning Post, the Shanghai Pudong People's Procuratorate has tested a machine that can identify the most common crimes committed in Shanghai and file charges with more than 97% accuracy. A kind of robot prosecutor capable of formulating real charges through algorithms, based on a verbal description of the case, and of assessing the evidence, the grounds for arrest and the danger a suspect poses. Does this reassure you? Are we sure this is the best choice for our communities?

There is also another problem discussed among insiders: the discriminatory drift of artificial intelligence. It has been observed, for instance, that in some cases machines struggle to make fair assessments of non-Caucasian people. There have been several cases in which the granting of mortgages was left to the decision of algorithms, with outcomes that penalised non-Caucasian applicants, who were deemed less creditworthy even when their circumstances were comparable to those of white applicants. Facial recognition has also raised many problems, as it tends to perform worse on people with darker skin.
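One way practitioners probe for this kind of drift is to compare a model's approval rates across demographic groups. The sketch below, using entirely invented mortgage decisions and invented group names, computes each group's approval rate and its "disparate impact" ratio against the most favoured group; ratios well below 1 flag a potential bias worth investigating.

```python
from collections import defaultdict

# Invented (group, approved) records standing in for a model's mortgage decisions.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", True), ("group_b", False), ("group_b", False),
]

# Count applications and approvals per group.
totals, approved = defaultdict(int), defaultdict(int)
for group, ok in decisions:
    totals[group] += 1
    approved[group] += ok

rates = {g: approved[g] / totals[g] for g in totals}
best = max(rates.values())

# Disparate impact: each group's approval rate relative to the most favoured group.
for group, rate in rates.items():
    print(f"{group}: approval rate {rate:.2f}, disparate impact {rate / best:.2f}")
```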

At this stage, a credible objective could be to give users the elements to predict the possible outcome of a judgement, discouraging reckless litigation and encouraging parties with no realistic chance of success in court to pursue alternatives such as conciliation. Algorithms that estimate one's chances of a favourable result, helping to select the cases worth pursuing. Non-binding technologies that support decisions, without being able to decide the fate of a case like a real robot judge.
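In code, such a non-binding aid could be as simple as a threshold on a predicted probability of success; the threshold and the messages below are arbitrary assumptions. The point is that the output is advice for a human to weigh, not a verdict.

```python
def recommend(success_probability: float, threshold: float = 0.5) -> str:
    """Return advisory guidance only; the decision stays with the parties."""
    if success_probability < threshold:
        return "Low chance of success: consider conciliation or settlement."
    return "Reasonable chance of success: litigation may be worth pursuing."

# Example: a predictor (like the sketch above) estimates a 30% chance of winning.
print(recommend(0.30))  # suggests conciliation
print(recommend(0.75))  # suggests litigation may be worthwhile
```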