SPAIN’S Social Security system is using an artificial intelligence algorithm to try to predict whether someone who is taking sick leave is ready to get back to work. Should the computer program flag up possible fraud, an inspector is sent to chase up the case. 

That’s according to a major investigation carried out by Spanish online daily El Confidencial and Lighthouse Reports, which specialises in in-depth investigations in areas such as migration, climate, conflict and corruption. 

According to Lighthouse Reports, Spain’s National Institute of Social Security deployed two machine-learning algorithms in 2018 to assess the health of millions of people who were on sick leave in Spain. 

The aim was to detect which recipients of this sick leave (or baja laboral) were defrauding the state, which covers their salary in such circumstances. 

The algorithms use machine learning and a points-based system, which flags up possible cases of fraud. The final decision on each case, however, is always taken by a medical inspector. 

The computer programs analyse, for example, anyone on sick leave whose file has not yet been reviewed by a Social Security inspector, or cases where there have been no follow-up visits to the doctor.

The variables used by the system for its calculations include gender, age, place of residence and medical diagnoses.

While the Social Security system was unwilling to answer many of the questions posed by the journalists behind the series of articles, which were published on Monday, the investigation discovered that the system is very opaque and that an internal audit deemed its algorithms ‘poor’ and ‘unbalanced’. 

There are also concerns about the low quality of the data being fed into the system, which means that the results it produces are also poor. 

The interviews carried out by the investigative journalists with medical inspectors revealed a system that was of little use given the strains on the Social Security system due to a lack of funds and staff shortages. 

One of the doctors who spoke to El Confidencial and who is charged with following up the cases flagged by the system said that ‘we work with it every day and we aren’t able to explain what it is.’ 

Another said that the system would be ‘more useful if we had more doctors. Because we are at an all-time low.’

False positives

The system also generates a high number of false positives, something that could potentially be pushing people back to work before they are medically fit to return. 

While the system was initially billed as a means to detect fraud and save on public spending, more than five years later few of these promises have come true. 

The system’s lack of transparency also means that any possible discrimination against a social group is not being detected. One of the major concerns about AI is that it can be biased against certain groups or profiles.
