BSC experts design a system to alert users quickly and understandably to the implications of AI-based applications

27 October 2020

Artificial intelligence researchers at the Barcelona Supercomputing Center (BSC) propose a set of visual icons and concise notices for users who interact with applications and services that rely on AI technology. The goal is for anyone to understand, quickly and intuitively, the privacy conditions under which each service operates, as well as the possible presence of biases in their interaction with it. The information system proposed by the researchers is grounded in the ethical criteria of privacy and transparency and aims to make it easier for users to decide whether or not to use applications that incorporate AI. The proposal simplifies and extends the current terms-and-conditions notifications, which in most cases are confusing and irritating for users.

"To guarantee the development and responsible use of AI, society must be involved and make it aware of the capabilities and limitations of this technology," says Àtia Cortés Martínez, BSC researcher and expert on ethics and artificial intelligence and participant in the proposal. "This implies providing the necessary and simple tools for users to be able to understand the processes behind an algorithm, such as data processing and automatic decision-making. Only in this way can we establish a relationship of trust towards the new ones. technologies".

BSC researchers focus on two aspects of AI that have direct consequences for users: on the one hand, what concessions using the system entails with regard to the privacy of their personal data; on the other, how transparent the AI system is (what it does and how it does it) and whether its behavior towards the user is objective or personalized.

Darío Garcia-Gasulla, first author of the proposal, explains that "many services use artificial intelligence, but it is impossible to know exactly how many, since we are not even informed of their presence. Users must know what rights we are giving up in exchange for the services offered to us, and to what extent the information we consume may be biased."

With regard to data privacy, and building on the European General Data Protection Regulation (GDPR), the authors propose giving users clear answers to questions such as "Is my personal data being collected?", "Which data?" and "For what purpose?". To make this easier, a first level in three-color traffic-light format notifies users of the capture and/or dissemination of their personal data to third parties.
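As a rough illustration of how such a first-level signal could be derived, here is a minimal TypeScript sketch; the type and field names are hypothetical, chosen for the example rather than taken from the proposal.

```typescript
// Hypothetical encoding of the first-level traffic light; the names are
// illustrative, not the authors' specification.

type TrafficLight = "green" | "amber" | "red";

interface PrivacyDisclosure {
  collectsPersonalData: boolean;   // "Is my personal data being collected?"
  sharedWithThirdParties: boolean; // Is it disseminated to third parties?
}

// Assumed mapping: green = no personal data collected; amber = collected
// but not shared; red = collected and disseminated to third parties.
function privacyLight(d: PrivacyDisclosure): TrafficLight {
  if (!d.collectsPersonalData) return "green";
  return d.sharedWithThirdParties ? "red" : "amber";
}
```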

Regarding the transparency of AI systems, the authors consider that users should be informed about at least two aspects: 1) whether it is possible to inspect the algorithms, models and contents of the databases on which the program is based, which is essential for assessing the possible existence of biases and discrimination; and 2) whether the information the user receives is personalized (that is, adjusted to data about the user) or, on the contrary, objective and the same for every consumer. This second aspect is relevant to misinformation (the echo-chamber effect) and manipulation.
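These two aspects could be captured in a small data structure, as in the following sketch; all field names are assumptions made for the sake of the example.

```typescript
// Illustrative shape for the two transparency aspects described above.

interface TransparencyDisclosure {
  // Aspect 1: can the algorithms, models and database contents be inspected?
  openAlgorithms: boolean;
  openModels: boolean;
  openData: boolean;
  // Aspect 2: is the output personalized (adjusted to the user's own data)
  // or objective (identical for every consumer)?
  personalizedOutput: boolean;
}

// Example: a recommender whose algorithms and model are public, but whose
// training data is closed and whose output is tailored to each user.
const example: TransparencyDisclosure = {
  openAlgorithms: true,
  openModels: true,
  openData: false,
  personalizedOutput: true,
};
```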

Based on these premises, the BSC researchers have designed a three-level alert system: a very simple first level of visual icons accessible to everyone, which allows minimally intrusive yet informed decision-making; a second level with additional, very concise and structured information that allows finer-grained decisions; and a third level containing all the available information and technical details, so that any user can audit the privacy guarantees of the system.
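One way to picture how the three levels nest is the self-contained sketch below; the structure is an assumption about how such a disclosure could be packaged, not the authors' specification.

```typescript
// Hypothetical packaging of the three disclosure levels.

interface AIDisclosure {
  // Level 1: icon-only, minimally intrusive.
  level1: {
    privacyLight: "green" | "amber" | "red";
    transparencyIcons: string[]; // e.g. open-model or personalization icons
  };
  // Level 2: concise, structured summary for finer-grained decisions.
  level2: {
    summary: string;
    dataCategories: string[]; // which kinds of personal data are involved
  };
  // Level 3: full information and technical detail, enough to audit the
  // system's privacy guarantees.
  level3: {
    fullDocumentationUrl: string;
  };
}
```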

"Current rights management systems are hostile to the user. For me, who am especially aware of the problem, it is a constant exercise of willpower to fill out all the privacy forms that I come across on a daily basis. There is no point in waiving to our rights out of boredom", affirms Darío Garcia-Gasulla.

The scientific article that presents and expands on these ideas is under review at a prestigious international journal; in the meantime, the manuscript is available in the public arXiv repository (https://arxiv.org/abs/2009.13871).