The 2022 budget allocates 5 million euros to create a public body to monitor and sanction the impact of algorithms
Spain will have an agency dedicated to monitoring and minimizing “significant risks to the safety and health of people, as well as their fundamental rights” that the use of artificial intelligence (AI) may cause. This is stated in the General State Budgets Law (PGE) approved on Tuesday and published this Wednesday in the BOE.
The pact for the 2022 budgets establishes that the Spanish Agency for the Supervision of Artificial Intelligence will have an endowment of 5 million euros to investigate the dangers that algorithms may pose. Whether in private companies or public administrations, algorithms are increasingly used to automate all kinds of processes, from personalizing content on the Internet to granting bank loans or state aid. Although it operates under a false guise of mathematical neutrality, this technology is imperfect and can amplify racist, gender or class discrimination against parts of society.
Aware of this, the Spanish agency will be in charge of auditing the algorithmic systems used in the country “in a transparent, objective and impartial way” and will advise the unions so that platforms comply with the algorithm regulations established by the ‘Rider Law’. It will also have the power to impose sanctions, although the law gives no further details on this point.
Pressure to regulate AI
The creation of this AI oversight body follows a proposal launched in November by Más País–Equo and agreed with the two governing coalition partners, PSOE and Podemos. Its approval in the 2022 budgets comes as the European Union (EU) prepares a community regulation to limit the impact of these algorithmic systems, which will include a ban on mass surveillance systems.
The government’s move also responds to growing international pressure to regulate the use of AI and its potential social dangers. The United Nations, for example, has warned of these risks and has asked both states and companies to “dramatically increase transparency” about the algorithmic systems used on a day-to-day basis. On December 9, an international group of experts argued in the journal ‘Science’ that, given growing distrust of this technology, a global network of ‘ethical hackers’ should be put in place to uncover its vulnerabilities and fix them before they blow up in society’s face.
However, all of that is still on paper. Before the agency can get under way, a law establishing its creation will first have to be agreed in detail, along with an initial action plan. All of this means that this promise to curb the risks of AI could take months to take shape.