AI has played an important role during the corona crisis. AI applications have contributed to understanding the structure of the virus, to the search for a vaccine, and to the treatment of COVID-19.
As with any crisis, however, we also tended to turn more quickly to ‘invasive’ technology and to ‘turn a blind eye’ when it comes to fundamental rights, ethical values, and even effectiveness. The motto was often: if it doesn’t help, at least it doesn’t hurt.
Unfortunately, this is different with AI. If it does not help, it may very well harm. We have seen false dilemmas such as: we have to choose between ethics and health, or between fundamental rights and reopening the economy. It has also often been assumed that an invasive, harmful technology will be dismantled after the crisis. In reality, this is often not the case, and the risk of setting unwanted precedents for the future remains. Despite the urgency of this crisis, it was and is important that AI is developed and deployed in a responsible manner. Robustness, effectiveness, transparency, and explainability, but also fundamental rights, inclusion, and ethics are essential to ensure that AI actually helps in tackling the corona crisis without causing harm to society along the way.
This is why, mid-pandemic, ALLAI started the “Responsible AI & Corona” project to systematically assess AI applications used or researched to tackle the corona crisis, and to devise AI strategies for this and future pandemics. With the Ethics Guidelines for Trustworthy AI as a basis, several assessments were performed of “medical AI” (e.g. AI for drug and vaccine development) and “societal AI” (e.g. face mask detection and algorithmic grading). During this #FrAIday, Catelijne will present the outcomes of a number of these assessments and the project in general.