Symbolic photo: AIT is Austria's largest non-university research institution

Fostering Austria’s Innovative Strength and Research Excellence in Artificial Intelligence

FAIR-AI addresses the research gap arising from society-related risks in the application of AI. In particular, it focuses on the requirements of the EU AI Act and the obstacles to implementing them in the day-to-day development and management of AI-based projects, as well as in the legally compliant application of AI. These obstacles are multi-faceted and arise from technical reasons (e.g., intrinsic technical risks of current machine learning such as data shifts in a non-stationary environment), technical and management challenges (e.g., the need for a highly skilled workforce, high initial costs, and project management-level risks), and socio-technical application-related factors (e.g., the need for risk awareness in the application of AI, including human factors such as cognitive biases in AI-assisted decision making). In this context, we consider the detection, monitoring, and, where possible, anticipation of risks at all levels of system development and application to be a key factor.
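To make one of the intrinsic technical risks named above more concrete, the following is a minimal, hypothetical sketch of how a data shift in a non-stationary environment might be detected and monitored, here using a two-sample Kolmogorov-Smirnov test on a single feature. The function name, feature values, and threshold are illustrative assumptions and not part of the FAIR-AI project itself.

```python
# Hypothetical sketch: detecting a data shift (covariate drift) between the
# training-time distribution of a feature and its live, production-time
# distribution, using a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

def detect_feature_drift(reference: np.ndarray, live: np.ndarray, alpha: float = 0.01) -> bool:
    """Return True if the live feature distribution differs significantly
    from the reference (training-time) distribution."""
    statistic, p_value = ks_2samp(reference, live)
    return p_value < alpha

# Illustrative example: a monitored feature drifts upward over time.
rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
production_feature = rng.normal(loc=0.5, scale=1.0, size=5_000)  # shifted mean

if detect_feature_drift(training_feature, production_feature):
    print("Data shift detected - flag the model for review or retraining.")
```

In practice, such checks would run continuously on monitored features so that drift can be detected, and where possible anticipated, before it degrades model behaviour in deployment.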

In this regard, FAIR-AI follows a methodology to disentangle these types of risks. Rather than attempting a general solution to this problem, our approach takes a bottom-up strategy: typical pitfalls in a specific development and application context are selected to create a collection of instructive, self-contained use cases, implemented as research modules, that illustrate the intrinsic risks. We go beyond the state of the art to explore ways of disentangling and predicting risks, and of integrating these capabilities into a recommender system that provides active support and guidance.

The AIT Centre for Innovation Systems and Policy (ISP) is taking the lead in several work packages of the project, which is coordinated by Alexander Schindler (AIT Center for Digital Safety and Security, DSS); these work packages are particularly concerned with ethics and law. Among other things, various types of workshops are being developed so that ethics- and law-related topics can be worked through with industry partners in the form of specific issues that arise in practice. The proposed solutions are also relevant for other parts of the project, such as those relating to ethics tools, the application of legislation, and the development of training modules on ethics and law.

Keywords: AI, AI ethics, trustworthiness, AI Act, law

Start: 01/2024

Duration: 36 months

Funded by: FFG

Contact: Peter Biegelbauer