Symbolic photo: AIT is Austria's largest non-university research institution

Applied Artificial Intelligence

Artificial Intelligence (AI) is a complex and diverse research field, spanning technical as well as ethical and legal aspects. Applying AI solutions responsibly in a socio-technological environment requires a comprehensive understanding and control of all components and technologies involved.

At AIT we focus on individual aspects of AI, such as AI-Audio, network analysis, and explainable AI, but also on the big picture. Artificial intelligence is not a single machine-learning-based computer vision model; it is the realization of a complex cognitive task built from several components across different research disciplines. This orchestration of disciplines such as Data Science, Artificial Intelligence, and Software Engineering is the focus of the AIT research field Applied Artificial Intelligence.

Data Science & AI

Applying AI solutions responsibly means handling data responsibly. As a Data Science research group, we have deep knowledge of Data Science workflows, especially in the context of Artificial Intelligence: analyzing and modeling data and datasets and identifying hidden biases and outliers. This interplay of Data Science and Artificial Intelligence also lets us monitor context drift in continuous learning systems and manage data security in federated learning environments. We further apply data analytics and active learning to measure information gain, minimizing the manual effort required to annotate data or interact with AI systems. By applying the latest meta-learning and few-shot strategies, we aim to avoid the expensive process of data annotation altogether.
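
To make this concrete, the following Python sketch shows two pieces of such a workflow: a two-sample Kolmogorov-Smirnov test as a simple drift check on a single feature, and predictive entropy as an information-gain proxy for choosing which samples to annotate. It is a minimal illustration under stated assumptions, not AIT's production tooling; the function names, the significance threshold, and the synthetic data are our own.

import numpy as np
from scipy.stats import entropy, ks_2samp

def detect_drift(reference: np.ndarray, current: np.ndarray, alpha: float = 0.05) -> bool:
    # Flag distributional drift between a reference window and a current
    # window of one feature via a two-sample Kolmogorov-Smirnov test.
    _, p_value = ks_2samp(reference, current)
    return p_value < alpha  # small p-value: distributions likely differ

def select_for_annotation(probabilities: np.ndarray, budget: int) -> np.ndarray:
    # Rank unlabeled samples by predictive entropy (an information-gain
    # proxy) and return the indices of the `budget` most uncertain ones.
    uncertainties = entropy(probabilities.T)  # per-sample Shannon entropy
    return np.argsort(uncertainties)[-budget:]

# Example: pick the 10 most informative samples from a model's class
# probabilities (synthetic stand-in data, shape: 100 samples x 3 classes).
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(3), size=100)
to_label = select_for_annotation(probs, budget=10)

In a continuous learning setting, a drift flag from the first function would typically trigger the annotation loop sketched in the second, so that labeling effort is spent only where the model is uncertain.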

Applied AI at Scale

Managing AI systems implies managing resources efficiently. Core components of Artificial Intelligence, such as Machine Learning, are based on data-driven methods, and handling large volumes of data requires appropriate technologies. We offer our expertise in scalable and High-Performance Computing (HPC) architectures based on specialized or commodity hardware, using, for example, Apache Hadoop, Spark, and Airflow, as well as dedicated GPUs for neural-network-based training and prediction.
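
As an illustration, the sketch below shows what one data-processing step on such a stack might look like, assuming a running Spark cluster and sensor readings stored as Parquet on HDFS; the paths, application name, and column names are placeholders, not an actual AIT dataset or pipeline.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Illustrative job: aggregate raw sensor readings into daily features.
spark = (SparkSession.builder
         .appName("feature-aggregation")  # placeholder application name
         .getOrCreate())

readings = spark.read.parquet("hdfs:///data/sensor_readings")  # placeholder path

daily_features = (readings
                  .groupBy("sensor_id", F.to_date("timestamp").alias("day"))
                  .agg(F.avg("value").alias("mean_value"),
                       F.stddev("value").alias("std_value")))

daily_features.write.mode("overwrite").parquet("hdfs:///features/daily")
spark.stop()

In practice, a job like this would be one task in an Airflow DAG, with the resulting feature tables feeding GPU-based model training downstream.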