
When complexity leads to human overload and uncertainty

21.10.2020

When using machines, people are often faced with complex tasks that they do not feel up to. Under the catchphrase "Deconstructing Complexity", user experience research at the AIT Austrian Institute of Technology is developing methods to make it easier for people to deal with technology - both in everyday life and when operating machines.

Using technical systems is often difficult for us humans - for example, if we don't know them, if we don't immediately understand how they work, or if we don't (want to) remember how to operate them. "Users are therefore confronted with a certain degree of complexity," says Manfred Tscheligi, Head of the Center for Technology Experience at the AIT Austrian Institute of Technology and professor at the University of Salzburg. "Things get especially complex when a device doesn't immediately do what you want it to do - and you have no idea why that is."

In Tscheligi's eyes, two areas are essential when dealing with this form of complexity: On the one hand, it is about how easy or awkward it is to operate a device on a technical level. On the other hand, the psychology of perception comes into play - in other words, how people perceive and comprehend objects in their environment. "Humans cannot immediately grasp and interpret a system made up of many individual parts with many links and a branching structure," says the researcher. The consequences range from insecurity and a lack of trust in technology, to a sense of loss of control, to fear of technology and its outright rejection. "Something can work great technically, but if I as a human can't handle it, I can't do anything with it," says Tscheligi.

Human beings must therefore be taken into account when designing technologies and their user interfaces. That sounds logical, but in practice it is by no means standard yet. "The user's point of view is often still neglected in the development of new technologies," says Tscheligi.

 

"We want to make people's lives easier".

A separate scientific discipline has grown out of this basic problem. So-called "experience research" provides procedures for measuring subjective complexity and for examining the complex interplay of experience factors. In addition, methods have been developed to reduce perceived complexity. Tscheligi: "Our noble goal is to make people's lives easier when using technical artifacts."

A common approach to this is known in technical jargon as "Deconstructing Complexity". It involves breaking down complex interrelationships in products or software programs into their individual parts - including the linkages between those parts - to better understand where problems can occur. "Once the individual elements are understood and improved, they are put back together," says Tscheligi. This requires a reproducible method of disassembly. One such method is called "heuristic analysis": you walk through a product or interface piece by piece, working through the basic principles of the design in order and clarifying at which points problems arise that affect usability. Says Tscheligi: "Often it would be better to leave out a part; this reduces the complexity of a product, which then works just as well or even better."
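As a rough illustration - not AIT's actual procedure - the following minimal Python sketch shows what such a reproducible heuristic walkthrough could look like. The heuristics, element names, and the threshold of seven visible functions are all invented for this example:

```python
# Minimal sketch of a heuristic walkthrough (hypothetical heuristics).
# Each interface element is checked against a fixed list of design
# principles; every violation is recorded as a potential usability problem.

from dataclasses import dataclass, field

@dataclass
class Element:
    name: str
    properties: dict = field(default_factory=dict)

# Simplified heuristics: each maps a principle to a pass/fail check.
HEURISTICS = {
    "visibility of system status": lambda e: e.properties.get("shows_feedback", False),
    "consistency": lambda e: e.properties.get("follows_style_guide", False),
    "minimalist design": lambda e: len(e.properties.get("visible_functions", [])) <= 7,
}

def heuristic_walkthrough(elements):
    """Walk through the interface piece by piece and collect issues."""
    issues = []
    for element in elements:
        for principle, check in HEURISTICS.items():
            if not check(element):
                issues.append((element.name, principle))
    return issues

if __name__ == "__main__":
    ui = [
        Element("status bar", {"shows_feedback": True, "follows_style_guide": True}),
        Element("settings panel", {"visible_functions": list(range(12))}),
    ]
    for name, principle in heuristic_walkthrough(ui):
        print(f"Potential problem in '{name}': violates '{principle}'")
```

The point of such a structure is reproducibility: the same elements checked against the same principles always yield the same list of potential problems, which can then be addressed one by one before the parts are put back together.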


The use of modern communication tools - such as the virtual reality goggles shown here - can make it much easier for people to deal with technology. Credit: AIT/Gerdenitsch

What humans can grasp

But analyzing the technology is only one side of the story. "It's not just the system that needs to be dissected, but also its environment," explains Markus Murtinger, experience expert at AIT. You also have to ask: Who are the users? What task do they have? In which situations do people need help? For example, what information do they need directly on a screen to complete a task?

Example: In the "MMAssist II" research project, which recently concluded with a conference, assistance systems were developed for "Industry 4.0" - i.e. for production companies in which machines assist people. To relieve production personnel, a large industrial consortium led by AIT subsidiary Profactor developed a user-centered assistance system consisting of individual modules known as "assistance units". "In certain situations, the operator of a machine needs something to help him cope with the task. We have broken the problem down into individual components that occur again and again," Tscheligi explains. "The system has to determine in which situations the human might have a problem, and then provide the needed information."
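A minimal sketch of how such situation-triggered assistance units could be modeled in code is shown below. The unit names, trigger conditions, and messages are hypothetical and only illustrate the pattern of breaking assistance into reusable components, not the actual MMAssist II system:

```python
# Hypothetical sketch of "assistance units": small, reusable helpers that
# fire when the system detects a situation in which the operator may need
# support. Names and trigger conditions are illustrative only.

from dataclasses import dataclass
from typing import Callable

@dataclass
class AssistanceUnit:
    name: str
    triggers: Callable[[dict], bool]   # decides if this situation needs help
    assist: Callable[[dict], str]      # produces the information to show

UNITS = [
    AssistanceUnit(
        name="error-recovery hint",
        triggers=lambda s: s.get("machine_state") == "fault",
        assist=lambda s: f"Fault {s.get('fault_code')}: see recovery steps.",
    ),
    AssistanceUnit(
        name="setup guidance",
        triggers=lambda s: s.get("task") == "tool_change",
        assist=lambda s: "Step-by-step tool change instructions.",
    ),
]

def assistance_for(situation: dict) -> list[str]:
    """Return only the assistance relevant to the current situation."""
    return [u.assist(situation) for u in UNITS if u.triggers(situation)]

print(assistance_for({"machine_state": "fault", "fault_code": 42}))
```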

This raises the next question: what data should be provided to the human in a given situation? "Often, all the functions that are somehow needed to operate a machine are placed on a single display," Murtinger explains. In reality, however, this can quickly lead to overload, because very different groups of users, with different information needs, often work on the same machine: once a week a system administrator comes, once a day the person who sets up the machine, and then there is someone who actually works with and on the machine. "The different people should each be shown only those functions that they actually need. This makes the work much easier," says Murtinger. Ideally, the machine or the interface would recognize the user and his or her role and adapt to the person. In technical terms, this is called an "adaptive interface".
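The following sketch illustrates the role-based core of such an adaptive interface. The roles and function names are invented for illustration; a real system would also have to recognize the user automatically, e.g. via a badge or login:

```python
# Minimal sketch of a role-based (adaptive) interface: each user role sees
# only the functions it actually needs. Roles and functions are invented.

ROLE_FUNCTIONS = {
    "administrator": {"configure_network", "manage_users", "view_logs"},
    "machine_setter": {"load_program", "calibrate", "view_logs"},
    "operator": {"start_job", "stop_job", "view_status"},
}

def visible_functions(role: str) -> set[str]:
    """Return the subset of functions shown to the given role."""
    return ROLE_FUNCTIONS.get(role, set())

def render_interface(user: dict) -> list[str]:
    """Build the view for a recognized user and their role."""
    return sorted(visible_functions(user["role"]))

# The weekly administrator and the daily operator get different views:
print(render_interface({"name": "Alex", "role": "operator"}))
print(render_interface({"name": "Sam", "role": "administrator"}))
```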

 


When operating and maintaining complex machines (the picture shows a die casting machine), people need completely different information in different situations. An intelligent system is designed to ensure that the data needed at any given moment is always available. Credit: AIT/LKR/Lang

How do you design an interface?

In the psychology of perception, there are a number of complexity factors that should be taken into account when designing interfaces. The most important of these are the amount of information on the one hand and its speed on the other. "The more that flows at users and the faster it happens, the less they can perceive - and that puts a strain on them," says Tscheligi. In the design process, therefore, you have to know exactly how to create an interface so that users can perceive the information that is important to them in each case. In practice, however, the exact opposite often happens. Murtinger refers to this phenomenon as "featuritis" - loosely following the motto: "One more feature will do." This, he says, is a result of dealing primarily with technical problems and too little with the people who use a particular technology. "You shouldn't be surprised when everything becomes even more complex for people."
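To make the interplay of the two factors tangible, here is a deliberately crude toy model - the formula and the threshold are invented for illustration, not empirical values from perception research:

```python
# Toy illustration of the two complexity factors named above: the amount
# of information shown at once and the speed at which it changes.
# The scoring rule and the budget of 20 are made up, not measured values.

def perceived_load(items_on_screen: int, updates_per_second: float) -> float:
    """Crude load score: more items updating faster -> higher strain."""
    return items_on_screen * (1.0 + updates_per_second)

def overloaded(items_on_screen: int, updates_per_second: float,
               budget: float = 20.0) -> bool:
    """Flag a design whose information flow exceeds the assumed budget."""
    return perceived_load(items_on_screen, updates_per_second) > budget

print(overloaded(items_on_screen=15, updates_per_second=2.0))  # True
print(overloaded(items_on_screen=5, updates_per_second=0.5))   # False
```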


Interfaces between humans and machines must be carefully designed so that users can perceive the information that is important to them. After all, too much information leads to confusion and represents an additional burden. Credit: AIT/LKR/Lang

Can Artificial Intelligence be trusted?

The situation becomes even more confusing with the spread of artificial intelligence (AI). The problem here is that the machine can take so many facts into account that humans are no longer able to keep track of them. Normally, one has no insight into the process of machine learning: AI is comparable to a black box, and one does not know which criteria a decision of the system is based on. On the one hand, this gives rise to a feeling of loss of control; on the other, it impairs trust in the machine.

Researchers have been working on this for several years under the catchword "explainable AI", which is intended to provide insight into how a system arrives at a particular decision. This is also essential for the question of who is responsible when a technical system fails. In some areas, answers are already emerging. For example, researchers at AIT are developing so-called "reliability displays" as part of the CALIBRaiTE project, which tell users how reliable an intelligent function currently is. This is meant to make transparent whether an AI has enough information to assess a situation and what data quality its assessments are based on - whether, for example, in the case of an autonomous vehicle, all sensors provide reliable data or whether one sensor has failed.
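What such a reliability display could look like in code is sketched below. The sensor names, scoring rule, and confidence thresholds are illustrative assumptions, not the actual CALIBRaiTE design:

```python
# Hedged sketch of a "reliability display": the interface reports how much
# to trust the AI's current assessment, based on sensor health and data
# quality. All names, weights, and thresholds here are invented.

from dataclasses import dataclass

@dataclass
class Sensor:
    name: str
    online: bool
    data_quality: float  # 0.0 (unusable) .. 1.0 (perfect)

def reliability(sensors: list[Sensor]) -> float:
    """Overall reliability: quality of online sensors, averaged over all."""
    if not sensors:
        return 0.0
    return sum(s.data_quality for s in sensors if s.online) / len(sensors)

def reliability_display(sensors: list[Sensor]) -> str:
    """Render a user-facing confidence message, naming failed sensors."""
    score = reliability(sensors)
    offline = [s.name for s in sensors if not s.online]
    note = f" (sensor failed: {', '.join(offline)})" if offline else ""
    if score > 0.8:
        return f"AI assessment: HIGH confidence ({score:.0%}){note}"
    if score > 0.5:
        return f"AI assessment: REDUCED confidence ({score:.0%}){note}"
    return f"AI assessment: LOW confidence ({score:.0%}) - verify manually{note}"

print(reliability_display([
    Sensor("lidar", online=True, data_quality=0.9),
    Sensor("front camera", online=False, data_quality=0.0),
    Sensor("radar", online=True, data_quality=0.8),
]))
```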

This should enable users to adjust their expectations of an AI's abilities and neither underestimate nor overestimate its reliability. "This is an important contribution to reducing the feeling of uncertainty and loss of control and to increasing the acceptance of AI systems in the long term," says Tscheligi.

This text is based on a contribution to the Alpbach Technology Talks 2020 Yearbook "Discussing Technology" on the topic of "Complexity".