Prof. Dr. Thomas Kosch
Profile
Research topics (5)
ClaimGuard: Kollaborative Mensch-KI-Faktenchecks zur Abwehr von Desinformation
Source ↗ · Funder: Bundesministerium für Forschung, Technologie und Raumfahrt · Period: 04/2026 - 03/2029 · Project lead: Prof. Dr. Thomas Kosch
easyTEM: Entwicklung von ressourceneffizienter Transmissionselektronenmikroskopie zur Demokratisierung ihres Einsatzes in der Materialforschung
Source ↗ · Funder: DFG Sachbeihilfe · Period: 04/2026 - 03/2029 · Project lead: Prof. Dr. Thomas Kosch
Entwicklung von ressourceneffizienter Transmissionselektronenmikroskopie zur Demokratisierung ihres Einsatzes in der Materialforschung
Source ↗ · Funder: DFG Sachbeihilfe · Period: 04/2026 - 03/2029 · Project leads: Prof. Christoph T. Koch, PhD, Prof. Dr. Thomas Kosch, Prof. Dr. Ulf Leser
Institute for Diversity Competence
Source ↗ · Funder: BMWE: EXIST · Period: 12/2025 - 11/2026 · Project lead: Prof. Dr. Thomas Kosch
SFB 1404/2: Nutzerzentrierter Entwurf für Workflowsprachen (TP C03)
Source ↗ · Funder: DFG Sonderforschungsbereich · Period: 07/2024 - 06/2028 · Project leads: Prof. Dr. Thomas Kosch, Prof. Dr. Lars Grunske
Potential industry partners (10)
As of: 26.4.2026, 19:48:44 (Top-K=20, Min-Cosine=0.4)
- 32 matches · 85.0% · ClaimGuard: Kollaborative Mensch-KI-Faktenchecks zur Abwehr von Desinformation
- 32 matches · 85.0% · ClaimGuard: Kollaborative Mensch-KI-Faktenchecks zur Abwehr von Desinformation
- 16 matches · 61.5% · Unterstützung einer inklusiven Anleitung für den Englischunterricht als Fremdsprache für gehörlose und schwerhörige Schüler
- 15 matches · 61.5% · Unterstützung einer inklusiven Anleitung für den Englischunterricht als Fremdsprache für gehörlose und schwerhörige Schüler
- Ecole Pouchet · 16 matches · 61.5% · Unterstützung einer inklusiven Anleitung für den Englischunterricht als Fremdsprache für gehörlose und schwerhörige Schüler
- 16 matches · 61.5% · Unterstützung einer inklusiven Anleitung für den Englischunterricht als Fremdsprache für gehörlose und schwerhörige Schüler
- 17 matches · 61.5% · Unterstützung einer inklusiven Anleitung für den Englischunterricht als Fremdsprache für gehörlose und schwerhörige Schüler
- 10 matches · 61.0% · Zuwendung im Rahmen des Programms „exist – Existenzgründungen aus der Wissenschaft“ aus dem Bundeshaushalt, Einzelplan 09, Kapitel 02, Titel 68607, Haushaltsjahr 2026, sowie aus Mitteln des Europäischen Strukturfonds (hier Europäischer Sozialfonds Plus – ESF Plus) Förderperiode 2021-2027 – Kofinanzierung für das Vorhaben: „exist Women“
- 99 matches · 57.9% · WayIn – Der Inklusionswegweiser für Arbeitgeber: Technische Entwicklung und wissenschaftliche Begleitanalyse
- 99 matches · 57.9% · WayIn – Der Inklusionswegweiser für Arbeitgeber: Technische Entwicklung und wissenschaftliche Begleitanalyse
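The list above comes from embedding-based retrieval with the parameters Top-K=20 and Min-Cosine=0.4. As a minimal sketch of how such top-k cosine matching works, assuming precomputed embedding vectors; all names and the toy 2-d vectors are illustrative, not the actual HU-FIS pipeline:

```python
from math import sqrt

def cosine(a, b):
    # cosine similarity between two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def top_k_matches(query, partners, k=20, min_cosine=0.4):
    """Rank partner embeddings against a query embedding.

    partners maps partner name -> embedding vector; returns the k best
    (name, score) pairs whose similarity clears the min_cosine threshold.
    """
    scored = [(name, cosine(query, vec)) for name, vec in partners.items()]
    kept = [(name, s) for name, s in scored if s >= min_cosine]
    return sorted(kept, key=lambda pair: pair[1], reverse=True)[:k]

# Toy 2-d vectors stand in for real bge-m3 embeddings:
project = (1.0, 0.2)
partners = {"A": (1.0, 0.1), "B": (0.0, 1.0), "C": (0.7, 0.7)}
print(top_k_matches(project, partners, k=2))  # "A" ranks first; "B" is filtered out
```

In a real pipeline the vectors would come from an embedding model and the scores would be the percentages shown in the list.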
Publications (25)
Top 25 by citations · Source: OpenAlex (BAAI/bge-m3 embeddings used for matching).
ACM Computing Surveys · 226 citations · DOI
The ever-increasing number of computing devices around us results in more and more systems competing for our attention, making cognitive workload a crucial factor for the user experience of human-computer interfaces. Research in Human-Computer Interaction (HCI) has used various metrics to determine users’ mental demands. However, there is no systematic way to choose an appropriate and effective measure for cognitive workload in experimental setups, which poses a challenge to reproducibility. We present a literature survey of past and current metrics for cognitive workload used throughout HCI literature to address this challenge. By initially exploring what cognitive workload resembles in the HCI context, we derive a categorization supporting researchers and practitioners in selecting cognitive workload metrics for system design and evaluation. We conclude with the following three research gaps: (1) defining and interpreting cognitive workload in HCI, (2) the hidden cost of the NASA-TLX, and (3) HCI research as a catalyst for workload-aware systems, highlighting that HCI research has to deepen and conceptualize the understanding of cognitive workload in the context of interactive computing systems.
185 citations · DOI
With increasing complexity of assembly tasks and an increasing number of product variants, instruction systems providing cognitive support at the workplace are becoming more important. Different instruction systems for the workplace provide instructions on phones, tablets, and head-mounted displays (HMDs). Recently, many systems using in-situ projection for providing assembly instructions at the workplace have been proposed and became commercially available. Although comprehensive studies comparing HMD and tablet-based systems have been presented, in-situ projection has not been scientifically compared against state-of-the-art approaches yet. In this paper, we aim to close this gap by comparing HMD instructions, tablet instructions, and baseline paper instructions to in-situ projected instructions using an abstract Lego Duplo assembly task. Our results show that assembling parts is significantly faster using in-situ projection and locating positions is significantly slower using HMDs. Further, participants make less errors and have less perceived cognitive load using in-situ instructions compared to HMD instructions.
149 citations · DOI
Due to increasing complexity of products and the demographic change at manual assembly workplaces, interactive and context-aware instructions for assembling products are becoming more and more important. Over the last years, many systems using head-mounted displays (HMDs) and in-situ projection have been proposed. We are observing a trend in assistive systems using in-situ projection for supporting workers during work tasks. Recent advances in technology enable robust detection of almost every work step, which is done at workplaces. With this improvement in robustness, a continuous usage of assistive systems at the workplace becomes possible. In this work, we provide results of a long-term study in an industrial workplace with an overall runtime of 11 full workdays. In our study, each participant assembled at least three full workdays using in-situ projected instructions. We separately considered two different user groups comprising expert and untrained workers. Our results show a decrease in performance for expert workers and a learning success for untrained workers.
124 citations · DOI
Research on how to take advantage of Augmented Reality and Virtual Reality applications and technologies in the domain of manufacturing has brought forward a great number of concepts, prototypes, and working systems. Although comprehensive surveys have taken account of the state of the art, the design space of industrial augmented and virtual reality keeps diversifying. We propose a visual approach towards assessing this space and present an interactive, community-driven tool which supports interested researchers and practitioners in gaining an overview of the aforementioned design space. Using such a framework we collected and classified relevant publications in terms of application areas and technology platforms. This tool shall facilitate initial research activities as well as the identification of research opportunities. Thus, we lay the groundwork; forthcoming workshops and discussions shall address its refinement.
ACM Transactions on Computer-Human Interaction · 119 citations · DOI
In medicine, patients can obtain real benefits from a sham treatment. These benefits are known as the placebo effect. We report two experiments (Experiment I: N = 369; Experiment II: N = 100) demonstrating a placebo effect in adaptive interfaces. Participants were asked to solve word puzzles while being supported by no system or an adaptive AI interface. All participants experienced the same word puzzle difficulty and had no support from an AI throughout the experiments. Our results showed that the belief of receiving adaptive AI support increases expectations regarding the participant’s own task performance, sustained after interaction. These expectations were positively correlated to performance, as indicated by the number of solved word puzzles. We integrate our findings into technological acceptance theories and discuss implications for the future assessment of AI-based user interfaces and novel technologies. We argue that system descriptions can elicit placebo effects through user expectations biasing the results of user-centered studies.
99 citations · DOI
With the opportunity to customize ordered products, assembly tasks are becoming more and more complex. To meet these increased demands, a variety of interactive instruction systems have been introduced. Although these systems may have a big impact on overall efficiency and cost of the manufacturing process, it has been difficult to optimize them in a scientific way. The challenge is to introduce performance metrics that apply across different tasks and find a uniform experiment design. In this paper, we address this challenge by proposing a standardized experiment design for evaluating interactive instructions and making them comparable with each other. Further, we introduce a General Assembly Task Model, which differentiates between task-dependent and task-independent measures. Through a user study with 12 participants, we evaluate the experiment design and the proposed task model using an abstract pick-and-place task and an artificial industrial task. Finally, we provide paper-based instructions for the proposed task as a baseline for evaluating Augmented Reality instructions.
VRHapticDrones
2018 · 94 citations · DOI
We present VRHapticDrones, a system utilizing quadcopters as levitating haptic feedback proxy. A touchable surface is attached to the side of the quadcopters to provide unintrusive, flexible, and programmable haptic feedback in virtual reality. Since the users' sense of presence in virtual reality is a crucial factor for the overall user experience, our system simulates haptic feedback of virtual objects. Quadcopters are dynamically positioned to provide haptic feedback relative to the physical interaction space of the user. In a first user study, we demonstrate that haptic feedback provided by VRHapticDrones significantly increases users' sense of presence compared to vibrotactile controllers and interactions without additional haptic feedback. In a second user study, we explored the quality of induced feedback regarding the expected feeling of different objects. Results show that VRHapticDrones is best suited to simulate objects that are expected to feel either light-weight or have yielding surfaces. With VRHapticDrones we contribute a solution to provide unintrusive and flexible feedback as well as insights for future VR haptic feedback systems.
81 citations · DOI
Head-mounted displays for virtual reality (VR) provide high-fidelity visual and auditory experiences. Other modalities are currently less supported. Current commercial devices typically deliver tactile feedback through controllers the user holds in the hands. Since both hands get occupied and tactile feedback can only be provided at a single position, research and industry proposed a range of approaches to provide richer tactile feedback. Approaches such as tactile vests or electrical muscle stimulation were proposed, but require additional body-worn devices. This limits comfort and restricts the provided feedback to specific body parts. With this Interactivity installation, we propose quadcopters to provide tactile stimulation in VR. While the user is visually and acoustically immersed in VR, small quadcopters simulate bumblebees, arrows, and other objects hitting the user. The user wears a VR headset while mini-quadcopters, controlled by an optical marker tracking system, provide tactile feedback.
Your Eyes Tell
2018 · 79 citations · DOI
A common objective for context-aware computing systems is to predict how user interfaces impact user performance regarding their cognitive capabilities. Existing approaches such as questionnaires or pupil dilation measurements either only allow for subjective assessments or are susceptible to environmental influences and user physiology. We address these challenges by exploiting the fact that cognitive workload influences smooth pursuit eye movements. We compared three trajectories and two speeds under different levels of cognitive workload within a user study (N=20). We found higher deviations of gaze points during smooth pursuit eye movements for specific trajectory types at higher cognitive workload levels. Using an SVM classifier, we predict cognitive workload through smooth pursuit with an accuracy of 99.5% for distinguishing between low and high workload as well as an accuracy of 88.1% for estimating workload between three levels of difficulty. We discuss implications and present use cases of how cognition-aware systems benefit from inferring cognitive workload in real-time by smooth pursuit eye movements.
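The paper above classifies low versus high cognitive workload from gaze deviations during smooth pursuit using an SVM. As a simplified, self-contained stand-in (not the paper's pipeline; the data, feature, and labels here are synthetic), a nearest-centroid classifier over deviation magnitudes illustrates the idea:

```python
def fit_centroids(samples):
    """samples: {label: [deviation values]} -> per-label mean deviation."""
    return {label: sum(vals) / len(vals) for label, vals in samples.items()}

def classify(centroids, deviation):
    # assign the label whose centroid lies closest to the observed deviation
    return min(centroids, key=lambda label: abs(centroids[label] - deviation))

# Synthetic gaze-deviation magnitudes; the values are made up for illustration:
train = {"low": [0.4, 0.5, 0.6], "high": [1.4, 1.6, 1.5]}
centroids = fit_centroids(train)
print(classify(centroids, 0.55))  # small deviation -> "low" workload
print(classify(centroids, 1.7))   # large deviation -> "high" workload
```

The actual study trained an SVM on richer smooth-pursuit features; this sketch only shows the shape of the decision problem.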
79 citations · DOI
More and more industrial manufacturing companies are outsourcing assembly tasks to sheltered work organizations where cognitively impaired workers are employed. To facilitate these assembly tasks assistive systems have been introduced to provide cognitive assistance. While previous work found that these assistive systems have a great impact on the workers' performance in giving assembly instructions, these systems are further capable of detecting errors and notifying the worker of an assembly error. However, the topic of how assembly errors are presented to cognitively impaired workers has not been analyzed scientifically. In this paper, we close this gap by comparing tactile, auditory, and visual error feedback in a user study with 16 cognitively impaired workers. The results reveal that visual error feedback leads to a significantly faster assembly time compared to tactile error feedback. Further, we discuss design implications for providing error feedback for workers with cognitive impairments.
55 citations · DOI
This paper analyses Human-Computer Interaction (HCI) literature reviews to provide a clear conceptual basis for authors, reviewers, and readers. HCI is multidisciplinary and various types of literature reviews exist, from systematic to critical reviews in the style of essays. Yet, there is insufficient consensus of what to expect of literature reviews in HCI. Thus, a shared understanding of literature reviews and clear terminology is needed to plan, evaluate, and use literature reviews, and to further improve review methodology. We analysed 189 literature reviews published at all SIGCHI conferences and ACM Transactions on Computer-Human Interaction (TOCHI) up until August 2022. We report on the main dimensions of variation: (i) contribution types and topics; and (ii) structure and methodologies applied. We identify gaps and trends to inform future meta work in HCI and provide a starting point on how to move towards a more comprehensive terminology system of literature reviews in HCI.
PalmTouch
2018 · 55 citations · DOI
Touchscreens are the most successful input method for smartphones. Despite their flexibility, touch input is limited to the location of taps and gestures. We present PalmTouch, an additional input modality that differentiates between touches of fingers and the palm. Touching the display with the palm can be a natural gesture since moving the thumb towards the device's top edge implicitly places the palm on the touchscreen. We present different use cases for PalmTouch, including the use as a shortcut and for improving reachability. To evaluate these use cases, we have developed a model that differentiates between finger and palm touch with an accuracy of 99.53% in realistic scenarios. Results of the evaluation show that participants perceive the input modality as intuitive and natural to perform. Moreover, they appreciate PalmTouch as an easy and fast solution to address the reachability issue during one-handed smartphone interaction compared to thumb stretching or grip changes.
51 citations · DOI
Individuals with cognitive impairments currently leverage extensive human resources during their transitions from assisted living to independent living. In Western Europe, many government-supported volunteer organizations provide sheltered living facilities; supervised environments in which people with cognitive impairments collaboratively learn daily living skills. In this paper, we describe communal cooking practices in sheltered living facilities and identify opportunities for supporting these with interactive technology to reduce volunteer workload. We conducted two contextual observations of twelve people with cognitive impairments cooking in sheltered living facilities and supplemented this data through interviews with four employees and volunteers who supervise them. Through thematic analysis, we identified four themes to inform design requirements for communal cooking activities: Work organization, community, supervision, and practicalities. Based on these, we present five design implications for assistive systems in kitchens for people with cognitive deficiencies.
Proceedings of the ACM on Human-Computer Interaction · 43 citations · DOI
Manual assembly at production is a mentally demanding task. With rapid prototyping and smaller production lot sizes, this results in frequent changes of assembly instructions that have to be memorized by workers. Assistive systems compensate for this increase in mental workload by providing "just-in-time" assembly instructions through in-situ projections. The implementation of such systems and their benefits for reducing mental workload have previously been justified with self-perceived ratings. However, there is no evidence from objective measures that mental workload is reduced by in-situ assistance. In our work, we showcase electroencephalography (EEG) as a complementary evaluation tool to assess the cognitive workload placed by two different assistive systems in an assembly task, namely paper instructions and in-situ projections. We identified the individual EEG bandwidth that varied with changes in working memory load. We show that changes in the EEG bandwidth are found between paper instructions and in-situ projections, indicating that in-situ projections reduce working memory load compared to paper instructions. Our work contributes by demonstrating how design claims of cognitive demand can be validated. Moreover, it directly evaluates the use of assistive systems for delivering context-aware information. We analyze the characteristics of EEG as a real-time assessment for cognitive workload to provide insights regarding the mental demand placed by assistive systems.
42 citations · DOI
Cognition-aware systems acquire physiological data to derive implications about physical and mental states. Pupil dilation has recently attracted attention in the HCI community as an indicator for mental workload. The impact of mental workload on pupillary behavior has been extensively examined. However, systems making use of these measurements to alleviate mental workload have been scarcely evaluated. Our work investigates the expediency of task complexity adaption based on pupillary data in real-time. By conducting math tasks with different complexities, we calibrate a complexity adjustment system. In a pilot study (N=6), we evaluate the feasibility of changing task complexity using two different complexities. Our findings show less perceived mental workload during task complexity adaptation compared to presenting high task complexities only. We show the potential of pupil dilation as a valid metric for assessing mental workload as a modality for cognition-aware user interfaces.
Proceedings of the ACM on Human-Computer Interaction · 38 citations · DOI
Cyclists' attention is often compromised when interacting with notifications in traffic, hence increasing the likelihood of road accidents. To address this issue, we evaluate three notification interaction modalities and investigate their impact on the interaction performance while cycling: gaze-based Dwell Time, Gestures, and Manual And Gaze Input Cascaded (MAGIC) Pointing. In a user study (N=18), participants confirmed notifications in Augmented Reality (AR) using the three interaction modalities in a simulated biking scenario. We assessed the efficiency regarding reaction times, error rates, and perceived task load. Our results show significantly faster response times for MAGIC Pointing compared to Dwell Time and Gestures, while Dwell Time led to a significantly lower error rate compared to Gestures. Participants favored the MAGIC Pointing approach, supporting cyclists in AR selection tasks. Our research sets the boundaries for more comfortable and easier interaction with notifications and discusses implications for target selections in AR while cycling.
38 citations · DOI
Teaching new assembly instructions at manual assembly workplaces has evolved from human supervision to digitized automatic assistance. Assistive systems provide dynamic support, adapt to the user needs, and alleviate perceived workload from expert workers supporting freshman workers. New assembly instructions can be implemented at a fast pace. These assistive systems decrease the cognitive workload of workers as they need to memorize new assembly instructions with each change of product lines. However, the design of assistive systems for the industry is a challenging task. Once deployed, people have to work with such systems for full workdays. From experiences made during our past project motionEAP, we report on design challenges for interactive worker assistance at manual assembly workplaces as well as challenges encountered when deploying interactive assistive systems for diverse user populations.
Computers in Human Behavior · 36 citations · DOI
Human Augmentation Technologies improve human capabilities using technology. In this study, we investigate the placebo effect of Augmentation Technologies. Thirty naïve participants were told to be augmented with a cognitive augmentation technology or no augmentation system while conducting a Columbia Card Task. In this risk-taking measure, participants flip win and loss cards. The sham augmentation system consisted of a brain–computer interface allegedly coordinated to play non-audible sounds that increase cognitive functions. However, no sounds were played throughout all conditions. We show a placebo effect in human augmentation, where a sustained belief of improvement remains after using the sham system and an increase in risk-taking conditional on heightened expectancy using Bayesian statistical modeling. Furthermore, we identify differences in event-related potentials in the electroencephalogram that occur during the sham condition when flipping loss cards. Finally, we integrate our findings into theories of human augmentation and discuss implications for the future assessment of augmentation technologies.
36 citations · DOI
With increasingly large smartphones, it becomes more difficult to use these devices one-handed. Due to a large touchscreen, users cannot reach across the whole screen using their thumb. In this paper, we investigate approaches to move the screen content in order to increase reachability during one-handed use of large smartphones. In a first study, we compare three approaches based on back-of-device (BoD) interaction to move the screen content. We then compare the most preferred BoD approach with direct touch on the front and Apple's Reachability feature. We show that direct touch enables faster target selection than the other approaches but does not allow interaction with large parts of the screen. While Reachability is faster compared to a BoD screen shift method, only the BoD approach makes the whole front screen accessible.
Flyables
2018 · 32 citations · DOI
Recent advances in technology and miniaturization allow the building of self-levitating tangible interfaces. This includes flying tangibles, which extend the mid-air interaction space from 2D to 3D. While a number of theoretical concepts about interaction with levitating tangibles were previously investigated by various researchers, a user-centered evaluation of the presented interaction modalities has attracted only minor attention from prior research. We present Flyables, a system adjusting flying tangibles in 3D space to enable interaction between users and levitating tangibles. Interaction concepts were evaluated in a user study (N=17), using quadcopters as operable levitating tangibles. Three different interaction modalities were evaluated to collect quantitative data and qualitative feedback. Our findings show which interaction modalities users prefer with Flyables. We conclude our work with a discussion and directions for future research within the domain of human-drone interaction.
28 citations · DOI
Detecting emotions while driving remains a challenge in Human-Computer Interaction. Current methods to estimate the driver’s experienced emotions use physiological sensing (e.g., skin-conductance, electroencephalography), speech, or facial expressions. However, drivers need to use wearable devices, perform explicit voice interaction, or require robust facial expressiveness. We present VEmotion (Virtual Emotion Sensor), a novel method to predict driver emotions in an unobtrusive way using contextual smartphone data. VEmotion analyzes information including traffic dynamics, environmental factors, in-vehicle context, and road characteristics to implicitly classify driver emotions. We demonstrate the applicability in a real-world driving study (N = 12) to evaluate the emotion prediction performance. Our results show that VEmotion outperforms facial expressions by 29% in a person-dependent classification and by 8.5% in a person-independent classification. We discuss how VEmotion enables empathic car interfaces to sense the driver’s emotions and will provide in-situ interface adaptations on-the-go.
EMGuitar
2018 · 27 citations · DOI
Mastering fine motor tasks, such as playing the guitar, takes years of time-consuming practice. Commonly, expensive guidance by experts is essential for adjusting the training program to the student's proficiency. In our work, we showcase the suitability of electromyography to detect fine-grained hand and finger postures in an exemplary guitar tutor scenario. We present EMGuitar, an interactive guitar tutoring system that assists students by reporting on play correctness and adjusts playback tempi automatically. We report person-dependent classification utilizing a ring of electrodes around the forearm with an F1 score of up to 0.89 on recorded calibration data. Furthermore, our system was well received, neither diminishing ease of use nor being disruptive for the participants. Based on the received comments, we identified the need for detailed play accuracy feedback down to individual chords, for which we suggest an adapted visualization and an algorithmic approach.
26 citations · DOI
Learning a musical instrument requires regular exercise. However, students are often on their own during their practice sessions due to the limited time with their teachers, which increases the likelihood of mislearning playing techniques. To address this issue, we present Let’s Frets - a modular guitar learning system that provides visual indicators and capturing of finger positions on a 3D-printed capacitive guitar fretboard. We based the design of Let’s Frets on requirements collected through in-depth interviews with professional guitarists and teachers. In a user study (N=24), we evaluated the feedback modules of Let’s Frets against fretboard charts. Our results show that visual indicators require the least time to realize new finger positions while a combination of visual indicators and position capturing yielded the highest playing accuracy. We conclude how Let’s Frets enables independent practice sessions that can be translated to other musical instruments.
26 citations · DOI
Improvising on the piano keyboard requires extensive skill development, which may reduce the feeling of immersion and flow for amateur players. However, being able to add simple musical effects greatly boosts a player's ability to express their unique playing style. To simplify this process, we designed an electromyography-based (EMG) system which integrates seamlessly into normal play by allowing musicians to modulate sound pitch using their thumb. We conducted an exploratory user study where users played a predefined melody and improvised using our system and a standard pitch wheel. Interview responses and survey answers showed that the EMG-based system supported the players' musical flow. Additionally, interviews indicated the system's capabilities to foster player creativity, and that players enjoyed experimenting with the effect. Our work illustrates how EMG can support seamless integration into existing systems to extend the range of interactions provided by a given interface.
26 citations · DOI
To date, approximately 20% of the world population lives with a level of cognitive impairment. In Western Europe, sheltered living facilities have emerged which collaboratively convey and train daily living skills for people with cognitive disabilities. This includes cooking as an important communal activity. However, tenants receive rudimentary cooking training since most facilities are affected by a worker shortage as they are driven on a voluntary basis. In this work, we investigate how digital in-situ assistance can be used to convey cooking instructions in kitchens. We conduct a user study (N=10) over two weeks in a sheltered living facility to evaluate the cooking performance and subjective perception between in-situ assistance and caretaker assistance. We find that caretaker assistance requires less time to prepare a meal when participants cooked previously with in-situ assistance. Our results are complemented by positive feedback on using in-situ instructions. We discuss how in-situ assistance enables independent cooking sessions in living environments for people with cognitive impairments.
Collaborations (5)
Confirmed researcher↔partner pairs from HU-FIS · gold-standard positives for the matching.
- ClaimGuard: Kollaborative Mensch-KI-Faktenchecks zur Abwehr von Desinformation · university
- ClaimGuard: Kollaborative Mensch-KI-Faktenchecks zur Abwehr von Desinformation · other
- ClaimGuard: Kollaborative Mensch-KI-Faktenchecks zur Abwehr von Desinformation · other
- ClaimGuard: Kollaborative Mensch-KI-Faktenchecks zur Abwehr von Desinformation · ngo
- easyTEM: Entwicklung von ressourceneffizienter Transmissionselektronenmikroskopie zur Demokratisierung ihres Einsatzes in der Materialforschung · other
Master data
Identity, organization, and contact details from HU-FIS.
- Name: Prof. Dr. Thomas Kosch
- Title: Prof. Dr.
- Faculty: Mathematisch-Naturwissenschaftliche Fakultät
- Institute: Institut für Informatik
- Research group: Human-Computer Interaction for Scientific Software
- Phone: +49 30 2093-41298
- HU-FIS profile: Source ↗
- Last scraped: 26.4.2026, 01:07:52