Our work “Towards a Humanoid-Oriented Movement Writing”, by Adrian Stoica, Hyung Ju Suh, Steven M. Hewitt, Sarah Bechtle, Anna Gruebler, and Yumi Iwashita, was accepted at the IEEE International Conference on Systems, Man, and Cybernetics 2017.
Presenting “Data Visualization for Enterprise Products” at Visualising Data London
Together with Hugh Leoidsson, we presented “Data Visualization for Enterprise Products” at Visualising Data London!
Our work was accepted for publication in the IEEE Transactions on Affective Computing
Our work “Design of a Wearable Device for Reading Positive Expressions from Facial EMG Signals”, by Anna Gruebler and Kenji Suzuki, was accepted for publication in the special issue on Technologies for Affect and Wellbeing in the IEEE Transactions on Affective Computing.
Our work was published in the Journal of Autism and Developmental Disorders
Our work “Brief Report: The Smiles of a Child with Autism Spectrum Disorder During an Animal-assisted Activity May Facilitate Social Positive Behaviors—Quantitative Analysis with Smile-detecting Interface”, by Atsushi Funahashi, Anna Gruebler, Takeshi Aoki, Hideki Kadone and Kenji Suzuki, was published in the Journal of Autism and Developmental Disorders.
Our work on “The Smiles of a Child with Autism Spectrum Disorder” in Chunichi Newspaper

Our work on “The Smiles of a Child with Autism Spectrum Disorder” was covered in an article in the Chunichi Newspaper (in Japanese).
Seminar: A Wearable Interface for Facial Expression Reading to Describe and Augment Interaction
I will give a seminar on “A Wearable Interface for Facial Expression Reading to Describe and Augment Interaction” as part of the Departmental Seminar Series in the School of Computer Science and Electronic Engineering at the University of Essex.
Date: June 19th, 2013
Location: 1N1.4.1
Time: 3:30 pm
Facial expressions are a continuous source of information about a person’s internal state and are used daily by humans to interact with each other. Even though great advances have been made in facial expression recognition, mostly from video, there are situations where it is preferable to recognize facial expressions in an alternative way, namely with a wearable interface. Such an interface must record facial expressions continuously, be unobtrusive, and be wearable by anyone, anywhere. A novel method to read, display, and transmit facial expressions uses distal electromyographic (EMG) signals captured from the side of the face.
By reading facial expressions it is possible to describe interaction, such as the characteristics of Smile Sharing and the positive emotions expressed by children with autism spectrum disorders during dog-assisted activities. Facial expressions can also enhance interaction: controlling an affective avatar in a virtual world, activating a robotic facial prosthetic to restore expressiveness to hemifacially paralyzed patients, and coaching robots interacting with humans towards desired behaviors. A minimal sketch of the kind of processing pipeline involved follows below.
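To make the idea concrete, here is a minimal illustrative sketch of how distal facial EMG might be turned into an expression label: band-pass filter the raw signal to the typical surface-EMG band, compute a root-mean-square amplitude feature per channel, and train a standard classifier. All parameters (sampling rate, filter band, channel count) and the synthetic data are assumptions made for illustration, not the actual system presented in the seminar.

import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

FS = 1000  # sampling rate in Hz (illustrative assumption)

def bandpass(x, low=20.0, high=450.0, fs=FS, order=4):
    # Keep the band where most surface-EMG energy lies.
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=-1)

def rms(x):
    # Root-mean-square amplitude per channel, a common EMG feature.
    return np.sqrt(np.mean(x ** 2, axis=-1))

# Synthetic stand-in data: 200 one-second windows from 2 distal electrodes;
# "smile" windows (label 1) get slightly stronger simulated muscle activity.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)
raw = rng.normal(size=(200, 2, FS)) * (1.0 + 0.5 * y)[:, None, None]

X = np.array([rms(bandpass(w)) for w in raw])  # one feature vector per window
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")

In a real system the feature set and classifier would of course be chosen and validated against recorded EMG data; the RMS-plus-SVM combination above is only one common baseline.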
Leverhulme Fellow at the University of Essex
I have become a Visiting Fellow in the School of Computer Science and Electronic Engineering at the University of Essex, working in the Embedded and Intelligent Systems (EIS) Research Group.
Lecture on “Artificial Intelligence applied to Medicine”
I will give a lecture on “Artificial Intelligence applied to Medicine” at the Universidad Santo Tomás, Temuco, Chile.
The lecture will take place on April 16th at 19:30.