Interactive and Wearable Robotic Medusa Headset
Cyber-Physical Systems
Project Duration: 5 weeks
Team Size: 3 members (a mechanical engineer, an electrical engineer, and a member with a fashion background)
My Responsibilities: Mechanical design of the robotic arms and housings (including mechanism design of linkages and gears, and DFM), 3D modelling (CAD and movement simulations), rapid prototyping (3D printing and laser cutting), coding (Arduino IDE and Processing), machine learning and training (Wekinator), and electronics (circuit design and soldering).
This is an interactive robot controlled by the user’s facial expressions (eyebrow and lip movements). It combines machine learning, mechanical systems, coding, electrical engineering, and various hands-on manufacturing techniques, and gives users autonomy over how the snakes move and respond. Inspired by the Greek mythological figure Medusa, the project brings her hair of snakes to “life” with 3 carefully built robotic arms of 4 degrees of freedom each, 6 LED lights, a housing for the electrical systems and the arms, and 3 pairs of 3D-printed snake jaws. The project was completed over a period of 5 weeks during my first term of Innovation Design Engineering.
BRIEF OVERVIEW
Final Product
The aim of the Cyber Physical Systems module is to undertake, over 5 weeks, an investigation into how physical computing, connected systems, and machine learning are being used to tackle complex problems through the design of expressive, kinetic objects.
Mechanism Design
CAD and 3D Printing
Software
Machine Learning (Wekinator)
To capture the user’s facial and hand gestures, software called VisionOSC is used. VisionOSC uses the Apple Vision Framework to detect various features and send them via OSC. It plots 21 key points on each hand and 73 key points on each face, and tracks their x, y coordinates as well as confidence scores. VisionOSC sends the detected features as OSC messages to Wekinator, a machine learning tool that can learn from the input data and generate output signals. Each OSC message starts with the width and height of the frame, followed by the number of detected objects and the data for each object.
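The message layout described above can be sketched as a small unpacking routine. This is a minimal illustration, not code from the project: the exact ordering of fields in a real VisionOSC payload is an assumption based on the description (frame width, frame height, object count, then x/y/confidence triples for each of the 73 face key points).

```cpp
#include <cstddef>
#include <vector>

// One tracked landmark: x, y position plus a confidence score.
struct Landmark {
    float x, y, confidence;
};

// Unpack a flattened VisionOSC-style face message. The field order used here
// (width, height, object count, then per-point x/y/confidence) is an
// assumption drawn from the description above.
std::vector<Landmark> unpackFaceMessage(const std::vector<float>& msg,
                                        float& frameW, float& frameH) {
    frameW = msg[0];
    frameH = msg[1];
    std::size_t count = static_cast<std::size_t>(msg[2]);  // detected faces
    const std::size_t kPointsPerFace = 73;                  // face key points
    std::vector<Landmark> points;
    for (std::size_t i = 0; i < count * kPointsPerFace; ++i) {
        std::size_t base = 3 + i * 3;
        points.push_back({msg[base], msg[base + 1], msg[base + 2]});
    }
    return points;
}
```

In the actual pipeline this unpacking happens inside the receiving sketch; the point here is only the shape of the data Wekinator consumes.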
Processing
As mentioned above, two instances of Wekinator are set up, trained, and run. Both models use a regression output type, which produces a continuous value between 0 and 1; the output value depends on the intensity or position of the gesture. For the face gesture model, the eyebrow height and mouth width are the input features, whereas for the hand gesture model, the hand angle is the input feature. The face gesture model generates two outputs: one corresponding to the eyebrow height, and one corresponding to the mouth width. The images show the settings for each Wekinator instance.
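Before a continuous 0-1 regression output can drive a motor, it has to be scaled onto an actuator range. A minimal sketch of that mapping, assuming a typical 0-180 degree hobby-servo sweep (the range is illustrative, not a value taken from the project):

```cpp
// Map a Wekinator regression output (continuous, 0 to 1) onto a servo angle.
// The 0-180 degree range is a common hobby-servo sweep and is an assumption
// here, not a documented project parameter.
int regressionToServoAngle(float value, int minDeg = 0, int maxDeg = 180) {
    // Guard against the model emitting slightly out-of-range values.
    if (value < 0.0f) value = 0.0f;
    if (value > 1.0f) value = 1.0f;
    return minDeg + static_cast<int>(value * (maxDeg - minDeg) + 0.5f);
}
```

The same linear mapping applies whichever model (eyebrow height or mouth width) produced the value.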
Processing is used to receive the OSC messages from Wekinator before the information is sent on to the Arduino over serial.
On the Arduino side, a serial port is set up at the same baud rate as the Processing output. The loop code then uses the message ID in a switch-case statement to determine the correct output to send to the motor controller.
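The ID-plus-switch-case routing described above can be modelled as a pure function. This is a sketch of the logic only, not the project's Arduino sketch: the ID assignments (0 for the eyebrow output, 1 for the mouth output) and the PWM scaling are illustrative assumptions.

```cpp
// Model of the Arduino loop's routing: each serial message carries an ID and
// a 0-1 value, and a switch-case picks the motor-controller channel to drive.
// The ID-to-channel mapping below is a hypothetical example.
struct MotorCommand {
    int channel;  // motor-controller channel to drive (-1 = ignore)
    int pwm;      // 0-255 duty cycle written to that channel
};

MotorCommand routeMessage(int id, float value) {
    int pwm = static_cast<int>(value * 255.0f);  // scale 0-1 output to PWM
    switch (id) {
        case 0:  return {0, pwm};  // eyebrow-height output -> arm motion
        case 1:  return {1, pwm};  // mouth-width output    -> jaw servos
        default: return {-1, 0};   // unknown ID: ignore the message
    }
}
```

Keeping the dispatch in one switch makes it easy to add further gesture channels later by adding a case.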
Circuit Design