'Finger Movement Classification of Myoelectric Signals using Deep Learning': Summer Internship @ NUS

Hello readers!

This is Venkatesh Bharadwaj S, a final-year undergraduate pursuing Instrumentation and Control Engineering at NIT Trichy. As an empathetic researcher wishing to use technology to improve the lives of the global community, I had the opportunity to pursue an internship at the National University of Singapore under Dr. Hongliang Ren, in the field of Biomedical Robotics at the Medical Mechatronics laboratory of the Department of Biomedical Engineering. The problem statement I tackled was to classify a person's finger movements and implement the same on a robotic arm using deep learning techniques. This research has far-reaching applications in the healthcare industry. The entire process, from data collection to real-time implementation, was quite challenging, but very fruitful!

With gradual advancements in medical science, robotic arms are rising to prominence in applications such as assisting doctors to perform remote surgeries from a faraway area with just the live feed from a camera.

Gesture recognition plays a major role in the field of rehabilitation. It is done using various techniques, such as:

1. Using depth for gesture tracking and recognition with devices such as Leap Motion and Microsoft Kinect.

2. Retrieving the finger trajectory and the strength of the muscle signal via a multi-sensory system with additional data from a glove.

An Electromyography (EMG)-based method is a non-invasive technique that can be adopted for classifying actions such as the flexion of fingers. Surface EMG signals can be retrieved non-invasively to predict a person's movements.

EMG signals are biomedical signals that measure the electric current generated during muscle contraction. They can be recorded using intramuscular EMG (iEMG), where needles inserted into the body record the signal directly from the muscles, or surface EMG (sEMG), a non-invasive approach where the signals are recorded by EMG electrodes placed on the surface of the skin.

Recently, Deep Learning has proved to be an effective solution for classification, regression and similar problems compared to classical Machine Learning. Classification has been done on EEG data using Convolutional Neural Networks (CNNs), enabling end-to-end learning in which raw data is used directly without hand-crafted feature selection. CNNs have also been applied to regression, where feature learning and regression modeling are conducted simultaneously.

Hence, Deep Learning can prove much more effective for such intricate problems. The classification of sEMG signals here is done using Convolutional Neural Networks, since the feature extraction step of classical Machine Learning loses information and reduces recognition accuracy.

The diagrams below illustrate the gestures classified, namely flexion of a) Thumb b) Index c) Ring d) Little e) Rest, and the way the arm has to be placed for data collection.

It has been identified that the Myo armband can be beneficial in the medical domain, letting users examine the EMG from their muscles and control their medical devices. It acquires spatial data from its IMU at a data frequency of 50 Hz, and the gesture data is obtained from 8 EMG pods, also sampled at 50 Hz. The data is collected using PyoConnect, a Python library for retrieving signals from the Myo armband on Linux. Data was collected from 10 people aged 19 to 25 years; each action was recorded 20 times per person, spanning 3 seconds each.
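
The collection loop can be sketched as follows. This is a minimal illustration only: a simulated stream stands in for PyoConnect's EMG callback (the real PyoConnect hooks are not shown), and `record_action` is a hypothetical helper, not part of any library.

```python
import numpy as np

SAMPLE_RATE_HZ = 50   # EMG pod data frequency used in the article
RECORD_SECONDS = 3    # one keypress records a 3-second window
N_CHANNELS = 8        # the Myo armband has 8 EMG pods

def record_action(stream, n_samples=SAMPLE_RATE_HZ * RECORD_SECONDS):
    """Collect one fixed-length recording (n_samples x 8 channels)
    from any iterable yielding one 8-channel EMG frame per tick."""
    buf = []
    for frame in stream:
        buf.append(frame)
        if len(buf) == n_samples:
            break
    return np.asarray(buf, dtype=float)

# A simulated stream stands in for PyoConnect's EMG callback here.
fake_stream = iter(np.random.randn(200, N_CHANNELS))
recording = record_action(fake_stream)
print(recording.shape)   # (150, 8)
```

In the real setup, the loop body would be triggered by the keypress mentioned below, so that each recording starts only once the gesture is held.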

The samples were recorded for 5 actions, namely thumb flexion, index flexion, ring flexion, little-finger flexion and the rest action; flexion here indicates the bending action. The gesture was to be recognized only when held, not during the transition, so each data sample was recorded only after a keypress on the keyboard. Thumb movements occur at the carpometacarpal joint, as well as at two joints involving the phalanx, with the thenar muscles driving the flexion of the thumb. The hypothenar muscles are the ones activated during movement of the little finger.

The movement of the other fingers results from a combination of several joints, namely the metacarpophalangeal, proximal interphalangeal and distal interphalangeal joints. Since the muscle activity lies in close proximity to the wrist, retrieving signals for the various finger movements is slightly complex: the action of one finger can affect the others through the interconnection of the finger joints. So, the Myo armband was placed between the antebrachial muscles and the carpus to get effective data samples from the users.

Our algorithm involves a segmentation-based collection style: the 8-channel EMG data was collected and segmented into 90-sample windows, around 20 times of 3 seconds each for every action. Different signal lengths and numbers of recordings were considered when segmenting the data samples. Data segmentation plays a major part in this experiment, as the data must be fed in with an equal number of samples for each action.
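
The windowing step might look like this in NumPy. `segment` is a hypothetical helper, and dropping any trailing partial window is an assumption made so that every action contributes windows of identical shape.

```python
import numpy as np

def segment(recording, window=90):
    """Split an (n_samples, 8) recording into equal, non-overlapping
    90-sample windows; any trailing partial window is dropped so every
    action contributes windows of identical shape."""
    n = (recording.shape[0] // window) * window
    return recording[:n].reshape(-1, window, recording.shape[1])

rec = np.random.randn(150, 8)   # one 3-second recording at 50 Hz
segs = segment(rec)
print(segs.shape)               # (1, 90, 8)
```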

Raw surface EMG has a frequency content of 6-500 Hz, with the spectral power dominant between 20-150 Hz. So we filter out frequencies below 20 Hz with a high-pass filter applied to the mean-corrected EMG signal, and remove frequencies above 450 Hz with a low-pass filter, using a 4th-order Butterworth digital filter.
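
A hedged sketch of this band-pass stage using SciPy's Butterworth design follows. Note the demo sampling rate of 1 kHz is chosen only so that the 450 Hz cut-off stays below Nyquist; it is not the armband's rate, and `bandpass_emg` is an illustrative helper, not the project's code.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_emg(emg, fs, low=20.0, high=450.0, order=4):
    """Mean-correct each channel, then band-pass filter with a
    4th-order Butterworth filter (20-450 Hz, as described above)."""
    emg = emg - emg.mean(axis=0)          # remove DC offset
    nyq = fs / 2.0
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, emg, axis=0)    # zero-phase filtering

# Demo: a 5 Hz drift component the filter should suppress plus a
# 100 Hz component it should keep, on 8 identical channels.
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
raw = np.column_stack([np.sin(2 * np.pi * 5 * t) +
                       np.sin(2 * np.pi * 100 * t)] * 8)
clean = bandpass_emg(raw, fs)
print(clean.shape)   # (1000, 8)
```

`filtfilt` runs the filter forward and backward, which doubles the effective order but avoids phase distortion.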

We then apply a linear-envelope signal-processing technique: the rectified signal is passed through a 4th-order Butterworth digital low-pass filter with a cut-off frequency of 4-10 Hz. This reduces the frequency content of the EMG and lowers memory storage, thereby making the signal easier to interpret and the onset of activity easier to detect.
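
The linear-envelope step can be sketched with SciPy as below; the 5 Hz cut-off is one value picked from the 4-10 Hz range mentioned, and the synthetic "burst" signal is purely illustrative.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def linear_envelope(emg, fs, cutoff=5.0, order=4):
    """Full-wave rectify, then low-pass filter to obtain a smooth
    linear envelope (cut-off from the 4-10 Hz range in the text)."""
    rectified = np.abs(emg)
    b, a = butter(order, cutoff / (fs / 2.0), btype="low")
    return filtfilt(b, a, rectified, axis=0)

# Synthetic "burst": muscle activity only in the second half.
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
burst = (np.sin(2 * np.pi * 80 * t) * (t > 0.5))[:, None]
env = linear_envelope(burst, fs)
print(env.shape)   # (1000, 1)
```

The envelope rises where the burst begins, which is exactly what makes onset detection straightforward.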

1000 samples were collected in total: 10 subjects, 20 recordings each, with 90x8 features for each action. 80% of the data was used for training and the remaining 20% as the testing dataset. The analysis was done in a two-fold manner.
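
The 80/20 split can be sketched with a simple shuffled index split; random data stands in for the real segments here.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 90, 8))   # 1000 segments of 90x8 features
y = rng.integers(0, 5, size=1000)        # labels for the 5 actions

idx = rng.permutation(len(X))            # shuffle before splitting
cut = int(0.8 * len(X))                  # 80/20 train/test boundary
X_train, X_test = X[idx[:cut]], X[idx[cut:]]
y_train, y_test = y[idx[:cut]], y[idx[cut:]]
print(X_train.shape, X_test.shape)       # (800, 90, 8) (200, 90, 8)
```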

The following images depict the network architecture, a comparison of the accuracies of various DL and ML architectures, and the analysis of each finger action using a confusion matrix.
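
Since the exact architecture lives in the images and isn't reproduced in the text, here is only a plausible 1-D CNN sketch in Keras for 90x8 windows and 5 classes; every layer count and size below is an assumption, not the paper's network.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical 1-D CNN for 90-sample, 8-channel EMG windows, 5 classes.
# Layer counts and sizes are illustrative, not the paper's architecture.
model = keras.Sequential([
    layers.Input(shape=(90, 8)),
    layers.Conv1D(32, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, kernel_size=3, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(5, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Convolving along the time axis while treating the 8 pods as channels is the usual way to let a CNN learn temporal EMG features end-to-end.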

There were a few limitations while performing this project.

  1. When IMU data was combined with EMG data for classification, the model was heavily confused by the spatial data, as the orientation and degree of bending of one person's fingers differ from another's.
  2. If the classification were done to make it a personalized device suited to one particular person, the spatial data from the IMU, such as that from the accelerometer and gyroscope, would have been of great use.
  3. The model was confused further by the additional IMU data. Moreover, the middle finger could not be detected properly, as its flexion heavily impacts the muscles involved in moving the other fingers.
  4. Due to these limitations and the poor accuracy obtained on its inclusion, it was decided not to consider the data samples collected for the middle finger.
  5. Initially, aiming for better performance than Machine Learning, the extracted features were fed as inputs to an Artificial Neural Network, but the results were not satisfactory for multi-class classification.

This internship was a one-of-a-kind experience that helped me grasp the complexity of various Deep Learning techniques. It also helped me hone my practical skills: implementing those techniques on temporal data, processing the raw data and retrieving noiseless data from the sensors using suitable filters. The scope of Deep Learning in biomedical signal processing applications is huge; with better learning algorithms applied to processed signals, one can attain higher prediction accuracy. My key takeaways in terms of software skills are expertise in programming in Python with the Keras framework and the ability to use it to implement a deep learning model for any dataset.

The time I spent with Godwin at the SINAPSE labs during the implementation phase was an absolute eye-opener. My model was tested on the modular robotic gripper. One thing that became crystal clear was that the future of healthcare is in the hands of AI, and that's exactly what I want to continue working on. This internship gave me the freedom to independently conduct my research and learn from my mistakes with ample guidance from the research scholar and the Professor. I would like to thank Dr. Ren and Mobarak, my guide throughout the internship, for their constant support and guidance, clarifying my doubts and giving me valuable inputs. I would also like to convey my heartfelt gratitude to my co-interns Jose, Ravikiran and Rajiv for their motivation and for helping me when I couldn't get hold of a certain concept. Also, I would like to thank Aditya Vivek Thota, LinkedIn campus editor, for motivating me to publish this!

Last but not least, it nurtured my craving for further knowledge and a systematic mindset to pursue active research.

The outcome of this internship has been documented as a research paper accepted for presentation at the IEEE International Conference on Robotics and Biomimetics (ROBIO) 2018, Kuala Lumpur, a B1 (Qualis) ranked conference.

Link to my worksite: Venkatesh's research @ NUS

#IndiaStudents #StudentVoices

Nishanth Sanjeevi

Senior Software Engineer at Microsoft | Technical Lead at Surface

6y

We did do a prosthetic arm very recently

Martin Kuester

Software Developer experienced in Aerospace, Telephone, Computer Operating Systems, HA/DR and Applications.

6y

That’s impressive!

Sandee Herman

Global Goodwill Ambassador GGA GH

6y

Inspirational Worlds Organization Knowledge2Power.org

Michael Daly

Creating Opportunities at the Intersection of Assistive Technology, User Experience, and Community Management

6y

I saw a demo of myoelectric control recently. It began with errors, but as the user learned the tool and the tool learned the user, it was amazing the accuracy that developed over a very short time.

Tanveer Teranikar (PhD)

Post Doctoral Associate | Adjunct Faculty

6y

Chaitanya Sardesai — a database of EMG signals for the different types of movements could be created.
