
Muscle Wave Classifier


Overview

Electromyography (EMG) measures the electrical activity generated by skeletal muscles during contractions, typically using surface electrodes. This technique enables the identification of specific gestures by analyzing the signals, making it particularly useful for applications in human-computer interaction. By interpreting muscle activity, devices can effectively respond to user intentions, thereby enhancing assistive technologies for individuals with mobility impairments.

In this project, a neural network is trained to classify the state of a user’s dominant fist (open or closed) using EMG data. The prediction made by the classifier is then used to control a remote-controlled car. The following sections discuss the methodology and results obtained.

Data Acquisition

To record the EMG signals, a Muscle BioAmp Candy sensor interfaced with an Arduino UNO was used. This setup allowed real-time recording of muscle activity, with the sensor placed over the ulnar nerve region to capture signals related to palm gestures.

The dataset was collected from 13 participants, each contributing 1,000 samples, resulting in a total of 13,000 labeled EMG samples. Each sample comprises 256 data points, representing muscle activity over time. The data was recorded at a sampling rate high enough to capture the high-frequency content of the EMG signal.
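The acquisition code is not reproduced here; as a rough sketch, assuming the Arduino streams one value per line over USB serial, a single 256-point sample could be collected with pyserial (the port name and baud rate below are assumptions):

import numpy as np
import serial  # pyserial

PORT, BAUD = "/dev/ttyUSB0", 115200  # assumed serial settings
SAMPLE_LEN = 256  # data points per sample, as described above

def read_sample(conn: serial.Serial) -> np.ndarray:
    """Read one EMG sample: SAMPLE_LEN values, one per serial line."""
    points = []
    while len(points) < SAMPLE_LEN:
        line = conn.readline().strip()
        if line:
            points.append(float(line))
    return np.array(points, dtype=np.float32)

with serial.Serial(PORT, BAUD, timeout=1) as conn:
    sample = read_sample(conn)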

Data Collection Protocol:

  • Each participant wore the EMG sensor on the forearm of their dominant arm.
  • Visual cues displayed on a screen guided participants in performing the gestures.
  • The task involved alternately opening and closing the fist for a fixed duration, while maintaining consistency and minimizing movement artifacts.
  • The raw EMG signal was preprocessed to extract its envelope, a smoothed representation of the EMG activity that highlights overall muscle activation (one common approach is sketched below).
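The exact envelope method is not specified here; a common choice is full-wave rectification followed by moving-average smoothing, as in this minimal NumPy sketch (the window size is a guess):

import numpy as np

def emg_envelope(raw: np.ndarray, window: int = 16) -> np.ndarray:
    """Approximate the EMG envelope: rectify, then smooth with a moving average."""
    rectified = np.abs(raw - raw.mean())  # remove the DC offset, then full-wave rectify
    kernel = np.ones(window) / window     # simple moving-average kernel
    return np.convolve(rectified, kernel, mode="same")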

Each sample was labeled as:

  • 0: Open fist
  • 1: Closed fist

The processed data is stored in .npy format for easy loading with NumPy. The dataset is structured such that each file corresponds to one participant's samples, with the associated labels stored in a separate array.
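Assuming one samples file and one labels file per participant (the file names below are hypothetical), the full dataset can be assembled and split for training as follows; the exact train/test split is not described in this post:

import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical file layout: one samples/labels pair per participant.
X = np.concatenate([np.load(f"participant_{i:02d}_samples.npy") for i in range(13)])
Y = np.concatenate([np.load(f"participant_{i:02d}_labels.npy") for i in range(13)])
assert X.shape == (13_000, 256) and Y.shape == (13_000,)

# A held-out test split (an assumption, not taken from the post).
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=42)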

The preprocessed dataset is available for download here.

Model Design and Training

The architecture first projects the 256-dimensional input into a higher-dimensional space of 512 dimensions, then narrows it through layers of 128 and 64 dimensions before reaching a single output neuron. The final output is a value between 0 and 1, representing the probability that the fist is closed. The inputs are normalized before being fed into the model.

  1. Input Normalization: Standardizes inputs for consistent performance.
  2. Hidden Layers:
    • 512 neurons: Activation function - ReLU
    • 128 neurons: Activation function - ReLU
    • 64 neurons: Activation function - ReLU
  3. Output Layer: A single neuron with a sigmoid activation function.

The model is implemented in TensorFlow and trained using the Adam optimizer with a learning rate of 0.001, for 25 epochs with a batch size of 32, using binary cross-entropy as the loss function.

import tensorflow as tf

# The Normalization layer learns feature-wise mean and variance from the
# training data, so it must be adapted before the model is trained.
normalizer = tf.keras.layers.Normalization(axis=-1)
normalizer.adapt(X_train)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(256,)),
    normalizer,
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')  # probability of a closed fist
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss='binary_crossentropy',
              metrics=['accuracy'])
model.fit(X_train, Y_train, epochs=25, batch_size=32, validation_split=0.2)

Below are the plots of the training vs. validation accuracy, and of the training vs. validation loss. The model has converged to an acceptable minimum.

Real-time Interface

A remote-controlled car is programmed using an ESP32 to receive real-time commands from a server via Wi-Fi. Communication between the server and the ESP32 is established using Python's socket library.
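The car's firmware is not shown in this post; purely as an illustration of the receiving side, here is a MicroPython-style sketch in which the Wi-Fi credentials, server address, pin assignment, and command bytes are all assumptions:

import network
import socket
from machine import Pin

# Assumed Wi-Fi credentials; station mode.
wlan = network.WLAN(network.STA_IF)
wlan.active(True)
wlan.connect("SSID", "PASSWORD")
while not wlan.isconnected():
    pass

motor = Pin(5, Pin.OUT)  # hypothetical motor-driver enable pin

s = socket.socket()
s.connect(("192.168.1.100", 5000))  # assumed server address
while True:
    cmd = s.recv(1)
    if not cmd:  # connection closed by the server
        break
    motor.value(1 if cmd == b"1" else 0)  # '1' = forward, '0' = stop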

The server continuously collects 256 EMG data points from a user equipped with the EMG sensor and passes them to the model for inference. Based on the predicted gesture, the server transmits a control signal to the ESP32, enabling responsive car movements: the car moves forward when the model predicts a closed fist and stops when it predicts an open fist. The process runs in a continuous loop until the connection is terminated.
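A minimal sketch of this loop is shown below, assuming the ESP32 connects to the server as a TCP client, the trained model and the read_sample helper from the earlier sketch are in scope, and b'1'/b'0' are the agreed-upon command bytes:

import socket
import serial  # pyserial

HOST, PORT = "0.0.0.0", 5000  # assumed listening address and port

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((HOST, PORT))
server.listen(1)
car, _ = server.accept()  # wait for the ESP32 to connect

with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as conn:
    try:
        while True:
            sample = read_sample(conn)  # collect 256 EMG points
            prob = model.predict(sample[None, :], verbose=0)[0, 0]
            car.sendall(b"1" if prob > 0.5 else b"0")  # forward / stop
    finally:
        car.close()
        server.close()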

graph LR
    A[/EMG Sensor + Arduino/] -->|EMG Signal| B{Model}
    B -->|Prediction| C(Server)
    C -->|Control Signal| D(RC Car)
    D --> A

Conclusion

This project has been an invaluable learning experience, providing a solid foundation in integrating machine learning with real-time applications. Looking ahead, there is significant scope for further development. I plan to expand the dataset by incorporating a wider range of gestures, experiment with advanced model architectures, and explore transfer learning techniques to enhance model performance and adaptability.

The entire codebase for this project can be found on GitHub.