Take the Brain-Computer Interface Challenge!
Slide into the future of gaming at GW BOOTH #922.
Use your brain power to guide Tux the Penguin down a snowy slope, collect fish, and rack up points. No hands needed, just your mind!
To play the game, players wear a headband with sensors that read their brain signals. The headband is paired with a computer racing game via a machine learning algorithm that translates the commands a person is thinking into actions on the computer screen. Players guide Tux the Penguin down the slope by thinking "left" or "right," with the goal of picking up as many fish as they can before reaching the finish line.
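For readers curious what "translating thoughts into commands" can look like in software, below is a toy Python sketch of the general idea: a classifier maps a short window of EEG-derived features to a "left" or "right" steering command. The data, feature layout, and function names are invented for illustration; this is not the lab's actual code.

```python
# Toy sketch of a thought-to-command loop (illustrative only, synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend calibration data: one row of band-power features per trial,
# labelled by whether the player was thinking "left" (0) or "right" (1).
X_train = rng.normal(size=(200, 8))     # 200 trials x 8 features
y_train = rng.integers(0, 2, size=200)  # 0 = left, 1 = right

clf = LogisticRegression().fit(X_train, y_train)

def steer(features: np.ndarray) -> str:
    """Map one window of EEG features to a game command."""
    label = clf.predict(features.reshape(1, -1))[0]
    return "LEFT" if label == 0 else "RIGHT"

# One simulated one-second window of features from the headband.
print(steer(rng.normal(size=8)))        # e.g. "RIGHT"
```

In the real system, the features would come from the headband's sensors and the command would steer Tux rather than print to the screen.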
- Instructions to Play the Game
Option 1: Quick Start (Pre-Trained Algorithm)
Total estimated time: ~10 minutes
This option allows the player to jump into the game quickly, leveraging an algorithm that has already been trained on a wide dataset of brain wave patterns.
1. Before playing the game, remove any sweat and/or makeup to limit interference with the sensors and ensure good brain wave detection and translation. (Wipes will be provided.)
2. If necessary, tie back long hair to limit contact with the sensors behind the ears.
3. Remove eyeglasses while securing the headband. Once the headband is secure, put your eyeglasses back on with the temple tips resting on top of the headband.
4. A member of the GW team will help secure the headband on your head. You should be able to feel the sensors firmly on the forehead and behind the ears.
5. Receive a quick tutorial on how to play the game. You’ll be instructed to think “left” or “right” to guide the penguin’s movements.
6. Before beginning, make sure you are in a comfortable position in front of the laptop and relax the muscles in your face. Limit facial and body movement as much as possible.
7. After a practice round, begin playing. You’ll have the opportunity to play two games on two different racing courses. The system will interpret your brain signals using the pre-trained algorithm. The more you play, the more the machine learning algorithm will adapt slightly to your individual brain signals.
Option 2: Personalized Setup (Custom-Trained Algorithm)
Total estimated time: ~30 minutes
This option ensures the highest degree of accuracy by tailoring the algorithm to your unique brain wave patterns.
1. Follow Steps 1-3 above.
2. The system will analyze your individual brain wave patterns and fine-tune the algorithm for increased accuracy.
3. Once calibrated, enjoy gameplay that’s personalized to your brain signals.
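The personalized setup can be pictured as fine-tuning: a model already fit on data from many players is updated with a short calibration session from the current player. The Python sketch below illustrates that idea only; it uses synthetic data and scikit-learn's incremental SGDClassifier, not the lab's actual algorithm.

```python
# Illustrative sketch of per-player calibration (synthetic data throughout).
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)

# "Pre-trained" model: fit on a pooled dataset of many players.
pooled_X = rng.normal(size=(1000, 8))
pooled_y = rng.integers(0, 2, size=1000)     # 0 = left, 1 = right
model = SGDClassifier(loss="log_loss")
model.partial_fit(pooled_X, pooled_y, classes=np.array([0, 1]))

# Per-player calibration: a few labelled "think left" / "think right" trials.
calib_X = rng.normal(size=(60, 8))
calib_y = rng.integers(0, 2, size=60)
for _ in range(5):                           # a few passes over the short session
    model.partial_fit(calib_X, calib_y)

print(model.predict(rng.normal(size=(1, 8))))  # now tuned toward this player
```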
- Components of the BCI Game
- The computer game is an open-source winter sports racing game called Tux Racer.
- The sensor-based headband used by the GW research team and paired with the computer game is a commercially available product, the Muse headband.*
- The machine learning algorithm that bridges the computer game and the sensor-based headband was developed by GW Computer Science Professor Xiaodong Qu.
*Note that the headband (Muse 2) was not modified in any way from its intended commercial use. Instead, the innovation is in the algorithm that pairs two existing technologies.
- What is a Brain-Computer Interface?
A brain-computer interface (BCI) is a technology that establishes a direct communication pathway between the brain and an external device like a computer, prosthetic or software program. BCIs typically rely on sensors to detect brain activity, often measured in the non-invasive form of electrical signals through electroencephalography (EEG) or through more invasive methods such as implanted electrodes. The brain signals are processed to identify patterns corresponding to a user’s intentions. An algorithm then analyzes and interprets that data in real time, converting neural activity into digital signals to perform specific actions, like moving a cursor or a robotic limb.
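To make the "processed to identify patterns" step a little more concrete, here is a minimal, purely illustrative Python example of turning a raw EEG window into band-power features that a decoder could work with. The sampling rate, frequency bands, and synthetic signal are assumptions made for the example, not details of any particular system.

```python
# Illustrative EEG feature extraction: raw window -> power spectrum -> band powers.
import numpy as np
from scipy.signal import welch

fs = 256                                  # assumed sampling rate in Hz
t = np.arange(0, 2, 1 / fs)               # a two-second window
# Synthetic "EEG": a 10 Hz alpha rhythm buried in noise.
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=fs)

def band_power(lo, hi):
    """Approximate power in a frequency band by summing the spectrum."""
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])

features = {"alpha (8-13 Hz)": band_power(8, 13),
            "beta (13-30 Hz)": band_power(13, 30)}
print(features)   # a decoder would map features like these to an intended action
```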
BCIs can be classified as invasive, non-invasive, or partially invasive. Non-invasive BCIs, like the ones used in Professor Qu’s lab, use external sensors like EEG caps or sensor-based headbands to record and translate electrical activity in the brain. Artificial intelligence and machine learning are driving advances in this technology by helping process and interpret complex neural data, and by improving the accuracy and efficiency of signal decoding. Machine learning algorithms can identify patterns in brain activity and adapt to individual users, making BCIs more personalized. AI also enables real-time processing of large datasets, which are essential for responsive applications like robotic control.
There are numerous applications of BCIs. BCI technologies can help individuals with neurodegenerative diseases or paralysis regain mobility and communication, for example by controlling prosthetic limbs. BCIs could also have applications in gaming, virtual reality, and education.
- BCI Research in Professor Qu's Lab
Professor Xiaodong Qu's expertise lies at the intersection of machine learning and brain-computer interfaces. His machine learning research centers on ensemble methods and specialized approaches for time-series data, including long short-term memory (LSTM) and Transformer models. On the brain signal front, he specializes in non-invasive brain signals for both clinical and non-clinical applications.
Professor Qu and his students recently published a series of papers focused on advancing brain-computer interface technologies at the HCI International Conference in Washington, DC. The papers, a few of which are listed below, focused on making brain data analysis faster, more accurate, and more affordable, which can have real-world impacts in medicine, tech and accessibility tools.
Enhancing Representation Learning of EEG Data with Masked Autoencoders
This study found that a method called a "masked autoencoder" can help computers understand brain data (EEG) more quickly and accurately. By hiding parts of the data and training the model to guess the missing parts, the model learns to perform as well as traditional methods with only one-third of the training time. This study is one of the first to apply masked autoencoders, a self-supervised learning technique, to EEG data. While masked autoencoders have been successful in fields like natural language processing and computer vision, using them for EEG data is innovative: it allows models to learn efficiently by guessing missing data, cutting down on training time without sacrificing accuracy.
Why this matters: This means we can build brain-data models that are faster and less resource-intensive, which is useful for fields like medicine and neuroscience, where quick, efficient analysis can make a big difference.
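As a rough illustration of the masked-autoencoder idea (not the paper's architecture), the toy PyTorch sketch below hides most of the time patches in an EEG window and trains a small encoder-decoder to reconstruct only the hidden parts. All shapes are arbitrary, and zeroing out masked patches is a simplification of how a full masked autoencoder drops them.

```python
# Toy masked-autoencoder step on a synthetic EEG window (illustrative only).
import torch
import torch.nn as nn

torch.manual_seed(0)

batch, n_patches, patch_dim = 8, 32, 64          # e.g. 32 time patches per window
x = torch.randn(batch, n_patches, patch_dim)     # stand-in for patched EEG data

encoder = nn.Sequential(nn.Linear(patch_dim, 128), nn.GELU(), nn.Linear(128, 128))
decoder = nn.Sequential(nn.Linear(128, 128), nn.GELU(), nn.Linear(128, patch_dim))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

mask = torch.rand(batch, n_patches) < 0.75       # hide 75% of the patches
visible = x.masked_fill(mask.unsqueeze(-1), 0.0) # zero out the hidden patches

recon = decoder(encoder(visible))
loss = (recon - x)[mask].pow(2).mean()           # score only the hidden patches
loss.backward()                                  # one training step
opt.step()
print(float(loss))
```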
The Role of Kernel Size in EEG-based Gaze Prediction Models
By adjusting the way the model looks at EEG data, particularly the size of the sections (kernels) it analyzes, researchers managed to improve the accuracy of eye gaze predictions while cutting training time. This approach means the model can look at brain signals with finer detail, leading to a new best-in-class accuracy for predicting eye positions. By experimenting with kernel sizes, the study provides new insights on how to fine-tune CNN-transformer models for optimal EEG data processing. It demonstrates that even small adjustments in model architecture can have a significant impact on performance and efficiency.
Why this matters: These insights could lead to faster and more accurate systems that use brain signals for eye-tracking, potentially benefiting areas like assistive technology for people who can’t use traditional eye-tracking tools due to physical limitations.
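To show the knob this paper turns, the short PyTorch snippet below varies the temporal kernel size of a 1-D convolution sliding over a synthetic EEG window. The channel counts and kernel sizes are arbitrary examples, not the paper's configuration.

```python
# How kernel size changes what a convolution "sees" along the EEG time axis.
import torch
import torch.nn as nn

eeg = torch.randn(1, 8, 500)      # (batch, EEG channels, time samples)

for k in (3, 9, 25):              # small vs. large temporal kernels
    conv = nn.Conv1d(in_channels=8, out_channels=16, kernel_size=k, padding=k // 2)
    out = conv(eeg)
    # A larger kernel means each output step summarizes a longer slice of signal.
    print(f"kernel_size={k:>2}  output shape={tuple(out.shape)}")
```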
Advancing EEG-Based Gaze Prediction Using Depthwise Separable Convolution and Enhanced Pre-Processing
The researchers improved eye-tracking from brain data by cleaning up the data and using a special type of model (depthwise separable convolution) that extracts information more efficiently. This led to more accurate predictions of where someone is looking, reducing errors to the lowest yet recorded. The study combines depthwise separable convolutions with data clustering in a way that hasn’t been done before for EEG-based eye-tracking. This method enhances the model’s ability to extract detailed information and significantly improves accuracy over prior methods. By optimizing both the model structure and how the data is processed, it sets a new standard for accuracy in EEG-based gaze prediction, showing how to achieve more with less computational power.
Why this matters: Better eye-tracking from brain data could enhance various technologies, like helping people with disabilities control computers with their eyes or improving user experiences in virtual reality.
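The efficiency gain behind a depthwise separable convolution is easy to see by counting parameters: a per-channel (depthwise) convolution followed by a 1x1 (pointwise) convolution stands in for one standard convolution. The sizes in this sketch are arbitrary, not the paper's model.

```python
# Standard vs. depthwise separable 1-D convolution on an EEG-shaped input.
import torch
import torch.nn as nn

in_ch, out_ch, k = 8, 32, 15

standard = nn.Conv1d(in_ch, out_ch, kernel_size=k, padding=k // 2)
separable = nn.Sequential(
    nn.Conv1d(in_ch, in_ch, kernel_size=k, padding=k // 2, groups=in_ch),  # depthwise
    nn.Conv1d(in_ch, out_ch, kernel_size=1),                               # pointwise
)

def n_params(module):
    return sum(p.numel() for p in module.parameters())

eeg = torch.randn(1, in_ch, 500)
assert standard(eeg).shape == separable(eeg).shape   # same output shape
print("standard conv parameters:", n_params(standard))         # ~3,900
print("depthwise separable parameters:", n_params(separable))  # ~400
```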
Enhancing Eye-Tracking Performance Through Multi-task Learning Transformer
This paper introduced a method that tackles multiple tasks at once, which helps the model understand EEG signals more comprehensively. The researchers combined several tasks into one training process, which reduced the need for extra training steps and cut down on computing costs. This is among the first studies to integrate a multi-task learning framework specifically for EEG-based eye-tracking. Instead of focusing on a single task, the model handles multiple tasks simultaneously, improving its ability to generalize and adapt.
Why this matters: This method is more flexible and cheaper to run, making it possible to use these models in a variety of ways, such as improving EEG applications in fields like healthcare and cognitive research without the need for expensive resources.
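The multi-task idea can be sketched as one shared encoder feeding several task heads that are trained together with a combined loss. The tasks, sizes, and architecture below are placeholders for illustration, not the paper's model.

```python
# One shared EEG encoder, two task heads, one combined training step.
import torch
import torch.nn as nn

torch.manual_seed(0)

encoder = nn.Sequential(nn.Flatten(), nn.Linear(8 * 500, 256), nn.ReLU())
gaze_head = nn.Linear(256, 2)    # e.g. predicts (x, y) gaze position
aux_head = nn.Linear(256, 4)     # e.g. a 4-class auxiliary task

params = list(encoder.parameters()) + list(gaze_head.parameters()) + list(aux_head.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

eeg = torch.randn(16, 8, 500)              # a batch of EEG windows
gaze_target = torch.randn(16, 2)
aux_target = torch.randint(0, 4, (16,))

shared = encoder(eeg)                      # features shared by every task
loss = nn.functional.mse_loss(gaze_head(shared), gaze_target) \
     + nn.functional.cross_entropy(aux_head(shared), aux_target)
loss.backward()                            # one backward pass serves all tasks
opt.step()
print(float(loss))
```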
Visit Professor Qu’s website to learn more about this research and lab.