Exciting breakthrough in neuroscience: Researchers at Stanford have developed a brain-computer interface that decodes *imagined speech*—words people silently “say” in their minds—with up to 74% accuracy using implanted microelectrodes in the motor cortex. Even more compelling: a “thought password” ensures only intentional speech is decoded, safeguarding privacy. This innovation holds profound promise for restoring natural communication for patients with paralysis. Read more via Stanford Medicine’s public write-up—no paywall: https://lnkd.in/enQcPYuj #Neurotech #BCI #AssistiveTech #HealthInnovation
Stanford Researchers Develop Brain-Computer Interface for Silent Speech
More Relevant Posts
-
Detecting causality in neural spike trains using a new technique
Understanding the brain's functional architecture is a fundamental challenge in neuroscience. The connections between neurons ultimately dictate how information is processed, transmitted, stored, and retrieved, thus forming the basis of our cognitive functions. (via News Medical Device / Technology News Feed)
-
Recent research has identified how early brain structure primes itself for efficient learning. Findings reveal that, even before visual experience, the brain organizes neurons into modules, setting the stage for reliable and rapid interpretation of sensory information. As visual experience accumulates, these modules become better aligned with incoming information, enhancing reliability and adaptability. This developmental process may extend beyond vision, offering a broader framework for understanding how the brain achieves fast, flexible learning. Insights from this work could inform future approaches in neuroscience and artificial intelligence by highlighting mechanisms underlying the brain’s learning efficiency.
-
Stanford Breakthrough Decodes Inner Speech with 74% Accuracy 🧑🏻🔬
Stanford University researchers have achieved a groundbreaking milestone in neurotechnology, decoding silent thoughts with up to 74% accuracy using brain-computer interfaces. Published in Cell, the study marks the first real-time decoding of imagined words from the brain’s motor cortex. Led by Erin Kunz and Frank Willett, the team used microelectrode arrays implanted in four participants with severe paralysis from ALS or brainstem stroke. By capturing neural patterns during attempted or imagined speech, the system offers transformative potential for communication aids and neuroprosthetics. #Neurotechnology #BCI #Stanford #Innovation
-
Researchers have created a novel computational method to decipher the complex communication patterns between neurons. By analyzing their irregular electrical "spikes," the technique accurately identifies which neurons influence others, a key step in understanding brain function and neurological disorders.
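As a rough illustration of the underlying idea, a lagged cross-correlation between two spike trains is a classic baseline for spotting directed influence (one neuron's spikes reliably preceding another's). Everything below is invented for illustration: the firing rates, the 3 ms delay, and the simple coincidence metric. The study's actual method is more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated spike trains in 1 ms bins: neuron A drives neuron B with a
# 3 ms delay and 60% transmission probability, plus background noise.
n_bins, true_lag = 5000, 3
a = (rng.random(n_bins) < 0.05).astype(float)              # A fires at ~50 Hz
transmitted = np.roll(a, true_lag) * (rng.random(n_bins) < 0.6)
noise = (rng.random(n_bins) < 0.02).astype(float)
b = np.clip(transmitted + noise, 0.0, 1.0)                 # B's spike train

def directed_corr(x, y, max_lag=10):
    """Coincidence count of x with y shifted later by each lag;
    a clear peak at a positive lag suggests x -> y influence."""
    n = len(x) - max_lag
    return {lag: float(np.dot(x[:n], y[lag:lag + n]))
            for lag in range(1, max_lag + 1)}

scores = directed_corr(a, b)
best_lag = max(scores, key=scores.get)   # recovers the simulated 3 ms delay
```

With a genuine A→B connection, the coincidence count at the true delay stands far above the chance-level counts at other lags, which is how the direction of influence is read off.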
-
New research shows how vision stabilizes after birth: once the eyes open, neurons align with visual modules, turning chaotic signals into reliable patterns for learning. https://lnkd.in/ggW-tTUT
-
💭 Researchers at Stanford have developed a brain-computer interface that can decode not just attempted speech but even inner thoughts. Their system targets the brain’s motor cortex, the region that normally sends commands to the muscles responsible for speech. Tiny microelectrode arrays, each smaller than a baby aspirin, are implanted on the brain’s surface to capture patterns of neural activity. These signals are transmitted to a computer algorithm trained to recognize phonemes, the smallest units of speech, and assemble them into sentences.
The new study goes further by decoding not only attempted but also inner speech. For people with paralysis, attempting to speak can be slow and fatiguing, and if the paralysis is partial, it can produce distracting sounds and breath-control difficulties.
Decoding inner speech raises serious privacy concerns, so the researchers built in a safeguard: the system only activates when the user thinks of a specific “password,” ensuring inner thoughts remain private unless the user chooses to share them.
To read the full story: https://lnkd.in/gM5r57gu Source: Stanford University #EduLive #BCI #Neuroscience #StanfordResearch
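The pipeline described above can be sketched in miniature: neural feature frames are classified into phonemes and assembled into output, and decoding only runs after the "password" phrase is detected. Everything here is hypothetical — the phoneme templates, feature vectors, and passphrase are invented, and the real decoder is a trained neural network rather than a nearest-template lookup.

```python
# Toy "neural signatures" for four phonemes (purely illustrative).
PHONEME_TEMPLATES = {
    "HH": (1.0, 0.0),
    "EH": (0.0, 1.0),
    "L":  (1.0, 1.0),
    "OW": (0.5, 0.5),
}
PASSWORD = ("L", "OW")  # illustrative passphrase, expressed as phonemes

def classify_phoneme(frame):
    """Nearest-template stand-in for the trained phoneme decoder."""
    def dist(name):
        return sum((a - b) ** 2 for a, b in zip(PHONEME_TEMPLATES[name], frame))
    return min(PHONEME_TEMPLATES, key=dist)

def gate_open(frames):
    """The system stays idle until the imagined passphrase is detected."""
    return tuple(classify_phoneme(f) for f in frames) == PASSWORD

def decode(frames, unlocked):
    """Turn neural frames into phonemes only if the privacy gate is open."""
    return [classify_phoneme(f) for f in frames] if unlocked else []
```

The key design point mirrors the article: the same classifier handles both the passphrase check and the speech decoding, but nothing is emitted until the gate opens, so incidental inner speech stays private.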
-
A new study shows that the brain activity behind decision-making is far more widespread across the organ than previously thought. In a groundbreaking duo of studies, researchers have completed the first-ever activity map of a mammalian brain, rewriting scientists' understanding of how decisions are made. The project, involving a dozen labs and data from over 600,000 individual mouse brain cells, covered areas representing over 95% of the brain. The findings, published in two papers in the journal Nature, suggest that decision-making involves far more of the brain than once believed. The mammoth project was led by the International Brain Laboratory (IBL), a collaboration of experimental and theoretical neuroscientists from across Europe and the U.S. https://lnkd.in/gWKnpxrq
-
Your Thoughts, Decoded! A Big Leap in Brain Tech
A team of scientists from Stanford University has developed a new brain implant that can decode a person’s inner monologue, the silent thoughts we speak in our minds. Published on August 14, 2025, in the journal Cell, this breakthrough could be life-changing for people who are paralyzed or unable to speak. Instead of needing to move or try to speak, users simply think what they want to say, and the system decodes it with up to 74% accuracy. The research was led by Erin Kunz, an electrical engineer, and Frank Willett, a neurosurgery professor at Stanford. They worked with four people who had brain implants as part of a clinical trial. The AI system learned to read their brain signals and turn thoughts into words. To protect privacy, the implant only starts decoding when the user mentally says a passphrase, in this case “chitty chitty bang bang.” This is a powerful step toward making communication more natural and inclusive for those who need it most. #BrainTech #AI #HealthcareInnovation #Accessibility #Stanford #Neuroscience #BCI #FutureOfCommunication
-
Last week Kajal Singla, researcher at ScaDS.AI Dresden/Leipzig and the Max Planck Institute for Human Cognitive and Brain Sciences, presented her latest paper “Learning Latent Spaces for Individualized Functional Neuroimaging with Variational Autoencoders” at the #CCNConference in #Amsterdam. Together with Nico Scherf (PI at ScaDS.AI) and Pierre-Louis Bazin, she introduced a novel deep learning approach that leverages variational autoencoders (#VAEs) to model functional magnetic resonance imaging (#fMRI) data in subject-specific latent spaces. Traditional methods (e.g., ICA, diffusion map embedding) capture group-level brain networks but often miss individual-specific differences. The VAE-based framework reconstructs and denoises fMRI data in a low-dimensional latent space, enhancing the separation of signals from distinct functional networks without directly aligning them to specific latent axes. These individualized latent spaces can also be aligned across subjects, enabling meaningful cross-subject comparisons. This approach not only enhances the signal-to-noise ratio but also opens new avenues for personalized fMRI analysis and a deeper understanding of the brain’s functional architecture. #Neuroscience #AI #MachineLearning #Personalization 👉 https://lnkd.in/e5a2bZW9
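The encode → sample → decode structure of a VAE, as used in work like this, can be sketched minimally. This is only a toy: the dimensions are invented (real fMRI volumes have on the order of 10^5 voxels), and untrained random linear maps stand in for the paper's trained deep encoder and decoder networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions; a real fMRI volume is vastly larger.
n_voxels, latent_dim = 8, 2

# Untrained random matrices stand in for the trained encoder/decoder nets.
W_enc = rng.normal(size=(n_voxels, 2 * latent_dim))  # emits mean + log-variance
W_dec = rng.normal(size=(latent_dim, n_voxels))

def encode(x):
    """Map a flattened fMRI volume to a latent Gaussian (mu, log_var)."""
    h = x @ W_enc
    return h[:latent_dim], h[latent_dim:]

def reparameterize(mu, log_var):
    """Sample z = mu + sigma * eps: the VAE reparameterization trick."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def decode(z):
    """Reconstruct (denoise) the volume from the low-dimensional code."""
    return z @ W_dec

x = rng.normal(size=n_voxels)    # one fMRI time point, flattened
mu, log_var = encode(x)
z = reparameterize(mu, log_var)
x_hat = decode(z)                # reconstruction from the latent space
```

The subject-specific aspect described in the post would correspond to fitting one such encoder/decoder pair per participant, then aligning the resulting latent spaces across subjects for comparison.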
-
🧠 Towards a Brain-Wide Map of Neural Activity During Decision-Making
This paper presents one of the most comprehensive efforts to map brain-wide activity during behavior in mice. Using Neuropixels probes across hundreds of brain regions, the study analyzed how neurons encode key decision-making variables such as stimulus, choice, and feedback at both single-cell and population levels.
Main findings:
• Neural signals related to decision-making are widely distributed across cortical and subcortical regions, not confined to localized “decision centers.”
• Both single-neuron firing rates and population dynamics showed robust encoding of task variables.
• Feedback-related activity emerged as one of the most dominant signals, shaping brain-wide computations.
Future perspectives:
• The open dataset (accessible via the International Brain Lab) provides a resource for global collaboration.
• Future work could explore causal manipulations, temporal dynamics, and cross-species comparisons, bringing us closer to a unified understanding of distributed cognition.
Personal perspective:
I find this paper inspiring because it shifts the view from isolated brain areas to distributed neural systems. As someone interested in brain-inspired AI and biomedical engineering, I see this as a blueprint for how large-scale, high-resolution neural datasets can inform both neuroscience and intelligent systems design.
#Neuroscience #BrainMapping #DecisionMaking #NeuralCoding #BigData #Neurotechnology
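What "population encoding of a task variable" means in practice can be shown with a toy decoder: simulate spike counts from a neural population whose firing rates differ slightly by choice, then ask whether the population activity predicts the choice. All numbers here are invented (the IBL studies use far richer data and analyses); a nearest-class-mean decoder is just one simple way to read out a distributed signal.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "recording": 50 trials per choice, 20 neurons, Poisson spike counts.
# Half the neurons fire slightly more on left choices, half on right:
# a caricature of distributed choice coding across a population.
n_neurons, n_trials = 20, 50
tuning = np.concatenate([np.ones(10), -np.ones(10)])   # per-neuron preference
left = rng.poisson(5 + tuning, size=(n_trials, n_neurons))
right = rng.poisson(5 - tuning, size=(n_trials, n_neurons))

# Nearest-class-mean decoder: assign each trial to the closer mean pattern.
mu_left, mu_right = left.mean(axis=0), right.mean(axis=0)

def predict(trial):
    d_left = np.linalg.norm(trial - mu_left)
    d_right = np.linalg.norm(trial - mu_right)
    return "left" if d_left < d_right else "right"

correct = (sum(predict(t) == "left" for t in left)
           + sum(predict(t) == "right" for t in right))
acc = correct / (2 * n_trials)   # well above the 0.5 chance level
```

Even though no single neuron here is very informative, pooling across the population decodes choice reliably — the same logic, scaled up enormously, underlies the brain-wide encoding analyses in the paper.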