🏛️ Can deep learning and EEG reveal how we emotionally respond to architectural spaces—before we even realize it? In this study, our Enobio (https://lnkd.in/dFS8ugFy) recorded 32-channel EEG in real time while forty participants viewed multiple images of architectural spaces. Event-related potential analysis (N100, N200, P300, LPP) revealed consistent differences between preferred and non-preferred stimuli, and two CNN-LSTM models trained on the EEG data predicted emotional preference with high recall or high precision, depending on the features used. These findings support integrating EEG into early design stages to create emotionally adaptive, user-centric spaces. 👏 Congratulations to Ju Eun Cho, Se Yeon Kang, Yi Yeon Hong, and Han Jong Jun for this exciting work in architectural neuroscience! #EEG #Neuroarchitecture #AffectiveDesign #SmartBuildings #DeepLearning
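For readers curious about the modeling side, here is a minimal sketch of the kind of CNN-LSTM classifier such a study might use for binary preference prediction from EEG epochs. Layer sizes, the 256-sample epoch length, and other details are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal CNN-LSTM sketch for binary EEG preference classification (PyTorch).
# Hypothetical shapes (32 channels x 256 samples per epoch); not the authors' exact model.
import torch
import torch.nn as nn

class EEGCnnLstm(nn.Module):
    def __init__(self, n_channels=32, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(                              # temporal feature extractor
            nn.Conv1d(n_channels, 32, kernel_size=7), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(64, hidden, batch_first=True)      # sequence summary of CNN features
        self.head = nn.Linear(hidden, 1)                       # preferred vs. non-preferred logit

    def forward(self, x):                  # x: (batch, channels, samples)
        feats = self.cnn(x)                # (batch, 64, T')
        feats = feats.transpose(1, 2)      # (batch, T', 64) for the LSTM
        _, (h, _) = self.lstm(feats)
        return self.head(h[-1])            # raw logit; train with BCEWithLogitsLoss

model = EEGCnnLstm()
logits = model(torch.randn(8, 32, 256))    # dummy batch of 8 EEG epochs
print(logits.shape)                        # torch.Size([8, 1])
```

Tracking recall and precision separately during training would expose the kind of trade-off the post describes between the two feature sets.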
-
BCI 101: What really happens when you connect your brain to a computer? Let’s break it down. Brain-Computer Interfaces (BCIs) use EEG to read your brain’s electrical signals and translate them into digital actions. That could mean moving a cursor, controlling a smart device, or analyzing how focused or relaxed you are. From neuroscience labs to live performances, BCIs are already shaping the future of human-computer interaction. With Emotiv Epoc X, researchers, students, and developers can explore real-time brain data using a 14-channel EEG headset built for real-world applications. 👉 Swipe through to learn how BCI works, where it’s being used, and what’s possible when your brain goes hands-on with technology. Explore more: emotiv.com #BCI101 #EmotivEPOCX #BrainComputerInterface #Neurotech #EEG #BrainTech #FutureOfTech #CognitiveScience #RealWorldBCI
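To give a hands-on flavor of how raw EEG becomes a "focused or relaxed" readout, here is a rough band-power sketch (an alpha/beta ratio on a single channel). The sampling rate, bands, and threshold-free index are generic assumptions; this is not Emotiv's proprietary performance-metrics algorithm.

```python
# Rough sketch of a band-power "relaxation" index from one EEG channel.
# Sampling rate and frequency bands are illustrative, not a vendor's metric.
import numpy as np
from scipy.signal import welch

fs = 128                          # Hz, typical consumer-headset rate (assumption)
eeg = np.random.randn(fs * 10)    # 10 s of single-channel EEG; replace with real data

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)

def band_power(freqs, psd, lo, hi):
    mask = (freqs >= lo) & (freqs < hi)
    return np.trapz(psd[mask], freqs[mask])    # integrate PSD over the band

alpha = band_power(freqs, psd, 8, 13)    # relaxation-related rhythm
beta = band_power(freqs, psd, 13, 30)    # engagement/focus-related rhythm
relaxation_index = alpha / (alpha + beta)
print(f"relaxation index: {relaxation_index:.2f}")
```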
-
We use the cross-homology Hodge–Laplacian on cross-simplicial complexes to lift the construction to cell complexes and build layered higher-order representations. For a hands-on introduction to simplicial complexes: https://lnkd.in/g9KFTj2g #SimplicialComplex #CellComplex #HodgeLaplacian #CrossLaplacian
I’m delighted to share some great news! Next week, at the EUSIPCO 2025 Conference (Sept. 8-12) in Palermo, Italy, we will present three accepted papers:

- Stefania Sardellitti, Breno C. Bispo, Fernando A. N. Santos, Juliano B. Lima, “Cross-Laplacians Based Topological Signal Processing over Cell MultiComplexes”. In this paper we present Cell MultiComplexes (CMCs), topological domains for representing higher-order interactions among interconnected networks. We introduce cross-Laplacian operators as powerful algebraic descriptors of CMCs, able to localize homologies by capturing different topological invariants at different scales. Using cross-Laplacians, we then extend topological signal processing tools to CMCs. See the preprint at https://lnkd.in/d9U6fhb8

- Breno C. Bispo, Stefania Sardellitti, Fernando A. N. Santos, Juliano B. Lima, “Learning Higher-Order Interactions in Brain Networks Via Topological Signal Processing”. In this work we leverage the potential of the topological signal processing (TSP) framework for analyzing brain networks. Representing brain data as signals over simplicial complexes allows us to capture higher-order relationships among brain ROIs. We develop two approaches for learning the mean brain topology from real brain datasets using higher-order statistical measures and TSP tools. See the preprint at https://lnkd.in/d6Kw-vBC

- Tiziana Cattai, Stefania Sardellitti, Stefania Colonnese, Francesca Cuomo, and Sergio Barbarossa, “Leak Detection in Water Distribution Networks Using Topological Signal Processing”. In this paper we leverage Topological Signal Processing (TSP) to model and analyze water flow as higher-order signals defined on the edges of cell complexes. By incorporating these higher-order topological structures, we develop learning-based approaches to reconstruct the dynamics of the water flows and to detect leakages.

I’m looking forward to sharing our contributions and exchanging new ideas! #EUSIPCO2025 #TopologicalSignalProcessing #BrainNetworks #WaterDistributionNetworks #MultilayerNetworks
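For anyone new to these tools: the papers build on Hodge Laplacians assembled from the boundary matrices of a complex. Below is a toy sketch of the standard first-order Hodge Laplacian on a tiny simplicial complex (a filled triangle plus a dangling edge). The cross-Laplacians on cell multicomplexes introduced in the first paper generalize this construction and are not reproduced here.

```python
# Toy first-order Hodge Laplacian: L1 = B1.T @ B1 + B2 @ B2.T
# Standard construction on a small simplicial complex; background only,
# not the cross-Laplacian of cell multicomplexes from the paper.
import numpy as np

# Vertices 0..3, edges (0,1), (0,2), (1,2), (2,3), one filled triangle (0,1,2).
B1 = np.array([    # node-to-edge incidence (rows: nodes, cols: edges)
    [-1, -1,  0,  0],
    [ 1,  0, -1,  0],
    [ 0,  1,  1, -1],
    [ 0,  0,  0,  1],
])
B2 = np.array([    # edge-to-triangle incidence: boundary of (0,1,2) = (1,2) - (0,2) + (0,1)
    [ 1],
    [-1],
    [ 1],
    [ 0],
])

L1 = B1.T @ B1 + B2 @ B2.T         # Hodge Laplacian acting on edge signals
eigvals = np.linalg.eigvalsh(L1)
print("L1 spectrum:", np.round(eigvals, 3))
# The number of zero eigenvalues equals the number of 1-D holes (here 0, since the triangle is filled).
```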
-
Researchers investigate how mice process illusions, highlighting the neural circuitry involved in vision and perception >>> https://lnkd.in/e3V3FYcG Hyeyoung Shin, Hillel Adesnik & Jerome Lecoq, University of California, Berkeley & Allen Institute
-
Last week Kajal Singla, researcher at ScaDS.AI Dresden/Leipzig and Max Planck Institute for Human Cognitive and Brain Sciences presented her latest paper “Learning Latent Spaces for Individualized Functional Neuroimaging with Variational Autoencoders” at the #CCNConference in #Amsterdam. Together with nico scherf (PI at ScaDS.AI) and Pierre-Louis Bazin, she introduced a novel deep learning approach that leverages variational autoencoders (#VAEs) to model functional Magnetic Resonance Imaging (#fMRI) data in subject-specific latent spaces. Traditional methods (e.g., ICA, diffusion map embedding) capture group-level brain networks but often miss individual-specific differences. The VAE-based framework reconstructs and denoises fMRI data in a low-dimensional latent space, enhancing the separation of signals from distinct functional networks without directly aligning them to specific latent axes. These individualized latent spaces can also be aligned across subjects, enabling meaningful cross-subject comparisons. This approach not only enhances the signal-to-noise ratio but also opens new avenues for personalized fMRI analysis and a deeper understanding of the brain’s functional architecture. #Neuroscience #AI #MachineLearning #Personalization 👉 https://lnkd.in/e5a2bZW9
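As background on the method: a variational autoencoder compresses each fMRI sample into a low-dimensional latent vector and reconstructs (and thereby denoises) it. A minimal PyTorch sketch, with voxel and latent dimensions chosen purely for illustration rather than taken from the paper:

```python
# Minimal VAE sketch for fMRI feature vectors (PyTorch); dimensions are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FmriVAE(nn.Module):
    def __init__(self, n_voxels=10000, latent_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_voxels, 512), nn.ReLU())
        self.mu = nn.Linear(512, latent_dim)        # mean of the approximate posterior
        self.logvar = nn.Linear(512, latent_dim)    # log-variance of the approximate posterior
        self.dec = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                                 nn.Linear(512, n_voxels))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
        return self.dec(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    recon_err = F.mse_loss(recon, x, reduction="sum")              # denoising reconstruction term
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())   # KL to a standard normal prior
    return recon_err + kl
```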
-
🧠 Synaptic Plasticity and Wavefunction Collapse: Is the Brain a Biological Quantum Measurement Device? In neuroscience, synaptic plasticity is often divided into two broad categories: • Homosynaptic plasticity: A synapse strengthens or weakens based on its own repeated activity (classic Hebbian learning: “neurons that fire together, wire together”). • Heterosynaptic plasticity: Changes at one synapse spill over to neighboring synapses, redistributing weights and maintaining overall balance across the network. ⸻ ⚛️ The Analogy with Wavefunction Collapse In quantum mechanics, a wavefunction represents a superposition of many possible states. Upon measurement, the wavefunction collapses to a definite outcome. Now compare this to synaptic plasticity: • Homosynaptic plasticity = local selection → one synapse undergoes direct change, like the wavefunction “choosing” a single outcome. • Heterosynaptic plasticity = nonlocal propagation → the chosen outcome constrains surrounding synapses, resembling how wavefunction collapse globally erases competing possibilities. Together, synaptic plasticity operates like a collapse mechanism: local and global processes coupled to stabilize learning. ⸻ 🌌 Vacancy Theory Perspective In Vacancy Theory (VT), observability is the core condition for existence. • Homosynaptic change = the observed result. • Heterosynaptic change = the result’s influence spreading across degrees of freedom, suppressing alternatives. Thus, the combination of synaptic plasticity mechanisms mirrors wavefunction collapse, where selection and elimination co-occur. ⸻ 🚀 Implications • The brain may not simply be an electrical network, but a biological quantum measurement device. • Learning and memory may be understood not just as “data storage,” but as processes of selecting and collapsing topological degrees of freedom. ⸻ 👉 In short: Homosynaptic plasticity = local collapse. Heterosynaptic plasticity = global collapse. Together, they echo the measurement–collapse–state selection sequence in quantum mechanics.
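Setting the quantum analogy aside, the two plasticity rules themselves are easy to simulate: a Hebbian (homosynaptic) update strengthens co-active synapses, while a heterosynaptic normalization redistributes a fixed synaptic budget so the remaining synapses weaken. A toy sketch with illustrative parameters:

```python
# Toy weight-update sketch contrasting homosynaptic (Hebbian) and heterosynaptic
# (normalizing) plasticity on one postsynaptic neuron; all parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
w = rng.uniform(0.2, 0.8, size=10)    # synaptic weights onto one neuron
total = w.sum()                       # fixed synaptic "budget"
lr = 0.05

for _ in range(100):
    pre = (rng.random(10) < 0.3).astype(float)   # presynaptic spikes
    post = float(w @ pre > 1.0)                  # postsynaptic firing (threshold unit)

    w += lr * pre * post          # homosynaptic: co-active synapses strengthen
    w *= total / w.sum()          # heterosynaptic: the rest scale down to conserve the budget

print(np.round(w, 2))             # a few synapses win; the others are suppressed
```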
-
Precision Neuroscience, started by ex-Neuralink folks, has developed an ultra-thin brain implant (Layer 7 Cortical Interface) that just got FDA clearance for 30-day use. What’s wild is how minimally invasive it is — instead of drilling deep, it slides in through a tiny incision in the skull and sits on the surface of the brain. The goal? To pick up neural signals and translate thoughts into digital commands. Imagine a paralyzed patient controlling a computer or device just by thinking. As someone who’s super curious about the intersection of AI, neuroscience, and human-machine interfaces, this feels like a glimpse into the future — one where tech is literally bridging biology and digital systems. It also makes me wonder: How far are we from making such devices long-term safe and reliable? Could this open doors not just for medical use, but also for everyday human-AI collaboration? And what role could students like us play in shaping this future?
-
Our paper on using EEG to predict document relevance has just been accepted to ACM Transactions on Information Systems. We introduce a bimodal model that predicts document-level relevance during reading and outperforms EEG-only and text-only baselines. Our findings highlight the potential of human brain signals to model personalised document relevance. Full paper: https://lnkd.in/e_ubSpfe #EEG #IR #NeuroIR #Research #ACM #TOIS
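One generic way to build such a bimodal predictor is late fusion: concatenate an EEG feature vector with a text embedding and score relevance with a small network. The sketch below uses assumed feature dimensions (128-d EEG features, 768-d text embeddings) and is not the architecture from the paper.

```python
# Hedged sketch of a late-fusion bimodal relevance scorer (PyTorch); dimensions are assumptions.
import torch
import torch.nn as nn

class BimodalRelevance(nn.Module):
    def __init__(self, eeg_dim=128, text_dim=768, hidden=256):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(eeg_dim + text_dim, hidden), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(hidden, 1),                  # document-level relevance logit
        )

    def forward(self, eeg_feats, text_emb):
        return self.fuse(torch.cat([eeg_feats, text_emb], dim=-1))

model = BimodalRelevance()
scores = model(torch.randn(4, 128), torch.randn(4, 768))   # dummy batch of 4 documents
print(scores.shape)                                        # torch.Size([4, 1])
```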
-
🎉 Happy to share that our paper is now published as the Version of Record in eLife The spatial frequency representation predicts category coding in the inferior temporal cortex 👉 https://lnkd.in/eNtsPN4H 🔬 We show that the inferior temporal (IT) cortex explicitly encodes spatial frequency (SF) at both single-neuron and population levels. The coding unfolds coarse-to-fine (low SF decoded first, high SF later), and a neuron’s SF profile can even predict category coding at the population level, especially for faces. Interestingly, SF and category rely on distinct, uncorrelated coding mechanisms, with SF coded more sparsely by individual neurons. 👀 In simple words: The brain’s object-recognition hub first takes in the blurry big picture and only later fills in the sharp details. Neurons tuned to fine detail are particularly important for recognizing faces. And the brain seems to handle “detail level” and “object type” using separate systems. 🙏 Huge thanks to my brilliant co-authors for this collaboration, and to eLife Sciences Publications, Ltd. for their innovative publish-then-review model, which makes science and peer review open and transparent. If you’re curious about vision, the IT cortex, or bio-inspired AI, I’d love to hear your thoughts. #Paper #Research #Neuroscience #Brain #CognitiveNeuroscience #OpenScience #eLife
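The coarse-to-fine result comes from time-resolved population decoding. Here is a generic sketch of that style of analysis (a sliding-window classifier over firing rates); trial counts, bin sizes, and the binary low/high SF split are illustrative, not the study's pipeline.

```python
# Sketch of time-resolved population decoding of spatial frequency:
# fit a classifier on firing rates in sliding windows and track accuracy over time.
# Shapes and windows are illustrative, not the study's analysis.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

n_trials, n_neurons, n_bins = 200, 150, 40             # e.g. 10 ms bins after stimulus onset
rates = np.random.rand(n_trials, n_neurons, n_bins)    # replace with real IT responses
sf_label = np.random.randint(0, 2, n_trials)           # 0 = low SF, 1 = high SF

window = 5                                             # 50 ms sliding window
for start in range(0, n_bins - window + 1, window):
    X = rates[:, :, start:start + window].mean(axis=2)   # window-averaged population vector
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, sf_label, cv=5).mean()
    print(f"bins {start}-{start + window}: decoding accuracy {acc:.2f}")
    # With real data, the study reports low SFs becoming decodable earlier than high SFs.
```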
-
Neuroscientists around the world use Brain Map to accelerate their research. Explore the massive, free library of datasets, protocols, computational tools, and more at https://lnkd.in/dAuwUMC #OpenScienceWeek
-
🧠 Excited to share "Spacetop" – a new open fMRI dataset bridging naturalistic and experimental neuroscience! What we built: • 101 participants × 6 hours each = 606 hours of high-quality fMRI data • Naturalistic tasks: 90min movies + 30min audio narratives • Experimental tasks: pain, faces, social cognition, theory of mind • Exceptional data quality (FD <0.2mm, high tSNR) Why it matters: • Experimental ↔ Naturalistic bridge: This dataset enables researchers to link controlled lab experiments with real-world brain activity during naturalistic viewing—a way to converge neural insights from both ends. • Between ↔ Within subject balance: We strike an optimal balance between large sample size and deep individual phenotyping, occupying the sweet-spot between population-scale datasets (HCP, UK Biobank) and intensive single-subject studies (Natural Scenes Dataset). Key applications/suggestions: • Functional alignment methods using shared movie data • Individual differences in pain & social processing • Cross-task cognitive state decoding • Naturalistic vs experimental validation studies Fully open science: Complete dataset on OpenNeuro, BIDS format, all code on GitHub. Huge thanks to our incredible co-authors! I spearheaded this multi-year project during my PhD program at Dartmouth College, from initial concept to data collection, curation, and publication – Our team’s expertise and dedication at every step made all the difference. Special thanks to Tor Wager and Martin Lindquist for this amazing opportunity; Yaroslav Halchenko and Patrick Sadil for the awesome teamwork in open science curation. 📄 Paper: https://lnkd.in/ej6MpgEF 💾 Data: https://lnkd.in/edSiaKnw #Neuroscience #OpenScience #fMRI #DataSharing #cognitive #pain #affective #naturalistic #Bigdata
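For context on the FD < 0.2 mm figure: framewise displacement is conventionally computed from the six realignment parameters as the sum of absolute frame-to-frame differences, with rotations converted to millimetres on a 50 mm sphere (Power-style FD). A sketch assuming that convention, which may differ in detail from the dataset's own QC pipeline:

```python
# Framewise displacement (FD) from six rigid-body motion parameters, Power-style:
# sum of absolute frame-to-frame differences, rotations converted to mm on a 50 mm sphere.
import numpy as np

def framewise_displacement(motion, radius=50.0):
    """motion: (n_volumes, 6) array = 3 translations (mm) + 3 rotations (radians)."""
    deltas = np.abs(np.diff(motion, axis=0))
    deltas[:, 3:] *= radius                             # arc length: radians -> mm
    return np.concatenate([[0.0], deltas.sum(axis=1)])  # FD is 0 for the first volume

motion = np.random.randn(300, 6) * 0.01   # dummy realignment parameters for 300 volumes
fd = framewise_displacement(motion)
print(f"mean FD: {fd.mean():.3f} mm")
```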
-
💬 How could EEG-based affective feedback transform architectural design and spatial computing? 📄 Read the full study here: https://www.mdpi.com/2076-3417/15/8/4217