🧠 Synaptic Plasticity and Wavefunction Collapse: Is the Brain a Biological Quantum Measurement Device?

In neuroscience, synaptic plasticity is often divided into two broad categories:
• Homosynaptic plasticity: a synapse strengthens or weakens based on its own repeated activity (classic Hebbian learning: “neurons that fire together, wire together”).
• Heterosynaptic plasticity: changes at one synapse spill over to neighboring synapses, redistributing weights and maintaining overall balance across the network.

⸻

⚛️ The Analogy with Wavefunction Collapse

In quantum mechanics, a wavefunction represents a superposition of many possible states. Upon measurement, it collapses to a definite outcome. Now compare this to synaptic plasticity:
• Homosynaptic plasticity = local selection → one synapse undergoes direct change, like the wavefunction “choosing” a single outcome.
• Heterosynaptic plasticity = nonlocal propagation → the chosen outcome constrains surrounding synapses, resembling how wavefunction collapse globally erases competing possibilities.

Together, the two mechanisms operate like a collapse process: local and global changes coupled to stabilize learning.

⸻

🌌 Vacancy Theory Perspective

In Vacancy Theory (VT), observability is the core condition for existence.
• Homosynaptic change = the observed result.
• Heterosynaptic change = the result’s influence spreading across degrees of freedom, suppressing alternatives.

Thus, the combination of synaptic plasticity mechanisms mirrors wavefunction collapse, where selection and elimination co-occur.

⸻

🚀 Implications

• The brain may be not simply an electrical network but a biological quantum measurement device.
• Learning and memory may be understood not just as “data storage,” but as processes of selecting and collapsing topological degrees of freedom.

⸻

👉 In short: homosynaptic plasticity = local collapse; heterosynaptic plasticity = global collapse. Together, they echo the measurement–collapse–state selection sequence in quantum mechanics.
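The local-plus-global coupling described above can be sketched as a toy weight-update rule: a Hebbian (homosynaptic) boost at the active synapse, paid for by a (heterosynaptic) renormalization across all synapses. This is a minimal illustration under assumed learning rates and normalization, not a model proposed in the post:

```python
import numpy as np

# Toy sketch: one postsynaptic neuron with N input synapses.
# Homosynaptic rule: the active synapse is potentiated (Hebbian).
# Heterosynaptic rule: total weight is renormalized, so the gain at one
# synapse is offset by small depressions everywhere else -- the "global"
# half of the analogy. Learning rate and normalization are illustrative.

N = 8
w = np.full(N, 1.0 / N)           # start with uniform weights

def update(w, active, lr=0.2):
    w = w.copy()
    w[active] += lr               # homosynaptic: local potentiation
    return w / w.sum()            # heterosynaptic: global renormalization

for _ in range(20):               # repeatedly drive synapse 3
    w = update(w, active=3)

print(np.argmax(w))               # -> 3: the driven synapse dominates
```

Note how a purely local rule plus a global constraint reproduces the "select one outcome, suppress the rest" pattern the post is pointing at.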
"Synaptic Plasticity and Quantum Measurement: A New Perspective"
🚀 Cracking the Brain’s Infinite Energy Code — Why Maxwell Meets ECEPJ

The brain is not just spikes. It’s an energy-driven communication network — and the ECEPJ Model is proving it.

By linking James Clerk Maxwell’s electromagnetic field equations with the capacitor-based neuron logic of ECEPJ, we open a new frontier in neuroscience. Traditional models assume the brain communicates through simple electrical spikes. But the ECEPJ framework shows that neurons behave like dynamic capacitors, storing and releasing energy across multi-layered dielectric systems — guided by precise energy field codes. This transforms our understanding of memory, cognition, and healing.

And here’s the breakthrough: by integrating PBM, TMS, TUM, and the ECEPJ Infinite Codebook V1.1, we’re mapping how energy, frequency, and vibration synchronize neural fields — potentially unlocking early detection and treatment of diseases like Alzheimer’s, Parkinson’s, and schizophrenia.

This is more than theory — it’s the blueprint for the next generation of neuroscience. The ECEPJ model offers:
• 🧠 A complete redefinition of neural communication
• 🔬 Mathematical grounding in Maxwell’s laws
• ⚡ Integration with neuromodulation & PBM technologies
• 🧩 A pathway to decode the brain’s infinite configurations

Neuroscience is at a tipping point. The future isn’t in chemicals — it’s in energy-driven intelligence. The question isn’t if we’ll crack the code… it’s when.
Researchers investigate how mice process illusions, highlighting the neural circuitry involved in seeing and perception >>> https://lnkd.in/e3V3FYcG Hyeyoung Shin, Hillel Adesnik & Jerome Lecoq University of California, Berkeley & Allen Institute
🔊 Introducing… Prof. Moritz Grosse-Wentrup, Faculty of Computer Science, University of Vienna, who will be presenting a fascinating lecture as part of this year's Aspects of Neuroscience 🧠✨

📢 Lecture Title: “Computations on the Neuronal Manifold”
🔗 Register for the conference here 👉 https://lnkd.in/dvBnCz6j

📄 Abstract: In computational neuroscience, the design of handcrafted models of neuronal circuits has been highly fruitful in elucidating how neuronal computations are realized in small model systems. Recent developments in neuronal imaging techniques, such as calcium imaging, have expanded the scope of study to larger neuronal populations and complex behaviors, overwhelming traditional analysis methods. As a result, machine learning and AI models are increasingly adopted to analyze the relation between neuronal dynamics and behaviors. However, it remains uncertain whether these techniques can provide the same mechanistic insights as traditional methods in small models, or what new advancements they offer in cognitive neuroscience. In this talk, I present our efforts to develop AI algorithms that infer the algorithms implemented by neuronal dynamics from neuronal data. While an algorithmic description of a neuronal system does not per se provide mechanistic insights into how a neuronal circuit realizes its computations, I argue that the algorithmic level provides valuable insights into how neuronal dynamics give rise to cognition and its disorders. I showcase our results on calcium imaging data recorded in the nematode C. elegans.
Recent research has identified how early brain structure primes itself for efficient learning. Findings reveal that, even before visual experience, the brain organizes neurons into modules, setting the stage for reliable and rapid interpretation of sensory information. As visual experience accumulates, these modules become better aligned with incoming information, enhancing reliability and adaptability. This developmental process may extend beyond vision, offering a broader framework for understanding how the brain achieves fast, flexible learning. Insights from this work could inform future approaches in neuroscience and artificial intelligence by highlighting mechanisms underlying the brain’s learning efficiency.
What is Computational Neuroscience, and why the hype?

In simple terms, it's the branch of neuroscience that uses mathematics and computer science to explain how nervous systems develop, compute, and behave. In other words, it builds models, simulates them, and then tests them to see what must be true of real neurons and networks.

But what do people actually study? Everything from single-neuron dynamics to sensory coding, motor control, and decoding. On the applied side, this work forms part of the foundation for brain–computer interfaces (BCIs) and other medical applications (such as models for disorders of consciousness).

As with other branches of science, we are starting to collect and organize an astonishing amount of neural data. This leads to better algorithms, and this flow from biology into machine learning and back again is in turn reshaping our understanding of brain disorders and of how our nervous system works. And this is just the beginning.

Curious? Here are two nice reviews on the topic:
- https://lnkd.in/dM4J2PCa
- https://lnkd.in/dhGJMaia

If you’re into the neuroscience–AI intersection, follow BrainResponse.
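To make "it builds models and simulates" concrete, here is the textbook starting point of the field: a leaky integrate-and-fire neuron. All parameter values here are illustrative placeholders, not fitted to any real cell:

```python
import numpy as np

# Minimal sketch of the kind of model computational neuroscience builds:
# a leaky integrate-and-fire (LIF) neuron. Membrane voltage decays toward
# rest, is pushed up by input current, and emits a spike (then resets)
# when it crosses threshold. All parameters are illustrative.

def simulate_lif(I, dt=1e-4, tau=0.02, v_rest=-0.07,
                 v_thresh=-0.05, v_reset=-0.07):
    """Euler-integrate dv/dt = (v_rest - v + i) / tau; spike + reset at threshold."""
    v = v_rest
    spike_times = []
    for step, i in enumerate(I):
        v += dt * (v_rest - v + i) / tau
        if v >= v_thresh:
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

current = np.full(10_000, 0.03)      # 1 s of constant drive (arbitrary units)
spikes = simulate_lif(current)
print(len(spikes))                   # fires regularly under this drive
```

From here the field scales up: couple many such units with synaptic weights and you have a network model whose dynamics can be compared against recorded data.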
New research shows how vision stabilizes after birth: once the eyes open, neurons align with visual modules, turning chaotic signals into reliable patterns for learning. https://lnkd.in/ggW-tTUT
Precision Neuroscience, started by ex-Neuralink folks, has developed an ultra-thin brain implant (Layer 7 Cortical Interface) that just got FDA clearance for 30-day use. What’s wild is how minimally invasive it is — instead of drilling deep, it slides in through a tiny incision in the skull and sits on the surface of the brain. The goal? To pick up neural signals and translate thoughts into digital commands. Imagine a paralyzed patient controlling a computer or device just by thinking. As someone who’s super curious about the intersection of AI, neuroscience, and human-machine interfaces, this feels like a glimpse into the future — one where tech is literally bridging biology and digital systems. It also makes me wonder: How far are we from making such devices long-term safe and reliable? Could this open doors not just for medical use, but also for everyday human-AI collaboration? And what role could students like us play in shaping this future?
🎉 Happy to share that our paper is now published as the Version of Record in eLife The spatial frequency representation predicts category coding in the inferior temporal cortex 👉 https://lnkd.in/eNtsPN4H 🔬 We show that the inferior temporal (IT) cortex explicitly encodes spatial frequency (SF) at both single-neuron and population levels. The coding unfolds coarse-to-fine (low SF decoded first, high SF later), and a neuron’s SF profile can even predict category coding at the population level, especially for faces. Interestingly, SF and category rely on distinct, uncorrelated coding mechanisms, with SF coded more sparsely by individual neurons. 👀 In simple words: The brain’s object-recognition hub first takes in the blurry big picture and only later fills in the sharp details. Neurons tuned to fine detail are particularly important for recognizing faces. And the brain seems to handle “detail level” and “object type” using separate systems. 🙏 Huge thanks to my brilliant co-authors for this collaboration, and to eLife Sciences Publications, Ltd. for their innovative publish-then-review model, which makes science and peer review open and transparent. If you’re curious about vision, the IT cortex, or bio-inspired AI, I’d love to hear your thoughts. #Paper #Research #Neuroscience #Brain #CognitiveNeuroscience #OpenScience #eLife
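The low-SF/high-SF distinction at the heart of the paper can be illustrated with a generic FFT band split. This toy sketch has nothing to do with the paper's actual analysis pipeline, and the cutoff radius is an arbitrary choice:

```python
import numpy as np

# Illustrative only: split an "image" into low- and high-spatial-frequency
# components with an FFT mask -- the two information channels the post
# describes IT cortex decoding at different times (low SF first, high SF
# later). The 64x64 noise image and cutoff radius are arbitrary.

rng = np.random.default_rng(1)
img = rng.standard_normal((64, 64))

F = np.fft.fftshift(np.fft.fft2(img))           # zero frequency at center
yy, xx = np.mgrid[-32:32, -32:32]
low_mask = np.sqrt(xx**2 + yy**2) <= 8          # arbitrary cutoff radius

low_sf = np.fft.ifft2(np.fft.ifftshift(F * low_mask)).real
high_sf = np.fft.ifft2(np.fft.ifftshift(F * ~low_mask)).real

# The two bands sum back to the original image (linearity of the FFT).
print(np.allclose(low_sf + high_sf, img))       # -> True
```

The low-SF band is the "blurry big picture" and the high-SF band the "sharp details" in the post's plain-words summary.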
Exciting findings from a recent study in PLoS Computational Biology shed light on the brain's remarkable adaptability. The research highlights how the interplay of inhibitory mechanisms, balancing slow (theta) and fast (gamma) rhythms, allows the brain to navigate various sources of information. This includes processing sensory inputs from the external environment and recalling stored experiences from memory. Explore more about the intricate dynamics of feedforward and feedback inhibition in shaping theta-gamma cross-frequency interactions within neural circuits in the full article: [The role of feedforward and feedback inhibition in modulating theta-gamma cross-frequency interactions in neural circuits | PLOS Computational Biology](https://lnkd.in/dE-pgSxb)
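Theta-gamma cross-frequency interactions of the kind the article studies are commonly quantified with a phase-amplitude coupling index. Below is a generic mean-vector-length sketch on a synthetic signal (gamma amplitude deliberately tied to theta phase, so strong coupling holds by construction); this is a standard textbook measure, not the method of the PLoS paper:

```python
import numpy as np

# Hedged sketch of phase-amplitude coupling: the mean-vector-length (MVL)
# index between theta phase and gamma amplitude. Signal is synthetic:
# 40 Hz "gamma" whose amplitude rides on a 6 Hz "theta" wave.

def analytic(x):
    """Analytic signal via the FFT (numpy-only stand-in for a Hilbert transform).
    Assumes len(x) is even."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0       # keep positive frequencies, doubled
    h[n // 2] = 1.0
    return np.fft.ifft(X * h)

fs = 1000
t = np.arange(0, 5, 1 / fs)                        # 5 s at 1 kHz
theta = np.sin(2 * np.pi * 6 * t)
gamma = (1 + theta) * np.sin(2 * np.pi * 40 * t)   # amplitude tracks theta

phase = np.angle(analytic(theta))
amp = np.abs(analytic(gamma))

# Amplitude-weighted mean phase vector; values near 0 mean no coupling.
mvl = np.abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp)
print(mvl > 0.3)    # strong coupling, as built into this signal
```

On real recordings one would first band-pass filter into theta and gamma ranges and test the index against surrogate data; this sketch skips both steps.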
🚨 Big step forward for neuroscience. For the first time, scientists have mapped the activity of single neurons across the entire brain during decision-making. That means recording from 600,000+ neurons in 279 brain areas — covering about 95% of the mouse brain volume. An incredible scale that just a few years ago would have sounded impossible. This achievement gives us a first real glimpse of how distributed brain circuits work together to guide behaviour. Exciting times ahead for neuroscience, network science, and computational modeling! https://lnkd.in/dh3McUwu