Dileep George
Twitter: @dileeplearning @vicariousai
Triangulating general intelligence and cortical microcircuits
1. Build Artificial General Intelligence
2. Understand how the brain works
Figure: an evolutionary timeline (millions of years ago: -600, -450, -300, -150, 0) of the old brain and the new brain, from sponges, jellyfish, and flatworms through fish, amphibians, reptiles, birds, and mammaliaforms to primates.
Old brain: “direct fit to nature”, behaviors for survival in a niche, “model free”.
New brain (neocortex): world modeling, “model based”.
Tabula Rasa vs Innate tradeoff
Less structure vs more structure; fewer assumptions vs more assumptions; data hungry vs data efficient; tabula rasa vs more innate structure.
Along this spectrum, fully connected neural nets and RBMs (flat architectures) sit toward the tabula-rasa end; conv-nets, domain-specific architectures, and the neocortex sit toward the more-structured end.
To build general intelligence, the question we need to ask is this:

What is the basic set of assumptions that is specific enough to make learning feasible in a reasonable amount of time, while being general enough to apply to a large class of problems?

Looking at the brain helps to speed up the search for this Goldilocks set of constraints, and to tease apart the interdependent representational scaffolding by which they need to be operationalized in a machine.
Triangulation strategy: properties of the brain, properties of the world / real-world datasets, algorithmic model.
Triangulation strategy: properties of the brain (cognitive science, neurophysiology, neuroanatomy), properties of the world / real-world datasets (physics of the world, real-world test sets), algorithmic model (computational principles).
Triangulation strategy: properties of the brain, properties of the world / real-world datasets (physics of the world, real-world test sets), algorithmic model.
If you ignore the “world” vertex
Doesn’t address the overall task of vision
Doesn’t make contact with real-world data
Triangulation strategy: properties of the brain, properties of the world / real-world datasets (physics of the world, real-world test sets), algorithmic model.
If you ignore the “brain” vertex
Pure machine learning / statistics
Need not generalize like the brain
Hard to connect to brain’s circuits
Triangulation strategy: properties of the brain, properties of the world / real-world datasets (physics of the world, real-world test sets), algorithmic model.
If you ignore the computational vertex
Too hard to build working models
No algorithmic principles to guide you
Triangulation strategy: properties of the brain, properties of the world / real-world datasets, algorithmic model.
• “RCN”: Recursive Cortical Network
• Compositionality
• Controllability: Top-down attention
• 300x more data-efficient than deep CNNs
Oct 2017
CAPTCHAs are important because they exemplify
the strong generalization we seek
Biological feature | Computational/algorithmic reason | Representation in RCN
Blobs and interblobs | Curvelet-like smoothness of natural signals & contour-surface factorization | Structure of the contour-surface factor
Lateral connections between inter-blob columns | Higher-order contour-continuity in natural signals | Cloned structure of lateral connections for higher-order interactions
Object-based attention | Compositionality | Only positive weights & object-background factorization
Hierarchy | Efficient learning and inference | Hierarchically structured
Border-ownership coding | Required when objects are represented in a factorized & hierarchical manner | Two clones of each feature for border-ownership coding
Feedback connections | Inference requires explaining away when the representation is compositional | Message-passing algorithms automatically do explaining away
Different dynamics for contour and surface features | Convergence of message passing depends on the schedule | Biologically inspired message scheduling works better
Everything is interconnected. Trying to build a joint model can simplify things because of this.
• Factorized contour-surface representations

• Lateral connections between contour features

• Feedback and explaining away
Factorized contour-surface representations: evidence from behavior and neurophysiology.
What is the general computational principle behind contour-surface factorization?
Natural signals are piecewise continuous, and the discontinuities themselves are piecewise continuous in a lower dimension.
Examples: a 3D volumetric medical image; a moving disk in 3D spatio-temporal space; shock fronts in fluid dynamics.
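A minimal toy sketch of this principle (my own example, not the RCN factorization): a 1D piecewise-smooth signal is split into a sparse "contour" channel that marks the discontinuities and a "surface" channel that smooths each piece without crossing a contour.

```python
import numpy as np

# Illustrative sketch: a piecewise-smooth signal factored into a sparse
# contour channel (discontinuity locations) and a dense surface channel.
x = np.linspace(0.0, 1.0, 400)
truth = np.where(x < 0.5, 0.2 + 0.1 * np.sin(8 * x), 0.8 - 0.3 * x)
signal = truth + 0.01 * np.random.randn(x.size)

# Contour channel: locations where the derivative is large (the jump at x = 0.5).
gradient = np.abs(np.diff(signal))
contour = gradient > 10 * np.median(gradient)      # sparse, lower-dimensional

# Surface channel: smooth each piece separately, never smoothing across a contour.
boundaries = np.concatenate(([0], np.flatnonzero(contour) + 1, [signal.size]))
surface = signal.copy()
for lo, hi in zip(boundaries[:-1], boundaries[1:]):
    if hi - lo > 3:
        surface[lo:hi] = np.convolve(signal[lo:hi], np.ones(3) / 3, mode="same")

print("number of contour points:", contour.sum())
```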
What do you lose through contour-surface factorization?
You lose the ability to easily recognize QR codes.
• Factorized contour-surface representations

• Lateral connections between contour features
• Feedback and explaining away
Lateral connections in the visual cortex: evidence from behavior, neurophysiology, and properties of natural signals (Bosking et al. 1997; Lawlor & Zucker 2013).
Generations (samples) from the contour network
Lateral factor
Generations from the contour network
• Factorized contour-surface representations

• Lateral connections between contour features

• Feedback and explaining away
Required for dynamic contextual integration
Local inferences are often wrong even in ‘simple-looking’ real-world images.
Explaining-away local evidence helps to find the globally best solution.
(Figure sequence: local hypotheses for the letter ‘m’ in a cluttered CAPTCHA-like word, held with varying confidence, are resolved by explaining away.)
Parsing a scene = MAP inference
Backward pass will hallucinate some objects. These are then explained away with
further message passing.
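The pruning of hallucinated candidates by explaining away can be shown with a toy example (my own sketch, not the RCN inference code; the priors and strengths below are made up): two candidate letters both predict an observed ink blob through a noisy-OR, and exact enumeration stands in for the message passing a full model would use.

```python
import itertools

# Toy illustration of explaining away: candidate letters A and B both predict
# an observed ink blob E through a noisy-OR; observing that B is present
# lowers the posterior on A, even though A alone also explains the blob.
p_A, p_B = 0.1, 0.1          # priors on each letter being present
leak, strength = 0.01, 0.9   # noisy-OR parameters

def p_evidence(a, b):
    """P(E=1 | A=a, B=b) under a noisy-OR."""
    return 1.0 - (1.0 - leak) * (1.0 - strength) ** (a + b)

def posterior(condition):
    """P(A=1 | E=1, extra condition on B) by exact enumeration."""
    num = den = 0.0
    for a, b in itertools.product([0, 1], repeat=2):
        if not condition(b):
            continue
        joint = (p_A if a else 1 - p_A) * (p_B if b else 1 - p_B) * p_evidence(a, b)
        den += joint
        num += joint * a
    return num / den

print("P(A | blob observed)          =", round(posterior(lambda b: True), 3))
print("P(A | blob observed, B known) =", round(posterior(lambda b: b == 1), 3))
# The second value is much lower: once B explains the blob, A is explained away.
```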
CAPTCHA parsing
• RCN is NOT trained on examples from each CAPTCHA
• We train on clean font examples
• Generalizes to different deformations and clutter without specific training
One-shot and few-shot recognition on MNIST Robustness to novel perturbations
Reconstruction under novel perturbations
Omniglot: one-shot generation samples
Two standard benchmarks for real-world text recognition
ICDAR Street View Text
Exceeds Deep CNN accuracy with 300x data efficiency
RCN is an integrated vision model: one model handles tasks that otherwise require separate, specialized competing models.
Task | Competing model
Classification | CNN
Parsing | Multi-CNN
Reconstruction | VAE, DRAW
Generation | Bayesian Program Learning
Occlusion reasoning & binding | (none listed)
Semi-transparent and shiny objects
Grasping success rate > 99% on all these variations
Triangulation strategy: properties of the brain, properties of the world / real-world datasets, algorithmic model.
• “RCN”: Recursive Cortical Network
• Compositionality
• Controllability: Top-down attention
• 300x more data-efficient than deep CNNs
Oct 2017
Microcircuit model
Triangulation strategy: properties of the brain, properties of the world / real-world datasets, algorithmic model.
• “RCN”: Recursive Cortical Network
• Compositionality
• Controllability: Top-down attention
• 300x more data-efficient than deep CNNs
Oct 2017
Graphical Model -> Factor Graph -> Neural Circuit
Figure (panels A-H): a small graphical model over binary variables a, b, c, d, e is written as a factor graph with OR factors, and the message-passing updates are implemented as a neural circuit built from exp, log, multiply, and divide elements. Messages m_{a→d}, m_{d→a}, m_{b→d}, m_{d→b}, m_{c→d}, m_{d→c}, m_{d→e}, m_{e→d} flow along the edges, and a belief b_d is computed at d. Each variable is binary, and a single scalar value (a log ratio) is passed along each edge. One of the circuit's updates is

m_{d→a} = log[ e^{m_{c→d} + m_{b→d} + m_{e→d}} / ( e^{m_{c→d} + m_{b→d} + m_{e→d}} − σ(m_{b→d}) σ(m_{c→d}) (e^{m_{e→d}} − 1) ) ]

with σ(x) = 1/(1 + e^{−x}).
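The binary-variable, log-ratio message convention can be made concrete with a small toy sketch (my own code, not the paper's circuit): a max-product message is sent through a deterministic OR factor by brute-force enumeration, with each edge carrying a single scalar, the log ratio of the message.

```python
import itertools

# Toy sketch of the message convention above: variables are binary and each
# edge carries one scalar, the log ratio mu(1) - mu(0) of a max-product message.

def or_factor_to_parent(child_log_ratios):
    """Max-product message (as a log ratio) from a deterministic OR factor to
    its parent d, where d = b_1 or ... or b_M, by brute-force enumeration."""
    best = {0: float("-inf"), 1: float("-inf")}
    for bs in itertools.product([0, 1], repeat=len(child_log_ratios)):
        d = int(any(bs))
        # Each child message contributes 0 when the child is off and its
        # log ratio when the child is on.
        score = sum(r for r, b in zip(child_log_ratios, bs) if b)
        best[d] = max(best[d], score)
    return best[1] - best[0]

# Two weakly supported children and one inhibited one: the parent turns on.
print(or_factor_to_parent([0.7, 0.3, -1.2]))   # 1.0
# All children off: the parent's log ratio goes negative.
print(or_factor_to_parent([-0.5, -2.0]))       # -0.5
```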
RCN factor graph for one level (figure): within a macrocolumn, microcolumns for contour features, contour feature copies, pools, and appearance features are tied together by OR, POOL, CONTOUR-LAT, CONTOUR-SURFACE, and APPEARANCE factors (nodes a through i). The labeled message pathways include FF_feat, FF_copy, FF_pool, FF_lat, FB_pool, FB_comp, BEL_feat, BEL_lat, FF_EXP_AWY, and ExpAwy (explaining away).
Message updates for each factor type (figure panels A-E), using temperature T, σ(x) = 1/(1 + e^{−x}), and f(x) = log(1 − e^{−x}):

POOL factor f between a pool variable a and its members b_1, ..., b_M:
m_{f→a} = max_{i=1..M} m_{b_i→f} − log M
m_{f→b_m} = min( m_{a→f} − log M, max_{j≠m} m_{b_j→f} )

OR factor d between child features c_1, ..., c_M and parent e:
z_m = T log σ( m_{c_m→d} / T )
h_m = Σ_{j≠m} z_j + T f( Σ_{j≠m} z_j / T )
m_{d→c_m} = T log( σ(h_m/T) + σ(−h_m/T) e^{m_{e→d}/T} )
m_{d→e} = Σ_m z_m + T f( Σ_m z_m / T )

Factor d between a parent a and members b_1, ..., b_M:
m_{d→a} = Σ_m m_{b_m→d}
m_{d→b_j} = m_{a→d} + Σ_{m≠j} m_{b_m→d}

CONTOUR-LAT factor f with lateral compatibility L(h_1, h_2) between features h_1 and h_2:
m^c_{f→h_2} = max_{h_1}( m_{h_1→f} + L(h_1, h_2) )
m^c_{f→h_1} = max_{h_2}( m_{h_2→f} + L(h_1, h_2) )

CONTOUR-SURFACE factor f between feature copies a_1, ..., a_M and b_1, ..., b_M and surface variable e:
s_a = T log Σ_i e^{m_{a_i→f}/T}
s_b = T log Σ_i e^{m_{b_i→f}/T}
s_d = T log Σ_i e^{(m_{a_i→f} + m_{b_i→f})/T}
m_{f→e} = T log( e^{(s_a + s_b − s_d)/T} − 1 )
together with the corresponding copy messages m^c_{f→a_i} and m^c_{f→b_i} back to the feature copies.
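A minimal sketch of the POOL-factor updates written out above (my own transcription using numpy, not the authors' code): the messages are the scalar log ratios exchanged between the pool variable a and its members b_1..b_M through the pool factor f.

```python
import numpy as np

# POOL-factor message updates, as transcribed from the equations above.

def pool_to_parent(msgs_b_to_f):
    """m_{f->a} = max_i m_{b_i->f} - log M"""
    msgs_b_to_f = np.asarray(msgs_b_to_f, dtype=float)
    return msgs_b_to_f.max() - np.log(msgs_b_to_f.size)

def pool_to_member(msg_a_to_f, msgs_b_to_f, m):
    """m_{f->b_m} = min(m_{a->f} - log M, max_{j != m} m_{b_j->f})"""
    msgs_b_to_f = np.asarray(msgs_b_to_f, dtype=float)
    others = np.delete(msgs_b_to_f, m)
    return min(msg_a_to_f - np.log(msgs_b_to_f.size), others.max())

msgs = [0.2, 1.5, -0.3]
print(pool_to_parent(msgs))            # 1.5 - log 3 ~= 0.40
print(pool_to_member(2.0, msgs, m=1))  # min(2.0 - log 3, 0.2) = 0.2
```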
Figure (panels A, B): mapping of the RCN factor graph onto cortical laminae. Layers 1, 2/3, 4, 5, and 6 of a microcolumn carry the feedforward, feedback, and explaining-away messages (FF_copy, FF_pool, FF_feat, FF_lat, FB_pool, FB_comp, BEL_feat, BEL_lat, BEL_POOL_FF, FF_EXP_AWY), with input arriving from the retina via first-order thalamus, gating and explaining away via the TRN and higher-order thalamus, direct and gated feedforward projections to other cortical columns, and copies of activity routed for feedback. Blob and interblob columns alternate across the sheet.
Per-lamina labels on the microcolumn:
• Feedback/higher-level context to this feature
• Pooling centered on this feature
• Sequences through this feature
• Lateral connections of this feature
• Feedforward representation of this feature
• Lateral connections with feedback
• Belief computation of this feature
• Feedback to components of this feature
All the neurons in a cortical column
represent the same feature
Different laminae represent the computations corresponding to that feature's participation:

• laterally with other features in same level

• hierarchically with features in other levels

• sequences passing through this feature

• feed-forward, feedback computation

• Belief computation



Many inter-laminar, within-column
connections can be innate.

Inter-columnar connections are
learned
Cortical microcolumn as a binary random variable
Thalamus as “Explaining away and gating”
Sherman & Guillery thalamic pathway (figure): the OR-factor messages m_{a→d}, m_{d→a}, m_{b→d}, m_{d→b}, m_{c→d}, m_{d→c} between variables a, b, c, d, e are mapped onto cortical layers L1-L6, the thalamic reticular nucleus (TRN), and first-order (FO) and higher-order (HO) thalamus. Sensory input enters through first-order thalamus, and layer 5 and layer 6 projections carry the corresponding messages. A second panel shows the same circuit mediating interactions between inter-blob and blob columns.
Virtual Neurophysiology
Subjective contours
Neon-color spreading (figure): hallucinated boundary and hallucinated color.
Triangulation strategy: properties of the brain, properties of the world / real-world datasets, algorithmic model.
Visit www.vicarious.com
Follow Vicarious on Twitter @vicariousai
Email: dileep@vicarious.com
Follow me on Twitter @dileeplearning
More information & Contact
Thank you for listening!
