Bio-Inspired Animated Characters
A Mechanistic & Cognitive View
Ben Kenwright
School of Media Arts and Technology
Southampton Solent University
United Kingdom
Abstract—Unlike traditional animation techniques, which attempt
to copy human movement, ‘cognitive’ animation solutions mimic
the brain’s approach to problem solving, i.e., a logical (intelligent)
thinking structure. This procedural animation solution uses bio-
inspired insights (modelling nature and the workings of the brain)
to unveil a new generation of intelligent agents. As with any
promising new approach, it raises hopes and questions; it is an
extremely challenging task that offers a revolutionary solution, not
just in animation but in a variety of fields, from intelligent robotics
and physics to nanotechnology and electrical engineering. Questions
arise, such as: how does the brain coordinate muscle signals? How does
the brain know which body parts to move? With all these activities
happening in our brain, we examine how our brain ‘sees’ our body
and how it can affect our movements. Through this understanding
of the human brain and the cognitive process, models can be
created to mimic our abilities, such as, synthesizing actions that
solve and react to unforeseen problems in a humanistic manner.
We present an introduction to the concept of cognitive skills, as
an aid in finding and designing a viable solution. This helps us
address principal challenges, such as: How do characters perceive
the outside world (input) and how does this input influence their
motions? What is required to emulate adaptive learning skills as
seen in higher life-forms (e.g., a child’s cognitive learning process)?
How can we control and ‘direct’ these autonomous procedural
character motions? Finally, drawing from experimentation and
literature, we suggest hypotheses for solving these questions and
more. In summary, this article analyses the biological and cognitive
workings of the human mind, specifically motor skills. We review
cognitive psychology research related to movement in an attempt
to produce more attentive behavioural characteristics. We conclude
with a discussion on the significance of cognitive methods for creating
virtual character animations, limitations and future applications.
Keywords–animation, life-like, movement, cognitive, bio-mechanics,
human, reactive, responsive, instinctual, learning, adapting, biological,
optimisation, modular, scalable
I. INTRODUCTION
Movement is Life. Animated films and video games are pushing
the limits of what is possible.
In today’s virtual environments, animation tends to be data-
driven [1], [2]. It is common to see animated characters using pre-
recorded motion capture data, but it is rare to see animated
characters driven using purely procedural solutions. With the
dawn of Virtual Reality (VR) and Augmented Reality (AR) there
is an ever growing need for content - to create indistinguishably
realistic virtual worlds quickly and cost effectively. While ren-
dered scenes may appear highly realistic, the ‘movement’ of ac-
tively driven systems (e.g., biological creatures) is an open area of
research [2]. Specifically, the question of how to ‘automatically’
create realistic actions that mimic the real world. This includes
the ability to learn and adapt to unforeseen circumstances in a life-
like manner. While we are able to ‘record’ and ‘playback’ highly
realistic animations in virtual environments, they have limitations.
The motions are constrained to specific skeleton topologies; moreover,
it is time-consuming and challenging to create motions for
non-humans (creatures and aliens). What is more, recording
animations for dangerous situations is impossible using motion
capture (so they must be created manually through artistic
intervention). Another key point to remember: in dynamically
changing environments (e.g., video games), pre-recorded animations
are unable to adapt automatically to changing situations.
This article attempts to solve these problems using biolog-
ically inspired concepts. We investigate neurological, cognitive
and behavioural methods. These methods provide inspirational
solutions for creating adaptable models that synthesize life-
like character characteristics. We examine how the human brain
‘thinks’ to accomplish tasks; and how the brain solves unforeseen
problems. Exploiting the knowledge of how the brain functions,
we formulate a system of conditions that attempt to replicate
humanistic properties. We discuss novel approaches to
solving these problems, by questioning, analysing and formulat-
ing a system based on the human cognitive processes.
Cognitive vs Machine Learning Essentially, cognitive com-
puting has the ability to reason creatively about data, patterns,
situations, and extended models (dynamically). However, most
statistics-based machine learning algorithms cannot handle prob-
lems much beyond what they have seen and learned (match).
A machine learning algorithm has to be paired with cognitive
capabilities to deal with truly ‘new’ situations. Cognitive science
therefore raises challenges for, and draws inspiration from, machine
learning; insights about the human mind help inspire new directions
for animation. Hence, cognitive computing, along with many other
disciplines within the field of artificial intelligence, is gaining
popularity, especially in character systems, and in the not-so-distant
future will have a colossal impact on the animation industry.
Automation The ability to ‘automatically’ generate physically
correct humanistic animations is revolutionary: remove and add
behavioural components (happy and sad); create animations for
different physical skeletons using a single set of training data;
perform a diverse range of actions, for instance, getting up, jumping,
dancing, and walking; react to external interventions while
completing an assigned task (i.e., combining motions with
priorities). These problem-solving skills are highly valued.
We want character agents to learn and adapt to the situation. This
includes:
• physically based models (e.g., rigid bodies) that are
controlled through internal joint torques (muscle forces)
• controllable adjustable joint signals to accomplish spe-
cific actions (trained)
• learn and retain knowledge from past experiences
• embed personal traits (personality)
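The components listed above can be grouped into a simple data structure. The following is a minimal sketch; the names (`CharacterAgent`, `JointSignal`) and fields are purely illustrative assumptions, not part of any published implementation:

```python
from dataclasses import dataclass, field

@dataclass
class JointSignal:
    """Adjustable control signal for one internal joint torque (muscle force)."""
    joint: str
    amplitude: float   # peak torque
    frequency: float   # Hz, for rhythmic actions
    phase: float = 0.0

@dataclass
class CharacterAgent:
    signals: list = field(default_factory=list)   # trained joint signals
    memory: dict = field(default_factory=dict)    # knowledge from past experiences
    traits: dict = field(default_factory=dict)    # embedded personality

    def remember(self, situation, solution):
        """Retain a solved situation for later reuse."""
        self.memory[situation] = solution

agent = CharacterAgent(traits={"eagerness": 0.7})
agent.signals.append(JointSignal("left_knee", amplitude=40.0, frequency=1.2))
agent.remember("get_up_from_back", [s.joint for s in agent.signals])
```

Keeping signals, memory, and traits as separate fields mirrors the modular view above: each can be trained, stored, or swapped independently.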
Problems We want the method to be automatic (i.e., not depend
too heavily on pre-canned libraries). We avoid simply playing back
captured animations, instead parameterizing and re-using
animations for different contexts (providing stylistic advice to
the training algorithm). We want the solution to have the ability
to adapt on-the-fly to unforeseen situations in a natural life-like
manner. Having said that, we also want to accommodate a diverse
range of complex motions, not just balanced walking, but getting-
up, climbing, and dancing actions. With a physics-based model
at the heart of the system (i.e., not just a kinematic skeleton
but joint torques/muscles), we are able to ensure a physically
correct solution. While a real-world human skeleton has a huge
number of degrees-of-freedom, we accept that a lower fidelity
model is able to represent the necessary visual characteristics
(keeping computational overheads reasonable). Of course, even a
simplified model possesses a large amount of ambiguity with
singularities. All things considered, we do not want to focus on
the ‘actions’ - but embrace the autonomous emotion, behaviour
and cognitive properties that sit on top of the motion (intelligent
learning component).
Figure 1. Homunculus Body Map - The somato-sensory homunculus is a kind
of map of the body [3], [4]. The distorted model/view of a person (see Figure
2) represents the amount of sensory information a body part sends to the central
nervous system (CNS).
Geometric to Cognitive Synthesizing animated characters for
virtual environments addresses the challenges of automating a
variety of difficult development tasks. Early research combined
geometric and inverse kinematic models to simplify key-framing.
Physical models for animating particles, rigid bodies, deformable
solids, fluids, and gases have offered the means to generate co-
pious quantities of realistic motion through dynamic simulation.
Bio-mechanical models employ simulated physics to automate the
lifelike animation of animals with internal muscle actuators. In
recent years, research in behavioral modeling has made progress
towards ‘self-animating’ characters that react appropriately to
perceived environmental stimuli [5], [6], [7], [8]. It has remained
difficult, however, to instruct these autonomous characters so that
they satisfy the programmer’s goals. As pointed out by Funge
et al. [9], the computer graphics solution has evolved, from
geometric solutions to more logical mathematical approaches, and
ultimately cognitive models, as shown in Figure 3.
A large amount of work has been done on motion re-
targeting (i.e., taking existing pre-recorded animations and mod-
ifying them for different situations) [10], [11], [12]. Targeted
solutions generate animations for specific situations, such as
locomotion [13] and climbing [14]. Kinematic models do not take
into account the physical properties of the model and are only
able to solve local problems (e.g., reaching and stepping, not
complex rhythmic actions) [15], [16], [17]. Procedural models
may not converge to natural-looking motions [18], [19], [20].
Cognitive models go beyond behavioral models, in that they
govern what a character knows, how that knowledge is acquired,
and how it can be used to plan actions. Cognitive models are
applicable in instructing a new breed of highly autonomous,
quasi-intelligent characters that are beginning to find use in in-
teractive virtual environments. We decompose cognitive modeling
into two related sub-tasks: (1) domain knowledge specification
and (2) character instruction. This is reminiscent of the classic
dictum from the field of artificial intelligence (AI) that tries to
promote modularity of design by separating out knowledge from
control.
knowledge + instruction = intelligent behavior (1)
Domain (knowledge) specification involves administering
knowledge to the character about its world and how that world can
change. Character instructions tell the character to try to behave
in a certain way within its world in order to achieve specific goals.
Like other advanced modeling tasks, both of these steps can be
fraught with difficulty unless developers are given the right tools
for the job.
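The separation in Eq. (1) can be illustrated with a toy example. This is a deliberately minimal sketch: the one-dimensional world, the action names, and the `instruct` function are all invented for illustration, not a prescribed implementation.

```python
# Domain knowledge: what the character knows about how its world can change
# (here, each known action maps to its effect on a 1-D position).
domain_knowledge = {
    "step_forward": +1,
    "step_back": -1,
}

def instruct(knowledge, start, goal):
    """Instruction: combine knowledge with a goal to produce behaviour (a plan)."""
    plan = []
    pos = start
    while pos != goal:
        # Pick the action whose known effect moves us towards the goal.
        action = "step_forward" if goal > pos else "step_back"
        plan.append(action)
        pos += knowledge[action]
    return plan

behaviour = instruct(domain_knowledge, start=0, goal=3)
```

The point of the separation is modularity: the knowledge table can be extended (new actions, new effects) without touching the instruction logic, and vice versa.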
Components We wanted to avoid a ‘single’ amalgamated al-
gorithm (e.g., Neural Networks or connectionist models [21]).
Instead we investigate modular or dissectable learning models
for adapting joint signals to accomplish tasks. For example,
genetic algorithms [18], in combination with Fourier methods
to subdivide complex actions into components (i.e., extract and
identify behavioural characteristics [22]). Joint motions are
essentially signals, while the physics-based model ensures the
generated motions are physically correct [23]. Given the
advancements in parallel hardware, we envision the exploitation
of massively parallel architectures as essential.
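The signal-based view above can be made concrete. Below is a minimal sketch, assuming a truncated Fourier series for a single joint angle and a simple mutate-and-keep search standing in for the genetic algorithm of [18]; the target trajectory, step sizes, and iteration count are invented for the example.

```python
import math
import random

def joint_signal(coeffs, t, base_freq=1.0):
    """theta(t) as a truncated Fourier series: coeffs = [(amplitude, phase), ...]."""
    return sum(a * math.sin(2 * math.pi * (k + 1) * base_freq * t + p)
               for k, (a, p) in enumerate(coeffs))

def fitness(coeffs, target, samples=32):
    """Lower is better: squared error against a target trajectory over one cycle."""
    return sum((joint_signal(coeffs, i / samples) - target(i / samples)) ** 2
               for i in range(samples))

def target(t):
    return 0.5 * math.sin(2 * math.pi * t)   # desired rhythmic motion

random.seed(1)
best = [(0.0, 0.0)]                          # one harmonic, start from a flat signal
best_f = fitness(best, target)
for _ in range(2000):                        # heuristic mutate-and-keep search
    cand = [(a + random.gauss(0, 0.05), p + random.gauss(0, 0.05))
            for a, p in best]
    f = fitness(cand, target)
    if f < best_f:
        best, best_f = cand, f
```

A genetic algorithm would maintain a population with crossover rather than a single candidate, but the parameterisation (Fourier coefficients as the genome, trajectory error as the fitness) carries over unchanged.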
Figure 2. Homunculus Body Map - Reinert et al. [4] presented a graphics
paper on mesh deformation to visualize the somato-sensory information of the
brain-body. The figure conveys the importance of the neuronal homunculus -
i.e., the human body part size relation to neural density and the brain.
Contribution The novel contribution of this technical article
is the amalgamation of numerous methods, for instance, bio-
mechanics, psychology, robotics, and computer animation, to
address the question of ‘how can we make virtual characters solve
unforeseen problems automatically and in a realistic manner?’
(i.e., mimic the human cognitive learning process).
Figure 3. Timeline - Computer Graphics Cognitive Development Model (Geometric, Kinematic, Physical, Behavioural, and Cognitive) [9]. Simplified illustration of
milestones over the years that have contributed novel animation solutions - emphasising the gradual transition from kinematic and physical techniques to intelligent
behavioural models. [A] [24]; [B] [20]; [C] [19]; [D] [25]; [E] [26]; [F] [27]; [G] [28]; [H] [18]; [I] [29]; [J] [30]; [K] [31]; [L] [32]; [M] [33]; [N] [34]; [O] [35];
[P] [8]; [Q] [36]; [R] [7]; [S] [5]; [T] [6]; [U] [37]; [V] [38];
II. BACKGROUND & RELATED WORK
Literature Gap The research in this article brings together
numerous diverse concepts; while each is well studied in its
individual field, taken as a whole and applied to virtual character
animation there is a serious gap in the literature.
Hence, we begin by exploring branches of research from cognitive
psychology and bio-mechanics before taking them across and
combining them with computer animation and robotics concepts.
Autonomous Animation Solutions Formal approaches to an-
imation, such as, genetic algorithms [18], [19], [20], may not
converge to natural looking motions without additional work,
such as, artist intervention or constrained/complex fitness func-
tions. This causes limitations and constrains the ‘automation’
factor. We see autonomy as the emergence of salient, novel action
discovery through self-organisation of high-level, goal-directed
orders. The behavioural aspect emerges from the physical (or
virtual) constraints and fundamental low level mechanisms. We
adapt bodily motor controls (joint signals) from randomness to
purposeful actions based on cognitive development (Lee [39]
referred to this process as evolving from babbling to play).
Interestingly, this intrinsic method of behavioural learning has
also been demonstrated in biological models (known as action
discovery) [40].
Navigation/Controllers/Mechanical Synthesizing human
movement that mimics real-world behaviours ‘automatically’ is
a challenging and important topic. Typically, reactive approaches
for navigation and pursuit [24], [41], [42], [27], may not readily
accommodate task objectives, sensing costs, and cognitive
principles. A cognitive solution adapts and learns (finds answers
to unforeseen problems).
Expression/Emotion Humans exhibit a wide variety of ex-
pressive actions, which reflect their personalities, emotions, and
communicative needs [25], [26], [28]. These variations often in-
fluence the performance of simpler gestural or facial movements.
Components The essential components are:
• Fourier - subdivide actions into components, extract and
identify behavioural characteristics [22]
• Heuristic Optimisation [18] - adapting non-linear sig-
nals (with purpose)
• Physics-Based [43], [23] - torques and forces to control
the model
• Parallel Architecture - exploit massively parallel pro-
cessor architecture, such as, the graphical processing unit
(GPU)
• Randomness - inject awareness and randomness (blood
flow, respiratory signals, background noise) [44], [45]
Brain Body Map As shown in Figure 1, we are able to map
the mind’s awareness of different body parts. This is known as the
homunculus body map. So why is it important for movement?
It helps us understand the neural mechanisms of human sensori-
motor coordination and their cognitive connection. While we are a
complex biological organism, we need feedback and information
(input) to be able to move and thus live (i.e., movement is life).
The motor part of the brain relies on information from the sensory
systems. The control signals are dynamically changing depending
on our state. Simply put, the better the central representation,
the better the motor output will be and the more life-like and
realistic the final animations will be. Our motor systems need
to know the state of our body. If the situation is not known or not
very clear, the movements will not be good, because the motor
systems will be ‘afraid’ to go all out. It is very similar to driving a car
on an unknown road in misty conditions with only an old, worn
and worm-eaten map. We drive slowly and tensely, to avoid hitting
something or getting off the road. This is safety behaviour: safe, but
taxing on the system.
Cognitive Science The cognitive science of motion is an inter-
disciplinary scientific study of the mind and its processes. It
examines what motion cognition is, what it does and how it
works. This includes research into intelligence and behaviour,
especially focusing on how information is represented, processed,
and transformed (in faculties such as perception, language, memory,
attention, reasoning, and emotion) within nervous systems
(humans or other animals) and machines (e.g., computers).
Figure 4. Brain and Actions - The phases (left-to-right) the human brain goes through - from thinking about doing a task to accomplishing it (e.g., walking to the
kitchen to get a drink from the cupboard).
Cognitive motion science consists of multiple research disciplines,
including robotics, psychology, artificial intelligence, philosophy,
neuroscience, linguistics, and anthropology. The subject spans
multiple levels of analysis, from low level learning and decision
mechanisms to high level logic and planning; from neural cir-
cuitry to modular brain organization. However, the fundamental
concept of cognitive motion is the understanding of instinctual
thinking in terms of the structural mind and computational
procedures that operate on those structures. Importantly, cognitive
solutions are not only adaptive but also anticipatory and
prospective, that is, they need to have (by virtue of their phy-
logeny) or develop (by virtue of their ontogeny) some mechanism
to rehearse hypothetical scenarios.
Neural Networks and Cognitive Simulators Computational
neuroscience [46], [29], [47] offers biologically inspired neural
models for simulating information processing, cognition, and
behaviour. The majority of the research has focused on modelling
‘isolated components’. Cognitive architectures [48] use biologically
based models for goal-driven learning and behaviours. Publicly
available neural network simulators exist [49].
Motor Skills Our brain sees the world in ‘maps’. The maps
are distorted, depending on how we use each sense, but they
are still maps. Almost every sense has a map. Most senses have
multiple maps. We have a ‘tonotopic’ map, which is a map of
sound frequency, from high pitched to low pitched, which is how
our brain processes sound. We have a ‘retinotopic’ map, which
is a reproduction of what you are seeing, and it is how the brain
processes sight. Our brain loves maps. Most importantly, we have
maps of our muscles. The mapping from sensory information to
motor movement is shown in Figure 1. For muscle movements,
the finer, more detailed the movements are, the more brain space
those muscles have. Hence, we can address which muscles take
priority and under what circumstances (i.e., sensory input). This
also opens the door to lots of interesting and exciting questions,
such as, what happens to the maps if we lose a body part, such
as, a finger.
Psychology Aspect A number of interesting facts are hidden
in the psychology aspect of movement that are often taken for
granted or overlooked. Incorporating them in a dynamic system
allows us to solve a number of problems. For example, when we
observe movements which are slightly different from each other
but possess similar characteristics. The work by Armstrong [50]
showed that when a movement sequence is sped up as a unit,
the overall relative timing or ‘phasing’ remains constant. This led
to the discovery of relative forces, i.e., the relationship among forces
in the muscles participating in the action.
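Armstrong's observation can be checked with a small example: uniformly rescaling a sequence's keyframe times leaves its relative timing ('phasing') unchanged. The keyframe values below are invented for illustration.

```python
def phasing(times):
    """Relative timing: each interval as a fraction of the total duration."""
    total = times[-1] - times[0]
    return [(b - a) / total for a, b in zip(times, times[1:])]

keyframes = [0.0, 0.2, 0.5, 1.0]          # original movement sequence (seconds)
sped_up = [t / 2.0 for t in keyframes]    # same sequence played twice as fast

# The 'phasing' is invariant under uniform time scaling.
for a, b in zip(phasing(keyframes), phasing(sped_up)):
    assert abs(a - b) < 1e-12
```

For animation, this suggests one scalar (playback rate) can retime a whole learned action while preserving its characteristic signature.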
How the Brain Controls Muscles Let us pretend that we want
to go to the kitchen, because we are hungry. First, an area in
our brain called the parietal lobe comes up with lots of
possible plans. We could get to the kitchen by skipping, sprinting,
uncoordinated somersaulting, or walking. The parietal lobe sends
these plans to another brain area called the basal ganglia. The
basal ganglia picks ‘walking’ as the best plan (with uncoordinated
somersaulting as a close second option). It tells the parietal lobe the
plan. The parietal lobe confirms it, and sends the ‘walk to kitchen’
plan down the spinal cord and to the muscles. The muscles move.
As they move, our cerebellum kicks into high gear, making sure
we turn right before we crash into the kitchen counter, and that
we jump over the dog. Part of the cerebellum’s job is to make
quick changes to muscle movements while they are happening
(see Figure 4).
Visualizing the Solution (Offline) We visualize a goal. In our
mind, over and over and over again. We picture the movements.
We see ourselves catching that ball. Dancing that toe touch. Swim-
ming that breaststroke. We watch it in the movie of our mind
whenever we can. Scrutinize it. Is our wrist turning properly? Is
our kick high enough? If not, we change the picture. We see ourselves
doing the movement perfectly. As far as our parietal lobe and
basal ganglia are concerned, this is exactly the same as doing
the movement. When we visualize the movement, we activate all
those planning pathways. Those neurons fire, over and over again.
Which is what needs to happen for our synapses to strengthen.
In other words, by picturing the movements, we are actually
learning them. This makes it easier for the parietal lobe to send
the right message to the muscles. So when we actually try to
perform a movement, we will get better, faster. We will need less
physical practice to be good at sports. This does not work for
general fitness (i.e., increased strength). We still need to train our
muscles, heart, and lungs to become strong. However, it is good
for skilled movements. Basketball lay ups. Gymnastics routines.
For improved technique, visualization works. We train our brain,
which makes it easier to control our muscles. What does this
have to do with character simulations? We are able to mimic
the ‘visualization’ approach by having our system constantly run
simulations in the background. Exploit all that parallel processing
power. Run large numbers of simulations one or two seconds in
advance and see how the result plays out. If the character’s foot
is a few centimetres forward, or if we use more torque on the knee
muscle, how does this compare with the ideal animation we are
aiming for? As we find solutions, we store them and improve
upon them each time a similar situation arises.
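The background ‘visualization’ idea can be sketched as a lookahead search. This is a toy illustration: the 1-D point-mass stands in for a full character simulation, and the names (`rollout`, `visualize`) and constants are assumptions invented for the example.

```python
import random

def rollout(torque, steps=20, dt=0.05):
    """Simulate a toy 1-D joint: constant torque on a unit mass, return final position."""
    pos, vel = 0.0, 0.0
    for _ in range(steps):
        vel += torque * dt     # acceleration from the candidate torque
        pos += vel * dt
    return pos

def visualize(ideal_pos, candidates):
    """Mentally 'rehearse' each candidate a second ahead; keep the one closest to the ideal."""
    return min(candidates, key=lambda tq: abs(rollout(tq) - ideal_pos))

random.seed(0)
candidates = [random.uniform(-5.0, 5.0) for _ in range(200)]   # parallel rehearsals
best_torque = visualize(ideal_pos=1.0, candidates=candidates)
solutions = {"reach_1m": best_torque}   # stored for when a similar situation recurs
```

In a real system each rollout would be a short physics simulation, and the candidates would be evaluated in parallel (e.g., on the GPU) rather than in a serial loop.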
Figure 5. Overview - High level view of interconnected components and their justifications. (a) We have a current (starting) state and a final state. The unknown
middle transitioning states are what we are searching for. The transition state is a dynamic problem that is specific to the situation. For instance, the terrain or the
situation may vary (slopes or crawling under obstacles). (b) A heuristic model would be able to train a set of trigonometric functions (e.g., Fourier series), to create
rhythmic motions that are able to accomplish the task. The low level task (fitness function), being a simple ‘overall centre of mass trajectory’. (c) With (b) on its
own, the solution is plagued with issues, such as, how to steer or control the type of motion and if the final motion is ‘humanistic’ or ‘life-like’. Hence, we have a
‘pre-defined’ library of motions that are chosen based on the type of animation we are leaning towards (standard walk or hopping). The information from the
animation is fed back into the fitness function in (b). Providing a multi-objective problem, centre of mass, end-effectors, and frequency components for ‘style’. (d)
The solution from each problem is ‘stored’ in a sub-bank of the animation and used for future problems. This builds upon using previous knowledge to help solve
new problems faster in a coherent manner (e.g., previous experiences will cause different characters to create slightly different solutions over time).
Physically Correct Model Our solution controls a physics
based model using joint torques as in the real world. This mimics
the real world more closely, not only do we require the model
to move in a realistic manner but it also has to control joint
muscles in sufficient ratios to achieve the final motion (e.g.,
balance control). Adjusting the physical model, for instance,
muscle strength or leg lengths, allows the model to retrain to
achieve the action.
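Torque-driven joint control can be sketched with a simple proportional-derivative (PD) rule; the article does not prescribe a specific controller, so the gains and single-joint model below are illustrative assumptions only.

```python
def pd_torque(theta, omega, theta_target, kp=30.0, kd=6.0):
    """Joint 'muscle' torque from angular error and velocity (PD control)."""
    return kp * (theta_target - theta) - kd * omega

def simulate(theta_target, steps=400, dt=0.01, inertia=1.0):
    """Integrate a single torque-driven joint towards the target angle."""
    theta, omega = 0.0, 0.0
    for _ in range(steps):
        tau = pd_torque(theta, omega, theta_target)
        omega += (tau / inertia) * dt   # angular acceleration from torque
        theta += omega * dt             # semi-implicit Euler step
    return theta

final_angle = simulate(theta_target=0.8)
```

Changing `inertia` (e.g., a longer or heavier limb) changes the response, which is why the model must retrain its gains or signals after the physical skeleton is adjusted.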
(Get Up) Rise Animations Animation is a diverse and complex
area, so rather than try to create solutions for every possible
situation, we focus on a particular set of actions, that is, rising
movements. Rise animations require a suitably diverse range
of motor skills. We formulate a set of tasks to evaluate our
algorithm, such as, get up from front, get up from back, get
up on uneven ground and so on. The model also encapsulates
underlying properties, such as, visual attention and expressive
qualities (tired, unsure, eager) and human expressiveness. We
consider a number of factors, such as, inner and outer information,
emotion, personality, primary and secondary goals.
III. OVERVIEW
High Level Elements The system is driven by four key sources
of information:
1) the internal information (e.g., logistics of the brain,
experience, mood)
2) the aim or action
3) external input (e.g., environmental, contacts, comfort,
lighting)
4) memory and information retrieval (e.g., parallel models
and associative memory)
Motion Capture Data (Control) We have a library of actions
as reference material for look-up and comparison. Some form of
‘control’ and ‘input’ is needed to steer the characters to perform actions in
a particular way (e.g., instead of the artist creating a large look-up
array of animations for every single possible solution), we provide
fundamental poses and simple pre-recorded animations to ‘guide’
the learning algorithm. Search models are able to explore
their diverse search-space to reach the goal (e.g., heuristically
adjusting joint muscles); however, a reference ‘library’ allows us
to steer the solution towards what is ‘natural-looking’, as there
are a wide number of ways of accomplishing a task - but which are
‘normal’ and which are ‘strange’ and uncomfortable? The key points
we concentrate on are:
1) the animations require basic empirical information
(e.g., reference key-poses) from human movement and
cognitive properties;
2) the movement should not simply replay pre-recorded mo-
tions, but adapt and modify them to different contexts;
3) the solution must react to disturbances and changes in
the world while completing the given task;
4) the senses provide unique pieces of information, which
should be combined with internal personality and emo-
tion mechanisms to create the desired actions and/or re-
actions.
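The key points above suggest a multi-objective fitness: a task term plus a style term that measures similarity to a library key-pose. The sketch below assumes a simple weighted sum; the weights, poses, and centre-of-mass values are invented for the example.

```python
def fitness(pose, com, com_target, reference_pose, w_task=1.0, w_style=0.3):
    """Multi-objective score: task error (centre of mass) plus style error
    (distance to a reference key-pose from the library). Lower is better."""
    task_err = (com - com_target) ** 2
    style_err = sum((a - b) ** 2 for a, b in zip(pose, reference_pose))
    return w_task * task_err + w_style * style_err

reference = [0.1, -0.4, 0.9]    # library key-pose (joint angles, radians)
natural = [0.12, -0.38, 0.85]   # candidate close to the reference
strange = [1.4, 1.2, -1.1]      # candidate that reaches the goal in an odd way

# Both candidates satisfy the task equally well (com == com_target), so the
# style term decides which one the search is steered towards.
f_natural = fitness(natural, com=1.0, com_target=1.0, reference_pose=reference)
f_strange = fitness(strange, com=1.0, com_target=1.0, reference_pose=reference)
```

The `w_style` weight acts as the amount of ‘stylistic advice’ given to the training algorithm: zero reduces the search to a pure task objective, large values force close imitation of the library.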
Blending/Adapting Animation Libraries During motor skill
acquisition, the brain learns to map between ‘intended’ limb
motion and requisite muscular forces. We propose that regions
(i.e., particular body segments) in the animation library are
blended together to find a solution that is aesthetically pleasing
(i.e., based upon pre-recorded motions instead of randomly
searching).
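Per-segment blending can be sketched as follows, assuming poses are stored as segment-to-angle dictionaries; the segment names, angles, and weights are illustrative only.

```python
def blend_regions(pose_a, pose_b, weights):
    """Per-segment linear blend: weight w takes w*pose_a + (1-w)*pose_b
    for that segment, so different body regions can favour different clips."""
    return {seg: weights[seg] * pose_a[seg] + (1.0 - weights[seg]) * pose_b[seg]
            for seg in pose_a}

walk = {"hip": 0.30, "knee": 0.60, "shoulder": 0.10}    # library pose A
reach = {"hip": 0.10, "knee": 0.20, "shoulder": 0.90}   # library pose B

# Lower body mostly from the walk clip, upper body mostly from the reach clip.
blended = blend_regions(walk, reach, {"hip": 0.9, "knee": 0.9, "shoulder": 0.1})
```

Blending per region rather than whole-body is what allows, e.g., a walking lower body to be combined with a reaching upper body without searching from scratch.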
Virtual Infant (or Baby) Imagine a baby with no knowledge
or understanding. As we explained, we take a bottom-up view, starting
with nothing and educating the system to mimic humanistic
(organic) qualities. Learning algorithms tune skeletal motor
signals to accomplish high-level tasks. As with a child - a ‘trial-
and-error’ approach to learning - exploring what is possible
and impossible - to eventually reach a solution. This requires
continuously integrating corrective guidance (as with a child
- without knowing what is right and wrong - the child will
never learn). This guidance is through fitness criteria and example
motion clips (as children do - see and copy - or try to). Performing
multiple training exercises over and over again to learn skills.
Having the algorithm actively improve (e.g., proprioception - how
the brain understands the body). As we learn to perform motions,
there are thousands of small adjustments that our body as a
whole is making every millisecond to ensure an optimal outcome
(quickest, most energy efficient, closest to the intended style). The
body is constantly monitored by sending and receiving sensory
information (e.g., to and
from every joint, limb, and contact). Over time, the experience
strengthens the model’s ability to accomplish tasks more quickly and
more efficiently.
Stability Autonomous systems have ‘stability’ issues (i.e., they
are far from equilibrium stability) [51]. Due to the dynamic
nature of a character’s actions, they are dependent on their
environment (external factors) and require interaction; they are
open processes that exhibit closed self-organization. However, we
can measure stability in relation to reference poses, energy, and
balance to draw conclusions of the effectiveness of the learned
solution.
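The stability measures mentioned above (reference-pose deviation, balance, energy) can be sketched with simple metrics; the thresholds and data below are invented assumptions for the example.

```python
def pose_deviation(pose, reference):
    """Largest per-joint deviation from a reference pose (radians)."""
    return max(abs(a - b) for a, b in zip(pose, reference))

def balance_margin(com_x, support_min, support_max):
    """Distance of the centre of mass from the nearest support edge (>0 = balanced)."""
    return min(com_x - support_min, support_max - com_x)

def energy(torques, dt=0.01):
    """Crude effort measure: time-integral of squared joint torques."""
    return sum(t * t for t in torques) * dt

ref = [0.0, 0.5, -0.5]
learned = [0.05, 0.45, -0.55]
stable = (pose_deviation(learned, ref) < 0.1
          and balance_margin(com_x=0.02, support_min=-0.1, support_max=0.15) > 0.0)
```

Tracking these scalars over a learned motion gives a quantitative way to compare solutions, even when the underlying system is far from equilibrium.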
Memory The system learns through explorative searching (i.e., with
quantitative measures for comfort, security, and satisfaction). While a
character may find an ‘optimal’ solution that meets the specified
criteria - it will continue to expand its memory repertoire of
actions. This is a powerful component, increasing the efficiency in
achieving a goal (e.g., the development of walking and retention
of balanced motion in different circumstances would be more
effective). The view that exploration and retention (memory)
are crucial to ontogenetic development is supported by
research findings in developmental psychology [52]. Hofsten [53]
explains that it is not necessarily success at achieving task-
specific goals that drives development but the discovery of new
ways of doing something (through exploration). This forms a solution
that builds upon ‘prior knowledge’ with an increased reliance
on machine learning and statistical evaluation (i.e., for tuning
the system parameters). This leads to a model that constantly
acquires new knowledge both for the current and future tasks.
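The memory component can be sketched as a bank of stored solutions recalled by situation similarity; the feature keys, the nearest-neighbour measure, and the class name are assumptions made for this illustration.

```python
class MotionMemory:
    """Store solved situations and recall the closest match as a warm start."""

    def __init__(self):
        self.bank = {}   # situation features (tuple) -> stored solution

    def store(self, features, solution):
        self.bank[features] = solution

    def recall(self, features):
        """Return the solution for the most similar known situation (or None)."""
        if not self.bank:
            return None
        key = min(self.bank, key=lambda k: sum((a - b) ** 2
                                               for a, b in zip(k, features)))
        return self.bank[key]

memory = MotionMemory()
memory.store((0.0, 1.0), solution="get_up_flat")    # e.g., (ground slope, gravity)
memory.store((0.3, 1.0), solution="get_up_slope")
seed = memory.recall((0.25, 1.0))                   # new, slightly different slope
```

Seeding the search with a recalled solution is what lets a character solve a familiar-looking situation faster each time it recurs, and stored banks could also be shared between characters.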
IV. COMPLEXITY
Experimenting with optimisation algorithms (i.e., different
fitness criteria for specific situations). Highly dynamic animations
(jumping or flying through the air). Close proximity simulations
(dancing, wrestling, getting in/out of a vehicle). Exploring ‘be-
yond’ human but creative creatures (multiple legs and arms).
Instead of aesthetic qualities, investigate ‘interesting’ behaviours.
As the system and training evolve, a ‘control language’ can be used
to give orders. We are not just limited to generic motions (i.e., walking
and jumping), but the ability to learn and search for solutions
(whatever the method). Introduce risk, harm, and comfort to
‘limit’ the solutions to be more ‘human’ and organic. Avoid
unsupervised learning since it leads to random unnatural and
uncontrollable motions. Simple examples (i.e., training data) are used to
steer the learning. We gather knowledge and extend the memory
of experiences to help solve future problems (learn from past
problems). This method is very promising for building organic
real-life systems (handle unpredictable situations in a logical
natural manner). The technique is scalable and generalizes across
topologies. Learned solutions can be shared and transferred
between characters (i.e., accelerated learning through sharing).
Figure 6. Complexity - As animation and behavioural character models become
increasingly complex, it becomes more challenging and time consuming to
customize and create solutions for specific environments/situations.
A physically correct, self-adapting, learning animation sys-
tem to mimic human cognitive mechanics is a complex task
that embodies a wide range of biologically based concepts. We take
a bottom-up approach (i.e., starting with nothing). This forms a
foundation from which greater details can be added. As the model
grows in complexity and detail, more expressive and autonomous
animations appear. This leads on to collaborative agents, i.e., social
learning and interaction (i.e., behaviour in groups). The enormous
complexity of the human brain and its ability to problem solve
cannot be underestimated - however, through simple approxima-
tions we are able to develop autonomous animation models that
embody and possess humanistic qualities, such as, cognitive and
behavioural learning abilities.
We tackle a complex problem: our movement allows us to
express a vast array of behaviours in addition to solving physical
problems, such as balance and locomotion. We have only scratched
the surface of what is possible, constructing and explaining
a simple solution (for a relatively complex neuro-behavioural
model) to investigate a modular, extendible framework for
synthesizing human movement (i.e., mapping functionality, problem
solving, mapping of brain to anatomy, and learning/experience).
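One way to picture such a modular, extendible framework is as a pipeline of swappable components. The sketch below is purely illustrative; the module names and the shared state dictionary are our assumptions, not the paper's design.

```python
# Hypothetical sketch: each capability (problem solving, learning/experience)
# is a swappable module behind a shared interface, so the framework can be
# extended without rewriting a single amalgamated algorithm.

class Module:
    def update(self, state: dict) -> dict:
        return state

class ProblemSolver(Module):
    def update(self, state):
        # Trivial illustrative policy: plan only when a goal is present.
        state["plan"] = "reach_goal" if state.get("goal") else "idle"
        return state

class ExperienceMemory(Module):
    def __init__(self):
        self.episodes = []
    def update(self, state):
        self.episodes.append(dict(state))  # retain past problems/solutions
        return state

class Framework:
    def __init__(self, modules):
        self.modules = modules             # add/remove modules to extend
    def step(self, state):
        for m in self.modules:
            state = m.update(state)
        return state

fw = Framework([ProblemSolver(), ExperienceMemory()])
result = fw.step({"goal": (1.0, 0.0)})
```

The design choice here mirrors the text's preference for dissectable learning models over a 'single' amalgamated algorithm: each module can be inspected, replaced, or retrained independently.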
Body Language The way we 'move' says a lot. How we stand
and how we walk reveals 'emotional' details, and we humans are
very good at spotting these underlying characteristics. These
fundamental physiological motions are important in animation
if we want to synthesize life-like characters. While these
subtle underlying motions are aesthetic (i.e., sitting on top of the
physical action or goal), they are nonetheless equally important.
Emotional synthesis is often classified as a low-level biological
process [54]: chemical reactions in the brain for stress and pain
correlate with and modulate various behaviours (including motor
control), with a vast array of effects, influencing sensitivity, mood,
and emotional responses. We have taken the view that motion
and learning are driven by a high-level cognitive model (avoiding the
various underlying physiological and chemical parameters).
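As a rough illustration of how an emotional layer might sit on top of a physical action, the following sketch modulates a base cyclic joint signal with mood parameters, loosely in the spirit of Fourier-based style methods [22]; the parameter names and scaling factors are invented for illustration only.

```python
import math

# Hypothetical sketch: mood parameters scale the amplitude and frequency of
# a base joint signal, so the same physical action reads as, e.g., tired or
# energetic. The weights below are illustrative, not measured values.

def joint_signal(t, base_amp=1.0, base_freq=1.0, tiredness=0.0, excitement=0.0):
    amp = base_amp * (1.0 - 0.5 * tiredness) * (1.0 + 0.3 * excitement)
    freq = base_freq * (1.0 - 0.3 * tiredness) * (1.0 + 0.2 * excitement)
    return amp * math.sin(2.0 * math.pi * freq * t)

# A tired character's joint swing is smaller and slower than a neutral one,
# while the underlying action (the sinusoidal 'walk cycle') is unchanged.
neutral = joint_signal(0.25)
tired = joint_signal(0.25, tiredness=1.0)
```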
Input (Sensory Data) The brain has a vast array of sensory
data, such as sight, sound, temperature, smell, and touch,
that feeds in to make the final decision. Technically, our simple
assumption is analogous to a blind person taking lots of short
exploratory motions to discover how to accomplish the task.
We reduce the skeleton complexity compared to a full human model
(numerical complexity) and use physical information from the environ-
ment, like contacts, centre of mass, and end-effector locations.
The output is motor control signals, combined with behavioural selection,
an example-learning motion library, emotion, and fitness evaluation.
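The input/output interface described here might be sketched as follows. The data fields and the trivial policy are hypothetical, intended only to show reduced sensory state (contacts, centre of mass, end-effectors) mapping to motor-control output with behaviour selection.

```python
from dataclasses import dataclass

# Hypothetical sketch of the controller's reduced input/output interface;
# field names and the control policy are illustrative assumptions.

@dataclass
class SensoryState:
    contacts: list         # active contact points with the environment
    centre_of_mass: tuple  # (x, y, z) of the reduced skeleton's COM
    end_effectors: dict    # e.g., {"left_foot": (x, y, z), ...}

@dataclass
class MotorOutput:
    joint_torques: dict    # per-joint control torque (muscle forces)
    behaviour: str         # selection from the learned motion library

def controller(state: SensoryState) -> MotorOutput:
    # Trivial policy: no contacts means airborne, so select a protective
    # behaviour; otherwise apply a balancing torque proportional to the
    # horizontal COM offset (illustrative gain).
    if not state.contacts:
        return MotorOutput(joint_torques={}, behaviour="protect_fall")
    kp = 10.0
    torque = {"hip": -kp * state.centre_of_mass[0]}
    return MotorOutput(joint_torques=torque, behaviour="balance")

out = controller(SensoryState(contacts=["left_foot"],
                              centre_of_mass=(0.1, 0.9, 0.0),
                              end_effectors={}))
```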
V. CONCLUSION
We have specified a set of simple constraints to steer and
control the animation (e.g., get-up poses). We developed a model
based on biology, cognitive psychology, and adaptive heuristics
to create animations to control a physics-based skeleton that
adapts and re-trains parameters to meet changing situations (e.g.,
different physical and environmental information). We inject
personality and behavioural components to create animations that
capture life-like qualities (e.g., mood, tiredness, and fear).
This article suggests several possibilities for future work.
It would be valuable to test specific hypotheses
and assumptions by constructing more focused and rigorous
experiments. However, these hypotheses are hard to state pre-
cisely, since we are trying to model humanistic cognitive abilities.
A practical approach might
be to directly compare and contrast real-world and synthesized
situations; for instance, an experiment with an actor dealing with
difficult situations, such as stepping over objects and walking un-
der bridges. Younger children approach such problems in a different
way, similar to our computer agent, learning through trial and
error, behaving less mechanically and more consciously. Further,
communication with a director (e.g., example animations and
poses for control) might lead to more formal languages of
commands. This would help us learn precisely what sorts of
commands are needed and when they should be issued. Finally,
we could go further by developing richer cognitive models
and control languages for describing motion and style, to solve
questions not even imagined.
We have taken a simplified view of cognitive modelling. We
will continue to see cognitive architectures develop over the
coming years that are capable of adapting and self-modifying,
both in terms of parameter adjustment and phylogenetic skills. This
will be achieved through learning and, more importantly, through the
modification of the very structure and organization of the system
itself (memory and algorithm), so that it is capable of altering its
system dynamics based on experience, expanding its repertoire of
actions, and thereby adapting to new circumstances [52]. A variety of
learning paradigms will need to be developed to accomplish these
goals, including, but not necessarily limited to, unsupervised,
reinforcement, and supervised learning.
Learning through watching Providing the ability to translate
2D video images into 3D animation sequences would give cog-
nitive learning algorithms the ability to constantly 'watch' and
learn from people: watching people in the street walking and
avoiding one another, climbing over obstacles, and interacting, in
order to reproduce similar characteristics virtually.
REFERENCES
[1] D. Vogt, S. Grehl, E. Berger, H. B. Amor, and B. Jung, “A data-driven
method for real-time character animation in human-agent interaction,” in
Intelligent Virtual Agents. Springer, 2014, pp. 463–476.
[2] T. Geijtenbeek and N. Pronost, “Interactive character animation using
simulated physics: A state-of-the-art review,” in Computer Graphics Forum,
vol. 31, no. 8. Wiley Online Library, 2012, pp. 2492–2515.
[3] E. N. Marieb and K. Hoehn, Human anatomy & physiology. Pearson
Education, 2007.
[4] B. Reinert, T. Ritschel, and H.-P. Seidel, “Homunculus warping: Conveying
importance using self-intersection-free non-homogeneous mesh deforma-
tion,” Computer Graphics Forum (Proc. Pacific Graphics 2012), vol. 5,
no. 31, 2012.
[5] T. Conde and D. Thalmann, “Learnable behavioural model for autonomous
virtual agents: low-level learning,” in Proceedings of the fifth international
joint conference on Autonomous agents and multiagent systems. ACM,
2006, pp. 89–96.
[6] F. Amadieu, C. Mariné, and C. Laimay, “The attention-guiding effect and
cognitive load in the comprehension of animations,” Computers in Human
Behavior, vol. 27, no. 1, 2011, pp. 36–40.
[7] E. Lach, “fact-animation framework for generation of virtual characters
behaviours,” in Information Technology, 2008. IT 2008. 1st International
Conference on. IEEE, 2008, pp. 1–4.
[8] J.-S. Monzani, A. Caicedo, and D. Thalmann, “Integrating behavioural
animation techniques,” in Computer Graphics Forum, vol. 20, no. 3. Wiley
Online Library, 2001, pp. 309–318.
[9] J. Funge, X. Tu, and D. Terzopoulos, “Cognitive modeling: knowledge,
reasoning and planning for intelligent characters,” in Proceedings of the
26th annual conference on Computer graphics and interactive techniques.
ACM Press/Addison-Wesley Publishing Co., 1999, pp. 29–38.
[10] S. Tak and H.-S. Ko, “A physically-based motion retargeting filter,” ACM
Transactions on Graphics (TOG), vol. 24, no. 1, 2005, pp. 98–117.
[11] S. Baek, S. Lee, and G. J. Kim, “Motion retargeting and evaluation for
vr-based training of free motions,” The Visual Computer, vol. 19, no. 4,
2003, pp. 222–242.
[12] J.-S. Monzani, P. Baerlocher, R. Boulic, and D. Thalmann, “Using an
intermediate skeleton and inverse kinematics for motion retargeting,” in
Computer Graphics Forum, vol. 19, no. 3. Wiley Online Library, 2000,
pp. 11–19.
[13] B. Kenwright, R. Davison, and G. Morgan, “Dynamic balancing and
walking for real-time 3d characters,” in Motion in Games. Springer, 2011,
pp. 63–73.
[14] C. Balaguer, A. Giménez, J. M. Pastor, V. Padron, and M. Abderrahim,
“A climbing autonomous robot for inspection applications in 3d complex
environments,” Robotica, vol. 18, no. 03, 2000, pp. 287–297.
[15] K. Grochow, S. L. Martin, A. Hertzmann, and Z. Popović, “Style-based
inverse kinematics,” in ACM Transactions on Graphics (TOG), vol. 23,
no. 3. ACM, 2004, pp. 522–531.
[16] D. Tolani, A. Goswami, and N. I. Badler, “Real-time inverse kinematics
techniques for anthropomorphic limbs,” Graphical models, vol. 62, no. 5,
2000, pp. 353–388.
[17] T. B. Moeslund, A. Hilton, and V. Krüger, “A survey of advances in vision-
based human motion capture and analysis,” Computer vision and image
understanding, vol. 104, no. 2, 2006, pp. 90–126.
[18] B. Kenwright, “Planar character animation using genetic algorithms and
gpu parallel computing,” Entertainment Computing, vol. 5, no. 4, 2014,
pp. 285–294.
[19] K. Sims, “Evolving virtual creatures,” in Proceedings of the 21st annual
conference on Computer graphics and interactive techniques. ACM, 1994,
pp. 15–22.
[20] J. T. Ngo and J. Marks, “Spacetime constraints revisited,” in Proceedings
of the 20th annual conference on Computer graphics and interactive
techniques. ACM, 1993, pp. 343–350.
[21] J. A. Feldman and D. H. Ballard, “Connectionist models and their proper-
ties,” Cognitive science, vol. 6, no. 3, 1982, pp. 205–254.
[22] M. Unuma, K. Anjyo, and R. Takeuchi, “Fourier principles for emotion-
based human figure animation,” in Proceedings of the 22nd annual confer-
ence on Computer graphics and interactive techniques. ACM, 1995, pp.
91–96.
[23] P. Faloutsos, M. Van de Panne, and D. Terzopoulos, “Composable con-
trollers for physics-based character animation,” in Proceedings of the 28th
annual conference on Computer graphics and interactive techniques. ACM,
2001, pp. 251–260.
[24] H. Noser, O. Renault, D. Thalmann, and N. M. Thalmann, “Navigation for
digital actors based on synthetic vision, memory, and learning,” Computers
and graphics, vol. 19, no. 1, 1995, pp. 7–19.
[25] H. H. Vilhjálmsson, “Autonomous communicative behaviors in avatars,”
Ph.D. dissertation, Massachusetts Institute of Technology, 1997.
[26] J. Cassell, H. H. Vilhjálmsson, and T. Bickmore, “Beat: the behavior
expression animation toolkit,” in Life-Like Characters. Springer, 2004,
pp. 163–185.
[27] X. Tu and D. Terzopoulos, “Artificial fishes: physics, locomotion, percep-
tion, behavior,” in Proceedings of the 21st annual conference on computer
graphics and interactive techniques. ACM, 1994, pp. 43–50.
[28] J. Cassell, C. Pelachaud, N. Badler, M. Steedman, B. Achorn, T. Becket,
B. Douville, S. Prevost, and M. Stone, “Animated conversation: rule-based
generation of facial expression, gesture & spoken intonation for multiple
conversational agents,” in Proceedings of the 21st annual conference on
Computer graphics and interactive techniques. ACM, 1994, pp. 413–420.
[29] X. Yao, “Evolving artificial neural networks,” Proceedings of the IEEE,
vol. 87, no. 9, 1999, pp. 1423–1447.
[30] H. A. ElMaraghy, “Kinematic and geometric modelling and animation of
robots,” in Proc. of Graphics Interface’86 Conference. ACM, 1986, pp.
15–19.
[31] C. W. Reynolds, “Computer animation with scripts and actors,” in ACM
SIGGRAPH Computer Graphics, vol. 16, no. 3. ACM, 1982, pp. 289–
296.
[32] N. Burtnyk and M. Wein, “Interactive skeleton techniques for enhancing
motion dynamics in key frame animation,” Communications of the ACM,
vol. 19, no. 10, 1976, pp. 564–569.
[33] C. Csuri, R. Hackathorn, R. Parent, W. Carlson, and M. Howard, “To-
wards an interactive high visual complexity animation system,” in ACM
SIGGRAPH Computer Graphics, vol. 13, no. 2. ACM, 1979, pp. 289–
299.
[34] R. A. Goldstein and R. Nagel, “3-d visual simulation,” Simulation, vol. 16,
no. 1, 1971, pp. 25–31.
[35] A. Bruderlin and T. W. Calvert, “Goal-directed, dynamic animation of
human walking,” ACM SIGGRAPH Computer Graphics, vol. 23, no. 3,
1989, pp. 233–242.
[36] I. Mlakar and M. Rojc, “Towards ecas animation of expressive complex
behaviour,” in Analysis of Verbal and Nonverbal Communication and
Enactment. The Processing Issues. Springer, 2011, pp. 185–198.
[37] M. Soliman and C. Guetl, “Implementing intelligent pedagogical agents in
virtual worlds: Tutoring natural science experiments in openwonderland,” in
Global Engineering Education Conference (EDUCON), 2013 IEEE. IEEE,
2013, pp. 782–789.
[38] J. Song, X.-w. Zheng, and G.-j. Zhang, “Method of generating intelligent
group animation by fusing motion capture data,” in Ubiquitous Computing
Application and Wireless Sensor. Springer, 2015, pp. 553–560.
[39] M. H. Lee, “Intrinsic activity: from motor babbling to play,” in Develop-
ment and Learning (ICDL), 2011 IEEE International Conference on, vol. 2.
IEEE, 2011, pp. 1–6.
[40] K. Gurney, N. Lepora, A. Shah, A. Koene, and P. Redgrave, “Action
discovery and intrinsic motivation: a biologically constrained formalisa-
tion,” in Intrinsically Motivated Learning in Natural and Artificial Systems.
Springer, 2013, pp. 151–181.
[41] W.-Y. Lo, C. Knaus, and M. Zwicker, “Learning motion controllers
with adaptive depth perception,” in Proceedings of the ACM SIG-
GRAPH/Eurographics Symposium on Computer Animation. Eurographics
Association, 2012, pp. 145–154.
[42] C. W. Reynolds, “Flocks, herds and schools: A distributed behavioral
model,” in ACM Siggraph Computer Graphics, vol. 21, no. 4. ACM,
1987, pp. 25–34.
[43] K. Erleben, J. Sporring, K. Henriksen, and H. Dohlmann, Physics-based
animation. Charles River Media Hingham, 2005.
[44] K. Perlin, “Real time responsive animation with personality,” Visualization
and Computer Graphics, IEEE Transactions on, vol. 1, no. 1, 1995, pp.
5–15.
[45] B. Kenwright, “Generating responsive life-like biped characters,” in Pro-
ceedings of the The third workshop on Procedural Content Generation in
Games. ACM, 2012, p. 1.
[46] T. Trappenberg, Fundamentals of computational neuroscience. OUP
Oxford, 2009.
[47] P. Dayan and L. Abbott, “Theoretical neuroscience: computational and
mathematical modeling of neural systems,” Journal of Cognitive Neuro-
science, vol. 15, no. 1, 2003, pp. 154–155.
[48] A. V. Samsonovich, “Toward a unified catalog of implemented cognitive
architectures.” BICA, vol. 221, 2010, pp. 195–244.
[49] R. Brette, M. Rudolph, T. Carnevale, M. Hines, D. Beeman, J. M. Bower,
M. Diesmann, A. Morrison, P. H. Goodman, F. C. Harris Jr et al., “Sim-
ulation of networks of spiking neurons: a review of tools and strategies,”
Journal of computational neuroscience, vol. 23, no. 3, 2007, pp. 349–398.
[50] T. R. Armstrong, “Training for the production of memorized movement
patterns,” Ph.D. dissertation, The University of Michigan, 1970.
[51] M. H. Bickhard, “Autonomy, function, and representation,” Communication
and Cognition-Artificial Intelligence, vol. 17, no. 3-4, 2000, pp. 111–131.
[52] D. Vernon, G. Metta, and G. Sandini, “A survey of artificial cognitive sys-
tems: Implications for the autonomous development of mental capabilities
in computational agents,” IEEE Transactions on Evolutionary Computation,
vol. 11, no. 2, 2007, p. 151.
[53] C. von Hofsten, On the development of perception and action. London:
Sage, 2003.
[54] M. Sagar, P. Robertson, D. Bullivant, O. Efimov, K. Jawed, R. Kalarot, and
T. Wu, “A visual computing framework for interactive neural system models
of embodied cognition and face to face social learning,” in Unconventional
Computation and Natural Computation. Springer, 2015, pp. 71–88.

More Related Content

PDF
Bioinspired Character Animations: A Mechanistic and Cognitive View
PPT
Pasquinelli 2005 lyon
PDF
Natural User Interfaces as a powerful tool for courseware design in Physical ...
PPSX
Learning theory and its application in the digital age
PPTX
Cognitive architectures
PDF
Where is my mind?
PPTX
Embodied cognition
PDF
Neural fields, a cognitive approach
Bioinspired Character Animations: A Mechanistic and Cognitive View
Pasquinelli 2005 lyon
Natural User Interfaces as a powerful tool for courseware design in Physical ...
Learning theory and its application in the digital age
Cognitive architectures
Where is my mind?
Embodied cognition
Neural fields, a cognitive approach

Viewers also liked (14)

PPTX
Mi familia
PDF
005 aorigemdosjejes
PDF
Computacion
PDF
Resultado 1-etapa-20131018055719
PDF
Korean-Swedish Executive Meeting Value-based Healthcare_Seoul 2016
PDF
 Langdon Winner, ¿Tienen política los artefactos?
DOCX
Configurando o brazil firewall
PPTX
Jci principales servicios
PPTX
Lineas de investigacion de lb. clinico/ TAREA Nº 1
PPTX
Cultura ciudadana Arnold Imitola
PDF
Ermelino 171
PPTX
Comportamiento Organizacional
DOCX
Ácidos Grasos Saturados e Insaturados
Mi familia
005 aorigemdosjejes
Computacion
Resultado 1-etapa-20131018055719
Korean-Swedish Executive Meeting Value-based Healthcare_Seoul 2016
 Langdon Winner, ¿Tienen política los artefactos?
Configurando o brazil firewall
Jci principales servicios
Lineas de investigacion de lb. clinico/ TAREA Nº 1
Cultura ciudadana Arnold Imitola
Ermelino 171
Comportamiento Organizacional
Ácidos Grasos Saturados e Insaturados
Ad

Similar to Bio-Inspired Animated Characters A Mechanistic & Cognitive View (20)

PDF
Physically plausible simulation for character animation
PPTX
Machine creativity TED Talk 2.0
PPTX
Machine creativity TED Talk 2.0
PDF
201500 Cognitive Informatics
PDF
Intelligent 3D Game Design based on Virtual Humanity
PPT
Ai
PPTX
[Pandora 22] ...Deliberately Unsupervised Playground - Milan Licina
PDF
New Research Articles 2019 September Issue International Journal of Artificia...
PDF
A Theory for Motion Primitive Adaptation
PDF
Neural Computing
PDF
Mc Lendon Using Eye Tracking To Investigate Important Cues For Representative...
PDF
Psychlab: A Psychology Laboratory for Deep Reinforcement Learning Agents
PDF
WHY ROBOTICS, AI, AL & QUANTUM COMPUTING
PPTX
Algorithms that mimic the human brain (1)
PPTX
Algorithms that mimic the human brain
PDF
Hpai class 12 - potpourri & perception - 032620 actual
PPTX
Basic concepts of soft computing soft computing.pptx
PDF
PPT
Why humanoidrobots
PDF
AI Fables, Facts and Futures: Threat, Promise or Saviour
Physically plausible simulation for character animation
Machine creativity TED Talk 2.0
Machine creativity TED Talk 2.0
201500 Cognitive Informatics
Intelligent 3D Game Design based on Virtual Humanity
Ai
[Pandora 22] ...Deliberately Unsupervised Playground - Milan Licina
New Research Articles 2019 September Issue International Journal of Artificia...
A Theory for Motion Primitive Adaptation
Neural Computing
Mc Lendon Using Eye Tracking To Investigate Important Cues For Representative...
Psychlab: A Psychology Laboratory for Deep Reinforcement Learning Agents
WHY ROBOTICS, AI, AL & QUANTUM COMPUTING
Algorithms that mimic the human brain (1)
Algorithms that mimic the human brain
Hpai class 12 - potpourri & perception - 032620 actual
Basic concepts of soft computing soft computing.pptx
Why humanoidrobots
AI Fables, Facts and Futures: Threat, Promise or Saviour
Ad

Recently uploaded (20)

PDF
Weekly quiz Compilation Jan -July 25.pdf
PDF
Complications of Minimal Access-Surgery.pdf
PDF
MBA _Common_ 2nd year Syllabus _2021-22_.pdf
PDF
احياء السادس العلمي - الفصل الثالث (التكاثر) منهج متميزين/كلية بغداد/موهوبين
PPTX
20th Century Theater, Methods, History.pptx
PPTX
Share_Module_2_Power_conflict_and_negotiation.pptx
PDF
Paper A Mock Exam 9_ Attempt review.pdf.
PDF
FORM 1 BIOLOGY MIND MAPS and their schemes
PDF
Environmental Education MCQ BD2EE - Share Source.pdf
PDF
BP 704 T. NOVEL DRUG DELIVERY SYSTEMS (UNIT 1)
PDF
Τίμαιος είναι φιλοσοφικός διάλογος του Πλάτωνα
PDF
1.3 FINAL REVISED K-10 PE and Health CG 2023 Grades 4-10 (1).pdf
PDF
A GUIDE TO GENETICS FOR UNDERGRADUATE MEDICAL STUDENTS
PDF
FOISHS ANNUAL IMPLEMENTATION PLAN 2025.pdf
PPTX
Virtual and Augmented Reality in Current Scenario
PDF
Chinmaya Tiranga quiz Grand Finale.pdf
PDF
OBE - B.A.(HON'S) IN INTERIOR ARCHITECTURE -Ar.MOHIUDDIN.pdf
PPTX
History, Philosophy and sociology of education (1).pptx
PPTX
Onco Emergencies - Spinal cord compression Superior vena cava syndrome Febr...
PPTX
B.Sc. DS Unit 2 Software Engineering.pptx
Weekly quiz Compilation Jan -July 25.pdf
Complications of Minimal Access-Surgery.pdf
MBA _Common_ 2nd year Syllabus _2021-22_.pdf
احياء السادس العلمي - الفصل الثالث (التكاثر) منهج متميزين/كلية بغداد/موهوبين
20th Century Theater, Methods, History.pptx
Share_Module_2_Power_conflict_and_negotiation.pptx
Paper A Mock Exam 9_ Attempt review.pdf.
FORM 1 BIOLOGY MIND MAPS and their schemes
Environmental Education MCQ BD2EE - Share Source.pdf
BP 704 T. NOVEL DRUG DELIVERY SYSTEMS (UNIT 1)
Τίμαιος είναι φιλοσοφικός διάλογος του Πλάτωνα
1.3 FINAL REVISED K-10 PE and Health CG 2023 Grades 4-10 (1).pdf
A GUIDE TO GENETICS FOR UNDERGRADUATE MEDICAL STUDENTS
FOISHS ANNUAL IMPLEMENTATION PLAN 2025.pdf
Virtual and Augmented Reality in Current Scenario
Chinmaya Tiranga quiz Grand Finale.pdf
OBE - B.A.(HON'S) IN INTERIOR ARCHITECTURE -Ar.MOHIUDDIN.pdf
History, Philosophy and sociology of education (1).pptx
Onco Emergencies - Spinal cord compression Superior vena cava syndrome Febr...
B.Sc. DS Unit 2 Software Engineering.pptx

Bio-Inspired Animated Characters A Mechanistic & Cognitive View

  • 1. Bio-Inspired Animated Characters A Mechanistic & Cognitive View Ben Kenwright School of Media Arts and Technology Southampton Solent University United Kingdom Abstract—Unlike traditional animation techniques, which attempt to copy human movement, ‘cognitive’ animation solutions mimic the brain’s approach to problem solving, i.e., a logical (intelligent) thinking structure. This procedural animation solution uses bio- inspired insights (modelling nature and the workings of the brain) to unveil a new generation of intelligent agents. As with any promising new approach, it raises hopes and questions; an extremely challenging task that offers a revolutionary solution, not just in animation but to a variety of fields, from intelligent robotics and physics to nanotechnology and electrical engineering. Questions, such as, how does the brain coordinate muscle signals? How does the brain know which body parts to move? With all these activities happening in our brain, we examine how our brain ‘sees’ our body and how it can affect our movements. Through this understanding of the human brain and the cognitive process, models can be created to mimic our abilities, such as, synthesizing actions that solve and react to unforeseen problems in a humanistic manner. We present an introduction to the concept of cognitive skills, as an aid in finding and designing a viable solution. This helps us address principal challenges, such as: How do characters perceive the outside world (input) and how does this input influence their motions? What is required to emulate adaptive learning skills as seen in higher life-forms (e.g., a child’s cognitive learning process)? How can we control and ‘direct’ these autonomous procedural character motions? Finally, drawing from experimentation and literature, we suggest hypotheses for solving these questions and more. In summary, this article analyses the biological and cognitive workings of the human mind, specifically motor skills. 
Reviewing cognitive psychology research related to movement in an attempt to produce more attentive behavioural characteristics. We conclude with a discussion on the significance of cognitive methods for creating virtual character animations, limitations and future applications. Keywords–animation, life-like, movement, cognitive, bio-mechanics, human, reactive, responsive, instinctual, learning, adapting, biological, optimisation, modular, scalable I. INTRODUCTION Movement is Life Animated films and video games are pushing the limits of what is possible. In today’s virtual environments, animations tends to be data- driven [1], [2]. It is common to see animated characters using pre- recorded motion capture data, but it is rare to see the animated characters driven using purely procedural solutions. With the dawn of Virtual Reality (VR) and Augmented Reality (AR) there is an ever growing need for content - to create indistinguishably realistic virtual worlds quickly and cost effectively. While ren- dered scenes may appear highly realistic, the ‘movement’ of ac- tively driven systems (e.g., biological creatures) is an open area of research [2]. Specifically, the question of how to ‘automatically’ create realistic actions that mimic the real-world. This includes, the ability to learn and adapt to unforeseen circumstances in a life- like manner. While we are able to ‘record’ and ‘playback’ highly realistic animations in virtual environments, they have limitations. The motions are constrained to specific skeleton topologies, not to mention, time consuming and challenging to create motions for non-humans (creatures and aliens). What is more, the recording of animations for dangerous situations is impossible using motion capture (so must be manually done using artistic intervention). Another key thing to remember, in dynamically changing envi- ronments (video games), pre-recorded animations are unable to adapt automatically to changing situations. 
This article attempts to solve these problems using biolog- ically inspired concepts. We investigate neurological, cognitive and behavioural methods. These methods provide inspirational solutions for creating adaptable models that synthesize life- like character characteristics. We examine how the human brain ‘thinks’ to accomplish tasks; and how the brain solves unforeseen problems. Exploiting the knowledge of how the brain functions, we formulate a system of conditions that attempt to replicate humanistic properties. We discusses novel approaches around solving these problems, by questioning, analysing and formulat- ing a system based on the human cognitive processes. Cognitive vs Machine Learning Essentially, cognitive com- puting has the ability to reason creatively about data, patterns, situations, and extended models (dynamically). However, most statistics-based machine learning algorithms cannot handle prob- lems much beyond what they have seen and learned (match). The machine learning algorithm has to be paired with cognitive capabilities to deal with truly ‘new situation’. Cognitive science therefore raises challenges for, and draws inspiration from, ma- chine learning; and insights about the human mind to help inspire new directions for animation. Hence, cognitive computing along with many other disciplines within the field of artificial intelli- gence are gaining popularity, especially in character systems, so in the not so distant future will have a colossal impact on the animation industry. Automation The ability to ‘automatically’ generate physically correct humanistic animations is revolutionary. Remove and add behavioural components (happy and sad). Create animations for different physical skeletons using a single set of training data. Per- form a diverse range of actions, for instance, getting-up, jumping, dancing, and walking. 
The ability to react to external interven- tions, while completing assigned task (i.e., combining motions with priorities). These problem-solving skills are highly valued. We want character agents to learn and adapt to the situation. This includes: • physically based models (e.g., rigid bodies) that are controlled through internal joint torques (muscle forces) • controllable adjustable joint signals to accomplish spe- cific actions (trained) • learn and retain knowledge from past experiences • embed personal traits (personality) Problems We want the method to be automatic (i.e., not depend too heavily on pre-canned libraries). Avoid simply playing back captured animations, but instead paramaterizing and re-using animations for different contexts (provide stylistic advice to the training algorithm). We want the solution to have the ability to adapt on-the-fly to unforeseen situations in a natural life-like
  • 2. manner. Having said that, we also want to accommodate a diverse range of complex motions, not just balanced walking, but getting- up, climbing, and dancing actions. With a physics-based model at the heart of the system (i.e., not just a kinematic skeleton but joint torques/muscles), we are able to ensure a physically correct solution. While a real-world human skeleton has a huge number of degrees-of-freedom, we accept that a lower fidelity model is able to represent the necessary visual characteristics (enable reasonable computational overheads). Of course, even a simplified model possesses a large amount of ambiguity with singularities. All things considered, we do not want to focus on the ‘actions’ - but embrace the autonomous emotion, behaviour and cognitive properties that sit on top of the motion (intelligent learning component). Figure 1. Homunculus Body Map - The somato-sensory homunculus is a kind of map of the body [3], [4]. The distorted model/view of a person (see Figure 2) represents the amount of sensory information a body part sends to the central nervous system (CNS) Geometric to Cognitive Synthesizing animated characters for virtual environments addresses the challenges of automating a variety of difficult development tasks. Early research combined geometric and inverse kinematic models to simplify key-framing. Physical models for animating particles, rigid bodies, deformable solids, fluids, and gases have offered the means to generate co- pious quantities of realistic motion through dynamic simulation. Bio-mechanical models employ simulated physics to automate the lifelike animation of animals with internal muscle actuators. In recent years, research in behavioral modeling has made progress towards ‘self-animating’ characters that react appropriately to perceived environmental stimuli [5], [6], [7], [8]. It has remained difficult, however, to instruct these autonomous characters so that they satisfy the programmer’s goals. 
As pointed out by Funge et al. [9], the computer graphics solution has evolved, from geometric solutions to more logical mathematical approaches, and ultimately cognitive models, as shown in Figure 3. A large amount of work has been done into motion re- targeting (i.e., taking existing pre-recorded animations and mod- ifying them to different situations) [10], [11], [12]. Targeted solutions that generate animations for specific situations, such as, locomotion [13] and climbing [14]. Kinematic models do not take into account the physical properties of the model, in addition, are only able to solve local problems (e.g., reading and stepping and not complex rhythmic actions) [15], [16], [17]. Procedural models may not converge to natural looking motions [18], [19], [20]. Cognitive models go beyond behavioral models, in that they govern what a character knows, how that knowledge is acquired, and how it can be used to plan actions. Cognitive models are applicable in instructing a new breed of highly autonomous, quasi-intelligent characters that are beginning to find use in in- teractive virtual environments. We decompose cognitive modeling into two related sub-tasks: (1) domain knowledge specification and (2) character instruction. This is reminiscent of the classic dictum from the field of artificial intelligence (AI) that tries to promote modularity of design by separating out knowledge from control. knowledge + instruction = intelligent behavior (1) Domain (knowledge) specification involves administering knowledge to the character about its world and how that world can change. Character instructions tell the character to try to behave in a certain way within its world in order to achieve specific goals. Like other advanced modeling tasks, both of these steps can be fraught with difficulty unless developers are given the right tools for the job. Components We wanted to avoid a ‘single’ amalgamated al- gorithm (e.g., Neural Networks or connectionist models [21]). 
Instead we investigate modular or dissectable learning models for adapting joint signals to accomplish tasks. For example, genetic algorithms [18], in combination with Fourier methods to subdivide complex actions into components (i.e., extract and identify behavioural characteristics [22]). Coupled with the fact that, joint motions are essentially signals, while the physics- based model ensures the generated motions are physically correct [23]. To say nothing of the advancements in parallel hardware - we envision the exploitation of massively parallel architecture constitutional. Figure 2. Homunculus Body Map - Reinert et al [4], presented a graphical paper on mesh deformation to visualize the somato-sensory information of the brain-body. The figure conveys the importance of the neuronal homunculus - i.e., the human body part size relation to neural density and the brain. Contribution The novel contribution of this technical article is the amalgamation of numerous methods, for instance, bio- mechanics, psychology, robotics, and computer animation, to address the question of ‘how can we make virtual characters solve unforeseen problems automatically and in a realistic manner?’ (i.e., mimic the human cognitive learning process).
Figure 3. Timeline - Computer Graphics Cognitive Development Model (Geometric, Kinematic, Physical, Behavioural, and Cognitive) [9]. Simplified illustration of milestones over the years that have contributed novel animation solutions - emphasises the gradual transition from kinematic and physical techniques to intelligent behavioural models. [A] [24]; [B] [20]; [C] [19]; [D] [25]; [E] [26]; [F] [27]; [G] [28]; [H] [18]; [I] [29]; [J] [30]; [K] [31]; [L] [32]; [M] [33]; [N] [34]; [O] [35]; [P] [8]; [Q] [36]; [R] [7]; [S] [5]; [T] [6]; [U] [37]; [V] [38].

II. BACKGROUND & RELATED WORK

Literature Gap The research in this article brings together numerous diverse concepts; while each is well studied in its individual field, taken as a whole and applied to virtual character animation, there is a serious gap in the referential literature. Hence, we begin by exploring branches of research from cognitive psychology and bio-mechanics before taking them across and combining them with computer animation and robotics concepts.

Autonomous Animation Solutions Formal approaches to animation, such as, genetic algorithms [18], [19], [20], may not converge to natural-looking motions without additional work, such as, artist intervention or constrained/complex fitness functions. This limits and constrains the ‘automation’ factor. We see autonomy as the emergence of salient, novel action discovery through self-organisation of high-level goal-directed orders. The behavioural aspect emerges from the physical (or virtual) constraints and fundamental low-level mechanisms. We adapt bodily motor controls (joint signals) from randomness to purposeful actions based on cognitive development (Lee [39] referred to this process as evolving from babbling to play). Interestingly, this intrinsic method of behavioural learning has also been demonstrated in biological models (known as action discovery) [40].
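The ‘babbling to play’ progression can be illustrated with a toy hill-climbing loop (a (1+1) evolution strategy, standing in for the genetic algorithms cited above): random motor mutations are retained only when they improve a task fitness. The fitness here is a deliberately crude surrogate for a physics simulation, and all names are hypothetical.

```python
import random

def fitness(params, target_distance=1.0):
    # Hypothetical low-level goal: a sinusoidal joint signal with
    # amplitude a and frequency f should carry the centre of mass a
    # target distance per cycle. The 'stride' below is a toy surrogate
    # for running a physics simulation; higher fitness is better.
    a, f, p = params
    stride = abs(a) * max(f, 0.0)
    return -abs(stride - target_distance)

def babble_to_purpose(generations=200, seed=1):
    """(1+1) evolution strategy: start from random 'babbling' and keep
    only those mutations that make the action more purposeful."""
    rng = random.Random(seed)
    best = [rng.uniform(-1, 1) for _ in range(3)]  # random babbling start
    best_f = first_f = fitness(best)
    for _ in range(generations):
        child = [x + rng.gauss(0, 0.1) for x in best]
        f = fitness(child)
        if f > best_f:                 # retain purposeful mutations only
            best, best_f = child, f
    return best, best_f, first_f
```

The retained-mutation rule is the essential point: behaviour is not scripted, it self-organises from random exploration under a goal-directed selection pressure.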
Navigation/Controllers/Mechanical Synthesizing human movement that mimics real-world behaviours ‘automatically’ is a challenging and important topic. Typically, reactive approaches for navigation and pursuit [24], [41], [42], [27] may not readily accommodate task objectives, sensing costs, and cognitive principles. A cognitive solution adapts and learns (finds answers to unforeseen problems).

Expression/Emotion Humans exhibit a wide variety of expressive actions, which reflect their personalities, emotions, and communicative needs [25], [26], [28]. These variations often influence the performance of simpler gestural or facial movements.

Components Essential components:
• Fourier - subdivide actions into components, extract and identify behavioural characteristics [22]
• Heuristic Optimisation [18] - adapting non-linear signals (with purpose)
• Physics-Based [43], [23] - torques and forces to control the model
• Parallel Architecture - exploit massively parallel processor architectures, such as, the graphical processing unit (GPU)
• Randomness - inject awareness and randomness (blood flow, respiratory signals, background noise) [44], [45]

Brain Body Map As shown in Figure 1, we are able to map the mind's awareness of different body parts. This is known as the homunculus body map. So why is it important for movement? It helps in understanding the neural mechanisms of human sensorimotor coordination and their cognitive connection. While we are complex biological organisms, we need feedback and information (input) to be able to move and thus live (i.e., movement is life). The motor part of the brain relies on information from the sensory systems. The control signals are dynamically changing depending on our state. Simply put, the better the central representation, the better the motor output will be, and the more life-like and realistic the final animations will be. Our motor systems need to know the state of our body.
If the situation is not known or not very clear, the movements will not be good, because the motor systems will be ‘afraid’ to go all out. It is very similar to driving a car on an unknown road in misty conditions with only an old, worn, worm-eaten map. We drive slowly and tense, to avoid hitting something or getting off the road. This is safety behaviour: safe, but taxing on the system.

Cognitive Science The cognitive science of motion is an interdisciplinary scientific study of the mind and its processes. We examine what cognitive motion is, what it does, and how it works. This includes research into intelligence and behaviour, especially focusing on how information is represented, processed,
Figure 4. Brain and Actions - The phases (left-to-right) the human brain goes through - from thinking about doing a task to accomplishing it (e.g., walking to the kitchen to get a drink from the cupboard).

and transformed (in faculties such as perception, language, memory, attention, reasoning, and emotion) within nervous systems (human or other animal) and machines (e.g., computers). Cognitive motion science consists of multiple research disciplines, including robotics, psychology, artificial intelligence, philosophy, neuroscience, linguistics, and anthropology. The subject spans multiple levels of analysis, from low-level learning and decision mechanisms to high-level logic and planning; from neural circuitry to modular brain organization. However, the fundamental concept of cognitive motion is the understanding of instinctual thinking in terms of the structures of the mind and the computational procedures that operate on those structures. Importantly, cognitive solutions are not only adaptive but also anticipatory and prospective; that is, they need to have (by virtue of their phylogeny), or develop (by virtue of their ontogeny), some mechanism to rehearse hypothetical scenarios.

Neural Networks and Cognitive Simulators Computational neuroscience [46], [29], [47] provides biologically inspired neural models for simulating information processing, cognition, and behaviour modelling. The majority of the research has focused on modelling ‘isolated components’. Cognitive architectures [48] use biologically based models for goal-driven learning and behaviours. Publicly available neural network simulators exist [49].

Motor Skills Our brain sees the world in ‘maps’. The maps are distorted, depending on how we use each sense, but they are still maps. Almost every sense has a map. Most senses have multiple maps. We have a ‘tonotopic’ map - a map of sound frequency, from high pitched to low pitched - which is how our brain processes sound.
We have a ‘retinotopic’ map, which is a reproduction of what we are seeing, and it is how the brain processes sight. Our brain loves maps. Most importantly, we have maps of our muscles. The mapping from sensory information to motor movement is shown in Figure 1. For muscle movements, the finer and more detailed the movements are, the more brain space those muscles have. Hence, we can address which muscles take priority and under what circumstances (i.e., sensory input). This also opens the door to lots of interesting and exciting questions, such as, what happens to the maps if we lose a body part, such as, a finger.

Psychology Aspect A number of interesting facts are hidden in the psychology aspect of movement that are often taken for granted or overlooked. Incorporating them in a dynamic system allows us to solve a number of problems - for example, when we observe movements which are slightly different from each other but possess similar characteristics. The work by Armstrong [50] showed that when a movement sequence is sped up as a unit, the overall relative movement or ‘phasing’ remains constant. This led to the discovery of relative forces, or the relationship among the forces in the muscles participating in the action.

How the Brain Controls Muscles Let us pretend that we want to go to the kitchen, because we are hungry. First, an area in our brain called the parietal lobe comes up with lots of possible plans. We could get to the kitchen by skipping, sprinting, uncoordinated somersaulting, or walking. The parietal lobe sends these plans to another brain area called the basal ganglia. The basal ganglia picks ‘walking’ as the best plan (with uncoordinated somersaulting a close second option). It tells the parietal lobe the plan. The parietal lobe confirms it, and sends the ‘walk to kitchen’ plan down the spinal cord and to the muscles. The muscles move.
As they move, our cerebellum kicks into high gear, making sure we turn right before we crash into the kitchen counter, and that we jump over the dog. Part of the cerebellum's job is to make quick changes to muscle movements while they are happening (see Figure 4).

Visualizing the Solution (Offline) We visualize a goal. In our mind, over and over and over again. We picture the movements. We see ourselves catching that ball. Dancing that toe touch. Swimming that breaststroke. We watch it in the movie of our mind whenever we can. Scrutinize it. Is our wrist turning properly? Is our kick high enough? If not, we change the picture. See ourselves doing the movement perfectly. As far as our parietal lobe and basal ganglia are concerned, this is exactly the same as doing the movement. When we visualize the movement, we activate all those planning pathways. Those neurons fire, over and over again. Which is what needs to happen for our synapses to strengthen. In other words, by picturing the movements, we are actually learning them. This makes it easier for the parietal lobe to send the right message to the muscles. So when we actually try to perform a movement, we will get better, faster. We will need less physical practice to be good at sports. This does not work for general fitness (i.e., increased strength). We still need to train our muscles, heart, and lungs to become strong. However, it is good for skilled movements. Basketball lay-ups. Gymnastics routines. For improved technique, visualization works. We train our brain, which makes it easier to control our muscles.

What does this have to do with character simulations? We are able to mimic the ‘visualization’ approach by having our system constantly run simulations in the background. Exploit all that parallel processing power. Run large numbers of simulations one or two seconds in advance and see how the result plays out.
If the character's foot is a few centimetres forward, or if we use more torque on the knee muscle, how does this compare with the ideal animation we are aiming for? As we find solutions, we store them and improve upon them each time a similar situation arises.
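A minimal sketch of this ‘background visualization’ idea, assuming a toy one-joint forward simulation in place of a full physics engine (all names are our own): candidate motor signals are rolled out a short horizon ahead in parallel, scored against the ideal trajectory, and the winner is stored for reuse the next time a similar situation arises.

```python
from concurrent.futures import ThreadPoolExecutor

def rollout(torque, horizon=20, dt=0.05):
    """Toy forward simulation: integrate a 1-DoF joint of unit inertia
    under a constant torque and return its trajectory of angles."""
    angle, velocity, path = 0.0, 0.0, []
    for _ in range(horizon):
        velocity += torque * dt
        angle += velocity * dt
        path.append(angle)
    return path

def score(path, ideal):
    """Negative summed deviation from the ideal (reference) trajectory."""
    return -sum(abs(a - b) for a, b in zip(path, ideal))

def visualize_ahead(candidates, ideal, memory):
    """'Mental rehearsal': evaluate many candidate motor signals a
    second or two ahead (in parallel), keep the best, remember it."""
    with ThreadPoolExecutor() as pool:
        paths = list(pool.map(rollout, candidates))
    best = max(zip(candidates, paths), key=lambda cp: score(cp[1], ideal))
    memory.append(best[0])  # store the solution for future reuse
    return best[0]
```

On real hardware the thread pool would be replaced by GPU batches, but the structure — rehearse many futures, commit to one, remember it — is the same.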
Figure 5. Overview - High-level view of interconnected components and their justifications. (a) We have a current (starting) state and a final state. The unknown middle (transitioning) states are what we are searching for. The transition state is a dynamic problem that is specific to the situation; for instance, the terrain may vary (slopes or crawling under obstacles). (b) A heuristic model would be able to train a set of trigonometric functions (e.g., a Fourier series) to create rhythmic motions that are able to accomplish the task, the low-level task (fitness function) being a simple ‘overall centre of mass trajectory’. (c) With (b) on its own, the solution is plagued with issues, such as, how to steer or control the type of motion and whether the final motion is ‘humanistic’ or ‘life-like’. Hence, we have a ‘pre-defined’ library of motions that are chosen based on the type of animation we are leaning towards (standard walk or hopping). The information from the animation is fed back into the fitness function in (b), providing a multi-objective problem: centre of mass, end-effectors, and frequency components for ‘style’. (d) The solution from each problem is ‘stored’ in a sub-bank of the animation library and used for future problems. This builds upon previous knowledge to help solve new problems faster in a coherent manner (e.g., previous experiences will cause different characters to create slightly different solutions over time).

Physically Correct Model Our solution controls a physics-based model using joint torques, as in the real world. This mimics the real world more closely: not only do we require the model to move in a realistic manner, but it also has to control joint muscles in sufficient ratios to achieve the final motion (e.g., balance control). Adjusting the physical model, for instance, muscle strength or leg lengths, allows the model to retrain to achieve the action.
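The multi-objective fitness described in Figure 5(b)-(c) might be combined as a weighted sum. The sketch below is illustrative only — the weights and signal representations are our assumptions, not tuned values from the article:

```python
def multi_objective_fitness(com_path, ref_com,
                            effectors, ref_effectors,
                            freq_amps, ref_freq_amps,
                            weights=(1.0, 0.5, 0.25)):
    """Weighted multi-objective cost combining (a) centre-of-mass
    trajectory tracking, (b) end-effector placement, and (c) Fourier
    amplitude similarity as a proxy for motion 'style'.
    Lower is better; the weights are illustrative placeholders."""
    w_com, w_eff, w_style = weights
    com_err = sum(abs(a - b) for a, b in zip(com_path, ref_com))
    eff_err = sum(abs(a - b) for a, b in zip(effectors, ref_effectors))
    style_err = sum(abs(a - b) for a, b in zip(freq_amps, ref_freq_amps))
    return w_com * com_err + w_eff * eff_err + w_style * style_err
```

Feeding the reference library's centre-of-mass, end-effector, and frequency terms into one scalar cost is what lets a single heuristic optimiser steer the motion towards both the task and the chosen ‘style’.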
(Get Up) Rise Animations Animation is a diverse and complex area, so rather than trying to create solutions for every possible situation, we focus on a particular set of actions, that is, rising movements. Rise animations require a suitably diverse range of motor skills. We formulate a set of tasks to evaluate our algorithm, such as, get up from the front, get up from the back, get up on uneven ground, and so on. The model also encapsulates underlying properties, such as, visual attention, expressive qualities (tired, unsure, eager), and human expressiveness. We consider a number of factors, such as, inner and outer information, emotion, personality, and primary and secondary goals.

III. OVERVIEW

High Level Elements The system is driven by four key sources of information:
1) the internal information (e.g., logistics of the brain, experience, mood)
2) the aim or action
3) external input (e.g., environmental, contacts, comfort, lighting)
4) memory and information retrieval (e.g., parallel models and associative memory)

Motion Capture Data (Control) We have a library of actions as reference material for look-up and comparison. For some form of ‘control’ and ‘input’ to steer the characters to perform actions in a particular way (e.g., instead of the artist creating a large look-up array of animations for every single possible situation), we provide fundamental poses and simple pre-recorded animations to ‘guide’ the learning algorithm. Search models are able to explore their diverse search-space to reach the goal (e.g., heuristically adjusting joint muscles); however, a reference ‘library’ allows us to steer the solution towards what is ‘natural-looking’, as there are a wide number of ways of accomplishing a task - but what is ‘normal’, and what is ‘strange’ and uncomfortable? The key points we concentrate on are:
1) the animations require basic empirical information (e.g., reference key-poses) from human movement and cognitive properties;
2) the movement should not simply replay pre-recorded motions, but adapt and modify them to different contexts;
3) the solution must react to disturbances and changes in the world while completing the given task;
4) the senses provide unique pieces of information, which should be combined with internal personality and emotion mechanisms to create the desired actions and/or reactions.

Blending/Adapting Animation Libraries During motor skill acquisition, the brain learns to map between ‘intended’ limb motion and the requisite muscular forces. We propose that regions (i.e., particular body segments) in the animation library are blended together to find a solution that is aesthetically pleasing (i.e., based upon pre-recorded motions instead of random searching).

Virtual Infant (or Baby) Imagine a baby with no knowledge or understanding.
As we explained, this is a bottom-up view: starting with nothing and educating the system to mimic humanistic (organic) qualities, using learning algorithms to tune skeletal motor signals to accomplish high-level tasks. As with a child, a ‘trial-and-error’ approach to learning - exploring what is possible and impossible - eventually reaches a solution. This requires continuously integrating corrective guidance (as with a child - without knowing what is right and wrong, the child will never learn). This guidance comes through fitness criteria and example motion clips (as children do - see and copy - or try to), performing multiple training exercises over and over again to learn skills, and having the algorithm actively improve (e.g., proprioception - how the brain understands the body). As we learn to perform motions, there are thousands of small adjustments that our body as a whole makes every millisecond to ensure an optimal outcome (quickest, most energy-efficient, closest to the ideal/style), constantly monitoring the body by sending and receiving sensory information (e.g., to and from every joint, limb, and contact). Over time, the experience strengthens the model's ability to accomplish tasks quicker and more efficiently.

Stability Autonomous systems have ‘stability’ issues (i.e., they are far from equilibrium stability) [51]. Due to the dynamic nature of a character's actions, they are dependent on their environment (external factors), requiring interaction; these are open processes (exhibiting closed self-organization). However, we can measure stability in relation to reference poses, energy, and balance to draw conclusions on the effectiveness of the learned solution.

Memory The character learns through explorative searching (i.e., with quantitative measures for comfort, security, and satisfaction). While a character may find an ‘optimal’ solution that meets the specified criteria, it will continue to expand its memory repertoire of actions.
This is a powerful component, increasing the efficiency in achieving a goal (e.g., the development of walking and the retention of balanced motion in different circumstances becomes more effective). The view that exploration and retention (memory) are crucial to ontogenetic development is supported by research findings in developmental psychology [52]. Hofsten [53] explains that it is not necessarily success at achieving task-specific goals that drives development, but the discovery of new ways of doing something (through exploration). This forms a solution that builds upon ‘prior knowledge’, with an increased reliance on machine learning and statistical evaluation (i.e., for tuning the system parameters), leading to a model that constantly acquires new knowledge for both current and future tasks.

IV. COMPLEXITY

Future directions include: experimenting with optimisation algorithms (i.e., different fitness criteria for specific situations); highly dynamic animations (jumping or flying through the air); close-proximity simulations (dancing, wrestling, getting in/out of a vehicle); exploring ‘beyond-human’ but creative creatures (multiple legs and arms); and, instead of aesthetic qualities, investigating ‘interesting’ behaviours. As the system and training evolve, they could use a ‘control language’ to give orders - not just limited to generic motions (i.e., walking and jumping), but with the ability to learn and search for solutions (whatever the method). We introduce risk, harm, and comfort to ‘limit’ the solutions to be more ‘human’ and organic. We avoid unsupervised learning, since it leads to random, unnatural, and uncontrollable motions; simple examples (i.e., training data) steer the learning. We gather knowledge and extend the memory of experiences to help solve future problems (learn from past problems). This method is very promising for building organic real-life systems (handling unpredictable situations in a logical, natural manner). The technique is scalable and generalizes across topologies.
Learned solutions can be shared and transferred between characters (i.e., accelerated learning through sharing).

Figure 6. Complexity - As animation and behavioural character models become increasingly complex, it becomes more challenging and time consuming to customize and create solutions for specific environments/situations.

A physically correct, self-adapting, learning animation system to mimic human cognitive mechanics is a complex task
that embodies a wide range of biologically based concepts. We take a bottom-up approach (i.e., starting with nothing), which forms a foundation to which greater detail can be added. As the model grows in complexity and detail, more expressive and autonomous animations appear, leading on to collaborative agents, i.e., social learning and interaction (behaviour in groups). The enormous complexity of the human brain and its ability to problem-solve should not be underestimated; however, through simple approximations we are able to develop autonomous animation models that embody and possess humanistic qualities, such as, cognitive and behavioural learning abilities. We tackle a complex problem - our movement allows us to express a vast array of behaviours in addition to solving physical problems, such as, balance and locomotion. We have only scratched the surface of what is possible - constructing and explaining a simple solution (for a relatively complex neuro-behavioural model) - to investigate a modular, extendible framework for synthesizing human movement (i.e., mapping functionality, problem solving, mapping of brain to anatomy, and learning/experience).

Body Language The way we ‘move’ says a lot. How we stand and how we walk reveals ‘emotional’ details, and we humans are very good at spotting these underlying characteristics. These fundamental physiological motions are important in animation if we want to synthesize life-like characters. While these subtle underlying motions are aesthetic (i.e., sitting on top of the physical action or goal), they are nonetheless equally important. Emotional synthesis is often classified as a low-level biological process [54]: chemical reactions in the brain for stress and pain correlate with and modulate various behaviours (including motor control), with a vast array of effects - influencing sensitivity, mood, and emotional responses.
We have taken the view that the motion and learning are driven by a high-level cognitive model (avoiding the various underlying physiological and chemical parameters).

Input (Sensory Data) The brain has a vast array of sensory data, such as, the eyes, sound, temperature, smell, and feelings, that feed in to make the final decision. Technically, our simple assumption is analogous to a blind person taking lots of short exploratory motions to discover how to accomplish the task. We reduce the skeleton complexity compared to a full human model (numerical complexity), and use physical information from the environment, like contacts, centre of mass, and end-effector locations. The output is motor control signals - with behavioural selection, an example learning motion library, emotion, and fitness evaluation.

V. CONCLUSION

We have specified a set of simple constraints to steer and control the animation (e.g., get-up poses). We developed a model based on biology, cognitive psychology, and adaptive heuristics to create animations to control a physics-based skeleton that adapts and re-trains parameters to meet changing situations (e.g., different physical and environmental information). We inject personality and behavioural components to create animations that capture life-like qualities (e.g., mood, tiredness, and fear).

This article raises several possibilities for future work. It would be valuable to test specific hypotheses and assumptions further by constructing more focused and rigorous experiments. However, these hypotheses are hard to state precisely - since we are trying to model humanistic cognitive abilities - and thus we have mixed feelings about them. A practical approach might be to directly compare and contrast real-world and synthesized situations: for instance, an experiment with an actor dealing with difficult situations, such as, stepping over objects and walking under bridges.
Younger children approach the problem in a different way - similar to our computer agent - learning through trial and error, behaving less mechanically and more consciously. Further, communication with a director (e.g., example animations and poses for control) might lead to more formal languages of commands. This would help us learn precisely what sorts of commands are needed and when they should be issued. Finally, we could go further by developing richer cognitive models and control languages for describing motion and style, to solve questions not even imagined.

We have taken a simplified view of cognitive modelling. We will continue to see cognitive architectures develop over the coming years that are capable of adapting and self-modifying, both in terms of parameter adjustment and phylogenetic skills. This will be through learning and, more importantly, through the modification of the very structure and organization of the system itself (memory and algorithm), so that it is capable of altering its system dynamics based on experience, to expand its repertoire of actions, and thereby adapt to new circumstances [52]. A variety of learning paradigms will need to be developed to accomplish these goals, including, but not necessarily limited to, unsupervised, reinforcement, and supervised learning.

Learning through Watching Providing the ability to translate 2D video images into 3D animation sequences would allow cognitive learning algorithms the ability to constantly ‘watch’ and learn from people - watching people in the street walking and avoiding one another, climbing over obstacles, and interacting - to reproduce similar characteristics virtually.

REFERENCES

[1] D. Vogt, S. Grehl, E. Berger, H. B. Amor, and B. Jung, “A data-driven method for real-time character animation in human-agent interaction,” in Intelligent Virtual Agents. Springer, 2014, pp. 463–476.
[2] T. Geijtenbeek and N. Pronost, “Interactive character animation using simulated physics: A state-of-the-art review,” in Computer Graphics Forum, vol. 31, no. 8. Wiley Online Library, 2012, pp. 2492–2515.
[3] E. N. Marieb and K. Hoehn, Human anatomy & physiology. Pearson Education, 2007.
[4] B. Reinert, T. Ritschel, and H.-P. Seidel, “Homunculus warping: Conveying importance using self-intersection-free non-homogeneous mesh deformation,” Computer Graphics Forum (Proc. Pacific Graphics 2012), vol. 5, no. 31, 2012.
[5] T. Conde and D. Thalmann, “Learnable behavioural model for autonomous virtual agents: low-level learning,” in Proceedings of the fifth international joint conference on Autonomous agents and multiagent systems. ACM, 2006, pp. 89–96.
[6] F. Amadieu, C. Mariné, and C. Laimay, “The attention-guiding effect and cognitive load in the comprehension of animations,” Computers in Human Behavior, vol. 27, no. 1, 2011, pp. 36–40.
[7] E. Lach, “fact-animation framework for generation of virtual characters behaviours,” in Information Technology, 2008. IT 2008. 1st International Conference on. IEEE, 2008, pp. 1–4.
[8] J.-S. Monzani, A. Caicedo, and D. Thalmann, “Integrating behavioural animation techniques,” in Computer Graphics Forum, vol. 20, no. 3. Wiley Online Library, 2001, pp. 309–318.
[9] J. Funge, X. Tu, and D. Terzopoulos, “Cognitive modeling: knowledge, reasoning and planning for intelligent characters,” in Proceedings of the 26th annual conference on Computer graphics and interactive techniques. ACM Press/Addison-Wesley Publishing Co., 1999, pp. 29–38.
[10] S. Tak and H.-S. Ko, “A physically-based motion retargeting filter,” ACM Transactions on Graphics (TOG), vol. 24, no. 1, 2005, pp. 98–117.
[11] S. Baek, S. Lee, and G. J. Kim, “Motion retargeting and evaluation for vr-based training of free motions,” The Visual Computer, vol. 19, no. 4, 2003, pp. 222–242.
[12] J.-S. Monzani, P. Baerlocher, R. Boulic, and D. Thalmann, “Using an intermediate skeleton and inverse kinematics for motion retargeting,” in Computer Graphics Forum, vol. 19, no. 3. Wiley Online Library, 2000, pp. 11–19.
[13] B. Kenwright, R. Davison, and G. Morgan, “Dynamic balancing and walking for real-time 3d characters,” in Motion in Games. Springer, 2011, pp. 63–73.
[14] C. Balaguer, A. Giménez, J. M. Pastor, V. Padron, and M. Abderrahim, “A climbing autonomous robot for inspection applications in 3d complex environments,” Robotica, vol. 18, no. 03, 2000, pp. 287–297.
[15] K. Grochow, S. L. Martin, A. Hertzmann, and Z. Popović, “Style-based inverse kinematics,” in ACM Transactions on Graphics (TOG), vol. 23, no. 3. ACM, 2004, pp. 522–531.
[16] D. Tolani, A. Goswami, and N. I. Badler, “Real-time inverse kinematics techniques for anthropomorphic limbs,” Graphical Models, vol. 62, no. 5, 2000, pp. 353–388.
[17] T. B. Moeslund, A. Hilton, and V. Krüger, “A survey of advances in vision-based human motion capture and analysis,” Computer Vision and Image Understanding, vol. 104, no. 2, 2006, pp. 90–126.
[18] B. Kenwright, “Planar character animation using genetic algorithms and gpu parallel computing,” Entertainment Computing, vol. 5, no. 4, 2014, pp. 285–294.
[19] K. Sims, “Evolving virtual creatures,” in Proceedings of the 21st annual conference on Computer graphics and interactive techniques. ACM, 1994, pp. 15–22.
[20] J. T. Ngo and J. Marks, “Spacetime constraints revisited,” in Proceedings of the 20th annual conference on Computer graphics and interactive techniques. ACM, 1993, pp. 343–350.
[21] J. A. Feldman and D. H. Ballard, “Connectionist models and their properties,” Cognitive Science, vol. 6, no. 3, 1982, pp. 205–254.
[22] M. Unuma, K. Anjyo, and R. Takeuchi, “Fourier principles for emotion-based human figure animation,” in Proceedings of the 22nd annual conference on Computer graphics and interactive techniques. ACM, 1995, pp. 91–96.
[23] P. Faloutsos, M. Van de Panne, and D. Terzopoulos, “Composable controllers for physics-based character animation,” in Proceedings of the 28th annual conference on Computer graphics and interactive techniques. ACM, 2001, pp. 251–260.
[24] H. Noser, O. Renault, D. Thalmann, and N. M. Thalmann, “Navigation for digital actors based on synthetic vision, memory, and learning,” Computers and Graphics, vol. 19, no. 1, 1995, pp. 7–19.
[25] H. H. Vilhjálmsson, “Autonomous communicative behaviors in avatars,” Ph.D. dissertation, Massachusetts Institute of Technology, 1997.
[26] J. Cassell, H. H. Vilhjálmsson, and T. Bickmore, “Beat: the behavior expression animation toolkit,” in Life-Like Characters. Springer, 2004, pp. 163–185.
[27] X. Tu and D. Terzopoulos, “Artificial fishes: physics, locomotion, perception, behavior,” in Proceedings of the 21st annual conference on Computer graphics and interactive techniques. ACM, 1994, pp. 43–50.
[28] J. Cassell, C. Pelachaud, N. Badler, M. Steedman, B. Achorn, T. Becket, B. Douville, S. Prevost, and M. Stone, “Animated conversation: rule-based generation of facial expression, gesture & spoken intonation for multiple conversational agents,” in Proceedings of the 21st annual conference on Computer graphics and interactive techniques. ACM, 1994, pp. 413–420.
[29] X. Yao, “Evolving artificial neural networks,” Proceedings of the IEEE, vol. 87, no. 9, 1999, pp. 1423–1447.
[30] H. A. ElMaraghy, “Kinematic and geometric modelling and animation of robots,” in Proc. of Graphics Interface ’86 Conference. ACM, 1986, pp. 15–19.
[31] C. W. Reynolds, “Computer animation with scripts and actors,” in ACM SIGGRAPH Computer Graphics, vol. 16, no. 3. ACM, 1982, pp. 289–296.
[32] N. Burtnyk and M. Wein, “Interactive skeleton techniques for enhancing motion dynamics in key frame animation,” Communications of the ACM, vol. 19, no. 10, 1976, pp. 564–569.
[33] C. Csuri, R. Hackathorn, R. Parent, W. Carlson, and M. Howard, “Towards an interactive high visual complexity animation system,” in ACM SIGGRAPH Computer Graphics, vol. 13, no. 2. ACM, 1979, pp. 289–299.
[34] R. A. Goldstein and R. Nagel, “3-d visual simulation,” Simulation, vol. 16, no. 1, 1971, pp. 25–31.
[35] A. Bruderlin and T. W. Calvert, “Goal-directed, dynamic animation of human walking,” ACM SIGGRAPH Computer Graphics, vol. 23, no. 3, 1989, pp. 233–242.
[36] I. Mlakar and M. Rojc, “Towards ecas animation of expressive complex behaviour,” in Analysis of Verbal and Nonverbal Communication and Enactment. The Processing Issues. Springer, 2011, pp. 185–198.
[37] M. Soliman and C. Guetl, “Implementing intelligent pedagogical agents in virtual worlds: Tutoring natural science experiments in openwonderland,” in Global Engineering Education Conference (EDUCON), 2013 IEEE. IEEE, 2013, pp. 782–789.
[38] J. Song, X.-w. Zheng, and G.-j. Zhang, “Method of generating intelligent group animation by fusing motion capture data,” in Ubiquitous Computing Application and Wireless Sensor. Springer, 2015, pp. 553–560.
[39] M. H. Lee, “Intrinsic activity: from motor babbling to play,” in Development and Learning (ICDL), 2011 IEEE International Conference on, vol. 2. IEEE, 2011, pp. 1–6.
[40] K. Gurney, N. Lepora, A. Shah, A. Koene, and P. Redgrave, “Action discovery and intrinsic motivation: a biologically constrained formalisation,” in Intrinsically Motivated Learning in Natural and Artificial Systems. Springer, 2013, pp. 151–181.
[41] W.-Y. Lo, C. Knaus, and M. Zwicker, “Learning motion controllers with adaptive depth perception,” in Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation. Eurographics Association, 2012, pp. 145–154.
[42] C. W. Reynolds, “Flocks, herds and schools: A distributed behavioral model,” in ACM SIGGRAPH Computer Graphics, vol. 21, no. 4. ACM, 1987, pp. 25–34.
[43] K. Erleben, J. Sporring, K. Henriksen, and H. Dohlmann, Physics-based animation. Charles River Media, Hingham, 2005.
[44] K. Perlin, “Real time responsive animation with personality,” Visualization and Computer Graphics, IEEE Transactions on, vol. 1, no. 1, 1995, pp. 5–15.
[45] B. Kenwright, “Generating responsive life-like biped characters,” in Proceedings of the Third Workshop on Procedural Content Generation in Games. ACM, 2012, p. 1.
[46] T. Trappenberg, Fundamentals of computational neuroscience. OUP Oxford, 2009.
[47] P. Dayan and L. Abbott, “Theoretical neuroscience: computational and mathematical modeling of neural systems,” Journal of Cognitive Neuroscience, vol. 15, no. 1, 2003, pp. 154–155.
[48] A. V. Samsonovich, “Toward a unified catalog of implemented cognitive architectures,” BICA, vol. 221, 2010, pp. 195–244.
[49] R. Brette, M. Rudolph, T. Carnevale, M. Hines, D. Beeman, J. M. Bower, M. Diesmann, A. Morrison, P. H. Goodman, F. C. Harris Jr et al., “Simulation of networks of spiking neurons: a review of tools and strategies,” Journal of Computational Neuroscience, vol. 23, no. 3, 2007, pp. 349–398.
[50] T. R. Armstrong, “Training for the production of memorized movement patterns,” Ph.D. dissertation, The University of Michigan, 1970.
[51] M. H. Bickhard, “Autonomy, function, and representation,” Communication and Cognition - Artificial Intelligence, vol. 17, no. 3-4, 2000, pp. 111–131.
[52] D. Vernon, G. Metta, and G. Sandini, “A survey of artificial cognitive systems: Implications for the autonomous development of mental capabilities in computational agents,” IEEE Transactions on Evolutionary Computation, vol. 11, no. 2, 2007, p. 151.
[53] C. von Hofsten, On the development of perception and action. London: Sage, 2003.
[54] M. Sagar, P. Robertson, D. Bullivant, O. Efimov, K. Jawed, R. Kalarot, and T. Wu, “A visual computing framework for interactive neural system models of embodied cognition and face to face social learning,” in Unconventional Computation and Natural Computation. Springer, 2015, pp. 71–88.