Unit II Unit Title: Overview of AR Development
Real time Engine Overview – Real time engine basics – UI and Navigation of Real time game
engines
2.1 What is a real time engine?
• A real time engine enables you to easily move through computer environments. It is often
associated with gaming or virtual reality environments.
• A real-time engine must have extremely high throughput with low latency and be able to
scale out easily. Real time computing requires a guaranteed response – often under a
millisecond. For example, a real time engine may be needed to deliver results during a
live online gaming event and be able to handle a surge in traffic during peak hours or
special promotions.
• Other uses requiring a real time engine can include safety-critical applications such as
anti-lock brakes, which require the proper mechanical response in real time.
2.2 Basics of Real time engine:
A game engine is a software framework primarily designed for the development of video games.
Developers can use engines to construct games for consoles, PCs, mobile devices and even VR
platforms.
Most popular real time Game engines are
a. Unity
b. Unreal
What is Unity 3D?
Unity 3D is a powerful IDE and game engine platform for developers. Unity has multiple
prebuilt features that help make a game work. These built-in features include 3D
rendering, physics, collision detection, etc. From a developer's point of view, this means you won't
have to build everything manually from scratch while developing or creating a game. When
starting a new project you won't have to implement the laws of physics yourself by separately
calculating object movement just to keep the game from feeling unrealistic.
Moreover, the reason Unity is considered among the best game engines by developers is
its asset store. This is a community-based section of the game engine platform where
game developers can upload their work and the whole community can make use of it.
For example, if you need a beautiful fire effect but don't have enough time to build everything
from scratch, in Unity 3D you can easily find the fire effect you are searching for
directly in the Asset Store. While working in Unity, game developers can concentrate on their
projects and write code only for the features that are truly unique to their game.
Unity game development allows for building robust and scalable gaming applications.
What is the Unreal Engine?
Unreal Engine, as its name suggests, is a game development engine that was developed in the
year 1998 by the famous gaming studio Epic Games. Initially, the platform powered a first-person
shooter game, but right now it is being utilized to create and develop games of different genres
including stealth, RPG (role-playing games), MMORPG (massively multiplayer online role-
playing games), fighting games, etc.
The platform primarily uses C++ as its coding language (alongside its Blueprint visual scripting
system) and is immensely popular among seasoned game developers and gaming studios. At the
same time, the straightforward development workflow offered by Unreal Engine is extremely
helpful for beginners and freshers in the game development industry.
The tools available in Unreal Engine provide the users the license to develop and create their
games.
Unreal Engine is counted among the best game engines and is famous for its easy customization,
in addition to the tools readily available to create HD AAA games without a lot of effort. Most of
the freshers in this industry utilize the Unreal Engine platform to initiate their career in the
gaming industry and create a portfolio for presentation.
It has various unique mechanisms through which users can run the game on their own. Not to
mention, the editors and systematic set of tools available on the platform can help the developers
modify the properties of the game and create the necessary artwork.
Unreal Engine has different parts including an online module, input, graphics engine, physics
engine, gameplay framework, sound engine, etc. The editors available in the Unreal Engine are
necessary to develop games on the platform.
Unreal engine games are widely available and are some of the most played ones in the world.
Unique Characteristics of Unity 3D that Make it One of the Best Game Engines
Here is a list of Characteristics of Unity in detail:
• Easy Access to the Unity Asset Store - No other game development software at present
offers its audience such a fully built-in and easily accessible asset store. The Unity Asset
Store is one of the most popular features among game developers because of its multiple
pre-designed assets. Before this, who would have thought that accessing 2D/3D animation
renders, pre-designed 3D models, and gaming tutorials would be just one click away? The
Unity Asset Store is an ideal characteristic that enables developers and designers to
purchase assets according to their requirements and design needs.
• Multi-Platform Developers’ Game Engine - Unity 3D is a platform that enables
developers to build projects for multiple gaming devices, including consoles (PS4
or Xbox), PC, AR/VR experiences, and mobile devices. It is currently one of the most
suitable game engines for integrating across all major platforms. Developers can both build
and port their games from one gaming platform to another with great ease. The processes
of rendering and transferring assets also function smoothly in Unity 3D. Graphics
rendering in particular is one of the most exciting features in Unity: other platforms each
have their own system settings, but in Unity, rendering through graphics APIs such as
Direct3D, OpenGL, etc. is at its simplest.
• Rich Variety of Online Tutorials - Gamers feel that one of the most helpful
characteristics of Unity 3D is the repository of online tutorials and learning materials.
This bulk of rich learning resources in Unity boasts well-documented processes, detailed
steps, multilingual texts, and video forums that enable a comprehensive understanding of
gaming concepts. Among other varieties of resources, video tutorials are the most
preferred as gamers can pause, apply, rewind, and forward the video as per their learning
pace, giving them full flexibility to learn the different tools and processes. These tutorials
on Unity 3D have allowed amateur game developers to access lessons relating to various
aspects of gaming such that no novice will be afraid to start from scratch to master the
development skills.
• Multiple Coding Languages - Unity 3D is a highly rated game engine owing to
its ability to let developers use multiple coding languages based on their familiarity. The
platform allows hands-on experience with C#, Boo, and JavaScript (referred to as
UnityScript), which are usually at the fingertips of any game developer.
Unique Features of Unreal Engine
• Pipeline Integration
• Platform tools
• Animation
• World Building
• Simulations & Special Effects
• Gameplay & Interactivity Authoring
• Rendering & Lighting
• Integrated Media Support
• Developer tools
Major Differences Between Unity and Unreal Engine
Here is the list of key differences between Unity vs Unreal Engine:
• Definition – Unity is a cross-platform game engine, whereas Unreal Engine is developed as a
source-available game engine.
• Invention – Unity was launched in 2005, whereas Unreal Engine is older, with its debut in 1998.
• Language – Unreal Engine uses C++, while Unity uses C#, which suits the demands of many
current game developers and is faster to work with.
• Assets – The Unity Asset Store has a rich collection of about 31,000 textures, props, and mods,
compared to about 10,000 for Unreal Engine.
• Graphics – Both Unity and Unreal Engine boast superior graphics, but Unreal Engine is
generally preferred over Unity here.
• Source code – Unreal Engine offers access to its source code, which eases the development
process, whereas Unity does not provide its source openly; it has to be purchased.
• Rendering – Unity’s rendering process is slower than Unreal Engine’s.
• Pricing – Unreal Engine is currently free of cost, but you must pay royalties to Epic Games.
Unity’s free version, on the other hand, can be made complete by paying a one-time upgrade fee
of $1,500 or $75/month.
• Popularity – Both Unity and Unreal Engine have huge popularity among their communities of
active users, but of late Unity 3D has been more accessible than Unreal Engine 4, accounting for
its larger client base.
After the above deliberation, it is safe to say that game development would be extremely
difficult without both Unity and Unreal Engine. Unity and Unreal Engine have their own specific
sets of advantages and disadvantages that cater to their relative audiences, who make choices
based on their requirements.
While Unity is renowned for its huge clientele, development performance, and 2D/3D
simulations, Unreal Engine is more often preferred when developers are building bigger games
with superior-quality graphics, which is why Unreal Engine games are among the most popular
with gamers.
Major Similarities Between Unity and Unreal Engine
Here is the list of similarities between Unity and Unreal Engine:
• Platforms – Both support multiple platforms, including Windows, macOS, Linux, iOS,
Android, consoles (PlayStation, Xbox), and virtual reality (VR) platforms.
• Scripting Languages – Both use scripting languages to create game logic. Unity primarily uses
C# (and has also supported JavaScript and Boo), while Unreal Engine uses Blueprint visual
scripting (and also supports C++ programming).
• Visual Editors – Both provide visual editors for designing game levels, UI, and animations.
Unity has a visual Scene editor for creating scenes, while Unreal Engine offers a similar visual
editor called the Unreal Editor.
• Asset Pipelines – Both have asset pipelines that facilitate importing and managing 3D models,
textures, audio, and other game assets.
• Community and Resources – Both have large and active communities, with extensive
documentation, tutorials, forums, and online resources available.
• Real-time Rendering – Both engines offer powerful real-time rendering capabilities, including
dynamic lighting, shading, and post-processing effects.
• AAA-quality Graphics – Both Unity and Unreal Engine are tailor-made for producing
AAA-quality graphics.
• Industry-standard Software – Both Unity and Unreal Engine support smooth bridges to several
industry-standard software packages.
• Toolbox – Both game engines offer an extensive toolbox that includes a terrain editor, physics
simulation, animation, VR support, etc.
• Multiplayer Networking – Both have networking features and frameworks to support
multiplayer games and online connectivity.
• Cross-platform Deployment – Both support cross-platform deployment, allowing developers to
build and deploy games to multiple platforms from a single codebase.
• Third-Party Integration – Both engines support integration with third-party tools, plugins, and
assets, expanding the functionality and possibilities for game development.
2.3 Functions of Game Engines
Graphics Rendering
Rendering is the final process of creating an image or animation from a computer model. When
rendering objects or characters in a scene, the engine takes into account the geometric shape of
the model, viewpoints, textures, shading and lighting, and processes them into a digital image or
raster graphic; the more powerful your engine, the faster the images will render. Rendering is an
intensive process for the CPU, and usually around 50% of the CPU’s processing capability is
taken up by it. Because it is such a strenuous task, there are methods that can be put in place to
use your processing capability more efficiently, such as:
Culling Methods
In games there are often parts of objects or NPCs that aren’t visible to the eye, which makes
rendering them wasteful. In this circumstance you would use one of the ‘culling’ methods to
ensure that there is no wasted rendering, freeing up processing power that can be put to better
use.
• Backface Culling – This is one of the most common forms of culling; it determines which
of the polygons visible at the time should be rendered. To do this, the user specifies
whether polygons are wound clockwise or counter-clockwise when projected on screen. In
simpler terms, anything on the back side of an object will not be rendered because it is
not visible. For example, in GTA you’ll see buildings from a front view, but the texture on
the back of the building won’t be rendered because it isn’t visible.
• View Frustum – This method of culling acts as a virtual camera representing your field of
view. Anything outside your field of view will not be rendered and remains culled; if it is
not visible there is no reason for it to be rendered. View frustum culling is mainly used in
first-person games and is determined by the width/perspective of the camera, which is a
geometric representation of the content visible on screen.
• Detail Culling – For objects within the view frustum, this method of culling uses geometry
to determine how far away an object is and whether it should be rendered. Depending on
how far away the object is, the engine will appropriately adjust its level of detail in
accordance with your distance, which is a more advanced form of culling. For example, a
bin on the other side of the map might at first sight look like a simple block, but the closer
you get, the more detail is rendered until it looks like a bin. This adds more realism to the
game.
• Portal Culling – Portal culling is when the engine segregates a scene from the rest of the
map by determining which cells are currently visible and which ones aren’t. Regions of
the map are divided into zones that are represented as polygons, and once you have
entered a particular zone, everything outside of it becomes irrelevant. This method is most
commonly used for indoor scenes; it is another way of saving processing power, by only
rendering what is in your view frustum in that particular room rather than everything
inside and outside of it. However, if there is a window or an open door you will still be
able to see a rendered image beyond it, but this is handled through an entirely different
view frustum that specifically calculates your position and what you should be seeing
based on it.
• Occlusion Culling – This is one of the hardest methods of culling, purely because of its
complexity. This method works out which objects are visible from the camera and skips
anything that is obscured by other objects within the scene. This is all determined by the
position of the camera, which varies with the viewing angle. The same result could be
achieved with the Z-buffer, but that is very time consuming for bigger scenes, making
occlusion culling the better option.
Ray Tracing
This is a very advanced and complicated technique for making rendered images look more
realistic by adding colour intensity, shading and shadows that would be formed by having one
or more light sources. It works by tracing rays of light that would be reflected or absorbed by
objects. For example, if a player is crossing a road and a bus drives past him, light would be
falling on the bus, casting a shadow on the player.
Fogging
This is a method used to shroud the player’s vision so they do not witness rendering processes
taking place during gameplay. It works by applying a fog gradient around your view frustum to
progressively obscure your vision until the level of detail increases. An example of fogging is in
GTA: when you’re looking down a straight road at a building, your view is limited by the fog,
but the further down the road you go, the greater the level of detail (LOD) of the object becomes.
Shadowing
Shadowing, also known as projective shadowing or shadow mapping, is the course of action an
engine takes when rendering shadows onto objects. These shadows are computed in real time,
depending on whether there is enough light within a pixel for a shadow to be projected. In
comparison to ‘shadow volumes’ (another shadowing technique), projective shadowing/shadow
mapping is less accurate. However, shadow volumes use a ‘stencil buffer’ (temporary storage of
data while it is being moved from one place to another), which is very time consuming, whereas
shadow mapping doesn’t, making it a faster alternative.
Anti-Aliasing
Anti-aliasing detects rough polygon edges on models and smooths them out using a quick scan
method; the more edges a model has, the more easily the model can be scanned at once. This
method is mainly used on the PS3 rather than the Xbox because it is a more powerful console
with better processing power. However, PC gamers may choose to turn off this rendering method
because it often slows down the overall performance of the game.
Animation System
Animation is a technique used to render movement in computer-generated models. These can
either be pre-rendered or rendered in real time using two main methods: inverse kinematics or
forward kinematics.
Ragdoll (Real Time) – In earlier games developers had to create separate animations for death
sequences. Now, thanks to advances in game technology, physics simulations have become
attainable, giving rise to the ragdoll: a skeletal replica that reacts appropriately to its cause of
death, producing varied outcomes for death scenarios and making the game seem more realistic.
For example, in the ‘Force Unleashed’ series, when you’re levitating an enemy using the Force
and you throw them or Force-push them, the ragdoll reacts according to the speed and distance
it was thrown. However, if you were to do it again, you couldn’t re-enact the same scenario.
Idle Cycle (Pre-Rendered) – This is a pre-rendered animation for still objects or characters. It
gives the character more of a lifelike feel, creating the illusion that you are not just controlling
him but that he also has a mind of his own. Depending on the audience of the game, the idle
animation may be more outrageous or more subtle. For example, in ‘Mario Galaxy’, which has a
younger audience, Mario starts flicking coins when he is idle, whereas Tekken, which has a
much older audience, has more natural idle animations that reference each character’s fighting
style. E.g. Eddy uses the Capoeira fighting style, so in his idle stance he’ll be sweeping his feet.
Forward Kinematics – This is a function for movement in animation and is also used a lot in
robotics. This method of movement is used for real-time animation in things such as ragdolls,
where movement starts from the inner joints and moves forward through the arm. It is based
on a number of kinematic equations that determine the movement of the animation.
Inverse Kinematics – In comparison to forward kinematics, this is the complete opposite, in the
sense that movement starts from the outer joint and works backwards through the body rather
than forwards. This method of animation is pre-rendered and is the most common form of
animation in games.
There are many other animation software packages available, but one that stands out is Team
Bondi’s ‘MotionScan’. This is state-of-the-art middleware used by Rockstar for the facial
animations in L.A. Noire. It works by sitting the actor in a rig surrounded by advanced cameras
that pick up every subtle feature as they act out their script. However, the software is only a tool
and is only as good as the actor performing the role. The captured animation is then processed
and rendered in 3D modelling software.
Middleware
Middleware acts as an extension of the engine to help provide services that are beyond the
engine’s capabilities. It is not part of the initial engine but can be licensed for various purposes,
and there is middleware available for almost any feature a game engine might need. For
example, the physics in Skate 3 were terrible: if you were to fall off your board you would
bounce or fly to unrealistic lengths. In that circumstance, for their next game the developers may
want to license ‘Havok’, which is a well-established and respected physics engine, to help fix
this problem. There are also other middleware engines that you can license to assist your games
in many ways, like Demonware, a networking engine whose sole purpose is to improve your
online features.
Artificial Intelligence
Artificial intelligence creates the illusion of an NPC having realistic reactions and thoughts,
adding to the experience of the game. A common use is ‘pathfinding’, which determines the
strategic movement of your NPCs. In the engine you give each NPC a route to take and different
options to act on if that specific route is not accessible; this also takes other things into account,
like your level of health and the current objective at the time. These paths are represented as a
series of connected points. A similar type of AI is ‘navigation’, which uses a series of connected
polygons. Similar to pathfinding, the NPC follows these connected polygons, only moving
within that space; however, it is not limited to one route. Having the space and the intelligence
to know which objects or other NPCs to avoid, it can take different routes depending on the
circumstance. A fairly new method of AI is ‘emergent’ behaviour, which allows the NPC to
learn from the player and develop reactions or responses to the actions taking place. Even
though these responses or reactions may be limited, they often give off the impression that you
are interacting with a human-like character.
Other Systems
Graphics Rendering – Rendering is the final process of creating an image or animation from a
computer model. When rendering objects or characters in a scene, the engine takes into account
the geometric shape of the model, viewpoints, textures, shading and lighting, and processes them
into a digital image or raster graphic; the more powerful your engine, the faster the images will
render. There are two types of rendering. The first is ‘real-time’ rendering, an ongoing rendering
process calculated to show 20 to 120 frames per second; it renders graphics that are visually
noticeable to the eye in a fraction of a second. An example of a game that uses this is GTA:
wherever you move, the engine renders with you, and anything beyond your sight remains in its
wireframe state. The other method is ‘non-real-time’ rendering; in comparison to real-time
rendering, this method doesn’t render everything in your visual sight but renders specific things
in order to save processing power, giving you a better-quality image. This method is mainly
used for cutscenes or cinematics; games will usually use both methods of rendering as
appropriate for their uses.
Collision Detection – This is the response taken when two or more objects collide with each
other. Every game uses some form of collision detection; however, the level of importance it has
in a game will vary. One method of intersection detection is the ‘bounding sphere’, a sphere
defined by a centre point and a radius that is placed around a character or object. Anything that
penetrates this sphere is detected as an intersection, and an appropriate response is then usually
triggered. This is one of the simplest methods of detection and is most ideal when accuracy isn’t
a huge factor.
A second solution for collision detection is the ‘bounding box’, which is a rectangular box
surrounding your character or object. The bounding box has three values (width, height and
vector location), and anything that intersects this invisible box boundary is a sign of collision. It
is often a favourite for developers as it is mainly used for small objects in the mise-en-scene.
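The two tests described above are simple enough to sketch directly. The following C# example (using Unity's Vector3 and Bounds types) shows a bounding-sphere check and an axis-aligned bounding-box check; in a real project you would normally rely on the engine's colliders rather than hand-rolling these.

using UnityEngine;

// Sketch of the two simple collision tests described above.
public static class SimpleCollision
{
    // Bounding spheres: two spheres overlap if the distance between their
    // centres is less than or equal to the sum of their radii.
    public static bool SpheresIntersect(Vector3 centreA, float radiusA,
                                        Vector3 centreB, float radiusB)
    {
        float radiusSum = radiusA + radiusB;
        return (centreA - centreB).sqrMagnitude <= radiusSum * radiusSum;
    }

    // Axis-aligned bounding boxes: Unity's Bounds struct already provides
    // an overlap test based on each box's centre and extents.
    public static bool BoxesIntersect(Bounds a, Bounds b)
    {
        return a.Intersects(b);
    }
}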
Physics – Physics is used to give the game some form of realism for the player. Depending on
the game, some need more accurate physics than others; for example, a fighter jet simulation
would use more accurate physics than ‘Tom Clancy’s H.A.W.X.’. If you’re in need of more
accurate physics, you might want to think about licensing some middleware: Havok is a
specialised physics engine that may give you better results than making the physics yourself in
the engine. Since the late 2000s, games have been made to look more cinematic, and the use of a
good physics engine is vital to the realism of a game.
Sound – Sound in a game is vital because it is a noticeable response that can occur from
interactions in the game. Another purpose of sound is to add realism to the game through
ambient sounds that make your environment more believable to be a part of. For example, if the
scene is set in an army camp you’ll be able to hear marching, guns reloading or being fired,
chants, the groaning of injured soldiers, and so on. You could also include soundtracks that bring
out different emotions in the player; for example, Dead Space uses music to shift your emotions
from calm to scared in a matter of seconds. Usually game audio is made and edited outside of
the engine; however, some engines do include their own audio technology.
2.4 Basic Components of a Real-time Game Engine
Game engines often include core functionality such as
• Graphics Engine (Rendering): This renders the game's visuals, such as 2D or 3D
graphics, textures, and animations. Everything you see in the game is a result of the
rendering engine.
• Physics Engine: This simulates the game's physical world. It manages collisions, gravity,
and other real-world physics. Or unrealistic physics, but either way, you need something
doing the physics math in-game.
• Audio Engine: This part handles the game's sound design, including music, sound
effects, and voiceovers. Games like Returnal have complex 3D audio engines that take
immersion to the next level.
• Artificial Intelligence: The AI system controls non-player characters (NPCs) and other
game world elements. It animates these characters by dictating their behavior, decision-
making, and interactions with the player. This is perhaps the most exciting area for game
development in coming years as AI technology explodes in complexity.
• Input Management: This component oversees user input, such as keyboard, mouse, and
controller actions. It translates these inputs into in-game actions. This might sound
boring, but input management is crucial. Any game that controls poorly is no fun to play
no matter how good it looks and sounds.
They also allow games to be deployed on multiple platforms – game engines are capable of
platform abstraction. Both Unity and Unreal Engine support easy deployment of game-ready
assets from their respective marketplaces, the Unity Asset Store and the Unreal Marketplace.
2.4.1 Graphics Engine / Rendering Using Game Engines — Basics
3D Rendering is the method with which three-dimensional data is converted into an image.
The time taken for this process can vary from seconds to even days based on the amount of data
the computer has to process, and this time-frame can even be for a single frame. There are two
types of rendering technologies used in the industry these days- real-time and offline
rendering.
1. Offline Render - When we render a scene using an off-line renderer, each time there is a
small change the system has to re-compute everything to produce how the scene will
look with the change implemented (popular off-line renderers are Arnold (used in Maya),
V-Ray, Octane, etc.). The change can even be something trivial, for example changing the
color of a light, changing the camera angle, or moving a character. In end-to-end content
production, changes are very frequent, and from these examples we can see that adopting
an off-line render pipeline can add a lot more time to the production cycle. It also gives
content creators less room to iterate on sequences and makes the entire process of
re-working scenes time-consuming and expensive.
2. Real-time Rendering - With real-time rendering, the traditional renderer is replaced with
a render engine that can render in real time. This means that with real-time rendering you
can see the end result almost instantly. The advancements in this field have only been
possible due to the push from game engines in recent times (most notably Unreal
Engine). The resulting technology has matured so much that it now meets not only the
demands of architectural visualization but also those of film and television. Production
companies use real-time technology to accelerate the creative process and even to display
the final pixels.
Case Studies – Working of Game Engines
In this scenario, we explain how we were able to provide a base frame for how a scene would
look and render out a sequence that can ideally be used in the final stages of production (the art
is meant only for demonstrative purposes).
• Step 1: Block out scene and provide a base for content
• Step 2: Refine scene for any animation and optimize accordingly.
• Step 3: Add final polish to the scene to accurately describe pre-visualization and
production experience.
• The scene blockout lays down the fundamental building blocks and how the scene
resonates visually and emphatically. This is the crucial stage to nail down as this
represents the base for the entire visualization. Using Unreal Engine we were able to
quickly render out a scene and make changes instantly.
• With each change we made, the render time would increase in software like Maya, as it is
not real-time; to achieve the same results, an off-line renderer took us a lot more time to
render scenes out.
• Once the lighting is built in a real-time engine, any change in camera (or object movement)
is reflected instantly. This helps scene builders quickly make additional changes to the
scene if needed.
Problem- As content creation / visualization grows to be an ever-evolving artistic and
technological pursuit, traditional pipelines are being overturned due to the timeline
involved to pre-visualize and render out sequences.
1. Traditional visualization pipeline is a linear approach- which encompasses development,
pre-production, production, and post.
2. Iteration is challenging and costly.
3. Traditional visualization is created using animation software/external renderers at the
cost of long render times.
Solution - Using real-time rendering technologies artists now have the freedom to create
content exactly the way they want it iteratively, collaboratively while avoiding re-shoots.
1. Pipeline is non-linear and encourages a more iterative and collaborative process.
2. Real-time rendering solutions can produce high quality assets with real-world physics
that can be used even in the final product.
3. Decreased render times means professionals don’t have to wait to see their output which
would be close to the final result.
4. Combining all processes into a single suite enabled us to iterate faster, avoiding the hassles
of a traditional pipeline. By doing this we were able to bring our artists and developers
under one roof and make checks systematically and progressively.
5. As content creation grows to be an ever-evolving artistic and technological pursuit,
traditional pipelines are being overturned due to the timeline involved to pre-visualize
and render out sequences. With real-time rendering, ‘what you see is what you get’.
Using this technology artists now have the freedom to create content exactly the way they
want iteratively, collaboratively while avoiding re-shoots, and without compromising
creativity. Every hour of pre-production is worth two hours of production, and Game
engines can further be used to create interactive spaces where creators can visualize their
intended locations and sets, set camera angles and frame lighting references- essentially
blocking out and creating the movie even before production to establish tone and
aesthetic so that the end result is exactly the way they want it.
Unity’s Graphics engine
Unity’s graphics features let you control the appearance of your application and are highly
customizable. You can use Unity’s graphics features to create beautiful, optimized graphics
across a range of platforms, from mobile to high-end consoles and desktop. The following are
the graphics features:
• Render pipelines
• Cameras
• Post-processing
• Lighting section
• Meshes, Materials, Textures, and Shaders
• Particle Systems
• Creating environments
• Sky
• Visual Effects
• Optimizing Graphics Performance
• Color spaces
Rendering Pipeline - A render pipeline performs a series of operations that take the contents of
a scene, and displays them on a screen. In Unity, you can choose between different render
pipelines. Unity provides three prebuilt render pipelines with different capabilities and
performance characteristics, or you can create your own.
How a render pipeline works
A render pipeline follows these steps:
1. Culling, where the pipeline decides which objects from the scene to display. This usually
means it removes objects that are outside the camera view (frustum culling) or hidden
behind other objects (occlusion culling).
2. Rendering, where the pipeline draws the objects with their correct lighting into pixel
buffers.
3. Post-processing - where the pipeline modifies the pixel buffers to generate the final
output frame for the display. Examples of modifications include color grading, bloom,
and depth of field. A render pipeline repeats these steps each time Unity generates a new
frame.
Cameras - A Unity scene represents GameObjects in a three-dimensional space. Since the
viewer’s screen is two-dimensional, Unity needs to capture a view and “flatten” it for display. It
does this using cameras. In Unity, you create a camera by adding a Camera component to a
GameObject. A camera in the real world, or indeed a human eye, sees the world in a way that
makes objects look smaller the farther they are from the point of view. This well-
known perspective effect is widely used in art and computer graphics and is important for
creating a realistic scene. Naturally, Unity supports perspective cameras, but for some purposes,
you want to render the view without this effect. For example, you might want to create a map or
information display that is not supposed to appear exactly like a real-world object. A camera that
does not diminish the size of objects with distance is referred to as orthographic and Unity
cameras also have an option for this. The perspective and orthographic modes of viewing a scene
are known as camera projections.
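As a small illustration of the two camera projections just described, the following sketch toggles a Unity camera between perspective and orthographic mode at runtime; the component and field names are assumptions for this example.

using UnityEngine;

// Sketch: switching a camera between perspective and orthographic projection,
// e.g. for a map or information display versus the normal scene view.
public class ProjectionToggle : MonoBehaviour
{
    public Camera targetCamera;
    public float orthographicSize = 10f; // half of the vertical viewing volume

    public void UseMapView()
    {
        targetCamera.orthographic = true;               // objects no longer shrink with distance
        targetCamera.orthographicSize = orthographicSize;
    }

    public void UseSceneView()
    {
        targetCamera.orthographic = false;              // normal perspective camera
        targetCamera.fieldOfView = 60f;                 // vertical field of view in degrees
    }
}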
Post Processing - Unity provides a number of post-processing effects and full-screen effects
that can greatly improve the appearance of your application with little set-up time. You can use
these effects to simulate physical camera and film properties, or to create stylised visuals.
Lighting section - With Unity, you can achieve realistic lighting that is suitable for a range of art
styles. Lighting in Unity works by approximating how light behaves in the real world. Unity uses
detailed models of how light works for a more realistic result, or simplified models for a more
stylized result.
• Direct and indirect lighting - Direct light is light that is emitted, hits a surface once, and
is then reflected directly into a sensor (for example, the eye’s retina or a camera). Indirect
light is all other light that is ultimately reflected into a sensor, including light that hits
surfaces several times, and sky light. To achieve realistic lighting results, you need to
simulate both direct and indirect light. Unity can calculate direct lighting, indirect
lighting, or both direct and indirect lighting. The lighting techniques that Unity uses
depends on how you configure your Project.
• Real-time and baked lighting - Real-time lighting is when Unity calculates lighting at
runtime. Baked lighting is when Unity performs lighting calculations in advance and
saves the results as lighting data, which is then applied at runtime. In Unity, your Project
can use real-time lighting, baked lighting, or a mix of the two (called mixed lighting).
• Global illumination- Global illumination is a group of techniques that model both direct
and indirect lighting to provide realistic lighting results. Unity has two global
illumination systems, which combine direct and indirect lighting.
• The Baked Global Illumination system consists of lightmaps, Light Probes, and Reflection
Probes. You can bake with the Progressive Lightmapper (CPU or GPU) or Enlighten Baked
Global Illumination. However, Enlighten Baked Global Illumination is deprecated and no
longer visible in the user interface by default. See Lightmapping using Enlighten Baked
Global Illumination for more information.
Meshes, Materials, Textures and Shaders - A mesh is a collection of data that describes a shape.
In Unity, you use meshes in the following ways:
• In graphics, you use meshes together with materials: meshes describe the shape of an
object that the GPU renders, and materials describe the appearance of its surface.
• In physics, you can use a mesh to determine the shape of a collider.
a. Deformable meshes - In addition to regular meshes, Unity also supports deformable
meshes. Deformable meshes fall into the following categories:
o Skinned meshes: These meshes work with additional data called bones. Bones
form a structure called a skeleton (also called a rig, or joint hierarchy), and the
skinned mesh contains data that allows it to deform in a realistic way when the
skeleton moves. You usually use skinned meshes for skeletal animation with
Unity’s Animation features, but you can also use them with Rigidbody
components to create “ragdoll” effects.
o Meshes with blend shapes: These meshes contain data called blend shapes.
Blend shapes describe versions of the mesh that are deformed into different
shapes, which Unity interpolates between. You use blend shapes for morph target
animation, which is a common technique for facial animation.
o Meshes that work with a Cloth component, for realistic fabric simulation.
• Materials - To draw something in Unity, you must provide information that describes its
shape, and information that describes the appearance of its surface. You use meshes to
describe shapes, and materials to describe the appearance of surfaces.
• Materials and shaders are closely linked; you always use materials with shaders.
Material fundamentals - A material contains a reference to a Shader object. If that Shader
object defines material properties, then the material can also contain data such as colors or
references to textures.
• A material asset is a file with the .mat extension. It represents a material in your Unity
project. For information on viewing and editing a material asset using the Inspector
window, see Material Inspector reference.
• Creating a material asset, and assigning a shader to it
To create a new material asset in your project, from the main menu or the Project
View context menu, select Assets > Create > Material.
To assign a shader to the material asset, in the Inspector window use
the Shader drop-down menu.
• Assigning a material asset to a GameObject - To render a GameObject using a
material:
Add a component that inherits from Renderer. MeshRenderer is the most common
and is suitable for most use cases, but SkinnedMeshRenderer, LineRenderer,
or TrailRenderer might be more suitable if your GameObject has special requirements.
Assign the material asset to the component’s Material property.
• To render a particle system in the Built-in Particle System using a material:
Add a Renderer Module to the Particle System.
Assign the material asset to the Renderer Module’s Material property.
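The same workflow can also be performed from a script. The following is a minimal sketch, assuming the Built-in Render Pipeline's "Standard" shader and a GameObject that already has a MeshRenderer; URP or HDRP projects would use different shader names.

using UnityEngine;

// Sketch: creating a material in code and assigning it to a MeshRenderer,
// mirroring the Editor workflow described above.
public class MaterialAssignExample : MonoBehaviour
{
    void Start()
    {
        // Create a material that references a shader.
        Material mat = new Material(Shader.Find("Standard"));
        mat.color = Color.red;                         // set a material property

        // Assign it to the renderer component on this GameObject.
        MeshRenderer meshRenderer = GetComponent<MeshRenderer>();
        meshRenderer.material = mat;
    }
}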
Sky - A sky is a type of background that a Camera draws before it renders a frame. This type of
background greatly benefits 3D games and applications because it provides a sense of depth and
makes the environment seem much larger than it actually is. The sky itself can contain anything,
such as clouds, mountains, buildings, and other unreachable objects, to create the illusion of
distant three-dimensional surroundings. Unity can also use a sky to generate realistic ambient
lighting in your Scene.
Skyboxes - A skybox is a cube with a different texture on each face. When you use a skybox to
render a sky, Unity essentially places your Scene inside the skybox cube. Unity renders the
skybox first, so the sky always renders at the back.
Similar to other sky implementations, you can use a skybox to do the following:
• Render a skybox around your Scene.
• Configure your lighting settings to create realistic ambient lighting based on the skybox.
• Override the skybox that an individual Camera uses, using the Skybox component.
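A minimal sketch of setting a scene-wide skybox and overriding it for an individual camera is shown below; the skybox material and the camera reference are assumed assets in the project.

using UnityEngine;

// Sketch: assigning a skybox globally, then overriding it for one camera
// with the Skybox component.
public class SkyboxSetup : MonoBehaviour
{
    public Material mySkyboxMaterial;   // a material that uses a Skybox shader (assumed asset)
    public Camera specialCamera;        // camera that should use its own sky

    void Start()
    {
        // Scene-wide skybox.
        RenderSettings.skybox = mySkyboxMaterial;

        // Per-camera override: the Skybox component replaces the scene skybox
        // for this camera only.
        Skybox overrideSky = specialCamera.gameObject.AddComponent<Skybox>();
        overrideSky.material = mySkyboxMaterial;
    }
}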
Visual effects – Some of the visual effect methods in Unity are:
• Post-processing and full-screen effects – How to set up and use post-processing and other
full-screen effects in Unity.
• Particle systems – How to choose between Unity’s different particle systems, and use them in
your project.
• Decals and projectors – How to create decal and projector effects.
• Lens flares and halos – How to create lens flare and halo effects.
• Lines, trails, and billboards – How to render lines, trails, and billboards.
Optimizing Graphics Performance
Usually, the greatest contributor to CPU rendering time is the cost of sending rendering
commands to the GPU. Rendering commands include draw calls (commands to draw geometry),
and commands to change the settings on the GPU before drawing the geometry. If this is the
case, consider these options:
• You can reduce the number of objects that Unity renders.
o Consider reducing the overall number of objects in the scene: for example, use a
skybox to create the effect of distant geometry.
o Perform more rigorous culling, so that Unity draws fewer objects. Consider
using occlusion culling to prevent Unity from drawing objects that are hidden
behind other objects, reducing the far clip plane of a Camera so that more distant
objects fall outside its frustum, or, for a more fine-grained approach, putting
objects into separate layers and setting up per-layer cull distances with
Camera.layerCullDistances (a short sketch of this appears below).
• You can reduce the number of times that Unity renders each object.
o Use light mapping to “bake” (pre-compute) lighting and shadows where
appropriate. This increases build time, runtime memory usage and storage space,
but can improve runtime performance.
o If your application uses Forward rendering, reduce the number of per-pixel
real-time lights that affect objects. For more information, see Forward rendering
path.
o Real-time shadows can be very resource-intensive, so use them sparingly and
efficiently. For more information, see Shadow troubleshooting: Shadow
performance.
o If your application uses Reflection Probes, ensure that you optimize their usage.
For more information, see Reflection Probe performance.
• You can reduce the amount of work that Unity must do to prepare and send rendering
commands, usually by sending them to the GPU in more efficient “batches”. There are a
few different ways to achieve this: for more information, see Optimizing draw calls.
Many of these approaches will also reduce the work required on the GPU; for example, reducing
the overall number of objects that Unity renders in a frame will result in a reduced workload for
both the CPU and the GPU.
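The per-layer cull distance option mentioned in the list above can be configured from a script. The following is a minimal sketch; the layer name "SmallProps" and the 25-metre distance are assumptions for this example.

using UnityEngine;

// Sketch: per-layer cull distances. Objects on the chosen layer stop rendering
// beyond the given distance, while all other layers keep the camera's far clip plane.
public class LayerCullSetup : MonoBehaviour
{
    void Start()
    {
        Camera cam = GetComponent<Camera>();

        // One entry per layer (Unity has 32 layers); 0 means "use the far clip plane".
        float[] distances = new float[32];

        int propsLayer = LayerMask.NameToLayer("SmallProps"); // assumed layer name
        if (propsLayer >= 0)
            distances[propsLayer] = 25f; // cull objects on this layer beyond 25 metres

        cam.layerCullDistances = distances;
    }
}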
Reducing the GPU cost of rendering - There are three main reasons why the GPU might fail to
complete its work in time to render a frame.
If an application is limited by fill rate, the GPU is trying to draw more pixels
per frame than it can handle. If this is the case, consider these options:
• Identify and reduce overdraw in your application. The most common contributors to
overdraw are overlapping transparent elements, such as UI, particles and sprites. In the
Unity Editor, use the Overdraw Draw mode to identify areas where this is a problem.
• Reduce the execution cost of fragment shaders.
• If you’re using Unity’s built-in shaders, pick ones from the Mobile or Unlit categories.
They work on non-mobile platforms as well, but are simplified and approximated
versions of the more complex shaders.
• Dynamic Resolution is a Unity feature that allows you to dynamically scale individual
render targets.
If an application is limited by memory bandwidth, the GPU is trying to read and write more data
to its dedicated memory than it can handle in a frame. This usually means that there are too
many textures, or that the textures are too large. If this is the case, consider these options:
• Enable mip maps for textures whose distance from the camera varies at runtime (for
example, most textures used in a 3D scene). This increases memory usage and storage
space for these textures, but can improve runtime GPU performance.
• Use suitable compression formats to decrease the size of your textures in memory. This
can result in faster load times, a smaller memory footprint, and improved GPU rendering
performance. Compressed textures only use a fraction of the memory bandwidth needed
for uncompressed textures.
If an application is limited by vertex processing, this means that the GPU is trying to process
more vertices than it can handle in a frame. If this is the case, consider these options:
• Reduce the execution cost of vertex shaders.
• Optimize your geometry: don’t use any more triangles than necessary, and try to keep the
number of UV mapping seams and hard edges (doubled-up vertices) as low as possible.
• Use the Level Of Detail system.
Reducing the frequency of rendering - Sometimes, it might benefit your application to reduce
the rendering frame rate. This doesn’t reduce the CPU or GPU cost of rendering a single frame,
but it reduces the frequency with which Unity does so without affecting the frequency of other
operations (such as script execution).
You can reduce the rendering frame rate for parts of your application, or for the whole
application. Reducing the rendering frame rate prevents unnecessary power usage, prolongs
battery life, and prevents the device temperature from rising to a point where the CPU frequency
may be throttled. This is particularly useful on handheld devices.
If profiling reveals that rendering consumes a significant proportion of the resources for your
application, consider which parts of your application might benefit from this. Common use cases
include menus or pause screens, turn based games where the game is awaiting input, and
applications where the content is mostly static, such as automotive UI.
To prevent input lag, you can temporarily increase the rendering frame rate for the duration of
the input so that it still feels responsive.
To adjust the rendering frame rate, use the On Demand Rendering API. The API works
particularly well with the Adaptive Performance package.
Note: VR applications don’t support On Demand Rendering. Not rendering every frame causes
the visuals to be out of sync with head movement and might increase the risk of motion sickness.
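A minimal sketch of using On Demand Rendering for a mostly static screen is shown below; the component name and the chosen frame interval are assumptions for this example.

using UnityEngine;
using UnityEngine.Rendering;

// Sketch: lowering the rendering frame rate while a static menu is shown,
// then restoring it when the user interacts so the UI still feels responsive.
public class MenuFrameRate : MonoBehaviour
{
    void OnEnable()
    {
        // Render only every 4th frame while this menu is active.
        OnDemandRendering.renderFrameInterval = 4;
    }

    void Update()
    {
        // Temporarily render every frame while the user is pressing keys.
        if (Input.anyKey)
            OnDemandRendering.renderFrameInterval = 1;
    }

    void OnDisable()
    {
        OnDemandRendering.renderFrameInterval = 1; // back to normal rendering
    }
}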
2.4.2 Physics Engine
Unity helps you simulate physics in your Project to ensure that the objects correctly accelerate
and respond to collisions, gravity, and various other forces. Unity provides different physics
engine implementations which you can use according to your Project needs: 3D, 2D, object-
oriented, or data-oriented.
If your project is object-oriented, use Unity’s built-in physics engine that corresponds to your
needs:
• Built-in 3D physics (Nvidia PhysX engine integration)
• Built-in 2D physics (Box2D engine integration)
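As a small illustration of the built-in 3D physics engine, the following sketch lets a GameObject respond to gravity and collisions through a Rigidbody and applies an impulse to it; the values used are arbitrary.

using UnityEngine;

// Sketch: the built-in 3D physics engine in action. A Rigidbody makes the
// GameObject respond to gravity and collisions; AddForce applies a push.
[RequireComponent(typeof(Rigidbody))]
public class PhysicsKick : MonoBehaviour
{
    void Start()
    {
        Rigidbody body = GetComponent<Rigidbody>();
        body.mass = 2f;                                      // heavier objects need more force
        body.AddForce(Vector3.up * 10f, ForceMode.Impulse);  // instant upward impulse
    }
}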
If your project uses Unity’s Data-Oriented Technology Stack (DOTS), you need to install a
dedicated DOTS physics package. The available packages are:
• Unity Physics package: the DOTS physics engine you need to install by default to
simulate physics in any data-oriented project.
• Havok Physics for Unity package: an implementation of the Havok physics engine for
Unity, to use as an extension of the Unity Physics package. Note that this package is
subject to a specific licensing scheme.
The Unity Physics package, part of Unity's Data-Oriented Technology Stack (DOTS), provides a
deterministic rigid body dynamics system and spatial query system.
2.4.3 Audio Engine
In real life, objects emit sounds that listeners hear. The way a sound is perceived depends on
many factors. A listener can tell roughly which direction a sound is coming from and may also
get some sense of its distance from its loudness and quality. A fast-moving sound source (such as
a falling bomb or a passing police car) changes in pitch as it moves as a result of the Doppler
Effect. Surroundings also affect the way sound is reflected. A voice inside a cave has an echo,
but the same voice in the open air doesn’t.
To simulate the effects of position, Unity requires sounds to originate from Audio Sources
attached to objects. The sounds emitted are then picked up by an Audio Listener attached to
another object, most often the main camera. Unity can then simulate the effects of a source’s
distance and position from the listener object and play them to you accordingly. You can also use
the relative speed of the source and listener objects to simulate the Doppler Effect for added
realism.
Unity can’t calculate echoes purely from scene
geometry, but you can simulate them by adding Audio Filters
to objects. For example, you could apply the Echo filter to a sound that is supposed to be
coming from inside a cave. In situations where objects can move in and out of a place with a
strong echo, you can add a Reverb Zone to the scene. For example, your game might involve
cars driving through a tunnel. If you place a reverb zone inside the tunnel, the cars’ engine
sounds start to echo as they enter. The echo quiets as the cars emerge from the other side.
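A minimal sketch of a positional (3D) Audio Source is shown below; the clip field is an assumed asset reference, and the Audio Listener is expected to sit on the main camera as described above.

using UnityEngine;

// Sketch: a 3D Audio Source whose perceived volume and Doppler shift depend
// on its position and speed relative to the Audio Listener.
[RequireComponent(typeof(AudioSource))]
public class SirenAudio : MonoBehaviour
{
    public AudioClip sirenClip;   // assumed imported Audio Clip

    void Start()
    {
        AudioSource source = GetComponent<AudioSource>();
        source.clip = sirenClip;
        source.spatialBlend = 1f;   // 1 = fully 3D: volume attenuates with distance
        source.dopplerLevel = 1f;   // pitch shifts as the source moves past the listener
        source.loop = true;
        source.Play();
    }
}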
With the Unity Audio Mixer, you can mix various audio sources, apply effects to them, and
perform mastering.
Unity can import audio files in AIFF, WAV, MP3, and Ogg formats in the same way as
other assets. Drag the files into the Project panel. Import an audio file to create an Audio Clip
that you can then drag to an Audio Source or use from a script. The Audio Clip reference page
has more details about the import options available for audio files.
For music, Unity also supports tracker modules, which use short audio samples as
“instruments” that you can arrange to play tunes. You can import tracker modules
from .xm, .mod, .it, and .s3m files and use them the same way you use other audio clips.
2.4.4 Artificial Intelligence
AI in gaming can assist in game personalization by analyzing player data and behavior to
enable the scripting of tailored experiences and content recommendations. This helps in making
the game more playable for each player.
2.4.5 Input Management
Input allows the user to control your application using a device, touch, or gestures. You can
program in-app elements, such as the graphic user interface (GUI) or a user avatar
, to respond to user input in different ways.
Unity supports input from many types of input devices, including:
• Keyboards and mice
• Joysticks
• Controllers
• Touch screens
• Movement-sensing capabilities of mobile devices, such as accelerometers or gyroscopes
• VR and AR controllers
Unity supports input through two separate systems:
• The Input Manager - is part of the core Unity platform and available by default.
• The Input System is a package that needs to be installed via the Package Manager before
you can use it. It requires the .NET 4 runtime, and doesn’t work in projects that use the
old .NET 3.5 runtime.
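As a small illustration, the following sketch reads input through the default Input Manager (the older of the two systems); the axis names used are the ones Unity defines by default, and the movement values are arbitrary.

using UnityEngine;

// Sketch: reading input via the classic Input Manager. The newer Input System
// package uses a different, action-based API.
public class SimpleMove : MonoBehaviour
{
    public float speed = 5f;

    void Update()
    {
        // Virtual axes defined in Project Settings > Input Manager.
        float h = Input.GetAxis("Horizontal");
        float v = Input.GetAxis("Vertical");
        transform.Translate(new Vector3(h, 0f, v) * speed * Time.deltaTime);

        if (Input.GetKeyDown(KeyCode.Space))
            Debug.Log("Jump pressed");
    }
}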
2.5 User interface (UI)
Unity provides three UI
systems that you can use to create user interfaces (UI) for the Unity Editor and applications
made in the Unity Editor:
• UI Toolkit
• The Unity UI package (uGUI)
• IMGUI
This page provides an overview of each.
UI Toolkit
UI Toolkit is the newest UI system in Unity. It’s designed to optimize performance across
platforms, and is based on standard web technologies. You can use UI Toolkit to create
extensions for the Unity Editor, and to create runtime UI for games and applications.
UI Toolkit includes:
• A retained-mode UI system that contains the core features and functionality required to
create user interfaces.
• UI Asset types inspired by standard web formats such as HTML, XML, and CSS. Use
them to structure and style UI.
• Tools and resources for learning to use UI Toolkit, and for creating and debugging your
interfaces.
Unity intends for UI Toolkit to become the recommended UI system for new UI development
projects, but it is still missing some features found in Unity UI (uGUI) and IMGUI.
The Unity UI (uGUI) package
The Unity User Interface (Unity UI) package (also called uGUI) is an older, GameObject-based
UI system that you can use to develop runtime UI for games and applications. In Unity UI, you
use components and the Game view to arrange, position, and style the user interface. It supports
advanced rendering and text features.
See the Unity UI package documentation for the manual and API reference.
IMGUI
Immediate Mode Graphical User Interface (IMGUI) is a code-driven UI Toolkit that uses
the OnGUI function, and scripts
that implement it, to draw and manage user interfaces. You can use IMGUI to create
custom Inspectors
for script components, extensions for the Unity Editor, and in-game debugging displays. It is not
recommended for building runtime UI.
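A minimal IMGUI sketch is shown below; it draws an immediate-mode label and button from OnGUI, which is the kind of in-game debugging display IMGUI is suited to. The overlay contents are assumptions for this example.

using UnityEngine;

// Sketch: an immediate-mode debug overlay. OnGUI is called for every GUI event
// and the controls are drawn directly; there is no retained UI hierarchy.
public class DebugOverlay : MonoBehaviour
{
    void OnGUI()
    {
        GUI.Label(new Rect(10, 10, 200, 20),
                  "FPS: " + (1f / Time.deltaTime).ToString("F0"));

        if (GUI.Button(new Rect(10, 40, 120, 30), "Reset"))
        {
            Debug.Log("Reset clicked");
        }
    }
}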
Choosing a UI system for your project
Unity intends for UI Toolkit to become the recommended UI system for new UI development
projects, but it is still missing some features found in Unity UI (uGUI) and IMGUI. These older
systems are better in certain use cases, and are required to support legacy projects.
Your choice of UI system for a given project depends on the kind of UI you plan to develop, and
the features you need support for.
Comparison of UI systems in Unity
UI Toolkit is intended to become the recommended UI system for your new UI development
projects. However, in the current release, UI Toolkit does not have some features that Unity UI
(uGUI) and Immediate Mode GUI (IMGUI) support. uGUI and IMGUI are more appropriate for
certain use cases, and are required to support legacy projects.
This page provides a high-level feature comparison of UI Toolkit, uGUI, and IMGUI, and their
respective approaches to UI design.
General consideration
The following lists the recommended and alternative system for runtime and Editor (2022):
• Runtime – Recommended: Unity UI; Alternative: UI Toolkit
• Editor – Recommended: UI Toolkit; Alternative: IMGUI
Innovation and development
UI Toolkit is in active development and releases new features frequently. uGUI and IMGUI are
established and production-proven UI systems that are updated infrequently.
uGUI and IMGUI might be better choices if you need features that are not yet available in UI
Toolkit, or you need to support or reuse older UI content.
Runtime
uGUI is the recommended solution for the following:
• UI positioned and lit in a 3D world
• VFX with custom shaders
and materials
• Easy referencing from MonoBehaviours
UI Toolkit is an alternative to uGUI if you create a screen overlay UI that runs on a wide variety
of screen resolutions. Consider UI Toolkit to do the following:
• Produce work with a significant amount of user interfaces
• Require familiar authoring workflows for artists and designers
• Seek textureless UI rendering capabilities
Use Cases
The following lists the recommended system for major runtime use cases (2022):
• Multi-resolution menus and HUD in intensive UI projects – UI Toolkit
• World space UI and VR – Unity UI
• UI that requires customized shaders and materials – Unity UI
In detail
The following lists support for detailed runtime features (2022), shown as Feature – UI Toolkit /
Unity UI:
• WYSIWYG authoring – Yes / Yes
• Nesting reusable components – Yes / Yes
• Global style management – Yes / No
• Layout and Styling Debugger – Yes / Yes
• Scene integration – Yes / Yes
• Rich text tags – Yes / Yes*
• Scalable text – Yes / Yes*
• Font fallbacks – Yes / Yes*
• Adaptive layout – Yes / Yes
• Input system support – Yes / Yes
• Serialized events – No / Yes
• Visual Scripting support – No / Yes
• Rendering pipelines support – Yes / Yes
• Screen-space (2D) rendering – Yes / Yes
• World-space (3D) rendering – No / Yes
• Custom materials and shaders – No / Yes
• Sprites / Sprite atlas support – Yes / Yes
• Dynamic texture atlas – Yes / No
• Textureless elements – Yes / No
• UI anti-aliasing – Yes / No
• Rectangle clipping – Yes / Yes
• Mask clipping – No / Yes
• Nested masking – Yes / Yes
• UI transition animations – Yes / No
• Integration with Animation Clips and Timeline – No / Yes
*Requires the TextMesh Pro package
Editor
UI Toolkit is recommended if you create complex editor tools. UI Toolkit is also recommended
for the following reasons:
• Better reusability and decoupling
• Visual tools for authoring UI
• Better scalability for code maintenance and performance
IMGUI is an alternative to UI Toolkit for the following:
• Unrestricted access to Editor extensibility capabilities
• A lightweight API for quickly rendering UI on screen
Use cases
The following table lists the recommended system for major Editor use cases:

Use case | Recommendation
Complex Editor tools | UI Toolkit
Property drawers | UI Toolkit
Collaboration with designers | UI Toolkit
In detail
The following table lists Editor feature support in UI Toolkit and IMGUI:

Feature | UI Toolkit | IMGUI
WYSIWYG authoring | Yes | No
Nesting reusable components | Yes | No
Global style management | Yes | Yes
Layout and styling debugger | Yes | No
Rich text tags | Yes | Yes
Scalable text | Yes | No
Font fallbacks | Yes | Yes
Adaptive layout | Yes | Yes
Default Inspectors | Yes | Yes
Inspector: Edit custom object types | Yes | Yes
Inspector: Edit custom property types | Yes | Yes
Inspector: Mixed values (multi-editing) support | Yes | Yes
Array and list-view control | Yes | Yes
Data binding: Serialized properties | Yes | Yes
UI hierarchy
Both uGUI and UI Toolkit build and maintain the UI inside a hierarchy tree structure. In uGUI,
all elements in this hierarchy are visible as individual GameObjects in the hierarchy view
panel. In UI Toolkit, visual elements organize into a visual tree. The visual tree isn’t visible in
the panel.
To view and debug the UI hierarchy in UI Toolkit, you can use the UI Debugger. You can find
UI Debugger in the Editor toolbar, under Window > UI Toolkit > Debugger.
Key differences
Canvas versus UIDocument
The Canvas component in uGUI is similar to the UIDocument component in UI Toolkit. Both
are MonoBehaviours that attach to GameObjects.
In uGUI, a Canvas component sits at the root of the UI tree. It works with the Canvas
Scaler component to determine the sort order, rendering, and scaling mode of the UI underneath.
In UI Toolkit, the UIDocument component contains a reference to a PanelSettings object.
The PanelSettings contains the rendering settings for the UI, including the scale mode and the
sort order. Multiple UIDocument components can point to the same PanelSettings object, which optimizes performance when using multiple UI screens in the same scene.
In uGUI, the UI tree hierarchy sits underneath the GameObject holding the Canvas component.
In UI Toolkit, the UIDocument component holds a reference to the root element of the Visual
Tree.
The UIDocument component also contains a reference to the UXML file that defines the UI layout from which the Visual Tree is built at runtime. See the Creating UI section for more information.
Note: For Editor UI, no UIDocument component is needed. You can derive your custom class
from EditorWindow, then implement CreateGUI(). For a practical example, see the guide
on Creating custom Editor windows.
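A minimal sketch of such an Editor window follows; the window name, menu path, and label text are illustrative, and the script must live in an Editor folder:

// Sketch: a custom Editor window built with UI Toolkit (no UIDocument needed).
using UnityEditor;
using UnityEngine;
using UnityEngine.UIElements;

public class MyToolWindow : EditorWindow
{
    [MenuItem("Window/My Tool")]
    public static void ShowWindow()
    {
        GetWindow<MyToolWindow>("My Tool");
    }

    // Unity calls CreateGUI when the window needs to build its UI Toolkit content.
    public void CreateGUI()
    {
        rootVisualElement.Add(new Label("Hello from UI Toolkit"));
        rootVisualElement.Add(new Button(() => Debug.Log("Clicked")) { text = "Click me" });
    }
}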
GameObject components vs visual elements
UI Toolkit refers to UI elements as controls or visual elements. Examples of UI elements are:
• Controls
• Buttons
• Text labels
uGUI builds the UI hierarchy from GameObjects. Adding new UI elements requires adding new
GameObjects to the hierarchy. The individual controls are implemented
as MonoBehaviour components.
In UI Toolkit, the visual tree is virtual and doesn’t use GameObjects. You can no longer build or
view the UI hierarchy in the hierarchy view, but it removes the overhead of using a GameObject
for each UI element.
In uGUI, UI elements derive (directly or indirectly) from the UIBehaviour base class. Similarly, in UI Toolkit all UI elements derive from a base class called VisualElement. The key difference is that the VisualElement class doesn't derive from MonoBehaviour, so you cannot attach visual elements to GameObjects.
Working with UI Toolkit controls in script is similar to working with uGUI controls. The
following table shows common script interactions with UI controls in uGUI, and their UI Toolkit
counterparts.
Write text into a label:
  uGUI:        m_Label.text = "My Text";
  UI Toolkit:  m_Label.text = "My Text";

Read the state of a toggle:
  uGUI:        bool isToggleChecked = m_Toggle.isOn;
  UI Toolkit:  bool isToggleChecked = m_Toggle.value;

Assign a callback to a button:
  uGUI:        m_Button.onClick.AddListener(MyCallbackFunc);
  UI Toolkit:  m_Button.clicked += MyCallbackFunc_1;
               or m_Button.RegisterCallback<ClickEvent>(MyCallbackFunc_2);
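To put the uGUI column in context, here is a minimal sketch of a MonoBehaviour working with uGUI controls referenced in the Inspector (the field names and text are illustrative):

// Sketch: uGUI controls are components, referenced directly from script.
using UnityEngine;
using UnityEngine.UI;

public class HudController : MonoBehaviour
{
    [SerializeField] private Text scoreLabel;   // assigned in the Inspector
    [SerializeField] private Toggle muteToggle;
    [SerializeField] private Button playButton;

    private void Awake()
    {
        scoreLabel.text = "Score: 0";
        Debug.Log($"Muted: {muteToggle.isOn}");
        playButton.onClick.AddListener(OnPlayClicked);
    }

    private void OnPlayClicked()
    {
        Debug.Log("Play pressed");
    }
}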
Access UI elements
In uGUI, there are two ways scripts can access UI elements:
• Assigning a reference to the UI component in the Editor.
• Finding the component in the hierarchy using helper functions such as GetComponentInChildren<T>().
Since there are no GameObjects or components in UI Toolkit, you can't directly assign references to a control in the Editor. Instead, you access the visual tree via the UIDocument component and resolve controls at runtime using a query function.
UIDocument is a MonoBehaviour, so you can assign it as a reference and make it part of a Prefab. The UIDocument component holds a reference to the root visual element. From the root, scripts can find child elements by type or by name, similar to uGUI.
The table below shows a direct comparison of accessing UI controls in Unity UI and UI Toolkit:

Find a UI element by name:
  uGUI:        transform.Find("childName");
  UI Toolkit:  rootVisualElement.Query("childName");

Find a UI element by type:
  uGUI:        transform.GetComponentInChildren<Button>();
  UI Toolkit:  rootVisualElement.Query<Button>();

Direct assignment of a reference in the Editor:
  uGUI:        Possible
  UI Toolkit:  Not possible
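A minimal sketch of this query-based approach, assuming a UXML layout that contains a Button named "play-button" and a Label named "status-label" (both names are illustrative):

// Sketch: resolving UI Toolkit controls at runtime from a UIDocument reference.
using UnityEngine;
using UnityEngine.UIElements;

public class MainMenuController : MonoBehaviour
{
    [SerializeField] private UIDocument document; // assigned in the Inspector

    private void OnEnable()
    {
        VisualElement root = document.rootVisualElement;

        // Resolve controls by name; the names must match those used in the UXML.
        Button playButton = root.Q<Button>("play-button");
        Label statusLabel = root.Q<Label>("status-label");

        playButton.clicked += () => statusLabel.text = "Loading...";
    }
}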
Create UI
One of the biggest differences between uGUI and UI Toolkit is the creation of user interfaces.
Both uGUI and UI Toolkit allow you to visually build the UI and preview it in the Editor. In
uGUI, the UI is then saved inside a Prefab, along with any logic scripts attached to individual UI
controls.
In UI Toolkit, the UI layout is created in UI Builder, then saved as one or more UXML files. At runtime, UIDocument components load the UXML files and assemble the visual tree in memory.
For a method similar to uGUI, you can create UI controls directly from a script, then add them to a visual tree at runtime, as sketched below.
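A short sketch of that script-driven approach, assuming the same GameObject also carries a UIDocument component (the panel name, label text, and button text are illustrative):

// Sketch: building controls purely from script, without authoring UXML.
using UnityEngine;
using UnityEngine.UIElements;

public class ScorePanelBuilder : MonoBehaviour
{
    private void OnEnable()
    {
        VisualElement root = GetComponent<UIDocument>().rootVisualElement;

        // Create elements in code and attach them to the visual tree.
        var panel = new VisualElement { name = "score-panel" };
        panel.Add(new Label("Score: 0"));
        panel.Add(new Button(() => Debug.Log("Reset pressed")) { text = "Reset" });

        root.Add(panel);
    }
}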
Prefabs
uGUI uses GameObjects for individual UI controls, and Prefabs that contain both visuals and logic for reuse. UI Toolkit takes a different approach to reusability: it separates logic and layout. You can create reusable UI components through UXML files and custom controls.
To create a similar template to a Prefab in UI Toolkit:
1. Create a UXML file for the partial UI element.
2. Create a GameObject with a UIDocument component.
3. Reference the UXML file in the GameObject.
4. Add a script to handle the UI component logic to the same GameObject.
5. Save the GameObject as a Prefab.
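A related approach, shown here only as a hedged sketch, is to reference the reusable UXML asset directly and instantiate it into an existing document from script; the class and field names below are illustrative:

// Sketch: instantiating a reusable UXML "template", roughly analogous to a prefab.
using UnityEngine;
using UnityEngine.UIElements;

public class HealthBarSpawner : MonoBehaviour
{
    [SerializeField] private UIDocument document;            // the screen's UIDocument
    [SerializeField] private VisualTreeAsset healthBarUxml;  // the reusable UXML asset

    private void OnEnable()
    {
        // Instantiate() builds a fresh element hierarchy from the UXML each time it is called.
        VisualElement healthBar = healthBarUxml.Instantiate();
        document.rootVisualElement.Add(healthBar);
    }
}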
UI layout
Arranging individual UI elements on screen in uGUI is a manual process. By default, UI controls
are free floating and are only affected by their direct parent. Other UI controls under the same parent don't affect their siblings' positions or sizes. Pivots and anchors control the position and size of an element.
The UI Toolkit layout system is influenced by web design, and based on automatic layout
generation. The automatic layout system affects all elements by default, and an element’s size
and position will affect other elements under the same parent.
The default behavior in UI Toolkit is comparable to placing all elements inside
a VerticalLayoutGroup in uGUI, and adding a LayoutElement component to each.
You can disable automatic layout generation by changing the IStyle position property of the
visual element. All visual elements have this property. See Visual Tree for a code sample.
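As a brief sketch, an element can be opted out of the automatic layout by setting its position style to absolute (the element name and offsets are illustrative):

// Sketch: switching one element to absolute positioning via its IStyle properties.
using UnityEngine;
using UnityEngine.UIElements;

public class PopupPlacer : MonoBehaviour
{
    private void OnEnable()
    {
        VisualElement root = GetComponent<UIDocument>().rootVisualElement;
        VisualElement popup = root.Q<VisualElement>("popup");

        // Absolute positioning removes the element from the automatic layout flow,
        // so it no longer affects, or is affected by, its siblings.
        popup.style.position = Position.Absolute;
        popup.style.left = 20;
        popup.style.top = 40;
    }
}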
UI Toolkit has no direct equivalents for anchoring and pivots of UI elements, due to the
fundamental layout differences compared to uGUI.
The size and position of an element is controlled by the layout engine. For more information,
see Layout Engine and Coordinate and position systems.
Rendering order
In uGUI, the order of the GameObjects in the hierarchy determines the rendering order. Objects
further down in the hierarchy render last and appear on top. In a scene with multiple Canvases,
the Sort Order on the root Canvas component determines the render order of the individual UI
trees.
The render order in a visual tree in UI Toolkit operates the same way. Parent elements render
before their children, and children render from the first to the last, so that the last appears on top.
In a scene with multiple UI Documents, the render order is determined by the Sort Order setting
on the root UIDocument component.
To change the rendering order of an element in uGUI, such as making an element appear on top,
you can call the sibling functions on the Transform component of the GameObject.
The VisualElement class offers comparable functions to control the rendering order. As all UI Toolkit controls derive from this class, all controls have access to these functions.
The table below shows the uGUI functions to control render order and the equivalent functions
in UI Toolkit:
Make an element render underneath all other siblings:
  uGUI:        transform.SetAsFirstSibling();
  UI Toolkit:  myVisualElement.SendToBack();

Make an element render on top of all other siblings:
  uGUI:        transform.SetAsLastSibling();
  UI Toolkit:  myVisualElement.BringToFront();

Manually control the element's render order relative to its siblings:
  uGUI:        transform.SetSiblingIndex(newIndex);
  UI Toolkit:  myVisualElement.PlaceBehind(sibling);
               myVisualElement.PlaceInFront(sibling);
Events
Just like in uGUI, user interactions in UI Toolkit trigger events. The code can subscribe to
receive a callback on events, such as pressing a button or moving a slider.
In uGUI, all UI elements are based on MonoBehaviour and can expose their events in the Editor. This allows you to set up logic with other GameObjects, for example to hide or show other UI elements, or to assign callback functions.
[Figure: the OnClick event of a uGUI Button exposed in the Inspector]
In UI Toolkit, logic and UI layout are stored separately. Callbacks can no longer be set up
directly on GameObjects or stored in Prefabs. You must set up all callbacks at runtime, and
handle them via scripting.
Button playButton = new Button { text = "Play" };
playButton.RegisterCallback<ClickEvent>(OnPlayButtonPressed);
...
private void OnPlayButtonPressed(ClickEvent evt)
{
    // Handle button press
}
The event dispatching system in UI Toolkit differs from events in uGUI. Depending on the event
type, events aren’t just sent to the target UI control, but also to all the parent controls.
  • 3. characteristic that enables developers and designers to purchase assets according to their upgraded requirements and design needs. • Multi-Platform Developers’ Game Engine - Unity 3D is a platform that enables developers to build multiple projects for multiple gaming devices including Console (PS4 or Xbox), PC, AR/VR experiences, and mobile devices. It is currently the only suitable integration game engine for gamers across all major platforms. Developers can both build and port their games from one gaming platform to another with great ease. The processes of rendering and transferring assets also function smoothly in Unity 3D. Particularly graphics rendering is one of the most exciting features in Unity. This is because other platforms have their personalized system settings but in Unity, rendering from other engines like Direct 3D, OpenGL, Adobe, etc. are the simplest. • Rich Variety of Online Tutorials - Gamers feel that one of the most helpful characteristics of Unity 3D is the repository of online tutorials and learning materials. This bulk of rich learning resources in Unity boasts well-documented processes, detailed steps, multilingual texts, and video forums that enable a comprehensive understanding of gaming concepts. Among other varieties of resources, video tutorials are the most preferred as gamers can pause, apply, rewind, and forward the video as per their learning pace, giving them full flexibility to learn the different tools and processes. These tutorials on Unity 3D have allowed amateur game developers to access lessons relating to various aspects of gaming such that no novice will be afraid to start from scratch to master the development skills. • Multiple Coding Languages- Unity 3D is the most highly rated game engine owing to its ability to let developers use multiple coding languages based on their familiarity. The platform allows the hands-on experience of C#, BOO, and JavaScript (referred to as Unity Script), which is usually at the fingertips of any game developer. Unique Features of Unreal Engine • Pipeline Integration • Platform tools • Animation • World Building • Simulations & Special Effects • Gameplay & Interactivity Authoring • Rendering & Lightening • Integrated Media Support
  • 4. • Developer tools Major Differences Between Unity and Unreal Engine Here is the list of key differences between Unity vs Unreal Engine: Aspects Unity Unreal Definition While Unity is a cross-platform game engine, Unreal Engine is developed as a source available game engine. Invention Unity was launched in 2005, whereas Unreal Engine is older with its debut in 1998. Language While Unreal Engine uses C++ Unity uses C# making it faster as per the demand of current game developers. Assets Unity asset store has a rich collection of 31000 textures, props, and mods in comparison to. Unreal Engine at 10000 Graphic Both Unity and Unreal Engine boast superior graphics but here Unreal Engine is more preferred than Unity Source code While Unreal Engine has an open- source code easing the development process, Unity does not provide an open source. Rather, one has to buy its code. Rendering Unity’s rendering process is slower than Unreal Engine. Unity’s rendering process is slower than Unreal Engine. Pricing Unreal Engine is currently free of cost but you must own royalties to them. On the other hand, Unity’s free version can be made complete by paying a one-time upgrade fee of $1500 or
  • 5. $75/month. Popularity Both Unity and Unreal Engine game engines have huge popularity among their community of active users but of late, Unity 3D is accessible in Unreal Engine 4, thus accounting for a larger client base. After the above deliberation, it is safest to say that game development would be extremely difficult without both Unity and Unreal Engine. Unity and Unreal Engine have their own specific sets of advantages and disadvantages that cater to their relative audiences who make choices based on their requirements. While Unity is renowned for its huge clientele, performance development, and 2D/3D simulations, Unreal Engine is more preferred when gamers are building bigger games with superior quality graphics, leading to unreal engine games being among the most popular ones among gamers. Major Similarities Between Unity and Unreal Engine Here is the list of similarities between Unity vs Unreal Engine: Aspect Unity and Unreal Engine Platforms Both support multiple platforms, including Windows, macOS, Linux, iOS, Android, consoles (PlayStation, Xbox), and virtual reality (VR) platforms. Scripting Languages Both use scripting languages to create game logic. Unity primarily uses C# (also supports JavaScript and Boo), while Unreal Engine uses Blueprint visual scripting (also supports C++ programming). Visual Editors Both provide visual editors for designing game levels, UI, and animations. Unity has a visual editor with a node-based system for creating scenes, while Unreal Engine offers a similar visual editor called Unreal Editor. Asset Pipelines Both have asset pipelines that facilitate importing and managing 3D models, textures, audio, and other game assets.
  • 6. Aspect Unity and Unreal Engine Community and Resources Both have large and active communities, with extensive documentation, tutorials, forums, and online resources available. Real-time Rendering Both engines offer powerful real-time rendering capabilities, including dynamic lighting, shading, and post-processing effects. AAA-quality graphics Unity and Unreal Engine are tailor-made for producing AAA-quality graphics industry- standard software Both Unity and Unreal Engine support smooth bridges between several industry-standard software. Toolbox Both the game engines offer an extensive toolbox that includes a terrain editor, physics simulation, animation, VR Support, etc Multiplayer Networking Both have networking features and frameworks to support multiplayer games and online connectivity. Cross-platform Deployment Both support cross-platform deployment, allowing developers to build and deploy games to multiple platforms from a single codebase. Third-Party Integration Both engines support integration with third-party tools, plugins, and assets, expanding the functionality and possibilities for game development. 2.3 Functions of Game Engines Graphics Rendering Rendering is the final process of creating an image or animation from a computer model. When rendering objects or characters in a scene it takes into account the geometric shape of the model, viewpoints, textures, shading and lighting. Which is then processed by the engine to a digital image or raster graphic, depending on how powerful your engine the faster the images will
  • 7. render. Rendering is an intensive process for your CPU and usually 50% of the CPU’s processing capabilities are taken up by it. Due to it being such a strenuous task there a methods that can be put in place in order to to utilise your processing capabilities more efficiently. Such as: Culling Methods In games there are often aspects of objects or NPC’s that aren’t visible to the eye thus making them useless rendering. In this circumstance you would use one of the ‘Culling’ methods to ensure that there is no form of waste rendering enabling you more processing power to dispose more efficiently. • Backface Culling – This is one of the most common forms of culling, it determines how many polygons are visible at the time that should be rendered. To do this the user has to specify which polygons are clockwise or counter-clockwise when projected on screen. Or in simpler terms anything on the backend of an object will not be rendered because it is not visible. For example in GTA you’ll see buildings from a front view, however the texture of the backend of the building won’t be rendered because it isn’t visible. • View Frustum – This method of culling acts as a virtual camera showing your field of view. Anything that is outside of your field of view will not be rendered and remained culled, if it is not visible there is no reason for it to be rendered. View Frustum is a method mainly used in an First person game and is determined by the width/perspective of the camera, which is a geometric representation of visible content on screen. • Detail Culling – When in the view frustum, using geometry this method of culling will determine how far an object is and if it should be rendered or not. Depending on how far the object is the engine will appropriately adjust the level of detail in accordance of your distance, which is a more advanced feature of culling. For example on the other side of the map is a bin, from first site it might look like a block but the closer you get the more detailed it’ll be rendered to look like a bin. By doing this it adds more realism to the game • Portal Culling – Portal culling is when the engine segregates a scene from the rest of the map by determining which cells are currently visible and which ones aren’t. Regions of the map are divided into zones that are represented as polygons and once you have entered this particular zone everything outside of it becomes obsolete. Most commonly this method is used for Indoor scenes, again another way of saving processing power by only rendering whats in your view frustum in that particular room rather than everything in and outside of it. However if there was a window or an open door you would be able to see a rendered image, but this is through an entirely different view frustum that specifically calculates your position and what you should be seeing based on it..
  • 8. • Occlusion Culling – This is one of the hardest methods of culling purely because of it’s complexity. What this method does it sorts out the objects that is visible and re-positions anything that obscures this object within the scene. This is all determined by the position of the camera which will vary at different angles. This could be done by the Z-Buffer but it is very time consuming for bigger scenes, thus making the Occlusion culling the better option. RayTracing This is a very advanced and complicated technique for making rendered images look more realistic, by adding colour intensity, shading and shadows that would be formed by having one of more light source. How it does is by tracing rays of light that would be reflected or absorbed by an object. For example if a player was crossing a road and a bus crossed drove past him, the path of light would be shining on the bus thus creating a shadow on the player. Fogging This is method used to shroud the players vision from witnessing any rendering processes taken place during gameplay. They do this by employing a fog gradient around your view frustum to progressively obscure your visions, until the level of detail increases. An example of flogging is in GTA when your looking down a straight road at a building but your visual sight is limited due to flogging. However the further down the road you go the LOD of the object will greaten.
  • 9. Shadowing Shadowing also known as projective showing or shadow mapping, is the course of action an engine takes when rendering shadows to applied objects. These shadows are done in real-time, depending on if there is enough light within a pixel for it to be projected. In comparison to ‘Shadow Volumes’ (another shadowing technique) Shadow Projective/Mapping is less accurate. However Shadow Volumes use a ‘Stencil Buffer’ (temporary storage of data while it is being moved one place to another) which is very time consuming, whereas shadow map doesn’t making it a faster alternative. Anti-Aliasing Anti-Aliasing detects rough polygon edges on models and smooths them out using a quick scan method. The more edges a model has the easier the model can be scanned at once. This method is mainly used on the PS3 rather than the Xbox because it’s a more powerful console and has better processing power. However PC gamer’s may choose to turn of this rendering method
  • 10. because it often slows down the initial performance of your game. Animation System Animation is a technique used to render movement in computer generated models. These can either be pre-rendered or rendered in real time using two main methods inverse kinematics or forward kinematics. Ragdoll (Real Time) – In earlier games developers had to create separate animations for death sequences. Now due to the advance technologies in games, physic simulations have become obtainable thus creating the ragdoll. Which is a skeletal replica that reacts appropriatly to it’s cause of death, giving various different outcomes to death scenarios making the game seem more realistic. For example in the ‘Force Unleashed series’ when your levitating an enemy using the force and your throw them or force push them the ragdoll would react accordingly to the speed and distance it was thrown. However if you was to do it again you couldn’t reenact the same scenario. Idle Cycle (Pre-Rendered) – This is a pre-rendered animation for still objects or characters. This gives the player more of a life like feel, giving the illusion that you are not just controlling him but it also has a mind of it’s own. Depending on the audience of the game the Idle animation may be more outrageous or subtle than others. For example in ‘Mario Galaxy’ which has a more younger audience, when Mario is idle he starts flicking coins. Whereas in Tekken which has a much older audience, it’ll have more natural Idle animations in reference to there fighting style. E.g. Eddie uses Capoeira fighting style so in his idle stance he’ll be sweeping his feet. Forward Kinematics – This is a function for movement in animation and also is ued a lot in robotics. This method of movement is used for real time animation in things such as Rag dolls,
  • 11. where movement is started from the joints and moves forward through the arm. These are based on a number of kinematic equations that determine the movement of this animation. Inverse Kinematics – In comparison to ‘Forward Kinematics’ this is the complete opposite. Inthe sense that movement starts from the outer joint and moves backwards through the body rather than forward. This method of animation is pre-rendered and is the most common form of animation in games. There are many other animation softwares that are available, but one that stood out the most to me was Team Bondies ‘Motion Scan’. Which is a new state of the art middle ware used by Rockstar to do the facial animations which were in L.A. Noire. How this is done is by sitting the actor in a rig surround by advanced cameras that pick up all subtle features as they act out there script. However the software is only a tool and is only as good as the actor who is performing there role. This animation is then processed and rendered into a 3D modelling software. Middleware The middleware acts as an extension of the engine to help provide services that are beyond the engines capabilities. It is not a part of the initial engine but can be hired/rented out for it’s usage which can be for various purposes. There is an engine for any feature in game engines. For example the physics on Skate 3 were terrible, if you were to drop of your board you would bounce or fly to unrealistic lengths. In that circumstance for there next game they may want to hire out ‘Havok’ which is a well established and respected physics engine to help fix this problem. There are also other middle ware engines that you can hire out to assist your games in many ways like Demonware, they are a networking engine whos sole purpose is to improve your online features. Artificial Intelligence Artificial Intelligence is creating the illusion of an NPC having realistic reactions and thoughts to add to the experience of the game. A common use of is ‘Pathfinding’ which determines the strategic movement of your NPC. In the engine you give each NPC a route to take and different options to act if that specific route is not accessible, this also takes other things into account like your level of health & the current objective at the time. These paths will be represented as a series of connected points. Another similar type of AI usage is ‘Navigation’ which is a series of connected polygons. Similar to Pathfinding it’ll follow these connected polygons only moving within the space, however they are not limited to one route. Thus having the space and intelligence to know what objects or other NPC’s to avoid it can take different routes depending on the circumstance. A fairly new method of AI is ‘Emergent’ which allows the NPC to learn from the player and develop reactions or responses to these actions taken place. Even though these responses or reactions may be limited it does does often give of the impression that you are interacted with a human-like character.
  • 12. Other Systems Graphics Rendering – Rendering is the final process of creating an image or animation from a computer model. When rendering objects or characters in a scene it takes into account the geometric shape of the model, viewpoints, textures, shading and lighting. Which is then processed by the engine to a digital image or raster graphic, depending on how powerful your engine the faster the images will render. There are two types of rendering one being ‘Real-Time’ which is an on going rendering process that is calculated to show 20 to 120 frames per second. It renders graphics that are visually noticeable by the eye in a fraction of a second, an exmaple of a game that uses this is GTA wherever you move the engines renders with you and anything beyond your sight would still be in it’s wireframe state. The other method of rendering is ‘Non Real-Time’ in comparison to ‘Real-Time’ this method doesn’t render everything in your visual site, it renders specific things in order to save processing power thus giving you a better quality image. This method is mainly used for the cutscenes or cinematics, games will usually use both methods of rendering appropriately for there uses. Collision Detection – This is the response taken when two or more objects collide with each other. Every game uses features of Collision Detection, however the level of importance it has in a game will vary. For example one method of intersect detection is ‘Bounding Sphere’, which is a sphere defined by a point, centre and radius that is placed around a character or object. Anything that penetrates this sphere is a detection of intersection, then usually an appropriate response is assigned to take. This is one of the simplest levels method of detection and is most ideal when accuracy isn’t a huge factor. A second solution for Collision Detection is ‘Bounding Box’ which is a rectangular box surrounding your character or object. The Bounding Box has three values Width, Height and the Vector location and anything that intercepts this invisible square boundary is a sign of collision. Often a favourite for developers as it is mainly used for small objects in the mise-en-scene. Physics – Physics are used to give the game some form of realism to the player. Depending on the game some would need more accurate physics that the other, for example a fighter jet simulation game would use more accurate physics in comparison to ‘Tom Clancy’s – H.A.W.X.’. If your in need of more accurate physics you might want to think about renting some middle ware. Havok is a specialised physics engine that may give you better results than making the physics yourself in the engine. As of the late 2000′s games are made to look more cinematic and the use of a good physics engine is detrimental for the realism of game. Sound – Sound in game is detrimental because it’s a notifiable response that can occur from interactions in the game. Another purpose of sound is that it can add more realism to the game by having ambient sounds that make your environment more believable to be apart of. For example if the scene setting is in an army camp you’ll be able to hear marching, guns reloading or being
  • 13. shot, chants, groaning of injured soldiers etc. Or you could include soundtracks that bring out different emotions in the player, for example in Dead Space they use music to shift your emotions from calm to scared in a matter of seconds. Usually games are made and edited outside of the engine, however some engines do include there own auido technology. 2.4 Basic Components of Real time - Game Engine Game engines often include core functionality such as • Graphics Engine (Rendering): This renders the game's visuals, such as 2D or 3D graphics, textures, and animations. Everything you see in the game is a result of the rendering engine. • Physics Engine: This simulates the game's physical world. It manages collisions, gravity, and other real-world physics. Or unrealistic physics, but either way, you need something doing the physics math in-game. • Audio Engine: This part handles the game's sound design, including music, sound effects, and voiceovers. Games like Returnal have complex 3D audio engines that take immersion to the next level. • Artificial Intelligence: The AI system controls non-player characters (NPCs) and other game world elements. It animates these characters by dictating their behavior, decision- making, and interactions with the player. This is perhaps the most exciting area for game development in coming years as AI technology explodes in complexity. • Input Management: This component oversees user input, such as keyboard, mouse, and controller actions. It translates these inputs into in-game actions. This might sound boring, but input management is crucial. Any game that controls poorly is no fun to play no matter how good it looks and sounds. They also allow games to be deployed on multiple platforms – game engines are capable of platform abstraction. Both Unity and Unreal Engine support easy deployment of game-ready assets from their respective marketplaces, the Unity Asset Store and the Unreal Marketplace. 2.4.1Graphics Engine / Rendering Using Game Engines — Basics 3D Rendering is the method with which three-dimensional data is converted into an image. The time taken for this process can vary from seconds to even days based on the amount of data the computer has to process, and this time-frame can even be for a single frame. There are two types of rendering technologies used in the industry these days- real-time and offline rendering.
  • 14. 1. Offline Render - When we render a scene using an off-line renderer each time there is a small change the system has to re-compile all the changes to produce how the scene will look with them implemented (Popular off-line renderers are Arnold (used in Maya), V- ray, Octane, etc). This change can even be something trivial; for example — changing the color of a light, changing the camera angle, moving a character etc. When it comes to end-to-end production of content changes are very frequent and from these examples, we can see that adopting an off-line render pipeline can add a lot more time to the production cycle. It also gives less room for content creators to iterate sequences and makes the entire process of re-working scenes time-consuming and expensive. 2. Real-time rendering - With Real-Time Rendering traditional rendering is replaced with a render engine that can render in real time. This means with real-time rendering, you can have a glance at the end result almost instantly. The advancements in this field have only been possible due to the push that game engines have advocated in recent times (most notable Unreal Engine).This resulting technology has grown a lot that now it not only meets the demands of architectural visualization but also film and television. Production companies use technology in real time to accelerate the creative process and even display the final pixels. Case Studies – Working of Game Engines In the scenario ,we explain how we were able to provide a base frame for how a scene would look and render out a sequence that can ideally be used in the final stages of production (art is meant only for demonstrative purposes). • Step 1: Block out scene and provide a base for content • Step 2: Refine scene for any animation and optimize accordingly. • Step 3: Add final polish to the scene to accurately describe pre-visualization and production experience. • The scene blockout lays down the fundamental building blocks and how the scene resonates visually and emphatically. This is the crucial stage to nail down as this represents the base for the entire visualization. Using Unreal Engine we were able to quickly render out a scene and make changes instantly. • With each change we had to do the render time would increase in software’s like Maya as they are not real-time. Basically to achieve the same results an off-line renderer took us a lot more time to render scenes out. • Once the light is build in a real-time engine any change in camera (or object movement) is reflected instantly. This helps scene builders quickly make additional changes to the scene if needed.
  • 15. Problem- As content creation / visualization grows to be an ever-evolving artistic and technological pursuit, traditional pipelines are being overturned due to the timeline involved to pre-visualize and render out sequences. 1. Traditional visualization pipeline is a linear approach- which encompasses development, pre-production, production, and post. 2. Itereation is challenging and costly. 3. Traditional visualization is created using animation softwares/external renderers at the cost of long render times. Solution - Using real-time rendering technologies artists now have the freedom to create content exactly the way they want it iteratively, collaboratively while avoiding re-shoots. 1. Pipeline is non-linear and encourages a more iterative and collaborative process. 2. Real-time rendering solutions can produce high quality assets with real-world physics that can be used even in the final product. 3. Decreased render times means professionals don’t have to wait to see their output which would be close to the final result. 4. Combing all process into a single suite enabled us to iteriate faster avoiding the hassles of a traditional pipeline. By this we were able to bring our artists and developers under one roof and make checks systematically and progressively. 5. As content creation grows to be an ever-evolving artistic and technological pursuit, traditional pipelines are being overturned due to the timeline involved to pre-visualize and render out sequences. With real-time rendering, ‘what you see is what you get’. Using this technology artists now have the freedom to create content exactly the way they want iteratively, collaboratively while avoiding re-shoots, and without compromising creativity. Every hour of pre-production is worth two hours of production, and Game engines can further be used to create interactive spaces where creators can visualize their intended locations and sets, set camera angles and frame lighting references- essentially blocking out and creating the movie even before production to establish tone and aesthetic so that the end result is exactly the way they want it. Unity’s Graphics engine Unity’s graphics features let you control the appearance of your application and are highly customizable. You can use Unity’s graphics features to create beautiful, optimized graphics across a range of platforms, from mobile to high-end consoles and desktop. The following are the graphics features: • Render pipelines • Cameras • Post-processing • Lighting section
  • 16. • Meshes, Materials, Textures, and Shaders • Particle Systems • Creating environments • Sky • Visual Effects • Optimizing Graphics Performance • Color spaces Rendering Pipeline - A render pipeline performs a series of operations that take the contents of a scene, and displays them on a screen. In Unity, you can choose between different render pipelines. Unity provides three prebuilt render pipelines with different capabilities and performance characteristics, or you can create your own. A render pipeline takes the objects in a scene and displays them on-screen. How a render pipeline works A render pipeline follows these steps: 1. Culling, where the pipeline decides which objects from the scene to display. This usually means it removes objects that are outside the camera view (frustum culling) or hidden behind other objects (occlusion culling). 2. Rendering, where the pipeline draws the objects with their correct lighting into pixel buffers. 3. Post-processing - where the pipeline modifies the pixel buffers to generate the final output frame for the display. Example of modifications include color grading, bloom,
  • 17. and depth of field. A render pipeline repeats these steps each time Unity generates a new frame. Cameras - A Unity scene represents GameObjects in a three-dimensional space. Since the viewer’s screen is two-dimensional, Unity needs to capture a view and “flatten” it for display. It does this using cameras. In Unity, you create a camera by adding a Camera component to a GameObject. A camera in the real world, or indeed a human eye, sees the world in a way that makes objects look smaller the farther they are from the point of view. This well- known perspective effect is widely used in art and computer graphics and is important for creating a realistic scene. Naturally, Unity supports perspective cameras, but for some purposes, you want to render the view without this effect. For example, you might want to create a map or information display that is not supposed to appear exactly like a real-world object. A camera that does not diminish the size of objects with distance is referred to as orthographic and Unity cameras also have an option for this. The perspective and orthographic modes of viewing a scene are known as camera projections. Post Processing - Unity provides a number of post-processing effects and full-screen effects that can greatly improve the appearance of your application with little set-up time. You can use these effects to simulate physical camera and film properties, or to create stylised visuals. Lighting section - With Unity, you can achieve realistic lighting that is suitable for a range of art styles. Lighting in Unity works by approximating how light behaves in the real world. Unity uses detailed models of how light works for a more realistic result, or simplified models for a more stylized result. • Direct and indirect lighting - Direct light is light that is emitted, hits a surface once, and is then reflected directly into a sensor (for example, the eye’s retina or a camera). Indirect light is all other light that is ultimately reflected into a sensor, including light that hits surfaces several times, and sky light. To achieve realistic lighting results, you need to simulate both direct and indirect light. Unity can calculate direct lighting, indirect lighting, or both direct and indirect lighting. The lighting techniques that Unity uses depends on how you configure your Project. • Real-time and baked lighting - Real-time lighting is when Unity calculates lighting at runtime. Baked lighting is when Unity performs lighting calculations in advance and saves the results as lighting data, which is then applied at runtime. In Unity, your Project can use real-time lighting, baked lighting, or a mix of the two (called mixed lighting). • Global illumination- Global illumination is a group of techniques that model both direct and indirect lighting to provide realistic lighting results. Unity has two global illumination systems, which combine direct and indirect lighting. • The Baked Global Illumination system consists of lightmaps , Light Probes , and Reflection Probes. You can bake with the Progressive Lightmapper (CPU or GPU) or Enlighten Baked Global Illumination. However, Enlighten Baked Global Illumination is deprecated and no longer visible in the user interface by default. See Lightmapping using Enlighten Baked Global Illumination for more information.
  • 18. Meshes/ Materials/Textures and shades - A mesh is a collection of data that describes a shape. In Unity, you use meshes in the following ways: • In graphics, you use meshes together with materials, meshes describe the shape of an object that the GPU renders, and materials describe the appearance of its surface. • In physics, you can use a mesh to determine the shape of a collider. a. Deformable meshes In addition to regular meshes, Unity also supports deformable meshes. Deformable meshes fall into the following categories: o Skinned meshes: These meshes work with additional data called bones. Bones form a structure called a skeleton (also called a rig, or joint hierarchy), and the skinned mesh contains data that allows it to deform in a realistic way when the skeleton moves. You usually use skinned meshes for skeletal animation with Unity’s Animation features, but you can also use them with Rigidbody components to create “ragdoll” effects. o Meshes with blend shapes: These meshes contain data called blend shapes. Blend shapes describe versions of the mesh that are deformed into different shapes, which Unity interpolates between. You use blend shapes for morph target animation, which is a common technique for facial animation. o Meshes that work with a Cloth component component for realistic fabric simulation. • Materials - To draw something in Unity, you must provide information that describes its shape, and information that describes the appearance of its surface. You use meshes to describe shapes, and materials to describe the appearance of surfaces. • Materials and shaders are closely linked; you always use materials with shaders. Material fundamentals - A material contains a reference to a Shader object.If that Shader object defines material properties, then the material can also contain data such as colors or references to textures. • A material asset is a file with the .mat extension. It represents a material in your Unity project. For information on viewing and editing a material asset using the Inspector window, see Material Inspector reference. • Creating a material asset, and assigning a shader to it To create a new material asset in your project, from the main menu or the Project View context menu, select Assets > Create > Material. To assign a shader to the material asset, in the Inspector window use the Shader drop-down menu.
  • 19. • Assigning a material asset to a GameObject - To render a GameObject using a material: Add a component that inherits from Renderer. MeshRenderer is the most common and is suitable for most use cases, but SkinnedMeshRenderer, LineRenderer, or TrailRenderer might be more suitable if your GameObject has special requirements. Assign the material asset to the component’s Material property. • To render a particle system in the Built-in Particle System using a material: Add a Renderer Module to the Particle System. Assign the material asset to the Renderer Module’s Material property. Sky - A sky is a type of background that a Camera draws before it renders a frame. This type of background greatly benefits 3D games and applications because it provides a sense of depth and makes the environment seem much larger than it actually is. The sky itself can contain anything, such as clouds, mountains, buildings, and other unreachable objects, to create the illusion of distant three-dimensional surroundings. Unity can also use a sky to generate realistic ambient lighting in your Scene. Skyboxes - A skybox is a cube with a different texture on each face. When you use a skybox to render a sky, Unity essentially places your Scene inside the skybox cube. Unity renders the skybox first, so the sky always renders at the back. Similar to other sky implementations, you can use a skybox to do the following: • Render a skybox around your Scene. • Configure your lighting settings to create realistic ambient lighting based on the skybox. • Override the skybox that an individual Camera uses, using the skybox component. Visual effects – Some of the visual effect methods in Unity are Visual Effect Method Description Post-processing and full- screen effects How to set up and use post-processing and other full-screen effects in Unity. Particle systems How to choose between Unity’s different particle systems, and use them in your project. Decals and projectors How to create decal and projector effects. Lens flares and halos How to create lens flare
Visual effects
Some of the visual effect methods in Unity are:
• Post-processing and full-screen effects: how to set up and use post-processing and other full-screen effects in Unity.
• Particle systems: how to choose between Unity's different particle systems, and use them in your project.
• Decals and projectors: how to create decal and projector effects.
• Lens flares and halos: how to create lens flare and halo effects.
• Lines, trails, and billboards: how to render lines, trails, and billboards.
Optimizing Graphics Performance
Usually, the greatest contributor to CPU rendering time is the cost of sending rendering commands to the GPU. Rendering commands include draw calls (commands to draw geometry) and commands to change the settings on the GPU before drawing the geometry. If this is the case, consider these options:
• Reduce the number of objects that Unity renders.
o Consider reducing the overall number of objects in the scene: for example, use a skybox to create the effect of distant geometry.
o Perform more rigorous culling, so that Unity draws fewer objects. Consider using occlusion culling to prevent Unity from drawing objects that are hidden behind other objects, reducing the far clip plane of a Camera so that more distant objects fall outside its frustum, or, for a more fine-grained approach, putting objects into separate layers and setting up per-layer cull distances with Camera.layerCullDistances (see the sketch after this list).
• Reduce the number of times that Unity renders each object.
o Use light mapping to "bake" (pre-compute) lighting and shadows where appropriate. This increases build time, runtime memory usage and storage space, but can improve runtime performance.
o If your application uses Forward rendering, reduce the number of per-pixel real-time lights that affect objects. For more information, see Forward rendering path.
o Real-time shadows can be very resource-intensive, so use them sparingly and efficiently. For more information, see Shadow troubleshooting: Shadow performance.
o If your application uses Reflection Probes, ensure that you optimize their usage. For more information, see Reflection Probe performance.
• Reduce the amount of work that Unity must do to prepare and send rendering commands, usually by sending them to the GPU in more efficient "batches". There are a few different ways to achieve this: for more information, see Optimizing draw calls.
Many of these approaches also reduce the work required on the GPU; for example, reducing the overall number of objects that Unity renders in a frame results in a reduced workload for both the CPU and the GPU.
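A minimal sketch of per-layer cull distances (the layer index 8 and the 50-unit distance are hypothetical example values; an entry of 0 keeps the camera's far clip plane for that layer):

using UnityEngine;

public class LayerCullingExample : MonoBehaviour
{
    void Start()
    {
        Camera cam = Camera.main;

        // One entry per layer; 0 means "use the camera's far clip plane"
        float[] distances = new float[32];
        distances[8] = 50f;   // hypothetical: cull small props on layer 8 beyond 50 units

        cam.layerCullDistances = distances;
    }
}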
Reducing the GPU cost of rendering
There are three main reasons why the GPU might fail to complete its work in time to render a frame: the application can be limited by fill rate, by memory bandwidth, or by vertex processing.
If an application is limited by fill rate, the GPU is trying to draw more pixels per frame than it can handle. If this is the case, consider these options:
• Identify and reduce overdraw in your application. The most common contributors to overdraw are overlapping transparent elements, such as UI, particles and sprites. In the Unity Editor, use the Overdraw Draw mode to identify areas where this is a problem.
• Reduce the execution cost of fragment shaders.
• If you're using Unity's built-in shaders, pick ones from the Mobile or Unlit categories. They work on non-mobile platforms as well, but are simplified and approximated versions of the more complex shaders.
• Use Dynamic Resolution, a Unity feature that allows you to dynamically scale individual render targets (see the sketch at the end of this section).
If an application is limited by memory bandwidth, the GPU is trying to read and write more data to its dedicated memory than it can handle in a frame. This usually means that there are too many textures, or that the textures are too large. If this is the case, consider these options:
• Enable mip maps for textures whose distance from the camera varies at runtime (for example, most textures used in a 3D scene). This increases memory usage and storage space for these textures, but can improve runtime GPU performance.
• Use suitable compression formats to decrease the size of your textures in memory. This can result in faster load times, a smaller memory footprint, and improved GPU rendering performance. Compressed textures only use a fraction of the memory bandwidth needed for uncompressed textures.
If an application is limited by vertex processing, the GPU is trying to process more vertices than it can handle in a frame. If this is the case, consider these options:
• Reduce the execution cost of vertex shaders.
• Optimize your geometry: don't use any more triangles than necessary, and try to keep the number of UV mapping seams and hard edges (doubled-up vertices) as low as possible.
• Use the Level Of Detail system.
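For the Dynamic Resolution option mentioned above, the render-target scale can be adjusted from script. A minimal sketch, assuming the camera has Allow Dynamic Resolution enabled and the target platform supports dynamic resolution; the 0.75 scale factor and the frame-time threshold are arbitrary example values, and a real project would drive this from proper frame-timing data:

using UnityEngine;

public class DynamicResolutionExample : MonoBehaviour
{
    void Start()
    {
        // Dynamic resolution only affects cameras that opt in
        GetComponent<Camera>().allowDynamicResolution = true;
    }

    void Update()
    {
        // Crude example heuristic: drop render-target scale to 75% when frames run long
        if (Time.deltaTime > 1f / 30f)
        {
            ScalableBufferManager.ResizeBuffers(0.75f, 0.75f);
        }
    }
}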
Reducing the frequency of rendering
Sometimes, it might benefit your application to reduce the rendering frame rate. This doesn't reduce the CPU or GPU cost of rendering a single frame, but it reduces the frequency with which Unity does so, without affecting the frequency of other operations (such as script execution). You can reduce the rendering frame rate for parts of your application, or for the whole application. Reducing the rendering frame rate prevents unnecessary power usage, prolongs battery life, and prevents the device temperature from rising to a point where the CPU frequency may be throttled. This is particularly useful on handheld devices.
If profiling reveals that rendering consumes a significant proportion of the resources for your application, consider which parts of your application might benefit from this. Common use cases include menus or pause screens, turn-based games where the game is awaiting input, and applications where the content is mostly static, such as automotive UI. To prevent input lag, you can temporarily increase the rendering frame rate for the duration of the input so that it still feels responsive.
To adjust the rendering frame rate, use the On Demand Rendering API. The API works particularly well with the Adaptive Performance package.
Note: VR applications don't support On Demand Rendering. Not rendering every frame causes the visuals to be out of sync with head movement and might increase the risk of motion sickness.
2.4.2 Physics Engine
Unity helps you simulate physics in your Project to ensure that objects correctly accelerate and respond to collisions, gravity, and various other forces. Unity provides different physics engine implementations which you can use according to your Project needs: 3D, 2D, object-oriented, or data-oriented.
If your project is object-oriented, use the built-in physics engine that corresponds to your needs:
• Built-in 3D physics (Nvidia PhysX engine integration)
• Built-in 2D physics (Box2D engine integration)
If your project uses Unity's Data-Oriented Technology Stack (DOTS), you need to install a dedicated DOTS physics package. The available packages are:
• Unity Physics package: the DOTS physics engine you need to install by default to simulate physics in any data-oriented project.
• Havok Physics for Unity package: an implementation of the Havok physics engine for Unity, to use as an extension of the Unity Physics package. Note that this package is subject to a specific licensing scheme.
The Unity Physics package, part of Unity's Data-Oriented Technology Stack (DOTS), provides a deterministic rigid body dynamics system and spatial query system.
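With the built-in object-oriented 3D physics, simulation is driven by components such as Rigidbody and colliders rather than by a package API. A minimal sketch, assuming the GameObject also has a collider; the force value is an arbitrary example:

using UnityEngine;

[RequireComponent(typeof(Rigidbody))]
public class PhysicsForceExample : MonoBehaviour
{
    Rigidbody body;

    void Start()
    {
        // The Rigidbody makes this GameObject respond to gravity, forces and collisions
        body = GetComponent<Rigidbody>();
    }

    void FixedUpdate()
    {
        // Apply forces in FixedUpdate so they match the physics time step
        if (Input.GetKey(KeyCode.Space))
        {
            body.AddForce(Vector3.up * 5f, ForceMode.Acceleration);
        }
    }
}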
2.4.3 Audio Engine
In real life, objects emit sounds that listeners hear. The way a sound is perceived depends on many factors. A listener can tell roughly which direction a sound is coming from and may also get some sense of its distance from its loudness and quality. A fast-moving sound source (such as a falling bomb or a passing police car) changes in pitch as it moves as a result of the Doppler Effect. Surroundings also affect the way sound is reflected: a voice inside a cave has an echo, but the same voice in the open air doesn't.
To simulate the effects of position, Unity requires sounds to originate from Audio Sources attached to objects. The sounds emitted are then picked up by an Audio Listener attached to another object, most often the main camera. Unity can then simulate the effects of a source's distance and position from the listener object and play them to you accordingly. You can also use the relative speed of the source and listener objects to simulate the Doppler Effect for added realism.
Unity can't calculate echoes purely from scene geometry, but you can simulate them by adding Audio Filters to objects. For example, you could apply the Echo filter to a sound that is supposed to be coming from inside a cave. In situations where objects can move in and out of a place with a strong echo, you can add a Reverb Zone to the scene. For example, your game might involve cars driving through a tunnel. If you place a reverb zone inside the tunnel, the cars' engine sounds start to echo as they enter, and the echo quiets as the cars emerge from the other side.
With the Unity Audio Mixer, you can mix various audio sources, apply effects to them, and perform mastering.
Unity can import audio files in AIFF, WAV, MP3, and Ogg formats in the same way as other assets: drag the files into the Project panel. Importing an audio file creates an Audio Clip that you can then drag to an Audio Source or use from a script. The Audio Clip reference page has more details about the import options available for audio files. For music, Unity also supports tracker modules, which use short audio samples as "instruments" that you can arrange to play tunes. You can import tracker modules from .xm, .mod, .it, and .s3m files and use them the same way you use other audio clips.
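A minimal sketch of playing a positional sound from script (the engineClip field is a hypothetical Audio Clip assigned in the Inspector; an Audio Listener is assumed to be on the main camera):

using UnityEngine;

[RequireComponent(typeof(AudioSource))]
public class EngineSoundExample : MonoBehaviour
{
    public AudioClip engineClip;   // hypothetical clip assigned in the Inspector

    void Start()
    {
        AudioSource source = GetComponent<AudioSource>();
        source.clip = engineClip;
        source.loop = true;
        source.spatialBlend = 1f;   // 1 = fully 3D, so distance and position affect the sound
        source.dopplerLevel = 1f;   // enable Doppler pitch shift for a moving source
        source.Play();
    }
}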
2.4.4 Artificial Intelligence
AI in gaming can assist in game personalization by analyzing player data and behavior to enable the scripting of tailored experiences and content recommendations. This helps make the game more playable for each player.
2.4.5 Input Management
Input allows the user to control your application using a device, touch, or gestures. You can program in-app elements, such as the graphical user interface (GUI) or a user avatar, to respond to user input in different ways.
Unity supports input from many types of input devices, including:
• Keyboards and mice
• Joysticks
• Controllers
• Touch screens
• Movement-sensing capabilities of mobile devices, such as accelerometers or gyroscopes
• VR and AR controllers
Unity supports input through two separate systems:
• The Input Manager is part of the core Unity platform and is available by default.
• The Input System is a package that needs to be installed via the Package Manager before you can use it. It requires the .NET 4 runtime and doesn't work in projects that use the old .NET 3.5 runtime.
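The two systems are read differently in code. The following is a minimal sketch, assuming the Input System package is installed and Active Input Handling is set to "Both" in the Player settings; in practice a project usually enables only one of the two, so treat the two halves as alternatives rather than code to run together:

using UnityEngine;
using UnityEngine.InputSystem;   // only available when the Input System package is installed

public class InputExamples : MonoBehaviour
{
    void Update()
    {
        // Input Manager (built in): read axes and buttons configured in Project Settings
        float horizontal = Input.GetAxis("Horizontal");
        bool jump = Input.GetButtonDown("Jump");

        // Input System (package): read the current device directly
        bool jumpNew = Keyboard.current != null && Keyboard.current.spaceKey.wasPressedThisFrame;

        if (jump || jumpNew)
        {
            Debug.Log("Jump pressed, horizontal = " + horizontal);
        }
    }
}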
2.5 User interface (UI)
Unity provides three UI systems that you can use to create user interfaces (UI) for the Unity Editor and for applications made in the Unity Editor:
• UI Toolkit
• The Unity UI package (uGUI)
• IMGUI
The following sections provide an overview of each.
UI Toolkit
UI Toolkit is the newest UI system in Unity. It's designed to optimize performance across platforms, and is based on standard web technologies. You can use UI Toolkit to create extensions for the Unity Editor, and to create runtime UI for games and applications. UI Toolkit includes:
• A retained-mode UI system that contains the core features and functionality required to create user interfaces.
• UI asset types inspired by standard web formats such as HTML, XML, and CSS. Use them to structure and style UI.
• Tools and resources for learning to use UI Toolkit, and for creating and debugging your interfaces.
Unity intends for UI Toolkit to become the recommended UI system for new UI development projects, but it is still missing some features found in Unity UI (uGUI) and IMGUI.
The Unity UI (uGUI) package
The Unity User Interface (Unity UI) package (also called uGUI) is an older, GameObject-based UI system that you can use to develop runtime UI for games and applications. In Unity UI, you use components and the Game view to arrange, position, and style the user interface. It supports advanced rendering and text features. See the Unity UI package documentation for the manual and API reference.
IMGUI
Immediate Mode Graphical User Interface (IMGUI) is a code-driven UI toolkit that uses the OnGUI function, and scripts that implement it, to draw and manage user interfaces. You can use IMGUI to create custom Inspectors for script components, extensions for the Unity Editor, and in-game debugging displays. It is not recommended for building runtime UI.
Choosing a UI system for your project
uGUI and IMGUI are better in certain use cases and are required to support legacy projects. Your choice of UI system for a given project depends on the kind of UI you plan to develop, and the features you need support for.
Comparison of UI systems in Unity
UI Toolkit is intended to become the recommended UI system for new UI development projects. However, in the current release, UI Toolkit does not have some features that Unity UI (uGUI) and Immediate Mode GUI (IMGUI) support. uGUI and IMGUI are more appropriate for certain use cases, and are required to support legacy projects.
The following is a high-level feature comparison of UI Toolkit, uGUI, and IMGUI, and their respective approaches to UI design.
General consideration
The recommended and alternative systems for runtime and Editor UI (2022):
• Runtime: recommended Unity UI, alternative UI Toolkit
• Editor: recommended UI Toolkit, alternative IMGUI
Innovation and development
UI Toolkit is in active development and releases new features frequently. uGUI and IMGUI are established and production-proven UI systems that are updated infrequently. uGUI and IMGUI might be better choices if you need features that are not yet available in UI Toolkit, or if you need to support or reuse older UI content.
Runtime
uGUI is the recommended solution for the following:
• UI positioned and lit in a 3D world
• VFX with custom shaders and materials
• Easy referencing from MonoBehaviours
UI Toolkit is an alternative to uGUI if you create a screen-overlay UI that runs on a wide variety of screen resolutions. Consider UI Toolkit if you:
• Produce work with a significant amount of user interfaces
• Require familiar authoring workflows for artists and designers
• Seek textureless UI rendering capabilities
Use cases
The recommended system for major runtime use cases (2022):
• Multi-resolution menus and HUD in intensive UI projects: UI Toolkit
• World space UI and VR: Unity UI
• UI that requires customized shaders and materials: Unity UI
In details
The recommended system for detailed runtime features (2022), listed as UI Toolkit / Unity UI:
• WYSIWYG authoring: Yes / Yes
• Nesting reusable components: Yes / Yes
• Global style management: Yes / No
• Layout and Styling Debugger: Yes / Yes
• Scene integration: Yes / Yes
• Rich text tags: Yes / Yes*
• Scalable text: Yes / Yes*
• Font fallbacks: Yes / Yes*
• Adaptive layout: Yes / Yes
• Input system support: Yes / Yes
• Serialized events: No / Yes
• Visual Scripting support: No / Yes
• Rendering pipelines support: Yes / Yes
• Screen-space (2D) rendering: Yes / Yes
• World-space (3D) rendering: No / Yes
• Custom materials and shaders: No / Yes
• Sprites / Sprite atlas support: Yes / Yes
• Dynamic texture atlas: Yes / No
• Textureless elements: Yes / No
• UI anti-aliasing: Yes / No
• Rectangle clipping: Yes / Yes
• Mask clipping: No / Yes
• Nested masking: Yes / Yes
• UI transition animations: Yes / No
• Integration with Animation Clips and Timeline: No / Yes
*Requires the TextMesh Pro package
Editor
UI Toolkit is recommended if you create complex Editor tools. UI Toolkit is also recommended for the following reasons:
• Better reusability and decoupling
• Visual tools for authoring UI
• Better scalability for code maintenance and performance
IMGUI is an alternative to UI Toolkit for the following:
• Unrestricted access to Editor extensibility capabilities
• Light API to quickly render UI on screen
Use cases
The recommended system for major Editor use cases (2022):
• Complex Editor tools: UI Toolkit
• Property drawers: UI Toolkit
• Collaboration with designers: UI Toolkit
In details
The recommended system for detailed Editor features (2022), listed as UI Toolkit / IMGUI:
• WYSIWYG authoring: Yes / No
• Nesting reusable components: Yes / No
• Global style management: Yes / Yes
• Layout and Styling Debugger: Yes / No
• Rich text tags: Yes / Yes
• Scalable text: Yes / No
• Font fallbacks: Yes / Yes
• Adaptive layout: Yes / Yes
• Default Inspectors: Yes / Yes
• Inspector: Edit custom object types: Yes / Yes
• Inspector: Edit custom property types: Yes / Yes
• Inspector: Mixed values (multi-editing) support: Yes / Yes
• Array and list-view control: Yes / Yes
• Data binding: Serialized properties: Yes / Yes
UI hierarchy
Both uGUI and UI Toolkit build and maintain the UI inside a hierarchy tree structure. In uGUI, all elements in this hierarchy are visible as individual GameObjects in the Hierarchy view panel. In UI Toolkit, visual elements organize into a visual tree. The visual tree isn't visible in the panel. To view and debug the UI hierarchy in UI Toolkit, you can use the UI Debugger. You can find the UI Debugger in the Editor toolbar, under Window > UI Toolkit > Debugger.
(Figure: the UI Debugger window)
Key differences
Canvas versus UIDocument
The Canvas component in uGUI is similar to the UIDocument component in UI Toolkit. Both are MonoBehaviours that attach to GameObjects.
In uGUI, a Canvas component sits at the root of the UI tree. It works with the Canvas Scaler component to determine the sort order, rendering, and scaling mode of the UI underneath it.
In UI Toolkit, the UIDocument component contains a reference to a PanelSettings object. The PanelSettings object contains the rendering settings for the UI, including the scale mode and the sort order. Multiple UIDocument components can point to the same PanelSettings object, which optimizes performance when you use multiple UI screens in the same scene.
(Figure: the Panel Settings asset)
In uGUI, the UI tree hierarchy sits underneath the GameObject holding the Canvas component. In UI Toolkit, the UIDocument component holds a reference to the root element of the visual tree. The UIDocument component also contains a reference to the UXML file that defines the UI layout from which the visual tree is built at runtime. See the Creating UI section for more information.
Note: For Editor UI, no UIDocument component is needed. You can derive your custom class from EditorWindow, then implement CreateGUI(). For a practical example, see the guide on Creating custom Editor windows.
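A minimal sketch of such an Editor window (the menu path, window title, and label text are hypothetical; the script is assumed to live in an Editor folder):

using UnityEditor;
using UnityEngine;
using UnityEngine.UIElements;

public class MyToolWindow : EditorWindow
{
    [MenuItem("Window/My Tool")]   // hypothetical menu path
    public static void ShowWindow()
    {
        GetWindow<MyToolWindow>().titleContent = new GUIContent("My Tool");
    }

    // UI Toolkit entry point for Editor windows: build the visual tree here
    public void CreateGUI()
    {
        rootVisualElement.Add(new Label("Hello from UI Toolkit"));
        rootVisualElement.Add(new Button(() => Debug.Log("Clicked")) { text = "Click me" });
    }
}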
GameObject components versus visual elements
UI Toolkit refers to UI elements as controls or visual elements. Examples of UI elements are:
• Controls
• Buttons
• Text labels
uGUI builds the UI hierarchy from GameObjects. Adding new UI elements requires adding new GameObjects to the hierarchy, and the individual controls are implemented as MonoBehaviour components. In UI Toolkit, the visual tree is virtual and doesn't use GameObjects. You can no longer build or view the UI hierarchy in the Hierarchy view, but this removes the overhead of using a GameObject for each UI element.
In uGUI, UI elements derive (directly or indirectly) from the UIBehaviour base class. Similarly, in UI Toolkit all UI elements derive from a base class called VisualElement. The key difference is that the VisualElement class doesn't derive from MonoBehaviour, so you cannot attach visual elements to GameObjects.
Working with UI Toolkit controls in script is similar to working with uGUI controls. Common script interactions with UI controls in uGUI, and their UI Toolkit counterparts:
• Write text into a label
  uGUI: m_Label.text = "My Text";
  UI Toolkit: m_Label.text = "My Text";
• Read the state of a toggle
  uGUI: bool isToggleChecked = m_Toggle.isOn;
  UI Toolkit: bool isToggleChecked = m_Toggle.value;
• Assign a callback to a button
  uGUI: m_Button.onClick.AddListener(MyCallbackFunc);
  UI Toolkit: m_Button.clicked += MyCallbackFunc_1; or m_Button.RegisterCallback<ClickEvent>(MyCallbackFunc_2);
Access UI elements
In uGUI, there are two ways scripts can access UI elements:
• Assigning a reference to the UI component in the Editor.
• Finding the component in the hierarchy using helper functions such as GetComponentInChildren<T>().
Since there are no GameObjects or components in UI Toolkit, you can't directly assign references to a control in the Editor; they must be resolved at runtime using a query function. Instead, you access the visual tree via the UIDocument component. UIDocument is a MonoBehaviour, so you can assign it as a reference and make it part of a Prefab. The UIDocument component holds a reference to the root visual element. From the root, scripts can find child elements by type or by name, similar to uGUI (see the sketch below).
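A minimal sketch of resolving controls at runtime (the element name "play-button" is hypothetical; a UIDocument component is assumed to be on the same GameObject, and Q is the shorthand for Query):

using UnityEngine;
using UnityEngine.UIElements;

public class MenuController : MonoBehaviour
{
    void OnEnable()
    {
        // The UIDocument loads the UXML and exposes the root visual element
        var root = GetComponent<UIDocument>().rootVisualElement;

        // Find child elements by name or by type
        Button playButton = root.Q<Button>("play-button");   // hypothetical element name
        Label title = root.Q<Label>();                        // first Label in the tree

        playButton.clicked += () => Debug.Log("Play pressed, title: " + title.text);
    }
}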
A direct comparison of accessing UI controls in Unity UI and UI Toolkit:
• Find a UI element by name
  uGUI: transform.FindChild("childName");
  UI Toolkit: rootVisualElement.Query("childName");
• Find a UI element by type
  uGUI: transform.GetComponentInChildren<Button>();
  UI Toolkit: rootVisualElement.Query<Button>();
• Direct assignment of a reference in the Editor
  uGUI: possible
  UI Toolkit: not possible
Create UI
One of the biggest differences between uGUI and UI Toolkit is the creation of user interfaces. Both uGUI and UI Toolkit allow you to visually build the UI and preview it in the Editor. In uGUI, the UI is then saved inside a Prefab, along with any logic scripts attached to individual UI controls. In UI Toolkit, the UI layout is created in UI Builder, then saved as one or more UXML files. At runtime, UIDocument components load the UXML files, from which the visual tree is assembled in memory. For a method similar to uGUI, you can create UI controls directly from a script, then add them to a visual tree at runtime.
Prefabs
uGUI uses GameObjects for individual UI controls and Prefabs that contain both visuals and logic. UI Toolkit takes a different approach to reusability: it separates logic and layout. You can create reusable UI components through UXML and custom controls. To create the equivalent of a Prefab template in UI Toolkit:
1. Create a UXML file for the partial UI element.
2. Create a GameObject with a UIDocument component.
3. Reference the UXML file in the GameObject.
4. Add a script to handle the UI component logic to the same GameObject.
5. Save the GameObject as a Prefab.
UI layout
Arranging individual UI elements on screen in uGUI is a manual process. By default, UI controls are free-floating and are only affected by their direct parent. Other UI controls under the same parent don't affect their siblings' positions or sizes. Pivots and anchors control the position and size of an element.
The UI Toolkit layout system is influenced by web design and is based on automatic layout generation. The automatic layout system affects all elements by default, and an element's size and position will affect other elements under the same parent. The default behavior in UI Toolkit is comparable to placing all elements inside a VerticalLayoutGroup in uGUI and adding a LayoutElement component to each.
You can disable automatic layout generation by changing the IStyle position property of the visual element; all visual elements have this property (see the sketch below). See Visual Tree for a code sample.
UI Toolkit has no direct equivalents for the anchoring and pivots of UI elements, due to the fundamental layout differences compared to uGUI. The size and position of an element are controlled by the layout engine. For more information, see Layout Engine and Coordinate and position systems.
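A minimal sketch of opting one element out of automatic layout (the pixel offsets, size, and color are arbitrary example values; a UIDocument component is assumed to be on the same GameObject):

using UnityEngine;
using UnityEngine.UIElements;

public class AbsolutePositionExample : MonoBehaviour
{
    void OnEnable()
    {
        var root = GetComponent<UIDocument>().rootVisualElement;

        var marker = new VisualElement();
        // Absolute positioning removes the element from the automatic layout flow
        marker.style.position = Position.Absolute;
        marker.style.left = 20;    // arbitrary example offsets, in pixels
        marker.style.top = 40;
        marker.style.width = 64;
        marker.style.height = 64;
        marker.style.backgroundColor = Color.red;

        root.Add(marker);
    }
}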
Rendering order
In uGUI, the order of the GameObjects in the hierarchy determines the rendering order: objects further down in the hierarchy render last and appear on top. In a scene with multiple Canvases, the Sort Order on the root Canvas component determines the render order of the individual UI trees.
The render order in a visual tree in UI Toolkit operates the same way. Parent elements render before their children, and children render from the first to the last, so that the last appears on top. In a scene with multiple UI Documents, the render order is determined by the Sort Order setting on the root UIDocument component.
To change the rendering order of an element in uGUI, such as making an element appear on top, you can call the sibling functions on the Transform component of the GameObject. The VisualElement class offers comparable functions to control the rendering order, and because all UI Toolkit controls derive from this class, all controls have access to them. The uGUI functions that control render order, and their UI Toolkit equivalents:
• Make an element render underneath all other siblings
  uGUI: transform.SetAsFirstSibling();
  UI Toolkit: myVisualElement.SendToBack();
• Make an element render on top of all other siblings
  uGUI: transform.SetAsLastSibling();
  UI Toolkit: myVisualElement.BringToFront();
• Manually control the element's render order relative to its siblings
  uGUI: transform.SetSiblingIndex(newIndex);
  UI Toolkit: myVisualElement.PlaceBehind(sibling); myVisualElement.PlaceInFront(sibling);
Events
Just like in uGUI, user interactions in UI Toolkit trigger events, and code can subscribe to receive a callback on events such as pressing a button or moving a slider. In uGUI, all UI elements are based on MonoBehaviour and can expose their events in the Editor. This allows you to set up logic with other GameObjects, for example to hide or show other UI elements, or to assign callback functions.
(Figure: a uGUI Button's OnClick list in the Inspector)
In UI Toolkit, logic and UI layout are stored separately. Callbacks can no longer be set up directly on GameObjects or stored in Prefabs. You must set up all callbacks at runtime and handle them via scripting:

Button playButton = new Button() { text = "Play" };
playButton.RegisterCallback<ClickEvent>(OnPlayButtonPressed);
...
private void OnPlayButtonPressed(ClickEvent evt)
{
    // Handle button press
}

The event dispatching system in UI Toolkit differs from events in uGUI. Depending on the event type, events aren't sent only to the target UI control, but also to all of its parent controls.
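For example, a parent element can intercept clicks on its children during the trickle-down phase of event dispatch. A minimal sketch, assuming a UIDocument component on the same GameObject:

using UnityEngine;
using UnityEngine.UIElements;

public class EventPropagationExample : MonoBehaviour
{
    void OnEnable()
    {
        var root = GetComponent<UIDocument>().rootVisualElement;

        // Receive ClickEvents on the root before they reach the target child element
        root.RegisterCallback<ClickEvent>(
            evt => Debug.Log("Click on " + evt.target + " passed through the root"),
            TrickleDown.TrickleDown);
    }
}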