VISIONIZE BEFORE VISUALIZE
The Future of Graphics Hardware, Visualization Software Development Environment,
Visual Communication and Vision
Masatsugu HASHIMOTO†, Masanori KAKIMOTO†, Makoto MATSUMURA†,
Tatsuya MURAUCHI†, Kazutoshi HATAKEYAMA† and Tadao NAKAMURA‡
†SGI Japan, Ltd.  ‡Stanford University
ABSTRACT
In this paper we discuss the future of visualization
systems. First, we overview current graphics pipeline
architecture. Then we redefine GPGPU as “Graphics-
output Presupposition General Purpose Usage” and
discuss future graphics hardware. We propose a concept
for a next-generation visualization software
development environment, “Visual Realityware”,
which unlocks the development environment and
enables us to freely select the best available mainstream
technology for any project and to implement
applications with minimal bugs in the shortest time,
with simple multi-platform expandability.
Because real-time visual communication lets us
understand complex situations easily and supports
decision-making, visualization can make issues clear.
However, by the time we visualize issues, those issues
already exist; we need to head them off before they
arise. To prevent risks, we must be able to share both
goals and vision, and actions based on real-time,
dynamic graphic representations or visualizations help
prevent risks. “Visionize” means taking vision-based
actions, informed by insights shared through visual
communication, to prevent risks. Finally we believe,
based on our experience, that high quality visualization
is necessary for the collaboration of scientists, engineers
and artists.
KEYWORDS
Visualization, Graphics Hardware, GPGPU, GPU
Computing, Visualization Development Environment,
Visual Communication and Computer Supported
Cooperative Work (CSCW)
1. INTRODUCTION
Recent advances in Graphics Processing Unit (GPU)
performance have been significant, and GPU usage has
expanded even into general-purpose computing.
In this paper, we review the principle of the
mainstream method of real-time 3D rendering, and
discuss the future of graphics hardware, visualization
software development, an advanced environment model
and the synergies from visual communication.
2. OVERVIEW OF GRAPHICS HARDWARE
GPU performance advancement exceeds Moore’s Law
and new GPU functions are constantly implemented.
However, the basic principles of real-time 3D rendering
(polygon-based modeling, depth determination of
objects and occlusion culling) have not changed.
The original model of 3D graphics hardware is the
Geometry Engine [1], introduced in 1981 by Jim Clark of
Stanford University, a founder of Silicon Graphics, Inc.
(SGI). Kurt Akeley, a student of Jim Clark and a
co-founder of SGI, improved the Geometry Engine and
established the present 3D graphics architecture. In 1986 he
implemented several crucial techniques on Silicon
Graphics 4D series Workstation. Those included
triangle-based 3D object model representation,
rasterization of triangles after geometry transformation,
and Z-buffer for hidden surface removal [2] [3]. After
that, the processing performance of graphics hardware
evolved steadily [4] [5]. During this period, the
highlights of this evolution were real-time texture
mapping on the Silicon Graphics VGX in 1990, the
popularization of graphics hardware on PCs in the late
1990s, and the programmable GPU by NVIDIA in 2001
[6]. Since 1990, graphics hardware performance has
advanced beyond Moore’s Law, improving 2.3 times per year.
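The pace can be made concrete with a quick calculation; the following is a minimal sketch, assuming Moore’s Law is read as a doubling every 18 months, comparing it with the 2.3x-per-year figure quoted above:

```cpp
#include <cmath>

// Compound growth over a number of years at a fixed per-year factor.
inline double growthOverYears(double perYearFactor, double years) {
    return std::pow(perYearFactor, years);
}

// Graphics hardware: 2.3x per year (figure quoted in the text).
inline double graphicsGrowth(double years) {
    return growthOverYears(2.3, years);
}

// Moore's Law, assumed here as doubling every 18 months,
// i.e. a per-year factor of 2^(12/18).
inline double mooreGrowth(double years) {
    return growthOverYears(std::pow(2.0, 12.0 / 18.0), years);
}
```

Over five years these compound to roughly 64x for graphics hardware versus roughly 10x for Moore’s Law.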
The overview of graphics architecture in Figure 1
represents the relation between graphics applications
and hardware, showing the layers of 3D graphics
processing. When application software runs on graphics
hardware, it must use OpenGL functions. While
OpenGL is a graphics API, it is also driver software
that accesses the graphics hardware directly. In that
sense, the OpenGL API specification represents the
graphics architecture.
3. GRAPHICS PIPELINE
In this section, we detail the steps of 3D graphics
processing. Figure 1 shows an overview of the graphics
pipeline that processes 3D rendering. The pipeline has
(figure: the five pipeline stages G, T, X, S and D, mapped onto the Scene Graph API and low-level API (OpenGL) in software and onto the 3D device driver, CPU/GPU, viewing frustum, frame buffer and display in hardware)
Figure 1. The role of software and hardware in 3D Graphics Processing
five stages (G, T, X, S and D). Each stage has its own
unique functionality:
3.1. G stage (Generation of Scene Graph)
3D model data for rendering is generated or updated
in main memory according to a data structure that the
application defines. This in-memory data structure is
called a scene graph. The Generation stage runs on the
CPU as part of applications and libraries.
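The in-memory structure described above can be sketched as a minimal tree of nodes; the type and member names here (SceneNode, addChild, buildScene) are illustrative, not from any specific scene-graph API:

```cpp
#include <memory>
#include <string>
#include <vector>

// A minimal scene-graph node: a transform, geometry, and child nodes.
// Names are illustrative only, not from a specific library.
struct SceneNode {
    std::string name;
    float transform[16] = {1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1}; // 4x4 identity
    std::vector<float> vertices;  // flat x,y,z triples
    std::vector<std::unique_ptr<SceneNode>> children;

    SceneNode* addChild(const std::string& childName) {
        children.push_back(std::make_unique<SceneNode>());
        children.back()->name = childName;
        return children.back().get();
    }
};

// G stage: the application builds or updates this structure in main
// memory each frame before it is handed to the traversal stage.
inline std::unique_ptr<SceneNode> buildScene() {
    auto root = std::make_unique<SceneNode>();
    root->name = "root";
    SceneNode* tri = root->addChild("triangle");
    tri->vertices = {0,0,0, 1,0,0, 0,1,0};  // one triangle
    return root;
}
```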
3.2. T stage (Traversal of Scene Graph)
A scene graph is often formed as a hierarchical tree
structure. In the T stage, the tree structure is traversed
and eventually a group of vertices describing triangles
or other geometric primitives is sent to the graphics
hardware. This traversal is done by applications or
libraries on top of OpenGL. Finally, the OpenGL driver
outputs all data needed for rendering.
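The depth-first walk that flattens the tree into one vertex stream can be sketched as follows, assuming a minimal node type defined inline here (the names are illustrative, not a real scene-graph API):

```cpp
#include <vector>

// Minimal node type for illustration only.
struct Node {
    std::vector<float> vertices;  // flat x,y,z triples for this node
    std::vector<Node> children;
};

// T stage sketch: depth-first traversal that flattens the scene tree
// into a single vertex stream, as an application or library would
// submit it to OpenGL.
inline void traverse(const Node& n, std::vector<float>& out) {
    out.insert(out.end(), n.vertices.begin(), n.vertices.end());
    for (const Node& c : n.children)
        traverse(c, out);
}
```

A real traversal would also concatenate transforms and apply view-frustum culling along the way; this sketch shows only the flattening.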
3.3. X stage (Transformation of Vertex)
The X stage first receives vertex information with
primitive structure, transformation matrices, light
sources, etc. from the T stage. It then transforms
the 3D coordinates of each vertex into 2D screen
coordinates and a depth value Z, and computes the color
of each vertex. In contemporary graphics architecture,
the X stage typically runs on the GPU.
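The core of this transformation is a 4x4 matrix multiply followed by a perspective divide; a minimal CPU sketch (in practice this runs as a vertex program on the GPU):

```cpp
#include <array>

// X stage sketch: transform a 3D point by a 4x4 matrix (row-major)
// and perform the perspective divide, yielding screen-space x, y and
// the depth value z used later by the depth test.
inline std::array<float, 3> transformVertex(const std::array<float, 16>& m,
                                            const std::array<float, 3>& v) {
    float out[4];
    for (int row = 0; row < 4; ++row)
        out[row] = m[row * 4 + 0] * v[0] + m[row * 4 + 1] * v[1]
                 + m[row * 4 + 2] * v[2] + m[row * 4 + 3] * 1.0f;  // w = 1 for points
    return { out[0] / out[3], out[1] / out[3], out[2] / out[3] };  // divide by w
}
```

With an identity matrix the point passes through unchanged; a real projection matrix makes the w component depend on depth, producing the perspective foreshortening.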
3.4. S stage (Scan conversion)
The S stage computes the RGB brightness of each pixel
inside a triangle given as three vertices on the screen.
This stage also computes the depth value Z
corresponding to each pixel of the triangle. Finally, it
computes the brightness of each pixel while carrying out
depth tests, blending and other raster operations, and
the results are written as image data to frame memory.
The S stage runs on the GPU.
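The per-pixel depth test at the heart of this stage can be sketched as follows; a real rasterizer also interpolates colors across the triangle and applies blending, which are omitted here:

```cpp
#include <vector>

// S stage sketch: a depth-tested pixel write into color and Z buffers.
// Only the Z-buffer hidden-surface removal is modeled; color
// interpolation and blending are left out for brevity.
struct FrameBuffer {
    int width;
    std::vector<unsigned> color;  // packed RGB per pixel
    std::vector<float> depth;     // Z per pixel, initialized to "far" (1.0)

    FrameBuffer(int w, int h)
        : width(w), color(w * h, 0), depth(w * h, 1.0f) {}

    // Write the pixel only if it is nearer than what is already stored.
    bool writePixel(int x, int y, float z, unsigned rgb) {
        int i = y * width + x;
        if (z >= depth[i]) return false;  // occluded: fails the depth test
        depth[i] = z;
        color[i] = rgb;
        return true;
    }
};
```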
3.5. D stage (Display)
The D stage reads image data from frame memory and
outputs video signals through a display interface such as
VGA or DVI, synchronized to a fixed-rate video refresh clock.
4. PARALLEL RENDERING
Ideally, a parallel rendering platform should operate
like a conventional PC. For parallel rendering, a shared-
memory computer enables this operation if an
application supports multiple CPUs or GPUs.
As of September 2007, a PC-based graphics platform on a
single OS supports a maximum of 8 CPUs (16 cores),
256 gigabytes of memory and 2 PCI Express interfaces
for graphics cards on a single motherboard with
NVIDIA’s SLI specification; such a PC runs either
Windows or Linux. The supercomputing system
“Silicon Graphics Altix” supports a maximum of 256
Intel Itanium2 CPUs and 3 terabytes of shared memory
based on cache-coherent Non-Uniform Memory Access
(ccNUMA) architecture; this system runs Linux only.
In the future, the CPU sockets of Itanium2 and Xeon
will share the same interface, so commodity CPU-based
platforms will support huge shared memories based on
ccNUMA. When a system exceeds the shared-memory
limit of a single OS, we must build a hardware cluster
with sorting mechanisms using a system such as
Chromium [7], which performs real-time OpenGL-based
graphics rendering on cluster systems.
5. THE FUTURE OF GRAPHICS HARDWARE
Advances in GPUs are enabling support for general
purpose computing, called GPGPU (General Purpose
GPU). While a single CPU has at most 8 cores, a single
GPU has 128 massively parallel shader units. The
memory bandwidth of a CPU is about 8 GB/s, while that
of a GPU is about 60 GB/s. However, today’s GPUs do
not support double-precision values, so at the moment
the applicability of GPGPU is limited. We want to
redefine current GPGPU. Because the performance of
read-back operations from GPU to CPU is low, GPUs
are generally less versatile than vector processors such
as the NEC SX. One can utilize the GPU’s full
performance when the output is obtained in frame
memory. Therefore, at this point, we want to call this
technology “Graphics-output Presupposition General
Purpose Usage”. Current GPGPU is well suited to
iso-surface generation, ray tracing, data analysis for
data mining, and physical simulation before
visualization. NVIDIA has released the GPU Computing
hardware platform Tesla [8] and the software
development environment Compute Unified Device
Architecture (CUDA) [10], whose peak performance
reaches 518 GFLOPS.
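The data-parallel model these environments expose can be illustrated on the CPU; the following is a plain C++ sketch (not actual CUDA code) of the one-thread-per-element pattern that a CUDA kernel launch generalizes:

```cpp
#include <cstddef>
#include <vector>

// GPGPU-style computation sketched on the CPU: one logical "thread"
// per element, each applying the same kernel independently. CUDA maps
// such loops onto the GPU's parallel shader units; this plain C++
// only models the pattern.
inline std::vector<float> saxpy(float a, const std::vector<float>& x,
                                const std::vector<float>& y) {
    std::vector<float> out(x.size());
    for (std::size_t i = 0; i < x.size(); ++i)  // each i: an independent thread
        out[i] = a * x[i] + y[i];
    return out;
}
```

Because every iteration is independent, the loop body is exactly the kind of work that maps well onto the GPU, provided the result can stay on the device or in frame memory rather than being read back.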
In Japan, an ultra-high-speed computer called the “Peta-
scale Computer” is being designed by Japanese computer
manufacturers. We strongly hope that vector computing
and GPU computing technologies grow together as a
single global technology.
6. IDEAL DEVELOPMENT ENVIRONMENT
Ten years ago, Silicon Graphics, Inc. provided a
totally integrated system consisting of graphics
hardware, CPU, OS, compiler and software
development environment including Graphics Library.
Currently, technology companies provide each product
separately such as CPU, OS and graphics hardware.
Therefore there is a risk that a given product may be
discontinued. For example, since Microsoft left the
OpenGL Architecture Review Board (OpenGL ARB) in
2003, there is a risk that Microsoft Windows may stop
supporting OpenGL.
Developers and researchers must select new
mainstream technology in response to frequent changes
in development environment. Microsoft has updated
Windows OS periodically. Adobe released Adobe
Integrated Runtime (AIR) [9], an application runtime
environment. In the GPGPU development environment,
NVIDIA provides CUDA, while ATI provides Close
To the Metal (CTM) [11].
6.1. Visual Realityware
We have built the visualization software development
environment “Visual Realityware” considering the
volatility of mainstream technologies. We show the
concept of Visual Realityware in Figure 2. A
visualization software development environment
including Visual Realityware consists of three layers.
The upper “Application Layer” is a group of original
software modules that do not depend on any specific
mainstream technology. The middle “Abstraction
Layer” switches among several mainstream
technologies. The lower “Technology Layer” is where
each company provides its own mainstream technology.
Below the three layers we show the supporting
operating systems; black ellipses represent the OSs
supported by each technology.
The key feature of Visual Realityware is the
Abstraction Layer, which abstracts OpenGL and
DirectX. Developers can select their preferred graphics
library at compile time, rapidly port developed software
to multiple platforms, and continue to reuse the source
code as an accumulated asset.
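The abstraction-layer idea, selecting a graphics library at compile time behind one interface, can be sketched as follows; the interface and class names are illustrative only, not the actual Visual Realityware API:

```cpp
#include <string>

// Abstraction Layer sketch: one renderer interface, multiple backends.
// Names here are illustrative, not the actual Visual Realityware API.
struct Renderer {
    virtual ~Renderer() = default;
    virtual std::string backendName() const = 0;
    virtual void drawTriangle() = 0;
};

struct OpenGLRenderer : Renderer {
    std::string backendName() const override { return "OpenGL"; }
    void drawTriangle() override { /* OpenGL draw call would go here */ }
};

struct DirectXRenderer : Renderer {
    std::string backendName() const override { return "DirectX"; }
    void drawTriangle() override { /* Direct3D draw call would go here */ }
};

// The Application Layer codes against Renderer only; the backend is
// chosen when compiling (e.g. via a build flag), not in application code.
#ifdef USE_DIRECTX
using DefaultRenderer = DirectXRenderer;
#else
using DefaultRenderer = OpenGLRenderer;
#endif
```

Application code that uses only `DefaultRenderer` can be rebuilt against either backend without source changes, which is what allows the switch between DirectX and OpenGL described later.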
(figure: the Application Layer holds modules such as Scene Graph, Data Importer, Data Exporter, Math Library, CAD Rendering, Geography Rendering, Fluid Rendering, Real-Time Ray Tracing and Volume Rendering; the Abstraction Layer holds Real Time Renderer, GUI, Tessellator and GPGPU; the Technology Layer holds OpenGL, DirectX, Adobe Flash, Native UI, OpenGL Optimizer, Original Tessellator, NVIDIA CUDA and ATI CTM; supported OSs are Windows, Linux and Mixed Reality)
Figure 2. Concept of Visualization Software
Development Environment “Visual Realityware”
Visual Realityware provides several software modules
that are frequently used in application development. We
continue to maintain these modules, adding new
functions and testing the code. Therefore, building on
these modules, we can develop original software with
fewer bugs in a shorter period of time than developing
from scratch.
6.2. Visual Realityware Based Application
We have developed several applications using Visual
Realityware. One example is “Virtual Anatomia” [12],
which visualizes the inner body of a living woman
scanned with MRI equipment, in collaboration with
The Jikei University.
Table 1. The Classification and Application of Computer Supported Cooperative Work

Face-to-face, real-time communication:
・White board with printing function
・Real-time Design Review Software
  - Design Review using CAD data
  - Landscape Simulation using 3D Map
・Sales Support Tool

Distributed, real-time communication:
・IP-phone and Skype
・TV Conference

Distributed, asynchronous stored communication:
・Mail, Web, Blog and SNS
・Google Earth
Figure 3 presents snapshots of Virtual Anatomia
visualized with four-dimensional biological information.
Figure 3. Four dimensional visualization of
biological information ”Virtual Anatomia”
©True Laboratory and SGI Japan, Ltd.
This model consists of 421 parts including anatomical
structures, internal organs and blood vessels. Users of
this application can observe and analyze the cyber body.
Next, we show the Virtual Anatomia framework within
Visual Realityware in Figure 4. Virtual Anatomia uses
the Scene Graph, Data Importer, Data Exporter and
Mathematics Library modules of Visual Realityware.
(figure: the same layer diagram as Figure 2, with Virtual Anatomia in the Application Layer built on the Scene Graph, Data Importer, Data Exporter and Math Library modules)
Figure 4. Virtual Anatomia on “Visual Realityware”
Developing Virtual Anatomia from scratch would take
more than two months; using Visual Realityware, we
implemented it in two weeks. We can also switch easily
and quickly between DirectX and OpenGL.
7. THE FUTURE OF VISUAL
COMMUNICATION
Communication with visual images makes information
transfer more efficient; as they say, “a picture is worth
a thousand words”. Table 1 classifies Computer
Supported Cooperative Work (CSCW) by whether
communication is face-to-face or distributed and
whether it is real-time or stored.
Asynchronous, stored-type cooperative work is suitable
for sharing information, while real-time cooperative
work is suitable for decision support. Recently,
broadband networking environments have made real-
time communication practical. Companies and research
laboratories that must make timely decisions should
adopt real-time cooperative work styles instead of
stored-type communication tools. An automobile
company developed a system that enabled the sales staff
of hundreds of its dealers to give customers real-time
visual presentations of product features, with interactive
content and movies, using PCs; sales increased as a
result [13].
Timely decision support based on real-time visual
communication requires a complete content workflow:
content archiving, rights management, re-editing,
distribution and efficient visual presentation of the
content. Visual Realityware achieves this objective.
8. VISIONIZE
Finally, we think three types of human resources are
necessary to build a successful visualization system.
First, we need scientists or other specialists with a deep
understanding of and insight into the data being
visualized. Second, we need graphics engineers who
know the visualization processes; more specifically,
engineers who know GPU architecture and are expert in
tools such as C/C++, OpenGL, DirectX and MPI, or in
applications such as AVS, EnSight and Maya. Third, we
need artists who can write scenarios with artistic sense.
To visualize tomorrow today while making the best of
advanced visualization technologies, it is important to
provide an environment that accelerates synergy among
these three types of people. Visualization by these three
types of specialists creates messages based on a
combined scenario, and someone seeing the results can
sometimes achieve a synergetic vision.
For example, in Figure 5 we visualize a tsunami
simulation using our fluid analysis application
“FLUIDISISTA” [14], based on Visual Realityware.
Figure 5. Tsunami Simulation Example
© MAPCUBE, Prometech and SGI Japan
While visualization of real tsunami damage lets us
recognize issues only after the fact, visualization of
tsunami simulations by fluid dynamics experts, graphics
engineers and an artist can give us a vision, like a
hazard map. Such visualizations, charged with meaning,
draw us powerfully toward a synergetic vision. After
this visual communication, we can take vision-oriented
action, which prevents risks and unforeseen issues. We
define “Visionize” as this process; Visionize is a risk
management methodology.
9. CONCLUSION
In this paper, we have surveyed the real-time 3D
graphics pipeline, reviewed current GPGPU, and
redefined GPGPU as “Graphics-output Presupposition
General Purpose Usage”. Furthermore, we described the future
of graphics hardware, visualization software
development environment and visual communication.
The Visual Realityware development environment is not
dependent on any given CPU, OS or graphics hardware;
it provides great development flexibility through its
Abstraction Layer while supporting rapid expansion of
developed software to multiple platforms.
Visualization of phenomena that humans cannot
foresee will support our real-time decision making.
We can make issues clear by visualization, but only
after the fact, when the issues already exist. To prevent
unforeseen issues, we say “Visionize”: we need to
graphically share our vision and goals. Visual
communication visualizes meaning and therefore allows
us to more easily “see” potential issues before they
occur. Synergetically combining visual meaning and
creating vision-oriented actions is a computer-assisted,
visualization-based communication process that
prevents risks from becoming issues. Visionize is a risk
management methodology.
10. REFERENCES
[1] J. Clark, “The geometry engine: A VLSI geometry system
for graphics,” In Proc. SIGGRAPH 1982, pp. 127-133 (July
1982).
[2] K. Akeley, T. Jermoluk, “High-Performance Polygon
Rendering,” In Proc. SIGGRAPH 1988, pp. 239-246 (August
1988).
[3] K. Akeley, “The Silicon Graphics 4D/240GTX
Superworkstation,” IEEE CG&A, Vol. 9, No. 4, pp. 71-83
(July 1989).
[4] K. Akeley, “RealityEngine Graphics,” In Proc.
SIGGRAPH 1993, pp. 109-116 (August 1993).
[5] J. Montrym, D. R. Baum, D. L. Dignam, C. J. Migdal,
“InfiniteReality: A Real-Time Graphics System,” In Proc.
SIGGRAPH 1997, pp. 293-302 (July 1997).
[6] E. Lindholm, M. J. Kilgard, H. Moreton, “A User-
Programmable Vertex Engine,” In Proc. SIGGRAPH 2001,
pp. 149-158 (August 2001).
[7] Chromium: http://chromium.sourceforge.net/
[8] NVIDIA Tesla:
http://www.nvidia.com/object/tesla_computing_solutions.html
[9] Adobe AIR: http://labs.adobe.com/technologies/air/
[10] NVIDIA CUDA:
http://developer.nvidia.com/object/cuda.html
[11] ATI CTM: http://ati.amd.com/companyinfo/researcher/
documents/ATI_CTM_Guide.pdf
[12] Virtual Anatomia:
http://www.sgi.co.jp/products/software/va/
[13] Mazda Visual IT Presentation:
http://www.sgi.co.jp/newsroom/press_releases/2006/may/maz
da.html
[14] FLUIDISISTA
http://www.sgi.co.jp/products/software/fluid/
More Related Content

PDF
Introduction to Computing on GPU
PDF
A SURVEY ON GPU SYSTEM CONSIDERING ITS PERFORMANCE ON DIFFERENT APPLICATIONS
PDF
Graphics Processing Unit: An Introduction
PPTX
GPU Computing
PPTX
Computer Graphics Project Development Help with OpenGL computer graphics proj...
PDF
GPU Programming
PPTX
Lec04 gpu architecture
PDF
Image Processing Application on Graphics processors
Introduction to Computing on GPU
A SURVEY ON GPU SYSTEM CONSIDERING ITS PERFORMANCE ON DIFFERENT APPLICATIONS
Graphics Processing Unit: An Introduction
GPU Computing
Computer Graphics Project Development Help with OpenGL computer graphics proj...
GPU Programming
Lec04 gpu architecture
Image Processing Application on Graphics processors

What's hot (17)

PPTX
GPU Computing: A brief overview
PPTX
2D graphics
PDF
Volume 2-issue-6-2040-2045
PDF
The International Journal of Engineering and Science (The IJES)
PPTX
Gpu with cuda architecture
PDF
High Performance Medical Reconstruction Using Stream Programming Paradigms
PPTX
A New Approach for Parallel Region Growing Algorithm in Image Segmentation u...
PDF
GPU Ecosystem
PDF
Technical Documentation_Embedded_Image_DSP_Projects
PDF
Hybrid Multicore Computing : NOTES
PDF
LightWave™ 3D 11 Add-a-Seat
PDF
LCU13: GPGPU on ARM Experience Report
PDF
Gpu based image segmentation using
PPTX
GPU and Deep learning best practices
PDF
Accelerating Real Time Applications on Heterogeneous Platforms
GPU Computing: A brief overview
2D graphics
Volume 2-issue-6-2040-2045
The International Journal of Engineering and Science (The IJES)
Gpu with cuda architecture
High Performance Medical Reconstruction Using Stream Programming Paradigms
A New Approach for Parallel Region Growing Algorithm in Image Segmentation u...
GPU Ecosystem
Technical Documentation_Embedded_Image_DSP_Projects
Hybrid Multicore Computing : NOTES
LightWave™ 3D 11 Add-a-Seat
LCU13: GPGPU on ARM Experience Report
Gpu based image segmentation using
GPU and Deep learning best practices
Accelerating Real Time Applications on Heterogeneous Platforms
Ad

Similar to VisionizeBeforeVisulaize_IEVC_Final (20)

PPTX
Graphics processing unit ppt
PPTX
Java on the GPU: Where are we now?
PPTX
Graphics pipelining
PPTX
Graphics Processing unit ppt
PPT
Fundamentals of Algorithms in computer G
PPTX
CGLecture 02 Interactive Graphics.pptx
PPT
Computer graphics - Nitish Nagar
PDF
Introduction to GPU Programming
DOCX
Ha4 displaying 3 d polygon animations
PPTX
Gpu microprocessors
PDF
Evolution of the modern graphics architectures with a focus on GPUs | Turing1...
PDF
GPU - DirectX 10 Architecture White Paper
PDF
Mod 2 hardware_graphics.pdf
PDF
The road to multi/many core computing
PPTX
Ch01 -introduction
PPTX
Introduction-to-Distributed-Systems GPU-BilqesF 2.pptx
PDF
Newbie’s guide to_the_gpgpu_universe
PDF
Compute API –Past & Future
PDF
Gpu digital lab english version
PDF
IRJET-A Study on Parallization of Genetic Algorithms on GPUS using CUDA
Graphics processing unit ppt
Java on the GPU: Where are we now?
Graphics pipelining
Graphics Processing unit ppt
Fundamentals of Algorithms in computer G
CGLecture 02 Interactive Graphics.pptx
Computer graphics - Nitish Nagar
Introduction to GPU Programming
Ha4 displaying 3 d polygon animations
Gpu microprocessors
Evolution of the modern graphics architectures with a focus on GPUs | Turing1...
GPU - DirectX 10 Architecture White Paper
Mod 2 hardware_graphics.pdf
The road to multi/many core computing
Ch01 -introduction
Introduction-to-Distributed-Systems GPU-BilqesF 2.pptx
Newbie’s guide to_the_gpgpu_universe
Compute API –Past & Future
Gpu digital lab english version
IRJET-A Study on Parallization of Genetic Algorithms on GPUS using CUDA
Ad

VisionizeBeforeVisulaize_IEVC_Final

  • 1. VISIONIZE BEFORE VISUALIZE The Future of Graphics Hardware, Visualization Software Development Environment, Visual Communication and Vision Masatsugu HASHIMOTO† , Masanori KAKIMOTO† , Makoto MATSUMURA† , Tatsuya MURAUCHI† , Kazutoshi HATAKEYAMA† and Tadao NAKAMURA‡ †SGI Japan, Ltd. ‡Stanford University ABSTRACT In this paper we discuss the future of visualization systems. First, we overview current graphics pipeline architecture. Then we redefine GPGPU as “Graphics- output Presupposition General Purpose Usage” and discuss future graphics hardware. We propose a concept for a next generation visualization software development environment “Visual Realityware” which unlocks the development environment and enables us to freely select the best available mainstream technology for any project and implement an application with minimum bugs in the shortest time with simple multiple platform expandability. Since visual communication in real-time allows us to understand complex situations easily and supports decision-making therefore visualization can make issues clear. However when we visualize issues, we already have issues there. We need to head off issues before they arise. To prevent risks, we must be able to share both goals and vision. Actions based on real-time, dynamic graphic representations or visualizations help prevent risks. “Visionize” is to take vision-based actions resulting from shared insights from visual communication to prevent risks. Finally we believe, based on our experience, that high quality visualization is necessary for the collaboration of scientists, engineers and artists. KEYWORDS Visualization, Graphics Hardware, GPGPU, GPU Computing, Visualization Development Environment, Visual Communication and Computer Supported Cooperative Work (CSCW) 1. INTRODUCTION Recent advancement of Graphics Processing Unit (GPU) performance is significant and the usage of the GPU is expanded even for general purpose computing. 
In this paper, we review the principle of the mainstream method of real-time 3D rendering, and discuss the future of graphics hardware, visualization software development, an advanced environment model and the synergies from visual communication. 2. OVERVIEW OF GRAPHICS HARDWARE GPU performance advancement exceeds Moore’s Law and new GPU functions are constantly implemented. However the basic principle of real-time 3D rendering- polygon-based modeling, depth determination of objects and occlusion culling hasn’t changed. As the original model of 3D graphics hardware, we can find Geometry Engine [1] in 1981 by Jim Clark of Stanford University, a founder of Silicon Graphics, Inc. (SGI). Kurt Akeley, a student of Jim Clark and a founder of SGI improved Geometry Engine and established present 3D graphics architecture. In 1986 he implemented several crucial techniques on Silicon Graphics 4D series Workstation. Those included triangle-based 3D object model representation, rasterization of triangles after geometry transformation, and Z-buffer for hidden surface removal [2] [3]. After that, the processing performance of graphics hardware evolved steadily [4] [5]. During this period, the evolution highlights are real-time texture mapping on Silicon Graphics VGX in 1990, graphics hardware popularization on PCs in the late 1990s and programmable GPU by NVIDIA in 2001 [6]. Graphics hardware performance advances from 1990 exceeded Moore’s Law, improving 2.3 times per year. The overview of graphics architecture in Figure 1 represents the relation between graphics application and hardware. This figure shows the layers of 3D graphics processing. When application software runs on graphics hardware, OpenGL functions must be used. While OpenGL has a graphics API function, it is a driver software accessing the graphics hardware directly. In that sense, the OpenGL API specification represents the graphics architecture. 3. 
GRAPHICS PIPELINE In this section, we detail the steps of 3D graphics processing. Figure.1 shows an overview of the graphics pipeline that processes 3D rendering. The pipeline has
  • 2. G T X S D Software Hardware Scene Graph API Low Level API(OpenGL) 3D Device Driver GPUCPU Scene Graph Viewing Frustum Frame Buffer Display Figure 1. The role of software and hardware in 3D Graphics Processing five stages (G, T, X, S and D). Each stage has its own unique functionality: 3.1. G stage (Generation of Scene Graph) 3D model data for rendering is generated or updated on the main memory according to the data structure which applications define. This on-memory data structure is called a scene graph. The Generation stage runs on the CPU as part of applications and libraries. 3.2. T stage (Traversal of Scene Graph) A scene graph is often formed as a hierarchical tree structure. In T stage, the tree structure is traversed and eventually a group of vertices describing triangles or other geometric primitives are sent to the graphics hardware. This traversal processing is done by applications or libraries on top of OpenGL. Finally the OpenGL driver outputs all data needed for rendering. 3.3. X stage (Transformation of Vertex) Initially X stage receives the information of vertices with primitive structure, transformation matrix, light sources, etc. from T stage. Next, this stage transforms the 3D coordinates of each vertex into 2D screen coordinates and a depth value Z and computes the color of each vertex. The processing of X stage typically runs on GPU in contemporary graphics architecture. 3.4. S stage (Scan conversion) S stage computes RGB brightness on each pixel in the triangle given as three vertices on the screen. This stage also computes the depth value Z corresponding to each pixel on the triangle. Finally it computes the brightness of the pixel while carrying out depth tests, blending, and other raster operations. The results are written as image data on the frame memory. The processing of S stage runs on GPU. 3.5. 
D stage (Display) D stage reads image data on the frame memory and outputs video signals through a display such as a VGA or a DVI, synchronizing a fixed rate video refresh clock. 4. PARALLEL RENDERING The ideal is to operate a parallel rendering platform like a conventional PC. On parallel rendering, a shared memory type computer enables this operation if an application supports multi-CPU or GPU. In September 2007, a PC-based graphics platform on a single OS supports a maximum of 8 CPUs (16 cores), 256 Giga bytes of memory and 2 PCI Express interfaces for graphics cards on a single PC motherboard with SLI specification by NVIDIA. This PC supports either the Windows or Linux OS. The supercomputing system “Silicon Graphics Altix” supports a maximum of 256 CPUs Intel Itanium2, 3 Tera-bytes of shared memory based on cash-coherent NonUniform Memory Access (ccNUMA) graphics architecture. This system supports Linux OS only. In the future, the CPU sockets of Itanium2 and Xeon will be the same interface. Therefore the commodity CPU-based platform will support huge shared memory based on ccNUMA. When the system exceeds the limitation of shared memory on a single OS, we must build a hardware cluster with sorting mechanisms using a system such as Chromium [7], which is a system for real-time graphics rendering based on OpenGL on cluster systems.
  • 3. 5. THE FUTURE OF GRAPHICS HARDWARE The advancement of GPU is enabling support for general purpose computing, called GPGPU (General Purpose GPU). While single CPU has a maximum 8 cores, single GPU has 128 massively parallel shader units. The memory bandwidth of a CPU is 8 GB/s and that of a GPU is 60GB/s. However today’s GPU does not support double-precision values. At the moment, the GPGPU support area is limited. We want to redefine current GPGPU. As the performance of read-back operations from GPU to CPU is slow, GPUs are generally as versatile as vector processors such as NEC SX. One can utilize the GPU’s full performance when the output is obtained on the frame memory. Therefore, at this point, we want to call this technology “Graphics- output Presupposition General Purpose Usage”. The current GPGPU will be suitable for iso-surface generation, ray-tracing, data analysis of data mining and physical simulation before visualization. NVIDIA released a GPU Computing hardware platform Tesla [8] and software development environment Compute Unified Device Architecture (CUDA)[10]. The peak performance achieves 518 G flops. In Japan, an ultra high speed computer called “Peta- scale Computer” is under design by Japanese computer manufacturers. We strongly hope that vector computing and GPU computing technologies grow together as a single global technology. 6. IDEAL DEVELOPMENT ENVIRONMENT Ten years ago, Silicon Graphics, Inc. provided a totally integrated system consisting of graphics hardware, CPU, OS, compiler and software development environment including Graphics Library. Currently, technology companies provide each product separately such as CPU, OS and graphics hardware. Therefore there is a risk that a given product may be discontinued. For example as Microsoft left the OpenGL Architecture Review Board (OpenGL ARB) in 2003, there is a risk that Microsoft Windows may not support OpenGL. 
Developers and researchers must select new mainstream technology in response to frequent changes in development environment. Microsoft has updated Windows OS periodically. Adobe released Adobe Integrated Runtime (AIR) [9], an application runtime environment. In GPGPU development environment, NVIDIA provides CUDA, while ATI provides Close To the Metal (CTM) [11]. 6.1. Visual Realityware We have built the visualization software development environment “Visual Realityware” considering the volatility of the mainstream technologies, We show the concept of Visual Realityware in Figure.2. A visualization software development environment including Visual Realityware consists of three layers. The upper layer or “Application Layer” is a group of original software modules not depending on a specific mainstream technology. The middle layer is the “Abstraction Layer” -switching several mainstream technologies. The lower layer is the “Technology Layer” where each company provides its own mainstream technology. And below the three layers we show OS supporting technologies. Black ellipses represent supporting OS’s corresponding to the technologies. The key feature of Visual Realityware is the Abstraction Layer. OpenGL and DirectX are abstracted by this Layer. Developers can select a favorite Graphics Library when they compile. And they can rapidly expand developed software to multiple platforms. Additionally, they can use the added asset of the source code. SceneGraph DataImporter Dataexporter MathLibrary CADRendering GeographyRendering FluidRendering RealTimeRayTracing VolumeRendering DirectX Real Time Renderer GUI Tessellator GPGPU OpenGL AdobeFlash NativeUI OpenGLOptimizer OriginalTessellator NVIDIACUDA ATICTM ApplicationLayerTechnologyLayerOS Windows Linux MixedReality Abstraction Layer Figure 2. Concept of Visualization Software Development Environment “Visual Realityware” Visual Realityware has several software modules. 
These modules are often used in application development. We continue to maintain them, adding new functions and testing code. Therefore we can develop original software with minimum bugs in a shorter period of time by building on these modules rather than developing from scratch.

6.2. Visual Realityware Based Applications

We have developed several applications using Visual Realityware. One example is "Virtual Anatomia" [12], developed with The Jikei University, which visualizes the inner body of a living woman scanned by MRI equipment.
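Virtual Anatomia organizes its model as a tree of parts on a scene-graph module. The following is a hedged toy sketch of such a hierarchy and its traversal; the node and field names are illustrative, not the actual module's API:

```python
# Toy scene-graph sketch; Node and collect_visible are illustrative
# stand-ins for a scene-graph module, whose real API is not shown here.

class Node:
    def __init__(self, name, children=None, visible=True):
        self.name = name
        self.children = children or []
        self.visible = visible

def collect_visible(node, out=None):
    # Depth-first traversal gathering the parts to render; hiding a
    # group node (e.g. the skin) reveals the organs beneath it.
    if out is None:
        out = []
    if not node.visible:
        return out
    out.append(node.name)
    for child in node.children:
        collect_visible(child, out)
    return out

# Example: a miniature "cyber body" with a hidden skin layer.
body = Node("body", [
    Node("skin", visible=False),
    Node("organs", [Node("heart"), Node("liver")]),
])
```

Here `collect_visible(body)` returns `['body', 'organs', 'heart', 'liver']`, skipping the hidden skin subtree; a full anatomical model simply scales the same tree to hundreds of parts.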
Table 1. The Classification and Application of Computer Supported Cooperative Work

  Face-to-face, real-time: white board with printing function; real-time design review software (design review using CAD data, landscape simulation using a 3D map); sales support tool.
  Distributed, real-time: IP-phone and Skype; TV conference.
  Distributed, asynchronous stored: mail, Web, blog and SNS; Google Earth.

Figure 3 presents snapshots of Virtual Anatomia visualized with four-dimensional biological information.

Figure 3. Four-dimensional visualization of biological information "Virtual Anatomia". © True Laboratory and SGI Japan, Ltd.

The model consists of 421 parts, including anatomical structures, internal organs and blood vessels. Users of this application can observe and analyze the cyber body. Next, we show the Virtual Anatomia framework within Visual Realityware in Figure 4. Virtual Anatomia uses the Scene Graph, Data Importer, Data Exporter and Mathematics Library modules of Visual Realityware.

[Figure 4. Virtual Anatomia on "Visual Realityware": the application sits on the Scene Graph, Data Importer, Data Exporter and Math Library modules above the Abstraction, Technology and OS layers of Figure 2.]

Developing Virtual Anatomia from scratch would take more than two months; using Visual Realityware, we implemented it in two weeks. We can also easily and quickly switch from DirectX to OpenGL.

7. THE FUTURE OF VISUAL COMMUNICATION

Communication with visual images makes information transfer more efficient; as the saying goes, "a picture is worth a thousand words". In Table 1, we show a classification of Computer Supported Cooperative Work (CSCW) according to whether the communication is face-to-face or distributed and whether it is real-time or stored.
Asynchronous stored-type cooperative work is suitable for information sharing, while real-time cooperative work is suitable for decision support. Recently, broadband networking environments have made real-time communication practical. For companies and research laboratories that must make timely decisions, it is important to use real-time cooperative work styles instead of stored-type communication tools. An automobile company developed a system that enabled the sales staff of hundreds of its dealers to give their customers real-time visual presentations of product features, with interactive content and movies, using PCs; sales increased as a result [13].

In order to support timely decisions based on real-time visual communication, it is necessary to build a total content workflow covering content archiving, rights management, re-editing, distribution and efficient visual presentation. Visual Realityware achieves this objective.

8. VISIONIZE

Finally, we think that three types of human resources are necessary to build a successful visualization system. First, we need scientists or other specialists who have a deep understanding of and insight into the visualization target data. Second, we need graphics engineers who know the visualization processes; more specifically, graphics engineers should know GPU architecture and be expert in tools such as C/C++, OpenGL, DirectX and MPI, or in applications such as AVS, EnSight and Maya. Third, we need artists who can write scenarios with
artistic sense. While we can visualize tomorrow today by making the best of advanced visualization technologies, it is important to provide an environment that accelerates the synergetic effect among these three types of people. Visualization by the three types of specialists creates messages based on a combined scenario, and someone seeing the visualization results can sometimes achieve a synergetic vision. For example, in Figure 5 we visualize a tsunami simulation using our fluid analysis application "FLUIDISISTA" [14], based on Visual Realityware.

Figure 5. Tsunami Simulation Example. © MAPCUBE, Prometech and SGI Japan

While we can recognize issues from visualization of real tsunami damage, we can gain a vision, like a hazard map, from visualization of tsunami simulations produced by fluid dynamics experts, graphics engineers and an artist. Such visualizations, charged with meaning, draw us powerfully into a synergetic vision. After this visual communication, we can take vision-oriented action, which prevents risks and unforeseen issues. We define "Visionize" as this process. Visionize is a risk management methodology.

9. CONCLUSION

In this paper, we have surveyed the real-time 3D graphics pipeline, reviewed current GPGPU and redefined GPGPU as "Graphics-output Presupposition General Purpose Usage". Furthermore, we have described the future of graphics hardware, the visualization software development environment and visual communication. The Visual Realityware development environment is not dependent on any given CPU, OS or graphics hardware; it provides great development flexibility through its Abstraction Layer while supporting rapid expansion of developed software to multiple platforms.

Visualization of phenomena that humans cannot foresee will support our real-time decision making. We can make issues clear by visualization, but only after the fact, when the issues already exist. To prevent unforeseen issues, we say "Visionize": we need to graphically share our vision and goals.
Visual communication visualizes meaning and therefore allows us to more easily "see" potential issues before they occur. Synergetically combining visual meaning and creating vision-oriented actions is a computer-assisted, visualization-based communication process that prevents risks from becoming issues. Visionize is a risk management methodology.

10. REFERENCES

[1] J. Clark, "The Geometry Engine: A VLSI Geometry System for Graphics," In Proc. SIGGRAPH 1982, pp. 127-133 (July 1982).
[2] K. Akeley, T. Jermoluk, "High-Performance Polygon Rendering," In Proc. SIGGRAPH 1988, pp. 239-246 (August 1988).
[3] K. Akeley, "The Silicon Graphics 4D/240GTX Superworkstation," IEEE CG&A, Vol. 9, No. 4, pp. 71-83 (July 1989).
[4] K. Akeley, "RealityEngine Graphics," In Proc. SIGGRAPH 1993, pp. 109-116 (August 1993).
[5] J. Montrym, D. R. Baum, D. L. Dignam, C. J. Migdal, "InfiniteReality: A Real-Time Graphics System," In Proc. SIGGRAPH 1997, pp. 293-302 (July 1997).
[6] E. Lindholm, M. J. Kilgard, H. Moreton, "A User-Programmable Vertex Engine," In Proc. SIGGRAPH 2001, pp. 149-158 (August 2001).
[7] Chromium: http://chromium.sourceforge.net/
[8] NVIDIA Tesla: http://www.nvidia.com/object/tesla_computing_solutions.html
[9] Adobe AIR: http://labs.adobe.com/technologies/air/
[10] NVIDIA CUDA: http://developer.nvidia.com/object/cuda.html
[11] ATI CTM: http://ati.amd.com/companyinfo/researcher/documents/ATI_CTM_Guide.pdf
[12] Virtual Anatomia: http://www.sgi.co.jp/products/software/va/
[13] Mazda Visual IT Presentation: http://www.sgi.co.jp/newsroom/press_releases/2006/may/mazda.html
[14] FLUIDISISTA: http://www.sgi.co.jp/products/software/fluid/