Multiphase flow modeling and simulation: HPC-enabled capabilities today and tomorrow
Igor A. Bolotnov, Assistant Professor, Department of Nuclear Engineering, North Carolina State University
Joint Faculty Appointment with Oak Ridge National Laboratory through the DOE Energy Innovation Hub "CASL"
Acknowledgements: Research support: DOE-CASL; U.S. NRC; NSF. HPC resources: INCITE and ALCC awards through DOE.
54th HPC User Forum, September 15th-17th, 2014 – Seattle, Washington
Outline
• NC State:
– College of Engineering
– Department of Nuclear Engineering
• Massively parallel CFD-DNS code PHASTA:
– Application examples
– Scaling performance
• Interface tracking simulation examples:
– Lift force evaluation
– Bubble influence on the turbulence
– Subchannel simulation of single/two-phase turbulent flows
– Reactor subchannel geometry with spacer grids / mixing vanes
• Future applications:
– Useful/usable virtual experiments for nuclear industry
– Computing requirements / estimates / possible timeframe
http://www.hpcwire.com/2014/06/05/titan-enables-next-gen-nuclear-models/
NC State College of Engineering
• Fall 2013: Total enrollments 9,114
– Undergraduate: 6,186
– Masters: 1,803
– PhD: 1,125
– Size & quality indicators on the rise
• FY 13 research expenditures > $160M
• Among all U.S. engineering colleges:
– 10th largest undergraduate enrollment
– 9th largest graduate enrollment
– 8th in number of BS degrees awarded
– 14th in number of MS degrees awarded
– 16th in number of PhD degrees awarded
• 281 tenured & tenure-track faculty members
– 13 members of the National Academy of Engineering
Nuclear Engineering Department Brief History 
1950 Established as graduate program in Physics Dept 
1953 First non-governmental university-based research reactor 
1954 Two PhDs awarded 
1961 Department of Nuclear Engineering established 
1965 Rapid growth from 4 to 9 faculty; thrust areas: (1) Fission power reactors; (2) Radiation applications 
1973 1MW PULSTAR operational (4th on-campus reactor) 
1983 Added Plasma/fusion graduate track 
1994 Combined five-year BS/MNE degree established 
2008 Master of Nuclear Engineering degree via Distance Ed
2014 PULSTAR is being licensed by NRC for 2MW operation
NCSU's Nuclear Engineering Today
• Our Faculty:
– 8 active faculty in October 2007 → 14 today
– 2 open positions currently in search
– 2 endowed chairs: Progress Energy (in search) & Duke Energy
– Multiple Joint Faculty Appointments with ORNL & INL
– Lead in $25M Consortium for Nonproliferation Enabling Capabilities
– Pivotal role in CASL: Turinsky Chief Scientist, Doster Ed Programs
– Gilligan: Director of NEUP (~$60M annually in DOE research)
• Our Students:
– Growing enrollments: ~100 Grads, ~150 UGs (sophomore – senior)
– Won the Mark Mills Award (best PhD) 10 times in the Award's 55 years
– ~10% win one or more awards, scholarships, or fellowships annually
• Space:
– Increased by more than 50% since 2008
– Future move to new building on Centennial Campus
CFD-DNS code overview
PHASTA is a Parallel, Hierarchic, higher-order accurate, Adaptive, Stabilized, finite element method (FEM) Transient Analysis flow solver for both incompressible and compressible flows (Jansen, 1993). Governing equations:
• Mass conservation: $u_{i,i} = 0$
• Momentum conservation: $\rho(\phi)\,u_{i,t} + \rho(\phi)\,u_j u_{i,j} = -p_{,i} + \tau_{ij,j} + f_i$
• Incompressible Newtonian fluid: $\tau_{ij} = 2\mu(\phi)\,S_{ij} = \mu(\phi)\left(u_{i,j} + u_{j,i}\right)$
• Continuum Surface Tension (CST) model of Brackbill et al. (1992)
The level-set interface tracking method is implemented in PHASTA, which allows it to simulate two-phase phenomena.
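For reference, a sketch of the level-set formulation the slide refers to, in its standard generic form (the slide only names the method; the smoothed-Heaviside regularization of the fluid properties is an assumed, standard choice):

$\frac{\partial \phi}{\partial t} + u_j\,\phi_{,j} = 0, \qquad \rho(\phi) = \rho_g + (\rho_l - \rho_g)\,H_\varepsilon(\phi), \qquad \mu(\phi) = \mu_g + (\mu_l - \mu_g)\,H_\varepsilon(\phi)$

Here $\phi$ is the signed distance to the interface and $H_\varepsilon$ is a smoothed Heaviside function, so density and viscosity vary smoothly across the interface.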
PHASTA application examples
Scaling performance
• Most recent: ANL's BG/Q "Mira":
– Complex wing design on 11B elements
– Run on up to 3M parts (mesh partitions)
• Strong scaling results with 5.07B elements, up to 294,912 cores on JUGENE and up to 98,304 cores on Kraken
[Figure: strong scaling factor vs. core count (32K–768K cores) for 1, 2, and 4 MPI ranks per core.]
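For readers unfamiliar with the metric, a minimal sketch of how strong-scaling speedup and parallel efficiency are computed from wall-clock times at a fixed problem size (the timings below are made up for illustration, not PHASTA measurements):

```python
# Illustrative strong-scaling reduction (hypothetical timings, not PHASTA data).
# Speedup S(p) = t_base / t(p); efficiency E(p) = S(p) * p_base / p.

def scaling_table(timings, base_cores):
    """timings: dict {core_count: wall_clock_seconds} for a fixed problem size."""
    t_base = timings[base_cores]
    rows = []
    for cores in sorted(timings):
        speedup = t_base / timings[cores]
        efficiency = speedup * base_cores / cores
        rows.append((cores, speedup, efficiency))
    return rows

# Example: a fixed mesh timed at several core counts (made-up numbers).
timings = {32768: 120.0, 65536: 61.5, 131072: 32.0, 262144: 17.5}
for cores, s, e in scaling_table(timings, base_cores=32768):
    print(f"{cores:>7d} cores  speedup {s:5.2f}  efficiency {e:5.1%}")
```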
Lift force: introduction
Physics
• Four fundamental factors govern the lift force (Hibiki & Ishii, 2007):
– Relative velocity
– Shear rate
– Bubble rotational speed
– Bubble surface boundary condition
Control solution
• PID-based controller
• Controls the bubble's location at a (quasi) steady state
• Control forces balance the lift and drag forces
Force balance:
$F_D = F_B + F_x = (\rho_l - \rho_g) V_b g + F_x = \tfrac{1}{2} C_D \rho_l v_r^2 A$
$F_L = F_y = -C_L \rho_l V_b \left(\vec{v}_r \times \mathrm{curl}\,\vec{v}_l\right)$
[Figure: simulation domain of dimensions 2πδ × 2δ × 2πδ/3, with coordinate axes x, y, z and gravity direction g indicated.]
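A minimal sketch of the PID-based control idea described above, assuming a hypothetical solver interface that reports the bubble centroid and accepts an external control force each time step (the actual PHASTA controller is not shown on the slide):

```python
# Minimal PID controller sketch for holding a bubble at a target position.
# The solver interface (get_bubble_centroid, apply_body_force) is hypothetical;
# at quasi-steady state the time-averaged control force balances drag (streamwise)
# and lift (wall-normal), which is how the coefficients are measured.

class BubblePositionPID:
    def __init__(self, kp, ki, kd, target, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.target = target          # desired bubble position
        self.dt = dt                  # simulation time step
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, position):
        """Return the control force for the current bubble position."""
        error = self.target - position
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Usage sketch (illustrative gains):
controller = BubblePositionPID(kp=50.0, ki=5.0, kd=10.0, target=0.0, dt=1e-4)
# each step: f_ctrl = controller.update(get_bubble_centroid()); apply_body_force(f_ctrl)
```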
Drag force validation
[Figure: drag coefficient vs. Reynolds number; PHASTA results compared with Tomiyama's correlation.]
Case studies (relative velocity studies, dv/dy = 1.0):

Case     C_L      C_D
R12.5    0.3596   0.6805
R17.5    0.3775   0.4493
R25      0.3807   0.3172
R40      0.4086   0.2075
R50      0.4264   0.1722
R60      0.4223   0.1520
R70      0.4142   0.1372
R80      0.4177   0.1266
R90      0.3970   0.1198
R100     0.3925   0.1139
Tomiyama A., Kataoka I., Zun I., Sakaguchi T. Drag coefficients of single bubbles under normal and micro gravity conditions. JSME International Journal, Series B, Fluids and Thermal Engineering. 1998;41(2):472-479.
$C_D = \min\!\left[\frac{16}{Re}\left(1 + 0.15\,Re^{0.687}\right),\ \frac{48}{Re}\right], \qquad Re = \frac{\rho_L V_T d}{\mu_L}$
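A small sketch evaluating the Tomiyama et al. (1998) drag correlation exactly as written above (function and variable names are illustrative):

```python
# Tomiyama et al. (1998) drag coefficient as given on the slide:
#   C_D = min[ (16/Re)(1 + 0.15 Re^0.687), 48/Re ],  Re = rho_L * V_T * d / mu_L

def bubble_reynolds(rho_l, v_t, d, mu_l):
    """Bubble Reynolds number from liquid density, terminal velocity, diameter, viscosity."""
    return rho_l * v_t * d / mu_l

def drag_coefficient_tomiyama(re):
    return min(16.0 / re * (1.0 + 0.15 * re**0.687), 48.0 / re)

# Example: evaluate the correlation over a range of Reynolds numbers.
for re in (12.5, 25.0, 50.0, 100.0, 500.0):
    print(f"Re = {re:6.1f}  C_D = {drag_coefficient_tomiyama(re):.4f}")
```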
Wall effect study
[Figure: lift coefficient vs. minimum distance of the bubble interface to the top wall (in bubble radii); measured lift coefficients range from about -0.33 to 1.55.]
The wall has an effect on the bubble behavior: an emerging concept is to use a variable lift force instead of a counteracting "wall" force in the multiphase CFD approach.
High shear laminar flow (110 s⁻¹)
About 1M cells; a 64-core node runs the case in ~24 hours.
Lift and drag coefficient vs. shear rate
[Figures: lift coefficient and drag coefficient vs. shear rate (s⁻¹); PHASTA values compared with the Tomiyama et al. (2002) and Legendre & Magnaudet (1998) lift predictions and the Tomiyama et al. (1998) drag prediction.]
• Trends agree with the Legendre & Magnaudet (1998) observation
• The correlations are independent of shear rate, except Legendre & Magnaudet (1998)
$C_L = \sqrt{\left[\frac{6}{\pi^2}\,\frac{2.255}{\sqrt{Re\,Sr}\left(1 + 0.2\,Re/Sr\right)^{3/2}}\right]^2 + \left[\frac{1}{2}\,\frac{1 + 16/Re}{1 + 29/Re}\right]^2}, \qquad Sr = \frac{\omega\,d}{v_r}$
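A small sketch of the Legendre & Magnaudet (1998) lift correlation as reconstructed above (names are illustrative; Sr is the dimensionless shear rate):

```python
import math

# Legendre & Magnaudet (1998) lift coefficient, as reconstructed from the slide:
#   C_L = sqrt( C_L_lowRe^2 + C_L_highRe^2 )
#   C_L_lowRe  = (6/pi^2) * 2.255 / ( sqrt(Re*Sr) * (1 + 0.2*Re/Sr)^1.5 )
#   C_L_highRe = 0.5 * (1 + 16/Re) / (1 + 29/Re)
#   Sr = omega * d / v_r  (shear rate * bubble diameter / relative velocity)

def lift_coefficient_lm(re, sr):
    cl_low = (6.0 / math.pi**2) * 2.255 / (math.sqrt(re * sr) * (1.0 + 0.2 * re / sr) ** 1.5)
    cl_high = 0.5 * (1.0 + 16.0 / re) / (1.0 + 29.0 / re)
    return math.sqrt(cl_low**2 + cl_high**2)

def dimensionless_shear(omega, d, v_r):
    return omega * d / v_r

# Example: sweep the shear rate for a fixed bubble (illustrative numbers).
d, v_r, re = 1.0e-3, 0.25, 250.0
for omega in (10.0, 30.0, 50.0, 70.0, 90.0, 110.0):
    sr = dimensionless_shear(omega, d, v_r)
    print(f"shear {omega:5.1f} 1/s  Sr = {sr:.4f}  C_L = {lift_coefficient_lm(re, sr):.3f}")
```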
High shear turbulent flow (470 s⁻¹)
Multiple bubble simulations
60 bubbles; 20M hex cell mesh; ~4,096-core runs; ~5M CPU-hours for good statistics
Bubble tracking / advanced analysis
Each bubble can be marked with a unique ID and tracked during the simulations.
Data collection / analysis
Probes are created and placed in the domain in a flat-plane arrangement, as shown in the figure below.
$U_i(t) = \frac{1}{N_e}\sum_{m=1}^{N_e}\frac{1}{N_w}\sum_{j=1}^{N_w} u_i^m(t + t_j)$
$k(t) = \frac{1}{N_e}\sum_{m=1}^{N_e}\frac{1}{N_w}\sum_{j=1}^{N_w}\sum_{i=1}^{3}\frac{1}{2}\left(u_i'^{\,m}(t + t_j)\right)^2$
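A sketch of the window-plus-ensemble averaging defined by the two expressions above, assuming probe signals stored as a NumPy array (the array layout is an illustrative assumption, not the actual PHASTA probe format):

```python
import numpy as np

# Window + ensemble averaging of probe signals, following the expressions above.
# u has shape (N_e, N_w, 3): N_e probes in the homogeneous plane (ensemble),
# N_w samples in the time-averaging window, 3 velocity components.

def mean_velocity(u):
    """U_i(t): average over the window (axis 1) and the ensemble (axis 0)."""
    return u.mean(axis=(0, 1))                      # shape (3,)

def turbulent_kinetic_energy(u):
    """k(t) = < 0.5 * sum_i u_i'^2 > over window and ensemble."""
    fluct = u - mean_velocity(u)                    # u' = u - U
    return 0.5 * np.sum(fluct**2, axis=2).mean()    # scalar

# Example with synthetic data: 64 probes, 200 samples per window.
rng = np.random.default_rng(0)
u = rng.normal(loc=[1.0, 0.0, 0.0], scale=0.1, size=(64, 200, 3))
print(mean_velocity(u), turbulent_kinetic_energy(u))
```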
Subchannel flow - initialization
54M-element tetrahedral unstructured mesh
Subchannel: two-phase flow
Averaged data
$U^+ = \frac{1}{\kappa}\log y^+ + B$
Law-of-the-wall analysis (dashed line in the figure) yields coefficients of B = 5.8 and κ = 0.48 for the subchannel simulation.
[Figure: law-of-the-wall profile (U+ vs. y+) for the single-phase simulation of a subchannel. Three time-averaging windows are shown in red, green and blue; the viscous sublayer (solid black) and log layer (dashed black) are shown.]
[Figure: turbulent kinetic energy (TKE, blue) profile and dimensionless velocity (U+, red) vs. y+.]
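A sketch of how κ and B can be extracted from an averaged velocity profile by a least-squares fit of U+ against log y+ over the log layer (the fitting range is an illustrative choice):

```python
import numpy as np

# Fit U+ = (1/kappa) * log(y+) + B over the log layer (illustrative y+ range 30-300).
def fit_log_law(y_plus, u_plus, y_min=30.0, y_max=300.0):
    mask = (y_plus >= y_min) & (y_plus <= y_max)
    slope, intercept = np.polyfit(np.log(y_plus[mask]), u_plus[mask], deg=1)
    kappa = 1.0 / slope
    return kappa, intercept     # intercept is B

# Example with a synthetic profile built from kappa = 0.48, B = 5.8 (the slide's values).
y_plus = np.geomspace(1.0, 600.0, 80)
u_plus = np.where(y_plus < 11.0, y_plus, np.log(y_plus) / 0.48 + 5.8)
print(fit_log_law(y_plus, u_plus))   # should recover roughly (0.48, 5.8)
```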
Multi-bubble single subchannel flow
320M elements; 72 bubbles at 1% void fraction; Re = 60,000
Simulated Mixing Vanes Design
Realistic reactor spacer grids (modified – not EC) and mixing vanes used for turbulent flow simulations.
Movie produced by in-situ visualization!
190M-element mesh
Some future virtual experiments
• Increasing fidelity of bubbly flow simulations
• Incorporating complex geometry analysis for nuclear applications
• Phase-change simulations
• Boiling flows at challenging conditions, including flow regime transition
PWR-relevant problem sizes
• Current runs (2013):
– 2x2 subchannel, 40 cm long
– Reynolds number of 180 (3.6% of typical normal operating conditions)
– Mesh size: 190M elements
• Short term (2015):
– Reynolds number of 500 (10% of PWR conditions)
– Mesh size: 2B elements
– Feasible on 512K BG cores at 4K elements per core
– Would resolve 600 bubbles at 1% void fraction
• Mid term (2020):
– Reynolds number of 5000 (typical PWR conditions)
– Mesh size: 355B elements
– Could run on up to 90M(!) cores at 4K elements per core
– Would resolve 118,000 bubbles at 1% void fraction
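The core counts and bubble counts on this slide follow from simple bookkeeping on mesh size, per-core load, and void fraction; a rough sketch of that arithmetic (the elements-per-bubble ratio is inferred from the slide's own numbers, not a stated rule):

```python
# Back-of-the-envelope check of the slide's core and bubble counts.
# Assumption (inferred from the slide's numbers, not a stated rule): at 1% void
# fraction roughly one resolved bubble per ~3.3 million mesh elements.

ELEMENTS_PER_BUBBLE_AT_1PCT_VOID = 3.3e6   # illustrative ratio

def estimate(elements, elements_per_core, void_fraction=0.01):
    cores = elements / elements_per_core
    bubbles = (void_fraction / 0.01) * elements / ELEMENTS_PER_BUBBLE_AT_1PCT_VOID
    return cores, bubbles

for label, elems, per_core in [("short term (2015)", 2e9, 4e3),
                               ("mid term (2020)", 355e9, 4e3),
                               ("long term (2030?)", 256e12, 16e3)]:
    cores, bubbles = estimate(elems, per_core)
    print(f"{label}: ~{cores:.2e} cores, ~{bubbles:.2e} bubbles")
```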
Larger domain problem sizes
• Long term capabilities (2030?):
– 17x17 subchannel, 4.0 m long
– Reynolds number of 5000 (typical PWR conditions)
– Mesh size: 256T elements
– Could run on up to 16B cores at 16K elements per core
– Would resolve 85M bubbles at 1% void fraction
• Direct simulation of whole reactor core (2060?):
– About 160 fuel assemblies, 17x17 each
– Reynolds number of 5000 (typical PWR conditions)
– Mesh size: 40,000T elements
– Could run on up to 320B cores at 128K elements per core
– Would resolve 13.6B bubbles at average 1% void fraction
• Time domain: larger meshes require smaller timesteps to maintain fidelity in two-phase flow simulations (see the sketch below)
• Computing cost? Advanced results must justify it!
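The time-domain remark can be made concrete with a CFL-type estimate: refining the mesh shrinks the admissible timestep at a fixed velocity, so cost grows faster than the element count alone. A rough sketch (the CFL target and bulk velocity are illustrative assumptions; the slide does not state PHASTA's actual timestep criterion):

```python
# CFL-type timestep estimate: dt <= CFL * dx / u.
# The CFL target and bulk velocity below are illustrative assumptions.

def timestep_estimate(element_size_m, bulk_velocity_ms, cfl_target=0.5):
    return cfl_target * element_size_m / bulk_velocity_ms

u_bulk = 5.0                           # m/s, illustrative PWR-like bulk velocity
for dx in (1.0e-4, 5.0e-5, 2.5e-5):    # finer meshes for higher Reynolds numbers
    dt = timestep_estimate(dx, u_bulk)
    print(f"dx = {dx:.1e} m  ->  dt <= {dt:.1e} s")
```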
Conclusions
• The ongoing effort will equip the massively parallel ITM code PHASTA with validated capabilities for high-fidelity simulation of multiphase flows.
• Advanced analysis tools are an integral component of large simulations and must be developed to take advantage of the vast amount of information provided by the DNS/ITM approach.
• HPC development allows high-fidelity modeling and simulation to be applied to a much wider set of problems. High-fidelity thermal hydraulic analysis will support the design and licensing of reactor core components in the near future.
Back-up: Resources / Capabilities
Local computing:
• Large-memory (1,024 GB RAM) rack-mounted workstation for mesh generation
• 6-node computing cluster with 64 cores per node and an InfiniBand interconnect (384 compute cores ≈ 3.3M CPU-hours annually)
Meshing:
• Simmetrix serial and parallel meshing tools for complex geometries (tested up to 92B elements)
• Pointwise license for CMFD meshing
Remote computing:
• 2014 ALCC award for 76.8M processor-hours on Mira (Argonne National Lab, #5 supercomputer, 768,000 computing cores on BG/Q)
• Access to Titan at ORNL (through CASL, about 5-20M processor-hours yearly), #2 supercomputer
• 2012 INCITE award for 14M processor-hours on Titan
