RNN Explore
RNN,	LSTM,	GRU,	Hyperparameters
By	Yan	Kang
CONTENT
1. Three Recurrent Cells
2. Hyperparameters
3. Experiments and Results
4. Conclusion
RNN Cells
Why RNN?
Standard Neural Network: only accepts a fixed-size vector as input and output.
Images from:
https://en.wikipedia.org/wiki/Artificial_neural_network
http://agustis-place.blogspot.com/2010/01/4th-eso-msc-computer-assisted-task-unit.html?_sm_au_=iVVJSQ4WZH27rJM0
Vanilla RNN
Add the recurrent connections:
Implement it in one minute: h_t = tanh(x_t · U + h_{t-1} · W + b)
Image from: http://colah.github.io/posts/2015-08-Understanding-LSTMs/
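To make the one-line recurrence concrete, here is a minimal NumPy sketch; the shapes, initialization, and variable names are illustrative assumptions, not the code used in the experiments.

```python
import numpy as np

def rnn_forward(x_seq, U, W, b):
    """Run a vanilla RNN over a sequence x_seq of shape (T, input_size)."""
    hidden_size = W.shape[0]
    h = np.zeros(hidden_size)             # h_0: initial hidden state
    states = []
    for x_t in x_seq:                     # one pass per time step t
        h = np.tanh(x_t @ U + h @ W + b)  # the whole "cell" is this one line
        states.append(h)
    return np.stack(states)               # (T, hidden_size)

# Toy usage: 5 time steps, 3 input features, 4 hidden units.
rng = np.random.default_rng(0)
U = rng.normal(scale=0.1, size=(3, 4))
W = rng.normal(scale=0.1, size=(4, 4))
b = np.zeros(4)
print(rnn_forward(rng.normal(size=(5, 3)), U, W, b).shape)  # (5, 4)
```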
LSTM
Image from: http://colah.github.io/posts/2015-08-Understanding-LSTMs/
LSTM	
Limitation?
Redundant	gates/parameters:
“The output gate was the least important for the performance of the LSTM. When removed, h_t simply becomes tanh(C_t), which was sufficient for retaining most of the LSTM’s performance.”
-- Google, “An Empirical Exploration of Recurrent Network Architectures”
Image from: http://colah.github.io/posts/2015-08-Understanding-LSTMs/
The	LSTM	unit	computes	the	new	memory	content	
without	any	separate	control	of	the	amount	of	information	
flowing	from	the	previous	time	step.	
-- “Empirical	Evaluation	of	Gated	Recurrent	Neural	
Networks	on	Sequence	Modeling”
Image from: http://colah.github.io/posts/2015-08-Understanding-LSTMs/
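To make the quoted point concrete, here is a hedged NumPy sketch of one standard LSTM step; the weight layout and names are assumptions. The comment marks where dropping the output gate reduces h_t to tanh(C_t).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, C_prev, W, b):
    """One LSTM step; W maps [x_t, h_prev] to the four stacked gate pre-activations."""
    z = np.concatenate([x_t, h_prev]) @ W + b   # shape: (4 * hidden_size,)
    f, i, o, g = np.split(z, 4)                 # forget, input, output, candidate
    f, i, o = sigmoid(f), sigmoid(i), sigmoid(o)
    C_t = f * C_prev + i * np.tanh(g)           # new cell state
    h_t = o * np.tanh(C_t)                      # with the output gate removed: h_t = np.tanh(C_t)
    return h_t, C_t
```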
GRU
Image from: http://colah.github.io/posts/2015-08-Understanding-LSTMs/
LSTM
GRU
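For comparison, a hedged sketch of one GRU step (following the formulation in Chung et al.); weight names and shapes are assumptions. Note the two gates and the single hidden state, with no separate cell state and no output gate.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def gru_step(x_t, h_prev, Wz, Wr, Wh, bz, br, bh):
    """One GRU step: two gates, one state, no separate cell state or output gate."""
    xh = np.concatenate([x_t, h_prev])
    z = sigmoid(xh @ Wz + bz)                                      # update gate
    r = sigmoid(xh @ Wr + br)                                      # reset gate
    h_cand = np.tanh(np.concatenate([x_t, r * h_prev]) @ Wh + bh)  # candidate state
    return (1.0 - z) * h_prev + z * h_cand                         # new hidden state
```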
Hyperparameters
Number of Layers
Other than using a single recurrent layer, there is another very common way to construct recurrent networks: stacking multiple recurrent layers.
Stacked RNN:
Image from: http://colah.github.io/posts/2015-08-Understanding-LSTMs/
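As a hedged illustration of stacking, here is a small tf.keras sketch; the GRU choice, layer sizes, and output layer are assumptions, not the configuration used in the experiments. Lower layers must return the full sequence so the next layer receives one hidden state per time step.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    # Lower layers return the full sequence so the next layer
    # receives one hidden state per time step.
    tf.keras.layers.GRU(64, return_sequences=True, input_shape=(None, 3)),
    tf.keras.layers.GRU(64, return_sequences=True),
    # The top recurrent layer returns only its last state, which feeds the classifier.
    tf.keras.layers.GRU(64),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```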
Hidden Size
RNN LSTM GRU
Hidden size:
Hidden state size in RNN
Cell state and hidden state sizes in LSTM
Hidden state size in GRU
The larger the hidden size, the more complex the patterns the recurrent unit can memorize and represent.
Image from: http://colah.github.io/posts/2015-08-Understanding-LSTMs/
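The cost of a larger hidden size can be seen from rough per-cell parameter counts. The sketch below assumes the standard gate counts (1 weight block for the vanilla RNN, 3 for GRU, 4 for LSTM), one bias per block, and an arbitrary example input size D = 12; it ignores implementation details such as separate input/recurrent biases.

```python
# D: input feature size, H: hidden size.
def rnn_params(D, H):   return H * (D + H) + H        # 1 weight block
def gru_params(D, H):   return 3 * (H * (D + H) + H)  # 3 gates/blocks
def lstm_params(D, H):  return 4 * (H * (D + H) + H)  # 4 gates/blocks

for H in (32, 64, 128):
    print(H, rnn_params(12, H), gru_params(12, H), lstm_params(12, H))
```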
Batch Size
Image from: https://www.quora.com/Whats-the-difference-between-gradient-descent-and-stochastic-gradient-descent
Optimization function:
B = |X|: gradient descent (full batch)
1 ≤ B < |X|: stochastic (mini-batch) gradient descent
Batch size B: the number of instances used for each weight update.
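A minimal sketch of how B enters the training loop is shown below; the model, the gradient function `grad_fn`, and the data are placeholders, not the actual training code.

```python
import numpy as np

def train_epoch(X, y, weights, grad_fn, lr, batch_size):
    """One epoch of mini-batch training; grad_fn is a placeholder gradient function."""
    idx = np.random.permutation(len(X))
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        g = grad_fn(weights, X[batch], y[batch])  # gradient averaged over B instances
        weights = weights - lr * g                # one weight update per batch
    return weights
```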
Learning Rate
Image from: https://www.quora.com/Whats-the-difference-between-gradient-descent-and-stochastic-gradient-descent
Optimization function:
Learning rate ε(t): how much the weights are changed in each update.
Decrease it when getting close to the target.
Two learning rate updating methods were used in the experiments:
First: after each epoch, the learning rate decays by 1/2.
Second: after every 5 epochs, the learning rate decays by 1/2.
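Written out explicitly (assuming the decay factor of 1/2 as reconstructed from the slide), the two schedules look like this:

```python
def lr_schedule_1(initial_lr, epoch):
    """Halve the learning rate after every epoch."""
    return initial_lr * 0.5 ** epoch

def lr_schedule_2(initial_lr, epoch):
    """Halve the learning rate after every 5 epochs."""
    return initial_lr * 0.5 ** (epoch // 5)
```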
Experiments	&	Results
Variable Length:
[Figure: sequences s_0, s_1, s_2, s_3 of different lengths are zero-padded to a common length and grouped into Batch 0.]
Variable	Length	vs	Sliding	Window
Sliding Window:
[Figure: each sequence s_i with label l_i is sliced into fixed-length windows w_{i0}, w_{i1}, …; every window inherits the label l_i, and the windows are spread across Batch 0, Batch 1, ….]
Variable	Length	vs	Sliding	Window
Sliding Window:
Advantages:
Each sequence can generate tens or even hundreds of subsequences. With the same batch size as the variable-length method, this means more batches per epoch and more weight updates per epoch, i.e. a faster convergence rate per epoch.
Disadvantages:
1) Time consuming: each epoch takes longer.
2) Assigning the same label to every subsequence may be biased and may keep the network from converging.
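A sketch of the two preprocessing schemes compared here; the window length, step, and zero-padding convention are illustrative assumptions.

```python
import numpy as np

def pad_to_max(sequences):
    """Variable Length: zero-pad every sequence of shape (T_i, D) to the longest T_i."""
    T = max(len(s) for s in sequences)
    D = sequences[0].shape[1]
    out = np.zeros((len(sequences), T, D))
    for i, s in enumerate(sequences):
        out[i, :len(s)] = s
    return out

def sliding_windows(sequence, label, window=20, step=5):
    """Sliding Window: slice one labeled sequence into many labeled subsequences."""
    subs = [sequence[start:start + window]
            for start in range(0, len(sequence) - window + 1, step)]
    return subs, [label] * len(subs)   # every subsequence inherits the sequence label
```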
Variable	Length	vs	Sliding	Window
Variable	Length	vs	Sliding	Window
Variable	Length: AUSLAN	Dataset	2565	instances
Sliding	Window: AUSLAN	Dataset	2565	instances
Variable	Length	vs	Sliding	Window
Variable	Length: Character	Trajectories	Dataset	2858	instances
Variable	Length	vs	Sliding	Window
Sliding	Window: Character	Trajectories	Dataset	2858	instances
Variable	Length	vs	Sliding	Window
Variable	Length: Japanese	Vowels	Dataset	640	instances
Variable	Length	vs	Sliding	Window
Sliding	Window: Japanese	Vowels	Dataset	640	instances
Variable	Length	vs	Sliding	Window
RNN	vs	LSTM	vs	GRU
GRU is a simpler variant of LSTM that shares many of the same properties: both can prevent vanishing gradients and “remember” long-term dependencies, and both outperform the vanilla RNN on almost all of the datasets, whether using Sliding Window or Variable Length.
But GRU has fewer parameters than LSTM, and thus may train a bit faster or need fewer iterations to generalize. As shown in the plots, GRU does converge slightly faster.
RNN	vs	LSTM	vs	GRU
RNN	vs	LSTM	vs	GRU
RNN	vs	LSTM	vs	GRU
RNN	vs	LSTM	vs	GRU
Hyperparameters Comparisons	
• Learning	Rate
• Batch	Size
• Number	of	Layers
• Hidden	Size
Learning	Rate	
Two learning rate updating methods were used in the experiments:
• First: after each epoch, the learning rate decays by 1/2; 24 epochs in total.
• Second: after every 5 epochs, the learning rate decays by 1/2; 120 epochs in total.
The left side in the following plots uses 24 epochs, and the right side uses 120 epochs. Because of the change in the learning-rate update schedule, some configurations that do not converge on the left (24 epochs) work quite well on the right (120 epochs).
Learning	Rate	
Japanese	Vowels,	Sliding	Window,	LSTM
24	epochs 120	epochs
Learning	Rate	
Japanese	Vowels,	Sliding	Window,	GRU
24	epochs 120	epochs
Learning	Rate	
Japanese	Vowels,	Variable	Length,	LSTM
24	epochs 120	epochs
Learning	Rate	
Japanese	Vowels,	Variable	Length,	GRU
24	epochs 120	epochs
Batch	Size
A larger batch size means each weight update uses more instances, so the gradient estimate is less noisy, but there are fewer updates per epoch and thus a slower convergence rate.
On the contrary, a small batch size updates the weights more frequently, so it converges faster per epoch but with noisier updates.
What we ought to do is find the balance between the convergence rate and this risk.
Batch	Size
Japanese	Vowels
Sliding	Window
Batch	Size
Japanese	Vowels
Variable	Length
Batch	Size
UWave
Full	Length	Sliding	Window
Number	of	layers
A multi-layer RNN is more difficult to converge: as the number of layers increases, convergence becomes slower.
And even when it does converge, we do not gain much from the larger number of hidden units inside it, at least on the Japanese Vowels dataset. The final accuracy does not seem better than that of the one-layer recurrent networks. This matches some papers' results that stacked RNNs can be replaced by one layer with a larger hidden size.
Number	of	layers
Japanese	Vowels
Sliding	Window
Number	of	layers
Japanese	Vowels
Variable	Length
Number	of	layers
UWave
Full	length	Sliding	Window
Hidden	Size
On both Japanese Vowels and UWave, the larger the hidden size of the LSTM and GRU, the better the final accuracy. Different hidden sizes share a similar convergence rate on LSTM and GRU. The trade-off of a larger hidden size is that each epoch takes longer to train.
There is some abnormal behavior on the vanilla RNN, which might be caused by vanishing gradients.
Hidden	Size
Japanese	Vowels
Sliding	Window
Hidden	Size
Japanese	Vowels
Variable	Length
Hidden	Size
UWave
Full	Length	Sliding	Window
Conclusion
Conclusion
In this presentation, we first discussed:
• What RNN, LSTM and GRU are, and why we use them.
• The definitions of the four hyperparameters.
And through roughly 800 experiments, we analyzed:
• The difference between Sliding Window and Variable Length.
• The differences among RNN, LSTM and GRU.
• The influence of the number of layers.
• The influence of the hidden size.
• The influence of the batch size.
• The influence of the learning rate.
Generally speaking, GRU works better than LSTM, and, because it suffers from vanishing gradients, the vanilla RNN works worst.
Sliding window is a good fit for datasets with limited instances, where 1) the sequences contain repetitive features or 2) a subsequence can capture the key features of the full sequence.
All four hyperparameters play an important role in tuning the network.
Limitations
However, there are still some limitations:
1. Variable length:
• The sequence length is too long (~100-300 for most datasets, some even larger than 1000).
2. Sliding window:
• Ignores the continuity between the sliced subsequences.
• Biased labeling may cause similar subsequences to be labeled differently.
Luckily, these two limitations could be solved simultaneously
-- by Truncated Gradient.
What’s	next?
Truncated gradient:
• Slice the sequences in a special order so that, between neighboring batches, each position in the batch holds a continuous piece of the same sequence.
• Unlike the Sliding Window method, which initializes the states of each batch randomly around zero, the final states from the last batch are used to initialize the states of the next batch.
• This way, even though the recurrent units are unrolled over a short range (e.g. 20 steps), the states can be passed through and the earlier "memory" is preserved.
[Figure: each sequence s_i is sliced into consecutive windows w_{i0}, w_{i1}, …; the windows of a sequence keep the same slot across Batch 0, Batch 1, …, and the state at the end of one batch (starting from the initialized state s_0) is used to initialize the state for the next batch.]
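A hedged sketch of this batching scheme; the slot layout and the cell function `step_fn` are assumptions, and the gradient computation itself is omitted.

```python
import numpy as np

def truncated_training(sequences, window, hidden_size, step_fn, params):
    """sequences: one array per batch slot; step_fn is a placeholder cell function,
    e.g. the one-line tanh update from the vanilla RNN sketch."""
    n_slots = len(sequences)
    state = np.zeros((n_slots, hidden_size))        # initialize the states once
    T = min(len(s) for s in sequences)
    for start in range(0, T - window + 1, window):  # Batch 0, Batch 1, ...
        batch = [s[start:start + window] for s in sequences]
        for slot in range(n_slots):                 # same slot -> same sequence
            for x_t in batch[slot]:                 # unroll only `window` steps
                state[slot] = step_fn(x_t, state[slot], params)
        # a real implementation would compute the truncated gradient and update
        # the weights here; `state` is carried over to the next batch unchanged
    return state
```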
What’s	next?
Averaged	outputs	to	do	classification:
• Right now, we use the last time step's output, apply a softmax, and use cross-entropy to estimate each class's probability.
• Using the averaged outputs of all time steps, or a weighted average of the outputs, might be a good choice to try.
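A sketch of the proposed alternative next to the current approach; the output projection and names are assumptions, and `outputs` stands for the per-time-step hidden states of shape (T, hidden_size).

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify_last_step(outputs, W_out, b_out):
    """Current approach: softmax over the last time step's output only."""
    return softmax(outputs[-1] @ W_out + b_out)

def classify_averaged(outputs, W_out, b_out, weights=None):
    """Proposed: average (or weighted-average) the outputs of all T time steps."""
    if weights is None:
        pooled = outputs.mean(axis=0)                      # plain average over time
    else:
        pooled = (weights[:, None] * outputs).sum(axis=0)  # weighted average
    return softmax(pooled @ W_out + b_out)
```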
Prediction (sequence modeling):
• We already built a sequence-to-sequence model with an L2-norm loss function.
• What remains is finding a proper way to analyze the predicted sequences.
THANK YOU
Thanks to Dmitriy for his guidance,
and to Feipeng and Xi for the discussions.
Questions?