Convolutional Neural Network
Presented by Junho Cho (@junhocho)
© Junho Cho, 2016 1
Convolutional?
© Junho Cho, 2016 2
This is a Neural Network (NN)
© Junho Cho, 2016 3
Machine Learning =
Feature Representation + Classifier
such as ...
© Junho Cho, 2016 4
SIFT feature + SVM
© Junho Cho, 2016 5
HoG feature + Random Forest
© Junho Cho, 2016 6
HoG feature + SVM
© Junho Cho, 2016 7
SIFT feature + Random Forest
© Junho Cho, 2016 8
CNN feature + SVM
© Junho Cho, 2016 9
SIFT feature + Neural Network
© Junho Cho, 2016 10
Problem
Lots of feature extractors, hard feature engineering
Which classifier?
The framework is extremely modular
© Junho Cho, 2016 11
Deep Learning
enables learning representation and classifier
End-to-End (Learn together)
© Junho Cho, 2016 12
To be End-to-End
Every part of the network is differentiable,
for Back-propagation
© Junho Cho, 2016 13
Much easier training (relative to the past)
Don't need to extract features yourself
Just let the Neural Network
learn features by itself!
Including the classifier!
© Junho Cho, 2016 14
Requires less domain knowledge
Applies to various domains!
Speech, Text, Image, Reinforcement Learning
© Junho Cho, 2016 15
DenseCap : localize and caption (Vision + Natural Language)
© Junho Cho, 2016 16
and Better performance ⭐
© Junho Cho, 2016 17
This is a typical Convolutional
Neural Network (CNN)
[LeNet-5, LeCun 1998]
© Junho Cho, 2016 18
Basic CNN is
composed of
1. Convolution (Conv)
2. Pooling (Subsampling)
3. Rectified Linear Unit (ReLU)
4. Fully Connected layers (FC)
© Junho Cho, 2016 19
Basic CNN
[(Conv-ReLU)*n - POOL] * m - (FC-ReLU) * k - loss
that's it! for real
© Junho Cho, 2016 20
© Junho Cho, 2016 21
We will explain these computations later
© Junho Cho, 2016 22
CNN usage?
Mostly on
Images!
© Junho Cho, 2016 23
Used as image recognizer
• Object Classification (Recognition)
• Object Detection
• Image Captioning
• Visual Q&A
• Even in AlphaGo
© Junho Cho, 2016 24
Where to begin ...
the History!
© Junho Cho, 2016 25
[LeNet-5, LeCun 1998]
© Junho Cho, 2016 26
Shallow CNNs existed,
but deep CNNs weren't popular:
1. Computationally hard at the time
2. Vanishing gradient problem: couldn't train deep nets
© Junho Cho, 2016 27
We wanna go
deeeeeeeper
© Junho Cho, 2016 28
Deep Learning
These problems are now solved thanks to several
advances:
1. Lots of data (ImageNet)
2. Powerful computation (GPUs)
3. Some practical techniques (ReLU, Dropout)
© Junho Cho, 2016 29
What is ImageNet?
© Junho Cho, 2016 30
http://image-net.org
ImageNet is an image database containing 14,197,122 images
with labels.
ILSVRC : challenge of Classification/Localization/Detection
© Junho Cho, 2016 31
ImageNet Top-5 Classification
Error
© Junho Cho, 2016 32
Go deeeeeeeper
© Junho Cho, 2016 33
Where we use it
again?
© Junho Cho, 2016 34
Neural Art
video link
© Junho Cho, 2016 43
Now let's understand the computation
Conv, ReLU, Pool, FC
© Junho Cho, 2016 44
Reminder: Perceptron
© Junho Cho, 2016 45
Fully Connected (FC) layers
Densely connected.
It computes over all input neurons, so spatial information disappears.
© Junho Cho, 2016 46
FC is matrix multiplication.
output = tf.matmul(input, W)
If the input size is N
and the output size is M,
then W has N × M parameters.
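As a minimal sketch (with hypothetical sizes, not from the slides), an FC layer is just one matrix multiplication, and its parameter count is the product of the input and output sizes:

import tensorflow as tf

input_size, output_size = 4096, 1000   # example dimensions, chosen only for illustration
x = tf.placeholder(tf.float32, [None, input_size])
W = tf.Variable(tf.truncated_normal([input_size, output_size], stddev=0.01))
b = tf.Variable(tf.zeros([output_size]))
y = tf.matmul(x, W) + b                # output = input x W + b
# W alone holds input_size * output_size = 4,096,000 parameters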
© Junho Cho, 2016 47
Convolution
It keeps spatial information using convolutional filters
© Junho Cho, 2016 48
Reminder: 1D Convolution
© Junho Cho, 2016 49
But We do 2D
convolution on
images
© Junho Cho, 2016 50
We basically train these Conv filters
via Back-propagation
© Junho Cho, 2016 60
More hyperparameters in Conv
1. stride : step size of the Conv filter
2. padding : borders (usually zeros) added around the input
© Junho Cho, 2016 61
Summary of Conv
• Slide the Conv filter over the input.
• Maintain spatial info with shared filter weights
• Hyperparameters: kernel_size, filterNum, padding, stride
• Learnable parameters: the filter weights, kernel_size × kernel_size × input channels per filter
© Junho Cho, 2016 62
In TensorFlow
tf.nn.conv2d(input, filter,
strides, padding,
use_cudnn_on_gpu=None,
data_format=None,
name=None)
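For example, a minimal sketch with the MNIST-style shapes used later in this deck (assumed shapes, not part of the original slide):

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 28, 28, 1])                  # batch of 28x28 grayscale images
w = tf.Variable(tf.truncated_normal([3, 3, 1, 32], stddev=0.01))   # 3x3 kernel, 1 -> 32 channels
conv = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME')    # output shape: (?, 28, 28, 32)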
© Junho Cho, 2016 63
Quiz
© Junho Cho, 2016 66
© Junho Cho, 2016 67
• kernel_size = 3
• Stride = 2
• padding = 0
© Junho Cho, 2016 68
© Junho Cho, 2016 69
• kernel_size = 3
• Stride = 2
• padding = 1
© Junho Cho, 2016 70
© Junho Cho, 2016 71
• kernel_size = 3
• Stride = 1
• padding = 1
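A small sketch for checking the quiz answers, using the standard output-size formula; the 7x7 input size is an assumption, since the slides show the input only as an image:

def conv_output_size(input_size, kernel_size, stride, padding):
    # standard formula: floor((W - K + 2P) / S) + 1
    return (input_size - kernel_size + 2 * padding) // stride + 1

print(conv_output_size(7, 3, 2, 0))   # 3
print(conv_output_size(7, 3, 2, 1))   # 4
print(conv_output_size(7, 3, 1, 1))   # 7  (spatial size preserved)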
© Junho Cho, 2016 72
Compare FC and Conv
Local Invariance
figure credit to CS231n
© Junho Cho, 2016 77
CNN is powerful because
• Local invariance
Because the convolution filters are 'sliding' over the input image, the
exact location of the object we want to find does not matter
much.
• Fewer parameters
This helps prevent overfitting.
© Junho Cho, 2016 78
Basically, Conv is a special case of FC.
• Doubly block circulant matrix
• Toeplitz matrix
Conv can be implemented as a matrix multiplication
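A minimal NumPy sketch of this idea (assumed, not from the slides): instead of building the Toeplitz / circulant matrix explicitly, it flattens each receptive field into a row (im2col) and does one matrix multiplication:

import numpy as np

def conv2d_via_matmul(x, k):
    # valid cross-correlation of a 2D input x with kernel k, done as a single matrix multiplication
    H, W = x.shape
    kh, kw = k.shape
    out_h, out_w = H - kh + 1, W - kw + 1
    cols = np.array([x[i:i + kh, j:j + kw].ravel()   # im2col: one flattened receptive field per row
                     for i in range(out_h) for j in range(out_w)])
    return (cols @ k.ravel()).reshape(out_h, out_w)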
© Junho Cho, 2016 79
We apply ReLU to give the model non-linearity.
© Junho Cho, 2016 80
We need an activation function after Conv
© Junho Cho, 2016 81
Rectified Linear Unit
This is an activation function that replaces the sigmoid
© Junho Cho, 2016 82
A single Perceptron cannot solve XOR.
1. Thus bend the space (add non-linearity!)
2. and stack layers: Multi-Layer Perceptron
© Junho Cho, 2016 83
Activation functions are like
switches:
they turn each neuron On/Off
and give the Neural Network its non-linearity.
Let's test it!
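A quick sketch of this "switch" behavior (assumed example values):

import tensorflow as tf

x = tf.constant([-2.0, -0.5, 0.0, 0.5, 2.0])
with tf.Session() as sess:
    print(sess.run(tf.nn.relu(x)))   # [0.  0.  0.  0.5 2. ] -- negative inputs are switched off
    print(sess.run(tf.sigmoid(x)))   # every value squashed into (0, 1); gradients shrink at the extremes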
© Junho Cho, 2016 84
Sigmoid has the vanishing gradient problem.
ReLU is practically the best activation function in CNNs
© Junho Cho, 2016 85
But sigmoid and tanh are still occasionally used.
There are also PReLU and LeakyReLU
© Junho Cho, 2016 86
Pooling
• makes the representations smaller and more manageable
• Reduces the number of parameters and prevents overfitting
• operates over each activation map independently
• Average, L2-norm, Max-pooling
© Junho Cho, 2016 87
Max-pooling
We normally use max-pooling because it generally performs better
© Junho Cho, 2016 88
However, recent models replace pooling with strided convolutions
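A minimal sketch of both options (assumed shapes): 2x2 max-pooling with stride 2, and the strided convolution that recent models use instead:

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 28, 28, 32])
# 2x2 max-pooling with stride 2 halves the spatial size: (?, 14, 14, 32)
pooled = tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
# the same downsampling done with a strided convolution instead of pooling
w = tf.Variable(tf.truncated_normal([3, 3, 32, 32], stddev=0.01))
downsampled = tf.nn.conv2d(x, w, strides=[1, 2, 2, 1], padding='SAME')   # (?, 14, 14, 32)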
© Junho Cho, 2016 89
Dropout
During training, randomly turn off neurons.
This regularizes the model.
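A minimal sketch (assumed shapes); in this TensorFlow API dropout takes a keep probability, and the surviving activations are scaled by 1/keep_prob:

import tensorflow as tf

h = tf.placeholder(tf.float32, [None, 625])
keep_prob = tf.placeholder(tf.float32)      # e.g. 0.5 while training, 1.0 at test time
h_drop = tf.nn.dropout(h, keep_prob)        # randomly zeroes activations, scales the rest by 1/keep_prob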
© Junho Cho, 2016 90
How do they look in code?
def model(X, w, w2, w3, w4, w_o, p_keep_conv, p_keep_hidden):
l1a = tf.nn.relu(tf.nn.conv2d(X, w,
strides=[1, 1, 1, 1], padding='SAME'))
# l1a output shape=(?, input_height, input_width, number_of_channels_layer1)
l1 = tf.nn.max_pool(l1a, ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1], padding='SAME')
# l1 output shape=(?, input_height/2, input_width/2, number_of_channels_layer1)
l1 = tf.nn.dropout(l1, p_keep_conv)
l2a = tf.nn.relu(tf.nn.conv2d(l1, w2,
strides=[1, 1, 1, 1], padding='SAME'))
# l2a output shape=(?, input_height/2, input_width/2, number_of_channels_layer2)
l2 = tf.nn.max_pool(l2a, ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1], padding='SAME')
# l2 shape=(?, input_height/4, input_width/4, number_of_channels_layer2)
l2 = tf.nn.dropout(l2, p_keep_conv)
l3a = tf.nn.relu(tf.nn.conv2d(l2, w3,
strides=[1, 1, 1, 1], padding='SAME'))
# l3a shape=(?, input_height/4, input_width/4, number_of_channels_layer3)
l3 = tf.nn.max_pool(l3a, ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1], padding='SAME')
# l3 shape=(?, input_height/8, input_width/8, number_of_channels_layer3)
l3 = tf.reshape(l3, [-1, w4.get_shape().as_list()[0]])
# flatten to (?, input_height/8 * input_width/8 * number_of_channels_layer3)
l3 = tf.nn.dropout(l3, p_keep_conv)
l4 = tf.nn.relu(tf.matmul(l3, w4))
#fully connected_layer
l4 = tf.nn.dropout(l4, p_keep_hidden)
pyx = tf.matmul(l4, w_o)
return pyx
© Junho Cho, 2016 91
Let's analyze famous architectures
© Junho Cho, 2016 92
AlexNet
© Junho Cho, 2016 93
VGGNet
© Junho Cho, 2016 96
Only uses 3x3 Conv layers.
In terms of receptive field,
two 3x3 layers are better than one 5x5,
and three 3x3 layers are better than one 7x7,
because they have fewer parameters
(for C channels: 2·9C² = 18C² vs 25C², and 3·9C² = 27C² vs 49C²).
Too deep a network is hard to train:
Performance: VGG16 > VGG19
Many researchers fine-tuned this net for
their tasks, because of the simplicity of the
net architecture.
© Junho Cho, 2016 100
GoogLeNet
© Junho Cho, 2016 101
Uses the Inception module.
Enhances scale invariance.
© Junho Cho, 2016 102
ResNet: You can actually train super deep networks!
© Junho Cho, 2016 103
© Junho Cho, 2016 104
Skip-connections make optimization easier!
Each block learns a residual transformation.
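A minimal sketch of one residual block (assumed weights of shape [3, 3, C, C], so the skip connection can be added directly):

import tensorflow as tf

def residual_block(x, w1, w2):
    # y = F(x) + x : two 3x3 convolutions, then add the input back through the skip connection
    h = tf.nn.relu(tf.nn.conv2d(x, w1, strides=[1, 1, 1, 1], padding='SAME'))
    h = tf.nn.conv2d(h, w2, strides=[1, 1, 1, 1], padding='SAME')
    return tf.nn.relu(h + x)   # x and h must have the same shape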
© Junho Cho, 2016 105
© Junho Cho, 2016 106
Batch Normalization
© Junho Cho, 2016 107
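A minimal training-time sketch of what Batch Normalization computes (assumed; it omits the running averages a real implementation keeps for test time):

import tensorflow as tf

def batch_norm(x, gamma, beta, eps=1e-5):
    # normalize each channel over the batch and spatial dimensions, then scale (gamma) and shift (beta)
    mean, var = tf.nn.moments(x, axes=[0, 1, 2])
    return tf.nn.batch_normalization(x, mean, var, beta, gamma, eps)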
© Junho Cho, 2016 108
To train your Neural Network,
keep in mind:
1. Data-preparation : preprocessing, input & output
2. Architecture : dimension check
3. Loss function : CrossEntropy? L2?
4. Optimizer : SGD, Adam, ...
© Junho Cho, 2016 109
Optimizer
© Junho Cho, 2016 110
ADAM optimizer
currently rules.
Just use it.
© Junho Cho, 2016 113
Just replace
SGD: tf.train.GradientDescentOptimizer
with ADAM: tf.train.AdamOptimizer
slide from link
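For example (the learning rates are assumed values):

train_op = tf.train.GradientDescentOptimizer(learning_rate=0.05).minimize(loss)   # SGD
train_op = tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)             # ADAM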
© Junho Cho, 2016 114
import tensorflow as tf
Let's do practice!
© Junho Cho, 2016 115
Import libraries!
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
© Junho Cho, 2016 116
function for variables
def init_weights(shape):
return tf.Variable(tf.truncated_normal(shape, stddev=0.01))
© Junho Cho, 2016 117
Define model
def model(X, w, w2, w3, w4, w_o, p_keep_conv, p_keep_hidden):
l1a = tf.nn.relu(tf.nn.conv2d(X, w,
strides=[1, 1, 1, 1], padding='SAME'))
# l1a output shape=(?, input_height, input_width, number_of_channels_layer1)
l1 = tf.nn.max_pool(l1a, ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1], padding='SAME')
# l1 output shape=(?, input_height/2, input_width/2, number_of_channels_layer1)
l1 = tf.nn.dropout(l1, p_keep_conv)
l2a = tf.nn.relu(tf.nn.conv2d(l1, w2,
strides=[1, 1, 1, 1], padding='SAME'))
# l2a output shape=(?, input_height/2, input_width/2, number_of_channels_layer2)
l2 = tf.nn.max_pool(l2a, ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1], padding='SAME')
# l2 shape=(?, input_height/4, input_width/4, number_of_channels_layer2)
l2 = tf.nn.dropout(l2, p_keep_conv)
l3a = tf.nn.relu(tf.nn.conv2d(l2, w3,
strides=[1, 1, 1, 1], padding='SAME'))
# l3a shape=(?, input_height/4, input_width/4, number_of_channels_layer3)
l3 = tf.nn.max_pool(l3a, ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1], padding='SAME')
# l3 shape=(?, input_height/8, input_width/8, number_of_channels_layer3)
l3 = tf.reshape(l3, [-1, w4.get_shape().as_list()[0]])
# flatten to (?, input_height/8 * input_width/8 * number_of_channels_layer3)
l3 = tf.nn.dropout(l3, p_keep_conv)
l4 = tf.nn.relu(tf.matmul(l3, w4))
#fully connected_layer
l4 = tf.nn.dropout(l4, p_keep_hidden)
pyx = tf.matmul(l4, w_o)
return pyx
© Junho Cho, 2016 118
Prepare Data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=False)
X_trn, Y_trn, X_test, Y_test = mnist.train.images, mnist.train.labels, mnist.test.images, mnist.test.labels
X_trn = X_trn.reshape(-1, 28, 28, 1) # 28x28x1 input img
X_test = X_test.reshape(-1, 28, 28, 1) # 28x28x1 input img
© Junho Cho, 2016 119
Initialize
# placeholders for the input images and integer labels (needed by model() and the loss below)
X = tf.placeholder(tf.float32, [None, 28, 28, 1])
Y = tf.placeholder(tf.int64, [None])
w = init_weights([3, 3, 1, 32])
w2 = init_weights([3, 3, 32, 64])
w3 = init_weights([3, 3, 64, 128])
w4 = init_weights([128 * 4 * 4, 625])
w_o = init_weights([625, 10])
p_keep_conv = tf.placeholder(tf.float32)
p_keep_hidden = tf.placeholder(tf.float32)
py_x = model(X, w, w2, w3, w4, w_o, p_keep_conv, p_keep_hidden)
© Junho Cho, 2016 120
Define loss function
loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits=py_x, labels=Y))
Select Optimizer
train_op = tf.train.AdagradOptimizer(learning_rate=0.05).minimize(loss)
© Junho Cho, 2016 121
Then Train!
of course with lots of debugging
© Junho Cho, 2016 122
Monitor accuracy of my model
correct = tf.nn.in_top_k(py_x, Y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
Monitor my loss function drop
x = np.arange(50)
plt.plot(x, trn_loss_list)
plt.plot(x, test_loss_list)
plt.title("cross entropy loss")
plt.legend(["train loss", "test_loss"])
plt.xlabel("epoch")
plt.ylabel("cross entropy")
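A minimal training-loop sketch (assumed, not shown in the slides) that fills the trn_loss_list and test_loss_list used in the plot above; it uses the X and Y placeholders from the Initialize step:

trn_loss_list, test_loss_list = [], []
batch_size = 128
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(50):
        for start in range(0, len(X_trn), batch_size):
            end = start + batch_size
            sess.run(train_op, feed_dict={X: X_trn[start:end], Y: Y_trn[start:end],
                                          p_keep_conv: 0.8, p_keep_hidden: 0.5})
        # track losses on fixed subsets, with dropout turned off
        trn_loss_list.append(sess.run(loss, feed_dict={X: X_trn[:1000], Y: Y_trn[:1000],
                                                       p_keep_conv: 1.0, p_keep_hidden: 1.0}))
        test_loss_list.append(sess.run(loss, feed_dict={X: X_test[:1000], Y: Y_test[:1000],
                                                        p_keep_conv: 1.0, p_keep_hidden: 1.0}))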
© Junho Cho, 2016 123
Tips for start training
• Good weight initialization
• Random gaussian initialization
• Famous initialization: Xavier, He (see the sketch after this list)
• Or use pretrained network
• ImageNet pretrained VGG16 !
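A sketch of the He initialization mentioned above (assumed helper, written in the style of init_weights):

import tensorflow as tf

def init_weights_he(shape):
    # He initialization: stddev = sqrt(2 / fan_in), which works well with ReLU
    fan_in = 1
    for d in shape[:-1]:   # all dimensions except the output channels
        fan_in *= d
    return tf.Variable(tf.truncated_normal(shape, stddev=(2.0 / fan_in) ** 0.5))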
© Junho Cho, 2016 124
• Don't ignore data preparation
• you need lots of data
• Most annoying and difficult part
• Preprocessing: normalize input image
• e.g. 0~255 → -1~1 (see the sketch after this list)
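For raw 8-bit images this is a one-liner (the MNIST loader used earlier already returns values in 0~1, so this is only an illustration):

X_trn = X_trn.astype('float32') / 127.5 - 1.0    # map [0, 255] to [-1, 1]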
© Junho Cho, 2016 125
• Visualize Training samples and loss function
• Don't just pray for result
• Monitoring is necessary
© Junho Cho, 2016 126
• Check gradient & Adjust learning rate
• NaN ??!!
• gradient explosion: lower learning rate
• Don't be too creative
• start from base code/architecture like VGG
• Ensemble
• gain an extra ~2% of performance
© Junho Cho, 2016 127
Other Deep Learning Frameworks?
• Torch
• TensorFlow (Keras, Slim, PrettyTensor, ...)
• Caffe
• Theano (Keras, Lasagne)
• MXnet, CNTK, PaddlePaddle, Chainer, ...
© Junho Cho, 2016 128
ConvNet benchmark
© Junho Cho, 2016 129
Hardware: Don't use the CPU, use a GPU.
Not AMD: use NVIDIA graphics cards
© Junho Cho, 2016 130
© Junho Cho, 2016 131
TitanX
© Junho Cho, 2016 132
Graphics cards
VRAM is the most important
TitanX: 12GB, 1080/1070: 8GB, 980/970: 4GB
© Junho Cho, 2016 133
The main bottleneck in training time is actually
data I/O or the CPU.
Nice data-loading code is already implemented.
© Junho Cho, 2016 134
Some more CNN applications!
© Junho Cho, 2016 135
Detection
R-CNN, Fast-RCNN, Faster-RCNN
YOLO, Single Shot Detector
© Junho Cho, 2016 136
Segmentation
Fully Convolutional Network, DeepMask
© Junho Cho, 2016 137
© Junho Cho, 2016 138
Deconvolution, also known as Up-Convolution / Transposed Convolution
Its computation is the same as the Back-propagation of Convolution
It increases the spatial dimensions
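A minimal sketch (assumed shapes) of an up-convolution that doubles the spatial size:

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 14, 14, 64])
w = tf.Variable(tf.truncated_normal([3, 3, 32, 64], stddev=0.01))    # [kh, kw, out_channels, in_channels]
batch = tf.shape(x)[0]
up = tf.nn.conv2d_transpose(x, w, output_shape=tf.stack([batch, 28, 28, 32]),
                            strides=[1, 2, 2, 1], padding='SAME')    # (?, 28, 28, 32)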
© Junho Cho, 2016 139
ETC
© Junho Cho, 2016 140
© Junho Cho, 2016 141
Class Activation Mapping
© Junho Cho, 2016 142
NeuralArt
© Junho Cho, 2016 143
Adversarial Examples
You can fool a CNN classifier.
© Junho Cho, 2016 144
Generative Adversarial Network
© Junho Cho, 2016 147
GAN formulation
The Generator Network and the Discriminator Network try to fool each
other.
Finally, the Generator produces samples that fool the Discriminator.
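For reference, this is the standard minimax objective from Goodfellow et al. (2014) that this slide summarizes:

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]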
© Junho Cho, 2016 150
SRGAN : Super Resolution GAN
© Junho Cho, 2016 151