Sequence to Sequence Learning
with Tensor2Tensor
Łukasz Kaiser and Ryan Sepassi
• Intro
• Basics
• Tensor view of neural networks
• TensorFlow core and higher-level APIs, Tensor2Tensor
• Exercise: understand T2T pipeline fully, try on MNIST
• Sequence models
• Basics
• Transformer
• Exercise: train basic sequence models, use Transformer
• Outlook: deep learning and Tensor2Tensor community
But Why?
(Tom Bianco, datanami.com)
Speed
TPUv2: 180 TF/2$/h
TPUv2 pod: 11.5 PF
TPUv3 pod: over 100 PF
Top supercomputer: 122 PF
(Double precision; could be over 1 exaflop for ML applications.)
ML Arxiv Papers per Year
~50 New ML papers every day!
Rapid accuracy improvements
Image courtesy of Canziani et al, 2017
Radically open culture
How Deep Learning Quietly
Revolutionized NLP (2016)
What NLP tasks are we talking about?
● Part Of Speech Tagging Assign part-of-speech to each word.
● Parsing Create a grammar tree given a sentence.
● Named Entity Recognition Recognize people, places, etc. in a sentence.
● Language Modeling Generate natural sentences.
● Translation Translate a sentence into another language.
● Sentence Compression Remove words to summarize a sentence.
● Abstractive Summarization Summarize a paragraph in new words.
● Question Answering Answer a question, maybe given a passage.
● ….
Can deep learning solve these tasks?
● Inputs and outputs have variable size, how can neural networks handle it?
● Recurrent Neural Networks can do it, but how do we train them?
● Long Short-Term Memory [Hochreiter et al., 1997], but how to compose it?
● Encoder-Decoder (sequence-to-sequence) architectures
[Sutskever et al., 2014; Bahdanau et al., 2014; Cho et al., 2014]
Parsing with sequence-to-sequence LSTMs
(1) Represent the tree as a sequence.
(2) Generate data and train a sequence-to-sequence LSTM model.
(3) Results: 92.8 F1 score vs 92.4 previous best [Vinyals & Kaiser et al., 2014]
Language modeling with LSTMs
Language model performance is measured in perplexity (lower is better).
● Kneser-Ney 5-gram: 67.6 [Chelba et al., 2013]
● RNN-1024 + 9-gram: 51.3 [Chelba et al., 2013]
● LSTM-512-512: 54.1 [Józefowicz et al., 2016]
● 2-layer LSTM-8192-1024: 30.6 [Józefowicz et al., 2016]
● 2-l.-LSTM-4096-1024+MoE: 28.0 [Shazeer & Mirhoseini et al., 2016]
Model size seems to be the decisive factor.
Language modeling with LSTMs: Examples
Raw (not hand-selected) sampled sentences: [Józefowicz et al., 2016]
About 800 people gathered at Hever Castle on Long Beach from noon to 2pm ,
three to four times that of the funeral cortege .
It is now known that coffee and cacao products can do no harm on the body .
Yuri Zhirkov was in attendance at the Stamford Bridge at the start of the second
half but neither Drogba nor Malouda was able to push on through the Barcelona
defence .
Sentence compression with LSTMs
Example:
Input: State Sen. Stewart Greenleaf discusses his proposed human
trafficking bill at Calvery Baptist Church in Willow Grove Thursday night.
Output: Stewart Greenleaf discusses his human trafficking bill.
Results: readability informativeness
MIRA (previous best): 4.31 3.55
LSTM [Filippova et al., 2015]: 4.51 3.78
Translation with LSTMs
Translation performance is measured in BLEU scores (higher is better, EnDe):
● Phrase-Based MT: 20.7 [Durrani et al., 2014]
● Early LSTM model: 19.4 [Sébastien et al., 2015]
● DeepAtt (large LSTM): 20.6 [Zhou et al., 2016]
● GNMT (large LSTM): 24.9 [Wu et al., 2016]
● GNMT+MoE: 26.0 [Shazeer & Mirhoseini et al., 2016]
Again, model size and tuning seem to be the decisive factor.
Translation with LSTMs: Examples
German:
Probleme kann man niemals mit derselben Denkweise lösen, durch die sie
entstanden sind.
PBMT Translate: No problem can be solved from the same consciousness that they have arisen.
GNMT Translate: Problems can never be solved with the same way of thinking that caused them.
Translation with LSTMs: How good is it?
PBMT GNMT Human Relative improvement
English → Spanish 4.885 5.428 5.504 87%
English → French 4.932 5.295 5.496 64%
English → Chinese 4.035 4.594 4.987 58%
Spanish → English 4.872 5.187 5.372 63%
French → English 5.046 5.343 5.404 83%
Chinese → English 3.694 4.263 4.636 60%
Google Translate production data, median score by human evaluation on the scale 0-6. [Wu et al., ‘16]
That was 2016. Now.
Attention: Machine Translation Results
29.7 BLEU
Basics
Old School View
Convolutions
(Illustration from machinelearninguru.com)
Modern View
h = f(Wx + B)   [or h = conv(W, x)]
o = f(W'h + B')
l = -log p(o = true)
P -= lr * dl/dP   where P = {W, W', B, B'}
So what do we need? h = f(Wx + B)
o = f(W'h + B')
l = -log p(o = true)
P -= lr * dl/dP
1. Operations like matmul, f done fast
2. Gradients symbolically for dl/dP
3. Specify W, W' and keep track of them
4. Run it on a large scale
See this online course for a nice introduction:
https://www.coursera.org/learn/machine-learning
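As a toy illustration of the equations above, here is a minimal NumPy forward pass; the shapes and the sigmoid nonlinearity are arbitrary choices for this sketch:

import numpy as np

def f(z):  # an example nonlinearity (sigmoid)
    return 1.0 / (1.0 + np.exp(-z))

x = np.random.randn(4)                       # input
W, B = np.random.randn(3, 4), np.zeros(3)    # first-layer parameters
W2, B2 = np.random.randn(1, 3), np.zeros(1)  # second-layer parameters

h = f(W @ x + B)     # h = f(Wx + B)
o = f(W2 @ h + B2)   # o = f(W'h + B')
l = -np.log(o)       # l = -log p(o = true), assuming the true label is 1

Items 1-4 (fast ops, symbolic gradients dl/dP, keeping track of the parameters P, and running at scale) are exactly what TensorFlow adds on top of this.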
TensorFlow
Core TF Model
Yet another dataflow system
(Diagram: ops MatMul, Add, Relu, and Xent wired over weights, biases, examples, and labels.)
Graph of Nodes, also called Operations or ops.
Yet another dataflow system, with tensors
(Same diagram: MatMul, Add, Relu, Xent over weights, biases, examples, labels.)
Edges are N-dimensional arrays: Tensors.
Yet another dataflow system, with state
(Diagram: Add and Mul ops, a biases variable, a learning rate, and a −= update op.)
'Biases' is a variable. −= updates biases. Some ops compute gradients.
Yet another dataflow system, distributed
(Diagram: the same graph partitioned across Device A and Device B.)
Devices: Processes, Machines, GPUs, etc.
What's not in the Core Model
● Anything about neural networks, machine learning, ...
● Anything about backpropagation, differentiation, ...
● Anything about gradient descent, parameter servers…
These are built by combining existing operations, or defining new operations.
The core system can be applied to problems other than machine learning.
Core TF API
API Families
Graph Construction
● Assemble a Graph of Operations.
Graph Execution
● Deploy and execute operations in a Graph.
Hello, world!
import tensorflow as tf
# Create an operation.
hello = tf.constant("Hello, world!")
# Create a session.
sess = tf.Session()
# Execute that operation and print its result.
print(sess.run(hello))
Graph Construction
Library of predefined Ops
● Constant, Variables, Math ops, etc.
Functions to add Ops for common needs
● Gradients: Add Ops to compute derivatives.
● Training methods: Add Ops to update variables (SGD, Adagrad, etc.)
All operations are added to a global Default Graph.
Slightly more advanced calls let you control the Graph more precisely.
Op that holds state that persists across calls to Run()
v = tf.get_variable('v', [4, 3]) # 4x3 matrix, float by default
(Diagram: a Variable op holds state; its output is a reference to the value.)
Some Ops modify the Variable state: InitVariable, Assign, AssignSub, AssignAdd.
init = v.assign(tf.random_uniform(shape=v.shape))
Variables
(Diagram: an Assign op takes random initial parameters and the Variable's value reference; it updates the variable value when run and outputs the value for convenience.)
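A minimal runnable sketch of this pattern with the TF1 Session API (names are illustrative):

import tensorflow as tf

v = tf.get_variable("v", [4, 3])                    # 4x3 matrix, float by default
init = v.assign(tf.random_uniform(shape=v.shape))   # an Assign op

with tf.Session() as sess:
    sess.run(init)       # running the Assign op initializes the variable
    print(sess.run(v))   # fetch the current value of v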
Math Ops
A variety of Operations for linear algebra, convolutions, etc.
c = tf.constant(...)
w = tf.get_variable(...)
b = tf.get_variable(...)
y = tf.add(tf.matmul(c, w), b)
Overloaded Python operators help: y = tf.matmul(c, w) + b
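A concrete, runnable variant of the snippet above; the shapes are chosen only for illustration:

import tensorflow as tf

c = tf.constant([[1.0, 2.0]])       # 1x2
w = tf.get_variable("w", [2, 3])    # 2x3
b = tf.get_variable("b", [3])
y = tf.add(tf.matmul(c, w), b)      # builds MatMul and Add ops
y2 = tf.matmul(c, w) + b            # same graph via overloaded operators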
Operations, plenty of them
● Array ops
○ Concat
○ Slice
○ Reshape
○ ...
● Math ops
○ Linear algebra (MatMul, …)
○ Component-wise ops (Mul, ...)
○ Reduction ops (Sum, …)
Documentation at tensorflow.org
● Neural network ops
○ Non-linearities (Relu, …)
○ Convolutions (Conv2D, …)
○ Pooling (AvgPool, …)
● ...and many more
○ Constants, Data flow, Control flow,
Embedding, Initialization, I/O, Legacy
Input Layers, Logging, Random,
Sparse, State, Summary, etc.
Graph Construction Helpers
● Gradients
● Optimizers
● Higher-Level APIs in core TF
● Higher-Level libraries outside core TF
Gradients
Given a loss, add Ops to compute gradients for Variables.
(Diagram: var0 and var1 feed through many ops to a loss.)
Gradients
tf.gradients(loss, [var0, var1]) # Generate gradients
(Diagram: tf.gradients adds more ops that flow back from the loss and output gradients for var0 and var1.)
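For instance, a tiny self-contained example of what tf.gradients adds (the variable and loss are made up for illustration):

import tensorflow as tf

x = tf.get_variable("x", initializer=3.0)
loss = tf.square(x)               # loss = x^2
grads = tf.gradients(loss, [x])   # adds ops computing dloss/dx

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(grads))        # [6.0], since dloss/dx = 2x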
Example: Gradients for MatMul
(Diagram: for y = MatMul(x, w), the added gradient ops compute gx = MatMul(gy, wᵀ) and gw = MatMul(xᵀ, gy) using Transpose and MatMul.)
Optimizers
Apply gradients to Variables: SGD(var, grad, learning_rate)
(Diagram: the SGD update multiplies grad by learning_rate and applies AssignSub to var.)
Note: learning_rate is just the output of an Op, so it can easily be decayed.
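In practice the built-in optimizer classes add these update ops for you; a minimal sketch with a decayed learning rate (the exponential schedule is just one illustrative choice):

import tensorflow as tf

global_step = tf.train.get_or_create_global_step()
learning_rate = tf.train.exponential_decay(0.1, global_step, 1000, 0.96)

x = tf.get_variable("x", initializer=3.0)
loss = tf.square(x)

opt = tf.train.GradientDescentOptimizer(learning_rate)
train_op = opt.minimize(loss, global_step=global_step)  # gradient ops plus the variable updates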
Easily Add Optimizers
Builtin
● SGD, Adagrad, Momentum, Adam, …
Contributed
● LazyAdam, NAdam, YellowFin, Adafactor, ...
Putting it all together to train a Neural Net
Build a Graph by adding Operations:
● For Variables to hold the parameters of the Neural Net.
● To compute the Neural Net output: e.g. classification predictions.
● To compute a training loss: e.g. cross entropy, parameter L2 norms.
● To calculate gradients for the parameters to train.
● To apply gradients with a training function.
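A minimal end-to-end sketch of such a graph, assuming a toy linear-regression setup (names, shapes, and the synthetic data are illustrative):

import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 4])
y = tf.placeholder(tf.float32, [None, 1])

w = tf.get_variable("w", [4, 1])             # Variables hold the parameters
b = tf.get_variable("b", [1])
pred = tf.matmul(x, w) + b                   # network output
loss = tf.reduce_mean(tf.square(pred - y))   # training loss
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)  # gradients + updates

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(100):
        xs = np.random.randn(32, 4).astype(np.float32)
        ys = xs.sum(axis=1, keepdims=True)   # synthetic targets
        sess.run(train_op, feed_dict={x: xs, y: ys})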
Distributed Execution
Graph Execution
Session API
● API to deploy a Graph in a TensorFlow runtime
● Can run any subset of the graph
● Can add Ops to an existing Graph (for interactive use, e.g. in colab)
Training Utilities
● Checkpoint, Recovery, Summaries, Replicas, etc.
Local Runtime
(Diagram: the Python program creates a graph and a session, then calls sess.run(); the local runtime hosts the Session and executes on CPU and GPU.)
Remote Runtime
(Diagram: the Python program creates a graph and a session, then calls sess.run(); the client talks to a Master via CreateGraph(), Run([ops]), RunSubGraph(), and GetTensor(), and the Master dispatches work to Workers with CPUs and GPUs.)
Running and fetching output
# Run an Op and fetch its output.
# "values" is a numpy ndarray.
values = sess.run(<an op output>)
Running and fetching output
Transitive closure of needed ops is Run
Execution happens in parallel
Feeding input, Running, and Fetching
(Diagram: a Feed supplies the output of op "a"; a Fetch reads the output of "an op".)
a_val = ...a numpy ndarray...
values = sess.run(<an op output>,
                  feed_dict={<a output>: a_val})
Feeding input, Running, and Fetching
Only the required Ops are run.
Higher-Level Core TF API
Layers are ops that create Variables
def embedding(x, vocab_size, dense_size,
name=None, reuse=None, multiplier=1.0):
"""Embed x of type int64 into dense vectors."""
with tf.variable_scope( # Use scopes like this.
name, default_name="emb", values=[x], reuse=reuse):
embedding_var = tf.get_variable(
"kernel", [vocab_size, dense_size])
return tf.gather(embedding_var, x)
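A hypothetical usage of the layer above (the token ids and sizes are made up for illustration):

import tensorflow as tf

token_ids = tf.constant([[3, 1, 4], [1, 5, 9]], dtype=tf.int64)
emb = embedding(token_ids, vocab_size=16, dense_size=8)  # shape [2, 3, 8]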
Models are built from Layers
def bytenet(inputs, targets, hparams):
final_encoder = common_layers.residual_dilated_conv(
inputs, hparams.num_block_repeat, "SAME", "encoder", hparams)
shifted_targets = common_layers.shift_left(targets)
kernel = (hparams.kernel_height, hparams.kernel_width)
decoder_start = common_layers.conv_block(
tf.concat([final_encoder, shifted_targets], axis=3),
hparams.hidden_size, [((1, 1), kernel)], padding="LEFT")
return common_layers.residual_dilated_conv(
decoder_start, hparams.num_block_repeat,
"LEFT", "decoder", hparams)
Training Utilities
Training program typically runs multiple threads
● Execute the training op in a loop.
● Checkpoint every so often.
● Gather summaries for the Visualizer.
● Other, e.g. monitoring NaNs, costs, etc.
Estimator
So what do we need? h = f(Wx + B)
o = f(W'h + B')
l = -log p(o = y)
P -= lr * dl/dP
1. Operations like matmul, f done fast
2. Gradients symbolically for dl/dP
3. Specify W, W' and keep track of them
4. Run it on a large scale
TensorFlow view:
h = tf.layers.dense(x, h_size, name="h1")
o = tf.layers.dense(h, 1, name="output")
l = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=o, labels=y)
But data? Where do we get {x, y} from?
Tensor2Tensor
Tensor2Tensor (T2T) is a library of deep learning models and
datasets designed to accelerate deep learning research and
make it more accessible.
● Datasets: ImageNet, CIFAR, MNIST, Coco, WMT, LM1B, ...
● Models: ResNet, RevNet, ShakeShake, Xception, SliceNet,
Transformer, ByteNet, Neural GPU, LSTM, ...
● Tools: cloud training, hyperparameter tuning, TPU, ...
So what do we need? h = f(Wx + B)
o = f(W'h + B')
l = -log p(o = true)
P -= lr * dl/dP
1. Operations like matmul, f done fast
2. Gradients symbolically for dl/dP
3. Specify W, W' and keep track of them
4. Run it on a large scale
TensorFlow: goo.gl/njJftZ
x, y = mnist.dataset
h = tf.layers.dense(x, h_size, name="h1")
o = tf.layers.dense(h, 1, name="output")
l = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=o, labels=y)
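A minimal runnable sketch of the same idea, assuming flattened 784-dimensional MNIST inputs and a 10-class output layer (names and sizes are illustrative, not the colab's exact code):

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 784])
y = tf.placeholder(tf.int32, [None])

h = tf.layers.dense(x, 128, activation=tf.nn.relu, name="h1")
o = tf.layers.dense(h, 10, name="output")  # one logit per class
l = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(logits=o, labels=y))
train_op = tf.train.AdamOptimizer(1e-3).minimize(l)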
Play with the colab
goo.gl/njJftZ
● Try pure SGD instead of the Adam optimizer and others like AdaFactor (see the sketch after this list)
○ Find on tensorflow.org where the optimizer API lives and how optimizers are called
○ Find the AdaFactor paper on arxiv and read it; use it from Tensor2Tensor
● Try other layer sizes and numbers of layers, other activation functions.
● Try running a few times, how does initialization affect results?
● Try running on Cifar10, how does your model perform?
● Make a convolutional model, is it better? (tf.layers.dense -> tf.layers.conv2d)
● Try residual connections through conv layers, check out shake-shake in T2T
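For the first exercise, the swap amounts to choosing a different optimizer class when building the train op; a sketch, where loss stands for whatever loss the colab model defines:

import tensorflow as tf

loss = tf.square(tf.get_variable("x", initializer=3.0))  # stand-in for the colab's loss
# optimizer = tf.train.AdamOptimizer(learning_rate=1e-3)           # original choice
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.5)   # pure SGD
train_op = optimizer.minimize(loss)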
Sequence Models
RNNs Everywhere
Sequence to Sequence Learning with Neural Networks
Auto-Regressive CNNs
WaveNet and ByteNet
Transformer
Based on Attention Is All You Need by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones,
Aidan N. Gomez, Łukasz Kaiser, Illia Polosukhin and other works with Samy Bengio, Eugene Brevdo, Francois Chollet,
Stephan Gouws, Nal Kalchbrenner, Ofir Nachum, Aurko Roy, Ryan Sepassi.
Attention
Convolution Attention
Dot-Product Attention
(Diagram: queries q0 and q1 attend over key-value pairs (k0, v0), (k1, v1), (k2, v2).)
Dot-Product Attention
def dot_product_attention(q, k, v, bias, dropout_rate=0.0, image_shapes=None, name=None,
make_image_summary=True, save_weights_to=None, dropout_broadcast_dims=None):
with tf.variable_scope(
name, default_name="dot_product_attention", values=[q, k, v]) as scope:
# [batch, num_heads, query_length, memory_length]
logits = tf.matmul(q, k, transpose_b=True)
if bias is not None:
logits += bias
weights = tf.nn.softmax(logits, name="attention_weights")
if save_weights_to is not None:
save_weights_to[scope.name] = weights
# dropping out the attention links for each of the heads
weights = common_layers.dropout_with_broadcast_dims(
weights, 1.0 - dropout_rate, broadcast_dims=dropout_broadcast_dims)
if expert_utils.should_generate_summaries() and make_image_summary:
attention_image_summary(weights, image_shapes)
return tf.matmul(weights, v)
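For reference, a stripped-down, self-contained sketch of the same core computation, softmax(QKᵀ + bias)V, without dropout or summaries (not the T2T API itself):

import tensorflow as tf

def simple_dot_product_attention(q, k, v, bias=None):
    logits = tf.matmul(q, k, transpose_b=True)  # [..., query_length, memory_length]
    if bias is not None:
        logits += bias
    weights = tf.nn.softmax(logits, name="attention_weights")
    return tf.matmul(weights, v)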
                      Ops      Activations
Attention (dot-prod)  n²·d     n² + n·d
Attention (additive)  n²·d     n²·d
Recurrent             n·d²     n·d
Convolutional         n·d²     n·d
n = sequence length, d = depth, k = kernel size
What’s missing from Self-Attention?
Convolution Self-Attention
What’s missing from Self-Attention?
Convolution Self-Attention
● Convolution: a different linear transformation for each relative position.
Allows you to distinguish what information came from where.
● Self-Attention: a weighted average :(
The Fix: Multi-Head Attention
Convolution Multi-Head Attention
● Multiple attention layers (heads) in parallel (shown by different colors)
● Each head uses different linear transformations.
● Different heads can learn different relationships.
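A hedged, self-contained sketch of multi-head self-attention along these lines (function and argument names are illustrative, not the T2T implementation):

import tensorflow as tf

def multi_head_self_attention(x, d_model, num_heads):
    # Project x to queries, keys, values of total depth d_model.
    q = tf.layers.dense(x, d_model, use_bias=False, name="q")
    k = tf.layers.dense(x, d_model, use_bias=False, name="k")
    v = tf.layers.dense(x, d_model, use_bias=False, name="v")

    def split_heads(t):
        # [batch, length, d_model] -> [batch, num_heads, length, d_model/num_heads]
        batch, length = tf.shape(t)[0], tf.shape(t)[1]
        t = tf.reshape(t, [batch, length, num_heads, d_model // num_heads])
        return tf.transpose(t, [0, 2, 1, 3])

    q, k, v = split_heads(q), split_heads(k), split_heads(v)
    weights = tf.nn.softmax(tf.matmul(q, k, transpose_b=True))  # per-head attention weights
    o = tf.matmul(weights, v)                                   # [batch, heads, length, d/h]
    o = tf.transpose(o, [0, 2, 1, 3])                           # regroup heads by position
    o = tf.reshape(o, [tf.shape(x)[0], tf.shape(x)[1], d_model])
    return tf.layers.dense(o, d_model, use_bias=False, name="output")  # final linear transform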
The Fix: Multi-Head Attention
The Fix: Multi-Head Attention
                                          Ops            Activations
Multi-Head Attention
(with linear transformations; for each
of the h heads, dq = dk = dv = d/h)       n²·d + n·d²    n²·h + n·d
Recurrent                                 n·d²           n·d
Convolutional                             n·d²           n·d
n = sequence length, d = depth, k = kernel size
Three ways of attention
Encoder-Decoder Attention
Encoder Self-Attention, Masked Decoder Self-Attention
The Transformer
Machine Translation Results: WMT-14
29.1 BLEU (EnDe), 41.8 BLEU (EnFr)
Ablations
Coreference resolution (Winograd schemas)
Sentence: The cow ate the hay because it was delicious.
Google Translate: La vache mangeait le foin parce qu'elle était délicieuse.
Transformer: La vache a mangé le foin parce qu'il était délicieux.

Sentence: The cow ate the hay because it was hungry.
Google Translate: La vache mangeait le foin parce qu'elle avait faim.
Transformer: La vache mangeait le foin parce qu'elle avait faim.

Sentence: The women stopped drinking the wines because they were carcinogenic.
Google Translate: Les femmes ont cessé de boire les vins parce qu'ils étaient cancérogènes.
Transformer: Les femmes ont cessé de boire les vins parce qu'ils étaient cancérigènes.

Sentence: The women stopped drinking the wines because they were pregnant.
Google Translate: Les femmes ont cessé de boire les vins parce qu'ils étaient enceintes.
Transformer: Les femmes ont cessé de boire les vins parce qu'elles étaient enceintes.

Sentence: The city councilmen refused the female demonstrators a permit because they advocated violence.
Google Translate: Les conseillers municipaux ont refusé aux femmes manifestantes un permis parce qu'ils préconisaient la violence.
Transformer: Le conseil municipal a refusé aux manifestantes un permis parce qu'elles prônaient la violence.

Sentence: The city councilmen refused the female demonstrators a permit because they feared violence.
Google Translate: Les conseillers municipaux ont refusé aux femmes manifestantes un permis parce qu'ils craignaient la violence.
Transformer: Le conseil municipal a refusé aux manifestantes un permis parce qu'elles craignaient la violence.*
Long Text Generation
Generating entire Wikipedia
articles by summarizing top
search results and references.
(Memory-Compressed Attn.)
'''The Transformer''' are a Japanese [[hardcore punk]] band.
==Early years==
The band was formed in 1968, during the height of Japanese music
history. Among the legendary [[Japanese people|Japanese]] composers of
[Japanese lyrics], they prominently exemplified Motohiro Oda's
especially tasty lyrics and psychedelic intention. Michio was a
longtime member of the every Sunday night band PSM. His alluring was
of such importance as being the man who ignored the already successful
image and that he municipal makeup whose parents were&amp;nbsp;– the
band was called
Jenei.&lt;ref&gt;http://www.separatist.org/se_frontend/post-punk-musician-the-kidney.html&lt;/ref&gt;
From a young age the band was very close, thus opting to pioneer what
had actually begun as a more manageable core hardcore punk
band.&lt;ref&gt;http://www.talkradio.net/article/independent-music-fades-from-the-closed-drawings-out&lt;/ref&gt;
==History==
===Born from the heavy metal revolution===
In 1977 the self-proclaimed King of Tesponsors, [[Joe Lus:
: It was somewhere... it was just a guile ... taking this song to
Broadway. It was the first record I ever heard on A.M., After some
opposition I received at the hands of Parsons, and in the follow-up
notes myself.&lt;ref&gt;http://www.discogs.com/artist/The+Op%C5%8Dn+&amp;+Psalm&lt;/ref&gt;
The band cut their first record album titled ''Transformed, furthered
and extended Extended'',&lt;ref&gt;[https://www.discogs.com/album/69771
MC – Transformed EP (CDR) by The Moondrawn – EMI, 1994]&lt;/ref&gt;
and in 1978 the official band line-up of the three-piece pop-punk-rock
band TEEM. They generally played around [[Japan]], growing from the
Top 40 standard.
===1981-2010: The band to break away===
On 1 January 1981 bassist Michio Kono, and the members of the original
line-up emerged. Niji Fukune and his [[Head poet|Head]] band (now
guitarist) Kazuya Kouda left the band in the hands of the band at the
May 28, 1981, benefit season of [[Led Zeppelin]]'s Marmarin building.
In June 1987, Kono joined the band as a full-time drummer, playing a
few nights in a 4 or 5 hour stint with [[D-beat]]. Kono played through
the mid-1950s, at Shinlie, continued to play concerts with drummers in
Ibis, Cor, and a few at the Leo Somu Studio in Japan. In 1987, Kono
recruited new bassist Michio Kono and drummer Ayaka Kurobe as drummer
for band. Kono played trumpet with supplement music with Saint Etienne
as a drummer. Over the next few years Kono played as drummer and would
get many alumni news invitations to the bands' ''Toys Beach'' section.
In 1999 he joined the [[CT-182]].
His successor was Barrie Bell on a cover of [[Jethro Tull
(band)|Jethro Tull]]'s original 1967 hit &quot;Back Home&quot; (last
appearance was in Jethro), with whom he shares a name.
===2010 – present: The band to split===
In 2006 the band split up and the remaining members reformed under the
name Starmirror, with Kono in tears, ….
'''''The Transformer''''' is a [[book]] by British [[illuminatist]]
[[Herman Muirhead]], set in a post-apocalyptic world that border on a
mysterious alien known as the &quot;Transformer Planet&quot; which is
his trademark to save Earth. The book is about 25 years old, and it
contains forty-one different demographic models of the human race, as
in the cases of two fictional
''groups'',&amp;nbsp;''[[Robtobeau]]''&amp;nbsp;&quot;Richard&quot;
and &quot;The Transformers Planet&quot;.
== Summary ==
The book benefits on the [[3-D film|3-D film]], taking his one-third
of the world's pure &quot;answer&quot; and gas age from 30 to 70
within its confines.
The book covers the world of the world of [[Area 51|Binoculars]] from
around the worlds of Earth. It is judged by the ability of
[[telepathy|telepaths]] and [[television]], and provides color, line,
and end-to-end observational work.
To make the book up and document the recoverable quantum states of the
universe, in order to inspire a generation that fantasy producing a
tele-recording-offering machine is ideal. To make portions of this
universe home, he recreates the rostrum obstacle-oriented framework
Minou.&lt;ref&gt;http://www.rewunting.net/voir/BestatNew/2007/press/Story.html)&lt;/ref&gt;
== ''The Transformer''==
The book was the first on a [[Random Access Album|re-issue]] since its
original version of ''[[Robtobeau]]'', despite the band naming itself
a &quot;Transformer Planet&quot; in the book.&lt;ref
name=prweb-the-1985&gt;{{cite
web|url=http://www.prnewswire.co.uk/cgi/news/release?id=9010884|title=''The
Transformer''|publisher=www.prnewswire.co.uk|date=|accessdate=2012-04-25}}&lt;/ref&gt;
Today, &quot;[[The Transformers Planet]]&quot; is played entirely
open-ended, there are more than just the four previously separate only
bands. A number of its groups will live on one abandoned volcano in
North America,
===Conceptual ''The Transformer'' universe===
Principals a setting-man named “The Supercongo Planet,” who is a
naturalistic device transferring voice and humour from ''The
Transformer Planet,'' whose two vice-maks appear often in this
universe existence, and what the project in general are trying to
highlight many societal institutions. Because of the way that the
corporation has made it, loneliness, confidence, research and renting
out these universes are difficult to organise without the bands
creating their own universe. The scientist is none other than a singer
and musician. Power plants are not only problematic, but if they want
programmed them to create and perform the world's first Broadcast of
itself once the universe started, but deliberately Acta Biological
Station, db.us and BB on ''The Transformer Planet'', ''The Transformer
Planet'', aren't other things Scheduled for.
:&lt;blockquote&gt;A man called Dick Latanii Bartow, known the
greatest radio dot Wonderland administrator at influential arrangers
in a craze over the complex World of Biological Predacial Engineer in
Rodel bringing Earth into a 'sortjob' with fans. During this
'Socpurportedly Human', Conspiracy was being released to the world as
Baron Maadia on planet Nature. A world-renowned scientist named Julia
Samur is able to cosmouncish society and run for it - except us who is
he and he is before talking this entire T100 before Cell physiologist
Cygnets. Also, the hypnotic Mr. Mattei arrived, so it is Mischief who
over-manages for himself - but a rising duplicate of Phil Rideout
makes it almost affable. There is plenty of people at work to make
use of it and animal allies out of politics. But Someday in 1964, when
we were around, we were steadfast against the one man's machine and he
did an amazing job at the toe of the mysterious...
Mr. Suki who is an engineering desk lecturer at the University of}}}}
…………….
Image Generation
Model Type                          % unrecognized (max = 50%)
ResNet                              4.0%
Superresolution GAN (Garcia '16)    8.5%
PixelRecursive (Dahl et al., 2017)  11%
Image Transformer                   36.9%
How about GANs?
(Are GANs Created Equal? A Large-Scale Study)
Problem 1: Variance
Problem 2: Even best models are not great:
Image Transformer: 36.6
Play with the colab
goo.gl/njJftZ
● Try a pre-trained Transformer on translation, see attentions.
● See https://jalammar.github.io/illustrated-transformer/
● Add Transformer layer on the previous sequence tasks, try it.
● Try the non-deterministic sequence task: 50% copy / 50% repeat-even:
○ See that previous sequence model fails on unclear outputs
○ Add auto-regressive part and attention
○ See that the new model is 50% correct (best possible)
○ *Does it generalize less with attention? Why? What could be done?
How do I get it?
Tensor2Tensor
Tensor2Tensor
Tensor2Tensor (T2T) is a library of deep learning models
and datasets designed to make deep learning more
accessible and accelerate ML research.
● Datasets: ImageNet, CIFAR, MNIST, Coco, WMT, LM1B, ...
● Models: ResNet, RevNet, ShakeShake, Xception, SliceNet,
Transformer, ByteNet, Neural GPU, LSTM, ...
Tensor2Tensor Cutting Edge
Tensor2Tensor Code (github)
● data_generators/ : datasets, must subclass Problem
● models/ : models, must subclass T2TModel
● utils/ , bin/ , etc. : utilities, binaries, cloud helpers, …
pip install tensor2tensor && t2t-trainer \
  --generate_data --data_dir=~/t2t_data --output_dir=~/t2t_train/mnist \
  --problems=image_mnist --model=shake_shake --hparams_set=shake_shake_quick \
  --train_steps=1000 --eval_steps=100
Tensor2Tensor Applications
pip install tensor2tensor && t2t-trainer \
  --generate_data --data_dir=~/t2t_data --output_dir=~/t2t_train/dir \
  --problems=$P --model=$M --hparams_set=$H
● Translation (state-of-the-art both on speed and accuracy):
$P=translate_ende_wmt32k, $M=transformer, $H=transformer_big
● Image classification (CIFAR, also ImageNet):
$P=image_cifar10, $M=shake_shake, $H=shakeshake_big
● Summarization (CNN):
$P=summarize_cnn_dailymail32k, $M=transformer, $H=transformer_prepend
● Speech recognition (Librispeech):
$P=librispeech, $M=transformer, $H=transformer_librispeech
Why Tensor2Tensor?
● No need to reinvent ML. Best practices and SOTA models.
● Modularity helps. Easy to change models, hparams, data.
● Trains everywhere. Multi-GPU, distributed, Cloud, TPUs.
● Used by Google Brain. Papers, preferred for Cloud TPU LMs.
● Great active community! Find us on github, gitter, groups, ...
Tensor2Tensor + CloudML
How do I train a model on my data?
See the Cloud ML poetry tutorial!
● How to hook up your data to the library of models.
● How to easily run on Cloud ML and use all features.
● How to tune the configuration of a model automatically.
Result: Even with 20K data examples it generates poetry!
More Related Content

PDF
Training at AI Frontiers 2018 - Ni Lao: Weakly Supervised Natural Language Un...
PPTX
Wei Xu at AI Frontiers : Language Learning in an Interactive and Embodied Set...
PDF
Ilya Sutskever at AI Frontiers : Progress towards the OpenAI mission
PDF
Lukasz Kaiser at AI Frontiers: How Deep Learning Quietly Revolutionized NLP
PPTX
The How and Why of Feature Engineering
PDF
Li Deng at AI Frontiers: Three Generations of Spoken Dialogue Systems (Bots)
PPTX
Feature engineering for diverse data types
PDF
MILA DL & RL summer school highlights
Training at AI Frontiers 2018 - Ni Lao: Weakly Supervised Natural Language Un...
Wei Xu at AI Frontiers : Language Learning in an Interactive and Embodied Set...
Ilya Sutskever at AI Frontiers : Progress towards the OpenAI mission
Lukasz Kaiser at AI Frontiers: How Deep Learning Quietly Revolutionized NLP
The How and Why of Feature Engineering
Li Deng at AI Frontiers: Three Generations of Spoken Dialogue Systems (Bots)
Feature engineering for diverse data types
MILA DL & RL summer school highlights

What's hot (20)

PPTX
Deep Learning for Artificial Intelligence (AI)
PDF
李俊良/Feature Engineering in Machine Learning
PDF
Jeff Dean at AI Frontiers: Trends and Developments in Deep Learning Research
PDF
Generative Adversarial Networks and Their Applications
PDF
Generative Adversarial Network and its Applications to Speech Processing an...
PDF
"Large-Scale Deep Learning for Building Intelligent Computer Systems," a Keyn...
PDF
GANs and Applications
PDF
Nikko Ström at AI Frontiers: Deep Learning in Alexa
PPTX
Deep Learning for Natural Language Processing
PDF
[DSC x TAAI 2016] 林守德 / 人工智慧與機器學習在推薦系統上的應用
PDF
Introduction to Artificial Intelligence
PDF
The Unreasonable Benefits of Deep Learning
PDF
Adam Coates at AI Frontiers: AI for 100 Million People with Deep Learning
PDF
許永真/Crowd Computing for Big and Deep AI
PDF
Introduction To Applied Machine Learning
PDF
Generative adversarial networks
PDF
Variants of GANs - Jaejun Yoo
PDF
A brief overview of Reinforcement Learning applied to games
PDF
Deep learning for natural language embeddings
PPTX
Lessons learnt at building recommendation services at industry scale
Deep Learning for Artificial Intelligence (AI)
李俊良/Feature Engineering in Machine Learning
Jeff Dean at AI Frontiers: Trends and Developments in Deep Learning Research
Generative Adversarial Networks and Their Applications
Generative Adversarial Network and its Applications to Speech Processing an...
"Large-Scale Deep Learning for Building Intelligent Computer Systems," a Keyn...
GANs and Applications
Nikko Ström at AI Frontiers: Deep Learning in Alexa
Deep Learning for Natural Language Processing
[DSC x TAAI 2016] 林守德 / 人工智慧與機器學習在推薦系統上的應用
Introduction to Artificial Intelligence
The Unreasonable Benefits of Deep Learning
Adam Coates at AI Frontiers: AI for 100 Million People with Deep Learning
許永真/Crowd Computing for Big and Deep AI
Introduction To Applied Machine Learning
Generative adversarial networks
Variants of GANs - Jaejun Yoo
A brief overview of Reinforcement Learning applied to games
Deep learning for natural language embeddings
Lessons learnt at building recommendation services at industry scale
Ad

Similar to Training at AI Frontiers 2018 - Lukasz Kaiser: Sequence to Sequence Learning with Tensor2Tensor (20)

PDF
Language translation with Deep Learning (RNN) with TensorFlow
 
PDF
Let Android dream electric sheep: Making emotion model for chat-bot with Pyth...
PDF
Natural language processing open seminar For Tensorflow usage
PDF
Deep Learning Introduction - WeCloudData
PDF
Overview of TensorFlow For Natural Language Processing
PPT
Deep learning is a subset of machine learning and AI
PPT
Overview of Deep Learning and its advantage
PPT
Introduction to Deep Learning presentation
PPT
deepnet-lourentzou.ppt
PPTX
Recurrent Neural Networks for Text Analysis
PDF
Towards Safe Automated Refactoring of Imperative Deep Learning Programs to Gr...
PDF
5_RNN_LSTM.pdf
 
PDF
Deep Dive on Deep Learning (June 2018)
PDF
Deep Learning, Where Are You Going?
PDF
Large Scale Deep Learning with TensorFlow
PDF
MapReduce: teoria e prática
PDF
Introduction to Tensor Flow for Optical Character Recognition (OCR)
PDF
Tensorflow 2.0 and Coral Edge TPU
PDF
The Flow of TensorFlow
PDF
computer science cousre related to python
Language translation with Deep Learning (RNN) with TensorFlow
 
Let Android dream electric sheep: Making emotion model for chat-bot with Pyth...
Natural language processing open seminar For Tensorflow usage
Deep Learning Introduction - WeCloudData
Overview of TensorFlow For Natural Language Processing
Deep learning is a subset of machine learning and AI
Overview of Deep Learning and its advantage
Introduction to Deep Learning presentation
deepnet-lourentzou.ppt
Recurrent Neural Networks for Text Analysis
Towards Safe Automated Refactoring of Imperative Deep Learning Programs to Gr...
5_RNN_LSTM.pdf
 
Deep Dive on Deep Learning (June 2018)
Deep Learning, Where Are You Going?
Large Scale Deep Learning with TensorFlow
MapReduce: teoria e prática
Introduction to Tensor Flow for Optical Character Recognition (OCR)
Tensorflow 2.0 and Coral Edge TPU
The Flow of TensorFlow
computer science cousre related to python
Ad

More from AI Frontiers (20)

PPTX
Divya Jain at AI Frontiers : Video Summarization
PPTX
Training at AI Frontiers 2018 - LaiOffer Data Session: How Spark Speedup AI
PDF
Training at AI Frontiers 2018 - LaiOffer Self-Driving-Car-Lecture 1: Heuristi...
PDF
Training at AI Frontiers 2018 - LaiOffer Self-Driving-Car-lecture 2: Incremen...
PDF
Training at AI Frontiers 2018 - Udacity: Enhancing NLP with Deep Neural Networks
PDF
Training at AI Frontiers 2018 - LaiOffer Self-Driving-Car-Lecture 3: Any-Angl...
PDF
Percy Liang at AI Frontiers : Pushing the Limits of Machine Learning
PDF
Mark Moore at AI Frontiers : Uber Elevate
PPTX
Mario Munich at AI Frontiers : Consumer robotics: embedding affordable AI in ...
PPTX
Arnaud Thiercelin at AI Frontiers : AI in the Sky
PPTX
Anima Anandkumar at AI Frontiers : Modern ML : Deep, distributed, Multi-dimen...
PPTX
Sumit Gupta at AI Frontiers : AI for Enterprise
PPTX
Yuandong Tian at AI Frontiers : Planning in Reinforcement Learning
PPTX
Alex Ermolaev at AI Frontiers : Major Applications of AI in Healthcare
PPTX
Long Lin at AI Frontiers : AI in Gaming
PDF
Melissa Goldman at AI Frontiers : AI & Finance
PPTX
Li Deng at AI Frontiers : From Modeling Speech/Language to Modeling Financial...
PPTX
Ashok Srivastava at AI Frontiers : Using AI to Solve Complex Economic Problems
PPTX
Rohit Tripathi at AI Frontiers : Using intelligent connectivity and AI to tra...
PPTX
Kai-Fu Lee at AI Frontiers : The Era of Artificial Intelligence
Divya Jain at AI Frontiers : Video Summarization
Training at AI Frontiers 2018 - LaiOffer Data Session: How Spark Speedup AI
Training at AI Frontiers 2018 - LaiOffer Self-Driving-Car-Lecture 1: Heuristi...
Training at AI Frontiers 2018 - LaiOffer Self-Driving-Car-lecture 2: Incremen...
Training at AI Frontiers 2018 - Udacity: Enhancing NLP with Deep Neural Networks
Training at AI Frontiers 2018 - LaiOffer Self-Driving-Car-Lecture 3: Any-Angl...
Percy Liang at AI Frontiers : Pushing the Limits of Machine Learning
Mark Moore at AI Frontiers : Uber Elevate
Mario Munich at AI Frontiers : Consumer robotics: embedding affordable AI in ...
Arnaud Thiercelin at AI Frontiers : AI in the Sky
Anima Anandkumar at AI Frontiers : Modern ML : Deep, distributed, Multi-dimen...
Sumit Gupta at AI Frontiers : AI for Enterprise
Yuandong Tian at AI Frontiers : Planning in Reinforcement Learning
Alex Ermolaev at AI Frontiers : Major Applications of AI in Healthcare
Long Lin at AI Frontiers : AI in Gaming
Melissa Goldman at AI Frontiers : AI & Finance
Li Deng at AI Frontiers : From Modeling Speech/Language to Modeling Financial...
Ashok Srivastava at AI Frontiers : Using AI to Solve Complex Economic Problems
Rohit Tripathi at AI Frontiers : Using intelligent connectivity and AI to tra...
Kai-Fu Lee at AI Frontiers : The Era of Artificial Intelligence

Recently uploaded (20)

PPT
“AI and Expert System Decision Support & Business Intelligence Systems”
PDF
TokAI - TikTok AI Agent : The First AI Application That Analyzes 10,000+ Vira...
PDF
Encapsulation_ Review paper, used for researhc scholars
PDF
KodekX | Application Modernization Development
PDF
Network Security Unit 5.pdf for BCA BBA.
PPTX
Understanding_Digital_Forensics_Presentation.pptx
PPTX
VMware vSphere Foundation How to Sell Presentation-Ver1.4-2-14-2024.pptx
PDF
Per capita expenditure prediction using model stacking based on satellite ima...
PPTX
PA Analog/Digital System: The Backbone of Modern Surveillance and Communication
PPTX
Big Data Technologies - Introduction.pptx
PDF
How UI/UX Design Impacts User Retention in Mobile Apps.pdf
PPTX
KOM of Painting work and Equipment Insulation REV00 update 25-dec.pptx
PPTX
Digital-Transformation-Roadmap-for-Companies.pptx
PDF
NewMind AI Monthly Chronicles - July 2025
PDF
Diabetes mellitus diagnosis method based random forest with bat algorithm
PPTX
Effective Security Operations Center (SOC) A Modern, Strategic, and Threat-In...
PDF
Modernizing your data center with Dell and AMD
PDF
Reach Out and Touch Someone: Haptics and Empathic Computing
PDF
Dropbox Q2 2025 Financial Results & Investor Presentation
PDF
Unlocking AI with Model Context Protocol (MCP)
“AI and Expert System Decision Support & Business Intelligence Systems”
TokAI - TikTok AI Agent : The First AI Application That Analyzes 10,000+ Vira...
Encapsulation_ Review paper, used for researhc scholars
KodekX | Application Modernization Development
Network Security Unit 5.pdf for BCA BBA.
Understanding_Digital_Forensics_Presentation.pptx
VMware vSphere Foundation How to Sell Presentation-Ver1.4-2-14-2024.pptx
Per capita expenditure prediction using model stacking based on satellite ima...
PA Analog/Digital System: The Backbone of Modern Surveillance and Communication
Big Data Technologies - Introduction.pptx
How UI/UX Design Impacts User Retention in Mobile Apps.pdf
KOM of Painting work and Equipment Insulation REV00 update 25-dec.pptx
Digital-Transformation-Roadmap-for-Companies.pptx
NewMind AI Monthly Chronicles - July 2025
Diabetes mellitus diagnosis method based random forest with bat algorithm
Effective Security Operations Center (SOC) A Modern, Strategic, and Threat-In...
Modernizing your data center with Dell and AMD
Reach Out and Touch Someone: Haptics and Empathic Computing
Dropbox Q2 2025 Financial Results & Investor Presentation
Unlocking AI with Model Context Protocol (MCP)

Training at AI Frontiers 2018 - Lukasz Kaiser: Sequence to Sequence Learning with Tensor2Tensor

  • 1. Sequence to Sequence Learning with Tensor2Tensor Łukasz Kaiser and Ryan Sepassi
  • 2. • Intro • Basics • Tensor view of neural networks • TensorFlow core and higher-level APIs, Tensor2Tensor • Exercise: understand T2T pipeline fully, try on MNIST • Sequence models • Basics • Transformer • Exercise: train basic sequence models, use Transformer • Outlook: deep learning and Tensor2Tensor community
  • 4. (Tom Bianco, datanami.com) Speed TPUv2: 180 TF/2$/h TPUv2 pod: 11.5 PF TPUv3 pod: over 100 PF Top supercomputer: 122 PF (Double precision, could be over 1 exaflop for ML applications.)
  • 5. ML Arxiv Papers per Year ~50 New ML papers every day!
  • 6. Rapid accuracy improvements Image courtesy of Canziani et al, 2017 2012 2014-2015 2015-2016 2017
  • 8. How Deep Learning Quietly Revolutionized NLP (2016)
  • 9. What NLP tasks are we talking about? ● Part Of Speech Tagging Assign part-of-speech to each word. ● Parsing Create a grammar tree given a sentence. ● Named Entity Recognition Recognize people, places, etc. in a sentence. ● Language Modeling Generate natural sentences. ● Translation Translate a sentence into another language. ● Sentence Compression Remove words to summarize a sentence. ● Abstractive Summarization Summarize a paragraph in new words. ● Question Answering Answer a question, maybe given a passage. ● ….
  • 10. Can deep learning solve these tasks? ● Inputs and outputs have variable size, how can neural networks handle it? ● Recurrent Neural Networks can do it, but how do we train them? ● Long Short-Term Memory [Hochreiter et al., 1997], but how to compose it? ● Encoder-Decoder (sequence-to-sequence) architectures [Sutskever et al., 2014; Bahdanau et al., 2014; Cho et al., 2014]
  • 11. Parsing with sequence-to-sequence LSTMs (1) Represent the tree as a sequence. (2) Generate data and train a sequence-to-sequence LSTM model. (3) Results: 92.8 F1 score vs 92.4 previous best [Vinyals & Kaiser et al., 2014]
  • 12. Language modeling with LSTMs Language model performance is measured in perplexity (lower is better). ● Kneser-Ney 5-gram: 67.6 [Chelba et al., 2013] ● RNN-1024 + 9-gram: 51.3 [Chelba et al., 2013] ● LSTM-512-512: 54.1 [Józefowicz et al., 2016] ● 2-layer LSTM-8192-1024: 30.6 [Józefowicz et al., 2016] ● 2-l.-LSTM-4096-1024+MoE: 28.0 [Shazeer & Mirhoseini et al., 2016] Model size seems to be the decisive factor.
  • 13. Language modeling with LSTMs: Examples Raw (not hand-selected) sampled sentences: [Józefowicz et al., 2016] About 800 people gathered at Hever Castle on Long Beach from noon to 2pm , three to four times that of the funeral cortege . It is now known that coffee and cacao products can do no harm on the body . Yuri Zhirkov was in attendance at the Stamford Bridge at the start of the second half but neither Drogba nor Malouda was able to push on through the Barcelona defence .
  • 14. Sentence compression with LSTMs Example: Input: State Sen. Stewart Greenleaf discusses his proposed human trafficking bill at Calvery Baptist Church in Willow Grove Thursday night. Output: Stewart Greenleaf discusses his human trafficking bill. Results: readability informativeness MIRA (previous best): 4.31 3.55 LSTM [Filippova et al., 2015]: 4.51 3.78
  • 15. Translation with LSTMs Translation performance is measured in BLEU scores (higher is better, EnDe): ● Phrase-Based MT: 20.7 [Durrani et al., 2014] ● Early LSTM model: 19.4 [Sébastien et al., 2015] ● DeepAtt (large LSTM): 20.6 [Zhou et al., 2016] ● GNMT (large LSTM): 24.9 [Wu et al., 2016] ● GNMT+MoE: 26.0 [Shazeer & Mirhoseini et al., 2016] Again, model size and tuning seem to be the decisive factor.
  • 16. Translation with LSTMs: Examples German: Probleme kann man niemals mit derselben Denkweise lösen, durch die sie entstanden sind. PBMT Translate: GNMT Translate: No problem can be solved from Problems can never be solved the same consciousness that with the same way of thinking they have arisen. that caused them.
  • 17. Translation with LSTMs: How good is it? PBMT GNMT Human Relative improvement English → Spanish 4.885 5.428 5.504 87% English → French 4.932 5.295 5.496 64% English → Chinese 4.035 4.594 4.987 58% Spanish → English 4.872 5.187 5.372 63% French → English 5.046 5.343 5.404 83% Chinese → English 3.694 4.263 4.636 60% Google Translate production data, median score by human evaluation on the scale 0-6. [Wu et al., ‘16]
  • 23. Modern View h = f(Wx + B) [or h = conv(W, x)] o = f(W’h + B’) l = -logp(o = true) P -= lr * dl/dP where P = {W,W’,B,B’}
  • 24. So what do we need? h = f(Wx + B) o = f(W’h + B’) l = -logp(o = true) P -= lr * dl/dP1. Operations like matmul, f done fast 2. Gradients symbolically for dl/dP 3. Specify W,W’ and keep track of them 4. Run it on a large scale See this online course for a nice introduction: https://www.coursera.org/learn/machine-learning
  • 27. Yet another dataflow system MatMul Add Relu biases weights examples labels Xent Graph of Nodes, also called Operations or ops.
  • 28. Yet another dataflow systemwith tensors MatMul Add Relu biases weights examples labels Xent Edges are N-dimensional arrays: Tensors
  • 29. Yet another dataflow systemwith state Add Mul biases ... learning rate −=... 'Biases' is a variable −= updates biasesSome ops compute gradients
  • 30. Device A Device B Yet another dataflow systemdistributed Add Mul biases ... learning rate −=... Devices: Processes, Machines, GPUs, etc
  • 31. What's not in the Core Model ● Anything about neural networks, machine learning, ... ● Anything about backpropagation, differentiation, ... ● Anything about gradient descent, parameter servers… These are built by combining existing operations, or defining new operations. Core system can be applied to other problems than machine learning.
  • 33. API Families Graph Construction ● Assemble a Graph of Operations. Graph Execution ● Deploy and execute operations in a Graph.
  • 34. Hello, world! import tensorflow as tf # Create an operation. hello = tf.constant("Hello, world!") # Create a session. sess = tf.Session() # Execute that operation and print its result. print sess.run(hello)
  • 35. Graph Construction Library of predefined Ops ● Constant, Variables, Math ops, etc. Functions to add Ops for common needs ● Gradients: Add Ops to compute derivatives. ● Training methods: Add Ops to update variables (SGD, Adagrad, etc.) All operations are added to a global Default Graph. Slightly more advanced calls let you control the Graph more precisely.
  • 36. Op that holds state that persists across calls to Run() v = tf.get_variable(‘v’, [4, 3]) # 4x3 matrix, float by default Variable State Variable Value Reference
  • 37. Some Ops modify the Variable state: InitVariable, Assign, AssignSub, AssignAdd. init = v.assign(tf.random_uniform(shape=v.shape)) Variables State Variable Value Reference Random Parameters Assign Updates the variable value when run. Outputs the value for convenienceState Variable
  • 38. Math Ops A variety of Operations for linear algebra, convolutions, etc. c = tf.constant(...) w = tf.get_variable(...) b = tf.get_variable(...) y = tf.add(tf.matmul(c, w), b) Overloaded Python operators help: y = tf.matmul(c, w) + b w c MatMul b Add
  • 39. Operations, plenty of them ● Array ops ○ Concat ○ Slice ○ Reshape ○ ... ● Math ops ○ Linear algebra (MatMul, …) ○ Component-wise ops (Mul, ...) ○ Reduction ops (Sum, …) Documentation at tensorflow.org ● Neural network ops ○ Non-linearities (Relu, …) ○ Convolutions (Conv2D, …) ○ Pooling (AvgPool, …) ● ...and many more ○ Constants, Data flow, Control flow, Embedding, Initialization, I/O, Legacy Input Layers, Logging, Random, Sparse, State, Summary, etc.
  • 40. Graph Construction Helpers ● Gradients ● Optimizers ● Higher-Level APIs in core TF ● Higher-Level libraries outside core TF
  • 41. Gradients Given a loss, add Ops to compute gradients for Variables. var1 var0 Op Op Op loss many ops
  • 42. Gradients tf.gradients(loss, [var0, var1]) # Generate gradients var1 var0 Op Op Op loss many ops Op Op many opsGradients for var0 Gradients for var1 Op
  • 44. Optimizers Apply gradients to Variables: SGD(var, grad, learning_rate) var AssignSub Mul grad Note: learning_rate is just output of an Op, it can easily be decayed learning_rate
  • 45. Easily Add Optimizers Builtin ● SGD, Adagrad, Momentum, Adam, … Contributed ● LazyAdam, NAdam, YellowFin, Adafactor, ...
  • 46. Putting all together to train a Neural Net Build a Graph by adding Operations: ● For Variables to hold the parameters of the Neural Net. ● To compute the Neural Net output: e.g. classification predictions. ● To compute a training loss: e.g. cross entropy, parameter L2 norms. ● To calculate gradients for the parameters to train. ● To apply gradients with a training function.
  • 48. Graph Execution Session API ● API to deploy a Graph in a Tensorflow runtime ● Can run any subset of the graph ● Can add Ops to an existing Graph (for interactive use in colab for example) Training Utilities ● Checkpoint, Recovery, Summaries, Replicas, etc.
  • 49. Python Program create graph create session sess.run() Local Runtime Runtime Session CPU GPU
  • 50. Python Program create graph create session sess.run() Remote Runtime Session Master Worker CPU Worker CPU GPU Worker CPU GPU Run([ops]) RunSubGraph() GetTensor() CreateGraph()
  • 51. Running and fetching output an op Fetch # Run an Op and fetch its output. # "values" is a numpy ndarray. values = sess.run(<an op output>)
  • 52. Running and fetching output an op Fetch Transitive closure of needed ops is Run Execution happens in parallel
  • 53. Feeding input, Running, and Fetching a an op Fetch Feed a_val = ...a numpy ndarray... values = sess.run(<an op output>, feed_input({<a output>: a_val})
  • 54. Feeding input, Running, and Fetching a an op Fetch Feed Only the required Ops are run.
  • 56. Layers are ops that create Variables def embedding(x, vocab_size, dense_size, name=None, reuse=None, multiplier=1.0): """Embed x of type int64 into dense vectors.""" with tf.variable_scope( # Use scopes like this. name, default_name="emb", values=[x], reuse=reuse): embedding_var = tf.get_variable( "kernel", [vocab_size, dense_size]) return tf.gather(embedding_var, x)
  • 57. Models are built from Layers def bytenet(inputs, targets, hparams): final_encoder = common_layers.residual_dilated_conv( inputs, hparams.num_block_repeat, "SAME", "encoder", hparams) shifted_targets = common_layers.shift_left(targets) kernel = (hparams.kernel_height, hparams.kernel_width) decoder_start = common_layers.conv_block( tf.concat([final_encoder, shifted_targets], axis=3), hparams.hidden_size, [((1, 1), kernel)], padding="LEFT") return common_layers.residual_dilated_conv( decoder_start, hparams.num_block_repeat, "LEFT", "decoder", hparams)
  • 58. Training Utilities Training program typically runs multiple threads ● Execute the training op in a loop. ● Checkpoint every so often. ● Gather summaries for the Visualizer. ● Other, eg. monitors Nans, costs, etc.
  • 60. So what do we need? h = f(Wx + B) o = f(W’h + B’) l = -logp(o = y) P -= lr * dl/dP1. Operations like matmul, f done fast 2. Gradients symbolically for dl/dP 3. Specify W,W’ and keep track of them 4. Run it on a large scale TensorFlow view: h = tf.layers.dense(x, h_size, name=”h1”) o = tf.layers.dense(h, 1, name=”output”) l = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=o, labels=y) But data? Where do we get {x,y} from?
  • 61. Tensor2Tensor Tensor2Tensor (T2T) is a library of deep learning models and datasets designed to accelerate deep learning research and make it more accessible. ● Datasets: ImageNet, CIFAR, MNIST, Coco, WMT, LM1B, ... ● Models: ResNet, RevNet, ShakeShake, Xception, SliceNet, Transformer, ByteNet, Neural GPU, LSTM, ... ● Tools: cloud training, hyperparameter tuning, TPU, ...
  • 62. So what do we need? h = f(Wx + B) o = f(W’h + B’) l = -logp(o = true) P -= lr * dl/dP1. Operations like matmul, f done fast 2. Gradients symbolically for dl/dP 3. Specify W,W’ and keep track of them 4. Run it on a large scale TensorFlow: goo.gl/njJftZ x, y = mnist.dataset h = tf.layers.dense(x, h_size, name=”h1”) o = tf.layers.dense(h, 1, name=”output”) l = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=o, labels=y)
  • 63. Play with the colab goo.gl/njJftZ ● Try pure SGD instead of the Adam optimizer and others like AdaFactor ○ Find in tensorflow.org where is the API and how optimizers are called ○ Find the AdaFactor paper on arxiv and read it; use it from Tensor2Tensor ● Try other layer sizes and numbers of layers, other activation functions. ● Try running a few times, how does initialization affect results? ● Try running on Cifar10, how does your model perform? ● Make a convolutional model, is it better? (tf.layers.dense -> tf.layers.conv2d) ● Try residual connections through conv layers, check out shake-shake in T2T
  • 65. RNNs Everywhere Sequence to Sequence Learning with Neural Networks
  • 67. Transformer Based on Attention Is All You Need by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, Illia Polosukhin and other works with Samy Bengio, Eugene Brevdo, Francois Chollet, Stephan Gouws, Nal Kalchbrenner, Ofir Nachum, Aurko Roy, Ryan Sepassi.
  • 70. Dot-Product Attention def dot_product_attention(q, k, v, bias, dropout_rate=0.0, image_shapes=None, name=None, make_image_summary=True, save_weights_to=None, dropout_broadcast_dims=None): with tf.variable_scope( name, default_name="dot_product_attention", values=[q, k, v]) as scope: # [batch, num_heads, query_length, memory_length] logits = tf.matmul(q, k, transpose_b=True) if bias is not None: logits += bias weights = tf.nn.softmax(logits, name="attention_weights") if save_weights_to is not None: save_weights_to[scope.name] = weights # dropping out the attention links for each of the heads weights = common_layers.dropout_with_broadcast_dims( weights, 1.0 - dropout_rate, broadcast_dims=dropout_broadcast_dims) if expert_utils.should_generate_summaries() and make_image_summary: attention_image_summary(weights, image_shapes) return tf.matmul(weights, v)
  • 71. Ops Activations Attention (dot-prod) n2 · d n2 + n · d Attention (additive) n2 · d n2 · d Recurrent n · d2 n · d Convolutional n · d2 n · d n = sequence length d = depth k = kernel size
  • 72. What’s missing from Self-Attention? Convolution Self-Attention
  • 73. What’s missing from Self-Attention? Convolution Self-Attention ● Convolution: a different linear transformation for each relative position. Allows you to distinguish what information came from where. ● Self-Attention: a weighted average :(
  • 74. The Fix: Multi-Head Attention Convolution Multi-Head Attention ● Multiple attention layers (heads) in parallel (shown by different colors) ● Each head uses different linear transformations. ● Different heads can learn different relationships.
  • 75. The Fix: Multi-Head Attention
  • 76. The Fix: Multi-Head Attention
  • 77. Ops Activations Multi-Head Attention with linear transformations. For each of the h heads, dq = dk = dv = d/h n2 · d + n · d2 n2 · h + n · d Recurrent n · d2 n · d Convolutional n · d2 n · d n = sequence length d = depth k = kernel size
  • 78. Three ways of attention Encoder-Decoder Attention Encoder Self-Attention MaskedDecoder Self-Attention
  • 80. Machine Translation Results: WMT-14 29.1 41.8
  • 83. Coreference resolution (Winograd schemas) Sentence Google Translate Transformer The cow ate the hay because it was delicious. La vache mangeait le foin parce qu'elle était délicieuse. La vache a mangé le foin parce qu'il était délicieux. The cow ate the hay because it was hungry. La vache mangeait le foin parce qu'elle avait faim. La vache mangeait le foin parce qu'elle avait faim. The women stopped drinking the wines because they were carcinogenic. Les femmes ont cessé de boire les vins parce qu'ils étaient cancérogènes. Les femmes ont cessé de boire les vins parce qu'ils étaient cancérigènes. The women stopped drinking the wines because they were pregnant. Les femmes ont cessé de boire les vins parce qu'ils étaient enceintes. Les femmes ont cessé de boire les vins parce qu'elles étaient enceintes. The city councilmen refused the female demonstrators a permit because they advocated violence. Les conseillers municipaux ont refusé aux femmes manifestantes un permis parce qu'ils préconisaient la violence. Le conseil municipal a refusé aux manifestantes un permis parce qu'elles prônaient la violence. The city councilmen refused the female demonstrators a permit because they feared violence. Les conseillers municipaux ont refusé aux femmes manifestantes un permis parce qu'ils craignaient la violence Le conseil municipal a refusé aux manifestantes un permis parce qu'elles craignaient la violence.*
  • 84. Long Text Generation Generating entire Wikipedia articles by summarizing top search results and references. (Memory-Compressed Attn.)
  • 85. '''The Transformer''' are a Japanese [[hardcore punk]] band. ==Early years== The band was formed in 1968, during the height of Japanese music history. Among the legendary [[Japanese people|Japanese]] composers of [Japanese lyrics], they prominently exemplified Motohiro Oda's especially tasty lyrics and psychedelic intention. Michio was a longtime member of the every Sunday night band PSM. His alluring was of such importance as being the man who ignored the already successful image and that he municipal makeup whose parents were&nbsp;– the band was called Jenei.<ref>http://www.separatist.org/se_frontend/post-punk-musician-the-kidney.html</ref> From a young age the band was very close, thus opting to pioneer what
  • 86. From a young age the band was very close, thus opting to pioneer what had actually begun as a more manageable core hardcore punk band.<ref>http://www.talkradio.net/article/independent-music-fades-from-the-closed-drawings-out</ref> ==History== ===Born from the heavy metal revolution=== In 1977 the self-proclaimed King of Tesponsors, [[Joe Lus: : It was somewhere... it was just a guile ... taking this song to Broadway. It was the first record I ever heard on A.M., After some opposition I received at the hands of Parsons, and in the follow-up notes myself.<ref>http://www.discogs.com/artist/The+Op%C5%8Dn+&+Psalm</ref> The band cut their first record album titled ''Transformed, furthered
  • 87. The band cut their first record album titled ''Transformed, furthered and extended Extended'',<ref>[https://www.discogs.com/album/69771 MC – Transformed EP (CDR) by The Moondrawn – EMI, 1994]</ref> and in 1978 the official band line-up of the three-piece pop-punk-rock band TEEM. They generally played around [[Japan]], growing from the Top 40 standard. ===1981-2010: The band to break away=== On 1 January 1981 bassist Michio Kono, and the members of the original line-up emerged. Niji Fukune and his [[Head poet|Head]] band (now guitarist) Kazuya Kouda left the band in the hands of the band at the May 28, 1981, benefit season of [[Led Zeppelin]]'s Marmarin building. In June 1987, Kono joined the band as a full-time drummer, playing a
  • 88. few nights in a 4 or 5 hour stint with [[D-beat]]. Kono played through the mid-1950s, at Shinlie, continued to play concerts with drummers in Ibis, Cor, and a few at the Leo Somu Studio in Japan. In 1987, Kono recruited new bassist Michio Kono and drummer Ayaka Kurobe as drummer for band. Kono played trumpet with supplement music with Saint Etienne as a drummer. Over the next few years Kono played as drummer and would get many alumni news invitations to the bands' ''Toys Beach'' section. In 1999 he joined the [[CT-182]]. His successor was Barrie Bell on a cover of [[Jethro Tull (band)|Jethro Tull]]'s original 1967 hit "Back Home" (last appearance was in Jethro), with whom he shares a name. ===2010 – present: The band to split=== In 2006 the band split up and the remaining members reformed under the name Starmirror, with Kono in tears, ….
  • 89. '''''The Transformer''''' is a [[book]] by British [[illuminatist]] [[Herman Muirhead]], set in a post-apocalyptic world that border on a mysterious alien known as the "Transformer Planet" which is his trademark to save Earth. The book is about 25 years old, and it contains forty-one different demographic models of the human race, as in the cases of two fictional ''groups'',&nbsp;''[[Robtobeau]]''&nbsp;"Richard" and "The Transformers Planet". == Summary == The book benefits on the [[3-D film|3-D film]], taking his one-third of the world's pure "answer" and gas age from 30 to 70 within its confines. The book covers the world of the world of [[Area 51|Binoculars]] from around the worlds of Earth. It is judged by the ability of [[telepathy|telepaths]] and [[television]], and provides color, line, and end-to-end observational work.
  • 90. and end-to-end observational work. To make the book up and document the recoverable quantum states of the universe, in order to inspire a generation that fantasy producing a tele-recording-offering machine is ideal. To make portions of this universe home, he recreates the rostrum obstacle-oriented framework Minou.<ref>http://www.rewunting.net/voir/BestatNew/2007/press/Story.html)</ref> == ''The Transformer''== The book was the first on a [[Random Access Album|re-issue]] since its original version of ''[[Robtobeau]]'', despite the band naming itself a "Transformer Planet" in the book.<ref name=prweb-the-1985>{{cite web|url=http://www.prnewswire.co.uk/cgi/news/release?id=9010884|title=''The Transformer''|publisher=www.prnewswire.co.uk|date=|accessdate=2012-04-25}}</ref> Today, "[[The Transformers Planet]]" is played entirely open-ended, there are more than just the four previously separate only bands. A number of its groups will live on one abandoned volcano in North America,
  • 91. ===Conceptual ''The Transformer'' universe=== Principals a setting-man named “The Supercongo Planet,” who is a naturalistic device transferring voice and humour from ''The Transformer Planet,'' whose two vice-maks appear often in this universe existence, and what the project in general are trying to highlight many societal institutions. Because of the way that the corporation has made it, loneliness, confidence, research and renting out these universes are difficult to organise without the bands creating their own universe. The scientist is none other than a singer and musician. Power plants are not only problematic, but if they want programmed them to create and perform the world's first Broadcast of itself once the universe started, but deliberately Acta Biological Station, db.us and BB on ''The Transformer Planet'', ''The Transformer Planet'', aren't other things Scheduled for.
  • 92. :<blockquote>A man called Dick Latanii Bartow, known the greatest radio dot Wonderland administrator at influential arrangers in a craze over the complex World of Biological Predacial Engineer in Rodel bringing Earth into a 'sortjob' with fans. During this 'Socpurportedly Human', Conspiracy was being released to the world as Baron Maadia on planet Nature. A world-renowned scientist named Julia Samur is able to cosmouncish society and run for it - except us who is he and he is before talking this entire T100 before Cell physiologist Cygnets. Also, the hypnotic Mr. Mattei arrived, so it is Mischief who over-manages for himself - but a rising duplicate of Phil Rideout makes it almost affable. There is plenty of people at work to make use of it and animal allies out of politics. But Someday in 1964, when we were around, we were steadfast against the one man's machine and he did an amazing job at the toe of the mysterious... Mr. Suki who is an engineering desk lecturer at the University of}}}} …………….
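The memory-compressed attention mentioned on the long-text-generation slide shrinks the set of keys and values before attending, so that attention over very long inputs stays tractable. The sketch below is only an illustration of that idea, not the exact layer from the paper or from Tensor2Tensor: it compresses keys and values with strided average pooling (the paper compresses with a strided convolution), and every shape and name in it is made up.

import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def memory_compressed_attention(q, k, v, stride=3):
    # Shrink keys/values by a factor of `stride` with average pooling,
    # then run ordinary scaled dot-product attention over the shorter memory.
    length, depth = k.shape
    usable = (length // stride) * stride
    k_c = k[:usable].reshape(-1, stride, depth).mean(axis=1)   # [length/stride, depth]
    v_c = v[:usable].reshape(-1, stride, depth).mean(axis=1)
    logits = q @ k_c.T / np.sqrt(depth)                        # [q_len, length/stride]
    return softmax(logits) @ v_c                               # [q_len, depth]

# Toy check: 12 query positions attend over 900 positions compressed to 300.
q, k, v = np.random.randn(12, 64), np.random.randn(900, 64), np.random.randn(900, 64)
print(memory_compressed_attention(q, k, v).shape)   # (12, 64)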
  • 93. Image Generation. Model / % unrecognized, i.e. the share of generated images that human raters did not recognize as machine-made (50% would mean indistinguishable from real):
● ResNet: 4.0%
● Superresolution GAN (Garcia ’16): 8.5%
● PixelRecursive (Dahl et al., 2017): 11%
● Image Transformer: 36.9%
  • 94. How about GANs? (See "Are GANs Created Equal? A Large-Scale Study.")
● Problem 1: high variance; results differ a lot across runs and random seeds.
● Problem 2: even the best models are not great (for comparison, Image Transformer: 36.6).
  • 95. Play with the colab: goo.gl/njJftZ ● Try a pre-trained Transformer on translation, see attentions. ● See https://jalammar.github.io/illustrated-transformer/ ● Add a Transformer layer on the previous sequence tasks, try it. ● Try the non-deterministic sequence task: 50% copy / 50% repeat-even: ○ See that the previous sequence model fails on unclear outputs ○ Add the auto-regressive part and attention ○ See that the new model is 50% correct (best possible) ○ *Does it generalize less with attention? Why? What could be done?
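For the last exercise bullets, the piece being added is attention with a causal mask, so that output position i can only look at positions up to i (the auto-regressive part). Below is a minimal numpy sketch of that masking idea with invented shapes; it is independent of the colab's own code and only shows the concept.

import numpy as np

def causal_self_attention(x):
    # Scaled dot-product self-attention where position i may only
    # attend to positions 0..i (auto-regressive / causal masking).
    length, depth = x.shape
    logits = x @ x.T / np.sqrt(depth)                    # [length, length]
    future = np.triu(np.ones((length, length)), k=1)     # 1s above the diagonal
    logits = np.where(future == 1, -1e9, logits)         # block future positions
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x

x = np.random.randn(10, 32)              # toy sequence: 10 positions, depth 32
print(causal_self_attention(x).shape)    # (10, 32)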
  • 96. How do I get it?
  • 98. Tensor2Tensor. Tensor2Tensor (T2T) is a library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research. ● Datasets: ImageNet, CIFAR, MNIST, COCO, WMT, LM1B, ... ● Models: ResNet, RevNet, ShakeShake, Xception, SliceNet, Transformer, ByteNet, Neural GPU, LSTM, ...
  • 100. Tensor2Tensor Code (github)
● data_generators/ : datasets, must subclass Problem
● models/ : models, must subclass T2TModel
● utils/ , bin/ , etc. : utilities, binaries, cloud helpers, …

pip install tensor2tensor && t2t-trainer \
  --generate_data --data_dir=~/t2t_data --output_dir=~/t2t_train/mnist \
  --problems=image_mnist --model=shake_shake --hparams_set=shake_shake_quick \
  --train_steps=1000 --eval_steps=100
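As the layout above says, a new dataset is added by subclassing Problem. Here is a minimal sketch of what such a subclass can look like for a text-to-text task, using Tensor2Tensor's registry and Text2TextProblem base class; the class name, vocabulary size, and toy sample generator are invented for illustration, not taken from the library.

from tensor2tensor.data_generators import text_problems
from tensor2tensor.utils import registry

@registry.register_problem
class ReverseWords(text_problems.Text2TextProblem):
  """Toy problem: map a sentence to the same words in reverse order."""

  @property
  def approx_vocab_size(self):
    return 2**13   # ~8k subword vocabulary, built from the generated samples

  @property
  def is_generate_per_split(self):
    return False   # generate once; T2T splits the data into train/dev itself

  def generate_samples(self, data_dir, tmp_dir, dataset_split):
    sentences = ["the cow ate the hay", "the women stopped drinking the wines"]
    for s in sentences:
      yield {"inputs": s, "targets": " ".join(reversed(s.split()))}

If the class lives outside the library, t2t-trainer's --t2t_usr_dir flag is the usual way to make the registry pick it up, and the problem name on the command line is the snake_case form of the class name (here reverse_words).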
  • 101. Tensor2Tensor Applications

pip install tensor2tensor && t2t-trainer \
  --generate_data --data_dir=~/t2t_data --output_dir=~/t2t_train/dir \
  --problems=$P --model=$M --hparams_set=$H

● Translation (state of the art in both speed and accuracy): $P=translate_ende_wmt32k, $M=transformer, $H=transformer_big
● Image classification (CIFAR, also ImageNet): $P=image_cifar10, $M=shake_shake, $H=shakeshake_big
● Summarization (CNN/Daily Mail): $P=summarize_cnn_dailymail32k, $M=transformer, $H=transformer_prepend
● Speech recognition (Librispeech): $P=librispeech, $M=transformer, $H=transformer_librispeech
  • 102. Why Tensor2Tensor? ● No need to reinvent ML: best practices and SOTA models. ● Modularity helps: easy to change models, hparams, data. ● Trains everywhere: multi-GPU, distributed, Cloud, TPUs. ● Used by Google Brain: papers, preferred for Cloud TPU language models. ● Great, active community! Find us on GitHub, Gitter, groups, ...
  • 103. Tensor2Tensor + Cloud ML. How do I train a model on my own data? See the Cloud ML poetry tutorial! ● How to hook up your data to the library of models. ● How to easily run on Cloud ML and use all its features. ● How to tune the configuration of a model automatically. Result: even with only ~20K training examples it generates poetry!
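The tutorial's automatic tuning uses Cloud ML's hyperparameter search; the manual counterpart inside T2T is registering a custom hparams set and selecting it with --hparams_set. A small sketch, assuming transformer_base as the starting point; the set's name and the overridden values are arbitrary choices, not recommendations.

from tensor2tensor.models import transformer
from tensor2tensor.utils import registry

@registry.register_hparams
def transformer_poetry_small():
  # A smaller Transformer configuration, e.g. for a ~20K-example dataset.
  hparams = transformer.transformer_base()
  hparams.num_hidden_layers = 2
  hparams.hidden_size = 256
  hparams.filter_size = 1024
  hparams.num_heads = 4
  return hparams

Passing --hparams_set=transformer_poetry_small to t2t-trainer (plus --t2t_usr_dir if the function is defined outside the library) then selects this configuration at training time.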