Flux and InfluxDB 2.0
Paul Dix

@pauldix

paul@influxdata.com
Flux and InfluxDB 2.0
• Data-scripting language

• Functional

• MIT Licensed

• Language & Runtime/Engine
Language + Query Engine
Flux and InfluxDB 2.0
2.0
Biggest Change Since 0.9
Clean Migration Path
Compatibility Layer
• MIT Licensed

• Multi-tenanted

• Telegraf, InfluxDB, Chronograf, Kapacitor rolled into 1

• OSS single server

• Cloud usage based pricing

• Dedicated Cloud 

• Enterprise on-premise
• MIT Licensed

• Multi-tenanted

• Telegraf, InfluxDB, Chronograf, Kapacitor rolled into 1

• OSS single server

• Cloud usage based pricing

• Dedicated Cloud 

• Enterprise on-premise
TICK is dead
Long Live InfluxDB 2.0
(and Telegraf)
Consistent Documented API
Collection, Write/Query, Streaming & Batch Processing, Dashboards
Flux and InfluxDB 2.0
Officially Supported Client
Libraries
Go, Node.js, Ruby, Python, PHP, Java, C#, C, Kotlin
Visualization Libraries
Flux and InfluxDB 2.0
Ways to run Flux - (interpreter,
InfluxDB 1.7 & 2.0)
Flux and InfluxDB 2.0
Flux Language Elements
// get all data from the telegraf db
from(bucket:"telegraf/autogen")
// filter that by the last hour
|> range(start:-1h)
// filter further by series with a specific measurement and field
|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
// get all data from the telegraf db
from(bucket:"telegraf/autogen")
// filter that by the last hour
|> range(start:-1h)
// filter further by series with a specific measurement and field
|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
Comments
// get all data from the telegraf db
from(bucket:"telegraf/autogen")
// filter that by the last hour
|> range(start:-1h)
// filter further by series with a specific measurement and field
|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
Named Arguments
// get all data from the telegraf db
from(bucket:"telegraf/autogen")
// filter that by the last hour
|> range(start:-1h)
// filter further by series with a specific measurement and field
|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
String Literals
// get all data from the telegraf db
from(bucket:"telegraf/autogen")
// filter that by the last hour
|> range(start:-1h)
// filter further by series with a specific measurement and field
|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
Buckets, not DBs
// get all data from the telegraf db
from(bucket:"telegraf/autogen")
// filter that by the last hour
|> range(start:-1h)
// filter further by series with a specific measurement and field
|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
Duration Literal
// get all data from the telegraf db
from(bucket:"telegraf/autogen")
// filter by an absolute start time
|> range(start:2018-11-07T00:00:00Z)
// filter further by series with a specific measurement and field
|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
Time Literal
// get all data from the telegraf db
from(bucket:"telegraf/autogen")
// filter that by the last hour
|> range(start:-1h)
// filter further by series with a specific measurement and field
|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
Pipe forward operator
// get all data from the telegraf db
from(bucket:"telegraf/autogen")
// filter that by the last hour
|> range(start:-1h)
// filter further by series with a specific measurement and field
|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
Anonymous Function
// get all data from the telegraf db
from(bucket:"telegraf/autogen")
// filter that by the last hour
|> range(start:-1h)
// filter further by series with a specific measurement and field
|> filter(fn: (r) => (r._measurement == "cpu" or r._measurement == "mem")
and r.host == "serverA")
Predicate Function
// variables
some_int = 23
// variables
some_int = 23
some_float = 23.2
// variables
some_int = 23
some_float = 23.2
some_string = "cpu"
// variables
some_int = 23
some_float = 23.2
some_string = "cpu"
some_duration = 1h
// variables
some_int = 23
some_float = 23.2
some_string = "cpu"
some_duration = 1h
some_time = 2018-10-10T19:00:00
// variables
some_int = 23
some_float = 23.2
some_string = "cpu"
some_duration = 1h
some_time = 2018-10-10T19:00:00
some_array = [1, 6, 20, 22]
// variables
some_int = 23
some_float = 23.2
some_string = "cpu"
some_duration = 1h
some_time = 2018-10-10T19:00:00
some_array = [1, 6, 20, 22]
some_object = {foo: "hello", bar: 22}
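Functions are values too, so they can be bound to variables like any of the literals above; a minimal sketch (names made up for illustration):

// a function bound to a variable; called with named arguments, e.g. add(a: 1, b: 2)
add = (a, b) => a + b

// a predicate that captures some_string from above; can be passed as filter(fn: matches_cpu)
matches_cpu = (r) => r._measurement == some_string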
Data Model & Working with
Tables
Example Series
_measurement=mem,host=A,region=west,_field=free
_measurement=mem,host=B,region=west,_field=free
_measurement=cpu,host=A,region=west,_field=usage_system
_measurement=cpu,host=A,region=west,_field=usage_user
Example Series
_measurement=mem,host=A,region=west,_field=free
_measurement=mem,host=B,region=west,_field=free
_measurement=cpu,host=A,region=west,_field=usage_system
_measurement=cpu,host=A,region=west,_field=usage_user
Measurement
Example Series
_measurement=mem,host=A,region=west,_field=free
_measurement=mem,host=B,region=west,_field=free
_measurement=cpu,host=A,region=west,_field=usage_system
_measurement=cpu,host=A,region=west,_field=usage_user
Field
Table
_measurement host region _field _time _value
mem A west free 2018-06-14T09:15:00 10
mem A west free 2018-06-14T09:14:50 10
_measurement host region _field _time _value
mem A west free 2018-06-14T09:15:00 10
mem A west free 2018-06-14T09:14:50 10
Column
_measurement host region _field _time _value
mem A west free 2018-06-14T09:15:00 10
mem A west free 2018-06-14T09:14:50 10
Record
_measurement host region _field _time _value
mem A west free 2018-06-14T09:15:00 10
mem A west free 2018-06-14T09:14:50 10
Group Key
_measurement=mem,host=A,region=west,_field=free
_measurement host region _field _time _value
mem A west free 2018-06-14T09:15:00 10
mem A west free 2018-06-14T09:14:50 10
Every record has the same values for the group key columns!
_measurement=mem,host=A,region=west,_field=free
Table Per Series
_measurement host region _field _time _value
mem A west free 2018-06-14T09:15:00 10
mem A west free 2018-06-14T09:14:50 11
_measurement host region _field _time _value
mem B west free 2018-06-14T09:15:00 20
mem B west free 2018-06-14T09:14:50 22
_measurement host region _field _time _value
cpu A west usage_user 2018-06-14T09:15:00 45
cpu A west usage_user 2018-06-14T09:14:50 49
_measurement host region _field _time _value
cpu A west usage_system 2018-06-14T09:15:00 35
cpu A west usage_system 2018-06-14T09:14:50 38
input tables -> function -> output tables
input tables -> function -> output tables
// example query
from(db:"telegraf")
|> range(start:2018-06-14T09:14:50, end:2018-06-14T09:15:01)
|> filter(fn: (r) => r._measurement == "mem" and
r._field == "free")
|> sum()
input tables -> function -> output tables
What to sum on?
// example query
from(db:"telegraf")
|> range(start:2018-06-14T09:14:50, end:2018-06-14T09:15:01)
|> filter(fn: (r) => r._measurement == "mem" and
r._field == "free")
|> sum()
input tables -> function -> output tables
Default columns argument
// example query
from(db:"telegraf")
|> range(start:2018-06-14T09:14:50, end:2018-06-14T09:15:01)
|> filter(fn: (r) => r._measurement == "mem" and
r._field == "free")
|> sum(columns: ["_value"])
input tables -> function -> output tables
_measurement host region _field _time _value
mem A west free 2018-06-14T09:15:00 10
mem A west free 2018-06-14T09:14:50 11

_measurement host region _field _time _value
mem B west free 2018-06-14T09:15:00 20
mem B west free 2018-06-14T09:14:50 22
Input in table form
// example query
from(db:"telegraf")
|> range(start:2018-06-14T09:14:50, end:2018-06-14T09:15:01)
|> filter(fn: (r) => r._measurement == "mem" and
r._field == "free")
|> sum()
input tables -> function -> output tables
_measurement host region _field _time _value
mem A west free 2018-06-14T09:15:00 10
mem A west free 2018-06-14T09:14:50 11

_measurement host region _field _time _value
mem B west free 2018-06-14T09:15:00 20
mem B west free 2018-06-14T09:14:50 22
sum()
// example query
from(db:"telegraf")
|> range(start:2018-06-14T09:14:50, end:2018-06-14T09:15:01)
|> filter(fn: (r) => r._measurement == "mem" and
r._field == "free")
|> sum()
input tables -> function -> output tables
// example query
from(db:"telegraf")
|> range(start:2018-06-14T09:14:50, end:2018-06-14T09:15:01)
|> filter(fn: (r) => r._measurement == "mem" and
r._field == "free")
|> sum()
_measurement host region _field _time _value
mem A west free 2018-06-14T09:15:00 10
mem A west free 2018-06-14T09:14:50 11

_measurement host region _field _time _value
mem B west free 2018-06-14T09:15:00 20
mem B west free 2018-06-14T09:14:50 22

sum()

_measurement host region _field _time _value
mem A west free 2018-06-14T09:15:00 21

_measurement host region _field _time _value
mem B west free 2018-06-14T09:15:00 42
N to N table mapping
(1 to 1 mapping)
N to M table mapping
window
// example query
from(db:"telegraf")
|> range(start:2018-06-14T09:14:30, end:2018-06-14T09:15:01)
|> filter(fn: (r) => r._measurement == "mem" and
r._field == "free")
|> window(every:20s)
30s of data (4 samples)
window
// example query
from(db:"telegraf")
|> range(start:2018-06-14T09:14:30, end:2018-06-14T09:15:01)
|> filter(fn: (r) => r._measurement == "mem" and
r._field == "free")
|> window(every:20s)
split into 20s windows
window
_meas host region _field _time _valu
mem A west free …14:30 10
mem A west free …14:40 11
mem A west free …14:50 12
mem A west free …15:00 13
_meas host region _field _time _valu
mem B west free …14:30 20
mem B west free …14:40 22
mem B west free …14:50 23
mem B west free …15:00 24
// example query
from(db:"telegraf")
|> range(start:2018-06-14T09:14:30, end:2018-06-14T09:15:01)
|> filter(fn: (r) => r._measurement == "mem" and
r._field == "free")
|> window(every:20s)
Input
window
_meas host region _field _time _valu
mem A west free …14:30 10
mem A west free …14:40 11
mem A west free …14:50 12
mem A west free …15:00 13
_meas host region _field _time _valu
mem B west free …14:30 20
mem B west free …14:40 22
mem B west free …14:50 23
mem B west free …15:00 24
window(
every:20s)
// example query
from(db:"telegraf")
|> range(start:2018-06-14T09:14:30, end:2018-06-14T09:15:01)
|> filter(fn: (r) => r._measurement == "mem" and
r._field == "free")
|> window(every:20s)
window
_meas host region _field _time _valu
mem A west free …14:30 10
mem A west free …14:40 11
mem A west free …14:50 12
mem A west free …15:00 13
_meas host region _field _time _valu
mem B west free …14:30 20
mem B west free …14:40 22
mem B west free …14:50 23
mem B west free …15:00 24
window(
every:20s)
// example query
from(db:"telegraf")
|> range(start:2018-06-14T09:14:30, end:2018-06-14T09:15:01)
|> filter(fn: (r) => r._measurement == "mem" and
r._field == "free")
|> window(every:20s)

_measurement host region _field _time _value
mem A west free …14:30 10
mem A west free …14:40 11

_measurement host region _field _time _value
mem B west free …14:50 23
mem B west free …15:00 24

_measurement host region _field _time _value
mem B west free …14:30 20
mem B west free …14:40 22

_measurement host region _field _time _value
mem A west free …14:50 12
mem A west free …15:00 13
window
_meas host region _field _time _valu
mem A west free …14:30 10
mem A west free …14:40 11
mem A west free …14:50 12
mem A west free …15:00 13
_meas host region _field _time _valu
mem B west free …14:30 20
mem B west free …14:40 22
mem B west free …14:50 23
mem B west free …15:00 24
window(
every:20s)
// example query
from(db:"telegraf")
|> range(start:2018-06-14T09:14:30, end:2018-06-14T09:15:01)
|> filter(fn: (r) => r._measurement == "mem" and
r._field == "free")
|> window(every:20s)

_measurement host region _field _time _value
mem A west free …14:30 10
mem A west free …14:40 11

_measurement host region _field _time _value
mem B west free …14:50 23
mem B west free …15:00 24

_measurement host region _field _time _value
mem B west free …14:30 20
mem B west free …14:40 22

_measurement host region _field _time _value
mem A west free …14:50 12
mem A west free …15:00 13
N to M tables
Window based on time
_start and _stop columns
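window is typically followed by an aggregate so that each 20s window collapses to a single row; a minimal sketch reusing the query above (same schema and the deck's signatures assumed):

// example query
from(db:"telegraf")
|> range(start:2018-06-14T09:14:30, end:2018-06-14T09:15:01)
|> filter(fn: (r) => r._measurement == "mem" and r._field == "free")
|> window(every:20s)
// one mean per 20s window per series; _start and _stop carry the window bounds
|> mean()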
group
// example query
from(db:"telegraf")
|> range(start:2018-06-14T09:14:30, end:2018-06-14T09:15:01)
|> filter(fn: (r) => r._measurement == "mem" and
r._field == "free")
|> group(keys:["region"])
group
// example query
from(db:"telegraf")
|> range(start:2018-06-14T09:14:30, end:2018-06-14T09:15:01)
|> filter(fn: (r) => r._measurement == "mem" and
r._field == "free")
|> group(keys:["region"])
new group key
group
_meas host region _field _time _valu
mem A west free …14:30 10
mem A west free …14:40 11
mem A west free …14:50 12
mem A west free …15:00 13
_meas host region _field _time _valu
mem B west free …14:30 20
mem B west free …14:40 22
mem B west free …14:50 23
mem B west free …15:00 24
// example query
from(db:"telegraf")
|> range(start:2018-06-14T09:14:30, end:2018-06-14T09:15:01)
|> filter(fn: (r) => r._measurement == "mem" and
r._field == "free")
|> group(keys:["region"])
group
_meas host region _field _time _valu
mem A west free …14:30 10
mem A west free …14:40 11
mem A west free …14:50 12
mem A west free …15:00 13
_meas host region _field _time _valu
mem B west free …14:30 20
mem B west free …14:40 22
mem B west free …14:50 23
mem B west free …15:00 24
group(keys: ["region"])
// example query
from(db:"telegraf")
|> range(start:2018-06-14T09:14:30, end:2018-06-14T09:15:01)
|> filter(fn: (r) => r._measurement == "mem" and
r._field == "free")
|> group(keys:["region"])

_measurement host region _field _time _value
mem A west free …14:30 10
mem B west free …14:30 20
mem A west free …14:40 11
mem B west free …14:40 22
mem A west free …14:50 12
mem B west free …14:50 23
mem A west free …15:00 13
mem B west free …15:00 24
N to M tables
M == cardinality(group keys)
Group based on columns
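As with window, group is usually followed by an aggregate so each regrouped table collapses to one row per group key value; a minimal sketch reusing the query above:

// example query
from(db:"telegraf")
|> range(start:2018-06-14T09:14:30, end:2018-06-14T09:15:01)
|> filter(fn: (r) => r._measurement == "mem" and r._field == "free")
|> group(keys:["region"])
// one total per region
|> sum()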
Flux Design Principles
Useable
Make Everyone a Data
Programmer!
Flux and InfluxDB 2.0
Readable
Flexible
Composable
Testable
Contributable
Shareable
Functions Overview
Inputs
from, fromKafka, fromFile, fromS3, fromPrometheus, fromMySQL, etc.
Flux != InfluxDB
Flux and InfluxDB 2.0
Follow Telegraf Model
import "mysql"
customers = mysql.from(connect: loadSecret(name:"mysql_prod"),
query: "select id, name from customers")
data = from(bucket: "my_data")
|> range(start: -4h)
|> filter(fn: (r) => r._measurement == "write_requests")
|> rename(columns: {customer_id: "id"})
join(tables: {customers, data}, on: ["id"])
|> yield(name: "results")
import "mysql"
customers = mysql.from(connect: loadSecret(name:"mysql_prod"),
query: "select id, name from customers")
data = from(bucket: "my_data")
|> range(start: -4h)
|> filter(fn: (r) => r._measurement == "write_requests")
|> rename(columns: {customer_id: "id"})
join(tables: {customers, data}, on: ["id"])
|> yield(name: "results")
Imports for sharing code!
import "mysql"
customers = mysql.from(connect: loadSecret(name:"mysql_prod"),
query: "select id, name from customers")
data = from(bucket: "my_data")
|> range(start: -4h)
|> filter(fn: (r) => r._measurement == "write_requests")
|> rename(columns: {customer_id: "id"})
join(tables: {customers, data}, on: ["id"])
|> yield(name: "results")
Pulling data from a non-InfluxDB source
import "mysql"
customers = mysql.from(connect: loadSecret(name:"mysql_prod"),
query: "select id, name from customers")
data = from(bucket: "my_data")
|> range(start: -4h)
|> filter(fn: (r) => r._measurement == "write_requests")
|> rename(columns: {customer_id: "id"})
join(tables: {customers, data}, on: ["id"])
|> yield(name: "results")
Raw query (for now)
import "mysql"
customers = mysql.from(connect: loadSecret(name:"mysql_prod"),
query: "select id, name from customers")
data = from(bucket: "my_data")
|> range(start: -4h)
|> filter(fn: (r) => r._measurement == "write_requests")
|> rename(columns: {customer_id: "id"})
join(tables: {customers, data}, on: ["id"])
|> yield(name: "results")
Loading Secret
import "mysql"
customers = mysql.from(connect: loadSecret(name:"mysql_prod"),
query: "select id, name from customers")
data = from(bucket: "my_data")
|> range(start: -4h)
|> filter(fn: (r) => r._measurement == "write_requests")
|> rename(columns: {customer_id: "id"})
join(tables: {customers, data}, on: ["id"])
|> yield(name: "results")
Renaming & Shaping Data
import "mysql"
customers = mysql.from(connect: loadSecret(name:"mysql_prod"),
query: "select id, name from customers")
data = from(bucket: "my_data")
|> range(start: -4h)
|> filter(fn: (r) => r._measurement == "write_requests")
|> rename(columns: {customer_id: "id"})
join(tables: {customers, data}, on: ["id"])
|> yield(name: "results")
Join on any column
Outputs
to, toKafka, toFile, toS3, toPrometheus, toMySQL, etc.
Outputs are for Tasks
option task = {
name: "Alert on disk",
every: 5m,
}
crit = 90 // alert at this percentage
warn = 80 // warn at this percentage
data = from(bucket: "telegraf/autogen")
|> filter(fn: (r) => r._measurement == "disk" and r._field == "used_percent")
|> last()
data |> filter(fn: (r) => r._value > crit)
|> addColumn(key: "level", value: "critical")
|> addColumn(key: "alert", value: task.name)
|> to(bucket: "alerts")
data |> filter(fn: (r) => r._value > warn && r._value < crit)
|> addColumn(key: "level", value: "warn")
|> to(bucket: "alerts")
option task = {
name: "Alert on disk",
every: 5m,
}
crit = 90 // alert at this percentage
warn = 80 // warn at this percentage
data = from(bucket: "telegraf/autogen")
|> filter(fn: (r) => r._measurement == "disk" and r._field == "used_percent")
|> last()
data |> filter(fn: (r) => r._value > crit)
|> addColumn(key: "level", value: "critical")
|> addColumn(key: "alert", value: task.name)
|> to(bucket: "alerts")
data |> filter(fn: (r) => r._value > warn && r._value < crit)
|> addColumn(key: "level", value: "warn")
|> to(bucket: "alerts")
Option syntax for tasks
option task = {
name: "Alert on disk",
every: 5m,
}
crit = 90 // alert at this percentage
warn = 80 // warn at this percentage
data = from(bucket: "telegraf/autogen")
|> filter(fn: (r) => r._measurement == "disk" and r._field == "used_percent")
|> last()
data |> filter(fn: (r) => r._value > crit)
|> addColumn(key: "level", value: "critical")
|> addColumn(key: "alert", value: task.name)
|> to(bucket: "alerts")
data |> filter(fn: (r) => r._value > warn && r._value < crit)
|> addColumn(key: "level", value: "warn")
|> to(bucket: "alerts")
Get at the last value without specifying time range
option task = {
name: "Alert on disk",
every: 5m,
}
crit = 90 // alert at this percentage
warn = 80 // warn at this percentage
data = from(bucket: "telegraf/autogen")
|> filter(fn: (r) => r._measurement == "disk" and r._field == "used_percent")
|> last()
data |> filter(fn: (r) => r._value > crit)
|> addColumn(key: "level", value: “critical")
|> addColumn(key: "alert", value: task.name)
|> to(bucket: "alerts")
data |> filter(fn: (r) => r._value > warn && r._value < crit)
|> addColumn(key: "level", value: "warn")
|> to(bucket: "alerts")
Adding a column to decorate the data
option task = {
name: "Alert on disk",
every: 5m,
}
crit = 90 // alert at this percentage
warn = 80 // warn at this percentage
data = from(bucket: "telegraf/autogen")
|> filter(fn: (r) => r._measurement == "disk" and r._field == "used_percent")
|> last()
data |> filter(fn: (r) => r._value > crit)
|> addColumn(key: "level", value: "critical")
|> addColumn(key: "alert", value: task.name)
|> to(bucket: "alerts")
data |> filter(fn: (r) => r._value > warn && r._value < crit)
|> addColumn(key: "level", value: "warn")
|> to(bucket: "alerts") To writes to the local InfluxDB
Separate Alerts From
Notifications!
option task = {name: "slack critical alerts", every: 1m}
import "slack"
lastNotificationTime = from(bucket: "notifications")
|> filter(fn: (r) => r.level == "critical" and r._field == "alert_time")
|> group(none:true)
|> last()
|> recordValue(column:"_value")
from(bucket: "alerts")
|> range(start: lastNotificationTime)
|> filter(fn: (r) => r.level == "critical")
// shape the alert data to what we care about in notifications
|> renameColumn(from: "_time", to: "alert_time")
|> renameColumn(from: "_value", to: "used_percent")
// set the time the notification is being sent
|> addColumn(key: "_time", value: now())
// get rid of unneeded columns
|> drop(columns: ["_start", "_stop"])
// write the message
|> map(fn: (r) => r._value = "{r.host} disk usage is at {r.used_percent}%")
|> slack.to(config: loadSecret(name: "slack_alert_config"), message: "_value")
|> to(bucket: "notifications")
option task = {name: "slack critical alerts", every: 1m}
import "slack"
lastNotificationTime = from(bucket: "notifications")
|> filter(fn: (r) => r.level == "critical" and r._field == "alert_time")
|> group(none:true)
|> last()
|> recordValue(column:"_value")
from(bucket: "alerts")
|> range(start: lastNotificationTime)
|> filter(fn: (r) => r.level == "critical")
// shape the alert data to what we care about in notifications
|> renameColumn(from: "_time", to: "alert_time")
|> renameColumn(from: "_value", to: "used_percent")
// set the time the notification is being sent
|> addColumn(key: "_time", value: now())
// get rid of unneeded columns
|> drop(columns: ["_start", "_stop"])
// write the message
|> map(fn: (r) => r._value = "{r.host} disk usage is at {r.used_percent}%")
|> slack.to(config: loadSecret(name: "slack_alert"))
|> to(bucket: "notifications")
We have state so we don’t resend
option task = {name: "slack critical alerts", every: 1m}
import "slack"
lastNotificationTime = from(bucket: "notifications")
|> filter(fn: (r) => r.level == "critical" and r._field == "alert_time")
|> group(none:true)
|> last()
|> recordValue(column:"_value")
from(bucket: "alerts")
|> range(start: lastNotificationTime)
|> filter(fn: (r) => r.level == "critical")
// shape the alert data to what we care about in notifications
|> renameColumn(from: "_time", to: "alert_time")
|> renameColumn(from: "_value", to: "used_percent")
// set the time the notification is being sent
|> addColumn(key: "_time", value: now())
// get rid of unneeded columns
|> drop(columns: ["_start", "_stop"])
// write the message
|> map(fn: (r) => r._value = "{r.host} disk usage is at {r.used_percent}%")
|> slack.to(config: loadSecret(name: "slack_alert"))
|> to(bucket: "notifications")
Use last time as argument to range
option task = {name: "slack critical alerts", every: 1m}
import "slack"
lastNotificationTime = from(bucket: "notifications")
|> filter(fn: (r) => r.level == "critical" and r._field == "alert_time")
|> group(none:true)
|> last()
|> recordValue(column:"_value")
from(bucket: "alerts")
|> range(start: lastNotificationTime)
|> filter(fn: (r) => r.level == "critical")
// shape the alert data to what we care about in notifications
|> renameColumn(from: "_time", to: "alert_time")
|> renameColumn(from: "_value", to: "used_percent")
// set the time the notification is being sent
|> addColumn(key: "_time", value: now())
// get rid of unneeded columns
|> drop(columns: ["_start", "_stop"])
// write the message
|> map(fn: (r) => r._value = "{r.host} disk usage is at {r.used_percent}%")
|> slack.to(config: loadSecret(name: "slack_alert"))
|> to(bucket: "notifications")
Now function for current time
option task = {name: "slack critical alerts", every: 1m}
import "slack"
lastNotificationTime = from(bucket: "notifications")
|> filter(fn: (r) => r.level == "critical" and r._field == "alert_time")
|> group(none:true)
|> last()
|> recordValue(column:"_value")
from(bucket: "alerts")
|> range(start: lastNotificationTime)
|> filter(fn: (r) => r.level == "critical")
// shape the alert data to what we care about in notifications
|> renameColumn(from: "_time", to: "alert_time")
|> renameColumn(from: "_value", to: "used_percent")
// set the time the notification is being sent
|> addColumn(key: "_time", value: now())
// get rid of unneeded columns
|> drop(columns: ["_start", "_stop"])
// write the message
|> map(fn: (r) => r._value = "{r.host} disk usage is at {r.used_percent}%")
|> slack.to(config: loadSecret(name: "slack_alert"))
|> to(bucket: "notifications")
Map function to iterate
over values
option task = {name: "slack critical alerts", every: 1m}
import "slack"
lastNotificationTime = from(bucket: "notifications")
|> filter(fn: (r) => r.level == "critical" and r._field == "alert_time")
|> group(none:true)
|> last()
|> recordValue(column:"_value")
from(bucket: "alerts")
|> range(start: lastNotificationTime)
|> filter(fn: (r) => r.level == "critical")
// shape the alert data to what we care about in notifications
|> renameColumn(from: "_time", to: "alert_time")
|> renameColumn(from: "_value", to: "used_percent")
// set the time the notification is being sent
|> addColumn(key: "_time", value: now())
// get rid of unneeded columns
|> drop(columns: ["_start", "_stop"])
// write the message
|> map(fn: (r) => r._value = "{r.host} disk usage is at {r.used_percent}%")
|> slack.to(config: loadSecret(name: "slack_alert"))
|> to(bucket: "notifications")
String interpolation
option task = {name: "slack critical alerts", every: 1m}
import "slack"
lastNotificationTime = from(bucket: "notifications")
|> filter(fn: (r) => r.level == "critical" and r._field == "alert_time")
|> group(none:true)
|> last()
|> recordValue(column:"_value")
from(bucket: "alerts")
|> range(start: lastNotificationTime)
|> filter(fn: (r) => r.level == "critical")
// shape the alert data to what we care about in notifications
|> renameColumn(from: "_time", to: "alert_time")
|> renameColumn(from: "_value", to: "used_percent")
// set the time the notification is being sent
|> addColumn(key: "_time", value: now())
// get rid of unneeded columns
|> drop(columns: ["_start", "_stop"])
// write the message
|> map(fn: (r) => r._value = "{r.host} disk usage is at {r.used_percent}%")
|> slack.to(config: loadSecret(name: "slack_alert"))
|> to(bucket: "notifications")
Send to Slack and
record in InfluxDB
option task = {
name: "email alert digest",
cron: "0 5 * * 0"
}
import "smtp"
body = ""
from(bucket: "alerts")
|> range(start: -24h)
|> filter(fn: (r) => (r.level == "warn" or r.level == "critical") and r._field == "message")
|> group(by: ["alert"])
|> count()
|> group(none: true)
|> map(fn: (r) => body = body + "Alert {r.alert} triggered {r._value} times\n")
smtp.to(
config: loadSecret(name: "smtp_digest"),
to: "alerts@influxdata.com",
title: "Alert digest for {now()}",
body: body)
option task = {
name: "email alert digest",
cron: "0 5 * * 0"
}
import "smtp"
body = ""
from(bucket: "alerts")
|> range(start: -24h)
|> filter(fn: (r) => (r.level == "warn" or r.level == "critical") and r._field == "message")
|> group(by: ["alert"])
|> count()
|> group(none: true)
|> map(fn: (r) => body = body + "Alert {r.alert} triggered {r._value} times\n")
smtp.to(
config: loadSecret(name: "smtp_digest"),
to: "alerts@influxdata.com",
title: "Alert digest for {now()}",
body: body)
Cron syntax
option task = {
name: "email alert digest",
cron: "0 5 * * 0"
}
import "smtp"
body = ""
from(bucket: "alerts")
|> range(start: -24h)
|> filter(fn: (r) => (r.level == "warn" or r.level == "critical") and r._field == "message")
|> group(by: ["alert"])
|> count()
|> group(none: true)
|> map(fn: (r) => body = body + "Alert {r.alert} triggered {r._value} times\n")
smtp.to(
config: loadSecret(name: "smtp_digest"),
to: "alerts@influxdata.com",
title: "Alert digest for {now()}",
body: body)
Closures
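The map call above is a closure: it captures and updates the body variable defined outside the function. A minimal standalone sketch of the same idea (names made up):

threshold = 90
// the anonymous function closes over threshold from the enclosing scope
isCritical = (r) => r._value > threshold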
Task run logs
(just another time series)
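Because run logs are just another time series, they can be queried with ordinary Flux; a sketch, assuming a hypothetical bucket and tag name for where the logs land:

from(bucket: "task_logs") // hypothetical bucket name
|> range(start: -1d)
|> filter(fn: (r) => r.task_name == "Alert on disk") // hypothetical tag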
UI will hide complexity
Built on top of primitives
API for Defining Dashboards
Bulk Import & Export
Specify bucket, range, predicate
Same API in OSS, Cloud, and
Enterprise
CLI & UI
2.0
Thank you.
Paul Dix

@pauldix

paul@influxdata.com

More Related Content

PDF
Airflow presentation
PDF
Stream Processing made simple with Kafka
PPTX
Prometheus in Practice: High Availability with Thanos (DevOpsDays Edinburgh 2...
PDF
Intro to InfluxDB
PPTX
Infrastructure-as-Code (IaC) Using Terraform (Intermediate Edition)
PDF
What is the State of my Kafka Streams Application? Unleashing Metrics. | Neil...
PDF
Flux and InfluxDB 2.0 by Paul Dix
PPTX
Introduction to Kafka and Zookeeper
Airflow presentation
Stream Processing made simple with Kafka
Prometheus in Practice: High Availability with Thanos (DevOpsDays Edinburgh 2...
Intro to InfluxDB
Infrastructure-as-Code (IaC) Using Terraform (Intermediate Edition)
What is the State of my Kafka Streams Application? Unleashing Metrics. | Neil...
Flux and InfluxDB 2.0 by Paul Dix
Introduction to Kafka and Zookeeper

What's hot (20)

PDF
PDF
Introduction to DataFusion An Embeddable Query Engine Written in Rust
PDF
Why My Streaming Job is Slow - Profiling and Optimizing Kafka Streams Apps (L...
PDF
Apache Airflow
PDF
Getting Data In and Out of Flink - Understanding Flink and Its Connector Ecos...
PPTX
Exactly-Once Financial Data Processing at Scale with Flink and Pinot
PPTX
Autoscaling on Kubernetes
PDF
Secrets of Performance Tuning Java on Kubernetes
PDF
Apache Airflow Architecture
PDF
VictoriaLogs: Open Source Log Management System - Preview
PPTX
Terraform training 🎒 - Basic
PPTX
Discovering the 2 in Alfresco Search Services 2.0
PDF
초보자를 위한 분산 캐시 이야기
PDF
Container Performance Analysis
PDF
Introduction To Flink
PPT
HBASE Overview
PDF
Understanding InfluxDB Basics: Tags, Fields and Measurements
PDF
CDC patterns in Apache Kafka®
PDF
Mind the App: How to Monitor Your Kafka Streams Applications | Bruno Cadonna,...
PPTX
Kafka Tutorial - Introduction to Apache Kafka (Part 1)
Introduction to DataFusion An Embeddable Query Engine Written in Rust
Why My Streaming Job is Slow - Profiling and Optimizing Kafka Streams Apps (L...
Apache Airflow
Getting Data In and Out of Flink - Understanding Flink and Its Connector Ecos...
Exactly-Once Financial Data Processing at Scale with Flink and Pinot
Autoscaling on Kubernetes
Secrets of Performance Tuning Java on Kubernetes
Apache Airflow Architecture
VictoriaLogs: Open Source Log Management System - Preview
Terraform training 🎒 - Basic
Discovering the 2 in Alfresco Search Services 2.0
초보자를 위한 분산 캐시 이야기
Container Performance Analysis
Introduction To Flink
HBASE Overview
Understanding InfluxDB Basics: Tags, Fields and Measurements
CDC patterns in Apache Kafka®
Mind the App: How to Monitor Your Kafka Streams Applications | Bruno Cadonna,...
Kafka Tutorial - Introduction to Apache Kafka (Part 1)
Ad

Similar to Flux and InfluxDB 2.0 (20)

PDF
Optimizing the Grafana Platform for Flux
PPTX
Using Grafana with InfluxDB 2.0 and Flux Lang by Jacob Lisi
PPTX
9:40 am InfluxDB 2.0 and Flux – The Road Ahead Paul Dix, Founder and CTO | ...
PDF
InfluxData Platform Future and Vision
PDF
Router Queue Simulation in C++ in MMNN and MM1 conditions
PDF
Optimizing InfluxDB Performance in the Real World by Dean Sheehan, Senior Dir...
PDF
Monitoring InfluxEnterprise
PDF
Towards an Integration of the Actor Model in an FRP Language for Small-Scale ...
PPT
3900458 LTE Accessibilty case study.ppt
PPS
Ns2 introduction 2
PPTX
Kapacitor - Real Time Data Processing Engine
PPTX
First Flink Bay Area meetup
PDF
Virtual training Intro to Kapacitor
PDF
Writing a TSDB from scratch_ performance optimizations.pdf
PDF
計算機性能の限界点とその考え方
ODP
Trash Robotic Router Platform - David Melendez - Codemotion Rome 2015
PDF
ClojureScript loves React, DomCode May 26 2015
PDF
Using eBPF Off-CPU Sampling to See What Your DBs are Really Waiting For by Ta...
PDF
Unleashing your Kafka Streams Application Metrics!
PDF
Analyzing ECP Proxy Apps with the Profiling Tool Score-P
Optimizing the Grafana Platform for Flux
Using Grafana with InfluxDB 2.0 and Flux Lang by Jacob Lisi
9:40 am InfluxDB 2.0 and Flux – The Road Ahead Paul Dix, Founder and CTO | ...
InfluxData Platform Future and Vision
Router Queue Simulation in C++ in MMNN and MM1 conditions
Optimizing InfluxDB Performance in the Real World by Dean Sheehan, Senior Dir...
Monitoring InfluxEnterprise
Towards an Integration of the Actor Model in an FRP Language for Small-Scale ...
3900458 LTE Accessibilty case study.ppt
Ns2 introduction 2
Kapacitor - Real Time Data Processing Engine
First Flink Bay Area meetup
Virtual training Intro to Kapacitor
Writing a TSDB from scratch_ performance optimizations.pdf
計算機性能の限界点とその考え方
Trash Robotic Router Platform - David Melendez - Codemotion Rome 2015
ClojureScript loves React, DomCode May 26 2015
Using eBPF Off-CPU Sampling to See What Your DBs are Really Waiting For by Ta...
Unleashing your Kafka Streams Application Metrics!
Analyzing ECP Proxy Apps with the Profiling Tool Score-P
Ad

More from InfluxData (20)

PPTX
Announcing InfluxDB Clustered
PDF
Best Practices for Leveraging the Apache Arrow Ecosystem
PDF
How Bevi Uses InfluxDB and Grafana to Improve Predictive Maintenance and Redu...
PDF
Power Your Predictive Analytics with InfluxDB
PDF
How Teréga Replaces Legacy Data Historians with InfluxDB, AWS and IO-Base
PDF
Build an Edge-to-Cloud Solution with the MING Stack
PDF
Meet the Founders: An Open Discussion About Rewriting Using Rust
PDF
Introducing InfluxDB Cloud Dedicated
PDF
Gain Better Observability with OpenTelemetry and InfluxDB
PPTX
How a Heat Treating Plant Ensures Tight Process Control and Exceptional Quali...
PDF
How Delft University's Engineering Students Make Their EV Formula-Style Race ...
PPTX
Introducing InfluxDB’s New Time Series Database Storage Engine
PDF
Start Automating InfluxDB Deployments at the Edge with balena
PDF
Understanding InfluxDB’s New Storage Engine
PDF
Streamline and Scale Out Data Pipelines with Kubernetes, Telegraf, and InfluxDB
PPTX
Ward Bowman [PTC] | ThingWorx Long-Term Data Storage with InfluxDB | InfluxDa...
PDF
Scott Anderson [InfluxData] | New & Upcoming Flux Features | InfluxDays 2022
PDF
Steinkamp, Clifford [InfluxData] | Closing Thoughts | InfluxDays 2022
PDF
Steinkamp, Clifford [InfluxData] | Welcome to InfluxDays 2022 - Day 2 | Influ...
PDF
Steinkamp, Clifford [InfluxData] | Closing Thoughts Day 1 | InfluxDays 2022
Announcing InfluxDB Clustered
Best Practices for Leveraging the Apache Arrow Ecosystem
How Bevi Uses InfluxDB and Grafana to Improve Predictive Maintenance and Redu...
Power Your Predictive Analytics with InfluxDB
How Teréga Replaces Legacy Data Historians with InfluxDB, AWS and IO-Base
Build an Edge-to-Cloud Solution with the MING Stack
Meet the Founders: An Open Discussion About Rewriting Using Rust
Introducing InfluxDB Cloud Dedicated
Gain Better Observability with OpenTelemetry and InfluxDB
How a Heat Treating Plant Ensures Tight Process Control and Exceptional Quali...
How Delft University's Engineering Students Make Their EV Formula-Style Race ...
Introducing InfluxDB’s New Time Series Database Storage Engine
Start Automating InfluxDB Deployments at the Edge with balena
Understanding InfluxDB’s New Storage Engine
Streamline and Scale Out Data Pipelines with Kubernetes, Telegraf, and InfluxDB
Ward Bowman [PTC] | ThingWorx Long-Term Data Storage with InfluxDB | InfluxDa...
Scott Anderson [InfluxData] | New & Upcoming Flux Features | InfluxDays 2022
Steinkamp, Clifford [InfluxData] | Closing Thoughts | InfluxDays 2022
Steinkamp, Clifford [InfluxData] | Welcome to InfluxDays 2022 - Day 2 | Influ...
Steinkamp, Clifford [InfluxData] | Closing Thoughts Day 1 | InfluxDays 2022

Recently uploaded (20)

PDF
PRIZ Academy - 9 Windows Thinking Where to Invest Today to Win Tomorrow.pdf
PPTX
Strings in CPP - Strings in C++ are sequences of characters used to store and...
PDF
Mitigating Risks through Effective Management for Enhancing Organizational Pe...
PPTX
Internet of Things (IOT) - A guide to understanding
DOCX
573137875-Attendance-Management-System-original
PDF
Structs to JSON How Go Powers REST APIs.pdf
PPTX
CYBER-CRIMES AND SECURITY A guide to understanding
PPT
Mechanical Engineering MATERIALS Selection
PDF
Operating System & Kernel Study Guide-1 - converted.pdf
PPTX
MET 305 2019 SCHEME MODULE 2 COMPLETE.pptx
PPTX
CH1 Production IntroductoryConcepts.pptx
PDF
BMEC211 - INTRODUCTION TO MECHATRONICS-1.pdf
PPT
Project quality management in manufacturing
PDF
The CXO Playbook 2025 – Future-Ready Strategies for C-Suite Leaders Cerebrai...
PPTX
Engineering Ethics, Safety and Environment [Autosaved] (1).pptx
PPTX
OOP with Java - Java Introduction (Basics)
PDF
PPT on Performance Review to get promotions
PPTX
UNIT-1 - COAL BASED THERMAL POWER PLANTS
PPTX
MCN 401 KTU-2019-PPE KITS-MODULE 2.pptx
PDF
keyrequirementskkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk
PRIZ Academy - 9 Windows Thinking Where to Invest Today to Win Tomorrow.pdf
Strings in CPP - Strings in C++ are sequences of characters used to store and...
Mitigating Risks through Effective Management for Enhancing Organizational Pe...
Internet of Things (IOT) - A guide to understanding
573137875-Attendance-Management-System-original
Structs to JSON How Go Powers REST APIs.pdf
CYBER-CRIMES AND SECURITY A guide to understanding
Mechanical Engineering MATERIALS Selection
Operating System & Kernel Study Guide-1 - converted.pdf
MET 305 2019 SCHEME MODULE 2 COMPLETE.pptx
CH1 Production IntroductoryConcepts.pptx
BMEC211 - INTRODUCTION TO MECHATRONICS-1.pdf
Project quality management in manufacturing
The CXO Playbook 2025 – Future-Ready Strategies for C-Suite Leaders Cerebrai...
Engineering Ethics, Safety and Environment [Autosaved] (1).pptx
OOP with Java - Java Introduction (Basics)
PPT on Performance Review to get promotions
UNIT-1 - COAL BASED THERMAL POWER PLANTS
MCN 401 KTU-2019-PPE KITS-MODULE 2.pptx
keyrequirementskkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk

Flux and InfluxDB 2.0

  • 1. Flux and InfluxDB 2.0 Paul Dix @pauldix paul@influxdata.com
  • 3. • Data-scripting language • Functional • MIT Licensed • Language & Runtime/Engine
  • 7. 2.0
  • 11. • MIT Licensed • Multi-tenanted • Telegraf, InfluxDB, Chronograf, Kapacitor rolled into 1 • OSS single server • Cloud usage based pricing • Dedicated Cloud • Enterprise on-premise
  • 12. • MIT Licensed • Multi-tenanted • Telegraf, InfluxDB, Chronograf, Kapacitor rolled into 1 • OSS single server • Cloud usage based pricing • Dedicated Cloud • Enterprise on-premise
  • 14. Long Live InfluxDB 2.0 (and Telegraf)
  • 15. Consistent Documented API Collection, Write/Query, Streaming & Batch Processing, Dashboards
  • 17. Officially Supported Client Libraries Go, Node.js, Ruby, Python, PHP, Java, C#, C, Kotlin
  • 20. Ways to run Flux - (interpreter, InfluxDB 1.7 & 2.0)
  • 24. // get all data from the telegraf db from(bucket:”telegraf/autogen”) // filter that by the last hour |> range(start:-1h) // filter further by series with a specific measurement and field |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
  • 25. // get all data from the telegraf db from(bucket:”telegraf/autogen”) // filter that by the last hour |> range(start:-1h) // filter further by series with a specific measurement and field |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system") Comments
  • 26. // get all data from the telegraf db from(bucket:”telegraf/autogen”) // filter that by the last hour |> range(start:-1h) // filter further by series with a specific measurement and field |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system") Named Arguments
  • 27. // get all data from the telegraf db from(bucket:”telegraf/autogen”) // filter that by the last hour |> range(start:-1h) // filter further by series with a specific measurement and field |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system") String Literals
  • 28. // get all data from the telegraf db from(bucket:”telegraf/autogen”) // filter that by the last hour |> range(start:-1h) // filter further by series with a specific measurement and field |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system") Buckets, not DBs
  • 29. // get all data from the telegraf db from(bucket:”telegraf/autogen”) // filter that by the last hour |> range(start:-1h) // filter further by series with a specific measurement and field |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system") Duration Literal
  • 30. // get all data from the telegraf db from(bucket:”telegraf/autogen”) // filter that by the last hour |> range(start:2018-11-07T00:00:00Z) // filter further by series with a specific measurement and field |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system") Time Literal
  • 31. // get all data from the telegraf db from(bucket:”telegraf/autogen”) // filter that by the last hour |> range(start:-1h) // filter further by series with a specific measurement and field |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system") Pipe forward operator
  • 32. // get all data from the telegraf db from(bucket:”telegraf/autogen”) // filter that by the last hour |> range(start:-1h) // filter further by series with a specific measurement and field |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system") Anonymous Function
  • 33. // get all data from the telegraf db from(bucket:”telegraf/autogen”) // filter that by the last hour |> range(start:-1h) // filter further by series with a specific measurement and field |> filter(fn: (r) => (r._measurement == "cpu" or r._measurement == “cpu") and r.host == “serverA") Predicate Function
  • 35. // variables some_int = 23 some_float = 23.2
  • 36. // variables some_int = 23 some_float = 23.2 some_string = “cpu"
  • 37. // variables some_int = 23 some_float = 23.2 some_string = “cpu" some_duration = 1h
  • 38. // variables some_int = 23 some_float = 23.2 some_string = “cpu" some_duration = 1h some_time = 2018-10-10T19:00:00
  • 39. // variables some_int = 23 some_float = 23.2 some_string = “cpu" some_duration = 1h some_time = 2018-10-10T19:00:00 some_array = [1, 6, 20, 22]
  • 40. // variables some_int = 23 some_float = 23.2 some_string = “cpu" some_duration = 1h some_time = 2018-10-10T19:00:00 some_array = [1, 6, 20, 22] some_object = {foo: "hello" bar: 22}
  • 41. Data Model & Working with Tables
  • 45. Table _measurement host region _field _time _value mem A west free 2018-06-14T09:15:00 10 mem A west free 2018-06-14T09:14:50 10
  • 46. _measurement host region _field _time _value mem A west free 2018-06-14T09:15:00 10 mem A west free 2018-06-14T09:14:50 10 Column
  • 47. _measurement host region _field _time _value mem A west free 2018-06-14T09:15:00 10 mem A west free 2018-06-14T09:14:50 10 Record
  • 48. _measurement host region _field _time _value mem A west free 2018-06-14T09:15:00 10 mem A west free 2018-06-14T09:14:50 10 Group Key _measurement=mem,host=A,region=west,_field=free
  • 49. _measurement host region _field _time _value mem A west free 2018-06-14T09:15:00 10 mem A west free 2018-06-14T09:14:50 10 Every record has the same value! _measurement=mem,host=A,region=west,_field=free
  • 50. Table Per Series _measurement host region _field _time _value mem A west free 2018-06-14T09:15:00 10 mem A west free 2018-06-14T09:14:50 11 _measurement host region _field _time _value mem B west free 2018-06-14T09:15:00 20 mem B west free 2018-06-14T09:14:50 22 _measurement host region _field _time _value cpu A west usage_user 2018-06-14T09:15:00 45 cpu A west usage_user 2018-06-14T09:14:50 49 _measurement host region _field _time _value cpu A west usage_system 2018-06-14T09:15:00 35 cpu A west usage_system 2018-06-14T09:14:50 38
  • 51. input tables -> function -> output tables
  • 52. input tables -> function -> output tables // example query from(db:"telegraf") |> range(start:2018-06-14T09:14:50, start:2018-06-14T09:15:01) |> filter(fn: r => r._measurement == “mem" and r._field == “free”) |> sum()
  • 53. input tables -> function -> output tables What to sum on? // example query from(db:"telegraf") |> range(start:2018-06-14T09:14:50, start:2018-06-14T09:15:01) |> filter(fn: r => r._measurement == “mem" and r._field == “free”) |> sum()
  • 54. input tables -> function -> output tables Default columns argument // example query from(db:"telegraf") |> range(start:2018-06-14T09:14:50, start:2018-06-14T09:15:01) |> filter(fn: r => r._measurement == “mem" and r._field == “free”) |> sum(columns: [“_value”])
  • 55. input tables -> function -> output tables _meas ureme host region _field _time _valu e mem A west free 2018-06- 14T09:1 10 mem A west free 2018-06- 14T09:1 11 _meas ureme host region _field _time _valu emem B west free 2018-06- 14T09:15 20 mem B west free 2018-06- 14T09:14 22 Input in table form // example query from(db:"telegraf") |> range(start:2018-06-14T09:14:50, start:2018-06-14T09:15:01) |> filter(fn: r => r._measurement == “mem" and r._field == “free”) |> sum()
  • 56. input tables -> function -> output tables _meas ureme host region _field _time _valu e mem A west free 2018-06- 14T09:1 10 mem A west free 2018-06- 14T09:1 11 _meas ureme host region _field _time _valu emem B west free 2018-06- 14T09:15 20 mem B west free 2018-06- 14T09:14 22 sum() // example query from(db:"telegraf") |> range(start:2018-06-14T09:14:50, start:2018-06-14T09:15:01) |> filter(fn: r => r._measurement == “mem" and r._field == “free”) |> sum()
  • 57. input tables -> function -> output tables // example query from(db:"telegraf") |> range(start:2018-06-14T09:14:50, start:2018-06-14T09:15:01) |> filter(fn: r => r._measurement == “mem" and r._field == “free”) |> sum() _meas ureme host region _field _time _valu e mem A west free 2018-06- 14T09:1 10 mem A west free 2018-06- 14T09:1 11 _meas ureme host region _field _time _valu emem B west free 2018-06- 14T09:15 20 mem B west free 2018-06- 14T09:14 22 sum() _meas ureme host region _field _time _valu e mem A west free 2018-06- 14T09:1 21 _meas ureme host region _field _time _valu e mem B west free 2018-06- 14T09:15 42
  • 58. N to N table mapping (1 to 1 mapping)
  • 59. N to M table mapping
  • 60. window // example query from(db:"telegraf") |> range(start:2018-06-14T09:14:30, end:2018-06-14T09:15:01) |> filter(fn: r => r._measurement == “mem" and r._field == “free”) |> window(every:20s) 30s of data (4 samples)
  • 61. window // example query from(db:"telegraf") |> range(start:2018-06-14T09:14:30, end:2018-06-14T09:15:01) |> filter(fn: r => r._measurement == “mem" and r._field == “free”) |> window(every:20s) split into 20s windows
  • 62. window _meas host region _field _time _valu mem A west free …14:30 10 mem A west free …14:40 11 mem A west free …14:50 12 mem A west free …15:00 13 _meas host region _field _time _valu mem B west free …14:30 20 mem B west free …14:40 22 mem B west free …14:50 23 mem B west free …15:00 24 // example query from(db:"telegraf") |> range(start:2018-06-14T09:14:30, end:2018-06-14T09:15:01) |> filter(fn: r => r._measurement == “mem" and r._field == “free”) |> window(every:20s) Input
  • 63. window _meas host region _field _time _valu mem A west free …14:30 10 mem A west free …14:40 11 mem A west free …14:50 12 mem A west free …15:00 13 _meas host region _field _time _valu mem B west free …14:30 20 mem B west free …14:40 22 mem B west free …14:50 23 mem B west free …15:00 24 window( every:20s) // example query from(db:"telegraf") |> range(start:2018-06-14T09:14:30, end:2018-06-14T09:15:01) |> filter(fn: r => r._measurement == “mem" and r._field == “free”) |> window(every:20s)
  • 64. window _meas host region _field _time _valu mem A west free …14:30 10 mem A west free …14:40 11 mem A west free …14:50 12 mem A west free …15:00 13 _meas host region _field _time _valu mem B west free …14:30 20 mem B west free …14:40 22 mem B west free …14:50 23 mem B west free …15:00 24 window( every:20s) // example query from(db:"telegraf") |> range(start:2018-06-14T09:14:30, end:2018-06-14T09:15:01) |> filter(fn: r => r._measurement == “mem" and r._field == “free”) |> window(every:20s) _meas ureme host region _field _time _valu emem A west free …14:30 10 mem A west free …14:40 11 _meas ureme host region _field _time _valu emem B west free …14:50 23 mem B west free …15:00 24 _meas ureme host region _field _time _valu emem B west free …14:30 20 mem B west free …14:40 22 _meas ureme host region _field _time _valu emem A west free …14:50 12 mem A west free …15:00 13
  • 65. window _meas host region _field _time _valu mem A west free …14:30 10 mem A west free …14:40 11 mem A west free …14:50 12 mem A west free …15:00 13 _meas host region _field _time _valu mem B west free …14:30 20 mem B west free …14:40 22 mem B west free …14:50 23 mem B west free …15:00 24 window( every:20s) // example query from(db:"telegraf") |> range(start:2018-06-14T09:14:30, end:2018-06-14T09:15:01) |> filter(fn: r => r._measurement == “mem" and r._field == “free”) |> window(every:20s) _meas ureme host region _field _time _valu emem A west free …14:30 10 mem A west free …14:40 11 _meas ureme host region _field _time _valu emem B west free …14:50 23 mem B west free …15:00 24 _meas ureme host region _field _time _valu emem B west free …14:30 20 mem B west free …14:40 22 _meas ureme host region _field _time _valu emem A west free …14:50 12 mem A west free …15:00 13 N to M tables
  • 66. Window based on time _start and _stop columns
  • 67. group // example query from(db:"telegraf") |> range(start:2018-06-14T09:14:30, end:2018-06-14T09:15:01) |> filter(fn: r => r._measurement == “mem" and r._field == “free”) |> group(keys:[“region"])
  • 68. group // example query from(db:"telegraf") |> range(start:2018-06-14T09:14:30, end:2018-06-14T09:15:01) |> filter(fn: r => r._measurement == “mem" and r._field == “free”) |> group(keys:[“region"]) new group key
  • 69. group _meas host region _field _time _valu mem A west free …14:30 10 mem A west free …14:40 11 mem A west free …14:50 12 mem A west free …15:00 13 _meas host region _field _time _valu mem B west free …14:30 20 mem B west free …14:40 22 mem B west free …14:50 23 mem B west free …15:00 24 // example query from(db:"telegraf") |> range(start:2018-06-14T09:14:30, end:2018-06-14T09:15:01) |> filter(fn: r => r._measurement == “mem" and r._field == “free”) |> group(keys:[“region"])
  • 70. group _meas host region _field _time _valu mem A west free …14:30 10 mem A west free …14:40 11 mem A west free …14:50 12 mem A west free …15:00 13 _meas host region _field _time _valu mem B west free …14:30 20 mem B west free …14:40 22 mem B west free …14:50 23 mem B west free …15:00 24 group( keys: [“region”]) // example query from(db:"telegraf") |> range(start:2018-06-14T09:14:30, end:2018-06-14T09:15:01) |> filter(fn: r => r._measurement == “mem" and r._field == “free”) |> group(keys:[“region"]) _meas ureme host region _field _time _valu emem A west free …14:30 10 mem B west free …14:30 20 mem A west free …14:40 11 mem B west free …14:40 21 mem A west free …14:50 12 mem B west free …14:50 22 mem B west free …15:00 13 mem B west free …15:00 23 N to M tables M == cardinality(group keys)
  • 71. Group based on columns
  • 74. Make Everyone a Data Programmer!
  • 85. Inputs from, fromKafka, fromFile, fromS3, fromPrometheus, fromMySQL, etc.
  • 92. import "mysql" customers = mysql.from(connect: loadSecret(name:”mysql_prod"), query: "select id, name from customers") data = from(bucket: "my_data") |> range(start: -4h) |> filter(fn: (r) => r._measurement == “write_requests") |> rename(columns: {customer_id: “id"}) join(tables: {customers, data}, on: ["id"]) |> yield(name: "results")
  • 93. import "mysql" customers = mysql.from(connect: loadSecret(name:"mysql_prod"), query: "select id, name from customers") data = from(bucket: "my_data") |> range(start: -4h) |> filter(fn: (r) => r._measurement == “write_requests") |> rename(columns: {customer_id: “id"}) join(tables: {customers, data}, on: ["id"]) |> yield(name: "results") Imports for sharing code!
  • 94. import "mysql" customers = mysql.from(connect: loadSecret(name:"mysql_prod"), query: "select id, name from customers") data = from(bucket: "my_data") |> range(start: -4h) |> filter(fn: (r) => r._measurement == “write_requests") |> rename(columns: {customer_id: “id"}) join(tables: {customers, data}, on: ["id"]) |> yield(name: "results") Pulling data from a non-InfluxDB source
  • 95. import "mysql" customers = mysql.from(connect: loadSecret(name:"mysql_prod"), query: "select id, name from customers") data = from(bucket: "my_data") |> range(start: -4h) |> filter(fn: (r) => r._measurement == “write_requests") |> rename(columns: {customer_id: “id"}) join(tables: {customers, data}, on: ["id"]) |> yield(name: "results") Raw query (for now)
  • 96. import "mysql" customers = mysql.from(connect: loadSecret(name:"mysql_prod"), query: "select id, name from customers") data = from(bucket: "my_data") |> range(start: -4h) |> filter(fn: (r) => r._measurement == “write_requests") |> rename(columns: {customer_id: “id"}) join(tables: {customers, data}, on: ["id"]) |> yield(name: "results") Loading Secret
  • 97. import "mysql" customers = mysql.from(connect: loadSecret(name:"mysql_prod"), query: "select id, name from customers") data = from(bucket: "my_data") |> range(start: -4h) |> filter(fn: (r) => r._measurement == “write_requests") |> rename(columns: {customer_id: “id"}) join(tables: {customers, data}, on: ["id"]) |> yield(name: "results") Renaming & Shaping Data
  • 98. import "mysql" customers = mysql.from(connect: loadSecret(name:"mysql_prod"), query: "select id, name from customers") data = from(bucket: "my_data") |> range(start: -4h) |> filter(fn: (r) => r._measurement == “write_requests") |> rename(columns: {customer_id: “id"}) join(tables: {customers, data}, on: ["id"]) |> yield(name: "results") Join on any column
  • 99. Outputs to, toKafka, toFile, toS3, toPrometheus, toMySQL, etc.
  • 100. Outputs are for Tasks
  • 101. option task = { name: “Alert on disk", every: 5m, } crit = 90 // alert at this percentage warn = 80 // warn at this percentage data = from(bucket: "telegraf/autogen") |> filter(fn: (r) => r._measurement == "disk" and r._field == "used_percent") |> last() data |> filter(fn: (r) => r._value > crit) |> addColumn(key: "level", value: "critical") |> addColumn(key: "alert", value: task.name) |> to(bucket: "alerts") data |> filter(fn: (r) => r._value > warn && r._value < crit) |> addColumn(key: "level", value: "warn") |> to(bucket: "alerts")
  • 102. option task = { name: “Alert on disk", every: 5m, } crit = 90 // alert at this percentage warn = 80 // warn at this percentage data = from(bucket: "telegraf/autogen") |> filter(fn: (r) => r._measurement == "disk" and r._field == "used_percent") |> last() data |> filter(fn: (r) => r._value > crit) |> addColumn(key: "level", value: "critical") |> addColumn(key: "alert", value: task.name) |> to(bucket: "alerts") data |> filter(fn: (r) => r._value > warn && r._value < crit) |> addColumn(key: "level", value: "warn") |> to(bucket: "alerts") Option syntax for tasks
  • 103. option task = { name: “Alert on disk", every: 5m, } crit = 90 // alert at this percentage warn = 80 // warn at this percentage data = from(bucket: "telegraf/autogen") |> filter(fn: (r) => r._measurement == "disk" and r._field == "used_percent") |> last() data |> filter(fn: (r) => r._value > crit) |> addColumn(key: "level", value: "critical") |> addColumn(key: "alert", value: task.name) |> to(bucket: "alerts") data |> filter(fn: (r) => r._value > warn && r._value < crit) |> addColumn(key: "level", value: "warn") |> to(bucket: "alerts") Get at the last value without specifying time range
  • 104. option task = { name: “Alert on disk", every: 5m, } crit = 90 // alert at this percentage warn = 80 // warn at this percentage data = from(bucket: "telegraf/autogen") |> filter(fn: (r) => r._measurement == "disk" and r._field == "used_percent") |> last() data |> filter(fn: (r) => r._value > crit) |> addColumn(key: "level", value: “critical") |> addColumn(key: "alert", value: task.name) |> to(bucket: "alerts") data |> filter(fn: (r) => r._value > warn && r._value < crit) |> addColumn(key: "level", value: "warn") |> to(bucket: "alerts") Adding a column to decorate the data
  • 105. option task = { name: “Alert on disk", every: 5m, } crit = 90 // alert at this percentage warn = 80 // warn at this percentage data = from(bucket: "telegraf/autogen") |> filter(fn: (r) => r._measurement == "disk" and r._field == "used_percent") |> last() data |> filter(fn: (r) => r._value > crit) |> addColumn(key: "level", value: "critical") |> addColumn(key: "alert", value: task.name) |> to(bucket: "alerts") data |> filter(fn: (r) => r._value > warn && r._value < crit) |> addColumn(key: "level", value: "warn") |> to(bucket: "alerts") To writes to the local InfluxDB
  • 107. option task = {name: "slack critical alerts", every: 1m} import "slack" lastNotificationTime = from(bucket: "notificatons") |> filter(fn: (r) => r.level == "critical" and r._field == "alert_time") |> group(none:true) |> last() |> recordValue(column:"_value") from(bucket: "alerts") |> range(start: lastNotificationTime) |> filter(fn: (r) => r.level == "critical") // shape the alert data to what we care about in notifications |> renameColumn(from: "_time", to: "alert_time") |> renameColumn(from: "_value", to: "used_percent") // set the time the notification is being sent |> addColumn(key: "_time", value: now()) // get rid of unneeded columns |> drop(columns: ["_start", "_stop"]) // write the message |> map(fn: (r) => r._value = "{r.host} disk usage is at {r.used_percent}%") |> slack.to(config: loadSecret(name: “slack_alert_config”), message: “_value”) |> to(bucket: “notifications")
• 108. option task = {name: "slack critical alerts", every: 1m} import "slack" lastNotificationTime = from(bucket: "notifications") |> filter(fn: (r) => r.level == "critical" and r._field == "alert_time") |> group(none:true) |> last() |> recordValue(column:"_value") from(bucket: "alerts") |> range(start: lastNotificationTime) |> filter(fn: (r) => r.level == "critical") // shape the alert data to what we care about in notifications |> renameColumn(from: "_time", to: "alert_time") |> renameColumn(from: "_value", to: "used_percent") // set the time the notification is being sent |> addColumn(key: "_time", value: now()) // get rid of unneeded columns |> drop(columns: ["_start", "_stop"]) // write the message |> map(fn: (r) => r._value = "{r.host} disk usage is at {r.used_percent}%") |> slack.to(config: loadSecret(name: "slack_alert")) |> to(bucket: "notifications") We have state so we don't resend
• 109. option task = {name: "slack critical alerts", every: 1m} import "slack" lastNotificationTime = from(bucket: "notifications") |> filter(fn: (r) => r.level == "critical" and r._field == "alert_time") |> group(none:true) |> last() |> recordValue(column:"_value") from(bucket: "alerts") |> range(start: lastNotificationTime) |> filter(fn: (r) => r.level == "critical") // shape the alert data to what we care about in notifications |> renameColumn(from: "_time", to: "alert_time") |> renameColumn(from: "_value", to: "used_percent") // set the time the notification is being sent |> addColumn(key: "_time", value: now()) // get rid of unneeded columns |> drop(columns: ["_start", "_stop"]) // write the message |> map(fn: (r) => r._value = "{r.host} disk usage is at {r.used_percent}%") |> slack.to(config: loadSecret(name: "slack_alert")) |> to(bucket: "notifications") Use last time as argument to range
• 110. option task = {name: "slack critical alerts", every: 1m} import "slack" lastNotificationTime = from(bucket: "notifications") |> filter(fn: (r) => r.level == "critical" and r._field == "alert_time") |> group(none:true) |> last() |> recordValue(column:"_value") from(bucket: "alerts") |> range(start: lastNotificationTime) |> filter(fn: (r) => r.level == "critical") // shape the alert data to what we care about in notifications |> renameColumn(from: "_time", to: "alert_time") |> renameColumn(from: "_value", to: "used_percent") // set the time the notification is being sent |> addColumn(key: "_time", value: now()) // get rid of unneeded columns |> drop(columns: ["_start", "_stop"]) // write the message |> map(fn: (r) => r._value = "{r.host} disk usage is at {r.used_percent}%") |> slack.to(config: loadSecret(name: "slack_alert")) |> to(bucket: "notifications") now() function for the current time
• 111. option task = {name: "slack critical alerts", every: 1m} import "slack" lastNotificationTime = from(bucket: "notifications") |> filter(fn: (r) => r.level == "critical" and r._field == "alert_time") |> group(none:true) |> last() |> recordValue(column:"_value") from(bucket: "alerts") |> range(start: lastNotificationTime) |> filter(fn: (r) => r.level == "critical") // shape the alert data to what we care about in notifications |> renameColumn(from: "_time", to: "alert_time") |> renameColumn(from: "_value", to: "used_percent") // set the time the notification is being sent |> addColumn(key: "_time", value: now()) // get rid of unneeded columns |> drop(columns: ["_start", "_stop"]) // write the message |> map(fn: (r) => r._value = "{r.host} disk usage is at {r.used_percent}%") |> slack.to(config: loadSecret(name: "slack_alert")) |> to(bucket: "notifications") Map function to iterate over values
• 112. option task = {name: "slack critical alerts", every: 1m} import "slack" lastNotificationTime = from(bucket: "notifications") |> filter(fn: (r) => r.level == "critical" and r._field == "alert_time") |> group(none:true) |> last() |> recordValue(column:"_value") from(bucket: "alerts") |> range(start: lastNotificationTime) |> filter(fn: (r) => r.level == "critical") // shape the alert data to what we care about in notifications |> renameColumn(from: "_time", to: "alert_time") |> renameColumn(from: "_value", to: "used_percent") // set the time the notification is being sent |> addColumn(key: "_time", value: now()) // get rid of unneeded columns |> drop(columns: ["_start", "_stop"]) // write the message |> map(fn: (r) => r._value = "{r.host} disk usage is at {r.used_percent}%") |> slack.to(config: loadSecret(name: "slack_alert")) |> to(bucket: "notifications") String interpolation
• 113. option task = {name: "slack critical alerts", every: 1m} import "slack" lastNotificationTime = from(bucket: "notifications") |> filter(fn: (r) => r.level == "critical" and r._field == "alert_time") |> group(none:true) |> last() |> recordValue(column:"_value") from(bucket: "alerts") |> range(start: lastNotificationTime) |> filter(fn: (r) => r.level == "critical") // shape the alert data to what we care about in notifications |> renameColumn(from: "_time", to: "alert_time") |> renameColumn(from: "_value", to: "used_percent") // set the time the notification is being sent |> addColumn(key: "_time", value: now()) // get rid of unneeded columns |> drop(columns: ["_start", "_stop"]) // write the message |> map(fn: (r) => r._value = "{r.host} disk usage is at {r.used_percent}%") |> slack.to(config: loadSecret(name: "slack_alert")) |> to(bucket: "notifications") Send to Slack and record in InfluxDB
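For readability, the notification task from slides 107–113 across lines. Same caveat as above: recordValue(), renameColumn(), addColumn(), loadSecret(), the slack package, and the {r.host} interpolation style are as shown in the deck, i.e. the syntax proposed at the time, not necessarily the shipped API; the loadSecret name follows slide 107.

    option task = {name: "slack critical alerts", every: 1m}

    import "slack"

    // state: the time of the last critical notification we sent, so we don't resend
    lastNotificationTime = from(bucket: "notifications")
        |> filter(fn: (r) => r.level == "critical" and r._field == "alert_time")
        |> group(none: true)
        |> last()
        |> recordValue(column: "_value")

    from(bucket: "alerts")
        // use the last notification time as the argument to range
        |> range(start: lastNotificationTime)
        |> filter(fn: (r) => r.level == "critical")
        // shape the alert data to what we care about in notifications
        |> renameColumn(from: "_time", to: "alert_time")
        |> renameColumn(from: "_value", to: "used_percent")
        // set the time the notification is being sent
        |> addColumn(key: "_time", value: now())
        // get rid of unneeded columns
        |> drop(columns: ["_start", "_stop"])
        // build the message with string interpolation
        |> map(fn: (r) => r._value = "{r.host} disk usage is at {r.used_percent}%")
        // send to Slack and record the notification in InfluxDB
        |> slack.to(config: loadSecret(name: "slack_alert_config"), message: "_value")
        |> to(bucket: "notifications")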
• 114. option task = { name: "email alert digest", cron: "0 5 * * 0" } import "smtp" body = "" from(bucket: "alerts") |> range(start: -24h) |> filter(fn: (r) => (r.level == "warn" or r.level == "critical") and r._field == "message") |> group(by: ["alert"]) |> count() |> group(none: true) |> map(fn: (r) => body = body + "Alert {r.alert} triggered {r._value} times\n") smtp.to( config: loadSecret(name: "smtp_digest"), to: "alerts@influxdata.com", title: "Alert digest for {now()}", body: body)
• 115. option task = { name: "email alert digest", cron: "0 5 * * 0" } import "smtp" body = "" from(bucket: "alerts") |> range(start: -24h) |> filter(fn: (r) => (r.level == "warn" or r.level == "critical") and r._field == "message") |> group(by: ["alert"]) |> count() |> group(none: true) |> map(fn: (r) => body = body + "Alert {r.alert} triggered {r._value} times\n") smtp.to( config: loadSecret(name: "smtp_digest"), to: "alerts@influxdata.com", title: "Alert digest for {now()}", body: body) Cron syntax
• 116. option task = { name: "email alert digest", cron: "0 5 * * 0" } import "smtp" body = "" from(bucket: "alerts") |> range(start: -24h) |> filter(fn: (r) => (r.level == "warn" or r.level == "critical") and r._field == "message") |> group(by: ["alert"]) |> count() |> group(none: true) |> map(fn: (r) => body = body + "Alert {r.alert} triggered {r._value} times\n") smtp.to( config: loadSecret(name: "smtp_digest"), to: "alerts@influxdata.com", title: "Alert digest for {now()}", body: body) Closures
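And the weekly digest task from slides 114–116, laid out across lines. The smtp package, loadSecret(), and the closure-capturing map() are again the deck's proposed syntax; the sketch only reformats what the slides show.

    option task = {name: "email alert digest", cron: "0 5 * * 0"}

    import "smtp"

    body = ""

    from(bucket: "alerts")
        |> range(start: -24h)
        |> filter(fn: (r) => (r.level == "warn" or r.level == "critical") and r._field == "message")
        |> group(by: ["alert"])
        |> count()
        |> group(none: true)
        // the closure captures body and appends one line per alert
        |> map(fn: (r) => body = body + "Alert {r.alert} triggered {r._value} times\n")

    smtp.to(
        config: loadSecret(name: "smtp_digest"),
        to: "alerts@influxdata.com",
        title: "Alert digest for {now()}",
        body: body)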
• 117. Task run logs (just another time series)
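A minimal sketch of what "just another time series" implies: run logs should be queryable with ordinary Flux. The bucket, measurement, and tag names below are purely illustrative assumptions, not the actual schema from the deck.

    // hypothetical schema: assumes task runs land in a "task_logs" bucket
    // under a "runs" measurement with a task_name tag
    from(bucket: "task_logs")
        |> range(start: -1d)
        |> filter(fn: (r) => r._measurement == "runs" and r.task_name == "Alert on disk")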
  • 118. UI will hide complexity
  • 119. Built on top of primitives
  • 120. API for Defining Dashboards
• 121. Bulk Import & Export: specify bucket, range, predicate
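The slide names only the three selectors an export takes. As a rough sketch, the same bucket/range/predicate triple expressed as a Flux selection, reusing the disk data from the earlier slides; the export tooling itself is not shown in the deck, so no CLI flags are assumed here.

    // bucket
    from(bucket: "telegraf/autogen")
        // range
        |> range(start: 2018-11-01T00:00:00Z, stop: 2018-11-08T00:00:00Z)
        // predicate
        |> filter(fn: (r) => r._measurement == "disk" and r._field == "used_percent")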
  • 122. Same API in OSS, Cloud, and Enterprise
  • 124. 2.0