Flux and InfluxDB 2.0
Paul Dix

@pauldix

paul@influxdata.com
• Data-scripting language

• Functional

• MIT Licensed

• Language, VM, engine, planner, optimizer
Language + Query Engine
2.0
Biggest Change Since 0.9
Clean Migration Path
Compatibility Layer
• MIT Licensed

• Multi-tenanted

• Telegraf, InfluxDB, Chronograf, and Kapacitor rolled into one

• OSS single server

• Cloud usage-based pricing

• Dedicated Cloud 

• Enterprise on-premise
Consistent Documented API
Collection, Write/Query, Streaming & Batch Processing, Dashboards
Officially Supported Client Libraries
Go, Node.js, Ruby, Python, PHP, Java, C#, C, Kotlin
Visualization Libraries
Multi-tenant roles
• Operator

• Organization Administrator

• User
Data Model
• Organizations

• Buckets (retention)

• Time series data

• Tasks

• Runs

• Logs

• Dashboards

• Users

• Tokens

• Authorizations

• Protos (templates)

• Scrapers

• Telegrafs

• Labels
Ways to run Flux (interpreter, InfluxDB 1.7 & 2.0)
Flux Basics
// get all data from the telegraf db
from(bucket:"telegraf/autogen")
// filter that by the last hour
|> range(start:-1h)
// filter further by series with a specific measurement and field
|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
This single query illustrates comments, named arguments, string literals, buckets (not DBs), the duration literal (-1h), the pipe-forward operator (|>), and an anonymous function.
// get all data from the telegraf db
from(bucket:"telegraf/autogen")
// filter by an absolute start time instead of a relative one
|> range(start:2018-11-07T00:00:00Z)
// filter further by series with a specific measurement and field
|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
Time Literal
// get all data from the telegraf db
from(bucket:"telegraf/autogen")
// filter that by the last hour
|> range(start:-1h)
// filter further with a compound predicate on measurement and host
|> filter(fn: (r) => (r._measurement == "cpu" or r._measurement == "mem")
and r.host == "serverA")
Predicate Function
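Putting the basics together, here is a hedged sketch of a grouped aggregation; it assumes current Flux's group() and mean() functions and the same telegraf bucket:
// average system CPU usage per host over the last hour
from(bucket:"telegraf/autogen")
|> range(start:-1h)
|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
// regroup the stream so each host forms one table
|> group(columns: ["host"])
// collapse each table to its mean value
|> mean()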
// variables
some_int = 23
some_float = 23.2
some_string = "cpu"
some_duration = 1h
some_time = 2018-10-10T19:00:00Z
some_array = [1, 6, 20, 22]
some_object = {foo: "hello", bar: 22}
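Variables can be used anywhere an argument is expected; a minimal sketch, assuming the telegraf bucket from earlier:
// reuse variables inside a query
measurement = "cpu"
lookback = -1h
from(bucket:"telegraf/autogen")
|> range(start: lookback)
|> filter(fn: (r) => r._measurement == measurement)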
// defining a pipe forwardable function
square = (tables=<-) =>
tables
|> map(fn: (r) => {r with _value: r._value * r._value})
This is potentially new
from(bucket:"foo")
|> range(start: -1h)
|> filter(fn: (r) => r._measurement == "samples")
|> square()
|> filter(fn: (r) => r._value > 23.2)
Data Sources (inputs)
Data Sinks (outputs)
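A pipeline reads from a source and can write its result to a sink; a hedged sketch using Flux's to() output function, assuming a pre-created "downsampled" bucket and current Flux's aggregateWindow():
// read raw data, downsample it, and write it to another bucket
from(bucket:"telegraf/autogen")
|> range(start: -1h)
|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
// compute 5 minute means
|> aggregateWindow(every: 5m, fn: mean)
// to() is the output side: it writes the stream to the target bucket
|> to(bucket: "downsampled")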
Tasks
option task = {
name: "email alert digest",
cron: "0 5 * * 0"
}
import "smtp"
body = ""
from(bucket: "alerts")
|> range(start: -24h)
|> filter(fn: (r) => (r.level == "warn" or r.level == "critical") and r._field == "message")
|> group(columns: ["alert"])
|> count()
|> group()
|> map(fn: (r) => body = body + "Alert {r.alert} triggered {r._value} times\n")
smtp.to(
config: loadSecret(name: "smtp_digest"),
to: "alerts@influxdata.com",
title: "Alert digest for {now()}",
body: body)
This one task shows tasks, cron scheduling, packages & imports, map, string interpolation, shipping data elsewhere, and storing secrets in a store like Vault.
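Tasks are not limited to cron expressions; a fixed interval works as well. A minimal sketch, assuming the every option supported by InfluxDB 2.0 task definitions:
option task = {
name: "cpu downsample",
// run once an hour instead of on a cron schedule
every: 1h
}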
Open Questions
User Packages & Dependencies
// in a file called package.flux
package "paul"
option version = "0.1.1"
// define square here or…
// import the other package files
// they must have a package "paul" declaration at the top
// only package.flux has the version
import "packages"
packages.load(files: ["square.flux", "utils.flux"])
// or this
packages.load(glob: "*.flux")
import "myorg/paul" // latest, will load package.flux
data |> paul.square()
import "myorg/paul", "0.1.0" // specific version
// 1. look in $fluxhome/myorg/paul/package.flux
// 2. look in $fluxhome/myorg/paul/0.1.0/package.flux
// 3. look in cloud2.influxdata.com/api/v1/packages/myorg/paul
data |> paul.square()
import "myorg/paul", ">=0.1.0" // at least this version
data |> paul.square()
Error Handling?
import "slack"
// what if this returns an error?
ret = slack.send(room: "foo", message: "testing this", token: "...")
Option Types?
match ret {
// on match ret gets mapped as the new type
Error => {
// do something with the error
},
Else => {
// do something with ret
}
}
Loops?
records = [
{name: "foo", value: 23},
{name: "bar", value: 23},
{name: "asdf", value: 56}
]
// simple loop over each
records
|> map(fn: (r) => {name: r.name, value: r.value + 1})
// compute the sum
sum = records
|> reduce(
fn: (r, accumulator) => r.value + accumulator,
i: 0
)
// get matching records
foos = records
|> filter(fn: (r) => r.name == "foo")
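For comparison, reduce over a stream already exists in Flux, using an identity record rather than the i argument sketched above; a hedged example against the telegraf bucket:
// sum the last hour of values into a single record
from(bucket:"telegraf/autogen")
|> range(start: -1h)
|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
// the accumulator starts at identity and is threaded through each row
|> reduce(fn: (r, accumulator) => ({sum: accumulator.sum + r._value}), identity: {sum: 0.0})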
while(fn: () => {
// do stuff
})
while = (fn) =>
if fn()
while(fn)
// or loop some number of times
loop(fn: (i) => {
// do stuff here
},
times: 10)
loop = (fn, times) =>
loopUntil(fn, 0, times)
loopUntil = (fn, iteration, times) =>
if iteration < times {
fn(iteration)
loopUntil(fn, iteration + 1, times)
}
Syntactic Sugar
// <stream object>[<predicate>,<time>:<time>,<list of strings>]
// and here's an example
from(bucket:"foo")[_measurement == "cpu" and _field == "usage_user",
2018-11-07:2018-11-08,
["_measurement", "_time", "_value", "_field"]]
// which desugars to
from(bucket:"foo")
|> filter(fn: (row) => row._measurement == "cpu" and row._field == "usage_user")
|> range(start: 2018-11-07, stop: 2018-11-08)
|> keep(columns: ["_measurement", "_time", "_value", "_field"])
from(bucket:"foo")[_measurement == "cpu"]
// notice the trailing commas can be left off
from(bucket: "foo")
|> filter(fn: (row) => row._measurement == "cpu")
|> last()
from(bucket:"foo")["some tag" == "asdf",,]
from(bucket: "foo")
|> filter(fn: (row) => row["some tag"] == "asdf")
|> last()
from(bucket:"foo")[foo=="bar",-1h]
from(bucket: "foo")
|> filter(fn: (row) => row.foo == "bar")
|> range(start: -1h)
bucket = "foo"
start = -3
from(bucket: bucket)
|> range(start: start, stop: -1)
// shortcut if the variable name is the same as the argument
from(bucket)
|> range(start, stop: -1)
Flux Office Hours Tomorrow
InfluxDB 2.0 Status
Thank you
Paul Dix

paul@influxdata.com

@pauldix

