Copyright © 2014 Splunk Inc.
Power of Splunk
Search Processing Language
(SPL™)
Safe Harbor Statement
During the course of this presentation, we may make forward looking statements regarding future events
or the expected performance of the company. We caution you that such statements reflect our current
expectations and estimates based on factors currently known to us and that actual events or results could
differ materially. For important factors that may cause actual results to differ from those contained in our
forward-looking statements, please review our filings with the SEC. The forward-looking statements
made in this presentation are being made as of the time and date of its live presentation. If reviewed
after its live presentation, this presentation may not contain current or accurate information. We do not
assume any obligation to update any forward looking statements we may make. In addition, any
information about our roadmap outlines our general product direction and is subject to change at any
time without notice. It is for informational purposes only and shall not be incorporated into any contract
or other commitment. Splunk undertakes no obligation either to develop the features or functionality
described or to include any such feature or functionality in a future release.
Agenda
Overview & Anatomy of a Search
– Quick refresher on search language and structure
SPL Commands and Examples
– Searching, charting, converging, exploring
Custom Commands
– Extend the capabilities of SPL
Q&A
SPL Overview
More than 140 search commands
Syntax was originally based on the Unix pipeline and SQL,
and is optimized for time-series data
The scope of SPL includes data searching, filtering, modification,
manipulation, enrichment, insertion and deletion
Why Create a New Query Language?
Flexibility and effectiveness on small and big data
Late-binding schema
More/better methods of correlation
Not just analyze, but visualize
SPL Basic Structure
search and filter | munge | report | cleanup

sourcetype=access*                                                  ← search and filter
| eval KB=bytes/1024                                                ← munge
| stats sum(KB) dc(clientip)                                        ← report
| rename sum(KB) AS "Total KB" dc(clientip) AS "Unique Customers"   ← cleanup
SPL Examples
SPL Examples and Recipes
Search and filter + creating/modifying fields
Charting statistics and predicting values
Converging data sources
Identifying and grouping transactions
Data exploration & finding relationships between fields
Search and Filter
Examples
• Keyword search:
sourcetype=access* http
• Filter:
sourcetype=access* http host=webserver-02
• Combined:
sourcetype=access* http host=webserver-02 (503 OR 504)
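Filters also compose with Boolean NOT and field comparisons. A hedged sketch against the same access data, assuming the status field is extracted (as in the eval examples below), that drops successful requests:

sourcetype=access* http host=webserver-02 NOT status=200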
Eval – Modify or Create New Fields and Values
Examples
• Calculation:
sourcetype=access* | eval KB=bytes/1024
• Evaluation:
sourcetype=access* | eval http_response = if(status != 200, "Error", "OK")
• Concatenation:
sourcetype=access* | eval connection = clientip.":".port
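eval functions can also be nested and chained in one pipeline. A hedged sketch on the same data using the standard round() and case() eval functions; the severity buckets are illustrative:

sourcetype=access*
| eval KB=round(bytes/1024, 2)
| eval severity=case(status>=500, "server error", status>=400, "client error", true(), "ok")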
Eval – Just Getting Started!
Splunk Search Quick Reference Guide
Stats, Chart, Timechart
Stats – Calculate Statistics Based on Field Values
Examples
• Calculate stats and rename:
sourcetype=netapp:perf | stats avg(read_ops) AS "Read OPs"
• Multiple statistics:
sourcetype=netapp:perf | stats avg(read_ops) AS Read_OPs sparkline(avg(read_ops)) AS Read_Trend
• By another field:
sourcetype=netapp:perf | stats avg(read_ops) AS Read_OPs sparkline(avg(read_ops)) AS Read_Trend by instance
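stats computes many aggregations in a single pass over the data. A hedged sketch on the same NetApp data, adding the standard count, perc95, and max functions alongside the average:

sourcetype=netapp:perf
| stats count avg(read_ops) AS Read_OPs perc95(read_ops) AS Read_P95 max(read_ops) AS Read_Peak by instance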
Timechart – Visualize Statistics Over Time
Examples
• Visualize stats over time:
sourcetype=netapp:perf | timechart avg(read_ops)
• Add a trendline:
sourcetype=netapp:perf | timechart avg(read_ops) as read_ops | trendline sma5(read_ops)
• Add a prediction overlay:
sourcetype=netapp:perf | timechart avg(read_ops) as read_ops | predict read_ops
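The three steps compose into a single pipeline. A hedged sketch: span= sets the bucket size, sma5 is a five-point simple moving average, and predict extends the series forward:

sourcetype=netapp:perf
| timechart span=1h avg(read_ops) AS read_ops
| trendline sma5(read_ops) AS read_trend
| predict read_ops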
Stats/Timechart – But Wait, There’s More!
Splunk Search Quick Reference Guide
Converging Data Sources
Index Untapped Data: Any Source, Type, Volume
[Slide graphic: data sources feeding Splunk – online services, web services, servers, security, GPS location, storage, desktops, networks, packaged applications, custom applications, messaging, telecoms, online shopping cart, web clickstreams, databases, energy meters, call detail records, smartphones and devices, RFID – whether on-premises, in a private cloud, or in a public cloud]
Ask Any Question: application delivery; security, compliance, and fraud; IT operations; business analytics; industrial data and the Internet of Things
Converging Data Sources – Lookup and Appendcols
Examples
• Implicit join on time:
index=* http | timechart count by sourcetype
• Enrich data with lookup:
sourcetype=access_combined status=503 | lookup customer_info uid | stats count by customer_value
• Append results from another search:
… | appendcols [search earliest=-1h sourcetype=Kepware units=W row=A | stats stdev(Value) as hr_stdev] …
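SPL also has a join command, but, as the speaker notes point out, a simple OR in the base search often converges two sources more cheaply. A hedged sketch, assuming a hypothetical order_history sourcetype that shares the uid field with the access logs:

sourcetype=access_combined OR sourcetype=order_history
| stats values(status) AS http_status count by uid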
Transaction – Group Related Events Spanning Time
Examples
• Group by session ID:
sourcetype=access* | transaction JSESSIONID
• Calculate session durations:
sourcetype=access* | transaction JSESSIONID | stats min(duration) max(duration) avg(duration)
• Stats is better:
sourcetype=access* | stats min(_time) AS earliest max(_time) AS latest by JSESSIONID | eval duration=latest-earliest | stats min(duration) max(duration) avg(duration)
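transaction also takes time constraints that keep sessions from sprawling. A hedged sketch using its standard maxspan and maxpause options; duration and eventcount are fields that transaction creates:

sourcetype=access*
| transaction JSESSIONID maxspan=30m maxpause=5m
| stats avg(duration) max(eventcount)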
Data Exploration
| anomalies
| arules
| associate
| cluster
| contingency
| correlate
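Of these, arules is the only one not demonstrated on the following slides. As a hedged sketch, it mines association rules between field values; sup and conf are its minimum support-count and confidence options:

sourcetype=access_combined
| arules sup=3 conf=50 uri status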
Cluster, Correlate, Contingency, Associate – Exploring Your Data
Examples
• Find most/least common events:
* | cluster showcount=t t=.1 | table _raw cluster_count
• Show patterns of co-occurring fields:
sourcetype=access_combined | fields - date* source* time* | correlate
• Build a contingency table to view field relationships:
sourcetype=access_combined | contingency uri status
• Automatically deduce conclusions:
sourcetype=access_combined | associate uri status
Custom Commands
What is a Custom Command?
– | haversine origin="47.62,-122.34" outputField=dist lat lon
Why do we use Custom Commands?
– Run other/external algorithms on your Splunk data
– Save time munging data (see Timewrap!)
– Because you can!
Create your own or download as Apps
– Haversine (distance between two GPS coordinates)
– Timewrap (enhanced time overlay)
– Levenshtein (fuzzy string compare)
– R Project (utilize R!)
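For the "create your own" path, a minimal hedged sketch: a custom search command is a script registered in commands.conf. The command name echofield, the script, and the added field are all hypothetical; splunk.Intersplunk is the classic helper module that ships with Splunk:

# commands.conf, in your app's local/ directory
[echofield]
filename = echofield.py

# echofield.py -- a pass-through command that tags each event
import splunk.Intersplunk as si

results, dummyresults, settings = si.getOrganizedResults()  # read events from the pipeline
for result in results:
    result["echoed"] = "1"  # hypothetical enrichment; a real command does real work here
si.outputResults(results)   # hand events back down the pipeline

Once installed, it drops into a search like any built-in: sourcetype=access* | echofield | table _raw echoed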
Custom Commands – Haversine
Examples
• Download and install the Haversine app
• Read the documentation, then use it in SPL!
sourcetype=access*
| iplocation clientip
| search City=A*
| haversine origin="47.62,-122.34" units=mi outputField=dist lat lon
| table clientip, City, dist, lat, lon
For More Information
Additional information can be found in:
– Search Manual
– Blogs
– Answers
– Operational Intelligence Cookbook – available for purchase
– Exploring Splunk
Q & A
Thank you!
Editor's Notes
  • #2: This presentation has some animations and content to help tell stories as you go. Feel free to change ANY of this to your own liking! Here is what you need for this presentation: You should have the following installed: the latest OI Demo 3.0 - get it here: https://splunk.box.com/s/unocxl3jeun0tmhlczvlv3ei2h55pnfw --- more official coming soon. Optional: Splunk Search Reference Guide handouts; mini buttercups or other prizes to give out for answering questions during the presentation. I found it is best to pre-load all of the demo dashboards with the search examples instead of clicking on each picture (link to the search) from the slides and moving between the PowerPoint presentation and a Splunk demo instance too frequently. I would definitely practice your flow once or twice before a presentation.
  • #3: Safe Harbor Statement
  • #4: Disclaimer: What this class is vs. what it is not? - This class is meant to showcase examples of the Splunk Search Processing Language. We’ll go through basic steps of how to use a few of the commands, but for the most part it is meant to demo; however, you can learn much more in depth by enrolling in the Basic and Advanced Search and Reporting classes or reading up on the docs online. Don’t worry - I’ll provide references for anything you see, and the examples will be available for download after the session. Opening Tell for each Agenda Item: What and why is it important? Anatomy of a Search: - First we’ll do a quick refresher on the anatomy of a search and why it’s useful. It’s important to understand the basic flow of the language and also the benefits of it. Examples of SPL: - Next we’ll show how both basic and more advanced search commands can be used to answer real world questions and build operational intelligence. In fact, we’ll break down a few of the searches in the Operational Intelligence demo you saw on the main stage. Additionally we’ll look at how SPL can help you explore new and complex data. In my opinion, this is an often overlooked and really powerful benefit of SPL. Custom Commands: - Lastly, I’ll show how to extend the Splunk search language using custom commands. This is also exciting due to the fact that the community has already made so many additions. Q&As: - And of course we’ll finish with some Q&A. Time: (Total 60 min) Overview: 5 min Examples of SPL: 35 min Custom Commands 10 min Q & A: 10 min
  • #6: “The Splunk search language has more than 140 commands, is very expressive and can perform a wide variety of tasks ranging from filtering data, to munging or modifying, and reporting.” “The Syntax was …” “Why? Because SQL is good for certain tasks and the Unix pipeline is amazing!” This is great BUT… WHY WOULD WE WANT TO CREATE A NEW LANGUAGE AND WHY DO YOU CARE?
  • #7: <Engage audience here.. Before showing bullet points ask “Why do you think we would want to create a new language?”> <Also Feel free to change pictures or flow of this slide..> -- have buttercups to throw out if anyone answers correctly? - Today we require the ability to quickly search and correlate through large amounts of data, sometimes in an unstructured or semi-unstructured way. Conventional query languages (such as SQL or MDX) simply do not provide the flexibility required for the effective searching of big data. Not only this but STREAMING data. (SQL can be great at joining a bunch of small tables together, but really large joins on datasets can be a problem whereas hadoop can be great with larger data sets, but sometimes inefficient when it comes to many small files or datasets. ) - Machine Data is different: - It is voluminous unstructured time series data with no predefined schema - It is generated by all IT systems– from applications and servers, to networks and RFIDs. - It is non-standard data and characterized by unpredictable and changing formats Traditional approaches are just not engineered for managing this high volume, high velocity, and highly diverse form of data. Splunk’s NoSQL query approach does not involve or impose any predefined schema. This enables the increased flexibility mentioned above, as there are No limits on the formats of data – No limits on where you can collect it from No limits on the questions that you can ask of it And no limits on scale Methods of Correlation enabled by SPL Time & GeoLocation: Identify relationships based on time and geographic location Transactions: Track a series of events as a single transaction Subsearches: Results of one search as input into other searches Lookups: Enhance, enrich, validate or add context to event data SQL-like joins between different data sets In addition to flexible searching and correlation, the same language is used to rapidly construct reports, dashboards, trendlines and other visualizations. This is useful because you can understand and leverage your data without the cost associated with the formal structuring or modeling of the data first. (With hadoop or SQL you run a job or query to generate results, but then you have need to integrate more software to actually visualize it!) “OK.. Let’s move on..”
  • #8: “Let’s take a closer look at the syntax, notice the Unix pipeline” “The structure of SPL creates an easy way to stitch a variety of commands together to solve almost any question you may ask of your data.” “Search and Filter” - The search and filter piece allows you to use fields or keywords to reduce the data set. It’s an important but often overlooked part of the search due to the performance implications. “Munge” - The munge step is a powerful piece because you can “re-shape” data on the fly. In this example we show creating a new field called KB from an existing field “bytes”. “Report” - Once we’ve shaped and massaged the data we now have an abundant set of reporting commands that are used to visualize results through charts and tables, or even send to a third party application in whatever format they require. “Cleanup” - Lastly there are some cleanup options to help you create better labeling and add or remove fields. Again, stitching together makes it easier to utilize and understand advanced commands, better flow etc. Additionally the implicit join on time and automatic granularity helps reduce complexity compared to what you would have to do in SQL and Excel or other tools. “OK.. Let’s move on..”
  • #10: “In this next section we’ll take a more in-depth look at some search examples and recipes. It would be impossible for us to go over every command and use case, so the goal of this is to show a few different commands that can help solve most problems and generate quick time to value in the following areas."
  • #11: “We’ll start by looking at a few Search and Filter basics. Most searches begin here and it’s important to understand how to reduce your data set down to find what you’re looking for, as well as for optimal performance” <The way you present/demo is flexible. The slides can be used as a reference and backup when needed, otherwise you can do most of it in the demo itself> <<<< ALL PICTURES ARE LINKED TO THE SEARCHES IN SPLUNK to help going back and forth>>>>
  • #12: Note how the search assistant shows the number of both exact and similar matched terms before you even click search. This can be very useful when exploring and previewing your data sets without having to run searches over and over again to find a result.
  • #13: Additionally we can further filter our data set down to a specific host.
  • #14: Lastly we can combine filters and keyword searches very easily. “This is pretty basic, but the key here is that SPL makes it incredibly easy and flexible to filter your searches down and reduce your data set to exactly what you’re looking for.
  • #15: Remember Munging or Re-shaping our data on the fly? Talk about Eval and its importance sourcetype=access* | eval KB=bytes/1024
  • #16: sourcetype=access* | eval http_response = if(status == 200, "OK", "Error”)
  • #17: sourcetype=access* | eval connection = clientip.":".port
  • #18: “There are tons of EVAL commands to help you shape or manipulate your data the way you want it.” Optional <Click on image to show and scroll through the online quick reference guide>
  • #19: Next we’ll talk about Splunk’s charting and statistical commands. Notes: Stats Timechart Trendline Predict Add streamstats and eventstats or keep simple?
  • #20: There are 3 commands that are the basis of calculating statistics and visualizing results. Essentially chart is just stats visualized and timechart is stats by _time visualized. These SPL commands are extremely powerful and easy to use. “Let’s go through some examples – additionally we’ll make it more interesting and pull apart some searches and visualizations from one of the demos you saw on stage” <Go to IT Ops Visibility, click on Storage indicator> 1. Use Read/Write OPs by instance for STATS, bonus w/ sparkline 2. Use Read/Write OPs for TIMECHART
  • #21: *Note these searches are from the latest OI Demo 3, if you don’t want to use OI Demo 3 you can switch back to sourcetype=access* and use the bytes field” <Go to IT Ops Visibility, click on Storage indicator> sourcetype=netapp:perf | stats avg(read_ops) AS Read_OPs
  • #22: sourcetype=netapp:perf | stats avg(read_ops) AS Read_Ops sparkline(avg(read_ops)) AS Read_Trend Can change out the avg with sum, min, max, etc. Sparkline is a bonus option, can interchange with another statistical function but thought it might be fun to show.
  • #23: sourcetype=netapp:perf | stats avg(read_ops) AS Read_Ops sparkline(avg(read_ops)) AS Read_Trend by instance Final: sourcetype=netapp:perf | stats avg(read_ops) as Read_OPs sparkline(avg(read_ops)) as Read_Trend avg(write_ops) as Write_OPs sparkline(avg(write_ops)) as Write_Trend by instance
  • #24: <Back to IT Ops Dashboard – Click on Netapp performance to start timechart example> Show difference between stats and timechart (adds _time buckets, visualize, etc.) Why is this awesome? We can do all of the same statistical calculations over time with almost any level of granularity. For example… <change timepicker from 60min to 15min, add span=1s to search and zoom in> Add below? Due to the implicit time dimension, it’s very easy to use timechart to visualize disparate data sets with varying time frequencies. SQL vs Timechart actual comparison?
  • #25: Walk through trendline basic options
  • #26: Walk through predict basic options “The timechart command plus other SPL commands make it very easy to visualize your data any way you want.”
  • #27: “Again, don’t forget about the quick reference guide. There are many more statistical functions you can use with these commands on your data.”
  • #28: Implicit join on time Appendcols Lookup Join – not sure if adding this yet?
  • #29: Context is everything when it comes to building successful operational intelligence. When you are stuck analyzing events from a single data source at a time, you might be missing out on rich contextual information or new insights that other data sources can provide. Let’s take a quick look at a few powerful SPL commands that can help make this happen.
  • #30: “Don’t forget that you already have an implicit join on time across all of your data sources. Without even using additional commands we can find insights just by looking at the simple frequency and patterns of data.” index=* http | timechart count by sourcetype
  • #31: “Let’s look at another example from the Operational Intelligence demo, more specifically the Business Analytics dashboard.” “When operational issues arose the question was asked ‘Can we tell if our “high-value” customers are being impacted by these issues?’” “Given a spreadsheet or database with customer information we can do just that by using lookups” <Show Excel file of customer_info.csv> “Both our access_logs and customer information data have a user id that we can use as a key” “Just like that we can run real-time analytics on all of the fields from that data source!” “Lookups can be configured automatically so you don’t have to type them in every time.” sourcetype=access_combined status=503 | lookup customer_info uid | stats count by customer_value
  • #32: This is a more complex example, feel free to exchange this out with another “In this example we are going to be converging (or stitching together) multiple searches and use everything we’ve learned so far such as searching and filtering, creating fields, and using stats/timechart.” <Go to IoT Dashboard and show power graph> “While we are monitoring power usage by rack, maybe we want to be more proactive in the future and alert on significant deviations in power. To do this we’ll calculate the 2nd standard deviation of power usage in the past day, and compare it against our results in the past hour.” sourcetype=Kepware units=W row=A | timechart mean(Value) as mean_watts | appendcols [search earliest=-1d sourcetype=Kepware units=W row=A | stats stdev(Value) as hr_stdev] | eval 2stdv_upper = mean_watts + hr_stdev*2 | filldown 2stdv_upper | eval 2stdv_lower = mean_watts - hr_stdev*2 | filldown 2stdv_lower | fields - hr_stdev Might need to redo this example… is it simple enough? Also there is technically a more efficient way using eventstats (IF you are calculating the stdev over the same timerange as the search) .. In this case we are taking the daily stdev and appending that result Need to add JOIN? Talk about how there is a Join command, but many times don’t need it. Can usually use a simple OR instead, add this example when have time.
  • #33: <Please feel free to add more complex transaction searches here. For now just using the very basic>
  • #34: sourcetype=access* | transaction JSESSIONID
  • #35: sourcetype=access* | transaction JSESSIONID | stats min(duration) max(duration) avg(duration)
  • #36: NOTE: Many transactions can be re-created using stats. Transaction is easy but stats is way more efficient and it’s a mappable command (more work will be distributed to the indexers). sourcetype=access* | stats min(_time) AS earliest max(_time) AS latest by JSESSIONID | eval duration=latest-earliest | stats min(duration) max(duration) avg(duration)
  • #37: Pull up search: Associate Correlate Ctable/Contingency Arules Cluster
  • #38: Feel free to change this and use your own story! “Data Exploration is when we try to find patterns and relationships between fields, values and formats of data in order to gain additional insight or help narrow down data sets to the most important fields. It is also the process of characterizing and researching behavior of both existing and new data sources.” “For example, while you may have an existing data source you are already used to, there still could be some unknown value in it in terms of patterns, relationships between fields and rare events that could point you to new insights or help with predictive analytics. This capability gives you confidence to explore new data sources as well, because you can quickly look for relationships and nuggets that stick out or help classify data. A friend once asked me to look at some biomedical data with DNA information. The vocabulary and field definitions were way above me, but I was able to quickly understand patterns and relationships with Splunk and provide them value instantaneously. With Splunk you literally become afraid of no data!” Let’s look at a few quick examples.
  • #39: “The cluster command is used to find common and/or rare events within your data” <Show simple table search first and point out # of events, then run cluster and sort on cluster count to show common vs rare events> * | table _raw _time * | cluster showcount=t t=.1 | table _raw cluster_count | sort - cluster_count
  • #40: “The correlate command is used to find co-occurrence between fields. Basically a matrix showing ‘Field1 exists 80% of the time when Field2 exists’” sourcetype=access_combined | fields - date* source* time* | correlate “This can be useful for both making sure your field extractions are correct (if you expect a field to exist 100% of the time when another field exists) and also helping you identify potential patterns and trends between different fields.”
  • #41: “The contingency command is used to look for relationships of between two fields. Basically for these two fields, how many different value combinations are there and what are they / most common” sourcetype=access_combined | contingency uri status
  • #42: “I’ll be honest, this one is a bit more complicated. Maybe the more statistically honed folks will like this one”. Associate looks for relationships between events using common field pair values. It calculates the certainty of values of one field given the value from another field. So basically in this example, when the status is 404 or 503*, I can see the entropy decreases, meaning there is less chance/uncertainty in the values.” (Might need to update this?) sourcetype=access_combined | associate uri status
  • #44: Depending on remaining time can show 1 or more custom command examples. “We’ve gone over a variety of Splunk search commands.. but what happens when we can’t find a command that fits our needs OR want to use a complex algorithm someone has already written OR even create your own?? Enter Custom Commands.” Additional Text: Splunk's search language includes a wide variety of commands that you can use to get what you want out of your data and even to display the results in different ways. You have commands to correlate events and calculate statistics on your results, evaluate fields and reorder results, reformat and enrich your data, build charts, and more. Still, Splunk enables you to expand the search language to customize these commands to better meet your needs or to write your own search commands for custom processing or calculations.
  • #45: Let’s see Haversine in action. <Pull up search>
  • #46: *Note – Coordinates of origin in this Haversine example is currently “Seattle”, You can change to the location of your Splunk Live event
  • #47: References: Little about each
  • #48: TBD
  • #49: TBD