Developing High Frequency Indicators
Using Real-Time Tick Data
on Apache Superset and Druid
CBRT Big Data Team
Emre Tokel, Kerem Başol, M. Yağmur Şahin
Zekeriya Besiroglu / Komtas Bilgi Yonetimi
21 March 2019 Barcelona
Agenda
WHO WE ARE
CBRT & Our Team
PROJECT DETAILS
Before, Test Cluster,
Phase 1-2-3, Prod
Migration
HIGH FREQUENCY
INDICATORS
Importance & Goals
CURRENT ARCHITECTURE
Apache Kafka, Spark,
Druid & Superset
WORK IN
PROGRESS
Further analyses
FUTURE PLANS
Who We Are
Our Solutions
Data Management
• Data Governance Solutions
• Next Generation Analytics
• 360 Engagement
• Data Security
Analytics
• Data Warehouse Solutions
• Customer Journey Analytics
• Advanced Marketing Analytics Solutions
• Industry-specific analytic use cases
• Online Customer Data Platform
• IoT Analytics
• Analytic Lab Solution
Big Data & AI
• Big Data & AI Advisory Services
• Big Data & AI Accelerators
• Data Lake Foundation
• EDW Optimization / Offloading
• Big Data Ingestion and Governance
• AI Implementation – Chatbot
• AI Implementation – Image Recognition
Security Analytics
• Security Analytic Advisory Services
• Integrated Law Enforcement Solutions
• Cyber Security Solutions
• Fraud Analytics Solutions
• Governance, Risk & Compliance Solutions
• 20+ years in IT, 18+ years in DB & DWH
• 7+ years in Big Data
• Lead Architect, Big Data / Analytics @KOMTAS
• Instructor & Consultant
• Big Data instructor at ITU, MEF and Şehir University
• Certified R programmer
• Certified Hadoop Administrator
Our Organization
 The Central Bank of the Republic of Turkey is primarily responsible for steering the
monetary and exchange rate policies in Turkey.
o Price stability
o Financial stability
o Exchange rate regime
o The privilege of printing and issuing banknotes
o Payment systems
• Emre Tokel - Big Data Team Leader
• Kerem Başol - Big Data Engineer
• M. Yağmur Şahin - Big Data Engineer
High Frequency
Indicators
Importance and Goals
 To observe foreign exchange markets in real-time
o Are there any patterns related to specific time intervals during the day?
o Is there anything to observe before/after local working hours throughout the whole day?
o What does the difference between bid/ask prices tell us?
 To be able to detect risks and take necessary policy measures in a timely manner
o Developing liquidity and risk indicators based on real-time tick data
o Visualizing observations for decision makers in real-time
o Finally, discovering possible intraday seasonality
 Wouldn’t it be great to be able to correlate with news flow as well?
Project Details
Development of High Frequency Indicators Using Real-Time Tick
Data on Apache Superset and Druid
Roadmap: Test Cluster → Phase 1 → Phase 2 → Phase 3 → Prod migration → Next phases
Test Cluster
 Our first big data studies started on very humble servers
o 5 servers with 32 GB RAM for each
o 3 TB storage
 HDP 2.6.0.3 installed
o Not the latest version back then
 Technical difficulties
o Performance problems
o Apache Druid indexing
o Apache Superset maturity
Pipeline: TREP API → Apache Kafka → Apache NiFi → MongoDB → Apache Zeppelin & Power BI
Thomson Reuters Enterprise Platform (TREP)
 Thomson Reuters provides its subscribers with an enterprise platform through which
they can collect market data as it is generated
 Each financial instrument on TREP has a unique code called RIC
 The event queue implemented by the platform can be consumed with the provided
Java SDK
 We developed a Java application for consuming this event queue to collect tick-data
according to required RICs
Apache Kafka
 The data flow is very fast and quite dense
o We published the messages containing tick data collected by our Java application to a message
queue
o Twofold analysis: Batch and real-time
 We decided to use Apache Kafka residing on our test big data cluster
 We created a topic for each RIC on Apache Kafka and published data to related topics
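As a rough illustration of this per-RIC topic layout, the helpers below map a RIC to a Kafka topic name and build the JSON payload; the function and field names are ours, not from the original application, and the producer usage at the end (sketched with the kafka-python client) needs a running broker.

```python
import json

# Hypothetical helpers (names are ours, not from the original application)
# showing a per-RIC topic layout and a JSON payload shape for one tick.
def topic_for_ric(ric):
    """Map a Reuters Instrument Code (RIC) to a Kafka topic name."""
    # Kafka topic names cannot contain characters like '='; keep only safe ones.
    return "ticks." + "".join(c for c in ric if c.isalnum() or c in "._-")

def serialize_tick(ric, bid, ask, ts_ms):
    """Serialize one tick as the JSON message published to Kafka."""
    return json.dumps(
        {"ric": ric, "bid": bid, "ask": ask, "timestamp": ts_ms}
    ).encode("utf-8")

# Publishing needs a running broker; sketched with the kafka-python client:
# from kafka import KafkaProducer  # pip install kafka-python
# producer = KafkaProducer(bootstrap_servers="broker:9092")
# producer.send(topic_for_ric("EUR="),
#               serialize_tick("EUR=", 1.1312, 1.1316, 1553158800000))
# producer.flush()
```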
Apache NiFi
 In order to manage the flow, we decided to use Apache NiFi
 We used KafkaConsumer processor to consume messages from Kafka queues
 The NiFi flow was designed to persist the incoming data to MongoDB
Our NiFi Flow
MongoDB
 We had prepared data in JSON format with our Java application
 Since we have MongoDB installed on our enterprise systems, we decided to persist
this data to MongoDB
 Although MongoDB is not part of HDP, it seemed like a good choice for our
researchers to use this data in their analyses
Apache Zeppelin
 We provided our researchers with access to Apache Zeppelin and connection to
MongoDB via Python
 By doing so, we offered an alternative to the tools on local computers and provided a
unified interface for financial analysis
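A researcher's note along these lines might look like the following sketch; the collection and field names are our assumptions, and the pymongo calls are commented out because they require a live MongoDB instance.

```python
# Minimal sketch of a Zeppelin-style analysis; collection and field names
# are illustrative assumptions, not the Bank's actual schema.
def mid_prices(ticks):
    """Mid price (average of bid and ask) for each tick document."""
    return [(t["bid"] + t["ask"]) / 2.0 for t in ticks]

# from pymongo import MongoClient  # pip install pymongo
# client = MongoClient("mongodb://mongo-host:27017")
# ticks = list(
#     client["market"]["ticks"]
#     .find({"ric": "EUR="}, {"bid": 1, "ask": 1, "timestamp": 1})
#     .sort("timestamp", 1)
# )
# print(mid_prices(ticks)[:5])
```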
Business Intelligence on Client Side
 Our users had to download daily tick-data manually from their Thomson Reuters
Terminals and work on Excel
 Users were then able to access tick-data using Power BI
o We also provided our users with a news timeline along with the tick-data
We needed more!
 We had to visualize the data in real-time
o Analysis on persisted data using MongoDB, Power BI and Apache Zeppelin was not enough
Pipeline: TREP API → Apache Kafka → Apache Druid → Apache Superset
Apache Druid
 We needed a database which was able to:
o Answer ad-hoc queries (slice/dice) for a limited window efficiently
o Store historic data and seamlessly integrate current and historic data
o Provide native integration with possible real-time visualization frameworks (preferably from
Apache stack)
o Provide native integration with Apache Kafka
 Apache Druid addressed all the aforementioned requirements
 The indexing task was achieved using Tranquility
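For reference, a Tranquility-style server configuration for such a datasource might look roughly like the sketch below; all names and values are illustrative, and the exact structure varies across Tranquility and Druid versions.

```json
{
  "dataSources": {
    "ticks-realtime": {
      "spec": {
        "dataSchema": {
          "dataSource": "ticks-realtime",
          "parser": {
            "type": "string",
            "parseSpec": {
              "format": "json",
              "timestampSpec": { "column": "timestamp", "format": "millis" },
              "dimensionsSpec": { "dimensions": ["ric", "contributor"] }
            }
          },
          "metricsSpec": [
            { "type": "count", "name": "count" },
            { "type": "doubleMax", "name": "bid_max", "fieldName": "bid" },
            { "type": "doubleMin", "name": "ask_min", "fieldName": "ask" }
          ],
          "granularitySpec": {
            "type": "uniform",
            "segmentGranularity": "hour",
            "queryGranularity": "none"
          }
        }
      },
      "properties": { "task.partitions": "1", "windowPeriod": "PT10M" }
    }
  },
  "properties": { "zookeeper.connect": "zk:2181" }
}
```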
Apache Superset
 Apache Superset was the obvious alternative for real-time visualization since tick-data
was stored on Apache Druid
o Native integration with Apache Druid
o Freely available on Hortonworks service stack
 We prepared real-time dashboards including:
o Transaction Count
o Bid / Ask Prices
o Contributor Distribution
o Bid - Ask Spread
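The quantities behind these dashboards can be sketched with the standard library; the tick layout (bid/ask/contributor keys) is assumed from the pipeline description, not taken from the actual schema.

```python
from collections import Counter
from statistics import mean

# Standard-library sketch of the dashboard quantities listed above;
# the tick dictionary keys are our assumptions.
def dashboard_metrics(ticks):
    return {
        "transaction_count": len(ticks),
        "avg_bid": mean(t["bid"] for t in ticks),
        "avg_ask": mean(t["ask"] for t in ticks),
        "avg_spread": mean(t["ask"] - t["bid"] for t in ticks),
        "contributor_distribution": Counter(t["contributor"] for t in ticks),
    }
```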
We needed more, again!
 Reliability issues with Druid
 Performance issues
 Enterprise integration requirements
Architecture
• Data Sources: Internet Data, Enterprise Content, Social Media/Media, Micro Level Data, Commercial Data Vendors
• Flow: Data Sources → Ingestion → Big Data Platform → Data Science
• Governance spans all layers
Pipeline: TREP API → Apache Kafka → Apache Hive + Druid Integration → Apache Spark → Apache Superset
Apache Hive + Druid Integration
 After setting up our production environment (using HDP 3.0.1.0) and starting to
feed data, we realized that the data were scattered and we were missing the option to
co-utilize these different data sources
 We then realized that Apache Hive already provides a Kafka & Druid indexing
service in the form of simple table creation, along with a facility for querying Druid
from Hive
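A hedged sketch of such a table definition is shown below; the table, column, topic and server names are placeholders, and the exact `TBLPROPERTIES` depend on the HDP/Hive version.

```sql
-- Illustrative Hive 3 DDL: a Druid-backed table fed from a Kafka topic.
-- All identifiers and servers here are placeholders, not the actual setup.
CREATE EXTERNAL TABLE ticks_druid (
  `__time` TIMESTAMP,
  `ric` STRING,
  `bid` DOUBLE,
  `ask` DOUBLE
)
STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler'
TBLPROPERTIES (
  "kafka.bootstrap.servers" = "broker:9092",
  "kafka.topic" = "ticks",
  "druid.kafka.ingestion" = "START",
  "druid.segment.granularity" = "HOUR"
);

-- The same table can then be queried from Hive (and hence from Superset):
-- SELECT ric, MAX(bid) FROM ticks_druid GROUP BY ric;
```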
Apache Spark
 Due to additional calculation requirements of our users, we decided to utilize Apache
Spark
 With Apache Spark 2.4, we used Spark Streaming and Spark SQL contexts together in
the same application
 In our Spark application
o For every 5 seconds, a 30-second window is created
o On each window, outlier boundaries are calculated
o Outlier data points are detected
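The windowing logic above can be illustrated in pure Python; the production job uses Spark Streaming, and the 3-sigma boundary rule here is our assumption, since the slides only say that outlier boundaries are calculated.

```python
from statistics import mean, stdev

# Pure-Python illustration of the windowed outlier detection described above.
# The real job runs on Spark Streaming; the 3-sigma rule is an assumption.
def window_outliers(ticks, now_ms, window_ms=30_000, k=3.0):
    """Return (low, high, outliers) for ticks inside the last window_ms."""
    window = [t for t in ticks if now_ms - window_ms <= t["ts"] <= now_ms]
    prices = [t["mid"] for t in window]
    if len(prices) < 2:  # stdev needs at least two points
        return None, None, []
    lo = mean(prices) - k * stdev(prices)
    hi = mean(prices) + k * stdev(prices)
    return lo, hi, [t for t in window if not lo <= t["mid"] <= hi]

# A streaming driver would evaluate this every 5 seconds over a 30-second
# buffer, mirroring the slide-every-5s / 30-second-window setup above.
```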
Observing Intraday Indicators Using Real-Time Tick Data on Apache Superset and Druid
Current Architecture
Current Architecture & Progress So Far
Current Architecture & Progress So Far
• TREP Data Flow: the Java application consumes the TREP event queue and publishes to the Kafka topic (real-time)
• Windowed Spark Streaming: the Spark application consumes the real-time topic and publishes to the Kafka topic (windowed)
• Tick-Data Dashboard: Kafka topic (real-time) → Druid datasource (real-time) → Superset dashboard (tick data)
• Outlier Dashboard: Kafka topic (windowed) → Druid datasource (windowed) → Superset dashboard (outlier)
Work in Progress
Implementing…
 Moving average calculation (20-day window)
 Volatility Indicator
 Average True Range Indicator (moving average of the true range, the largest of):
o [ max(t) - min(t) ]
o | max(t) - close(t-1) |
o | min(t) - close(t-1) |
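A minimal sketch of the first and third indicators above; the 20-day moving average is from the slide, while the bar layout (max/min/close keys) and the 14-bar ATR window default are our assumptions.

```python
from statistics import mean

# Hedged sketch of the indicators listed above; bar keys and the ATR
# window default are illustrative assumptions.
def moving_average(values, window=20):
    """Simple moving average over a fixed-size trailing window."""
    return [mean(values[i - window + 1:i + 1])
            for i in range(window - 1, len(values))]

def true_range(high, low, prev_close):
    """Largest of the three ranges from the bullet list above."""
    return max(high - low, abs(high - prev_close), abs(low - prev_close))

def average_true_range(bars, window=14):
    """Moving average of the true range over consecutive bars."""
    trs = [true_range(bar["max"], bar["min"], prev["close"])
           for prev, bar in zip(bars, bars[1:])]
    return moving_average(trs, window)
```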
Future Plans
To-Do List
 Matching data subscription
 Bringing historical tick data into real-time analysis
 Possible use of machine learning for intraday indicators
Thank you!
Q & A
Editor's Notes

• #8: Founded in September 2017 with experienced software engineers. Members have academic backgrounds in finance and big data. PoC work was done to demonstrate the capabilities of a big data platform, and payment system data was analyzed. The first task was to set up a big data platform.
Emre Tokel - Big Data Team Leader: Emre has 15+ years of experience in software development. He has taken roles as developer and project manager in various projects. For 2 years now, he has been involved in big data and data intelligence studies within the Bank. Emre has been leading the big data team since last year and is responsible for the architecture of the Big Data Platform, which is based on Hortonworks technologies. He has an MBA degree and is pursuing his Ph.D. in finance. Besides IT, he is a divemaster and teaches SCUBA.
Kerem Başol - Big Data Engineer: Kerem has 10+ years of experience in software development, including mobile, back-end and front-end. For the past two years he has focused on big data technologies and is currently working as a big data engineer. Kerem is responsible for data ingestion and building custom solution stacks for business needs using the Big Data Platform, which is based on Hortonworks technologies. He holds an MS degree in CIS from UPenn.
M. Yağmur Şahin - Big Data Engineer: Yağmur has been developing software for 10 years. He completed his master's degree in 2016 on distributed stream processing, where he was first introduced to big data technologies. For the last 2 years he has been designing and implementing big data solutions for the Bank using Hortonworks Data Platform. Yağmur is also pursuing his Ph.D. at the Medical Informatics department of METU. He loves running and hopes to complete a marathon in the coming years.
• #23: Power BI has a MongoDB connector
• #33: All dashboards included min/max/average values
• #41: There were some tasks that could not be handled declaratively