Kian Saha
Data Engineering Technologies
And how to become one…
Introduction
Who is a Data Engineer?
A data engineer is an IT worker whose primary job is to prepare data for analytical or
operational uses. These software engineers are typically responsible for building data
pipelines to bring together information from different source systems.
Data engineering is one of the most popular and in-demand jobs in the big data
domain across the world.
But what do they do?
• Data engineers build, monitor, and refine complex data models to help
organizations improve their business outcomes by harnessing the power of data.
• In other words, they work in a variety of settings to build systems that
collect, manage, and convert raw data into usable information for data
scientists and business analysts to interpret.
And what are they trying to achieve?
• Their ultimate goal is to make data accessible so that organizations can use it to evaluate and optimize their performance.
But now let's see what a day in the life of a data engineer is like:
OK, but what skills would you need to be a Data Engineer?
• Coding: Proficiency in programming languages is essential to this role, so consider
taking courses to learn and practice your skills. Commonly used languages and
technologies include SQL, NoSQL databases, Python, Java, R, and Scala.
• Relational and non-relational databases: Databases rank among the most
common solutions for data storage. You should be familiar with both
relational and non-relational databases, and how they work.
• ETL (extract, transform, and load) systems: ETL is the process by which
you’ll move data from databases and other sources into a single repository,
like a data warehouse. Common ETL tools include Xplenty, Stitch, Alooma,
and Talend.
• Data security: While some companies might have dedicated data security
teams, many data engineers are still tasked with securely managing and
storing data to protect it from loss or theft.
• Data storage: Not all types of data should be stored the same way, especially
when it comes to big data. As you design data solutions for a company, you’ll
want to know when to use a data lake versus a data warehouse, for example.
• Automation and scripting: Automation is a necessary part of working with big
data simply because organizations are able to collect so much information. You
should be able to write scripts to automate repetitive tasks.
• Machine learning: While machine learning is more the concern of data
scientists, it can be helpful to have a grasp of the basic concepts to better
understand the needs of data scientists on your team.
• Big data tools: Data engineers don’t just work with regular data. They’re often
tasked with managing big data. Tools and technologies are evolving and vary by
company, but some popular ones include Hadoop, MongoDB, Kafka and Spark.
• Cloud computing: You’ll need to understand cloud storage and cloud computing
as companies increasingly trade physical servers for cloud services. Beginners
may consider a course in Amazon Web Services (AWS) or Google Cloud.
What is AWS?
Amazon Web Services (AWS) is a cloud computing platform offered by Amazon.com
that provides a suite of services for building and running applications and websites.
These services include computing, storage, database, analytics, machine learning,
security, and many other functionalities, all of which can be accessed over the internet.
AWS was launched in 2002 and has since become one of the leading cloud computing
platforms in the world. It provides a wide range of services to businesses,
organizations, and individuals, enabling them to build and run their applications and
websites on top of the AWS infrastructure.
AWS services are available on a pay-as-you-go basis, allowing customers to only pay
for the resources they use. This makes it a flexible and cost-effective solution for
businesses, as they can scale their resources up or down as needed without having to
make significant upfront investments in hardware and infrastructure.
Why has AWS become so popular nowadays?
There are several reasons why AWS has become popular in recent years:
1. Scalability: AWS allows businesses to scale their resources up or down as needed, which makes
it a flexible solution for companies that experience fluctuating workloads.
2. Cost-effectiveness: AWS charges customers on a pay-as-you-go basis, so they only pay for the
resources they use. This makes it a cost-effective solution for businesses, as they don't have to
make significant upfront investments in hardware and infrastructure.
3. Wide range of services: AWS offers a wide range of services, including computing, storage,
database, analytics, machine learning, security, and many others. This allows businesses to
build and run a variety of different applications and websites on top of the AWS platform.
4. Reliability: AWS has a strong track record of uptime and reliability, which is important for
businesses that rely on the platform to run their applications and websites.
5. Global presence: AWS has a global infrastructure with regions and availability zones located
around the world. This allows businesses to run their applications and websites in the region that
is closest to their customers, which can improve performance and reduce latency.
Is AWS a marketplace?
Yes, AWS is a marketplace that allows businesses and individuals to buy and sell a
wide range of cloud computing services. AWS provides a platform for vendors to offer
their services, and customers can browse and purchase these services through the
AWS website.
AWS offers a variety of services, including computing, storage, database, analytics,
machine learning, security, and many others. Customers can use these services to
build and run their applications and websites on top of the AWS infrastructure.
AWS also offers a number of tools and resources for vendors to use in developing and
offering their services on the AWS marketplace. This includes the AWS Partner
Network, which is a global community of consulting and technology partners that can
help businesses build and sell their services on AWS.
We are going to talk about Spark in a few minutes, but let's talk about AWS Glue first.
• What is AWS Glue?
• AWS Glue is a serverless data integration service that makes it easy for
analytics users to discover, prepare, move, and integrate data from multiple
sources. You can use it for analytics, machine learning, and application
development. It also includes additional productivity and data ops tooling
for authoring, running jobs, and implementing business workflows.
• With AWS Glue, you can discover and connect to more than 70 diverse data
sources and manage your data in a centralized data catalog. You can
visually create, run and monitor extract, transform, and load (ETL) pipelines
to load data into your data lakes. Also, you can immediately search and
query cataloged data using Amazon Athena, Amazon EMR, and Amazon
Redshift Spectrum.
But how does it work?
Here's how AWS Glue works:
1. Data extraction: AWS Glue can extract data from a variety of sources,
including Amazon S3, Amazon RDS, Amazon Redshift, and other data stores.
2. Data transformation: AWS Glue can transform the extracted data using a
variety of transformations, such as filtering, sorting, and aggregating data.
3. Data loading: AWS Glue can load the transformed data into a variety of data
stores, including Amazon S3, Amazon RDS, Amazon Redshift, and other data
stores.
AWS Glue also includes a number of features to help users build and maintain
their ETL jobs, including a visual development environment, a library of pre-built
connectors and transformations, and the ability to schedule ETL jobs.
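To give a feel for how a Glue ETL job is defined programmatically, here is the general shape of the parameters one might pass to the Glue `CreateJob` API through boto3 (`glue_client.create_job(**job_params)`). The job name, role ARN, and S3 script path are hypothetical placeholders, not values from this presentation:

```python
# Sketch of Glue CreateJob parameters; all names below are placeholders.
job_params = {
    "Name": "orders-etl",  # hypothetical job name
    "Role": "arn:aws:iam::123456789012:role/GlueETLRole",  # placeholder role ARN
    "Command": {
        "Name": "glueetl",  # the Spark-based Glue ETL job type
        "ScriptLocation": "s3://example-bucket/scripts/orders_etl.py",
        "PythonVersion": "3",
    },
    "GlueVersion": "4.0",       # example Glue runtime version
    "WorkerType": "G.1X",       # worker size
    "NumberOfWorkers": 2,       # how many workers run the job
}
```

The ETL script referenced by `ScriptLocation` is where the extract/transform/load logic from the three steps above actually lives; Glue's visual editor can generate it for you.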
Data professionals talk about how they define data engineering and how it
differs from data analytics and data science.
Let’s talk about the technologies now:
• These are the Top 20 Most
Commonly Used Data
Engineering Tools in the Year
2022
• Of course we cannot talk about
all of them at the moment, but
we will try our best to explain
some of them.
1. Amazon Redshift
Amazon Redshift is a fully-managed data warehouse service offered by Amazon
Web Services (AWS). It is designed to handle large amounts of data and allows
users to analyze data using SQL and business intelligence (BI) tools.
Amazon Redshift is based on a columnar data storage model, which allows it to
efficiently store and retrieve data for fast querying and analysis. It also includes a
variety of features to optimize performance, such as data compression and the
ability to parallelize queries across multiple nodes.
Customers can use Amazon Redshift to store and analyze data for a wide range
of applications, including business intelligence, data warehousing, analytics, and
more. It is a cost-effective solution, as customers only pay for the resources they
use and can scale their resources up or down as needed.
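Since Redshift is queried with SQL, a typical analytical query looks like the one below, shown here as it might be submitted through the Redshift Data API (boto3's `redshift-data` client, `execute_statement`). The cluster, database, user, and table names are all hypothetical placeholders:

```python
# Sketch of a Redshift Data API ExecuteStatement call; names are placeholders.
query_params = {
    "ClusterIdentifier": "example-cluster",  # placeholder cluster name
    "Database": "analytics",                 # placeholder database
    "DbUser": "report_user",                 # placeholder database user
    "Sql": """
        SELECT region, SUM(revenue) AS total_revenue
        FROM sales
        GROUP BY region
        ORDER BY total_revenue DESC
        LIMIT 10
    """,
}
```

An aggregate over one or two columns like this is exactly where columnar storage pays off: Redshift only has to scan the `region` and `revenue` columns, not whole rows.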
Using Amazon Redshift for Fast Analytical Reports
Result of Using Amazon Redshift
3. Tableau
• Tableau is an excellent data visualization and business intelligence tool used for reporting and
analyzing vast volumes of data. Tableau is an American company founded in 2003; in June 2019,
Salesforce acquired it. The tool helps users create different charts, graphs, maps, dashboards,
and stories for visualizing and analyzing data, to help in making business decisions.
• It has features such as:
• Tableau supports powerful data discovery and exploration that enables users to answer
important questions in seconds
• Users without relevant experience can start immediately with creating visualizations using
Tableau
• It can connect to several data sources that other BI tools do not support. Tableau enables
users to create reports by joining and blending different datasets
• Tableau Server supports a centralized location to manage all published data sources within an
organization
Usage of Tableau in Walmart
And how to connect your own data warehouse to Tableau
• This is how to get data from
the Walmart marketplace (and from
other sources) into Tableau: by
loading it into a data
warehouse that is connected to
Tableau.
• Load your Walmart Marketplace
data into your central data
warehouse to analyze it with
Tableau.
Tableau Usage in CVS Health Corporation of USA
Farbod Ahmadian
Spark on AWS
Sharing experience of
Using Spark on AWS infrastructure
What is Spark?
Spark Features
Apache Spark VS Hadoop MapReduce
Two Main Abstractions of Apache Spark
• Resilient Distributed Datasets
(RDDs): the fundamental data
structure and primary data
abstraction in Apache Spark and
Spark Core. RDDs are fault-tolerant,
immutable distributed collections of
objects, which means once you
create an RDD you cannot change
it. Each dataset in an RDD is divided
into logical partitions, which can be
computed on different nodes of the
cluster.
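The RDD idea (an immutable collection split into logical partitions, where transformations return a new collection instead of mutating the old one) can be mimicked in plain Python. This sketch is only an analogy to make the concept concrete, not the PySpark API:

```python
# An RDD-like collection: data split into partitions, and every
# transformation returns a NEW partitioned collection (immutability),
# which is what lets Spark replay lineage for fault tolerance.
def partition(data, n):
    """Split data into n roughly equal logical partitions."""
    size = (len(data) + n - 1) // n
    return [data[i:i + size] for i in range(0, len(data), size)]

def map_partitions(parts, fn):
    """Apply fn element-wise within each partition
    (in Spark, each partition could run on a different node)."""
    return [[fn(x) for x in part] for part in parts]

parts = partition([1, 2, 3, 4, 5, 6], n=3)        # like sc.parallelize(data, 3)
squared = map_partitions(parts, lambda x: x * x)  # like rdd.map(lambda x: x * x)
# `parts` is untouched; `squared` is a new collection, mirroring RDD immutability.
```

In real PySpark the same two lines would be `sc.parallelize(data, 3)` followed by `.map(lambda x: x * x)`, with Spark deciding where each partition runs.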
Directed Acyclic Graph (DAG)
Data Engineering
Spark Architecture
More Architecture!
Infra? AWS comes to the game
Every application needs an Infrastructure to run on
MapReduce Algorithm
Break the task into smaller sub-tasks, map each sub-task to a thread, and reduce in the master node.
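The break/map/reduce shape described above can be sketched as a word count in plain Python, with the worker threads replaced by a simple loop for clarity:

```python
from collections import Counter
from functools import reduce

def map_phase(chunk):
    """Map: turn one input chunk into partial (word -> count) results."""
    return Counter(chunk.split())

def reduce_phase(a, b):
    """Reduce: merge two partial counts, as the master node would."""
    return a + b

# Break the task into smaller sub-tasks (one chunk per worker) ...
chunks = ["big data big pipelines", "data pipelines data"]
partials = [map_phase(c) for c in chunks]  # map each sub-task
counts = reduce(reduce_phase, partials)    # reduce in the master node
```

In a real MapReduce framework the mappers run in parallel across machines and the framework shuffles the partial results to the reducers; the logic per phase stays this simple.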
Back to AWS: Elastic MapReduce (EMR)
EMR Cost
EMR Sample Console
Sample Code For Creating Task On EMR
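The sample code shown on this slide is not reproduced in the transcript; as a stand-in, this is the general shape of the parameters one might pass to boto3's `emr_client.run_job_flow(**cluster_params)` to launch a cluster with a Spark step. The cluster name, instance types, script path, and role names are hypothetical placeholders:

```python
# Sketch of EMR RunJobFlow parameters; all names below are placeholders.
cluster_params = {
    "Name": "spark-demo-cluster",   # hypothetical cluster name
    "ReleaseLabel": "emr-6.15.0",   # example EMR release
    "Applications": [{"Name": "Spark"}],
    "Instances": {
        "InstanceGroups": [
            {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"InstanceRole": "CORE", "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        "KeepJobFlowAliveWhenNoSteps": False,  # terminate when steps finish (saves cost)
    },
    "Steps": [{
        "Name": "run-spark-job",
        "ActionOnFailure": "TERMINATE_CLUSTER",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",  # EMR's wrapper for shell-style steps
            "Args": ["spark-submit", "s3://example-bucket/jobs/etl_job.py"],
        },
    }],
    "JobFlowRole": "EMR_EC2_DefaultRole",  # default EMR instance role
    "ServiceRole": "EMR_DefaultRole",      # default EMR service role
}
```

Letting the cluster terminate after its steps finish is what ties this back to the EMR cost discussion: you pay only while the job runs.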
Thanks For listening!
If you have any questions, now is the time ;-)
Presented by Kian Saha