Microsoft.com/Learn
Join the chat at https://aka.ms/LearnLiveTV
Read and write data in Azure Databricks
Speaker Name
Title
Learning objectives
 Use Azure Databricks to read multiple file types, both with and without a schema.
 Combine inputs from files and data stores, such as Azure SQL Database.
 Transform and store that data for advanced analytics.
Unit Prerequisites
Microsoft Azure Account: You will need a valid and active Azure account for the Azure labs.
• If you are a Visual Studio Active Subscriber, you are entitled to Azure credits per month. You can refer to this link to find out more, including how to activate and start using your monthly Azure credit.
• If you are not a Visual Studio Subscriber, you can sign up for the FREE Visual Studio Dev Essentials program to create an Azure free account.
Create the required resources
To complete this lab, you will need to deploy an Azure Databricks workspace in your Azure subscription.
Agenda
 Introduction
 Read data in CSV format
 Read data in JSON format
 Read data in Parquet format
 Read data stored in tables and views
Agenda continued
 Write data
 Exercises: Read and write data
 Knowledge check
 Summary
Introduction
Suppose you're working for a data analytics startup that's now expanding along with its increasing customer base.
Creating your Databricks workspace
Deploy an Azure Databricks workspace
• Click the following button to open the Azure Resource Manager (ARM) template in the Azure portal: Deploy Databricks from the ARM Template
• Provide the required values to create your Azure Databricks workspace:
• Subscription: Choose the Azure Subscription in which to deploy the workspace.
• Resource Group: Leave at Create new and provide a name for the new resource group.
• Location: Select a location near you for deployment. For the list of regions supported by Azure Databricks, see Azure services available by region.
• Workspace Name: Provide a name for your workspace.
• Pricing Tier: Ensure Premium is selected.
• Accept the terms and conditions.
• Select Purchase.
• The workspace creation takes a few minutes. During workspace creation, the portal displays the Submitting deployment for Azure Databricks tile on the right side. You may need to scroll right on your dashboard to see the tile. There is also a progress bar displayed near the top of the screen. You can watch either area for progress.
Create a cluster
When your Azure Databricks workspace creation is complete, select the link to go to the resource.
Clone the Databricks archive
If you do not currently have your Azure Databricks workspace open: in the Azure portal, navigate to your deployed Azure Databricks workspace and select Launch Workspace.
• Select Import.
• Select the 03-Reading-and-writing-data-in-Azure-Databricks folder that appears.
Read data in CSV format
In this unit, you need to complete the exercises within a Databricks notebook.
Complete the following notebook
Open the 1.Reading Data - CSV notebook.
• Start working with the API documentation
• Introduce the class SparkSession and other entry points
• Introduce the class DataFrameReader
• Read data from:
• CSV without a schema
• CSV with a schema (see the sketch below)
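For reference, here is a minimal sketch of the two CSV read patterns, not taken from the notebook itself: the file path and column names are placeholders, and `spark` refers to the SparkSession that Databricks pre-creates in every notebook.

```python
# Minimal sketch of both CSV read patterns. The path and columns are
# hypothetical; in Databricks, `spark` (a SparkSession) already exists.
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

csv_path = "/mnt/training/example.csv"  # placeholder path

# 1) Without a schema: Spark infers column types, at the cost of an
#    extra pass over the data.
df_inferred = (spark.read
               .option("header", "true")
               .option("inferSchema", "true")
               .csv(csv_path))

# 2) With a user-defined schema: no inference pass, and the types are
#    exactly what you declared.
schema = StructType([
    StructField("id", IntegerType(), False),
    StructField("name", StringType(), True),
])
df_typed = (spark.read
            .option("header", "true")
            .schema(schema)
            .csv(csv_path))

df_typed.printSchema()
```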
Read data in JSON format
In your Azure Databricks workspace, open the 03-Reading-and-writing-data-in-Azure-Databricks folder that you imported within your user folder.
• Read data from:
• JSON without a schema
• JSON with a schema (see the sketch below)
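As with CSV, a minimal sketch of the two JSON read patterns; the path and field names are placeholders.

```python
# Minimal sketch of the JSON read patterns; path and fields are
# placeholders, and `spark` is the notebook's SparkSession.
from pyspark.sql.types import StructType, StructField, StringType, LongType

json_path = "/mnt/training/example.json"  # placeholder path

# Without a schema: Spark samples the file to infer field names and types.
df_inferred = spark.read.json(json_path)

# With a schema: skips the inference pass and pins the types explicitly.
schema = StructType([
    StructField("userId", LongType(), True),
    StructField("userName", StringType(), True),
])
df_typed = spark.read.schema(schema).json(json_path)
df_typed.printSchema()
```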
Read data in Parquet format
In your Azure Databricks workspace, open the 03-Reading-and-writing-data-in-Azure-Databricks folder that you imported within your user folder.
• Introduce the Parquet file format
• Read data from:
• Parquet files without a schema
• Parquet files with a schema (see the sketch below)
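A short sketch with a placeholder path. Because Parquet embeds its schema in the file, Spark can read it without an inference pass or an explicit schema, though you can still supply one.

```python
# Minimal sketch; the path and column are placeholders, and `spark`
# is the notebook's SparkSession.
from pyspark.sql.types import StructType, StructField, StringType

parquet_path = "/mnt/training/example.parquet"

# Parquet is self-describing: the schema is stored with the data, so
# no inference pass is needed.
df = spark.read.parquet(parquet_path)
df.printSchema()

# Supplying a schema explicitly still works, e.g. to assert the types
# you expect (hypothetical column shown).
schema = StructType([StructField("name", StringType(), True)])
df_typed = spark.read.schema(schema).parquet(parquet_path)
```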
Read data stored in tables and views
In your Azure Databricks workspace, open the 03-Reading-and-writing-data-in-Azure-Databricks folder that you imported within your user folder.
• Demonstrate how to pre-register data sources in Azure Databricks
• Introduce temporary views over files
• Read data from tables/views (see the sketch below)
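A minimal sketch of the temporary-view pattern, with placeholder paths and names; `display` is the Databricks notebook helper for rendering results.

```python
# Minimal sketch: register a DataFrame as a temporary view, then read
# it back through the SQL interface. Path and view name are placeholders.
df = spark.read.parquet("/mnt/training/example.parquet")
df.createOrReplaceTempView("example_view")

# Query the view; the result is an ordinary DataFrame.
result = spark.sql("SELECT * FROM example_view LIMIT 10")
display(result)  # display() is Databricks' notebook rendering helper
```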
Write data
In your Azure Databricks workspace, open the 03-Reading-and-writing-data-in-Azure-Databricks folder that you imported within your user folder.
• Write data to a Parquet file
• Read the Parquet file back and display the results (see the sketch below)
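A sketch of that round trip, assuming `df` is a DataFrame created in an earlier cell and using a placeholder output path:

```python
# Minimal sketch of the write-then-read round trip. Assumes `df` is a
# DataFrame from an earlier cell; the output path is a placeholder.
out_path = "/tmp/example-output.parquet"

(df.write
   .mode("overwrite")                # replace output from earlier runs
   .option("compression", "snappy")  # one common compression choice
   .parquet(out_path))

# Read the Parquet data back and display the results.
display(spark.read.parquet(out_path))
```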
Exercises: Read and write data
In your Azure Databricks workspace, open the 03-Reading-and-writing-data-in-Azure-Databricks folder that you imported within your user folder.
Knowledge check
Question 1
How do you list files in DBFS within a notebook?
A. ls /my-file-path
B. %fs dir /my-file-path
C. %fs ls /my-file-path
Question 1 (answer)
How do you list files in DBFS within a notebook?
A. ls /my-file-path
B. %fs dir /my-file-path
C. %fs ls /my-file-path (correct: the %fs magic gives the cell access to the Databricks file system before the ls command runs)
Question 2
How do you infer the data types and column names when you read a JSON file?
A. spark.read.option("inferSchema", "true").json(jsonFile)
B. spark.read.inferSchema("true").json(jsonFile)
C. spark.read.option("inferData", "true").json(jsonFile)
Question 2 (answer)
How do you infer the data types and column names when you read a JSON file?
A. spark.read.option("inferSchema", "true").json(jsonFile) (correct: this is the way to have Spark infer the file's schema)
B. spark.read.inferSchema("true").json(jsonFile)
C. spark.read.option("inferData", "true").json(jsonFile)
Summary
In this module, you learned the basics about reading and writing data in Azure Databricks.
• Read data from CSV files into a Spark DataFrame
• Provide a schema when reading data into a Spark DataFrame
• Read data from JSON files into a Spark DataFrame
• Read data from Parquet files into a Spark DataFrame
• Create tables and views
• Write data from a Spark DataFrame
Clean up
If you plan on completing other Azure Databricks modules, don't delete your Azure Databricks instance yet.
Delete the Azure Databricks instance
• Navigate to the Azure portal.
• Navigate to the resource group that contains your Azure Databricks instance.
• Select Delete resource group.
• Type the name of the resource group in the confirmation text box.
• Select Delete.
Next Steps
Practice your knowledge by trying these Learn modules. There is a slightly more advanced and involved learning path that covers Data Engineering with Azure Databricks.
Please tell us how you liked this workshop by filling out this survey: https://aka.ms/workshopomatic-feedback
© Copyright Microsoft Corporation. All rights reserved.
Editor's Notes
  • #3: Link to published module on Learn: https://docs.microsoft.com/en-us/learn/modules/learn-pr/wwl-data-ai/read-write-data-azure-databricks/
  • #5: Microsoft Azure Account: You will need a valid and active Azure account for the Azure labs. If you do not have one, you can sign up for a free trial. If you are a Visual Studio Active Subscriber, you are entitled to Azure credits per month. You can refer to this link to find out more, including how to activate and start using your monthly Azure credit. If you are not a Visual Studio Subscriber, you can sign up for the FREE Visual Studio Dev Essentials program to create an Azure free account.
  • #6: To complete this lab, you will need to deploy an Azure Databricks workspace in your Azure subscription.
  • #10: Suppose you're working for a data analytics startup that's now expanding along with its increasing customer base. You receive customer data from multiple sources in different raw formats. To efficiently handle huge amounts of customer data, your company has decided to invest in Azure Databricks. Your team is responsible for analyzing how Databricks supports day-to-day data-handling functions, such as reads, writes, and queries. Your team performs these tasks to prepare the data for advanced analytics and machine learning operations.
  • #12: Click the following button to open the Azure Resource Manager (ARM) template in the Azure portal: Deploy Databricks from the ARM Template. Provide the required values to create your Azure Databricks workspace:
    • Subscription: Choose the Azure Subscription in which to deploy the workspace.
    • Resource Group: Leave at Create new and provide a name for the new resource group.
    • Location: Select a location near you for deployment. For the list of regions supported by Azure Databricks, see Azure services available by region.
    • Workspace Name: Provide a name for your workspace.
    • Pricing Tier: Ensure Premium is selected.
    • Accept the terms and conditions, then select Purchase.
    The workspace creation takes a few minutes. During workspace creation, the portal displays the Submitting deployment for Azure Databricks tile on the right side. You may need to scroll right on your dashboard to see the tile. There is also a progress bar displayed near the top of the screen. You can watch either area for progress.
  • #13: When your Azure Databricks workspace creation is complete, select the link to go to the resource, then select Launch Workspace to open your Databricks workspace in a new tab.
    • In the left-hand menu of your Databricks workspace, select Clusters.
    • Select Create Cluster to add a new cluster.
    • Enter a name for your cluster. Use your name or initials to easily differentiate your cluster from your coworkers'.
    • Select the Cluster Mode: Single Node.
    • Select the Databricks Runtime Version: Runtime: 7.3 LTS (Scala 2.12, Spark 3.0.1).
    • Under Autopilot Options, leave the box checked and enter 45 in the text box.
    • Select the Node Type: Standard_DS3_v2.
    • Select Create Cluster.
  • #14: If you do not currently have your Azure Databricks workspace open: in the Azure portal, navigate to your deployed Azure Databricks workspace and select Launch Workspace.
    • In the left pane, select Workspace > Users, and select your username (the entry with the house icon).
    • In the pane that appears, select the arrow next to your name, and select Import.
    • In the Import Notebooks dialog box, select URL and paste in the following URL: https://github.com/solliancenet/microsoft-learning-paths-databricks-notebooks/blob/master/data-engineering/DBC/03-Reading-and-writing-data-in-Azure-Databricks.dbc?raw=true
    • Select Import.
    • Select the 03-Reading-and-writing-data-in-Azure-Databricks folder that appears.
  • #16: In this unit, you need to complete the exercises within a Databricks Notebook. To begin, you need to have access to an Azure Databricks workspace. If you do not have a workspace available, follow the instructions below. Otherwise, you can skip to the bottom of the page to Clone the Databricks archive.
  • #17: Open the 1.Reading Data - CSV notebook. Make sure you attach your cluster to the notebook before following the instructions and running the cells within. Within the notebook, you will: start working with the API documentation; introduce the class SparkSession and other entry points; introduce the class DataFrameReader; and read data from CSV, both without and with a schema. After you've completed the notebook, return to this screen, and continue to the next step.
  • #19: In your Azure Databricks workspace, open the 03-Reading-and-writing-data-in-Azure-Databricks folder that you imported within your user folder. Open the 2.Reading Data - JSON notebook. Make sure you attach your cluster to the notebook before following the instructions and running the cells within. Within the notebook, you will read data from JSON, both without and with a schema. After you've completed the notebook, return to this screen, and continue to the next step.
  • #21: In your Azure Databricks workspace, open the 03-Reading-and-writing-data-in-Azure-Databricks folder that you imported within your user folder. Open the 3.Reading Data - Parquet notebook. Make sure you attach your cluster to the notebook before following the instructions and running the cells within. Within the notebook, you will be introduced to the Parquet file format and read data from Parquet files, both without and with a schema. After you've completed the notebook, return to this screen, and continue to the next step.
  • #23: In your Azure Databricks workspace, open the 03-Reading-and-writing-data-in-Azure-Databricks folder that you imported within your user folder. Open the 4.Reading Data - Tables and Views notebook. Make sure you attach your cluster to the notebook before following the instructions and running the cells within. Within the notebook, you will see how to pre-register data sources in Azure Databricks, be introduced to temporary views over files, and read data from tables/views. After you've completed the notebook, return to this screen, and continue to the next step.
  • #25: In your Azure Databricks workspace, open the 03-Reading-and-writing-data-in-Azure-Databricks folder that you imported within your user folder. Open the 5.Writing Data notebook. Make sure you attach your cluster to the notebook before following the instructions and running the cells within. Within the notebook, you will write data to a Parquet file, then read the Parquet file back and display the results. After you've completed the notebook, return to this screen, and continue to the next step.
  • #26: Link to published module on Learn: https://docs.microsoft.com/en-us/learn/modules/learn-pr/wwl-data-ai/read-write-data-azure-databricks/7-exercises
  • #27: https://docs.microsoft.com/en-us/learn/modules/learn-pr/wwl-data-ai/read-write-data-azure-databricks/7-exercises
  • #29: Explanation: Correct. You added the file system magic to the cell before executing the ls command.
  • #30: Explanation: Correct. You added the file system magic to the cell before executing the ls command.
  • #31: Explanation: Correct. This approach is the correct way to infer the file's schema.
  • #32: Explanation: Correct. This approach is the correct way to infer the file's schema.
  • #34: In this module, you learned the basics about reading and writing data in Azure Databricks. You now know how to read CSV, JSON, and Parquet file formats, and how to write Parquet files to the Databricks file system (DBFS) with compression options. Though you only wrote the files in Parquet format, you can use the same DataFrame.write method to output to other formats. Finally, you put your knowledge to the test by completing an exercise that required you to read a random file that you had not yet seen. Now that you have concluded this module, you should know how to: read data from CSV files into a Spark DataFrame; provide a schema when reading data into a Spark DataFrame; read data from JSON files into a Spark DataFrame; read data from Parquet files into a Spark DataFrame; create tables and views; and write data from a Spark DataFrame.
  • #35: If you plan on completing other Azure Databricks modules, don't delete your Azure Databricks instance yet. You can use the same environment for the other modules.
  • #36: Navigate to the Azure portal. Navigate to the resource group that contains your Azure Databricks instance. Select Delete resource group. Type the name of the resource group in the confirmation text box. Select Delete.