Data Science Seminar: When, Why and How? The Importance of Business Intelligence
9.11.2022, Tartu, Estonia
DATA LAKE OR DATA WAREHOUSE?
DATA CLEANING OR DATA WRANGLING?
HOW TO ENSURE THE QUALITY OF
YOUR DATA?
Anastasija Nikiforova
Assistant Professor of Information Systems, Faculty of Science and Technology,
Institute of Computer Science, Chair of Software Engineering, University of Tartu
European Open Science Cloud (EOSC) Task Force “FAIR metrics and data quality”
BACKGROUND
Today, billions of data sources continuously generate, collect, process, and exchange data ⇒ with the rapid
increase in the number of devices and IS, the amount and variety of data are increasing.
There is a need to integrate ever-increasing volumes of data, regardless of the source, format or amount, where
the data quality, flexibility and scalability in connecting and processing different data sources are crucial.
⇒ an effective mechanism should be employed to ensure faster value creation from these data
DATA QUALITY
Why data quality? Again? and still?
DATA QUALITY - WHAT, WHY, HOW, 10 BEST PRACTICES & MORE - Enterprise Master Data Management • Profisee
Among other “nuances”, data quality is use-case dependent and dynamic (as
well as relative) in nature!
*** “Absolute data quality”, i.e. a level of data quality at which the data would satisfy all possible use cases, is not achievable, but it is the objective to be pursued. A minimal sketch illustrating this use-case dependence follows below.
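To make the use-case dependence concrete, here is a minimal Python sketch; the column names, dataset and rules are hypothetical, not taken from the talk. The same dataset yields a different quality verdict depending on what it is used for.

# Minimal sketch: the same dataset can be "good enough" for one use case
# and unacceptable for another. Column names and rules are hypothetical.
import pandas as pd

people = pd.DataFrame({
    "id": [1, 2, 3],
    "email": ["a@example.org", None, "c@example.org"],
    "birth_date": ["1990-01-01", "1985-07-12", None],
})

# Use case 1: newsletter mailing - only the e-mail address matters.
newsletter_ok = people["email"].notna().mean()        # 2/3 of records usable

# Use case 2: age-based cohort study - birth_date is mandatory.
cohort_ok = people["birth_date"].notna().mean()       # 2/3 usable, but a different 2/3

print(f"fit for mailing: {newsletter_ok:.0%}, fit for cohort study: {cohort_ok:.0%}")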
DATA REPOSITORY
DATA WAREHOUSE
DATA LAKE?
Maybe even something more?
Image source: https://www.grazitti.com/blog/data-lake-vs-data-warehouse-which-one-should-you-go-for/, https://www.qubole.com/data-lakes-vs-data-warehouses-the-co-existence-argument/
schema on read
schema on write
“single source of truth”
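The labels above refer to how structure is applied to the data. Below is a minimal, hedged sketch of the contrast, with made-up table and field names: schema on write enforces structure before loading (data warehouse style), while schema on read stores raw records as-is and imposes structure only when the data are queried (data lake style).

# Hedged sketch contrasting the two loading philosophies; names are hypothetical.
import json
import sqlite3
import pandas as pd

# Schema on WRITE: the structure is enforced before loading.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (order_id INTEGER, amount REAL, country TEXT)")
con.execute("INSERT INTO sales VALUES (1, 9.99, 'EE')")   # records that violate the schema are rejected here

# Schema on READ: raw records are stored as-is; structure is imposed at query time.
raw_records = [
    '{"order_id": 1, "amount": 9.99, "country": "EE", "note": "gift"}',
    '{"order_id": 2, "amount": "12,50"}',                  # a messy record is still accepted
]
df = pd.DataFrame([json.loads(r) for r in raw_records])    # the schema emerges when the data are read
df["amount"] = pd.to_numeric(df["amount"].astype(str).str.replace(",", "."), errors="coerce")
print(df)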
Data warehouses (DW) were considered to be «a silver bullet» for Business Intelligence…
Implementing a Data Lake or Data Warehouse Architecture for Business Intelligence? | by Lan Chu | Towards Data Science
Data Lake or Data Warehouse? Data Cleaning or Data Wrangling? How to Ensure the Quality of Your Data?
Image source: https://towardsdatascience.com/augment-your-data-lake-analytics-with-snowflake-b417f1186615
So how to get its benefits?
Image source: https://twitter.com/rokar9/status/1452339921629302784
DATA LAKE & DATA WAREHOUSE
DATA LAKEHOUSE
Data lakehouse is seen as a combination of data warehousing workloads & data lake economics
Running Analytics on the Data Lake - The Databricks Blog
Running Analytics on the Data Lake - The Databricks Blog, Build a Lake House Architecture on AWS | AWS Big Data Blog (amazon.com), The Data Lakehouse, the Data Warehouse and a Modern Data platform architecture - Microsoft Community Hub
DATA LAKE FOR BUSINESS
INTELLIGENCE
BUSINESS DATA LAKE
https://www.capgemini.com/wp-content/uploads/2017/07/pivotal_data_lake_vs_traditional_bi_20140805.pdf
Image source: The abstracted future of data engineering | by Justin Gage | Datalogue | Medium
Or how to avoid GIGO*?
*“garbage in, garbage out”
DATA CLEANING or DATA WRANGLING?
https://pediaa.com/what-is-the-difference-between-data-wrangling-and-data-cleaning/
✔ a process of iterative data exploration and transformation that enables further analysis by making the data (1) usable, (2) credible and (3) useful
Image source: https://www.ecloudvalley.com/what-is-datalake-and-datawarehouse/
⮚ The nature of a data lake allows a variety of data to be stored in a single storage
BUT
⮚ dirty data need to be cleaned up and enriched in a pre-processing step, for which data wrangling is well suited.
⮚ The goal is to convert complex data types and formats into structured data without programming effort → users should be able to prepare and transform their data without ETL tools or familiarity with programming languages; the required transformations should be suggested automatically, based on machine learning algorithms, as soon as the data are read, which greatly speeds up the process (a minimal sketch follows below).
Source: https://monkeylearn.com/blog/data-wrangling/, https://www.altair.com/what-is-data-wrangling/
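As an illustration of the kind of transformation meant above (a sketch only, not a reference to any particular wrangling tool), the following Python snippet flattens hypothetical nested records from a lake into an analysis-ready table; the field names are assumptions.

# Turning nested, semi-structured records into a flat, analysis-ready table.
import pandas as pd

raw = [
    {"author": {"first": "Anna", "last": "Tamm"},
     "publication": {"title": "On Data Lakes", "year": "2021"}},
    {"author": {"first": "Jaan", "last": "Kask"},
     "publication": {"title": "CRIS and FAIR data", "year": 2022}},
]

flat = pd.json_normalize(raw)                    # nested dicts become columns such as author.first
flat["publication.year"] = pd.to_numeric(flat["publication.year"])  # unify mixed str/int types
print(flat.columns.tolist())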
DATA LAKE + DATA WRANGLING
=
DATA QUALITY IN IS
[an asset, not a silver bullet]
The data wrangling process to prepare research information and integrate it into CRIS
Depending on the IS and the desired or required target quality*, individual steps may have to be carried out several times → data wrangling is a continuous process that repeats at regular intervals.
Select data: The required data records are identified in the different data sources. When selecting data, a record is evaluated by its value 🡪 if there is added value, the availability and terms of use of the data, and of subsequent data from this source, are checked.
Structure: In most cases there is little or no structure in the data 🡪 the structure of the data is changed for easier accessibility.
Clean: Almost every dataset contains outliers that can skew the analysis results 🡪 the data are extensively cleaned for better analysis: processing of null values, removal of duplicates and special characters, and standardization of formatting to improve data consistency (a minimal sketch follows after this table).
Enrich: An inventory of the data set and a strategy for improving it by adding additional data are carried out. The data set is enriched with various metadata:
✔ Schematic metadata provide basic information about the processing and ingestion of data 🡪 the data wrangler analyzes/parses data records according to an existing schema.
✔ Conversation metadata are exchanged between accessing instances to document information obtained during the processing or analysis of the data for subsequent users.
The recognized peculiarities/features of a data set can be saved.
*Data lake: The physical transfer of the data into the data lake. Although the data are prepared using metadata, the record itself is not pre-processed. The goal is to avoid a data swamp 🡪 estimate the value of the data and decide on their lifespan depending on the data quality and its interconnectedness with the rest of the database. Analyses are not performed directly in the data lake, but only on the relevant data. To be able to use the data, the requester needs the appropriate access rights 🡪 the data wrangler performs data extraction; however, general viewing and exploration of the data should be possible directly in the data lake.
*Data governance: The contents of the data lake, and the technologies and hardware used, are subject to change 🡪 an audit is required to take care of the maintenance of the data lake. The main principles/guidelines and measures that regulate data maintenance, coordinating all processes in the data lake and the related responsibilities, are defined.
Validate: The data are checked one more time before they are integrated into the target CRIS to identify problems with data quality and consistency, or to confirm that the transformation has been successful. Verify that attribute values are correct and conform to the syntactic and distribution constraints, thus ensuring high data quality, AND document every change so that older versions can be restored or the history of changes can be viewed. If new data are generated during data analysis in CRIS, they can be re-included in the data lake**.
**New data go through the data wrangling process again, starting with step 2 (structuring the data).
At the end of this process, research information can be used by analytical applications and is protected from unauthorized access by access control.
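The sketch referenced from the Clean step above illustrates the Clean and Validate steps on a hypothetical publications dataset; the column names, values and validation rules are assumptions, not part of the original workflow.

# Minimal sketch of the "Clean" and "Validate" steps on hypothetical data.
import pandas as pd

pubs = pd.DataFrame({
    "doi":   ["10.1/abc", "10.1/abc", None, "10.1/xyz"],
    "title": ["  Data Lakes ", "Data Lakes", "Untitled", "FAIR in CRIS"],
    "year":  ["2021", "2021", "20xx", "2022"],
})

# Clean: standardize formatting, remove duplicates, handle null values
pubs["title"] = pubs["title"].str.strip()
pubs = pubs.drop_duplicates(subset=["doi", "title"])
pubs = pubs.dropna(subset=["doi"])                  # records without an identifier are set aside

# Validate: check syntactic constraints before loading into the target CRIS
pubs["year"] = pd.to_numeric(pubs["year"], errors="coerce")
invalid = pubs[pubs["year"].isna() | ~pubs["year"].between(1900, 2100)]
assert invalid.empty, f"{len(invalid)} record(s) failed validation"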
USE-CASE
✔ Data formatting
✔ Correction of incorrect data (e.g. address data)
USE-CASE
✔ Normalization and standardization (e.g. phone numbers, titles, etc.)
✔ Structuring (e.g. separation of names into titles, first and last names, etc.)
✔ Identification and cleaning of duplicates (see the sketch below)
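A minimal sketch of this use case; the input values, phone-number convention and name pattern are hypothetical.

# Normalizing phone numbers, splitting names, and identifying duplicates.
import pandas as pd

people = pd.DataFrame({
    "name":  ["Dr. Anna Tamm", "Anna Tamm", "Jaan Kask"],
    "phone": ["+372 5555 123", "(372) 5555123", "55 66 777"],
})

# Normalization: keep digits only, then prepend a default country code if missing
digits = people["phone"].str.replace(r"\D", "", regex=True)
people["phone_std"] = digits.where(digits.str.startswith("372"), "372" + digits)

# Structuring: separate an optional title from first and last name
parts = people["name"].str.extract(r"^(?P<title>Dr\.|Prof\.)?\s*(?P<first>\S+)\s+(?P<last>\S+)$")
people = people.join(parts)

# Duplicate identification: the same person reached via phone number + last name
people = people.drop_duplicates(subset=["phone_std", "last"])
print(people[["title", "first", "last", "phone_std"]])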
USE-CASE: TRIFACTA FOR DATA WRANGLING
CONCLUSIONS
✔ As the volume of research information and the number of data sources increase, the prerequisites for data to be complete, findable, comprehensively accessible, interoperable and reusable (compliant with the FAIR principles), but also securely stored, structured and networked in order to be useful, remain critical and at the same time become more difficult to fulfill → data wrangling can be seen as a valuable asset in ensuring this.
✔ The goal is to counteract the growing number of data silos that isolate data from different areas of the organization. Once successfully implemented, data can be retrieved, managed, and made available and accessible to everyone within the entity.
✔ A data lake and data wrangling can be implemented to improve and simplify IT infrastructure and architecture, governance and compliance. They provide valuable support for predictive analytics and self-service analysis by making it easier and faster to access large amounts of data from multiple sources.
✔ Proper organization of the data lake makes it easier to find the data the user needs. Managing data that have already been pre-processed results in increased efficiency and cost savings, as preparing data for further use is the most resource-consuming part of data analysis.
✔ By providing pre-processed data, users with limited or no experience in data preparation (a low level of data literacy) can be supported, and analyses can be carried out faster and more accurately.
THANK YOU FOR YOUR ATTENTION!
QUESTIONS?
For more information, see ResearchGate,
anastasijanikiforova.com
For questions or queries, contact me via Nikiforova.Anastasija@gmail.com