DP-3011 | Implement a Data Analytics Solution with Azure Databricks


 

This course explores how to use Databricks and Apache Spark on Azure to take data projects from exploration to production. You’ll learn how to ingest, transform, and analyze large-scale datasets with Spark DataFrames, Spark SQL, and PySpark, while also building confidence in managing distributed data processing. Along the way, you’ll get hands-on with the Databricks workspace, navigating clusters and creating and optimizing Delta tables.

You’ll also dive into data engineering practices, including designing ETL pipelines, handling schema evolution, and enforcing data quality. The course then moves into orchestration, showing you how to automate and manage workloads with Lakeflow Jobs and pipelines. To round things out, you’ll explore governance and security capabilities such as Unity Catalog and Purview integration, ensuring you can work with data in a secure, well-managed, and production-ready environment.

 


 

Prerequisites

Before starting this learning path, you should already be comfortable with the fundamentals of Python and SQL. This includes being able to write simple Python scripts and work with common data structures, as well as writing SQL queries to filter, join, and aggregate data. A basic understanding of common file formats such as CSV, JSON, or Parquet will also help when working with datasets.

In addition, familiarity with the Azure portal and core services like Azure Storage is important, along with a general awareness of data concepts such as batch versus streaming processing and structured versus unstructured data. While not mandatory, prior exposure to big data frameworks like Spark, and experience working with Jupyter notebooks, can make the transition to Databricks smoother.


Role

  • Data Analyst


Course Outline

Module 1: Explore Azure Databricks

Azure Databricks is a cloud service that provides a scalable platform for data analytics using Apache Spark.

  • Introduction
  • Get started with Azure Databricks
  • Identify Azure Databricks workloads
  • Understand key concepts
  • Data governance using Unity Catalog and Microsoft Purview
  • Exercise - Explore Azure Databricks
  • Module assessment
  • Summary
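As a taste of the governance topics in this module, Unity Catalog organizes data in a three-level catalog.schema.table namespace and controls access with standard SQL. The sketch below is illustrative only; the catalog, schema, and group names are hypothetical, and the statements assume a Unity Catalog-enabled Databricks workspace:

```sql
-- Unity Catalog addresses objects with a three-level namespace:
-- <catalog>.<schema>.<table>. All names below are illustrative.
CREATE CATALOG IF NOT EXISTS sales_dev;
CREATE SCHEMA IF NOT EXISTS sales_dev.bronze;

-- Grant a (hypothetical) analyst group read access to the schema.
GRANT USE CATALOG ON CATALOG sales_dev TO `data-analysts`;
GRANT USE SCHEMA, SELECT ON SCHEMA sales_dev.bronze TO `data-analysts`;
```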

Module 2: Perform data analysis with Azure Databricks

Learn how to perform data analysis using Azure Databricks. Explore various data ingestion methods and how to integrate data from sources like Azure Data Lake and Azure SQL Database. This module guides you through using collaborative notebooks to perform exploratory data analysis (EDA), so you can visualize, manipulate, and examine data to uncover patterns, anomalies, and correlations.

  • Introduction
  • Ingest data with Azure Databricks
  • Data exploration tools in Azure Databricks
  • Data analysis using DataFrame APIs
  • Exercise - Explore data with Azure Databricks
  • Module assessment
  • Summary
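To give a flavor of the ingestion and exploration steps above, a Databricks SQL notebook cell might ingest a file into a table and run a quick exploratory aggregate. This is a sketch under assumed data: the path, table, and column names are hypothetical, and it assumes a running Databricks SQL environment:

```sql
-- Ingest a CSV file into a table (Delta by default in Databricks),
-- then explore it. Path and columns are illustrative.
CREATE OR REPLACE TABLE taxi_trips AS
SELECT * FROM read_files(
  '/databricks-datasets/nyctaxi/sample.csv',  -- hypothetical path
  format => 'csv',
  header => true
);

-- Exploratory analysis: trip counts and average fare per day.
SELECT date(pickup_time) AS trip_date,
       count(*)          AS trips,
       avg(fare_amount)  AS avg_fare
FROM taxi_trips
GROUP BY trip_date
ORDER BY trip_date;
```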

Module 3: Use Apache Spark in Azure Databricks

Azure Databricks is built on Apache Spark and enables data engineers and analysts to run Spark jobs that transform, analyze, and visualize data at scale.

  • Introduction
  • Get to know Spark
  • Create a Spark cluster
  • Use Spark in notebooks
  • Use Spark to work with data files
  • Visualize data
  • Exercise - Use Spark in Azure Databricks
  • Module assessment
  • Summary

Module 4: Manage data with Delta Lake

Delta Lake is a data management solution in Azure Databricks that provides ACID transactions, schema enforcement, and time travel, giving you data consistency, integrity, and versioning capabilities.

  • Introduction
  • Get started with Delta Lake
  • Create Delta tables
  • Implement schema enforcement
  • Data versioning and time travel in Delta Lake
  • Data integrity with Delta Lake
  • Exercise - Use Delta Lake in Azure Databricks
  • Module assessment
  • Summary
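The Delta Lake features listed above map directly onto a few SQL statements. A minimal illustration, assuming a Databricks workspace where Delta is the default table format (table and column names are hypothetical):

```sql
-- Create a Delta table; Delta is the default format in Databricks.
CREATE TABLE IF NOT EXISTS customers (
  id   BIGINT,
  name STRING
);

INSERT INTO customers VALUES (1, 'Avery');

-- Schema enforcement: writes whose columns don't match the table
-- schema are rejected unless the schema is explicitly evolved.

-- Time travel: query an earlier version of the table...
SELECT * FROM customers VERSION AS OF 0;

-- ...or inspect the table's full transaction history.
DESCRIBE HISTORY customers;
```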

Module 5: Build Lakeflow Declarative Pipelines

Building Lakeflow Declarative Pipelines enables real-time, scalable, and reliable data processing using Delta Lake's advanced features in Azure Databricks.

  • Introduction
  • Explore Lakeflow Declarative Pipelines
  • Data ingestion and integration
  • Real-time processing
  • Exercise - Create a Lakeflow Declarative Pipeline
  • Module assessment
  • Summary
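In a declarative pipeline, you define datasets as queries with data-quality expectations and let the engine handle orchestration and incremental processing. A hedged sketch of what the pipeline SQL in this module can look like; the source path, dataset names, and constraint are all hypothetical, and the exact syntax depends on your Databricks runtime:

```sql
-- Each statement declares a dataset; the pipeline engine resolves
-- dependencies between them and runs them incrementally.
CREATE OR REFRESH STREAMING TABLE raw_orders AS
SELECT * FROM STREAM read_files(
  '/Volumes/sales_dev/bronze/orders/',  -- hypothetical source path
  format => 'json'
);

-- A data-quality expectation: drop rows that fail the constraint.
CREATE OR REFRESH MATERIALIZED VIEW clean_orders (
  CONSTRAINT valid_order EXPECT (order_id IS NOT NULL) ON VIOLATION DROP ROW
) AS
SELECT * FROM raw_orders;
```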

Module 6: Deploy workloads with Lakeflow Jobs

Deploying workloads with Lakeflow Jobs involves orchestrating and automating complex data processing pipelines, machine learning workflows, and analytics tasks. In this module, you learn how to deploy workloads with Databricks Lakeflow Jobs.

  • Introduction
  • What are Lakeflow Jobs?
  • Understand key components of Lakeflow Jobs
  • Explore the benefits of Lakeflow Jobs
  • Deploy workloads using Lakeflow Jobs
  • Exercise - Create a Lakeflow Job
  • Module assessment
  • Summary


Download the syllabus for the complete details of the course contents.


Because the vendor continually updates course content, this syllabus may differ from the one published on the official site; however, Netec will always deliver the most up-to-date version.


SKU: MICROSOFT-DP-3011