Azure Databricks Engineer

Overview

Job Title: Azure Data Engineer

Primary Skill: SQL, Azure Databricks

Secondary Skills: PySpark, Azure Data Factory

Location: Pune, Hyderabad

Experience: 6-11 years

Employment Type: Contract


Responsibilities:

As an Azure Data Engineer, your day-to-day work activities will be as follows:

·       Design and develop ETL processes.

·       Write SQL, Python, and PySpark code.

·       Create simple and complex pipelines using Azure Data Factory (ADF).

·       Work with other Azure stack services such as Azure Data Lake and Azure SQL Data Warehouse (SQL DW).

·       Handle large volumes of data effectively.

·       Understand the business requirements for data flow processes.

·       Understand requirements, functional specifications, and technical specification documents.

·       Develop source-to-target mapping documents and transformation business rules in line with scope and requirements.

·       Communicate project status on an ongoing basis, both formally and informally.

·       Follow the JIRA story process for SQL development activities.

 

Job Description

Requirements:

Candidates must have the following mandatory skills:

  • Overall, 6+ years of development experience with SQL and Python with Spark (PySpark)
  • Experience with Azure Data Factory, datasets, DataFrames, Azure Blob Storage, and Storage Explorer
  • Implement data ingestion pipelines from multiple data sources using ADF and Azure Databricks (ADB)
  • Experience creating Data Factory pipelines, custom Azure development and deployment, and troubleshooting data loads/extractions using ADF
  • Extensive experience with SQL, Python, and PySpark in Azure Databricks
  • Able to write Python code in PySpark using DataFrames (see the sketch after this list)
  • Good understanding of Agile/Scrum methodologies
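
A minimal sketch of the kind of PySpark DataFrame work involved; the storage paths and column names below (orders, customer_id, amount, order_date) are illustrative assumptions only, not details of this role's actual data:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# On Azure Databricks a SparkSession named `spark` already exists; this line
# also makes the sketch runnable on a plain local Spark installation.
spark = SparkSession.builder.appName("ingestion-sketch").getOrCreate()

# Read raw CSV files from a lake/blob path (placeholder path).
orders = spark.read.option("header", True).csv(
    "abfss://raw@yourstorageaccount.dfs.core.windows.net/orders/"
)

# Example transformation: cast the amount column and aggregate per customer per day.
daily_totals = (
    orders
    .withColumn("amount", F.col("amount").cast("double"))
    .groupBy("customer_id", "order_date")
    .agg(F.sum("amount").alias("total_amount"))
)

# Write the curated result back to the lake as Parquet (placeholder path).
daily_totals.write.mode("overwrite").parquet(
    "abfss://curated@yourstorageaccount.dfs.core.windows.net/daily_totals/"
)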

 

Skills & Requirements

SQL, Azure Databricks, PySpark, Azure Data Factory