Azure Data Engineer Advanced Analytics

Posted 7 months ago by Anthon Byberg
Remote

Job Description

We are looking for two Azure Data Engineers to join the Advanced Analytics Service Delivery Team (SDT).
The focus of the Advanced Analytics SDT is to provide capabilities around data onboarding/engineering and model deployment for the prioritised Use Cases. The team consists of Data Engineers, Subject Matter Experts, Service Architects, a DevOps Engineer, a Tester, a Technical Product Owner and a Scrum Master. Backlog prioritisation is done together with Business Product Owners, based on the pipeline of Use Cases.

The team supports the Cloud Analytics Platform to create capabilities around digital solutions and advanced analytics. The platform hosts the client's enterprise Data Lake, where IoT data from equipment at customer plants around the world is collected and supplemented with internal data sources. The team uses its data engineering capabilities to gain insights and deliver solutions to customer problems.

As a Data Engineer, you will:
Identify data sets required per Use Case in collaboration with Source Teams and Business Analysts
Onboard data into the Enterprise Data Lake
Move and transform data from the Data Lake to SQL DB or SQL DW as required per Use Case
Implement logic as per business requirements (e.g. KPI definitions)
Collaborate with other IT delivery teams to create a complete solution around a Use Case
Manage the life cycle of delivered Use Cases
Soft Skills
Personal drive & problem-solving mentality
Stakeholder management skills
Ability to work across time zones; culturally aware and able to collaborate with people from other countries
Team player who collects and shares information with colleagues in order to improve our services
Self-motivated and driven; communicates and debates solutions to issues found, proactively contacts colleagues and takes the lead on problems and improvements
Service-minded

Top Skills
Experience with the Microsoft Azure Platform (Data Lake, Data Factory, Databricks, Data Warehouse, Azure DevOps)
Experience in database and information modelling
Experience with data wrangling using SQL and Python (R is a bonus)
Experience with the Apache Spark Framework (preferably PySpark – Python)
Experience with scripting languages such as Python and PowerShell (C#/.NET is a bonus)
Experience integrating systems and services
Experience consuming REST APIs
Experience developing REST APIs (preferably with Flask – Python)
Experience with Docker containers