Job (Project) Description:

We are looking for a skilled Data Engineer to join our team and work with modern data platforms, with a focus on Microsoft Fabric and Azure data solutions.

Location: Europe

Responsibilities:

  • Design and implement scalable data models within Microsoft Fabric (Lakehouse/Warehouse) to support ingestion, transformation, and analytics scenarios
  • Develop and maintain robust data pipelines using Fabric Data Factory, Notebooks, and Spark, ensuring data consistency and reliability across multiple sources
  • Optimize data processing performance through query tuning, partitioning strategies, and efficient use of Fabric compute engines
  • Implement data security and governance using Microsoft Purview, role-based access control (RBAC), and data sensitivity labeling
  • Automate data workflows, orchestration, and monitoring within the Fabric ecosystem
  • Implement and support data migration processes from legacy platforms to Microsoft Fabric
  • Collaborate with cross-functional teams to solve complex data challenges and troubleshoot platform-related issues

Requirements:

  • Bachelor's degree in computer science, information technology, or a related field
  • 5+ years of experience in data engineering
  • 2+ years of hands-on experience with Microsoft Fabric, Azure Synapse Analytics, or Azure Data Platform
  • Strong understanding of Lakehouse architecture, data modeling, and ETL/ELT processes
  • Experience working with Fabric components (Data Factory, Data Engineering, Data Warehouse)
  • Solid Python programming skills (PySpark preferred)
  • Proficiency in SQL and query optimization techniques
  • Hands-on experience building scalable and reliable data pipelines
  • Experience with workflow orchestration and scheduling tools (e.g., Fabric pipelines, Apache Airflow)
  • Familiarity with Microsoft Azure (Data Lake, Azure SQL, Synapse, etc.)
  • Experience with CI/CD pipelines and DevOps practices (Azure DevOps, GitHub Actions)
  • Strong understanding of data warehousing concepts, data lakes, and distributed processing

Nice to have:

  • Experience leveraging large language models (LLMs) and AI-assisted tools to enhance data engineering workflows, including code generation, pipeline development, and data quality automation
  • Experience with Microsoft Purview for data governance and cataloging
  • Familiarity with Delta Lake / OneLake concepts and open table formats
  • Hands-on experience with Databricks or other Spark-based platforms
  • Experience with dbt or similar transformation frameworks

Other skills:

  • Excellent written and verbal English communication skills
  • Strong stakeholder management skills, with the ability to engage both technical and business audiences
  • Proven ability to work independently and as part of a global, multicultural, international team

What We Offer:

  • Competitive salary
  • Flexible work arrangements, including a fully remote option (in-office, remote, or hybrid, depending on preference and manager approval)
  • Opportunities for professional growth and career advancement
  • Collaborative and innovative work environment
  • Paid time off, including holidays, vacation, and sick leave
  • Benefits may vary by location and will be confirmed during the interview process

Job ID: JR-172830