Data Engineer (m/f/d)
Contract
Abu Dhabi, United Arab Emirates
25.02.2026
We are looking for a Data Engineer with a software engineering mindset to join our Data & AI team. This is not just a role for writing SQL scripts; it is an opportunity to build robust, scalable, and observable data infrastructure in the cloud.
You will work with a modern tech stack (Dagster, dbt, ClickHouse, AWS) to build the pipelines that power our analytics, machine learning, and GenAI products. If you care about code quality, automation, and "Data as a Product," this role is for you.
Our Tech Stack
● Languages: SQL, Python
● Orchestration: Dagster (migrating from Airflow)
● Data Stores: Redshift, ClickHouse, S3
● Ingestion & Transformation: Fivetran, dbt
● Cloud & Infra: AWS (ECS/EKS, Glue, Lambda, Athena)
● IaC: Terraform with Terragrunt
● AI/GenAI: AWS Bedrock, LangChain, LLMs
Key Responsibilities:
● Integrate GenAI capabilities (LLMs, LangChain) into our engineering workflows.
● Develop and maintain reliable ETL/ELT pipelines using SQL and Python.
● Use dbt to model raw data into clean, business-ready datasets (Star Schema) that enable stakeholders to self-serve.
● Own the quality of your data: implement tests (dbt tests, unit tests) as part of every change (see the sketch after this list).
● Work with AWS services (S3, DMS, Glue) and containerized environments (Docker/Kubernetes) to deploy your code.
● Partner with Data Scientists and Product Managers to understand their data needs and deliver high-quality solutions.
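
To make "owning your quality" concrete, here is a minimal sketch of a Dagster asset with an attached data-quality check. The asset, table, and column names (stg_orders, order_id) are hypothetical, chosen purely for illustration; they are not our actual models.

    import pandas as pd
    from dagster import AssetCheckResult, Definitions, asset, asset_check

    @asset
    def stg_orders() -> pd.DataFrame:
        # Stand-in for a real extract from S3/Redshift; deduplicate on load.
        raw = pd.DataFrame({"order_id": [1, 2, 2], "amount": [10.0, 5.0, 5.0]})
        return raw.drop_duplicates(subset="order_id")

    @asset_check(asset=stg_orders)
    def order_id_is_unique(stg_orders: pd.DataFrame) -> AssetCheckResult:
        # The check fails (and can alert) whenever the key is duplicated,
        # instead of letting bad rows flow silently downstream.
        return AssetCheckResult(passed=bool(stg_orders["order_id"].is_unique))

    # Loadable locally with `dagster dev -f <this_file>.py` for inspection.
    defs = Definitions(assets=[stg_orders], asset_checks=[order_id_is_unique])

The same pattern carries over to dbt models, where built-in tests such as unique and not_null play the role of the asset check.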
Essential Experience:
● 3+ years of hands-on experience in Data Engineering.
● Familiarity with LLMs, AWS Bedrock, or LangChain.
● Experience or exposure to Retrieval-Augmented Generation (RAG).
● Experience automating workflows using AI and integrating LLM capabilities into engineering or data pipelines (see the Bedrock sketch after this list).
● You treat data pipelines like software products and are comfortable with version control (Git), code reviews, and testing.
● You can write complex, efficient SQL queries and understand data modeling concepts.
● You can write clean Python scripts for data manipulation and automation (beyond just "notebook scripting").
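
To illustrate the kind of LLM integration we mean, here is a minimal, hypothetical sketch using LangChain's Bedrock chat integration. The model ID, prompt, and log message are assumptions for illustration only, and the call requires AWS credentials with Bedrock model access.

    from langchain_aws import ChatBedrock
    from langchain_core.prompts import ChatPromptTemplate

    # Hypothetical model choice; any Bedrock chat model ID works the same way.
    llm = ChatBedrock(model_id="anthropic.claude-3-haiku-20240307-v1:0")

    prompt = ChatPromptTemplate.from_template(
        "Summarize this pipeline failure for the on-call channel:\n{log}"
    )
    chain = prompt | llm  # LCEL: pipe the rendered prompt into the model

    result = chain.invoke({"log": "dbt test unique_stg_orders_order_id failed"})
    print(result.content)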
#LI-KM1