
Info
Function: Data Engineer
Location: Rotterdam
Hours per week: 40
Duration: 12.05.2025 - 30.12.2025
Assignment number: 234344
Closing date:
Data Engineer
Purpose:
We are looking for an experienced Data Engineer to join our data team. In this role, you will be responsible for designing, building, and maintaining scalable and reliable data pipelines and systems that power analytics and business decision-making. You’ll work with modern tools and cloud infrastructure, with a strong focus on containerization, orchestration, and workflow automation.
Knowledge & Experience:
• Experience: 3-7 years in a data engineering role, with a strong background in data integration and data platforms.
• Education: Bachelor’s degree in Computer Science, Engineering, Information Technology, or a related field.
Required Skills:
• Kubernetes, Docker, Airflow – You’ll use these tools to deploy, schedule, and monitor robust data pipelines (see the sketch after this list).
• Python – As our main programming language for data processing, automation, and integrations.
• dbt, Snowflake, dimensional modelling – For transforming, organizing, and optimizing data for analytics.
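As a concrete illustration of the Airflow skill above, here is a minimal sketch of a daily pipeline DAG in Python. It is a sketch under assumptions, not part of this vacancy: it assumes Airflow 2.4+, and the DAG id, task names, and callables are hypothetical placeholders.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders():
    # Placeholder extract step; a real task would call an API or read files.
    return [{"order_id": 1, "amount": 99.50}]


def load_orders():
    # Placeholder load step; a real task would write to the warehouse.
    pass


with DAG(
    dag_id="daily_sales_pipeline",  # hypothetical name, for illustration only
    start_date=datetime(2025, 5, 12),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    load = PythonOperator(task_id="load_orders", python_callable=load_orders)
    extract >> load  # extract must finish before load starts

In a setup like the one described here, each task would typically run in its own Docker image on Kubernetes (for example via Airflow’s KubernetesPodOperator) rather than in-process; the PythonOperator tasks above are used only to keep the sketch self-contained.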
Preferred Skills:
• Experience with CI/CD for data workflows
• Knowledge of version control systems like Git
• Good communication skills and a collaborative mindset
• Experience with data warehousing, data profiling, data catalogues, data modelling, and metadata management tools.
• Familiarity with Scrum methodologies.
• Work Approach: Ability to work independently under general supervision.
Key Responsibilities:
• Develop and manage scalable data pipelines using Apache Airflow and Python
• Containerize and orchestrate services using Docker and Kubernetes
• Design and implement data models using dimensional modelling techniques
• Transform data using dbt and maintain structured, performant models in Snowflake
• Collaborate with data analysts, scientists, and engineers to ensure reliable and timely data delivery
• Optimize queries and pipelines for performance and cost efficiency in a cloud environment
• Develop automated tests for continuous integration and deployment processes (see the test sketch after this list).
• Data Quality & Governance: Establish data profiling and quality processes to ensure accuracy, consistency, and reliability.
• Support & Maintenance: Provide support for data-related issues and monitor data warehouse (DWH) performance.
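As one example of the automated-testing responsibility above, a common CI smoke test for Airflow projects loads every DAG file and fails the build on import errors. This is a minimal sketch assuming a pytest setup and a dags/ folder; both are illustrative assumptions, not details from this vacancy.

from airflow.models import DagBag


def test_dags_import_without_errors():
    # Parse all DAG files in the project; import_errors maps file path -> exception.
    dag_bag = DagBag(dag_folder="dags/", include_examples=False)
    assert dag_bag.import_errors == {}, f"DAG import failures: {dag_bag.import_errors}"
    assert len(dag_bag.dags) > 0  # sanity check: at least one DAG was discovered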
Dimensions of the Role:
• Direct Responsibility: Ensuring operational efficiency and reliability of data systems.
• Accountability: Data integrity, compliance with departmental standards, and adherence to quality controls.
• Financial Responsibility: Managing budget allocations for data storage and processing.
Personal Competencies:
• Strong problem-solving skills and ability to think algorithmically.
• Effective communication skills for collaboration with technical and non-technical teams.
• Detail-oriented with a commitment to high-quality data practices.
• Adaptability to rapidly changing technologies and environments.
• Proactive, self-driven, and capable of managing multiple projects.
Key Relationships:
• Reports to: Team Lead IT Research Desk
• Collaborates with: Data Scientists, Analytics Teams, IT Support Teams
• Key Customers: Internal departments requiring data for decision-making, external partners for data exchange initiatives.
Viterra B.V.
To apply for this assignment, you must place a bid on Striive. Striive is the largest assignment platform in the Benelux, where more than 20,000 assignments are published every year.