Rabobank Dev Ops Engineer B 08

Info

Role

Dev Ops Engineer B 08

Location

Utrecht

Hours per week

36 hours per week

Duration

14.05.2025 - 29.04.2026

Assignment number

234093

Closing date

13.05.2025, 15:00
Want to respond to this assignment? You do so on Striive.

Role description and task agreements

Language: English (mandatory)
ZZP (freelance) allowed: No



We are looking for an experienced software engineer to work in a busy environment, creating and maintaining applications and solutions that ingest communications data and data from decommissioned environments into a searchable, centralised environment for regulatory purposes.



The candidate should have a minimum of 3 years of experience with:



• PowerShell

• .NET

• React

• Ansible

• SQL



The candidate should be capable of driving initiatives to their successful conclusion within agreed timelines, escalating when necessary, and communicating effectively with technical colleagues, management and stakeholders.


The candidate should have a strong security focus and take ownership of their deliverables.



Detailed description:


Within the bank, a number of solutions have been implemented to support the archiving of business data from decommissioned applications, as well as the archiving of regulatory data, retaining it in a searchable form. This new role will analyse the requirements and build a standard data ingestion and search interface to simplify onboarding of new applications in the future. Ultimately, the aim is to provide a semi-automated onboarding service.



An understanding of how structured and unstructured data may be collected and combined to answer business questions should be paired with the ability to code Extract, Transform, Load (ETL) processes and to manage the underlying object store in which the data is ultimately stored. The role is a hybrid of data scientist, application developer and infrastructure engineer.



Data is stored in a Hitachi Content Platform (HCP) object store.
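
For illustration, below is a minimal PowerShell sketch of archiving a single object via an HTTP PUT to an HCP namespace. The endpoint, namespace path and authorization token are hypothetical placeholders, not the bank's actual configuration.

# Minimal sketch: archive one file into an HCP namespace over HTTP.
# The host name and token scheme below are illustrative assumptions.
$HcpBase = "https://archive.tenant.hcp.example.com/rest"   # hypothetical endpoint
$Token   = $env:HCP_AUTH_TOKEN                             # pre-computed token, assumed set

function Send-ArchiveObject {
    param(
        [Parameter(Mandatory)] [string]$LocalPath,   # file to archive
        [Parameter(Mandatory)] [string]$ObjectPath   # target path inside the namespace
    )
    # PUT the file body to the object path; REST-style object gateways accept this pattern.
    Invoke-RestMethod -Method Put `
        -Uri "$HcpBase/$ObjectPath" `
        -Headers @{ Authorization = "HCP $Token" } `
        -InFile $LocalPath
}

# Usage: Send-ArchiveObject -LocalPath .\export.json -ObjectPath "retired-app/2025/export.json"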



• The candidate will work within the Storage Services team as part of Engineering & Enterprise Tech.


• The purpose of the role is to identify generic and bespoke archiving and data analysis requirements for the bank's data, and to build standardised extraction, transformation and loading of that data into existing object stores to satisfy analysis and regulatory requirements.


• Developing standardised ingestion pipelines will deliver scalable, repeatable, secure solutions that meet multiple data archiving requirements. This includes organising and storing legacy data from retired applications as well as building APIs for real-time data collection and archiving, ensuring regulatory compliance for the bank (a minimal ETL sketch in PowerShell follows this list).


• Code, test, document, and maintain the standardised, automated data pipelines that will form the basis of an Archive Service.


• Assist in building and managing the infrastructure required for optimal extraction, transformation, and loading of data from a variety of data sources.


• Build analytics tools that utilise the data pipeline to provide key business performance metrics.


• Develop reporting services across Storage Services using Power BI and associated tools.


• Work with stakeholders including the business, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.


• Anticipate and innovate to develop the bank's archiving solutions.
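
As a sketch of such a pipeline, the PowerShell below extracts rows from a source database, transforms them into JSON, and loads the result into the object store. It assumes the SqlServer module is installed; the server, database, query and archive paths are illustrative, and Send-ArchiveObject is the hypothetical helper sketched earlier.

# Minimal ETL sketch, assuming the SqlServer PowerShell module is available.
Import-Module SqlServer

# Extract: pull rows from a (hypothetical) decommissioned application's database.
$rows = Invoke-Sqlcmd -ServerInstance "sql01.example.com" `
                      -Database "RetiredApp" `
                      -Query "SELECT Id, Sender, Recipient, SentAt, Body FROM dbo.Messages"

# Transform: normalise to JSON so the archive holds a predictable, searchable format.
$payload = $rows | Select-Object Id, Sender, Recipient, SentAt, Body | ConvertTo-Json -Depth 3
$tmp = New-TemporaryFile
Set-Content -Path $tmp.FullName -Value $payload -Encoding UTF8

# Load: PUT the document into the object store via the helper sketched above.
Send-ArchiveObject -LocalPath $tmp.FullName -ObjectPath "retired-app/messages/$(Get-Date -Format yyyy-MM-dd).json"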



The key areas of responsibility are:


• Based on demand, build an ETL architecture that transfers structured and unstructured data into a Hitachi Content Platform (HCP) object store as the back end for archiving


• Build bespoke archives of data to satisfy specific business needs such as regulation and compliance


• Build a generic framework to ingest decommissioned application data into searchable archives, such that applications may present their data (e.g. from a database) in a pre-determined format to be ingested in a searchable form (a hypothetical manifest sketch follows this list)


• Deliver a standard interface for searching such archives securely


• Anticipate demand and innovate existing solutions


• Code, test, document, and maintain standardised, automated data pipelines


• Help design and build the infrastructure required for optimal extraction, transformation, and loading of data from a variety of data sources


• Build analytics tools that utilise the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics


• Take responsibility for testing, documentation and handover to the customer


• Deliver to meet aggressive timescales


• Communicate progress and risk to the team
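
One way such a generic ingestion framework could work is for each retiring application to supply a small manifest in an agreed format. The schema below is purely hypothetical, sketched in PowerShell to show how the framework might consume it; none of the field names are an existing bank standard.

# Hypothetical onboarding manifest: the retiring application describes its data
# in a pre-determined format so the generic framework can ingest it unattended.
$manifest = @'
{
  "application":  "LegacyCRM",
  "owner":        "crm-support@example.com",
  "retention":    "P7Y",
  "sources": [
    { "type": "database",  "format": "csv", "path": "exports/customers.csv" },
    { "type": "fileshare", "format": "pdf", "path": "exports/contracts/" }
  ],
  "searchFields": [ "customerId", "contractDate" ]
}
'@ | ConvertFrom-Json

# The ingest framework can then branch on source type and index the declared fields.
foreach ($src in $manifest.sources) {
    Write-Host "Ingesting $($src.path) ($($src.format)) for $($manifest.application)"
}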



Key Relationships:


• Technical Support teams retiring applications


• Technical teams responsible for backup and retention globally


• Technical team responsible for the Archiving environment globally


• Technical teams responsible for the File server & SharePoint environments globally




Key Performance Indicators:


• Deadlines achieved according to the project plans


• Success rate of daily ingests


• Availability of service


• Flexibility and adaptability of the developed solution to allow seamless onboarding of new customers


• Quality of error handling and the capture of exceptions and reruns



Business Knowledge / Technical Skills:


• Experience building and optimizing ‘big data’ pipelines, architectures and datasets.


• Strong analytical skills related to working with both structured and unstructured datasets.


• Experience building processes supporting data transformation, data structures, metadata, dependency management and workload management.


• A successful history of manipulating, combining, processing and extracting value from large disconnected datasets.


• Working knowledge of and experience with MSSQL and Oracle databases.


• Experience supporting and working with cross-functional teams in a dynamic environment.


• We are looking for a candidate with experience in, or a strong aptitude for, a data engineering role, ideally with experience using the following software/tools:


o In-depth experience developing PowerShell scripts


o Experience with development languages and tools: .NET, Ansible, React and SQL

Salary scale: FIO08RG000RG

Company details

Rabobank


The recruiter

Mandy Hoogland

HeadFirst

Place your bid on Striive

https://login.striive.com/

For this assignment you must place a bid on Striive. Striive is the largest assignment platform in the Benelux, where more than 20,000 assignments are published annually.

Taurusavenue 18
2132 LS, Hoofddorp

Questions?

For support with Select, email servicedesk@select.hr or call (023) 56 856 30.
