Data Engineer
Imagine a future where high-quality data from offshore wind farms enables predictive insights, AI-driven decisions, and reliable operations across the entire asset lifecycle.
Join us in this role where you’ll design, build, and operate scalable data products that stream large volumes of operational data from OT and SCADA environments into enterprise platforms. You’ll ensure that data is reliable, accessible, and ready for both analytics and operational use.
Welcome to the Data Foundation team
You’ll be part of the Data Foundation team within the Advanced Analytics product line, where you and your colleagues will build and maintain the core data products that connect offshore wind asset data with data scientists, analysts, and operational stakeholders. The team works across Denmark, Poland, and Malaysia and plays a key role in enabling analytics, AI use cases, and operational decision-making across Ørsted’s asset portfolio. As a team, we collaborate across disciplines, take end-to-end ownership, and operate in a dynamic environment where development and operations go hand in hand.
You’ll play an important role in:
- designing, building, and operating data pipelines and streaming solutions from edge systems to cloud platforms
- developing backend services and APIs to expose data products to data scientists and other consumers
- working with cloud-based infrastructure across Azure and AWS, including containerized workloads
- ensuring data quality, performance, security, and reliability in production environments
- collaborating closely with software engineers, data engineers, architects, and DevOps teams
- contributing to automation, CI/CD pipelines, and continuous improvement across development and operations
To succeed in the role, you:
- have a strong foundation in software engineering and data engineering using Python and/or C# and follow best practices, e.g. the SOLID and DRY principles, clean code, onion architecture, design patterns, and automation using AI agents
- have experience working with data pipelines and data engineering technologies, e.g. the medallion architecture (staging, raw/bronze, silver, and gold layers), data mesh, and different pipeline types (production vs. exploration, operational vs. analytics and reporting)
- have experience with data storage for analytical and operational use cases, using PostgreSQL (time series, time buckets), SQL Server (normalization vs. wide tables, clustered and non-clustered indexes, columnstore indexing), and Kafka
- demonstrate experience with containerization and orchestration (Docker, Podman, Kubernetes), along with artifact and container management using JFrog Artifactory and container registries
- build and optimize CI/CD pipelines using GitHub Actions and runners, and support API development and testing using Postman
- apply monitoring and observability practices using Grafana
- think in systems and architecture, understanding impacts across services, pipelines, and platforms
- demonstrate a high sense of ownership, accountability, and a strong quality mindset in an environment without dedicated QA
- show business- and value-aware thinking and are curious about AI or motivated to learn how it can enhance data engineering solutions
Maybe you’ve read the above and can see you have some transferable skills, even though they don’t quite match all the points. If you think you can bring something to the team, we still encourage you to apply.
Shape the future with us
Send your application to us as soon as possible. We’ll be conducting interviews on a continuous basis and reserve the right to take down the advert when we’ve found the right candidate. As an applicant or employee, you may request reasonable work and position accommodations or adjustments via accommodation@orsted.com.
Please note that for your application to be taken into consideration, you must submit your application via our online career pages and answer the screening questions relevant for your country. We don't take applications or inquiries from external recruiters or agencies into account for this position.
Gentofte, DK