Job Description
About the Role:
At UnitedHealth Group and Optum, we want to make healthcare work better for everyone. This depends on hiring the best and brightest. With a thriving ecosystem of investment and innovation, our business in Ireland is constantly growing to support the healthcare needs of the future.
Our teams are at the forefront of building and adapting the latest technologies to propel healthcare forward in a way that better serves everyone. With our hands at work across all aspects of health, we use the most advanced development tools, AI, data science and innovative approaches to make the healthcare system work better for everyone. Join us and do your life's best work.
As a Principal Data Engineer, you will focus on developing cutting-edge data pipelines that form the foundation of our big data analytics platform. You will be a critical leader, focused on innovation: developing proofs of concept that leverage data in new and exciting ways.
Careers with Optum offer flexible work arrangements, and individuals who live and work in the Republic of Ireland will have the opportunity to split their monthly work hours between our Dublin and Letterkenny offices and telecommuting from a home-based office.
Primary Responsibilities:
- Develop data pipelines to ingest and transform data using clean coding principles
- Contribute to common frameworks and best practices in code development, deployment, and automation/orchestration of data pipelines
- Partner with Data Science and Product leaders to design best practices and standards for developing and productionizing analytic pipelines
- Partner with Infrastructure leaders on architecture approaches to advance the data and analytics platform, including exploring new tools and techniques that leverage the cloud environment
- Mentor and train junior members of the team
Required Qualifications:
- Extensive hands-on experience developing data pipelines that demonstrate a strong understanding of software engineering principles
- Experience working with both real-time and batch data, knowing the strengths and weaknesses of each and when to apply one over the other
- Experience with designing, configuring, and managing cloud infrastructure (Azure/Databricks)
- Experience building data pipelines on AWS, Azure, or GCP, following best practices in cloud deployments
- Good exposure to REST API web services development and deployment
- Strong experience with Python, SQL, Spark
- Understanding of DevOps tools, Git workflow and building CI/CD pipelines
- Experience working with Kubernetes and Docker, and knowledgeable about cloud infrastructure automation and management (e.g., Terraform)
- Ability to work with business and technical audiences in business requirements meetings, technical whiteboarding exercises, and SQL coding/debugging sessions
- Familiarity with Airflow or a similar orchestration tool
Preferred Qualifications:
- Experience working in projects with agile/scrum methodologies
- Experience with shell scripting languages
- Familiarity with production-quality ML and/or AI model development and deployment
- Well versed in Python across multiple general-purpose use cases, beyond developing data APIs and pipelines
- Experience working with Apache Kafka, building appropriate producer/consumer apps