About the job
At Danaher, our work saves lives. And each of us plays a part. Fueled by our culture of continuous improvement, we turn ideas into impact – innovating at the speed of life. Our 63,000+ associates work across the globe at more than 15 unique businesses within life sciences, diagnostics, and biotechnology. Are you ready to accelerate your potential and make a real difference? At Danaher, you can build an incredible career at a leading science and technology company, where we're committed to hiring and developing from within. You'll thrive in a culture of belonging where you and your unique viewpoint matter.

**In this role, you will have the opportunity to:**

• Design, develop, and maintain robust, scalable, and efficient data pipelines to ingest, transform, and serve data for AI/ML and analytics workloads.
• Architect and maintain scalable, secure, low-latency data pipelines and systems to support agentic AI applications.
• Partner with Data Science teams to support model training, feature engineering, and deployment processes.
• Collaborate with data scientists, project managers, and Software Engineering teams to understand data needs and translate business requirements into technical solutions.
• Develop and manage data architecture, data lakes, and data warehouses supporting Service AI use cases (e.g., ticket analytics, customer interaction insights, predictive maintenance, resource planning).
• Optimize data storage, compute, and retrieval strategies for structured and unstructured data, including logs, text, images, and telemetry.
• Support MLOps workflows by enabling model deployment, monitoring, and versioning pipelines.
**The essential requirements of the job include:**

• Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related field, with 8+ years of experience.
• 5+ years of experience in data engineering or data platform development, preferably supporting AI/ML workloads.
• Strong proficiency in Python, SQL, and data processing frameworks (e.g., PySpark, Spark, Databricks, Snowflake).
• Experience with cloud data platforms (AWS, Azure, GCP, or Snowflake) and related services (e.g., S3, Redshift, BigQuery, Synapse).
• Hands-on experience with data pipeline orchestration tools (e.g., Airflow, Prefect, Azure Data Factory).
• Familiarity with data lakehouse architectures and distributed systems.
• Working knowledge of containerization and CI/CD (e.g., Docker, Kubernetes, GitHub Actions).
• Experience with APIs, data integration, and real-time streaming pipelines (Kafka, Kinesis, Pub/Sub).
• Familiarity with building reports and visualizations in Power BI, Power Apps, Power Automate, or a similar BI tool.

**It would be a plus if you also possess previous experience in:**

• Building and deploying equipment failure prediction models at scale.
• Enterprise-scale Service and Support data (CRM, ServiceNow, Salesforce, Oracle, etc.).
• Data security, compliance, and governance frameworks.

Join our winning team today. Together, we'll accelerate the real-life impact of tomorrow's science and technology. We partner with customers across the globe to help them solve their most complex challenges, architecting solutions that bring the power of science to life.
Requirements
- Python
- SQL
- Data Processing
- AI
- ML
Qualifications
- Bachelor's or Master's in Computer Science, Information Technology, Engineering, or a related field
About the company
Danaher is a leading science and technology company that innovates across life sciences, diagnostics, and biotechnology. The company is committed to continuous improvement and to hiring and developing talent from within.