Senior Forward Deployed AI Engineer
About the job
Job Description Summary
We are seeking a Senior Forward Deployed AI Engineer who can partner directly with customers to define problems, architect GenAI systems, and ship production-grade solutions. You'll own end-to-end delivery: from problem decomposition and data pipelines to LLM workflow design, RAG, knowledge graphs, Graph RAG architectures, agentic orchestration, LLM evaluation, deployment, and ongoing optimization. You'll also integrate and extend GEHC STO AI Fabric to accelerate real-world impact.

Roles and Responsibilities
- Own the end-to-end GenAI strategy within customer environments, from discovery and problem decomposition to production deployment and adoption.
- Define solution architectures across RAG, agentic AI, knowledge bases, and multi-turn conversational systems with memory.
- Establish robust evaluation frameworks (e.g., groundedness, hallucination rate, factuality, latency, cost, and safety) and drive iterative improvement.
- Design and deploy large-scale LLM workflows, RAG systems (vector/hybrid retrieval, reranking), Graph RAG (entity/relation extraction, query planning), and autonomous/agentic systems (task decomposition, tool use, planning/feedback loops).
- Implement conversation memory (short-term context buffers, long-term vector/graph stores, entity memories) to support multi-turn experiences.
- Build conversational chatbots and search strategies (BM25 + dense retrieval, hybrid retrieval, query rewriting/expansion) for high-precision answers.
- Apply chunking strategies (semantic/hierarchical, overlap/sliding window, summarization-based chunking) to optimize retrieval quality and cost.
- Leverage GEHC STO AI Fabric components to build robust data pipelines, define and align ontologies, and integrate ML/LLM models into operational tools and workflows.
- Develop and register components (transforms, ontology services, pipelines), ensuring traceability, observability, and reusability within the AI Fabric.
- Contribute field learnings, patterns, and edge cases back to the AI Fabric product teams to mature the platform and accelerate customer outcomes.
- Work closely with customer executives, SMEs, and end users to understand mission-critical needs and translate them into technical solutions and delivery plans.
- Communicate tradeoffs (quality, latency, cost, safety) and guide stakeholders to data-driven decisions.
- Implement CI/CD, containerization (Docker), model packaging, deployment, and monitoring across environments (dev/stage/prod).
- Build LLMOps observability: tracing, prompt/version management, cost/latency dashboards, eval pipelines, A/B testing, safety guardrails, and human-in-the-loop review where required.
- Operate within AWS (SageMaker for training/inference, Bedrock for managed foundation models); manage infrastructure-as-code and environment hardening.
- Build scalable pipelines using Databricks, Spark/PySpark, and SQL to ingest, clean, and engineer features and signals from multi-modal data.
- Architect and manage knowledge bases (document stores, vector DBs, knowledge graphs) and retrieval services for production traffic.
- Bias for shipping high-quality, maintainable software over academic benchmarks.
- Write clean, testable code; enforce code quality, documentation, and reusability.
- Mentor peers; elevate engineering best practices for GenAI at scale.

Qualifications
- Bachelor's degree in Computer Science or a STEM major (Science, Technology, Engineering, and Math) with a minimum of 5 years in AI/ML engineering, data science, or applied research, including 1.5+ years building production LLM/GenAI systems.
- Excellent coding proficiency in Python and either Java or TypeScript; solid software engineering fundamentals (testing, modularity, design patterns).
- Deep understanding of LLMs and agentic AI (planning, tool use, function calling, multi-agent collaboration).
- Hands-on with RAG (vector/hybrid retrieval, reranking, query planning), Graph RAG (entities/relations, graph queries), knowledge bases, and chunking strategies.
- NLP expertise (retrieval, generation, summarization, information extraction); working knowledge of computer vision and GANs for augmentation or synthesis.
- Proven experience with conversational AI (multi-turn with memory, grounding, disambiguation) and search strategies (BM25, hybrid, semantic reranking).
- Databricks (Delta, Unity Catalog, Jobs/Workflows), Spark/PySpark, and SQL at scale.
- MLOps, LLMOps, Docker, Kubernetes, CI/CD, and AWS (SageMaker, Bedrock) for model training, evaluation, and deployment.
- Practical experience with LangChain, LangGraph, CrewAI, n8n (or similar orchestrators) and with using OpenAI/Anthropic models in production.
- Strong customer-facing communication skills and the ability to operate on-site with customers when needed.
- Experience in regulated domains (healthcare, life sciences) and with ontology-driven systems.
- Knowledge graphs (e.g., Neo4j), vector stores, and hybrid retrieval with cross-encoder reranking.
- Prompt engineering and LLM evaluation frameworks; safety guardrails (PII/PHI redaction, toxicity filters).
- Familiarity with Kubernetes (EKS/ECS), feature stores, experiment tracking, and model registries.
- Performance optimization for long-context models, batching, streaming, and cost/latency tuning.
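To make the expectations concrete: the overlap/sliding-window chunking named above can be sketched in a few lines of plain Python. The window and overlap sizes, and the whitespace tokenizer, are illustrative assumptions, not values prescribed by the role.

```python
# Minimal sketch of overlap/sliding-window chunking: split a token sequence
# into fixed-size windows that overlap, so content straddling a chunk
# boundary remains retrievable from at least one chunk.

def sliding_window_chunks(tokens, window=200, overlap=50):
    """Return chunks of at most `window` tokens; each chunk starts
    `window - overlap` tokens after the previous one."""
    if overlap >= window:
        raise ValueError("overlap must be smaller than window")
    step = window - overlap
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + window])
        if start + window >= len(tokens):  # final window covers the tail
            break
    return chunks

# Whitespace tokenization stands in for a real tokenizer here.
doc = "the quick brown fox jumps over the lazy dog " * 50
chunks = sliding_window_chunks(doc.split(), window=100, overlap=20)
```

In practice the same pattern is applied over model-tokenizer tokens or sentence boundaries rather than whitespace words; the overlap is what trades retrieval recall against index size and cost.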
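Likewise, the hybrid retrieval mentioned above (BM25 + dense) needs some way to merge two rankings. One common choice is Reciprocal Rank Fusion (RRF); the posting does not prescribe a fusion method, so this is a hedged sketch with hypothetical document IDs.

```python
# Illustrative hybrid-retrieval fusion: combine a lexical (BM25-style)
# ranking and a dense (embedding) ranking via Reciprocal Rank Fusion.
# Each list contributes 1 / (k + rank) per document; scores are summed.

def rrf_fuse(rankings, k=60):
    """rankings: iterable of ranked doc-id lists (best first).
    Returns doc ids sorted by descending summed RRF score."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical result lists from a lexical and a dense retriever.
bm25_hits = ["d3", "d1", "d7", "d2"]
dense_hits = ["d1", "d5", "d3", "d9"]
fused = rrf_fuse([bm25_hits, dense_hits])  # "d1" and "d3" rise to the top
```

Documents appearing in both rankings accumulate score from each, which is why hybrid retrieval tends to surface results that are both lexically and semantically relevant; a cross-encoder reranker is then typically applied to the fused top-k.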
Requirements
- GenAI systems
- Problem decomposition
- Large-scale LLM workflows
- Conversational AI
- Cloud services
- AWS
- CI/CD
Qualifications
- Bachelor's Degree in Computer Science or STEM Majors
- 5+ years in AI/ML engineering
About the company
GE Healthcare is a leading global medical technology and digital solutions innovator. Our mission is to improve lives in the moments that matter. Unlock your ambition, turn ideas into world-changing realities, and join an organization where every voice makes a difference, and every difference builds a healthier world.