Q

AI Security Test Engineer

QA Mentor
India Not disclosed
5 days ago
Apply to Job

About the job

Role Description: The AI Security Test Engineer is responsible for identifying, evaluating, and mitigating security risks specific to AI-driven systems. This role focuses on validating the security, robustness, privacy, and trustworthiness of AI/ML models, pipelines, and integrations across the SDLC. The engineer applies adversarial thinking, risk analysis, and human judgment to uncover vulnerabilities that traditional security testing may miss. Key Responsibilities • Assess security risks across AI/ML systems, including data pipelines, models, APIs, and deployments. • Design and execute security test strategies for AI systems (pre- and post-deployment). • Perform adversarial testing, including prompt injection, data poisoning, model inversion, and membership inference. • Validate access controls, authentication, authorization, and API security for AI services. • Test AI systems for privacy leakage, data exposure, and compliance risks (PII, regulated data). • Evaluate model robustness against misuse, abuse, and malicious manipulation. • Collaborate with data scientists, ML engineers, developers, and security teams to remediate findings. • Analyze AI supply-chain risks (datasets, pre-trained models, third-party APIs). • Define security acceptance criteria and risk thresholds for AI releases. • Document vulnerabilities clearly with business impact and remediation guidance. • Stay current with emerging AI threats, attack vectors, and regulatory expectations. Required Skills & Experience • Strong background in application security, penetration testing, or security engineering. • Experience testing APIs, cloud-based systems, and distributed architectures. • Solid understanding of AI/ML concepts (training, inference, models, datasets). • Knowledge of common AI security threats (prompt injection, hallucinations, bias exploitation). • Hands-on experience with security testing tools and techniques. • Ability to think adversarially and beyond documented requirements. • Strong analytical and risk-based thinking skills. • Excellent communication skills to explain complex risks to non-technical stakeholders. Preferred Qualifications • Experience with LLMs, GenAI platforms, or ML model deployment. • Familiarity with OWASP Top 10 for LLM Applications and AI security frameworks. • Experience testing AI in regulated industries (finance, healthcare, insurance). • Background in privacy, compliance, or ethical AI validation. • Scripting or automation skills (Python, Bash, or similar). Key Traits • High attention to detail with a strong investigative mindset. • Comfortable challenging assumptions and design decisions. • Business-aware: understands impact of AI failures on trust, revenue, and reputation. • Independent thinker with strong ownership mentality. Success in This Role Looks Like • AI security risks are identified early, not after production incidents. • Clear visibility into AI-specific vulnerabilities and business impact. • Strong collaboration between security, QA, and AI engineering teams. • Reduced AI-related incidents, data leaks, and reputational risks.

Requirements

  • Application security
  • Penetration testing
  • AI/ML concepts
  • Security testing tools

Preferred Technologies

  • Application security
  • Penetration testing
  • AI/ML concepts
  • Security testing tools

Similar Jobs

Q

AI Security Test Engineer

QA Mentor

IndiaNot disclosed
4 days agoOn-Site
Q

AI Security Test Engineer

QA Mentor

IndiaNot disclosed
Last week
Q

AI Security Test Engineer

QA Mentor

IndiaNot disclosed
Last weekOn-Site