Personal AI Profile

Shubhanshu Rastogi

AI-Driven Quality Engineering Leader

I build robust quality systems across automation, AI evaluation, and release engineering for high-stakes products.

Explore my work through a portfolio-first AI assistant designed for technical conversations, opportunities, and collaboration.


AI Assistant

Ask Questions About Shubhanshu

This assistant uses grounded profile knowledge with retrieval and generator-reviewer orchestration. Unsupported questions are captured for follow-up.

Grounded in approved profile information

Hi, I’m Shubhanshu’s profile assistant. I answer from approved profile knowledge about his experience, QA leadership, AI testing, projects, and recruiter contact details.

Expertise

Depth Across Quality Engineering and AI Assurance

A practical mix of leadership, architecture, and hands-on execution across automation, AI testing, and release reliability.

Quality Strategy & Leadership

Designing quality operating models that align technical rigor with business risk and delivery outcomes.

  • Leads cross-functional quality initiatives across enterprise programs
  • Builds governance frameworks for release confidence and defect prevention
  • Partners with product and engineering leadership on measurable quality outcomes

Automation Engineering

Building scalable UI, API, integration, and performance automation for complex systems.

  • Playwright and Selenium frameworks with resilient architecture patterns
  • API and microservices testing embedded into CI/CD pipelines
  • Performance and non-functional validation for production readiness
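The kind of API check that gets embedded into a CI/CD pipeline can be sketched as a lightweight contract validation step. This is an illustrative sketch only; the field names, types, and payloads are hypothetical, not taken from any specific framework in the portfolio.

```python
# Hypothetical sketch: a lightweight API contract check suitable for running
# inside a CI/CD stage. Required fields and example payloads are invented.

REQUIRED_FIELDS = {"id": int, "status": str, "items": list}

def check_contract(payload: dict) -> list[str]:
    """Return a list of contract violations for a service response payload."""
    errors = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            errors.append(f"wrong type for {field}: {type(payload[field]).__name__}")
    return errors

ok_errors = check_contract({"id": 7, "status": "shipped", "items": []})    # no violations
bad_errors = check_contract({"id": "7", "items": []})                      # two violations
```

In a pipeline, a non-empty violation list would fail the stage, giving fast feedback before deeper integration suites run.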

AI / LLM Quality

Evaluating assistant behavior, model outputs, and reliability in non-deterministic systems.

  • Functional and behavioral validation for LLM-assisted workflows
  • Prompt and response quality assessment with structured checks
  • Bias, failure mode, and consistency-focused quality evaluation
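Structured checks on model output can be sketched as a set of deterministic assertions applied before a response reaches users. The specific rules and thresholds below are hypothetical examples of the pattern, not a production rule set.

```python
# Illustrative sketch of deterministic behavioral checks on an LLM answer.
# Rules and thresholds are hypothetical; real suites add domain-specific checks.

def check_response(answer: str) -> dict:
    """Run simple structural checks on a model answer; returns pass/fail per check."""
    lowered = answer.lower()
    return {
        "non_empty": bool(answer.strip()),
        "within_length": len(answer) <= 1200,                      # guard against runaway output
        "no_persona_leak": "as an ai language model" not in lowered,
        "no_placeholder": "[insert" not in lowered,                # unfilled template fragments
    }

results = check_response("Shubhanshu leads QA and AI evaluation programs.")
```

Checks like these are cheap enough to run on every response, with failures logged for the bias and failure-mode review mentioned above.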

RAG Evaluation

Improving trust in retrieval-augmented systems through measurable grounding and answer quality.

  • Retrieval relevance and context quality assessment
  • Grounding, faithfulness, and unsupported-answer handling
  • Quality gates that reduce hallucination risk in production assistants
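The grounding and unsupported-answer handling above can be illustrated with a toy faithfulness gate: flag answer sentences that share little lexical overlap with the retrieved context. A real pipeline would use an LLM judge or a metric library such as DeepEval; this sketch only shows the shape of the gate, and the example texts are invented.

```python
# Toy faithfulness check: score the fraction of answer sentences whose content
# words mostly appear in the retrieved context. Purely illustrative; production
# grounding metrics (e.g. in DeepEval or Ragas) are semantic, not lexical.

import re

def grounding_score(answer: str, context: str) -> float:
    """Fraction of answer sentences sharing >=50% of their words with the context."""
    ctx_words = set(re.findall(r"\w+", context.lower()))
    sentences = [s for s in re.split(r"[.!?]\s*", answer) if s.strip()]
    grounded = 0
    for s in sentences:
        words = re.findall(r"\w+", s.lower())
        overlap = sum(w in ctx_words for w in words)
        if words and overlap / len(words) >= 0.5:
            grounded += 1
    return grounded / len(sentences) if sentences else 0.0

context = "Playwright supports Chromium, Firefox, and WebKit browsers."
supported = grounding_score("Playwright supports Chromium and WebKit.", context)
unsupported = grounding_score("It was first released in 1847.", context)
```

Answers scoring below a threshold can be withheld or routed to the unsupported-question follow-up flow rather than shown to users.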

Delivery & DevOps Quality

Integrating quality deeply into delivery pipelines to support speed and reliability.

  • CI/CD-aligned test orchestration and quality gates
  • Shift-left quality practices for faster feedback loops
  • Release-readiness insights for engineering and stakeholder teams
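A CI quality gate of the kind described above can be sketched as a small aggregation step that fails the pipeline when the pass rate or a critical suite drops below policy. The thresholds, suite names, and result shape here are hypothetical.

```python
# Sketch of a release-readiness gate: aggregate suite results and block the
# pipeline on policy violations. Thresholds and suite names are invented.

PASS_RATE_FLOOR = 0.98
CRITICAL_SUITES = {"checkout", "auth"}

def release_gate(results: dict[str, tuple[int, int]]) -> tuple[bool, list[str]]:
    """results maps suite name -> (passed, total); returns (ok, blocking reasons)."""
    reasons = []
    passed = sum(p for p, _ in results.values())
    total = sum(t for _, t in results.values())
    if total == 0 or passed / total < PASS_RATE_FLOOR:
        reasons.append(f"overall pass rate {passed}/{total} below floor")
    for suite in sorted(CRITICAL_SUITES):
        p, t = results.get(suite, (0, 0))
        if p < t:
            reasons.append(f"critical suite failing: {suite}")
    return (not reasons, reasons)

ok, reasons = release_gate({"checkout": (40, 40), "auth": (25, 25), "search": (99, 100)})
blocked, why = release_gate({"checkout": (39, 40), "auth": (25, 25)})
```

Emitting the reasons list alongside the pass/fail verdict is what turns a hard gate into the release-readiness insight stakeholders can act on.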

Collaboration & Advisory

Supporting teams with practical guidance on QA, AI testing, and automation challenges.

  • Clear communication across engineering, product, and leadership audiences
  • Mentorship and capability-building around modern quality engineering
  • Approachable technical collaboration for roles, projects, and problem-solving

Featured Builds

Recent Work Across AI, QA, and Automation

Selected repositories that reflect practical engineering depth, quality leadership, and AI-focused experimentation.

AgenticEvalUsingDeepeval

Agentic evaluation project focused on measuring AI answer quality, grounding behavior, and reliability with DeepEval workflows.

Why It Matters

Shows practical capability to evaluate non-deterministic AI systems with repeatable quality checks.

AI Evaluation • ai_testing • rag_evaluation

Technologies

Python • DeepEval • LLM Evaluation • RAG Testing

View Repository

Talkwise-AI-Language-learning-app

Talkwise AI Language Learning App

AI-powered language learning application built around conversational assistance and guided learning flows.

Why It Matters

Demonstrates product-focused AI implementation with strong UX orientation.

AI Application • ai_applications • product_build

Technologies

Next.js • TypeScript • AI Chat • Frontend Engineering

View Repository

Indian-Criminal-Law-Learning-Chatbot

Indian Criminal Law Learning Chatbot

Domain-focused chatbot built to support learning and exploration in Indian criminal law topics.

Why It Matters

Highlights ability to build niche, knowledge-driven assistants for specific domains.

Domain Chatbot • chatbot • domain_ai

Technologies

LLM • Chatbot • Prompting • Knowledge Structuring

View Repository

QnAChatbotwithSummarization

QnA Chatbot with Summarization

Question-answering chatbot with summarization capabilities for clearer, more digestible responses.

Why It Matters

Combines retrieval-style QnA with concise summarization for practical end-user utility.

Conversational AI • chatbot • summarization

Technologies

Python • NLP • Summarization • QnA

View Repository

Unified-automation-framework

Unified Automation Framework

Consolidated automation framework designed to streamline and scale quality execution across test layers.

Why It Matters

Represents a reusable engineering approach to maintainable, high-coverage automation.

Test Automation • automation • qa_engineering

Technologies

Selenium • Playwright • API Testing • Framework Design

View Repository

Playwright-ts-fullstack

Playwright TS Fullstack

Fullstack TypeScript project emphasizing Playwright-based automation and end-to-end quality coverage.

Why It Matters

Demonstrates practical fullstack testing depth with modern TypeScript tooling.

Fullstack QA • playwright • typescript

Technologies

Playwright • TypeScript • E2E Testing • Fullstack

View Repository

Selected Highlights

Technical Leadership with Real Delivery Impact

A concise view of my experience, focus areas, and collaboration style across modern quality engineering programs.

Experience

15+ Years

Quality engineering leadership across enterprise delivery environments

Specialization

AI + QA

LLM testing, RAG evaluation, and production-focused quality engineering

Automation

End-to-End

UI, API, integration, and performance testing with modern tooling

Collaboration

Cross-Functional

Trusted partner for technical teams, stakeholders, and delivery leaders

Selected Strengths

  • Scalable automation architecture for enterprise programs
  • AI and RAG quality validation for real-world product behavior
  • Risk-based release confidence and governance

Platform Exposure

  • Web and API-first systems
  • Cloud-aligned CI/CD delivery
  • Data-intensive and customer-facing product ecosystems

Technical Toolkit

  • Playwright, Selenium, API testing frameworks, JMeter
  • DeepEval, Ragas, Hugging Face Evaluate, Ollama
  • Jenkins, GitHub Actions, Azure DevOps, Docker

Ways to Work Together

  • Opportunities in QA, SDET leadership, and AI quality engineering
  • Collaboration on automation, RAG testing, and reliability programs
  • Technical discussions for complex quality and delivery challenges

Connect

Open to Conversations and Collaboration

Reach out for opportunities, technical collaboration, AI/QA discussions, or help with complex quality engineering challenges.

You can also start in the assistant and share your email for direct follow-up.