
Interview Prep

50 Manual Testing Interview Questions with Detailed Answers

QA Knowledge Hub·2026-04-13·14 min read

Manual testing interviews probe your grasp of core concepts, your ability to think like a tester, and your communication skills. These 50 questions cover the topics that come up in virtually every QA interview — from fresher to mid-level roles.

Fundamentals

1. What is software testing?

Software testing is the process of evaluating software to ensure it meets specified requirements and to find defects before users do. It involves executing the software under controlled conditions, comparing actual behaviour against expected behaviour, and documenting deviations.


2. What is the difference between testing and debugging?

Testing is the activity of finding defects — identifying that something is wrong. Debugging is the developer's activity of diagnosing and fixing the cause of a defect. QA finds the bug; development debugs and fixes it.


3. What is SDLC?

SDLC (Software Development Life Cycle) is the structured process for developing software. The phases are: Requirements gathering → Design → Development → Testing → Deployment → Maintenance. QA is most active in the Testing phase, but should be involved from Requirements gathering onward to catch ambiguities early.


4. What is STLC?

STLC (Software Testing Life Cycle) is the QA team's process within SDLC. Phases: Requirements Analysis → Test Planning → Test Case Design → Test Environment Setup → Test Execution → Defect Reporting → Test Closure.


5. What is the difference between verification and validation?

Verification is checking whether we are building the product right — reviewing documents, designs, and code against requirements without executing the software. Validation is checking whether we built the right product — executing the software and verifying it meets user needs.

Mnemonic: Verification = "Are we following the spec?" Validation = "Does it work for users?"


6. What is a test plan?

A test plan is a document that defines the scope, objectives, approach, schedule, resources, and risks for a testing effort. It is typically written by a QA lead before testing begins and includes: what will be tested, what will not be tested, testing environments, tools, entry/exit criteria, and risk mitigation.


7. What is a test case?

A test case is a documented set of conditions and steps used to verify a specific requirement. It contains: Test Case ID, Title, Preconditions, Steps, Expected Result, Actual Result, Status (Pass/Fail), and Priority.


8. What is a test suite?

A test suite is a collection of related test cases. For example, a "Payment Test Suite" contains all test cases related to the payment feature. Test suites help organise test execution and reporting.


9. What is a test scenario?

A test scenario is a high-level description of what to test, without specifying exact steps. It describes a user goal or use case. Example: "Verify user can successfully complete a purchase." A test scenario may generate multiple test cases (valid payment, declined payment, expired card, etc.).


10. What is the difference between a test case and a test scenario?

A test scenario describes what to verify (a feature or user journey). A test case describes how to verify it (specific steps, data, and expected results). One test scenario typically produces multiple test cases.


Defects and Bug Lifecycle

11. What is a defect/bug?

A defect is a deviation between the actual result and the expected result. When software behaves differently from what the requirement specifies or what a reasonable user would expect, it is a defect.


12. Explain the bug lifecycle.

The bug lifecycle (defect lifecycle) tracks a bug from discovery to closure:

  1. New — Bug found and reported by QA
  2. Assigned — Bug assigned to a developer for investigation
  3. Open — Developer begins working on it
  4. Fixed — Developer marks it as fixed
  5. Ready for Retest / In Retest — QA retests the fix
  6. Closed — QA confirms the fix works; bug is closed
  7. Reopened — QA finds the fix did not work; cycle repeats
  8. Duplicate — Already reported by someone else
  9. Rejected — Not a real bug (works as designed)
  10. Deferred — Will be fixed in a later release

13. What is the difference between severity and priority?

Severity = The impact of a bug on the system's functionality. Defined by QA.

Priority = How urgently the bug needs to be fixed. Defined by business/product owner.

They are independent:

  • A typo on the homepage: Low severity (no functional impact), High priority (visible to all users, brand damage)
  • A crash in a rarely-used admin-only page: High severity, Low priority

14. What makes a good bug report?

A good bug report contains:

  1. Clear, one-sentence title describing the problem
  2. Steps to reproduce (numbered, specific)
  3. Expected result
  4. Actual result
  5. Severity and priority
  6. Environment (browser, OS, app version, test data used)
  7. Screenshots or screen recording
  8. Frequency (does it happen every time or intermittently?)

15. What do you do if a developer says "It's not a bug, it's a feature"?

Refer back to the requirement documentation. If the behaviour contradicts the requirement or acceptance criteria, it is a bug regardless of what the developer says. Present the requirement clearly. If the requirement is ambiguous, escalate to the product owner or business analyst to clarify. Document their decision either way.


Test Design Techniques

16. What is boundary value analysis?

A black-box test design technique where test cases are created for values at the boundary of valid input ranges. If a field accepts ages 18–60:

  • Test: 17, 18, 19 (just below, at, just above lower boundary)
  • Test: 59, 60, 61 (just below, at, just above upper boundary)

Bugs cluster at boundaries because off-by-one errors are common in code.
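The boundary values above can be sketched as executable checks. `validate_age` is an invented stand-in for the system under test, not a real library function:

```python
# Boundary value analysis for the hypothetical 18-60 age field.
def validate_age(age: int) -> bool:
    return 18 <= age <= 60

# Values just below, at, and just above each boundary.
boundary_cases = {
    17: False, 18: True, 19: True,   # lower boundary
    59: True, 60: True, 61: False,   # upper boundary
}

for value, expected in boundary_cases.items():
    assert validate_age(value) == expected, f"boundary value {value} failed"
```

An off-by-one bug (writing `>` instead of `>=`) would pass a mid-range test with 35 but fail immediately at the 18 boundary, which is exactly why these values are chosen.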


17. What is equivalence partitioning?

Dividing input data into partitions (groups) where all values in a partition are expected to behave identically. Test one value from each partition instead of testing every possible value.

For an age field accepting 18–60:

  • Partition 1 (invalid low): < 18 — test with 5
  • Partition 2 (valid): 18–60 — test with 35
  • Partition 3 (invalid high): > 60 — test with 75
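The three partitions above can be expressed as one representative check each. As before, `validate_age` is an illustrative stand-in for the real field's validation:

```python
# Equivalence partitioning for the hypothetical 18-60 age field:
# test one representative per partition instead of every possible age.
def validate_age(age: int) -> bool:
    return 18 <= age <= 60

partitions = [
    (5,  False),  # invalid low partition: < 18
    (35, True),   # valid partition: 18-60
    (75, False),  # invalid high partition: > 60
]

for representative, expected in partitions:
    assert validate_age(representative) == expected
```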

18. What is decision table testing?

A technique for testing combinations of conditions and their corresponding actions. Used when business rules have multiple input conditions. Each column represents a test case with a unique combination of condition values.

Example: Discount eligibility based on membership status and order amount.

Condition         | TC1 | TC2 | TC3 | TC4
Premium Member    |  Y  |  Y  |  N  |  N
Order > ₹1000     |  Y  |  N  |  Y  |  N
Discount Applied  |  Y  |  N  |  N  |  N
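The discount rule in this example can be sketched as one check per column of the decision table. The rule itself (premium membership AND order over ₹1000) is an assumption made for illustration:

```python
# Hypothetical discount rule behind the decision table.
def discount_applied(premium: bool, order_total: int) -> bool:
    return premium and order_total > 1000

# One tuple per decision-table column: (premium, order_total, expected).
decision_table = [
    (True,  1500, True),   # TC1: Y, Y -> discount
    (True,   500, False),  # TC2: Y, N -> no discount
    (False, 1500, False),  # TC3: N, Y -> no discount
    (False,  500, False),  # TC4: N, N -> no discount
]

for premium, total, expected in decision_table:
    assert discount_applied(premium, total) == expected
```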

19. What is state transition testing?

Testing based on the different states an object can be in and the transitions between them. Used for features with state machines — login sessions, order status flows, ATM states.

Example: Order states — New → Processing → Shipped → Delivered → Returned. Tests verify valid transitions work and invalid transitions are blocked.
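The order flow above can be sketched as a small transition table. The allowed transitions are an assumption for illustration; a real order system would define its own:

```python
# Allowed transitions for the hypothetical order state machine.
ALLOWED = {
    "New": {"Processing"},
    "Processing": {"Shipped"},
    "Shipped": {"Delivered"},
    "Delivered": {"Returned"},
    "Returned": set(),  # terminal state
}

def can_transition(current: str, target: str) -> bool:
    return target in ALLOWED.get(current, set())

# Valid transitions work; invalid transitions are blocked.
assert can_transition("New", "Processing")
assert not can_transition("New", "Delivered")   # skipping states is blocked
assert not can_transition("Delivered", "New")   # going backwards is blocked
```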


20. What is exploratory testing?

Unscripted testing where the tester simultaneously designs and executes tests based on domain knowledge. There is no pre-written test case — the tester explores freely, documents findings, and adapts based on what they discover. Highly effective for finding bugs that scripted tests miss because they test what was not anticipated.


Testing Types

21. What is regression testing? Why is it important?

Regression testing is re-running previously passing test cases after a code change to verify that existing functionality still works. It is important because every bug fix or new feature is a potential source of regressions — unintended side effects that break working features.


22. What is the difference between smoke testing and sanity testing?

Smoke testing: Broad, shallow verification that a new build is stable enough for detailed testing. Tests core functionality (login, navigation, main flows). Done by QA immediately after each new build.

Sanity testing: Narrow, focused verification that a specific bug fix or change works correctly and has not broken related areas. Done after a specific patch.


23. What is alpha testing and beta testing?

Alpha testing: Testing performed by internal teams (QA, developers, employees) before the product is released to external users. Done in a controlled environment.

Beta testing: Testing performed by a selected group of real end users in a production-like environment before general availability. Feedback is used for final improvements.


24. What is UAT?

User Acceptance Testing (UAT) is testing conducted by business stakeholders or end users to confirm the software meets business requirements before production release. QA engineers often coordinate UAT but the actual testing is done by business users.


25. What is compatibility testing?

Verifying the application works correctly across different browsers (Chrome, Firefox, Safari, Edge), operating systems (Windows, macOS, Android, iOS), devices (desktop, tablet, mobile), and screen resolutions.


Test Management and Process

26. What are entry criteria and exit criteria for testing?

Entry criteria: Conditions that must be met before testing can begin. Example: build deployed to test environment, test cases reviewed and approved, test data ready, no critical open defects from previous cycle.

Exit criteria: Conditions that must be met before testing can be declared complete. Example: all planned test cases executed, no Critical or High bugs open, bug fix rate above 90%.


27. What is a test environment?

The server, database, browser, OS, and network configuration where tests are executed. Typically separate from production. Common environments: Dev (developer testing) → QA/Test (QA testing) → Staging/UAT (pre-production, mirrors production) → Production.


28. What is test coverage?

The degree to which test cases cover the requirements, code, or user scenarios. High requirement coverage means all requirements have at least one test case. Test coverage is a quality indicator — 100% coverage does not mean zero bugs, but gaps in coverage are risks.


29. How do you prioritise test cases when time is limited?

Focus on:

  1. Critical business flows (checkout, payment, login)
  2. High-risk areas (recently changed code, historically buggy features)
  3. High-severity test cases (scenarios where failure causes data loss or security breach)
  4. Smoke test suite (verifies core functionality)

Skip: Low-priority edge cases, UI cosmetic checks, non-critical features not recently changed.


30. What do you do when requirements are unclear or incomplete?

Raise clarification questions immediately — do not assume. Document the ambiguity. Request a meeting with the BA, product owner, or developer to resolve it. Document the agreed interpretation. Incomplete requirements caught during test planning are significantly cheaper to fix than bugs found after release.


Specific Scenarios

31. How would you test a login form?

Positive cases: Valid credentials → success. Different valid credential formats.

Negative cases: Wrong password → error. Wrong username → error. Empty username → validation. Empty password → validation. Both empty → validation.

Edge cases: Maximum length username. Password with special characters. SQL injection attempt (' OR 1=1 --). Case sensitivity (is login case-sensitive?). Multiple failed attempts → account lock? Session persistence (remember me functionality). Redirect after login.
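The positive and negative cases above can be tabulated as data-driven checks. `check_login` here is a simplified stand-in for the real system under test, with invented credentials:

```python
# Stand-in for the login endpoint; behaviour and credentials are invented.
def check_login(username: str, password: str) -> str:
    if not username or not password:
        return "validation_error"
    if username == "alice" and password == "s3cret":
        return "success"
    return "invalid_credentials"

cases = [
    ("alice", "s3cret", "success"),              # valid credentials
    ("alice", "wrong",  "invalid_credentials"),  # wrong password
    ("bob",   "s3cret", "invalid_credentials"),  # wrong username
    ("",      "s3cret", "validation_error"),     # empty username
    ("alice", "",       "validation_error"),     # empty password
    ("",      "",       "validation_error"),     # both empty
]

for user, pwd, expected in cases:
    assert check_login(user, pwd) == expected
```

In practice this same table shape maps directly onto a parametrised test (for example with pytest), so each credential combination reports as its own pass or fail.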


32. How would you test a search feature?

Functional: Search with exact match. Search with partial match. Search with multiple keywords. Search with uppercase/lowercase.

Edge cases: Empty search. Single character search. Very long search string. Special characters. Numbers only. Search with no results.

Non-functional: Response time for search. Behaviour under concurrent searches.


33. What would you test for a payment gateway integration?

Valid payment: Successful transaction. Correct amount charged. Confirmation email sent. Order status updated.

Invalid payment: Declined card. Insufficient funds. Expired card. Invalid card number. Invalid CVV.

Edge cases: Network timeout during payment. User clicks pay twice (double charge prevention). Currency conversion if international.

Security: Is card data sent over HTTPS? Is card number masked in logs? Is CVV stored?


34. How would you test a file upload feature?

Valid: Upload allowed file types (PDF, PNG, JPG). Upload within maximum file size.

Invalid: Upload unsupported file type (.exe, .bat). Upload file exceeding maximum size. Upload empty file. Upload with no file selected.

Edge cases: Upload very large file (near the limit). Upload file with special characters in filename. Upload multiple files if batch upload is allowed. Cancel mid-upload.


35. How do you test APIs manually?

Using Postman or similar tools:

  • Send valid requests, verify status codes and response body
  • Test authentication (valid token, expired token, no token)
  • Test input validation (missing required fields, wrong data types)
  • Test boundary values (maximum string length, zero, negative numbers)
  • Test idempotency for PUT/DELETE
  • Check response headers (Content-Type, CORS, cache headers)
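The checks above can be sketched as assertions on a response. The endpoint, fields, and token behaviour are all assumptions; in practice you would run these checks in Postman or with an HTTP client against the real API:

```python
# Stand-in for GET /users/42 against a hypothetical API.
def fake_get_user(token):
    if token is None:
        return {"status": 401, "headers": {}, "body": {"error": "missing token"}}
    return {
        "status": 200,
        "headers": {"Content-Type": "application/json"},
        "body": {"id": 42, "name": "Alice"},
    }

# Valid request: check status code, headers, and response body shape.
ok = fake_get_user("valid-token")
assert ok["status"] == 200
assert ok["headers"]["Content-Type"] == "application/json"
assert ok["body"]["id"] == 42

# No token: authentication must be enforced.
unauth = fake_get_user(None)
assert unauth["status"] == 401
```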

Mindset and Soft Skills

36. What makes a good QA engineer?

Attention to detail (spotting inconsistencies others miss), curiosity (asking "what if?"), systematic thinking (structured approach to coverage), communication (clear bug reports that developers can reproduce), empathy for users (thinking from the user's perspective, not just the spec).


37. How do you handle a situation where a developer disputes your bug report?

Stay factual. Share clear reproduction steps. Show evidence (screenshot, video, logs). Reference the requirement. If disagreement continues, escalate to the tech lead or product owner — not as a conflict, but as a clarification request. Document the outcome either way.


38. How do you ensure you have not missed any test cases?

Use test design techniques (boundary value analysis, equivalence partitioning, decision tables). Trace test cases back to requirements (requirement traceability matrix). Perform exploratory testing after scripted execution. Review with a colleague. Reference similar historical features for coverage ideas.


39. What is a traceability matrix?

A document (usually a table) that maps requirements to test cases, ensuring every requirement has test coverage. Columns typically: Requirement ID → Requirement Description → Test Case IDs. Used to identify gaps (requirements without test cases) and redundancies.


40. What is test data management?

The process of creating, managing, and maintaining data used during testing. Good test data covers: valid cases, invalid cases, boundary conditions, NULL values, large data sets (for performance tests), and edge cases. Test data should be realistic, independent across test cases, and refreshable.


41–50. Quick-Fire Round

41. What is a showstopper bug? A critical defect that blocks further testing or prevents a release. Example: login is completely broken — no other test can be executed.

42. What is positive testing? Testing with valid inputs, verifying the system works as intended.

43. What is negative testing? Testing with invalid inputs, verifying the system handles errors gracefully without crashing.

44. What is end-to-end testing? Testing a complete user journey from start to finish, verifying all integrated components work together.

45. What is the V-model? A development model where each development phase has a corresponding testing phase — requirements ↔ acceptance testing, design ↔ integration testing, coding ↔ unit testing. Testing activities are planned in parallel with development.

46. What is shift-left testing? Moving testing activities earlier in the development process — reviewing requirements, participating in design reviews, testing during development rather than only after. Catches defects earlier when they are cheaper to fix.

47. What is a risk-based testing approach? Prioritising tests based on the probability and impact of failure. High-risk areas (payment, security, core user flows) get the most test coverage. Low-risk areas get basic coverage. Used when time is limited.

48. What is a hotfix and how do you test it? A hotfix is an urgent patch deployed to production to fix a critical bug. Testing a hotfix is limited and focused — verify the fix works, run a targeted sanity test of related areas, and run a quick smoke test of core flows. Full regression is usually not possible in the time available.

49. What is configuration testing? Verifying the application works correctly across different hardware configurations, software configurations (different database versions, web servers), and system settings.

50. What is mutation testing? A technique where small changes ("mutations") are deliberately introduced into the code, then tests are run to check if they catch the mutations. Tests that do not fail for any mutation are considered ineffective. Used to evaluate test suite quality — primarily relevant for unit test suites.
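A tiny illustration of the mutation idea, with functions invented for the example: a mutant flips `>=` to `>`, and only a test that checks the boundary value kills it.

```python
def is_adult(age):          # original: boundary at 18
    return age >= 18

def is_adult_mutant(age):   # mutant: >= changed to >
    return age > 18

def weak_test(fn):
    # Only checks values far from the boundary, so the mutant survives.
    return fn(30) is True and fn(5) is False

def strong_test(fn):
    # Also checks the boundary value 18, so the mutant is killed.
    return weak_test(fn) and fn(18) is True

assert weak_test(is_adult) and weak_test(is_adult_mutant)         # mutant survives
assert strong_test(is_adult) and not strong_test(is_adult_mutant)  # mutant killed
```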


Preparing for Your Interview

Read these answers until you can explain them in your own words — not recite them word-for-word. Interviewers can immediately tell when someone is reciting versus understanding.

For each answer, think of a real or plausible example from an application you have used. "I would test the search feature like this because I noticed when testing Swiggy's restaurant search..." is a much stronger answer than a generic definition.

The QA Interview Kit includes 200+ additional questions with extended model answers, grouped by topic and difficulty level.

Recommended Resource

QA Interview Kit

Interview prep kit with real-world QA and API scenarios.

₹999 · Get This Guide →
