
Manual Testing

Types of Software Testing Every QA Engineer Must Know

QA Knowledge Hub·2026-04-07·9 min read

Interviewers ask about testing types constantly, and many candidates give vague, textbook answers. This guide explains each type of testing clearly — what it actually means, when teams use it, and the real-world context behind the definition.

The Two High-Level Divisions

Before the full list, here are the two fundamental categories that every other type falls under:

Functional Testing

Verifies that the software does what it is supposed to do. "Does the login work?" "Does the checkout calculate tax correctly?" Functional testing is about behaviour.

Non-Functional Testing

Verifies how well the software performs a function. "How many users can log in simultaneously?" "How fast does the checkout load?" Non-functional testing is about quality attributes.

Most of your day-to-day work as a manual or automation tester is functional. Performance, security, and accessibility testing are specialised non-functional disciplines.


Functional Testing Types

Unit Testing

What: Testing an individual function or class in isolation, verifying its logic independently of the rest of the system.

Who does it: Developers.

Example: Testing a single calculateDiscount(price, couponCode) function to verify it returns the correct reduced price for each coupon type.

Tools: JUnit (Java), Pytest (Python), Jest (JavaScript)

QA relevance: You should understand that unit tests exist and how they influence test coverage, but you typically do not write them.
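The calculateDiscount example above can be sketched as a minimal pytest-style unit test. The function body and coupon codes are hypothetical, invented purely for illustration:

```python
# Hypothetical implementation under test; the coupon codes and
# rates are illustrative, not from a real system.
def calculate_discount(price, coupon_code):
    rates = {"SAVE10": 0.10, "SAVE25": 0.25}
    return round(price * (1 - rates.get(coupon_code, 0.0)), 2)


# Unit tests: each one verifies a single behaviour of the
# function in complete isolation.
def test_valid_coupon_reduces_price():
    assert calculate_discount(100.0, "SAVE25") == 75.0


def test_unknown_coupon_leaves_price_unchanged():
    assert calculate_discount(100.0, "BOGUS") == 100.0
```

Running `pytest` on this file picks up the `test_`-prefixed functions automatically.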


Integration Testing

What: Testing how two or more modules work together. Catches issues that unit tests miss — for example, a function that works correctly in isolation but fails when it receives real data from another module.

Example: Testing that the order service correctly calls the payment service, and that the payment service correctly updates the order status when payment succeeds.

Tools: Postman (for API-level integration), REST Assured, custom test harnesses.
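The order-service/payment-service example can be sketched as a small in-process integration test. Both classes below are illustrative stand-ins, not a real framework:

```python
class PaymentService:
    """Stand-in payment module."""
    def charge(self, amount):
        return {"status": "success", "amount": amount}


class OrderService:
    """Stand-in order module that depends on the payment module."""
    def __init__(self, payments):
        self.payments = payments
        self.status = "new"

    def checkout(self, amount):
        result = self.payments.charge(amount)
        if result["status"] == "success":
            # The cross-module behaviour under test: a successful
            # charge must flip the order status to "paid".
            self.status = "paid"
        return self.status


def test_order_marked_paid_after_successful_charge():
    order = OrderService(PaymentService())
    assert order.checkout(49.99) == "paid"
```

A unit test of either class alone would pass even if the status update between them were broken; the integration test catches exactly that gap.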


System Testing

What: Testing the complete, integrated system against the requirements. This is where most QA engineers spend their time. The entire application is deployed to a test environment and verified end-to-end.

Example: Verifying the complete order journey: user logs in → searches products → adds to cart → applies coupon → enters address → pays → receives confirmation email → order appears in admin dashboard.


End-to-End (E2E) Testing

What: Testing a complete user journey from start to finish across all system layers. Often used interchangeably with system testing, but E2E specifically emphasises the user's perspective — including external integrations like payment gateways, email services, and third-party APIs.

Tools: Selenium, Playwright, Cypress.


Acceptance Testing

What: Testing performed to verify the system meets business requirements. Usually done by business stakeholders, product owners, or clients — not QA engineers.

User Acceptance Testing (UAT): Real users test the software in an environment that mirrors production.

Business Acceptance Testing (BAT): Business analysts verify against business requirements.

Regulatory Acceptance Testing: Compliance teams verify the software meets legal or regulatory requirements (common in banking, healthcare, fintech).


Regression Testing

What: Re-running previously passing tests after a code change to ensure existing functionality still works.

Why it matters: Every bug fix or new feature is a potential source of regression — breaking something that used to work. Regression testing is the safety net.

Types of regression:

  • Full regression: Run all test cases. Done before major releases.
  • Partial/selective regression: Run only tests related to changed areas. Done after small patches.
  • Sanity testing: Quick, narrow regression on a specific fix. Done after a bug is patched.

Regression testing is the primary driver for test automation — the same tests run repeatedly, making them perfect automation candidates.
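The full-versus-partial distinction can be illustrated with a hand-rolled test registry (a simplified stand-in for pytest's marker-based `-m` selection; every name here is hypothetical):

```python
TESTS = []


def area(name):
    """Tag a test with the feature area it covers."""
    def wrap(fn):
        TESTS.append((name, fn))
        return fn
    return wrap


@area("search")
def test_filter_by_price():
    assert sorted([30, 10, 20])[0] == 10


@area("checkout")
def test_tax_applied_to_total():
    assert round(100 * 1.08, 2) == 108.0


def run_regression(changed_area=None):
    """None runs everything (full regression); a specific area name
    runs only related tests (partial/selective regression)."""
    ran = []
    for name, fn in TESTS:
        if changed_area is None or name == changed_area:
            fn()
            ran.append(fn.__name__)
    return ran
```

After a patch to search, `run_regression("search")` reruns only the search tests; before a major release, `run_regression()` reruns the whole suite.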


Smoke Testing

What: A quick, high-level check to verify a new build is stable enough for detailed testing.

The analogy: When a circuit board is first powered on, engineers watch for smoke — any visible sign that it is fundamentally broken. If it smokes, no further testing is needed.

What smoke tests cover: Core application startup, login functionality, navigation between main sections, one representative function in each major area.

When it runs: Immediately after every new build is deployed to the test environment. If smoke fails, the build is rejected without spending time on detailed testing.
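The fail-fast nature of that gate can be sketched as follows. The checks are lambda stand-ins for real probes (app startup, login, navigation), and the failing one simulates a broken build:

```python
def smoke_suite(checks):
    """Run quick checks in order; stop at the first failure so no
    time is spent on detailed testing of a broken build."""
    for name, check in checks:
        if not check():
            return ("build rejected", name)
    return ("build accepted", None)


# Stand-ins for real probes against the test environment.
checks = [
    ("app starts", lambda: True),
    ("login works", lambda: True),
    ("main navigation loads", lambda: False),  # simulate a broken build
]
```

With this list, `smoke_suite(checks)` stops at the failed navigation check and rejects the build immediately.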


Sanity Testing

What: A narrow, focused test to verify a specific area after a code change or bug fix.

Smoke vs Sanity:

  • Smoke: "Is the application fundamentally working?" (broad, shallow)
  • Sanity: "Is this specific thing working after the fix?" (narrow, deep)

Example: After a developer patches a bug in the search filter, sanity testing verifies that the search filter works correctly. It does not test the entire application.


Exploratory Testing

What: Simultaneous test design and execution without a pre-written test script. The tester explores the application freely, using domain knowledge and intuition to find bugs.

Why it is effective: Scripted test cases find bugs you anticipated. Exploratory testing finds bugs you did not. Real users behave in unexpected ways — exploratory testing simulates that.

How it works in practice: Time-boxed sessions (60–90 minutes) with a charter — a brief description of what to explore. "Explore the checkout flow with different payment methods and edge-case address formats."


Re-Testing

What: Executing a failed test case after a developer claims to have fixed the associated bug, to confirm the fix works.

Different from regression: Re-testing is about the specific bug. Regression testing is about everything else.


Black-Box Testing

What: Testing the application without knowledge of the internal code. The tester only knows inputs and expected outputs — not how the system is implemented.

All manual testing by QA engineers is black-box testing by default.


White-Box Testing

What: Testing with knowledge of the internal code. Test cases are designed to exercise specific code paths, branches, or conditions.

Who does it: Developers, or QA engineers who can read code. Code coverage metrics come from white-box analysis.


Grey-Box Testing

What: A combination — some knowledge of the internal structure, but testing from the user's perspective. Common in API testing, where the tester knows the API contract (endpoints, request/response format) but tests it as an external consumer.
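A grey-box API check in that spirit: the tester knows the contract (the endpoint and the JSON shape) but nothing about the implementation. `fake_get` below is a stand-in for a real HTTP client call, and the endpoint and fields are hypothetical:

```python
import json


def fake_get(path):
    """Stand-in for an HTTP GET against the system under test."""
    if path == "/api/orders/42":
        return 200, json.dumps({"id": 42, "status": "paid", "total": 108.0})
    return 404, json.dumps({"error": "not found"})


def test_order_contract():
    status, body = fake_get("/api/orders/42")
    payload = json.loads(body)
    assert status == 200
    assert {"id", "status", "total"} <= set(payload)  # required fields present
    assert isinstance(payload["total"], float)        # contract: numeric total
```

The test never inspects internal code; it only asserts that the response honours the published contract, which is exactly the grey-box stance.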


Non-Functional Testing Types

Performance Testing

What: Verifying the system performs adequately under expected and peak load.

Sub-types:

  • Load Testing: Simulate expected user volume. "How does the system behave with 1,000 concurrent users?"
  • Stress Testing: Push beyond normal limits to find the breaking point.
  • Spike Testing: Sudden surge of users — can the system handle a flash sale?
  • Soak Testing / Endurance Testing: Run under sustained load for hours to detect memory leaks.

Tools: JMeter, k6, Gatling, Locust
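Tools like JMeter or Locust do this at scale, but the core idea of a load test can be sketched with a thread pool. The `time.sleep` here is a stand-in for a real HTTP request's latency:

```python
import time
from concurrent.futures import ThreadPoolExecutor


def send_request(_):
    """Stand-in for one HTTP request; sleeps to simulate latency."""
    start = time.perf_counter()
    time.sleep(0.01)
    return time.perf_counter() - start


def load_test(concurrent_users=50):
    """Fire requests concurrently and report (avg, max) latency."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        latencies = list(pool.map(send_request, range(concurrent_users)))
    return sum(latencies) / len(latencies), max(latencies)
```

Raising `concurrent_users` beyond what the system handles gracefully is, in miniature, the move from load testing to stress testing.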


Security Testing

What: Identifying vulnerabilities that could be exploited by attackers.

Types: Penetration testing, vulnerability scanning, SQL injection testing, cross-site scripting (XSS) testing, authentication testing.

Tools: OWASP ZAP, Burp Suite.
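The classic SQL injection case can be demonstrated with the standard-library sqlite3 module: a naively concatenated query is bypassed by the `' OR '1'='1` payload, while a parameterised query is not. The table and credentials are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pw TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

PAYLOAD = "' OR '1'='1"


def naive_login(name, pw):
    # Vulnerable: user input is concatenated straight into the SQL,
    # so the payload rewrites the WHERE clause to match every row.
    q = f"SELECT * FROM users WHERE name='{name}' AND pw='{pw}'"
    return conn.execute(q).fetchall()


def safe_login(name, pw):
    # Parameterised: the driver treats the input as data, not SQL.
    return conn.execute(
        "SELECT * FROM users WHERE name=? AND pw=?", (name, pw)
    ).fetchall()
```

`naive_login("alice", PAYLOAD)` returns a row despite the wrong password; `safe_login` with the same payload returns nothing.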


Usability Testing

What: Evaluating how easy the application is for real users to use. Often involves watching users interact with the product and noting where they get confused, frustrated, or lost.

Measured by: Task completion rate, error rate, time on task, satisfaction scores.


Accessibility Testing

What: Verifying the application is usable by people with disabilities — visual, motor, cognitive, and auditory.

Standards: WCAG 2.1 (Web Content Accessibility Guidelines).

Tools: Axe, Lighthouse, NVDA (screen reader testing).

Why it matters: Many countries legally require accessibility compliance for web applications.


Compatibility Testing

What: Verifying the application works across different environments.

Types:

  • Browser compatibility: Chrome, Firefox, Safari, Edge
  • OS compatibility: Windows, macOS, Linux, Android, iOS
  • Device compatibility: Desktop, tablet, mobile (different screen sizes)
  • Version compatibility: Old and new versions of browsers/OS

Tools: BrowserStack, LambdaTest.


Localisation Testing

What: Verifying the application is correctly adapted for a specific locale — language, date/time formats, currency, and cultural norms.

Example: Testing that the date string "04/05/2026" is interpreted correctly for each locale — April 5 in the US (MM/DD/YYYY) versus 4 May in the UK (DD/MM/YYYY).
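That ambiguity is easy to reproduce with the standard library, where the same string parses to two different dates depending on the locale's format:

```python
from datetime import datetime

raw = "04/05/2026"
us = datetime.strptime(raw, "%m/%d/%Y").date()  # US: month first -> April 5
uk = datetime.strptime(raw, "%d/%m/%Y").date()  # UK: day first   -> 4 May

assert us.month == 4 and us.day == 5
assert uk.month == 5 and uk.day == 4
```

A localisation test suite would assert the rendered format for each target locale, not just one.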


Installation Testing

What: Verifying the software installs, upgrades, and uninstalls correctly on target environments. Primarily relevant for desktop applications, mobile apps, and software packages.


How Testing Types Fit Together

In a real project, these types work in sequence:

Developer writes code
       ↓
Unit tests (by developer)
       ↓
Integration tests (API-level)
       ↓
System / E2E tests (QA team)
       ↓
Performance tests (load profiles)
       ↓
UAT (business stakeholders)
       ↓
Regression tests (before each release)
       ↓
Smoke tests (after each deployment)

Exploratory testing runs in parallel with scripted system testing to find what the scripts miss.

Interview Answer Framework

When an interviewer asks "What types of testing have you done?", answer with specifics:

"In my last role, most of my work was system testing and regression testing for the web application's checkout and payment features. I also did exploratory testing during sprint testing cycles to find edge cases beyond the scripted test cases. I have done API integration testing using Postman for the backend APIs. I have not done performance or security testing directly, but I understand the concepts and have coordinated with the performance testing team."

This answer shows practical experience, honest scope, and awareness of the broader landscape — which is what interviewers are evaluating.

Summary

Every testing type exists to answer a different question:

  • Unit/Integration: Is the code internally correct?
  • System/E2E: Does the application work end to end?
  • Regression: Did we break anything with this change?
  • Smoke/Sanity: Is this build/fix safe to test further?
  • Exploratory: What did we forget to test?
  • Performance: How does it behave under pressure?
  • UAT: Does it meet business needs?

Know what each type is for and when it is used. That understanding — not the ability to recite definitions — is what separates competent QA engineers from those who just execute scripts.

Recommended Resource

QA Interview Kit

Interview prep kit with real-world QA and API scenarios.

Get This Guide →

Related Posts

📝
Interview Prep
Apr 2026·14 min read

50 Manual Testing Interview Questions with Detailed Answers

The most frequently asked manual testing interview questions with complete model answers — covering SDLC, test design, bug reporting, and testing mindset.

Read article →
📝
Career
Apr 2026·10 min read

How to Get Your First QA Job as a Fresher in 2026

A no-fluff guide to landing your first QA job — what skills to build, how to make a resume that gets shortlisted, and how to clear technical interviews.

Read article →
📝
Career
Apr 2026·10 min read

How to Transition from Manual Tester to Automation Engineer

A realistic, step-by-step guide to moving from manual QA into automation — what to learn, how to build experience, and how to land the role.

Read article →