
API Testing Interview Questions

Level: Mid–Senior
Duration: 45–60 min
Focus: Quality + APIs

A practical API testing interview guide covering fundamentals, tooling, real-world scenarios, and senior-level depth questions — with expected answers and red flags to watch for.

01

Fundamentals

Warm-up questions to establish baseline understanding quickly. A strong candidate answers these fluently without hesitation.
What is the difference between functional and non-functional API testing?
Expected
Functional testing validates that the API behaves correctly — correct responses, proper status codes, right data returned. Non-functional covers performance (response time, throughput), security (authentication, injection), reliability (retries, timeouts), and scalability. A strong answer names specific examples in both categories.
Red flags
Conflates API testing with UI testing. Cannot name non-functional categories beyond 'performance'. Ignores status codes or response contracts entirely.
What are idempotent HTTP methods and why do they matter for testing?
Expected
GET, PUT, and DELETE are idempotent — calling them multiple times produces the same result. POST is not idempotent. This matters for testing retry logic, network failure recovery, and ensuring that duplicate requests don't create duplicate side effects. A strong answer connects idempotency to real-world consequences like double charges or duplicate records.
Red flags
Cannot name which methods are idempotent. Thinks PUT is not idempotent. Cannot explain why retries make idempotency critical in production systems.
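A minimal pytest sketch of what a strong candidate might describe, assuming a hypothetical /items API (the base URL, paths, and payloads are illustrative, not a real service):

```python
import requests

BASE_URL = "https://api.example.test"  # hypothetical service

def test_put_is_idempotent():
    # Sending the same PUT twice should leave the resource in the same state.
    payload = {"name": "widget", "price": 10}
    first = requests.put(f"{BASE_URL}/items/42", json=payload)
    second = requests.put(f"{BASE_URL}/items/42", json=payload)
    assert first.status_code in (200, 201, 204)
    assert second.status_code in (200, 204)
    # The stored resource must be identical after the retry.
    stored = requests.get(f"{BASE_URL}/items/42").json()
    assert stored == {"id": 42, "name": "widget", "price": 10}

def test_post_is_not_idempotent():
    # Two identical POSTs are expected to create two distinct resources,
    # which is exactly why naive retries of POST are dangerous in production.
    payload = {"name": "widget", "price": 10}
    first = requests.post(f"{BASE_URL}/items", json=payload)
    second = requests.post(f"{BASE_URL}/items", json=payload)
    assert first.json()["id"] != second.json()["id"]
```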
What is the difference between a 401 and a 403 response?
Expected
401 Unauthorized means the request lacks valid authentication credentials — the client is not identified. 403 Forbidden means the client is authenticated but does not have permission to access the resource. A strong answer notes that a 401 should prompt re-authentication, while a 403 should not, because re-authenticating as the same user will produce the same result.
Red flags
Uses 401 and 403 interchangeably. Cannot explain the authentication versus authorisation distinction. Has never tested for incorrect status code responses.
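A hedged sketch of how this distinction can be asserted in automation, assuming a hypothetical admin-only endpoint and placeholder tokens:

```python
import requests

BASE_URL = "https://api.example.test"        # hypothetical service
ADMIN_ONLY = f"{BASE_URL}/admin/reports"     # hypothetical admin-only endpoint

def test_missing_credentials_returns_401():
    # No Authorization header at all: the client is unidentified.
    response = requests.get(ADMIN_ONLY)
    assert response.status_code == 401

def test_authenticated_without_permission_returns_403():
    # A valid token for a non-admin user: identified, but not allowed.
    headers = {"Authorization": "Bearer <valid-non-admin-token>"}
    response = requests.get(ADMIN_ONLY, headers=headers)
    assert response.status_code == 403
```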
What is the difference between REST and SOAP APIs, and how does testing differ?
Expected
REST uses plain HTTP semantics, typically with JSON payloads, and is stateless. SOAP uses XML with a defined envelope structure and supports WS-Security, transactions, and formal contracts via WSDL. Testing SOAP requires XML schema validation and WSDL awareness. Testing REST focuses on HTTP semantics, JSON schema validation, and status codes. A strong answer notes that SOAP testing tools differ — SoapUI versus Postman.
Red flags
Has only worked with REST and cannot describe SOAP at a conceptual level. Does not know what WSDL is. Cannot explain why JSON schema validation matters for REST contracts.
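Since the expected answer hinges on JSON schema validation for REST contracts, here is one way to express that check with the jsonschema library. The schema and endpoint are illustrative assumptions:

```python
import requests
from jsonschema import validate  # pip install jsonschema

# Illustrative contract: every user response must carry these fields and types.
USER_SCHEMA = {
    "type": "object",
    "required": ["id", "email"],
    "properties": {
        "id": {"type": "integer"},
        "email": {"type": "string"},
    },
}

def test_user_response_matches_contract():
    response = requests.get("https://api.example.test/users/1")  # hypothetical
    assert response.status_code == 200
    # Raises jsonschema.ValidationError if the body drifts from the contract.
    validate(instance=response.json(), schema=USER_SCHEMA)
```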
What is the difference between black-box and white-box API testing?
Expected
Black-box testing treats the API as an unknown system — inputs are sent and outputs validated without knowledge of the implementation. White-box testing uses knowledge of the code, database schema, or internal logic to design more targeted tests — testing code paths, edge cases revealed by the implementation, and internal state changes. A strong answer notes that most external API testing is black-box, but internal QA often benefits from white-box insight.
Red flags
Has never applied white-box thinking to API testing. Cannot design tests informed by knowing what a function does internally. Thinks all API testing must be black-box.
02

Tooling and Automation

Assess hands-on practical experience with real tools — not just awareness of their existence.
How do you use Postman beyond basic request sending?
Expected
Collections for organised test suites, environments for switching between dev/staging/prod, pre-request scripts for generating dynamic data or auth tokens, test scripts using pm.test() and pm.expect() for assertions, Newman for running collections in CI/CD pipelines, and collection variables for sharing state between requests. A strong answer describes a real workflow they've used in production.
Red flags
Only uses Postman to manually send requests and eyeball responses. Has never written a test script. Does not know what Newman is.
How would you automate API tests and integrate them into a CI/CD pipeline?
Expected
Choose a framework — RestAssured for Java, Pytest with requests or HTTPX for Python, SuperTest for Node.js, or Newman for Postman collections. Write tests that cover happy paths, error cases, and contract validation. Run the test suite on every pull request and block merges on failures. Generate reports for visibility. A strong answer describes a real CI integration they've built or contributed to.
Red flags
Has never integrated API tests into a pipeline. Thinks API automation means UI automation with Selenium. Cannot name a specific API testing framework in a language they work in.
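As one minimal illustration, a pytest suite like the following can run on every pull request; the staging URL and endpoints are assumptions, and the --junitxml flag produces a report most CI systems can surface:

```python
# test_users_api.py — run in CI with: pytest --junitxml=report.xml
import requests

BASE_URL = "https://staging.example.test"  # hypothetical staging environment

def test_get_user_happy_path():
    response = requests.get(f"{BASE_URL}/users/1")
    assert response.status_code == 200
    assert response.json()["id"] == 1

def test_get_missing_user_returns_404():
    # Error cases belong in the same suite that blocks merges on failure.
    response = requests.get(f"{BASE_URL}/users/999999")
    assert response.status_code == 404
```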
What is the difference between Postman, RestAssured, and Karate DSL for API testing?
Expected
Postman is tool-first and good for exploratory testing, manual testing, and lightweight automation via Newman. RestAssured is a Java library for writing API tests in code — better for large test suites requiring complex logic and CI integration. Karate DSL combines API testing, mocking, and performance testing in a BDD-style syntax without requiring a programming language — good for teams that want readable test files. A strong answer includes when they'd choose each.
Red flags
Only knows one tool. Cannot articulate trade-offs. Has never chosen a testing approach based on team context.
How do you manage authentication in automated API tests?
Expected
Store credentials in environment variables, never hardcode them. Use pre-request scripts to obtain tokens via OAuth flows and inject them into subsequent requests. Implement token refresh logic for long-running test suites. Use API keys in CI/CD via secrets management (AWS Secrets Manager, GitHub Secrets). A strong answer includes a specific approach they've used for OAuth 2.0 or JWT-based APIs.
Red flags
Hardcodes tokens or credentials in test files. Has never dealt with token expiry in automated tests. Cannot describe how to handle OAuth 2.0 token acquisition in automation.
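A sketch of the pattern, assuming a hypothetical OAuth 2.0 client-credentials token endpoint, with credentials read from environment variables as the answer recommends:

```python
import os
import requests
import pytest

TOKEN_URL = "https://auth.example.test/oauth/token"  # hypothetical token endpoint

@pytest.fixture(scope="session")
def access_token():
    # Credentials come from the environment (e.g. CI secrets), never from code.
    response = requests.post(TOKEN_URL, data={
        "grant_type": "client_credentials",
        "client_id": os.environ["API_CLIENT_ID"],
        "client_secret": os.environ["API_CLIENT_SECRET"],
    })
    response.raise_for_status()
    # Session scope caches the token; long-running suites would also need
    # refresh logic keyed on the expires_in value.
    return response.json()["access_token"]

def test_protected_endpoint(access_token):
    headers = {"Authorization": f"Bearer {access_token}"}
    response = requests.get("https://api.example.test/me", headers=headers)
    assert response.status_code == 200
```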
03

Practical Scenarios

How candidates think when requirements are incomplete and conditions are messy. Look for systematic thinking, not just correct answers.
How would you test pagination, filtering, and sorting on a list endpoint?
Expected
Pagination: boundary cases (first page, last page, empty page, page beyond total), consistent ordering across pages, correct total count in response, cursor vs offset pagination edge cases. Filtering: single filters, combined filters, invalid filter values, filters that return empty results. Sorting: ascending and descending, stable sorting for equal values, sorting on non-default fields, sort plus filter combined. A strong answer addresses determinism — sorting must be consistent across requests with identical parameters.
Red flags
Only tests the happy path. Does not consider empty results or boundary pages. Has never verified that sorting is stable and deterministic.
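Two of these checks, page overlap and determinism, translate directly into code. A sketch assuming a hypothetical /orders endpoint that returns an "items" array and an empty page past the end:

```python
import requests

BASE_URL = "https://api.example.test"  # hypothetical list endpoint

def test_pages_do_not_overlap():
    seen_ids = set()
    page = 1
    while True:
        response = requests.get(f"{BASE_URL}/orders", params={
            "page": page, "per_page": 50, "sort": "created_at"})
        assert response.status_code == 200
        items = response.json()["items"]
        if not items:
            break
        ids = [item["id"] for item in items]
        # No item should appear on more than one page.
        assert not seen_ids.intersection(ids)
        seen_ids.update(ids)
        page += 1

def test_identical_requests_return_identical_order():
    # Determinism: the same parameters must return the same ordering.
    params = {"page": 1, "per_page": 50, "sort": "created_at"}
    first = requests.get(f"{BASE_URL}/orders", params=params).json()
    second = requests.get(f"{BASE_URL}/orders", params=params).json()
    assert first == second
```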
How do you test an API endpoint that triggers a background job or async process?
Expected
Verify the endpoint returns the correct immediate response — typically 202 Accepted with a job ID. Poll the status endpoint or use a webhook to verify eventual completion. Test timeout scenarios and partial failure states. Verify idempotency — submitting the same job twice should not create duplicate processing. A strong answer addresses observability — how do you know the job actually ran?
Red flags
Thinks 200 OK means the job is done. Has never tested async flows. Cannot describe how to verify eventual consistency without polling indefinitely.
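The 202-then-poll pattern in the expected answer might look like this in pytest; the export and job-status endpoints are illustrative assumptions:

```python
import time
import requests

BASE_URL = "https://api.example.test"  # hypothetical service

def test_async_export_completes():
    # The trigger should respond immediately with 202 and a job ID.
    response = requests.post(f"{BASE_URL}/exports", json={"format": "csv"})
    assert response.status_code == 202
    job_id = response.json()["job_id"]

    # Poll with a hard deadline so a stuck job fails the test instead of
    # hanging it indefinitely.
    deadline = time.monotonic() + 60
    while time.monotonic() < deadline:
        status = requests.get(f"{BASE_URL}/jobs/{job_id}").json()["status"]
        if status == "completed":
            return
        assert status != "failed", "job reported failure"
        time.sleep(2)
    raise AssertionError("job did not complete within 60 seconds")
```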
How would you validate error handling and status codes across an API?
Expected
Test that the API returns the correct status code for each error type — 400 for validation errors, 401 for missing auth, 403 for permission errors, 404 for missing resources, 422 for semantic validation errors, 429 for rate limiting, 500 for server errors. Verify that error responses follow a consistent schema. Test that error messages are informative enough to debug but do not expose sensitive internal details. A strong answer includes negative test design — deliberately triggering each error condition.
Red flags
Only tests happy paths. Accepts any 4xx as acceptable for all client errors. Has never verified error response schema consistency.
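Negative test design like this lends itself to parametrisation. A sketch assuming hypothetical endpoints, placeholder credentials, and an error schema with "error" and "message" keys (the schema itself is an assumption about the API under test):

```python
import pytest
import requests

BASE_URL = "https://api.example.test"             # hypothetical service
AUTH = {"Authorization": "Bearer <valid-token>"}  # placeholder credentials

@pytest.mark.parametrize("method, path, kwargs, expected", [
    ("post", "/users", {"json": {}, "headers": AUTH}, 400),  # validation error
    ("get",  "/users/1", {}, 401),                           # missing auth
    ("get",  "/users/999999", {"headers": AUTH}, 404),       # missing resource
])
def test_error_status_codes(method, path, kwargs, expected):
    response = requests.request(method, BASE_URL + path, **kwargs)
    assert response.status_code == expected
    # Every error body should follow one consistent schema...
    assert {"error", "message"}.issubset(response.json())
    # ...and never leak internal details such as stack traces.
    assert "Traceback" not in response.text
```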
How would you approach testing a third-party payment API integration?
Expected
Use sandbox environments provided by the payment gateway. Test each payment state — success, decline, insufficient funds, expired card, 3D Secure challenge. Verify webhook receipt and idempotent handling of duplicate webhooks. Test refund and chargeback flows. Verify that sensitive card data never appears in logs. A strong answer mentions specific UAE payment gateways like PayTabs or Telr if relevant, and addresses PCI compliance considerations.
Red flags
Has never tested payment integrations. Does not know what a payment sandbox is. Cannot describe how to test webhook delivery and retry.
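One piece of this, idempotent handling of duplicate webhooks, is generic enough to sketch. Everything here is hypothetical: the receiver URL, the event payload shape, and the payments query used to verify state:

```python
import requests

WEBHOOK_URL = "https://api.example.test/webhooks/payments"  # hypothetical receiver
EVENT = {"event_id": "evt_123", "type": "payment.succeeded", "amount": 5000}

def test_duplicate_webhook_is_handled_idempotently():
    # Payment gateways retry delivery, so duplicates are a normal condition.
    first = requests.post(WEBHOOK_URL, json=EVENT)
    second = requests.post(WEBHOOK_URL, json=EVENT)
    assert first.status_code == 200
    assert second.status_code == 200  # acknowledged, not reprocessed

    # Exactly one payment record should exist for this event ID.
    payments = requests.get("https://api.example.test/payments",
                            params={"event_id": "evt_123"}).json()["items"]
    assert len(payments) == 1
```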
How do you test an API for rate limiting?
Expected
Send requests at a rate that exceeds the defined limit and verify 429 Too Many Requests responses. Verify that the Retry-After header is present and correct. Test that rate limiting resets after the defined window. Test that rate limiting is scoped correctly — per user, per API key, or per IP as documented. Verify that rate limit headers (X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset) are present on normal responses. A strong answer includes testing the rate limit boundary — the exact request that triggers it.
Red flags
Has never tested rate limiting. Cannot describe what 429 means. Does not know what the Retry-After header is.
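A sketch of the boundary test described above, assuming a hypothetical endpoint, a documented limit of 100 requests per window, and a fresh window when the test starts:

```python
import requests

URL = "https://api.example.test/search"  # hypothetical rate-limited endpoint
LIMIT = 100                              # documented requests-per-window limit

def test_exceeding_limit_returns_429_with_retry_after():
    responses = [requests.get(URL) for _ in range(LIMIT + 1)]
    # Requests within the limit should succeed; the first one past it should not.
    assert all(r.status_code == 200 for r in responses[:LIMIT])
    throttled = responses[LIMIT]
    assert throttled.status_code == 429
    assert "Retry-After" in throttled.headers

def test_normal_responses_carry_rate_limit_headers():
    response = requests.get(URL)
    for header in ("X-RateLimit-Limit", "X-RateLimit-Remaining"):
        assert header in response.headers
```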
04

Security Testing

Assess whether the candidate thinks about security as part of API testing, not as a separate concern.
What security vulnerabilities do you test for in an API?
Expected
Broken object level authorisation (BOLA/IDOR) — accessing another user's resources by manipulating IDs. Broken authentication — weak tokens, missing expiry, insecure transmission. Excessive data exposure — API returns more fields than the client needs. Injection attacks — SQL, command, and NoSQL injection via API parameters. Mass assignment — sending extra fields that get written to the database. Missing rate limiting. A strong answer references the OWASP API Security Top 10.
Red flags
Cannot name specific API security vulnerabilities. Has never tested for IDOR. Does not know what the OWASP API Security Top 10 is.
How do you test for broken object level authorisation (IDOR)?
Expected
After authenticating as User A, record the IDs of User A's resources. Authenticate as User B. Attempt to access User A's resources using User B's token and User A's resource IDs. Verify that the API returns 403 Forbidden rather than the resource. Test across all resource types and all HTTP methods — GET, PUT, PATCH, DELETE. A strong answer includes automating IDOR checks across a full resource inventory.
Red flags
Does not know what IDOR is. Has never tested cross-user resource access. Thinks authentication testing covers authorisation testing.
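The cross-user replay described above, sketched in pytest with placeholder tokens and a hypothetical /orders resource. Note that some APIs deliberately return 404 instead of 403 to avoid leaking resource existence, so the assertion accepts either:

```python
import requests

BASE_URL = "https://api.example.test"  # hypothetical service
USER_A_TOKEN = "<token-for-user-a>"    # placeholder credentials
USER_B_TOKEN = "<token-for-user-b>"

def test_user_b_cannot_access_user_a_resource():
    # Discover one of User A's resource IDs using User A's own token.
    headers_a = {"Authorization": f"Bearer {USER_A_TOKEN}"}
    orders = requests.get(f"{BASE_URL}/orders", headers=headers_a).json()["items"]
    order_id = orders[0]["id"]

    # Replay the same ID with User B's token across every HTTP method.
    headers_b = {"Authorization": f"Bearer {USER_B_TOKEN}"}
    for method in ("get", "put", "patch", "delete"):
        response = requests.request(method, f"{BASE_URL}/orders/{order_id}",
                                    headers=headers_b)
        assert response.status_code in (403, 404), (
            f"{method.upper()} leaked another user's resource")
```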
How do you verify that sensitive data is not exposed in API responses?
Expected
Review response schemas for fields that should not be returned to clients — password hashes, internal IDs, PII beyond what's needed, financial details, internal system metadata. Test that fields marked as excluded in the documentation are not present in responses. Verify that different user roles receive appropriately scoped data. Check that error responses do not expose stack traces, database error messages, or internal paths. A strong answer mentions data minimisation as a principle.
Red flags
Has never checked API responses for over-exposure. Does not review error response bodies for sensitive data leakage. Cannot name fields that are commonly over-exposed.
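Both checks can be automated as simple deny-list assertions. The field names, endpoints, and leak fragments below are illustrative assumptions:

```python
import requests

# Illustrative deny-list of fields that should never reach a client.
FORBIDDEN_FIELDS = {"password", "password_hash", "ssn", "internal_notes"}

def test_user_response_does_not_over_expose():
    response = requests.get("https://api.example.test/users/1",  # hypothetical
                            headers={"Authorization": "Bearer <token>"})
    assert not FORBIDDEN_FIELDS.intersection(response.json().keys())

def test_error_responses_do_not_leak_internals():
    response = requests.get("https://api.example.test/users/not-an-id")
    # Stack traces, database errors, and server paths must stay internal.
    for fragment in ("Traceback", "SQLSTATE", "/usr/src/app"):
        assert fragment not in response.text
```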
05

Senior-Level

Depth checks for senior QA engineers and test leads. Look for systemic thinking, architecture awareness, and the ability to design testing strategy rather than individual test cases.
How would you design a contract testing strategy across a microservices architecture?
Expected
Consumer-driven contract testing using Pact or similar — each consumer defines the contract it expects from a provider, providers verify they satisfy those contracts in their own CI pipeline. This replaces integration tests that require all services to be running simultaneously. Contracts are versioned and stored centrally. Breaking contract changes are caught in the provider's CI before deployment. A strong answer explains why end-to-end tests across microservices are brittle and expensive, and how contract testing solves the specific problems they create.
Red flags
Only suggests end-to-end tests for all microservice verification. Does not know what consumer-driven contract testing is. Cannot explain what Pact does.
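A minimal consumer-side sketch following the shape of pact-python's documented API; the service names, endpoint, and fields are invented for illustration:

```python
# pip install pact-python requests
import atexit
import requests
from pact import Consumer, Provider

# The consumer declares what it needs; the mock service records it as a contract.
pact = Consumer("BillingUI").has_pact_with(Provider("UserService"))
pact.start_service()
atexit.register(pact.stop_service)

def test_get_user_contract():
    expected = {"id": 1, "email": "a@example.test"}
    (pact
     .given("user 1 exists")
     .upon_receiving("a request for user 1")
     .with_request("get", "/users/1")
     .will_respond_with(200, body=expected))

    with pact:
        result = requests.get(pact.uri + "/users/1")

    assert result.json() == expected
```

The generated pact file is then verified in UserService's own CI pipeline, which is where breaking changes get caught before deployment.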
How do you approach performance and load testing for an API?
Expected
Define performance requirements — target response time at p50, p95, p99, maximum throughput, acceptable error rate under load. Use k6, JMeter, Locust, or Artillery to generate load. Start with baseline tests at expected load, then stress tests beyond expected load to find breaking points. Identify which endpoints are most critical and most likely to degrade under load. Monitor backend metrics — CPU, memory, database connections, queue depth — alongside API response metrics. A strong answer distinguishes load testing from stress testing from soak testing.
Red flags
Has never done API performance testing. Cannot name a load testing tool. Does not know what p95 or p99 response time means.
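Of the tools named above, Locust is the Python option, so a sketch in that language fits this guide. The host, endpoints, and task weights are illustrative assumptions:

```python
# locustfile.py — run with: locust -f locustfile.py --host https://api.example.test
from locust import HttpUser, task, between

class ApiUser(HttpUser):
    wait_time = between(1, 3)  # seconds between simulated user actions

    @task(3)
    def list_orders(self):
        # Weighted 3x: the hot read path most likely to degrade under load.
        self.client.get("/orders?page=1")

    @task(1)
    def create_order(self):
        self.client.post("/orders", json={"sku": "ABC-123", "qty": 1})
```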
How do you test API versioning and backward compatibility?
Expected
Test that v1 clients continue to function correctly after v2 is released. Verify that additive changes — new fields in responses, new optional request parameters — do not break existing clients. Test that breaking changes — removed fields, changed data types, renamed parameters — are only introduced in a new version. Automate backward compatibility checks by running the v1 test suite against the current production API. A strong answer addresses deprecation periods and how to communicate breaking changes to consumers.
Red flags
Has never tested API versioning. Cannot describe what constitutes a breaking change. Does not know how to verify backward compatibility automatically.
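The automated backward-compatibility check described above can be as simple as asserting that every field the v1 contract promised is still present. The URL and field list are illustrative assumptions:

```python
import pytest
import requests

PROD_URL = "https://api.example.test"  # hypothetical current production API

# Fields the v1 contract promised; they must survive every release.
V1_REQUIRED_FIELDS = ["id", "email", "created_at"]

@pytest.mark.parametrize("field", V1_REQUIRED_FIELDS)
def test_v1_contract_field_still_present(field):
    response = requests.get(f"{PROD_URL}/v1/users/1")
    assert response.status_code == 200
    assert field in response.json(), f"breaking change: '{field}' removed from v1"
```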
How would you build an API testing framework from scratch for a new project?
Expected
Choose a language and framework aligned with the team's existing skills. Define folder structure — separating test data, helper utilities, API clients, and test specs. Implement a base API client that handles authentication, base URL configuration, and common headers. Use environment-based configuration for different target environments. Implement assertion helpers for common patterns — status codes, schema validation, response time. Integrate with CI from the start, not after tests are written. Generate reports. A strong answer includes decisions they've actually made and why.
Red flags
Has never built a testing framework from scratch. Copies test code across files without abstraction. Does not think about maintainability and team adoption.
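A hedged sketch of the base API client idea, assuming environment-based configuration and bearer-token auth; the variable names and defaults are illustrative:

```python
import os
import requests

class ApiClient:
    """Thin wrapper so tests never repeat auth, base-URL, or header plumbing."""

    def __init__(self):
        # Environment-based configuration: same tests, different targets.
        self.base_url = os.environ.get("API_BASE_URL",
                                       "https://staging.example.test")
        self.session = requests.Session()
        self.session.headers.update({
            "Authorization": f"Bearer {os.environ.get('API_TOKEN', '')}",
            "Accept": "application/json",
        })

    def get(self, path, **kwargs):
        return self.session.get(self.base_url + path, **kwargs)

    def post(self, path, **kwargs):
        return self.session.post(self.base_url + path, **kwargs)

# Usage in a test:
# client = ApiClient()
# assert client.get("/health").status_code == 200
```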
How do you handle test data management for API tests?
Expected
Isolate test data per test or test suite — tests should not share state that causes interference. Use factory methods or fixtures to create required test data programmatically before each test. Clean up after tests to avoid data accumulation that degrades later test runs. For tests against production-like environments, use data seeding scripts. For sensitive data requirements, use synthetic data that has the same shape as real data. A strong answer addresses the challenge of ordering dependencies — tests that require data created by other tests are fragile.
Red flags
Tests depend on data created by other tests. No cleanup strategy. Uses production data in test environments. Cannot describe how to create isolated test data programmatically.
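The factory-plus-cleanup pattern from the expected answer, sketched as a pytest fixture against a hypothetical /users endpoint:

```python
import uuid
import pytest
import requests

BASE_URL = "https://api.example.test"  # hypothetical service

@pytest.fixture
def temp_user():
    # Factory: each test gets its own isolated, uniquely named user...
    payload = {"email": f"test-{uuid.uuid4()}@example.test"}
    user = requests.post(f"{BASE_URL}/users", json=payload).json()
    yield user
    # ...and cleanup runs even if the test fails, preventing data accumulation.
    requests.delete(f"{BASE_URL}/users/{user['id']}")

def test_user_can_be_fetched(temp_user):
    response = requests.get(f"{BASE_URL}/users/{temp_user['id']}")
    assert response.status_code == 200
```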
Additional guidance

API Testing Interview Questions — A Complete Hiring Guide for QA Roles

Hiring a strong API tester is harder than it looks. The role sits at the intersection of quality engineering, backend development awareness, and security mindset — and candidates who are strong on tooling are not always strong on systematic thinking, and vice versa.

This guide gives you a structured set of questions across five areas — fundamentals, tooling, practical scenarios, security, and senior-level strategy — with expected answers detailed enough to evaluate responses confidently, and red flags specific enough to identify gaps that matter.

What Strong API Testing Looks Like

The best API testers think in contracts, not just requests. They understand that an API is a formal agreement between producer and consumer, and that testing an API means verifying that agreement holds — across all inputs, all error conditions, all user contexts, and all load levels.

They think about what can go wrong, not just what should go right. Negative test design — deliberately triggering error conditions to verify they're handled correctly — is as important as positive test design. An API that returns the right response for valid inputs but exposes stack traces on invalid inputs, or returns 200 OK for authentication failures, has not been tested adequately.

They understand that security is part of quality. Broken authorisation, over-exposed data, and missing rate limiting are testing failures as much as incorrect business logic.

How to Use This Interview Guide

Run the fundamentals section early — it quickly establishes whether the candidate has the baseline knowledge the rest of the interview assumes. If a candidate cannot explain idempotency or the difference between 401 and 403, the practical scenarios section will not be productive.

Use the tooling section to calibrate hands-on experience. There's a significant difference between a candidate who has used Postman for exploratory testing and one who has built a Newman-based CI pipeline. Both can be valuable depending on the role — but knowing which you have helps you set the right expectations.

The practical scenarios section is where the interview becomes most revealing. Strong candidates structure their approach — they think about categories of test cases rather than individual cases, they identify edge cases naturally, and they ask clarifying questions when the scenario is underspecified.

The security section is a discriminator for mid-to-senior roles. Many QA engineers have limited security testing experience — which is a gap worth knowing about explicitly rather than discovering after hiring.

The senior-level section is for lead and principal QA roles. The contract testing question in particular separates candidates who understand microservices testing strategy from those who have only worked at the individual service level.