Top 50+ Manual Testing Interview Questions to Crack Any QA Interview
This guide contains the top 50+ manual testing interview questions and answers frequently asked in QA interviews for both freshers and experienced testers.
Software Testing Life Cycle (STLC)
Requirement Analysis → Test Planning → Test Case Design → Test Execution → Test Closure
Bug Life Cycle
New → Assigned → Open → Fixed → Retest → Closed
Top Manual Testing Interview Questions
1. What is the difference between verification and validation?
Verification is the process of checking whether the software is built correctly according to requirements, without executing the application.
Validation is the process of checking whether the software meets user expectations by executing the application.
Example:
- Verification → Reviewing requirement documents or test cases
- Validation → Testing login functionality in the application
Verification ensures we are building the product right, while validation ensures we are building the right product.
2. What is a test scenario, and how is it different from a test case?
A test scenario is a high-level idea of what needs to be tested.
A test case is a detailed step-by-step procedure with test data and expected results.
Example:
- Scenario → Verify login functionality
- Test Case → Enter username, password, click login, verify success message
Test scenarios provide coverage, while test cases ensure detailed validation.
3. Explain the bug life cycle in a real project.
The bug life cycle describes the stages a defect goes through from identification to closure.
- New – Bug is logged
- Assigned – Assigned to developer
- Open/In Progress – Developer starts fixing
- Fixed – Issue is resolved
- Retest – Tester verifies the fix
- Closed – Issue is confirmed resolved
- Reopened – If issue still exists
This lifecycle helps track defects efficiently and ensures proper resolution.
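The lifecycle above can be sketched as a small state machine. This is an illustrative model, not tied to any specific defect tracker; the states and transitions mirror the list above.

```python
# Illustrative sketch of the bug life cycle as a state machine.
# States and transitions mirror the lifecycle described above.
ALLOWED_TRANSITIONS = {
    "New": {"Assigned"},
    "Assigned": {"Open"},
    "Open": {"Fixed"},
    "Fixed": {"Retest"},
    "Retest": {"Closed", "Reopened"},  # fix verified, or issue still exists
    "Reopened": {"Assigned"},          # goes back to the developer
    "Closed": set(),                   # terminal state
}

def move(current: str, target: str) -> str:
    """Return the new state, or raise if the transition is invalid."""
    if target not in ALLOWED_TRANSITIONS[current]:
        raise ValueError(f"Cannot move defect from {current} to {target}")
    return target

# A defect that fails retesting is reopened rather than closed:
state = "New"
for step in ["Assigned", "Open", "Fixed", "Retest", "Reopened"]:
    state = move(state, step)
print(state)  # Reopened
```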
4. What is sanity testing, and when do you perform it?
Sanity testing is performed after minor changes or bug fixes to verify that specific functionality is working correctly.
It is a focused and quick check to ensure the build is stable for further testing.
Example: After fixing a login issue, verify only login-related functionality instead of testing the entire application.
5. What will you do if a developer says, “This is not a bug”?
If a developer rejects a bug, I:
- Re-check the requirement or acceptance criteria
- Verify expected vs actual behavior
- Provide clear evidence (screenshots, logs, steps)
If still unclear, I discuss with the BA or product owner for clarification.
This ensures correct understanding and avoids unnecessary conflicts.
6. What is the difference between severity and priority? Give a real-time example.
Severity indicates how serious the defect is in terms of functionality impact.
Priority indicates how quickly the defect should be fixed based on business importance.
Examples:
- Payment failure → High severity & High priority
- UI misalignment on the homepage → Low severity, but High priority because it is the first thing users see
Severity is technical impact, while priority is business impact.
7. What is regression testing, and when do you perform it in a project?
Regression testing ensures that existing functionality is not affected by new changes such as bug fixes or new features.
It is performed:
- After bug fixes
- After new feature implementation
- Before release
This helps maintain application stability.
8. What is the difference between smoke testing and sanity testing?
Smoke testing is a broad test to check whether the critical functionalities of a build are working.
Sanity testing is a focused test to verify specific changes or fixes.
Example:
- Smoke → Check login, homepage, and navigation
- Sanity → Verify only fixed login issue
Smoke testing ensures build stability, while sanity testing ensures fix correctness.
9. How do you write a good bug report? What details should it contain?
A good bug report should be clear, concise, and easy to reproduce.
- Bug ID
- Title/Summary
- Description
- Steps to reproduce
- Expected result
- Actual result
- Severity and Priority
- Environment details
- Screenshots or recordings
This helps developers understand and fix the issue quickly.
10. What will you do if you do not have proper requirement documents but still need to start testing?
If requirements are unclear, I:
- Discuss with BA, product owner, or developers
- Understand business flow and user scenarios
- Start exploratory testing
- Document assumptions
This ensures testing continues without delays while maintaining quality.
11. What are the different types of black-box testing techniques?
Black box testing is a testing method where the tester validates the functionality of the application without knowing the internal code structure.
These are also called Test Case Design Techniques.
The main techniques are:
1. Equivalence Partitioning
Divide input data into valid and invalid groups.
Example: If age field accepts 18–60:
- 25 (valid)
- 10 (invalid)
- 70 (invalid)
2. Boundary Value Analysis (Very Important)
Test values at and just around the boundaries.
For 18–60, test:
- 17 (invalid), 18 (valid), 19 (valid)
- 59 (valid), 60 (valid), 61 (invalid)
3. Decision Table Testing
Used when output depends on multiple conditions.
4. State Transition Testing
Used when application changes state.
Example: After 3 wrong login attempts → account gets locked.
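Equivalence partitioning and boundary value analysis can be illustrated with a small sketch; `is_valid_age` is a hypothetical validator for the 18–60 rule used in the examples above.

```python
def is_valid_age(age: int) -> bool:
    """Hypothetical validator for an age field that accepts 18-60 inclusive."""
    return 18 <= age <= 60

# Equivalence partitioning: one representative value per partition.
partitions = {10: False, 25: True, 70: False}

# Boundary value analysis: values at and just around both boundaries.
boundaries = {17: False, 18: True, 19: True, 59: True, 60: True, 61: False}

for value, expected in {**partitions, **boundaries}.items():
    assert is_valid_age(value) == expected, f"age {value} misclassified"
print("all partition and boundary checks passed")
```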
12. What are the different test case design techniques you have used?
Test case design techniques are methods used to design effective test cases.
The main techniques I have used are:
- Equivalence Partitioning
- Boundary Value Analysis
- Decision Table Testing
- State Transition Testing
13. What are the different types of non-functional testing?
Non-functional testing verifies the performance, usability, reliability, and security of the application.
Types include:
- Performance Testing
- Load Testing
- Stress Testing
- Security Testing
- Usability Testing
- Compatibility Testing
- Reliability Testing
14. What are the different stages in the Software Testing Life Cycle (STLC)?
Stages of STLC:
- Requirement Analysis
- Test Planning
- Test Case Design
- Test Environment Setup
- Test Execution
- Test Closure
15. What are the entry criteria and exit criteria in testing?
Entry Criteria are conditions that must be met before testing starts.
Examples:
- Requirements available
- Test cases ready
- Test environment ready
Exit Criteria are conditions that must be met before testing is stopped.
Examples:
- All test cases executed
- Critical bugs fixed
- Test coverage achieved
16. What is exploratory testing? When do you perform it?
Exploratory testing is a testing approach where the tester simultaneously learns the application, designs test cases, and executes them.
It is performed without predefined test cases and helps identify unexpected defects.
It is commonly used when:
- A new feature is implemented
- Requirements are unclear
- There is limited time for test case creation
17. What is the difference between functional testing and non-functional testing?
Functional testing verifies that the application features work according to requirements, such as login, add to cart, and checkout.
Non-functional testing verifies aspects like performance, usability, security, and reliability.
Functional testing checks what the system does, while non-functional testing checks how well the system performs.
18. What is User Acceptance Testing (UAT)? Who performs it?
User Acceptance Testing (UAT) is the final level of testing where the application is validated against business requirements.
It is performed by:
- End users
- Product owners
- Business stakeholders
It is done before releasing the application to production.
19. What is the difference between retesting and regression testing?
Retesting is performed to verify that a specific defect has been fixed correctly.
Regression testing ensures that existing functionalities are not affected by new changes.
Retesting focuses on a specific bug, while regression testing focuses on overall application stability.
20. What is a test plan, and what does it contain?
A test plan is a document that defines the testing strategy, scope, objectives, schedule, and resources.
It includes:
- Testing scope
- Test strategy
- Test environment
- Resources and roles
- Test schedule
- Entry and exit criteria
21. What is the difference between a test case and a test script?
A test case is a document that contains step-by-step instructions, test data, expected results, and conditions to verify a specific functionality manually.
A test script is an automated version of a test case written using a programming language and automation tools like Selenium or Playwright to execute tests automatically.
22. What is defect leakage?
Defect leakage occurs when a defect is missed during testing and is found later in production by end users.
It indicates that the defect was not detected in earlier testing phases.
23. What is defect density?
Defect density is the number of defects found per unit size of the software, such as per module or per thousand lines of code (KLOC).
Defect Density = Total Defects / Size of the Module
It is used to measure the quality of the software.
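As a quick worked example (the figures are illustrative): 30 defects found in a 15 KLOC module gives a density of 2 defects per KLOC.

```python
def defect_density(defects: int, size_kloc: float) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return defects / size_kloc

print(defect_density(30, 15.0))  # 2.0
```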
24. What is a test summary report?
A test summary report is a document prepared after test execution that summarizes the testing activities and results.
It includes:
- Total test cases executed
- Passed and failed test cases
- Defects found and fixed
- Test coverage
- Overall test status
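The numbers in such a report are derived from the raw execution results; a minimal sketch with made-up statuses:

```python
from collections import Counter

# Hypothetical execution results, one status per test case.
results = ["pass", "pass", "fail", "pass", "blocked", "pass"]

counts = Counter(results)
total = len(results)
summary = {
    "executed": total,
    "passed": counts["pass"],
    "failed": counts["fail"],
    "blocked": counts["blocked"],
    "pass_rate": round(counts["pass"] / total * 100, 1),
}
print(summary)  # {'executed': 6, 'passed': 4, 'failed': 1, 'blocked': 1, 'pass_rate': 66.7}
```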
25. What are the different types of test environments you have used?
In projects, multiple environments are used:
- Development (Dev) – used by developers for development and unit testing
- QA/Test environment – used by testers for functional and regression testing
- Pre-production (UAT/Stage) – used for final validation before release
- Production (Prod) – live environment used by end users
26. What is the difference between alpha testing and beta testing?
Alpha testing is performed internally by the testing team before releasing the product to external users.
Beta testing is performed by real users in a real environment before the final release.
Alpha is internal testing, while Beta is external testing.
27. What is risk-based testing?
Risk-based testing is a testing approach where testing is prioritized based on risk levels.
Features with high business impact (like payment or login) are tested more thoroughly.
This helps reduce critical production issues.
28. What is the difference between static testing and dynamic testing?
Static testing is performed without executing the application. It includes reviewing requirements and documents.
Dynamic testing is performed by executing the application to verify functionality.
29. What is a Requirement Traceability Matrix (RTM)? Why is it important?
A Requirement Traceability Matrix (RTM) is a document that maps requirements to corresponding test cases.
It ensures that every requirement has at least one test case and no requirement is missed.
RTM helps ensure complete test coverage.
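An RTM is essentially a mapping from requirements to test cases; a minimal sketch that flags uncovered requirements (the IDs are made up):

```python
# Hypothetical RTM: requirement ID -> test case IDs covering it.
rtm = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],  # no test case yet -> coverage gap
}

# Every requirement should map to at least one test case.
uncovered = [req for req, cases in rtm.items() if not cases]
print(uncovered)  # ['REQ-003']
```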
30. How do you ensure complete test coverage in a project?
Complete test coverage can be ensured by:
- Preparing RTM to map requirements to test cases
- Reviewing test cases regularly
- Performing both positive and negative testing
- Covering boundary conditions
- Running regression testing before release
31. List the different types of testing performed in the Software Development Life Cycle (SDLC).
In SDLC, testing is performed at different levels:
Testing Levels:
- Unit Testing
- Integration Testing
- System Testing
- User Acceptance Testing (UAT)
Functional Testing:
- Smoke Testing
- Sanity Testing
- Regression Testing
- Retesting
Non-Functional Testing:
- Performance Testing
- Load Testing
- Stress Testing
- Security Testing
- Usability Testing
- Compatibility Testing
32. List the different attributes of a good test case.
A good test case should have the following attributes:
- Test Case ID – Unique identifier
- Title – Clear description
- Preconditions – Required setup
- Test Steps – Easy-to-follow steps
- Test Data – Input values
- Expected Result – Expected outcome
- Actual Result – Observed result
- Status – Pass/Fail
33. Tell me about a time when you found a critical bug just before release. What did you do?
In one release, I found a critical issue where clicking a key button caused a server error instead of the expected action.
I immediately reported the issue and discussed it with developers and stakeholders, explaining its impact.
We paused the release, the developer fixed the issue, and I performed retesting to ensure it was resolved and no other functionality was affected.
After confirming stability, the release proceeded successfully, avoiding a production issue.
34. Will delaying the release impact the client or business?
Yes, delaying a release can impact timelines, but releasing a product with critical defects can have a much bigger impact.
If the defect affects critical functionality, it is better to delay the release and fix the issue rather than release a faulty product.
This ensures quality and builds long-term trust with the client.
35. Imagine you're working on a release with strict time constraints. How would you prioritize your testing efforts?
When working with strict deadlines, I follow a risk-based testing approach.
I prioritize testing based on business-critical functionalities such as login, payment, and core user flows.
- Test high-risk features first
- Focus on core user journeys
- Run smoke testing to validate build stability
- Perform regression testing on critical modules
- Defer low-priority or cosmetic testing if needed
This approach ensures maximum coverage within limited time and reduces the risk of production issues.
36. How do you handle requirements when time is tight?
When time is limited, I quickly analyze available requirements and clarify key functionalities with the BA, product owner, or developers.
I focus on understanding critical user flows and begin testing early.
- Prioritize important features
- Ask quick clarifications instead of waiting for complete documentation
- Use exploratory testing to identify issues faster
This helps maintain speed while ensuring essential functionality is tested properly.
37. Imagine you're testing a login page. What test scenarios would you design?
For a login page, I design both positive and negative test scenarios.
- Valid login credentials
- Invalid username and password
- Empty fields validation
- Password masking
- Forgot password functionality
- Session timeout and logout
- Multiple failed login attempts (account lock)
- Cross-browser and device compatibility
I also consider usability and security-related checks while testing.
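The account-lock scenario can be sketched as a small model; `LoginService` is a hypothetical implementation of the lock-after-3-failed-attempts rule, not any real system.

```python
class LoginService:
    """Hypothetical login service that locks an account after 3 failed attempts."""
    MAX_FAILURES = 3

    def __init__(self, password: str):
        self._password = password
        self.failures = 0
        self.locked = False

    def attempt_login(self, password: str) -> str:
        if self.locked:
            return "locked"
        if password == self._password:
            self.failures = 0
            return "success"
        self.failures += 1
        if self.failures >= self.MAX_FAILURES:
            self.locked = True
            return "locked"
        return "invalid"

svc = LoginService(password="s3cret")
print(svc.attempt_login("wrong"))   # invalid
print(svc.attempt_login("wrong"))   # invalid
print(svc.attempt_login("wrong"))   # locked (3rd failure)
print(svc.attempt_login("s3cret"))  # locked - even valid credentials are rejected
```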
38. How would you check for SQL injection on a login page?
To check SQL injection, I enter malicious inputs in the login fields to see how the application behaves.
Examples of test inputs:
- ' OR '1'='1
- admin'--
If the application allows login without valid credentials or behaves unexpectedly, it indicates a vulnerability.
- Verify input validation
- Check proper error handling
- Ensure sensitive data is protected
This ensures the application is secure from basic injection attacks.
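The vulnerability can be demonstrated with an in-memory SQLite database. This is an illustrative sketch of how a string-concatenated query differs from a parameterized one, not a real application:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

payload = "' OR '1'='1"

# Vulnerable: user input concatenated directly into the SQL string.
vulnerable = conn.execute(
    "SELECT * FROM users WHERE name = 'x' AND password = '" + payload + "'"
).fetchall()
print(len(vulnerable))  # 1 -- login bypassed without valid credentials

# Safe: a parameterized query treats the payload as plain data.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ? AND password = ?", ("x", payload)
).fetchall()
print(len(safe))  # 0 -- injection attempt fails
```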
39. What activities are involved in the Maintenance phase of SDLC?
The maintenance phase begins after the application is released to production.
It involves fixing defects, improving performance, and implementing enhancements based on user feedback.
- Bug fixing and patches
- Performance improvements
- Feature enhancements
- Security updates
This ensures the application remains stable and up-to-date.
40. Write three negative test cases for a mobile messaging application.
Negative test cases for a messaging application include:
- Sending a message without internet connection
- Sending an empty message
- Sending a message exceeding character limit
This ensures proper validation and error handling.
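The three cases above can be expressed against a hypothetical `validate_message` function (the function name and the 500-character limit are assumptions):

```python
MAX_LENGTH = 500  # assumed character limit

def validate_message(text: str, online: bool) -> str:
    """Hypothetical send-side validation for a messaging app."""
    if not online:
        return "error: no internet connection"
    if not text.strip():
        return "error: empty message"
    if len(text) > MAX_LENGTH:
        return "error: message too long"
    return "ok"

# The three negative test cases from above:
assert validate_message("hello", online=False).startswith("error")
assert validate_message("   ", online=True) == "error: empty message"
assert validate_message("x" * 501, online=True) == "error: message too long"
print("all negative cases handled")
```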
41. When should the testing process be concluded?
Testing should be concluded when the exit criteria are met.
- All test cases executed
- Critical defects fixed
- Test coverage achieved
- Stakeholders approve release
This ensures the application is stable for production release.
42. Write three negative test cases for a credit card application form.
Negative test cases include:
- Entering invalid card number format
- Leaving mandatory fields empty
- Entering expired card details
This helps ensure strong validation and prevents incorrect data submission.
43. List the top 5 high-level test scenarios for a search engine.
- Search with valid keywords
- Search with invalid or random input
- Search with special characters
- Verify auto-suggestions functionality
- Validate result relevance and performance
44. List the top 5 high-level test scenarios for a new laptop launch.
- Verify product listing and details
- Add to cart functionality
- Checkout and payment flow
- Search and filter functionality
- Cross-browser and device compatibility
45. How would you test the video quality of a mobile or web video application?
To test video quality, I would validate:
- Different resolutions (HD, Full HD, 4K)
- Buffering and loading performance
- Audio and video synchronization
- Playback smoothness
- Performance under different network conditions
This ensures a good user experience across devices.
46. Write three negative test cases for the Bluetooth feature of a mobile phone.
- Trying to connect when Bluetooth is turned off
- Connecting to unsupported devices
- Interrupting file transfer during sharing
This ensures reliability and proper error handling.
47. What are your roles and responsibilities as a QA tester?
As a QA tester, my responsibilities include:
- Understanding requirements and preparing test scenarios
- Writing and executing test cases
- Identifying and reporting defects
- Performing regression and exploratory testing
- Collaborating with developers and stakeholders
This ensures the application meets quality standards before release.
48. How do you perform cross-browser testing manually?
Cross-browser testing ensures the application works consistently across different browsers.
I test the application on browsers like Chrome, Firefox, Edge, and Safari.
- Verify UI layout and alignment
- Check functionality across browsers
- Test responsiveness on different screen sizes
- Identify browser-specific issues
This ensures a consistent user experience across platforms.
49. How do you test UI/UX in an application?
UI/UX testing focuses on verifying the look, feel, and usability of the application.
- Check alignment, colors, fonts, and layout
- Verify responsiveness across devices
- Ensure navigation is user-friendly
- Validate error messages and feedback
This improves user experience and usability of the application.
50. Explain Agile methodology and ceremonies.
Agile methodology is an iterative approach to software development where work is divided into small cycles called sprints.
In Agile, QA testers are involved from the early stages of development and perform continuous testing.
Key Agile ceremonies include:
- Sprint Planning – Define scope and tasks for the sprint
- Daily Stand-up – Discuss progress and blockers
- Sprint Review – Demonstrate completed work
- Sprint Retrospective – Discuss improvements
This approach improves collaboration, speeds up delivery, and raises product quality.
51. What will you do if an element is not found during testing?
If an element is not found, I first verify whether the issue is due to a defect or incorrect test steps.
- Check if the element exists in the UI
- Verify environment and test data
- Confirm if the feature is implemented correctly
- Check for UI changes or delays
If the issue persists and it is not expected behavior, I log a defect with proper details.
52. How would you test a signup form?
To test a signup form, I would verify:
- Field validations (mandatory fields)
- Email format validation
- Password strength and rules
- Duplicate user registration
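These checks can be sketched against a hypothetical validator (the rules, names, and the 8-character minimum are assumptions for illustration):

```python
import re

registered = {"alice@example.com"}  # hypothetical existing users

def validate_signup(email: str, password: str) -> list:
    """Hypothetical signup validation; rules are illustrative."""
    errors = []
    if not email:
        errors.append("email is mandatory")
    elif not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        errors.append("invalid email format")
    elif email in registered:
        errors.append("user already registered")
    if len(password) < 8:
        errors.append("password too short")
    return errors

print(validate_signup("bob@example.com", "Str0ngPass"))  # []
print(validate_signup("alice@example.com", "short"))
```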
53. How would you test a search feature?
To test a search feature, I would validate:
- Search with valid keywords
- Search with invalid or random input
- Partial and case-insensitive matches
- Search performance and response time
54. How would you test a checkout process?
To test checkout functionality, I would verify:
- Product selection and cart updates
- Payment gateway functionality
- Order confirmation process
- Email or notification after order
55. How would you test file upload functionality?
To test file upload, I would check:
- Supported file formats
- File size limits
- Upload success and failure scenarios
- Error handling for invalid files
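The format and size checks can be sketched as follows (the allowed extensions and the 5 MB limit are assumptions for illustration):

```python
ALLOWED_EXTENSIONS = {".png", ".jpg", ".pdf"}  # assumed supported formats
MAX_SIZE_BYTES = 5 * 1024 * 1024               # assumed 5 MB limit

def check_upload(filename: str, size_bytes: int) -> str:
    """Hypothetical pre-upload validation for file format and size."""
    ext = "." + filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    if ext not in ALLOWED_EXTENSIONS:
        return "rejected: unsupported format"
    if size_bytes > MAX_SIZE_BYTES:
        return "rejected: file too large"
    return "accepted"

print(check_upload("photo.PNG", 1024))         # accepted
print(check_upload("script.exe", 1024))        # rejected: unsupported format
print(check_upload("scan.pdf", 10 * 1024**2))  # rejected: file too large
```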
56. How would you test password reset?
To test password reset, I would verify:
- Reset email functionality
- Token validity and expiration
- Password rules and validation
- Successful login after reset
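The token-expiry check can be sketched as a simple time comparison (the 30-minute lifetime is an assumed value):

```python
from datetime import datetime, timedelta

TOKEN_LIFETIME = timedelta(minutes=30)  # assumed token validity window

def is_token_valid(issued_at: datetime, now: datetime) -> bool:
    """Hypothetical expiry check for a password-reset token."""
    return now - issued_at <= TOKEN_LIFETIME

issued = datetime(2024, 1, 1, 12, 0)
print(is_token_valid(issued, issued + timedelta(minutes=29)))  # True
print(is_token_valid(issued, issued + timedelta(minutes=31)))  # False
```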
57. What if requirements are unclear?
If requirements are unclear, I:
- Discuss with stakeholders (BA or developers)
- Understand business flow
- Document assumptions
- Proceed with exploratory testing
58. What if a bug is not reproducible?
If a bug is not reproducible, I:
- Check environment and test data
- Review logs and screenshots
- Retry with different scenarios
- Provide detailed information to developers
59. What if build is unstable?
If the build is unstable, I:
- Perform smoke testing
- Identify critical failures
- Report issues immediately
- Wait for stable build before full testing
60. How do you handle tight deadlines?
To handle tight deadlines, I:
- Prioritize high-risk features
- Focus on critical user flows
- Perform risk-based testing
- Communicate progress clearly