Test Design - Best Practices


1. Overview

This article provides detailed guidance on the best practices to follow while profiling.

2. Best Practices to Be Followed While Profiling

1. Grooming guidelines:

  • Check the end-to-end flow of the test case.
  • Understand all assertions, preconditions, and postconditions.
  • Apply postconditions in After Scenario methods so they are always executed, even if the test case fails midway (see the sketch after this list).
  • When designing test cases, leverage component-based design. Create test cases around individual components so that end-to-end scenarios can be formed by combining these component-level tests. This approach increases reuse, minimizes the total number of test cases and scripts, and reduces maintenance effort.
  • Assess if the test case can be automated.
  • Decide on the degree of abstraction to be achieved as part of the grooming process.
  • Automate the groomed test case as is, maintaining a one-to-one correspondence between the test case and its automation script.
  • Profiling: identify the nodes, custom methods, and Object Repository entries to be created based on the groomed test case.
  • Follow appropriate naming conventions and documentation practices based on the programming language used for automation (e.g., Java, C#, Python, JavaScript, TypeScript).
  • Ensure all changes to nodes, custom methods, and the object repository are reviewed and approved, similar to a pull request (PR) process.
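As a minimal sketch of the After Scenario guideline above (using Cucumber-JVM; the cleanup helpers are hypothetical, while the @After annotation and Scenario type are standard Cucumber APIs):

    import io.cucumber.java.After;
    import io.cucumber.java.Scenario;

    public class Hooks {

        // Runs after every scenario, even when a step fails midway,
        // so postconditions (cleanup, state reset) are always applied.
        @After
        public void applyPostconditions(Scenario scenario) {
            if (scenario.isFailed()) {
                // Hypothetical helper: capture diagnostics before cleanup.
                // DiagnosticsHelper.captureScreenshot(scenario);
            }
            // Hypothetical helper: revert any data the scenario created.
            // TestContext.cleanUpTestData();
        }
    }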


2. Decide between manual and automated profiling based on the type of automation (for example, desktop, web, or mobile) and whether it involves UI-based, API-based, or hybrid (UI–API–DB) automation. Include informal APIs, such as CLI commands, where applicable.

3. Choose the appropriate test design approach, such as online recording, offline recording, feature-file-to-script conversion, prompt-based generation, or auto test case generation (the enhanced feature in v5 offers powerful capabilities).


Make key test design decisions regarding:

  • Scenario management: Determine when to merge scenarios and when to split test cases.
  • Parameterization: Define how to parameterize test steps (see the sketch after this list).
  • Data sets: Decide how to generate multiple data sets efficiently.
  • Method reuse: Reuse existing custom methods wherever possible instead of creating new ones.

4. Use Codebot to generate the code used when creating custom methods.


5. Decide on the following aspects of test design:

  • Data generation: Determine whether data should be included in the Feature File or maintained separately.
  • Data source: Choose between generated data or exported data from an external source.
  • Folder and file structure: Define the folder structure for Feature Files, decide on file naming conventions, and choose between one Feature File or multiple Feature Files.
  • Test case grouping: Organize test cases using tags for better management and execution control, as sketched below.
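A minimal sketch of tag-based grouping with the JUnit 4 Cucumber runner (the features path, tag names, and class name are assumptions; @CucumberOptions and its tags expression are standard Cucumber-JVM):

    import io.cucumber.junit.Cucumber;
    import io.cucumber.junit.CucumberOptions;
    import org.junit.runner.RunWith;

    // Runs only scenarios tagged @smoke and skips work-in-progress ones.
    @RunWith(Cucumber.class)
    @CucumberOptions(
            features = "src/test/resources/features",  // assumed folder structure
            tags = "@smoke and not @wip"                // tag expression for grouping
    )
    public class SmokeSuiteRunner {
    }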

6. Generate Test Cases:

  • Generate test cases from a single Feature File or all Feature Files.
  • Generate test cases for user-specified requirements.
  • Generate test cases without system-generated tags.
  • Generate test cases by hiding control names, feature names, and custom action names.
  • Generate test cases with user-specified preconditions and postconditions.

7. Apply de-duplication to eliminate duplicate test cases and use the Feature File Quality Analyzer to assess and ensure the quality of the Feature File.
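De-duplication is built into the platform; purely as a mental model (not algoQA's implementation), duplicate detection can be pictured as normalizing each test case's step sequence and keeping the first occurrence of each signature:

    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Locale;
    import java.util.Map;

    public final class DeDuplicator {

        // Keeps the first test case for each normalized step signature;
        // later test cases with identical steps are treated as duplicates.
        public static List<List<String>> deDuplicate(List<List<String>> testCases) {
            Map<String, List<String>> unique = new LinkedHashMap<>();
            for (List<String> steps : testCases) {
                String signature = String.join("|", steps)
                        .toLowerCase(Locale.ROOT)
                        .replaceAll("\\s+", " ");
                unique.putIfAbsent(signature, steps);
            }
            return List.copyOf(unique.values());
        }
    }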

8. Generate scripts:

  • All at once for the entire test suite.
  • Feature by feature to incrementally build scripts.
  • Use a custom framework to implement and generate scripts.
  • Select the reporting template and file format for test results (a reporting-format sketch follows this list).
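As a hedged example of selecting report formats (the output paths are assumptions; the html:, json:, and junit: plugin prefixes are standard Cucumber-JVM options), one runner can emit the same results in several formats:

    import io.cucumber.junit.Cucumber;
    import io.cucumber.junit.CucumberOptions;
    import org.junit.runner.RunWith;

    // HTML for people, JSON for dashboards, JUnit XML for CI tooling.
    @RunWith(Cucumber.class)
    @CucumberOptions(plugin = {
            "html:target/reports/cucumber.html",
            "json:target/reports/cucumber.json",
            "junit:target/reports/cucumber-junit.xml"
    })
    public class ReportingRunner {
    }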


9. Execution:

  • Batch execution with parameterized tags.
  • Batch reports should categorize failed test cases into application issues and script issues.
  • Cross-browser execution: Know how to execute scripts across multiple browsers (a driver-selection sketch follows this list).
  • Parallel execution: Know how to execute scripts in parallel for faster test cycles.
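One common cross-browser pattern, sketched here under assumptions (the browser system property and DriverFactory class are illustrative; the Selenium driver classes are standard), is to select the WebDriver from a runtime parameter so the same scripts run on any browser; parallel runs are then typically configured in the runner or build tool:

    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    import org.openqa.selenium.edge.EdgeDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    public final class DriverFactory {

        // Picks the browser from a runtime parameter, e.g. -Dbrowser=firefox,
        // so one script base executes across multiple browsers.
        public static WebDriver create() {
            String browser = System.getProperty("browser", "chrome");
            switch (browser.toLowerCase()) {
                case "firefox": return new FirefoxDriver();
                case "edge":    return new EdgeDriver();
                default:        return new ChromeDriver();
            }
        }
    }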

10. PR Process:

  • Remove timestamps from file names and the target folder from generated scripts. Raise the PR using the same file name as your previous check-in.
  • Use the PR link to review delta changes.
     

11. Maintenance:

  • Use Auto-Healing to manage locator changes (excluding locators used in custom methods); a conceptual fallback-locator sketch follows this list.
  • Re-scrape the application to update locators and the profile.
  • Update the profile based on changes in scripts (verify how much of this functionality is available in your version).
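Auto-Healing is a platform feature; purely as a conceptual illustration of falling back to alternate locators (all names here are illustrative, while the Selenium calls are standard), a lookup can retry candidates when the primary locator breaks:

    import org.openqa.selenium.By;
    import org.openqa.selenium.NoSuchElementException;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;

    public final class ResilientLookup {

        // Tries each candidate locator in order and returns the first match,
        // loosely mimicking how healing falls back to alternates.
        public static WebElement find(WebDriver driver, By... candidates) {
            for (By locator : candidates) {
                try {
                    return driver.findElement(locator);
                } catch (NoSuchElementException ignored) {
                    // Primary locator broke; try the next candidate.
                }
            }
            throw new NoSuchElementException("No candidate locator matched");
        }
    }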

12. Copilot: algoQA Copilot is a built-in chat assistant within the algoQA platform that provides real-time support while users interact with the platform. It can answer questions and offer guidance on using the platform.


13. Impact-Based Testing:

  • Identify the areas impacted by recent code or configuration changes.
  • Prioritize and execute test cases that cover the affected functionalities to ensure efficient regression testing.
  • Leverage dependency mapping between components, features, and test cases to determine the exact impact scope (a mapping sketch follows this list).
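As a small illustration of dependency mapping (the map contents, class, and method names are hypothetical), the impact scope can be derived by unioning the test sets of every changed component:

    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    public final class ImpactAnalyzer {

        // component -> test cases that exercise it (illustrative data).
        private static final Map<String, Set<String>> COMPONENT_TO_TESTS = Map.of(
                "login",    Set.of("TC-001", "TC-002"),
                "checkout", Set.of("TC-002", "TC-010"),
                "search",   Set.of("TC-005")
        );

        // Union the test sets of every changed component to get the impact scope.
        public static Set<String> impactedTests(Set<String> changedComponents) {
            Set<String> impacted = new HashSet<>();
            for (String component : changedComponents) {
                impacted.addAll(COMPONENT_TO_TESTS.getOrDefault(component, Set.of()));
            }
            return impacted;
        }
    }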

14. Smart Ordering:

  • Implement smart ordering to prioritize test case execution based on impact, risk, and historical results (a scoring sketch follows this list).
  • Run high-priority and high-risk test cases first to detect critical issues early.
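A minimal scoring sketch for smart ordering (the weights, fields, and TestCase record are assumptions, not the platform's model): rank test cases by a weighted combination of risk, impact, and historical failure rate, highest first:

    import java.util.Comparator;
    import java.util.List;

    public final class SmartOrdering {

        // Illustrative model: risk and impact in [0, 1], failRate from history.
        record TestCase(String id, double risk, double impact, double failRate) {
            double score() {
                // Assumed weights; tune them from your own execution history.
                return 0.4 * risk + 0.3 * impact + 0.3 * failRate;
            }
        }

        // Highest-scoring (riskiest) test cases run first.
        public static List<TestCase> order(List<TestCase> cases) {
            return cases.stream()
                    .sorted(Comparator.comparingDouble(TestCase::score).reversed())
                    .toList();
        }
    }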

