TABLE OF CONTENTS
1. Overview
2. How it Works
3. How to use Prompt Engineering
4. Advantages of Prompt Engineering
5. Limitations of Prompt Engineering
1. Overview
This article explains how to use prompt engineering in algoQA to generate test cases.
2. How it Works
- Describe the scenario to test in plain English. Example: "I want to test user login with valid and invalid credentials."
- The underlying LLM processes the input and generates test cases in standard BDD format, which can be readily converted to executable scripts.
- Output: reusable BDD test cases ready for test automation.
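The flow above can be sketched as follows. Both the helper function and the scenario wording are illustrative assumptions, not algoQA output or API; the sketch only shows the shape of the Gherkin-style BDD test cases the article describes.

```python
# Illustrative sketch: the kind of Gherkin-style BDD test case generated
# from a plain-English scenario. `to_bdd` is a hypothetical helper used
# here only to render the format; it is not part of algoQA.

def to_bdd(scenario: str, given: str, when: str, then: str) -> str:
    """Render one scenario as Gherkin-style BDD text."""
    return "\n".join([
        f"Scenario: {scenario}",
        f"  Given {given}",
        f"  When {when}",
        f"  Then {then}",
    ])

# From the prompt "I want to test user login with valid and invalid
# credentials.", the platform would produce scenarios shaped like these:
valid = to_bdd(
    "Login with valid credentials",
    "the user is on the login page",
    "the user submits a valid username and password",
    "the user is redirected to the home page",
)
invalid = to_bdd(
    "Login with invalid credentials",
    "the user is on the login page",
    "the user submits an invalid username or password",
    "an authentication error message is displayed",
)
```

Each Given/When/Then step maps to a reusable automation step, which is what makes the output readily convertible to executable scripts.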
3. How to use Prompt Engineering
To generate test cases using prompts:
- Create a new project. To learn how, see Creating a Project.
- After the project is created, add nodes to the canvas as required. Clicking a node opens its configuration, where you can add features and controls.
- Select the Prompt Engineering option at the top to generate test cases using prompts.
- Enter your prompt and click Generate Test Cases; the platform automatically generates test cases in BDD (Behavior-Driven Development) format.
- Optionally, select features from the left pane to generate more refined test cases.
4. Advantages of Prompt Engineering
- Ease of Use: Generate structured BDD test cases from plain-English descriptions. Test cases are tailored to the specific context of the application under test (AUT) and are ready for script conversion.
- Intelligent Test Case Generation: Understands complex, multi-step user inputs (e.g., "Login to the application and book an appointment") to produce relevant test cases covering full workflows.
- Accelerated Test Design: Speeds up test creation by automating the generation of accurate, context-aware test scenarios.
5. Limitations of Prompt Engineering
- Clean UI Metadata: The underlying LLM uses an offline model built during the profiling process as input for test case generation. Following best practices during profiling, such as meaningful naming of UI elements and a structured layout that reflects the AUT, greatly improves the quality of generated test cases.
- Prompting Strategy: LLMs are sensitive to input prompts, so users may need to experiment with prompt variations (e.g., rephrasing) to achieve the desired results.
- Validation of Generated Steps: Generated test steps are not automatically validated against the actual application under test (AUT); manual verification is required.
- Stateless Functionality: The feature operates in a stateless manner, meaning it does not retain context or information from previous prompts.
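Because each prompt is processed independently, any context from an earlier prompt must be restated in the next one. A minimal sketch of this practice, using a hypothetical helper (not an algoQA API) and the multi-step example from the Advantages section:

```python
# Since the feature is stateless, a follow-up prompt like "now book an
# appointment" loses the login context. Instead, combine related steps
# into one self-contained prompt. `combine_steps` is a hypothetical
# helper shown only to illustrate the practice.

def combine_steps(steps):
    """Join individual plain-English steps into a single prompt."""
    return "I want to test the following flow: " + ", then ".join(steps)

prompt = combine_steps([
    "login to the application",
    "book an appointment",
])
# The combined prompt carries all required context in a single request.
```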