⚡️ Supercharge your test cases ⚙️
Testmatic is a test-case manager for quickly writing and organising human-readable test cases.
- ✅ simple lists of steps
- 🧪 are grouped into tests
- 🏷️ which can be organised by tags
- 🏃 with results tracked in runs
Everything is stored in Markdown files, for easy viewing, editing, searching and version control. If pushed to a server, links to Markdown files can be shared within your org, via e.g. Wiki pages, Chat posts, email, etc.
The main benefit of using Testmatic is that it helps you to perform manual testing in an organised, consistent, rigorous, systematic manner.
Manual testing can offer fast benefits when automated test coverage is limited, development time is constrained and rapid delivery is required while minimising defects.
Whenever you are ready to add automation, Testmatic can help you there too: code generators automatically convert your test cases into unit test files.
Get started by installing Testmatic via the CLI.
- Node.js version 16.14 or above.
Install globally via NPM:
npm install -g testmatic
You can verify the installation by running:
testmatic
Get started by installing Testmatic via the UI.
- Node.js version 16.14 or above.
Install globally via NPM:
npm install -g testmatic-ui
You can start the UI from your command line.
testmatic-ui
You will want to first switch to the directory where you want to locate your Testmatic project.
For example, if you want to put your Testmatic project under ~/Sources/my-app, run this:
cd ~/Sources/my-app
testmatic-ui
This will set up your Testmatic project in the following folder: ~/Sources/my-app/.testmatic.
If you just want to play around with Testmatic or use it lightly, you can get started immediately with the hosted UI - no local installation required.
Simply navigate to: http://testmatic.surge.sh and begin adding your tests.
You can find instructions for using the hosted UI in the UI guide.
At the heart of Testmatic are tests, which are simple numbered lists of steps.
You simply follow the steps to test the system.
Here's an example of a test:
User can login
- Go to the homepage
- Click the sign in button
- Enter the username and password of a registered user
- Click the sign in submit button
- Observe that you are now signed in
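Since Testmatic stores everything in Markdown, a test like this might live on disk as a small Markdown file along these lines (a sketch only - the exact on-disk format is an assumption, not taken from the docs):

```markdown
# User can login

## Steps

1. Go to the homepage
2. Click the sign in button
3. Enter the username and password of a registered user
4. Click the sign in submit button
5. Observe that you are now signed in
```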
Notice some steps are actions, such as:
- Click the sign in button
Whereas others are expectations, such as:
- Observe that you are now signed in
The actions tell us what to do but the expectations tell us how the system should behave or respond.
If the system does not behave as expected then either there is a fault in the system or our expectation is wrong.
The title and steps should be as short and succinct as possible but also clear and specific enough to easily follow and understand.
As you write many tests, you'll no doubt notice common recurring elements.
These elements can be referenced by tags, which are re-usable tokens that can be used to group and organise tests.
Tags are indicated in steps by surrounding a piece of text with round brackets, like this: (tag).
For example, the "sign in" button might be a recurring element. Here's how it would look if we made it a tag:
Reset password
- Go to the website
- Click the (sign in button)
<----------
- Enter the username and incorrect password of a registered user
- Click the submit button
- Observe that an error appears indicating the incorrect password
- Observe that a forgotten password link appears
- Click the forgotten password link
- Open your email, copy the code
- Switch back to the website, paste in the code
- Enter a new password and click submit
- Click the (sign in button) again
<----------
- Enter the username and new password
- Observe that you are signed in
Tags can be the following:
- Visual elements in the UI
- Pages of a website
- Screens of an app
- Test user accounts
- Flows, e.g. sign in flow
- Anything else that might be common between different tests
Note: You can also attach tags to a test itself. This is useful when you want a tag to apply to a whole test, not just one or more steps.
After you have written a test you might follow the steps of that test any number of times, to ensure the system is behaving correctly.
What if you want to keep track of the results? This is what test runs are for.
Each test can have multiple runs. Each run has a date/time stamp indicating when it was performed as well as a result and a list of checked/unchecked test steps.
You can mark each run with a result:
- Passed - the system behaved correctly
- Failed - there was some problem so the whole test failed
- Mixed - some steps passed, some failed
- Unset - you haven't performed the test yet
You can also check off individual test steps as you go through the run. This could be useful when making a screen recording, to show which steps have been executed, or just to keep track of the steps.
Each run has a date/time stamped folder containing the run file.
You can put screenshots or screen recording files into this folder as well. That way they will be easier to locate if/when you need them.
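As a sketch, assuming the date/time-stamped folder layout described above, copying a screenshot into a run folder might look like this (all paths and filenames here are hypothetical):

```shell
# Hypothetical paths: a run folder like the ones `testmatic run add` creates
run_dir=".testmatic/runs/should_load_homepage/2024-04-24_02-44"
mkdir -p "$run_dir"

# Stand-in for a real screenshot captured during the run
touch homepage-loaded.png

# Keep the screenshot next to the run's Markdown file
cp homepage-loaded.png "$run_dir/"
ls "$run_dir"
```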
Runs are a powerful feature for measuring and organising your test results. You can quickly see which tests have passed or failed and keep your screen recordings organised.
Part of the benefit of tags (discussed earlier) is their usefulness for grouping and locating tests. If two or more tests reference the same tag then you can locate these tests by filtering by that tag.
This gives you a way of finding impacts – which tests might be impacted by a particular tag.
For example, suppose you are making a visual change to the website header, which might impact the sign in button. You can search for all tests that reference the (sign in button) tag.
Your search might turn up a list of tests like this:
- Sign in as user
- Sign in as admin
- Reset password
- Sign out
Now you can run some or all of these tests and check how the sign in button behaves under various scenarios. If your change introduced a bug, you'll have a chance to find and fix it early rather than waiting for it to crop up in production.
There is no need to explicitly create a new project in the UI. As soon as you open the UI, a new test project will be created if one does not already exist.
If you are running the Testmatic local UI, you can switch to a project in a different folder.
Simply exit the running Testmatic local UI instance (Ctrl+C), use cd to switch to a different folder, then re-run testmatic-ui.
For example:
~/Sources/project-one $ testmatic-ui
Running Testmatic UI...
...
{ Ctrl+C }
~/Sources/project-one $ cd ../project-two
~/Sources/project-two $ testmatic-ui
Running Testmatic UI...
...
To create a new test, click the Add button in the Project explorer (top-left of the screen).
It will now appear in the Project explorer.
You can then enter a title and steps.
Open a test by selecting it in the Project explorer on the left.
You can then enter some steps.
ℹ️ Tags are re-usable tokens that can be used to group and organise tests.
For more info, see tags.
You can add tags to steps or to the test as a whole.
Focus on the step textbox and type an "open bracket": (. Enter your tag name, then type a "close bracket": ).
You'll notice that a suggestion box appears – feel free to use this to quickly insert pre-existing tags.
- Keyboard - You can navigate with arrow keys and select a tag with the enter/return or tab key.
- Mouse - You can scroll up/down and click one of the tags in the list
Tags appear in the Project explorer under the Tags section.
Simply click a tag to view its associated tests.
A Testmatic project consists of a .testmatic folder in your current working directory, containing sub-folders for tests, tags and runs.
You can generate the folders using the init command:
testmatic init
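To visualise what init sets up, here is a minimal simulation of the folder layout described above (a sketch under the assumption that init creates exactly these three sub-folders; run the real testmatic init to be sure):

```shell
# Simulate the project layout that `testmatic init` is described as creating
mkdir -p .testmatic/tests .testmatic/tags .testmatic/runs

# List the resulting directories
find .testmatic -type d | sort
```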
You can create your first test by running the test add command and answering the prompt questions, pressing Enter/Return when done.
$ testmatic test add
Please enter test title: Should load homepage
Thank you!
Now, please enter your steps, one-by-one.
(Empty line to finish)
1. Navigate to homepage
2. Observe homepage has loaded
3.
$
Your test "Should load homepage" should now have been created.
You can verify this by running the test list command:
$ testmatic test list
TITLE NAME DOC
Should load homepage should_load_homepage ./.testmatic/tests/should_load_homepage.md
$
Suppose you want to categorise your new test as applicable only to guest users (users who have not yet logged in).
You can add a tag to the test by running the test tag add command:
$ testmatic test tag add "should_load_homepage" "guest user"
$
Note: We're using the test name should_load_homepage here. It's just the title condensed into one word using underscores (_). This makes it easier to copy/paste when needed.
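The transformation appears to be: lower-case the title and replace spaces with underscores. A rough shell equivalent (an approximation - the real rule may also strip punctuation):

```shell
# Approximate the test-name derivation: lower-case, spaces -> underscores
printf '%s' 'Should load homepage' \
  | tr '[:upper:]' '[:lower:]' \
  | tr ' ' '_'
# → should_load_homepage
```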
Your new tag "Guest user" should now have been created and added to the "Should load homepage" test.
You can verify this by running the test show command:
$ testmatic test show should_load_homepage
Should load homepage
====================
Doc: ./.testmatic/tests/should_load_homepage.md
Steps
-----
STEP
1 Navigate to homepage
2 Observe homepage has loaded
Links
-----
(No items)
Tags
----
NAME TITLE DOC
guest_user Guest user ./.testmatic/tags/guest_user.md
Runs
----
(No items)
$
You can see that your new tag was added.
If you explore your local .testmatic folder you'll see that the test and tag files were created:
- .testmatic
  - tests
    - should_load_homepage.md ⚡️
  - tags
    - guest_user.md ⚡️
  - runs
Suppose you added a few other tests involving various kinds of users, but now you want to see only the tests involving a guest user. You can filter the list of tests by the "Guest user" tag using the test list command with the --tag switch:
testmatic test list --tag "Guest user"
TITLE NAME DOC
Should load homepage should_load_homepage ./.testmatic/tests/should_load_homepage.md
When you perform a test, you might want to record certain details:
- Date/time you performed the test
- Result of the test – success, failure, mixed
- Text and links
- Screen recordings (videos, images, etc.)
- Outputs (JSON or CSV files, etc.)
Testmatic has a Runs feature to help you with this.
Each test can have one or more runs.
Each run has:
- One dated folder containing a Markdown file and any other files you wish to include (screen recordings, outputs, etc.)
- One dated Markdown file containing the date/time, text and links (if any)
To create a run, simply run the run add command, providing the test name (or title) as the first parameter:
testmatic run add should_load_homepage
Note: You can optionally provide a date/time stamp in the format YYYY-MM-DD_HH-MM. For example: 2024-10-01_11-30 for October 1, 2024 at 11:30 AM.
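If you want a stamp for the current time in that format, date can generate one (assuming a POSIX-ish date command):

```shell
# Build a YYYY-MM-DD_HH-MM stamp for the current time
stamp=$(date +%Y-%m-%d_%H-%M)
echo "$stamp"
```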
The new run folder and Markdown file should now have been created.
You can verify this using the run show command:
$ testmatic run show should_load_homepage
Should load homepage – 24/4/2024 2:44
=====================================
Files
-----
FILE
./.testmatic/runs/should_load_homepage/2024-04-24_02-44/2024-04-24_02-44.md
A new folder and Markdown file will have been added:
- .testmatic
  - runs
    - should_load_homepage
      - 2024-04-24_02-44 ⚡️
        - 2024-04-24_02-44.md ⚡️
You can open that folder in Finder (on macOS) using the run open command:
testmatic run open should_load_homepage
Note: How does Testmatic know which run to show or open? It uses the latest by default. If you prefer to target an older run, you can provide the run's date/time stamp as an additional argument to the run show or run open command. See run show, run open for details.
You might want to attach links to tests or tags.
For example:
- Links to documentation
- Links to web pages under test
- Links to screenshots or screen recordings
- Links to mock accounts
You can add/remove links manually by editing the test, tag or run Markdown files.
Testmatic also includes test link add and tag link add commands, allowing you to quickly add links to tests or tags respectively without leaving the command line.
$ testmatic test link add should_load_homepage http://website.com/
$ testmatic test show should_load_homepage
Should load homepage
====================
Doc: ./.testmatic/tests/should_load_homepage.md
Steps
-----
STEP
1 Navigate to homepage
2 Observe homepage has loaded
Links
-----
NAME URL
http://website.com/ http://website.com/
$ testmatic tag link add guest_user http://test-accounts.com/guests
$ testmatic tag show guest_user
Guest user
==========
Doc: ./.testmatic/tags/guest_user.md
Links
-----
NAME URL
http://test-accounts.com/guests http://test-accounts.com/guests
Tests
-----
TITLE NAME DOC
Should load homepage should_load_homepage ./.testmatic/tests/should_load_homepage.md
You might want to quickly find out, for a specific test or tag, which other tests or tags are related.
For example, some tests involving guest users might also involve other kinds of users or other systems, e.g. email notifications. So an error that causes failure of one test might cause failures of other related tests.
Testmatic has an Impacts feature which shows you a graph of impacts of a given test or tag. You can use the test impacts command to view a test's impacts, or the tag impacts command for a tag's impacts.
$ testmatic test impacts should_load_homepage
Test: Should load homepage - Impacts
====================================
--> Homepage screen (homepage_screen) [tag]
--> Footer (footer) [tag]
--> Should show copyright info (should_show_copyright_info) [test]
--> Header (header) [tag]
--> Should show log out button (should_show_log_out_button) [test]
--> Homepage url (homepage_url) [tag]
--> Should indicate logged in status after log in (should_indicate_logged_in_status_after_log_in) [test]
--> Header login link (header_login_link) [tag]
--> Login form (login_form) [tag]
--> Valid existing user login credentials (valid_existing_user_login_credentials) [tag]
--> Header logged in status (header_logged_in_status) [tag]
$ testmatic tag impacts homepage_url
Tag: Homepage url - Impacts
===========================
--> Should indicate logged in status after log in (should_indicate_logged_in_status_after_log_in) [test]
--> Should load homepage (should_load_homepage) [test]
--> Homepage screen (homepage_screen) [tag]
--> Footer (footer) [tag]
--> Should show copyright info (should_show_copyright_info) [test]
--> Header (header) [tag]
--> Should show log out button (should_show_log_out_button) [test]
Usage: init
Create a new project in the current working directory
Usage: project create
Create a new project in the current working directory (same as testmatic init)
Usage: test list [options]
List tests in the current project
Options:

Syntax | Description
---|---
`-t, --tag <value>` | Filter by tag
Usage: test add [options]
Add a new test to the project
Options:

Syntax | Description
---|---
`-t, --title <value>` | Title of the test. Also used to generate an underscored filename used to refer to the test in short-hand. Titles must be unique and should briefly summarise the test steps. Required - must be provided, either via prompt or command line.
`-d, --description <value>` | Description of the test. Longer than the title; provides a more detailed summary of the test. Tests can also include tags, enclosed in round brackets: (, ). For further information, see 'testmatic tag help'. Optional.
`-s, --steps [steps...]` | List of steps of the test. Add each step in quotes separated by a space, e.g.: "step one" "step two". Steps will be in the order that they are provided. Required - at least one step must be provided, either via prompt or command line.
`-l, --links [links...]` | List of links to associate with the test. For example, a deep link to the web page being tested or relevant documentation. Add each link href in quotes separated by a space, e.g.: "http://product.com/login" "http://wiki.com/login-flow". Links can be prefixed with text separated by a pipe "\|", e.g. "Login page\|http://product.com/login" "Login flow docs\|http://wiki.com/login-flow". Optional.
Usage: test delete <testNameOrTitle>
Delete a test
Usage: test show <testNameOrTitle>
Show the full details of a test
Usage: test link add [options] <testNameOrTitle> <linkHrefOrTitle>
Add a new link to a test
Options:

Syntax | Description
---|---
`-t, --title <value>` | Title of the new link. Optional.
Usage: test link delete <testNameOrTitle> <linkHrefOrTitle>
Delete a link from a test
Usage: test link open <testNameOrTitle> <linkHrefOrTitle>
Open a test link in the browser
Usage: tag list
List tags in the current project
Usage: tag add [options]
Add a new tag to the project
Options:

Syntax | Description
---|---
`-t, --title <value>` | Title of the tag. Also used to generate an underscored filename used to refer to the tag in short-hand. Titles must be unique and should briefly describe the tag. Required - must be provided, either via prompt or command line.
`-y, --type <value>` | Type of the tag. Used to categorise one or more similar tags, e.g. "page" for tags that refer to a page in a website. Optional.
`-d, --description <value>` | Description of the tag. Longer than the title; provides a more detailed description of the tag. Optional.
`-l, --links [links...]` | List of links to attach to the tag. For example, a deep link to the web page being tested or relevant documentation. Add each link href in quotes separated by a space, e.g.: "http://product.com/login" "http://wiki.com/login-flow". Links can be prefixed with text separated by a pipe "\|", e.g. "Login page\|http://product.com/login" "Login flow docs\|http://wiki.com/login-flow". Optional.
Usage: tag delete <tagNameOrTitle>
Delete a tag
Usage: tag show <tagNameOrTitle>
Show the full details of a tag
Usage: tag link add [options] <tagNameOrTitle> <tagLinkHref>
Add a new link to a tag
Options:

Syntax | Description
---|---
`-t, --title <value>` | Title of the new link. Optional.
Usage: tag link delete <tagNameOrTitle> <linkHrefOrTitle>
Delete a link from a tag
Usage: tag link open <tagNameOrTitle> <linkHrefOrTitle>
Open a tag link in the browser
Usage: tag type <tagNameOrTitle> <tagType>
Set the type of a tag
Usage: tag impacts <tagNameOrTitle>
List the tests and tags that are impacted by a tag
Usage: run show <testNameOrTitle> [runDateTime]
Show the full details of a run
Usage: run open <testNameOrTitle> [runDateTime]
Open a run folder
Manual testing is using an application yourself to ensure it works well.
It can be as simple as opening the app or website and clicking around to check that everything works.
In more complicated cases, it can involve systematic, rigorous testing, such as going through the entire sign-up flow to ensure that it works correctly.
Simple manual testing, such as clicking around, is often done by developers/engineers as they build the product.
More advanced testing is often outsourced to QA departments, who use sophisticated test case tools.
However, as products become more complex, an increasing burden of testing falls on developers/engineers and others on the team.
Testmatic fills this middle-ground between casual/informal testing and professional QA.
There is much agreement that unit testing is beneficial, but what about manual testing?
Manual testing provides unique benefits:
- Manual testing helps you discover and document intended behaviour – how the system is expected to behave.
- Manual test cases can be written immediately, without setting up test frameworks, etc.
- Manual testing can be done in any environment you have access to.
- Manual testing puts you in the end-user's shoes, letting you experience the system as an end-user would.
- Manual testing lets you verify complex, lengthy workflows, which might be too difficult to automate.
These benefits make manual testing highly useful in certain kinds of situations, such as:
- Large and complicated applications
- Poorly documented applications
- Time-constrained businesses, e.g. a fast-growing startup
- Cost-constrained businesses who cannot afford QA
Much of the industry has focussed on the benefits of automated testing and some have downplayed manual testing, claiming it to be time-consuming, inefficient and error-prone.
These criticisms may hold in theory, but in practice they can overlook some important issues with automated testing:
- Automated test coverage can be limited.
- Automating all important flows can be time-consuming.
- Automating certain kinds of behaviour can be very difficult. E.g. full user interface testing including observing smoothness, performance, accessibility, etc. or long complex flows involving multiple systems, both internal and external to the organisation, can be very difficult to automate.
- Automated testing may limit incidental or unplanned but desirable discoveries, which are more likely to be picked up in manual testing. E.g. while testing a login flow, one might discover a small UI glitch in the password entry field.
In most situations, the optimal choice is probably some combination of automated and manual testing.
Automated tests are useful and widely applicable, but not the "one ring to rule them all".
Traditionally, manual testing was mainly performed by QA engineers, using sophisticated test case tools.
However, as products become more complex, an increasing burden of testing falls on developers/engineers and others on the team.
- Engineers perform manual testing to understand what they're building, why they're building it, how people will use it, etc.
- Engineers perform manual testing to ensure their code actually works as expected
- Product owners perform manual testing to understand how people use the product, identify issues and bottlenecks and ideate on future directions for the product
- Designers perform manual testing to see how the system looks and feels and identify areas for improvement, such as making the experience more consistent and cohesive
While specialisation is good for productivity, a blinkered or siloed approach to testing may not be optimal.
- More (and a variety of) eyes on a product are more likely to uncover errors.
- Errors (or important ones at least) eventually need to be fixed and this will likely involve engineers. It will likely be easier for engineers to fix errors if they already have a good grasp on key test cases and are generally capable testers themselves.
- Fixes may come in various forms, not necessarily engineering. E.g. a small error in pricing might be cheaper to fix by simply adjusting copy in an email to alert users, rather than consuming expensive engineer-hours.
- Engineers who understand test cases can gain a more holistic understanding of the system they are working on. Rather than just focussing on structure / code, engineers can think of how the software is being used and tailor their efforts accordingly. E.g. prioritisation of work, code structure, data structure selection, time estimates, long-term goal-setting.
Manual testing can be achieved in various ways with various systems.
Some key benefits of using Testmatic for your manual tests are:
- Fast and easy to create and update via CLI or text editor
- Easy to share with business and tech people alike
- Organised, structured and repeatable, enabling rigor and consistency in the testing effort
- Searchable by tag, allowing second-order impacts of failures to be identified, tested and fixed
- Can be used to generate automated unit test code scaffolding if desired (see: Roadmap)
Given/When/Then syntax is cumbersome, has a learning curve, and is generally the domain of software engineers.
In contrast, simple lists of testing steps, much like the method in a cooking recipe, are easy to understand for a broad set of team members – e.g. agile managers, product owners, QA, designers, engineers.
It's easier and faster to manually test software when you are clear on what specific actions need to be taken and in what order.
By documenting and organising testing procedures, manual testing can be performed consistently, ensuring a rigorously tested product.
Testmatic focusses on helping you write and organise your testing steps first, generating empty placeholder functions without you having to immediately write code to automate them.
If and when you decide to add automation, it's easy to locate the places in which to write code, and you can automate step by step, rather than having to automate a whole test sequence all at once.
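As an illustration of the idea (not Testmatic's actual generator output - the stub format here is an assumption, shown Jest-style), turning steps into empty placeholder test functions could look like this:

```shell
# Sketch: emit empty unit-test stubs from a list of manual test steps
steps='Navigate to homepage
Observe homepage has loaded'

printf '%s\n' "$steps" | while IFS= read -r step; do
  printf 'test("%s", () => {\n  // TODO: automate: %s\n});\n\n' "$step" "$step"
done
```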
You can share your test steps with team members and stakeholders easily.
You can link to a whole test, or a list of related tests.
Testmatic tests can be hosted in version control, with zero third-party dependencies or additional setup.
This makes it easy to get started quickly - simply fork the testmatic project, begin generating and committing your tests, and push to your own repository.
If your organisation has a version control system, and you have permission to create a new repository, you already have everything you need to get started.
As your testmatic instance is a forked Git repository by default, you reap all the benefits of version control - tracking the history of changes, branching, ability to revert changes, etc.
As tests, steps and tags are stored internally as TypeScript code, you can easily make modifications - large or small - using the standard tools of your IDE. For example, in VS Code you can rename a token and automatically have all usages updated, using Update imports on file move and Rename symbol.
For example, you can instantly retrieve a list of all tests for a particular screen, e.g. Login screen or Dashboard screen.
Or you can instantly retrieve a list of tests that utilise a particular test account.
These lists can be conveniently linked from external repositories of information, such as a Solution design in a Wiki, a task tracking system or company chat.
For example, a wiki page for the Login screen could link to a testmatic doc listing all tests for that screen: http://github.com/myaccount/mytests/blob/main/docs/tags/login_screen.md.
Yes!
You can add them to a run folder of your test or link to them from the run markdown file.
See the Runs section under Advanced.
You can get started by forking the testmatic repo.
- Git
- Node.js version 16.14 or above:
- When installing Node.js, we recommend checking all checkboxes related to dependencies.
- On GitHub.com, navigate to the testmatic/testmatic repository.
- In the top-right corner of the page, click Fork.
- Under "Owner," select the dropdown menu and click an owner for the forked repository.
- By default, forks are named the same as their upstream repositories. Optionally, to further distinguish your fork, in the "Repository name" field, type a name.
- Optionally, in the "Description" field, type a description of your fork.
- Optionally, select Copy the DEFAULT branch only.
- For many forking scenarios, such as contributing to open-source projects, you only need to copy the default branch. If you do not select this option, all branches will be copied into the new fork.
- Click Create fork.
You can then clone your fork to a local folder:
git clone https://github.com/myname/mytestmatic
And install the dependencies using npm.
npm install
Congratulations! You now have a working Testmatic project.
Future | More advanced querying, similarity search for tags, web-based UI, re-usability as a library, code generation framework. |
---|---|
28-May-2024 | Simplification of core, leveraging of `commander` and `chalk` libraries, deferral of code generation, addition of runs and impacts feature. |
13-Aug-2023 | Initial release with core framework and basic support for tests, steps, tags, doc generation and querying of tests and steps. |
Nothing to write here so far. This section will be updated if and as needed.
Please feel free to contact Jonathan, the project maintainer.
- Twitter - @conwy
- Github - @jonathanconway
- Email - jon@conwy.co