Story by
Ana Vinšek

The t-matix IoT platform helps companies in various industries digitize their products and processes, develop new digital products, and innovate their business models. Thanks to its modular design, users can implement tailor-made solutions that fit their specific needs.

Therefore, it is crucial to ensure the platform is stable, intuitive to configure, and easy to use. After all, it is made to solve IoT challenges and make life easier. To achieve this goal, we must put an emphasis on testing in all stages of the software development life cycle – from unit and integration to load and usability tests. That includes functional as well as non-functional testing. In this blog, we will focus on one specific type of testing – exploratory testing.

Image 1. t-matix platform industry coverage

Exploratory testing is a manual testing technique in which the tester explores the application, looking for potential problems and corner-case scenarios, while simultaneously learning the product and designing new tests.

Image 2. Exploratory testing process

This approach relies heavily on the tester’s knowledge of the product, technical expertise, experience, and creativity. It allows the tester not only to ensure that the application works as intended but also to investigate various “unlikely” scenarios that are not part of a “happy path.” Thus, it can’t be replaced with automation, nor does it require a fixed script. Exploratory testing is not about following the instruction manual. It’s about end-to-end testing throughout the entire application and trying to find elusive bugs that cannot be caught by automated tests.

This technique is not meant to replace automated tests but to complement them. While automated testing allows testers to execute repetitive tasks and regression tests, it can’t cover the hard-to-find bugs that no one even thinks of. Such bugs require a human approach, along with intuition and experience in anticipating the behavior patterns of an end user.

Image 3. Limitations of Automation, https://www.guru99.com/exploratory-testing.html

The best way to demonstrate the importance of exploratory testing is the following example from our everyday work:

Our Reporting Engine allows users to create various reports that present collected data in a structured way, adjusted to our clients’ needs. When we configured a report to show the timestamp of a specific feature (let’s call it Feature A), we noticed that the displayed timestamp was correct only if Feature A itself was also configured in the report. If Feature A was not set, the Feature A timestamp showed the current time instead of the timestamp of the last collected value for that feature.

To discover this bug, one needs to look at the data table of the feature’s values and compare the timestamps in the table with those shown in the report. An automated script designed to test all functionalities would hardly catch this type of bug, and even if it could, writing scripts for every corner-case scenario one can think of would cost a lot of man-hours. It is much more efficient to hunt for this type of bug with exploratory testing. That is why we use a combination of manual and automated tests: it gives us the best chance of catching problems before they reach production.
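Once an exploratory session has uncovered a defect like this, it is worth pinning it down with a small, targeted check so it cannot silently come back. The sketch below shows the idea in Python; the base URL, endpoint paths, field names, and device identifier are illustrative assumptions rather than the actual t-matix API.

```python
# A minimal sketch, not t-matix production code: once the mismatch is known,
# it can be pinned down as a targeted regression check. The base URL, endpoint
# paths, field names, and device identifier below are illustrative assumptions.
import requests

BASE_URL = "https://platform.example.com/api"   # placeholder, not the real API
DEVICE_ID = "device-123"                        # hypothetical device
FEATURE = "feature_a"                           # the feature that was not configured


def get_json(path):
    """Fetch a JSON payload from the platform API (authentication omitted)."""
    response = requests.get(f"{BASE_URL}{path}", timeout=10)
    response.raise_for_status()
    return response.json()


def test_report_shows_last_collected_timestamp():
    # Last collected value for Feature A, taken straight from the data table.
    table_row = get_json(f"/devices/{DEVICE_ID}/features/{FEATURE}/latest")

    # The same feature as it appears in the generated report.
    report_row = get_json(f"/devices/{DEVICE_ID}/reports/latest")[FEATURE]

    # The bug showed the current time when Feature A was not configured;
    # the report should always echo the last collected timestamp instead.
    assert report_row["timestamp"] == table_row["timestamp"], (
        "Report must show the last collected timestamp, "
        "not the time the report was generated."
    )
```

A check like this is cheap to keep in a regression suite, but it only exists because a human tester noticed the mismatch in the first place.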

When using our platform, the user has the power to configure and set up many useful features and adjust the product entirely to their business needs and goals. By giving users such a powerful tool, we commit ourselves to making an easy-to-use and bug-free product. The tester has the responsibility to steer and adjust tests as they are performed, without a specific plan created in advance. When a new feature or application is nearing completion, these tests reveal potential defects in how the app works. They are also valuable for replicating the end-user experience, which is heavily influenced by user flow.

Image 4. User satisfaction equation

User flows are paths that users take to complete a task. Our aim is to let users achieve their goals without issues and to enjoy using our product. To do this, we must find inconsistencies and bugs in user paths. With exploratory testing, we validate that the flow is designed in accordance with business objectives and adjusted to end-user expectations. The goal is to detect confusing parts in user flows and remove potential barriers.

After running automated test scripts to ensure all our functionalities are working as expected (i.e. they pass standard test cases), we try to think of edge cases to uncover bugs that other testing techniques may have missed. While conducting exploratory testing, we always ask ourselves the following questions: What edge cases might the user run into? Is our platform designed in a way that ensures ease of use? Can we enhance the overall user experience? Are there tweaks needed to prevent confusion and a sense of being lost while using our platform?

Our common goal within the company is to ensure that users are satisfied with our product. That is why we give instant feedback to developers and stay in constant communication with them. Very often, the result of exploratory testing isn’t just a bug or a potential problem, but an idea on how to enhance the user flow. For this to happen, we need synergy between various teams (most often testers, developers, and support) to recognize potentially critical defects and prevent them from reaching users. That is the real value of exploratory testing within the t-matix team.

In the end, it is all about the satisfied user who can use our product to the fullest, which means using all its features with ease and clarity. If the user flow is too complicated and unintuitive, the user won’t be able to fully benefit from the product and service in question. This can lead to a severe drop in user satisfaction because the product won’t meet expectations. To prevent this from happening, we try to make our product better every day, even if it means that we as testers come off as people who break things and make life harder for developers and product managers. When we catch a bug, we feel we are one step closer to making our platform perfect, which is our continuous goal, not only as testers but also as members of the t-matix team.
