Let's talk about model-based testing. Or rather, let me talk, to make notes of the way I model model-based testing. My history with it is longer than I care to remember: one of my past workplaces was heavily in that space, leading me to research it extensively.
Model-based testing isn't the idea that "all testing is based on models", just like risk-based testing isn't the idea that "all testing is based on risk". All testing is based on models and risk, but these techniques are still more specific.
Model-based testing is the idea that we could create a concrete model that captures relevant information in a way that is transferable to new people (it captures things better than words alone) and follows a set of rules that lets us describe it in a programmable format.
It feels like the testing equivalent of what UML was trying to do, except it isn't just UML. UML, as I recall it, was trying to create 4+1 perspectives to capture it all. Model-based testing takes any model, as long as it is useful for testing purposes.
Usually the model I see is a state model. And we can model so many things as states: user interface transitions, functionality, anything where one thing leads to another. And when we have state models, we can try using them to generate paths through the model.
These models, for testing purposes, give us two things. They give us a concise way of describing what leads to what, and a partial way of saying, with an oracle, that this should lead here. That enables generating test cases with a kind of assert attached to them.
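To make that concrete, here is a minimal sketch of the idea in plain Python, not any particular tool. The login-screen model, the toy system under test, the action names and the oracles are all invented for illustration; a real tool would do the walking and execution for you.

```python
import random

# A state model: for each state, the actions available and where they lead.
MODEL = {
    "logged_out": {"submit_valid_credentials": "logged_in",
                   "submit_bad_credentials": "error_shown"},
    "error_shown": {"dismiss_error": "logged_out"},
    "logged_in": {"log_out": "logged_out"},
}

# Hypothetical system under test: a toy session standing in for the real app.
class ToySession:
    def __init__(self):
        self.logged_in = False
        self.error = False

    def apply(self, action):
        if action == "submit_valid_credentials":
            self.logged_in, self.error = True, False
        elif action == "submit_bad_credentials":
            self.error = True
        elif action == "dismiss_error":
            self.error = False
        elif action == "log_out":
            self.logged_in = False

# Oracles: one small assert per state, attached to the model rather than to a script.
ORACLES = {
    "logged_out": lambda s: not s.logged_in,
    "error_shown": lambda s: s.error and not s.logged_in,
    "logged_in": lambda s: s.logged_in,
}

def random_walk(model, start, steps):
    """Generate one path (one test case) by walking the state model at random."""
    state, path = start, []
    for _ in range(steps):
        action, next_state = random.choice(list(model[state].items()))
        path.append((state, action, next_state))
        state = next_state
    return path

def run(path):
    """Execute a generated path against the toy system, checking the oracle at each state."""
    session = ToySession()
    for state, action, next_state in path:
        assert ORACLES[state](session), f"oracle failed in state {state}"
        session.apply(action)
    print("walked", len(path), "steps without an oracle failure")

run(random_walk(MODEL, "logged_out", steps=20))
```

The point of the sketch: the model describes what leads to what, the oracles describe what should hold in each state, and the paths - the test cases - fall out of walking the model rather than being written one by one.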
I know my friend @ru_altom works in the space of all good things testing, including model-based testing. She shared with my old team an example of using Python, AltWalker and a set of Selenium page objects to move from creating scripts to creating models.
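The appeal of that combination is the division of labour: the page objects keep doing the Selenium work, the model describes what leads to what, and a thin class glues them together by giving every vertex and edge in the model a method to call. The sketch below is my own reconstruction of that shape; the page object, locators, URL and credentials are invented, and the exact method and fixture naming conventions should be checked against AltWalker's documentation rather than taken from here.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Hypothetical page object for a login page; names and locators are invented.
class LoginPage:
    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get("https://example.com/login")  # placeholder URL

    def log_in(self, user, password):
        self.driver.find_element(By.ID, "username").send_keys(user)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "submit").click()

    def is_logged_in(self):
        return bool(self.driver.find_elements(By.ID, "logout"))

# The glue class: each method corresponds to a vertex or an edge in the state
# model, so the model-walking tool can call them in whatever order it generates.
# Fixture and method naming here is how I recall the convention, so verify it.
class LoginModel:
    def setUpModel(self):
        self.driver = webdriver.Chrome()
        self.page = LoginPage(self.driver)

    # edges: the actions that move us between states
    def e_open_login_page(self):
        self.page.open()

    def e_log_in_with_valid_credentials(self):
        self.page.log_in("demo", "secret")  # invented credentials

    # vertices: the states, each carrying its own oracle
    def v_login_page(self):
        assert not self.page.is_logged_in()

    def v_logged_in(self):
        assert self.page.is_logged_in()

    def tearDownModel(self):
        self.driver.quit()
```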
The demo Ru did was during my last week before I quit the company. I hoped it would catch on, but I had little power as I was leaving. I hear they tried it on a new feature. What I learned discussing their experience is what fascinates me today.
It feels like we are all looking for that one way to do all things testing. We should stop that. There is a mix of ways: this enables you to do things you did not do before, but you will still miss the old style of linear, easy-to-read tests too.
When things fail with a model, there is an abstraction there that makes pinpointing the problem one step harder. It's a tradeoff for getting that versatility into your scripts. Granularity - saying exactly what is wrong - is inherent in unit tests, but not in model-based tests.
When you build that model, it is easy to make it too complex. It is a specific technique that requires the skills of that technique. But there is no better way to learn than to try and FAIL - it is only your First Attempt In Learning.
Also, you may end up questioning who should build that model. What if machine learning were up for building that model, instead of you doing the work? It gives your automation efforts an extra shape you can start pondering.
For many, many years, Finland had multiple research projects in the space of model-based testing. I got to hear and read about what they did. And to note that good stuff from university research does not stick easily with companies.
They found *critical* problems in the research partner's applications. Those were quietly fixed. They publicized finding problems, with relatively small effort, in publicly available new sites. Crashing-type problems, the easiest of oracles.
Simple, basic tools can already take you a long way. Understanding when and where to apply this is what we should be learning. Not all tests will be "generated", but the ones that are can make a difference.