1) Since it's Friday, OF COURSE the big little idea arrives unbidden, to be consigned to weekend Twitter. However... several unrelated conversations are leading to some epiphanies that help to explain A LOT of phenomena. For instance, testing's automation obsession. Game on.
2) There are problems in the world. People don't like to talk about problems too much. All kinds of social forces contribute to that reluctance. "Don't come to me with problems! Come to me with solutions!" Dude, if I had a solution, I wouldn't come near your office. Trust me.
3) Here's the thing (and I'm painting in VERY broad strokes here): builders, or makers, or (tip of the hat to @GeePawHill) practitioners of geekery are trying to solve technical or logistical problems. Consumers, or managers, or some testers, are trying to solve social problems.
4) Pretty much by definition, it's natural for builders—technologists and developers—to try to address technical problems. And when that's your inclination, it's not that you've got a hammer and everything looks like a nail; it's that all any nail needs is another kind of hammer.
5) This is both a feature and a bug. We tend to love technology because it's really good at solving technical and logistical problems—which are, to be sure, often at the pointy ends of many of our personal and social problems. That's the feature part, and it's a GOOD feature.
6) We are who we are, as individuals and as groups. The bug in our affection for technology is that our problems are always, in the end, personal and social. Technology doesn't ever solve those problems, but it can address aspects of them, sometimes with spectacular results.
7) Technology offers workarounds: hints and secrets; accelerators and brakes; amplifiers and attenuators; enablers and disablers; extenders and limiters; strong floors and soft cushions. These are all wonderful things to address the parts of social problems that yield to them.
8) Here's an example: in trying to build a product, programmers write code. There's an individual problem here: it's hard to write code really well without making mistakes. And there's a social problem, too: the manager wants the product RIGHT NOW, because deadlines.
9) So here's a really good way to address the technical and logistical part of the manager-pressure problem: apply or create technical solutions for it. Today's IDEs are increasingly amazing, with lots of features and hints to help avoid coding errors. That speeds up development.
10) Another thing that we could do to address the problem of coding errors: write code designed to trap them—unit checks, and automated regression checks. That's a good workaround for this rather daunting problem: Humans Make Mistakes, and computers don't comprehend intention.
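To make that concrete, here's a minimal sketch of a unit check in pytest. Everything in it (the discount function, its rules, the expected values) is hypothetical, invented for illustration; the point is that the check code traps mismatches between what was typed and what was meant:

```python
# test_discount.py -- a minimal unit check, pytest style.
# discount() and its rules are hypothetical, for illustration only.
import pytest

def discount(price: float, percent: float) -> float:
    """Apply a percentage discount; reject nonsensical inputs."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_typical_discount():
    assert discount(100.0, 25) == 75.0

def test_zero_discount_leaves_price_alone():
    assert discount(19.99, 0) == 19.99

def test_out_of_range_percent_is_rejected():
    with pytest.raises(ValueError):
        discount(100.0, 150)
```

Run `pytest test_discount.py` and a slip of the fingers inside discount() shows up as a failing check within seconds.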
11) So: IDEs and automated checks can help a lot as workarounds to possible mismatches between what I typed and what I meant to type. By making me aware of those errors quickly, they can also prompt me to be more careful and make fewer mistakes. That's a pretty big win for me.
12) Better yet, maybe I'm a programmer on a team that discusses things before I code them, and produces example output. If so, other people—not only I—can write automated checks that will detect differences between the product's output and what someone else wanted as output.
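A sketch of how that might look, assuming the team's conversation produced a table of examples. format_name() and the rows are hypothetical stand-ins; the key point is that the expected outputs came from someone other than the programmer:

```python
# test_formatting.py -- checks derived from examples the team discussed.
# format_name() and the example rows are hypothetical, for illustration.
import pytest

def format_name(first: str, last: str) -> str:
    return f"{last.upper()}, {first.capitalize()}"

# Each row records an example agreed on before coding:
# (first, last, the output someone else said they wanted).
AGREED_EXAMPLES = [
    ("ada", "lovelace", "LOVELACE, Ada"),
    ("Grace", "hopper", "HOPPER, Grace"),
]

@pytest.mark.parametrize("first,last,expected", AGREED_EXAMPLES)
def test_output_matches_agreed_example(first, last, expected):
    assert format_name(first, last) == expected
```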
13) As a programmer, those assistants (often called testers, or SDETs) are helping me, and they're close to me. What's more, they're using and referring to code and tools a lot, just like I do, when they're talking about testing. All that typically helps to garner my respect.
14) Not only that, but we're all in a department and a company that produces code, founded by people who have spent their lives producing code. It's natural that many testers would want a piece of that action, either by nature or by socialization.
15) When you're writing automated check code, trying to solve technical problems in your technical workaround to the programmer's (real and legitimate) personal problem, the REAL social problem is automatically several layers away from you. See it? I haven't even noted it yet.
16) The REAL social problem is the one people using your software want to address. (Aside: I'm saying "REAL" here, but it's a rhetorical bug—or feature—to imply that all of the other problems in the chain, social or otherwise, aren't real. They are. But they're not at the core.)
17) All this can lead to misunderstandings and schisms in the way people think about testing. If you're at a low enough level of abstraction, close to the code and the machinery, you'll naturally focus on testing that; even more so if your social group is similarly focused.
18) If you and your social group are close to the code and the machinery, *of course* you will be prone to focusing on verifying output via automated checks. Even more so if you are by nature fascinated by technology, when you have a builder or maker or developer's mindset.
19) And if you've been through a computer science program, it's natural that you'll focus on code and mathematics and algorithms — and that your curriculum won't have paid much attention to testing. Your professors mostly came from that kind of world too, after all.
20) To the degree that your profs in those comp sci classes taught you about testing, they probably taught you to check the output of functions and routines. That's a good thing—seriously, a good thing. Code should be correct or else good things won't happen or bad things will.
21) If you're a tester, what you got from college, and your mission from the developers and managers in your shop might well be this: "Is the code inconsistent with the programmers' intentions? If it is, there's a bug, and if it isn't, there's no bug. Please look for bugs."
22) I don't want to be *too* extreme on this, but I'll risk it: the programmers' intentions for the *code* are to some degree beside the point, because the code is not the product. The REAL (see above) product—what we're producing—is the entirety of the user's experience of it.
23) Now, of course, the code is not inconsequential. On the contrary: no code, no product. The code affords and enables the product. And... Bad code: almost certainly bad product. Unstable code: unstable product. Insecure code: almost certainly insecure product. Et cetera.
24) It's a really good idea to scrutinize the code, to check it for coding errors. But in the end, that's not the thing of it, because the code is a means to an end. People, non-builders, don't *really* care about the technology. They care about how it addresses their problems.
25) This leads to different communities of testing, or as one might say, four _schools_. (As you can see, I'm not afraid of connecting a power supply to the third rail.) Some people called the original four schools "divisive", as though the divisions weren't there already.
26) But had they stopped to think about it, those people *could* have used the four schools as a means of achieving Peace In Our Time, by clarifying the very reasonable differences in perspectives on testing: what it is, what it means, and where it gets applied, and why.
27) So, I will revive the idea here with four schools different from the original four. Ready? As we're developing a product, it's important to focus on our intentions, and on possible problems in them. We need to understand the customers' needs and desires for the product.
28) We also need to develop a diversified notion of "customer", because in a development group, we're serving a lot of them. Certainly we're serving the people who are going to use our product; the product's customers. But we're also serving our direct clients: the business.
29) The business will want to know if it's soup yet—that is, whether the product is ready to ship, or whether it has problems that will diminish or destroy its value. We'd better start thinking about that right away. Of course, value is relative to customers, both individuals and groups.
30) So as testers, we had better start developing our ideas about lots of people who might use the product, including the ones that everyone else is forgetting (like novices, or disabled people, or people from other cultures, or impatient people, or hackers). And others, too.
31) We should consider the people who will be supporting the product, and testing it, and maintaining it, and porting it to other platforms, and translating it for other markets. And here's the important and somewhat painful part: we should be finding problems in others' work.
32) I'll keep coming back to that point: it is of the essence of testing to challenge what's in front of us; to believe that there must be problems in it, of some kind, for some people. If we don't believe in the probable existence of problems, we won't test well.
33) The issue here is that there might be problems in our ideas about what the customer wants or needs or gets from the product, and therefore, also, in our ideas about how we should build and support the product. That is: problems in our intentions and in our designs.
34) It's a really good idea to test the product by testing our ideas for the product's design before we even build it. There's a school of thought in testing, a community of testers in the world, that focuses non-exclusively on that stuff. Let's call that the Intention School.
35) ("Non-exclusively" is important. Many people will be school-fluid, as it were, relative to this model But I'm willing to bet that almost everybody leans preferentially towards one school or another, based on temperament, interest, experience, skills, ambitions, et cetera.)
36) People in the Intention School will be very interested in helping the team refine ideas about designs and plans for the product and the project. They'll love participating in planning and grooming sessions. They'll help to point out forgotten customer needs and desires.
37) People in the Intention School will help the developers and managers to remember customer support and documentation people, too. At meetups and in the blogs, they'll talk a lot about building quality in, and designing for testability, and the importance of collaboration.
38) We would all prefer to notice problems in design reviews and grooming, before the product ever gets built, but that's quite reasonably a particular focus and big theme for the Intention School. Cucumber—more importantly, the attendant conversation—is a feature on the menu.
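In Python, the Cucumber-style tool is behave; here is a hypothetical sketch of how the conversation gets captured as steps. The feature text, the coupon rule, and the cart are all invented for illustration:

```python
# steps/checkout_steps.py -- behave step definitions (Cucumber-style, in Python).
# The feature text these steps would serve might read:
#   Given a cart containing one item priced at $40
#   When the customer applies the coupon "SAVE10"
#   Then the cart total is $36
# The steps, coupon rule, and cart are hypothetical, for illustration.
from behave import given, when, then

@given('a cart containing one item priced at ${price:d}')
def step_cart_with_item(context, price):
    context.cart = {"items": [price], "coupon": None}

@when('the customer applies the coupon "{code}"')
def step_apply_coupon(context, code):
    context.cart["coupon"] = code

@then('the cart total is ${expected:d}')
def step_check_total(context, expected):
    total = sum(context.cart["items"])
    if context.cart["coupon"] == "SAVE10":  # the agreed 10% discount
        total *= 0.9
    assert total == expected
```

The check matters less than the conversation that produced the feature text; the code just keeps the conversation honest.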
39) So: the Intention School's focus is on design-focused testing and review that helps to clarify and refine our intentions and desires for a high-value product AND on trying to figure out, mostly for management, how the team could reasonably declare that development was done.
40) The next school of testing we could call the Discipline School. That's going to sound terrible to some people, but it's intended as a compliment. Excellent development work requires a large amount of discipline; it's even called a discipline. Discipline affords good work.
41) Discipline here is about code, in two senses—the product code, and an ethical code; dedication to organized, systematic, clean work. The Discipline School of testing looks for problems caused by lapses in discipline: coding errors, and mismatches between code and intentions.
42) The Discipline School embraces a lot of ideas about testing that come from programmers: careful code review, pairing, automated unit checks, and using a lot of code to check code. That Test Automation Pyramid is very much a Discipline School thing. Contract testing is too.
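As a sketch of the contract-testing idea (real tools like Pact do far more), here's a toy check that a payload honours an agreed shape; the contract and the payload are hypothetical:

```python
# test_contract.py -- a toy contract check: does a payload honour the shape
# the consuming team agreed to? The contract and payload are hypothetical.

CONTRACT = {
    "id": int,       # field name -> type agreed with the consumer team
    "name": str,
    "active": bool,
}

def satisfies_contract(payload: dict) -> bool:
    return all(
        field in payload and isinstance(payload[field], expected_type)
        for field, expected_type in CONTRACT.items()
    )

def test_user_payload_honours_contract():
    # In real life the payload would come from the provider's API.
    payload = {"id": 42, "name": "Jo", "active": True}
    assert satisfies_contract(payload)
```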
43) A tester in the Discipline School will spend a lot of time with the programmers, and a lot of time writing code, developing automated checks and frameworks to enable them. The goal is to confirm that what we're building is reasonably close to what we intended to build.
44) Where the Intention School tester will be seeking and drawing attention to bugs in designs and plans, the Discipline School tester will be intent on bugs in the code. S/he'll be refining and running those Cucumber checks, if the developers aren't doing that already.
45) The Discipline School testers will be talking about frameworks and static analysis tools at parties. At least half of their blog posts will be in a monospaced font, because code. And a cool thing about Discipline School testing is that it doesn't slow down development.
46) An important idea behind Discipline School testing is fast feedback to the developers via relatively shallow but fast testing. Shallow is not an insult! Shallow here means that the problems are close to the surfaces where the developers are working. That's not insignificant.
47) Bugs that are shallow and easily detectable at the time they're created can get buried in the product, turning into deep bugs. Even if those bugs are discovered before they get to the customer, they'll cost investigation, reporting, fixing, and retesting time. Find them now.
48) Discipline School testers might not notice some bugs related to the design or the customer experience. In that case, they may feel bad, but not terrible; it's not their job to find *every* bug; no one can do that. But if there's a bug in their own check code, they'll fume.
49) The Discipline School is culturally closely related to the Preparation School, and lines between the two are pretty blurry, but let's look at it this way: the Preparation School is focused on finding problems by looking at the build and release pipeline: CI and CD.
50) As the Discipline School focuses on problems in the code, the Preparation School focuses on problems in the build. That's important: if you can't build the product quickly and reliably, you can't release it to customers. You can't even produce a build for other testers.
51) The Preparation School knows what Jenkins' first name is. They use instrumentation to check the build, so if the product doesn't have scriptable interfaces and good logging and monitoring capabilities, Preparation School testers will rightfully complain about testability.
52) Preparation School testing anticipates trouble in production, and recovery from it. That's cool because if we can build and redeploy quickly and we KNOW we can do it, we can test and release fixes fast, or we can back out a feature or flip a switch to disable it.
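That switch might be as humble as a feature flag consulted at runtime. A minimal sketch, assuming a flag store (here just a dict; real systems read from a config service so a flip takes effect without a redeploy), with hypothetical flow names:

```python
# flags.py -- a minimal feature-flag kill switch: disable a misbehaving
# feature without redeploying. Flag store and flow names are hypothetical.

FLAGS = {"new_checkout": True}  # ops can flip this to False in production

def is_enabled(flag: str) -> bool:
    return FLAGS.get(flag, False)  # unknown flags default to off

def checkout(cart: list) -> str:
    if is_enabled("new_checkout"):
        return new_checkout_flow(cart)   # the feature under suspicion
    return legacy_checkout_flow(cart)    # the safe fallback

def new_checkout_flow(cart: list) -> str:
    return f"new flow, {len(cart)} items"

def legacy_checkout_flow(cart: list) -> str:
    return f"legacy flow, {len(cart)} items"
```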
53) Notice some points about the three schools mentioned so far. They all have communities and discussions around their core ideas and themes. They all have specialized knowledge about the product and about the development process. They focus on specific families of problems.
54) All of these schools of thought are focused on identifying problems so that the designers and the builders and the ops folk can address them. None of these approaches prevent the problems that they encounter, strictly speaking, since they don't fix those problems; others do.
55) What testers of all stripes do (and it's a worthy thing) is to help the builders to prevent little problems from turning into big problems by identifying problems in intention, in building, and preparation. But after all this, there's still a dragon in the basement.
56) None of these schools is specifically focused on the actual experience of the actual product. They're all focused on *our* intentions, the builders' intentions. They're not focused on what happens when we've got a built product in front of us. And before we release it...
57) It might be a really good idea to TEST it; the whole product, getting experience with it as it's been built, in naturalistic ways. And there's a really good reason for this: problems may elude even our best intentions, our diligent discipline, and our careful preparation.
58) Until a human being begins to interact with the built product, everything about how people will experience *that product* is projection and imagination. That's not a slight, and it's not a dismissal; it's the way things are when things get real. And so we come to...
59) The Realization School. Realization is a pun; it refers to realizing, achieving, our goals to build the product on the one hand; and realizing, recognizing, things about it that we didn't anticipate—bugs, features, and enhancement ideas. And there will probably be some.
60) The Realization School is focused on the search for deeply hidden, subtle, rare, intermittent, or emergent problems that can elude even a highly disciplined development process. This requires experiencing the product, exploring it, and experimenting with it. It's a hard sell.
61) It's a hard sell for a lot of reasons, and I'm realizing a big one: it bangs into the problem that started this thread. More than any of the others, Realization is focused away from the mindset and process of building the product, focusing instead on the experience of it.
62) Because we're dealing with the experience of the product, the Realization School is strongly focused on interacting with the product in direct, unmediated, naturalistic ways. We'll use tools less to check, and more to probe, to stress the product out, to analyze its outputs.
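For instance, a probing tool might fling odd inputs at a function and collect the surprises for a human to investigate, rather than deciding pass or fail on its own. A sketch; parse_age() and its weaknesses are hypothetical:

```python
# probe.py -- a tool that probes rather than checks: vary the input,
# gather surprises, hand them to a human tester to explore.
# parse_age() and its weaknesses are hypothetical, for illustration.

def parse_age(text) -> int:
    return int(text)  # naive: chokes on blanks, words, floats, None...

CANDIDATES = ["42", "", "  7 ", "-3", "0x1A", "ninety", "42.0", None]

surprises = []
for raw in CANDIDATES:
    try:
        result = parse_age(raw)
        if result < 0:                    # legal int, dubious as an age
            surprises.append((raw, f"negative age {result}"))
    except Exception as exc:              # a crash: definitely worth a look
        surprises.append((raw, f"{type(exc).__name__}: {exc}"))

# No verdicts here; just a list of leads for the tester to follow.
for raw, note in surprises:
    print(f"input {raw!r} -> {note}")
```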
63) We never say of someone heroic that she has been checked by experience! To test in the realization frame is to challenge the *actual* product. The goal here is not to accept or confirm that everything is okay, but to find weaknesses that diligent work didn't prevent.
64) In Realization, we are thinking of the product in terms of its problem space more than its solution space. We're looking for ways in which it fails to solve the problem for which it was designed, or introduces new problems that might frustrate customers' needs and desires.
65) And this is *enormously* socially challenging, because after all this time and all this effort, no one wants to hear about problems. They want to go to the release party and talk about Cucumber or REST Assured or Kubernetes! (Hey, I do too.) And it gets worse.
66) Deep testing takes time, effort, skill, preparation, variety, tooling, and determination. It requires us to believe that there are problems that remain hidden in the product, when everyone else around is sure that there are no problems left to find, because we prevented them.
67) All this can be deeply annoying to management. All along, everyone has been delivering the good news, optimistically envisioning success. And now we find a couple of deep, subtle bugs, and the programmers realize it'll take *at least* two more weeks to fix them.
68) The worst news of all, perhaps, is that as a development team, we've been focused on solving the technical problems in building a technical workaround to what is, ultimately, a social problem. Way back in the Intention School's frame, maybe we had an idea about some of that.
69) Now, here's a weird little shift: until we've applied some Realization-School thinking to Intention-School contributions to the design process, it's more likely that there will be deep, subtle, emergent problems in the design. And the same applies the other way.
70) If we aren't informed by some degree of Intention-School thinking at Realization time, our testing is more likely to be poorly designed or poorly focused to some degree. And if we're not questioning and critiquing our deep testing deeply, we risk missing important bugs.
71) In the Realization Frame, we're focused on the relationships between the *actual* (not imagined) product and its developers (like the other frames) AND its users. Again, that requires direct interaction with the product. For ages, many have dismissed this as "manual testing".
72) So, buried in Tweet 72, about 11 years after noting a distinction between testing and checking (http://www.satisfice.com/blog/archives/856), and without consulting with my colleague @jamesmarcusbach on the subject (bad me), I am launching the campaign for a new term: *experiential testing*.
73) (I'm sorry, James; I should have let you know this would happen, except when I started this thread, I didn't realize it would.) Yes, experiential testing. I had an epiphany a couple of years back in a quiet talk with a tester asking for reputational help as a "manual tester".
74) For years the term "manual testing" has annoyed me, because as Cem Kaner has said, testing is NOT something you do with your hands; it's something you do with your MIND. Tools help. Eyes help. Even hands help. But the locus of the test is in the mind and the experience.
76) Experiential testing is not "UX testing" (which is a term that some people use), but they often have a lot in common. We're focused here on the tester obtaining direct experience with the product through exploration and experiment. Contrast this with *instrumented testing*.
77) (We tried "mediated testing", but that took even more explaining than "instrumented testing" will.) Experiential and instrumented testing are contrasts and opposites and polarities, but not *binary*, not mutually exclusive. Instrumentation mediates experience.
78) Experiential testing emphasizes interacting with the product, using the means that people naturally use for the interaction. "UX testing" as I've heard it used probably wouldn't refer to testing of an API, but if you're interacting with it as a programmer would, it's experiential.
79) Another way of summarizing experiential testing, directly albeit a little crudely: TESTING THE DAMNED THING BY USING THE DAMNED THING. So often I use software that makes me ask, "Did anyone try to use this damned thing?" (I should @ itunes on this thread.)
80) A long time ago, I wrote in a notebook, "Use the damned thing." A few years later, James remarked that "much of what passes for testing these days seems to represent an elaborate conspiracy to make sure that no one ever sets eyes on the product." Amen.
81) Using the damned th... I mean experiential testing does NOT preclude or prohibit the use of tools; not in the slightest. Instrumentation can help to probe a problem we encounter, visualize results, or suggest where our experience with a product might be lacking.
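One way instrumentation might suggest where our experience is lacking, sketched with a hypothetical session log and product areas: tally what exploratory sessions have touched, and surface what no human has experienced yet.

```python
# coverage_gaps.py -- instrumentation as stagehand: tally the product areas
# that exploratory sessions have visited, and surface the neglected ones.
# The areas and the session log are hypothetical, for illustration.
from collections import Counter

PRODUCT_AREAS = {"checkout", "search", "profile", "admin", "reports"}

# One entry per session: the areas a tester actually experienced.
SESSION_LOG = [
    {"checkout", "search"},
    {"checkout", "profile"},
    {"search"},
]

visits = Counter(area for session in SESSION_LOG for area in session)
untouched = PRODUCT_AREAS - set(visits)

print("visited:", dict(visits))
print("no human experience yet:", sorted(untouched))
```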
82) In the Realization School the idea is not to avoid instrumentation, but to politely shoo it away from centre stage. That's reasonable. Instrumentation is not even one of the performers; it's the lighting and sound system. It adds to the performance, but it's not the star.
83) Instrumentation, though, does leave a lot out of the mix: the human sensorium of sight and sound and proprioception. The human feelings of frustration and annoyance and impatience and confusion and surprise and occasional glee. That stuff is INFORMATION. We need it.
84) A requirements document is another medium; also instrumentation. (A medium is something between something and something else.) For far too long, testers have been gorging on requirements documents while starving themselves of experience with the product.
85) All right; I would have guessed it was three hours earlier than it actually is. Tweetdeck's layout is burned into my retinas. But this is the sort of thing that it takes to get software testing out of the hole that it's in. And no doubt it's in a hole. Because...
86) Despite the fact that we have wonderful tools, remarkable technologies, lightning-fast processors and more disk space than God himself, we still have software that routinely annoys and frustrates people, never mind causing them loss, harm, pain, death, and diminished value.
87) I'm going to pause this thread to point out that @jamesmarcusbach and I develop these ideas and teach people how to apply them skillfully and rapidly. https://www.developsense.com , http://www.satisfice.com . Classes are coming up. https://rapid-software-testing.com/attending-rst/ And now the finale...
88) We must refocus testing on some fundamental questions: is there a problem here? Are there problems that threaten the value of the product? Are there problems that threaten the value of our work? Of the business? Let's use the damned thing for a bit, and find out. - fin-