Schools of testing were widely discussed in the past. They often caused debate, so why would anyone write about them nearly 20 years later?
Personally, I found them incredibly useful for comparing and contrasting the testing approaches I had encountered over my career. This blog focuses on that personal value of the testing schools and, hopefully, avoids any contentious points.
Whistle-stop tour of my career
While many enter testing from diverse backgrounds, my own path was a more traditional computer science route: a degree and a research master's, both focused on software development for computer networking, with an emphasis on creating software for the telecoms industry. These courses covered analytical topics such as code coverage metrics, formal methods, and various modeling and testing techniques.
Five of my first seven years in testing were spent testing software in the telecoms industry. An academic background was perfectly suited to these roles: typically there were highly detailed specifications, stringent testing processes, and standards to follow. These roles were definitely at the technical end of the testing spectrum, covering performance testing of network and voice drivers and verification of cryptographic frameworks. There was never a UI, so I was always working on the terminal. Not only was testing done in a separate team to “police” the devs, there was also a QA team to “police” the test team and ensure the correct processes were being followed. Whilst the work in telecoms was rewarding, it had some drawbacks: it takes a long time to get from idea to delivery, often over 18 months, and my passion lay in the intersection between business and tech.
This led to a switch to the financial industry – working on projects in sectors such as insurance, banking, hedge funds and retail. This was incredibly different to the telecoms industry. These industries are highly regulated and projects are typically of a client/vendor nature. There were detailed requirements, though perhaps not as detailed as in telecoms. The testing was often less technical, and for the first time in my career I had to write and execute test cases. Testing was still conducted by separate teams of testers. Projects seemed to be driven by two key metrics: bugs and test cases. There seemed to be an obsession with manually running huge suites of regression tests, which in turn became an obsession with automating those test cases. Due to the nature of the work it seemed to make sense to do the ISTQB certification, which in hindsight was incredibly prescriptive. During this time I studied for a postgrad in business and was eager to use newly gained knowledge around topics such as e-commerce and digital marketing.
Next came an interesting opportunity: the chance to work in the e-commerce team of a gaming company focused on web technologies. The work was technical, but not analytical in the way college and telecoms had been. The software was of a high quality, yet there was no use of test cases. The specifications were not as detailed as in telecoms or past financial roles, often just a few user stories. There was a huge emphasis on communication and collaboration. For the first time in my career I was embedded in teams with developers. We would often pair, finding and then immediately fixing issues. Everything was context based: we decided what testing was required based on the feature we were working on. This set me on the path to the Rapid Software Testing course and an interest in exploratory testing.
Sadly, this great opportunity came to an end. It left me wondering, across the four phases of this path: which was the “right” way to test?
Old school testing schools
It was around this time I finally began to engage with the testing community. Other testers seemed to have worked in only one of these four “phases” – for example, only in finance or only in telecoms. In debates, each believed their approach to testing was the only “right” way.
An interest in context-driven testing led me to stumble across a slide deck by Bret Pettichord that mentioned schools of testing. Nobody seemed to discuss the topic anymore, and any debates appeared to be in the past.
Still, the definitions of the four schools resonated with me.
1. “Analytical school: emphasis on analytical methods for assessing the quality of the software, including improvement of testability by improved precision of specifications and many types of modeling.”
This sounded like the experience in college and telecoms.
2. “QA/Control school: emphasis on standards and processes that enforce or rely heavily on standards.”
This sounded like the process from the telecoms industry.
3. “Factory school: emphasis on reduction of testing tasks to routines that can be automated or delegated to cheap labor.”
This sounded like the experience in the financial sector.
4. “Context-driven school: emphasis on adapting to the circumstances under which the product is developed and used.”
This sounded like the positive experience with the gaming company.
Which one was the right way?
The context-driven school resonated with me most. In hindsight, perhaps this was because I associated it with enabling higher quality and more frequent releases in a more pragmatic manner than some of the other approaches.
Today I still use this school of thought; my current role is in a similar context to that of the gaming company. It is not as analytical as academia, not as regulated as the financial industry, has no client/vendor dynamic, and does not require the heavyweight processes of the telecoms industry.
Past experience of the other schools brings respect for their best aspects, and a more fluid way of thinking about these approaches:
- High-risk or high-availability features need a more analytical approach.
- Projects need some key metrics; the factory school approach may be how many people are first introduced to project metrics.
- Some process is required to keep everyone aligned; the control school was a good learning ground for the topic of process.
Finally, it has been a long time since I worked in the telecoms or financial industries, so I cannot personally say whether the context-driven school would work in those contexts. After I speak at conferences and meetups, testers from the financial industry often approach me. They would like to incorporate some of these ideas, but change takes time. Sometimes as long as 20 years.
JCD says
There are at least two other schools of thought today: one Bret discussed in a more recent set of slides, and another that is now becoming a thing.
1. Agile School of Testing – Very roughly, you will tend towards having the customer representative tell you what to expect, rather than using exploration techniques to discover new information. New “Features” and “Design” issues are something your customer decides on, so bug advocacy is primarily about appealing to that stakeholder. Testing becomes asking questions about completed stories rather than about how the system works as a whole.
2. DevOps School of Testing – The customer does the testing, and if something goes wrong, you fail back automatically. New “Features” and “Design” issues are uncovered by statistical analysis. Existing customers decide which bugs should be fixed, because customers who don’t experience a bug don’t care that it exists. Testing becomes asking questions of data rather than questions of software.
The question of right or wrong depends on what you believe quality means and what you believe generates quality. The idea of “value for people who matter” changes based upon who matters. Is every customer’s experience measurably important, and thus do the stats tell you the real story? Is the one person who pretends to be all customers the most important? Is viewing the software from many non-customer perspectives (like what marketing says) useful? What do you think about things you cannot measure or only have proxy measures for? Do you believe your boss’s smile is most important, and that it doesn’t matter whether the product adds value to customers? Do you believe tacit knowledge is important, or that all testing work can be made explicit? Do you believe testing can be made into math proofs and is thus just a question of logic? Do you believe that testing is about finding deviations from process, and that if the process were followed, many fewer bugs would exist? Each question helps guide you towards or away from different schools of thought.
I don’t think Context-Driven is just “it depends” for each answer, but rather a specific set of views about generating context, while being less interested in other questions. For example, non-customer data is a must for generating context, so the answer is yes for context-driven testers rather than “it depends”. On the other hand, I also think some people are flexibly minded and can live in a world that does not agree with their preferred school.
What you believe about the world informs which school you find most useful, and thus which school you belong to.