Testing schools sparked intense debate in the past. Some swore by rigorous processes, others by flexibility. But nearly 20 years later, why revisit them?
For me, they weren’t just abstract concepts—they were a way to contrast and compare the different testing approaches I encountered throughout my career. This blog explores how these schools shaped my journey and why they still hold value today.
A Whistle-Stop Tour of My Career in Testing
1. College: Formal Methods
While many enter testing from diverse backgrounds, my own path followed a more traditional computer science route: a degree and a research master’s in computer science, focusing on software development for computer networking and telecoms. These studies emphasized analytical topics like code coverage, formal methods, and various testing models—perfect preparation for my first roles.
2. Telecoms: Process-Driven, Analytical Testing
For five of my first seven years in testing, I worked in telecoms, where specifications were highly detailed, testing followed strict processes, and industry standards dictated much of our work. Everything was technical—performance testing network and voice drivers, verifying cryptographic frameworks—but there was no UI, only terminal-based testing.
Testing was done separately from development, acting as a “policing” function. A QA team then “policed” the testers to ensure process adherence. While the work was rewarding, it had drawbacks: release cycles stretched over 18 months, and the rigid structure left little room for flexibility. I wanted something that bridged technology and business more closely, so I moved into financial services.
3. Financial Industry: Metrics, Test Cases, and Automation
Switching to financial services (insurance, banking, hedge funds, retail) was a stark contrast to telecoms. These industries were highly regulated, but projects often followed a client/vendor model.
For the first time, I encountered test cases—lots of them. Projects were driven by two key metrics: bugs found and test cases executed. There was an obsession with running massive regression test suites, leading to an equally strong obsession with automating them.
Because of the structured nature of testing in finance, I pursued ISTQB certification, which, in hindsight, provided an interesting perspective. Around this time, I also completed a postgraduate degree in business, diving into topics like e-commerce and digital marketing—knowledge I was eager to apply.
4. Gaming & E-Commerce: Context-Driven, Exploratory Testing
Next came an opportunity that completely changed my approach to testing: working in an e-commerce team at a gaming company. This shift was profound.
•The work was still technical but no longer analytical in the same way as telecoms.
•There were no detailed test cases—just user stories.
•I was embedded in a development team for the first time, pairing with developers to find and fix issues on the spot.
•Testing wasn’t about following a predefined process; it was about figuring out what testing made sense for each feature.
This context-driven approach was a revelation. It led me to the Rapid Software Testing course and sparked my deep interest in exploratory testing.
When that role ended, I found myself reflecting on my career: Which of these testing approaches was the “right” way?
The Four Testing Schools: A Useful Framework
At this point, I started engaging with the broader testing community and discovered a slide deck by Bret Pettichord that outlined four schools of software testing. Though discussions about testing schools had faded, the definitions resonated with my experiences.
1.Analytical School – Testing driven by analytical methods, precision in specifications, and modeling.
•My experience: Formal methods in College.
2.QA/Control School – Emphasizes process enforcement, compliance, and industry standards.
•My experience: Telecoms had a heavy focus on standards and process compliance.
3.Factory School – Focuses on efficiency, repeatability, and automation, often reducing testing to scripted tasks.
•My experience: Financial services, with its emphasis on test cases and automation, aligned with this school.
4.Context-Driven School – Adapts testing to the project’s unique circumstances, relying on collaboration and exploration.
•My experience: The gaming company exemplified this approach.
At the time, the context-driven school resonated with me the most. It enabled pragmatic testing, higher quality, and faster releases. But was it truly the “right” way?
Blending the Best of Each School
Today, my role aligns closely with the context-driven approach. But that doesn’t mean I reject the others. My past experiences gave me a deep respect for the best aspects of each:
•High-risk or high-availability features? The analytical approach helps ensure thorough validation.
•Need measurable project insights? The factory school’s focus on metrics can provide useful data.
•Keeping teams aligned? Some QA/control processes are necessary to maintain structure.
While I haven’t worked in telecoms or finance in years, testers from those industries often tell me they want to incorporate context-driven ideas. Change, however, takes time—sometimes decades.
The real lesson? There’s no universal “right way” to test. Instead of rigidly following one school, the best approach is to understand them all and apply what fits the context.
There are at least two other schools of thought today: one Bret discussed in a more recent set of slides, and another that is only now emerging.
1. Agile School of Testing — Very roughly, you tend to have the customer representative tell you what to expect, rather than using exploration techniques to discover new information. New “Features” and “Design” issues are things your customer decides on, so bug advocacy is primarily about appealing to that stakeholder. Testing becomes asking questions about completed stories rather than about how the system works as a whole.
2. Devops School of Testing — The customer does the testing, and if something goes wrong, you roll back automatically. New “Features” and “Design” issues are uncovered by statistical analysis. Existing customers decide which bugs should be fixed, because customers who don’t experience a bug don’t care that it exists. Testing becomes asking questions of data rather than questions of software.
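To make "asking questions of data" concrete, here is a minimal toy sketch of the idea: judging a release by its production error rate rather than by pre-release test cases. The function names, log format, and the 1% rollback threshold are all illustrative assumptions, not a real monitoring system.

```python
# Toy sketch of the DevOps-school idea: the "test" is a question
# asked of production data, not of the software itself.
# All names and the 1% threshold are illustrative assumptions.

def error_rate(requests):
    """Fraction of logged requests that failed (HTTP status >= 500)."""
    if not requests:
        return 0.0
    failures = sum(1 for r in requests if r["status"] >= 500)
    return failures / len(requests)

def should_roll_back(requests, threshold=0.01):
    """Trigger an automatic rollback if the error rate exceeds the threshold."""
    return error_rate(requests) > threshold

# Simulated production log: mostly successes, a few server errors.
log = [{"status": 200}] * 97 + [{"status": 500}] * 3

print(error_rate(log))        # 0.03
print(should_roll_back(log))  # True: 3% exceeds the 1% threshold
```

In this school, the equivalent of a bug report is the threshold breach itself: nobody scripted a test for the failing feature, the aggregate behaviour of real customers surfaced it.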
The question of right or wrong depends on what you believe quality means and what you believe generates quality. The idea of “value for people who matter” changes based upon who matters. Consider:
•Is every customer’s experience measurably important, so that stats tell you the real story?
•Is the one person who pretends to be all customers the most important?
•Is viewing the software from many non-customer perspectives (like what marketing says) useful?
•What do you think about things you cannot measure, or can only measure by proxy?
•Do you believe your boss’s smile is most important, and it doesn’t matter whether the product adds value to customers?
•Do you believe tacit knowledge is important, or that all testing work can be made explicit?
•Do you believe testing can be reduced to mathematical proofs and is thus just a question of logic?
•Do you believe testing is about finding deviations from process, and that if the process were followed, far fewer bugs would exist?
Each question guides you towards, or away from, different schools of thought.
I don’t think the context-driven school is just “it depends” on each answer; rather, it is a specific set of views about generating context, while being less interested in some of the other questions. For example, non-customer data is a must for generating context, so for context-driven testers the answer there is a firm yes rather than “it depends”. On the other hand, I also think some people are flexibly minded and can live in a world that does not agree with their preferred school.
What you believe about the world informs you of what school you find most useful and thus what school you belong to.