
The Manual Testing and Automated Testing Paradox

You can’t have automated testing without manual testing. They are the chicken and egg of the software testing world.

Whether you’ve been in software testing a while or are new to the role, you’ll definitely have come across the terms automated testing and manual testing. They are pitched as complete opposites, with automated testing often labelled the successor to manual testing. You’ll hear people say that automated testing is a superior approach to software testing, and those same people will state that manual testing is dead, or isn’t fit for current approaches to software development. If you are doing automated testing, you don’t need manual testing, or so the claim goes.

But here’s the paradox: automated testing, despite being pitched as the replacement for manual testing, relies on the thinking, exploration, approaches and observations that come from manual testing to be effective. Without them, I can guarantee your automated tests will come up short and provide little to no value. The truth is, they co-exist, not as rivals, but as an infinite loop to gain and confirm knowledge of our systems: the knowledge our teams use to make critical decisions about the software.

Defining the Fundamentals

Before we dive deeper into this relationship, we need to define manual testing and automated testing to provide context for this article.

Manual Testing: A human following a prewritten set of instructions to interact with software and confirm its behaviour against expected outcomes.

Automated Testing: The use of software to automatically execute predefined instructions and compare the results against codified expected outcomes, without human intervention.

They are almost identical, aside from how the prewritten set of instructions, often referred to as a script, is executed.

What about Exploratory Testing? While the rest of this article will focus on manual and automated testing, it would be incomplete without bringing exploratory testing into the conversation. Exploratory testing is often used as a synonym for manual testing, but they aren’t the same; the only real similarity is that both are human-led. When a human is performing exploratory testing there tend to be no instructions, or only minimal ones, and no predefined expected outcomes. The technique relies on the skills of the human and the observations they are making. The human has significantly more freedom to explore, not bound by a prewritten set of instructions and outcomes. There is another paradox hidden in there: many of the bugs found during manual testing are found precisely because the human strayed from the prewritten instructions. This is not surprising, because humans are bad at following instructions; it’s human nature to want to explore, to be curious.

Automation’s Boundaries

Automation, on the other hand, is not curious; it doesn’t explore; it’s deterministic. Sure, AI, and specifically agentic AI, is gaining traction, but it’s still way short of a human tester. For the most part, automated tests do exactly what they are told to do. No more, no less. They are fixed and deterministic, repeating the exact same instructions without deviation.

Or, at least they should.

So, why is it that one of the most talked-about topics in automated testing is test flakiness?

If we’ve provided the tests with the exact steps, and used all the knowledge we have, why are they nondeterministic? Because of the paradox. If the knowledge you codify into your tests is lacking, or even wrong, your tests will mirror and compound that shortcoming. Didn’t realise you were waiting for a specific state on the page? Neither will your tests. Didn’t realise a page could load in a different order? Neither will your tests. Didn’t realise one of your APIs or components is inconsistent? Neither will your tests. Test flakiness comes from a lack of system and tool knowledge, and both stem from a lack of exploration. Your automated tests are only as good as the engineer who wrote them.
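To make that concrete, here is a minimal sketch in TypeScript using Playwright. The page, selectors and values are hypothetical, invented for illustration. The first test codifies an incomplete observation, assuming the total updates the instant the button is clicked; the second codifies what exploration would reveal, that the total updates after an asynchronous request, so the assertion must wait for that state.

import { test, expect } from '@playwright/test';

// Flaky: codifies the assumption that the total renders instantly.
// If the page updates after an async request, this read races it
// and the test fails intermittently.
test('discounted total, flaky version', async ({ page }) => {
  await page.goto('https://example.com/checkout'); // hypothetical page
  await page.click('#apply-discount');
  const total = await page.locator('#order-total').textContent(); // reads immediately
  expect(total).toBe('£90.00');
});

// Deterministic: codifies the observed behaviour that the total
// updates asynchronously, using a web-first assertion that retries
// until the expected state appears or a timeout is reached.
test('discounted total, deterministic version', async ({ page }) => {
  await page.goto('https://example.com/checkout');
  await page.click('#apply-discount');
  await expect(page.locator('#order-total')).toHaveText('£90.00');
});

The difference between the two versions is not coding skill; it’s the knowledge, gained by exploring, that the total arrives late.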

Manual Side of Automation

A starting point for an automated test is thinking about the steps, the test case, the instructions, or the actions that the automated test needs to take: the actions that trigger the behaviour we want to test. What data is going to be needed, and where does that data come from? Another starting point could be to understand the requirements, to explore the implementation notes, and to observe the behaviour we are trying to automate. Then comes the all-important, often final, step: thinking about how we are going to assert this behaviour. What is the expected result? What does good, or working, look like? All of this is work that needs to be done to increase our knowledge of the system. In other words, it’s doing some manual testing.

We wouldn’t call it that, but that doesn’t mean the relationship isn’t there. While the order might not be identical to manual testing, all the steps, techniques and skills are there. The only real difference is that the output is not a test case for a human to follow, but a script a computer can follow. However, for that script to be valuable, this work is essential.
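As a sketch of what that output can look like, here is a hypothetical Playwright test in TypeScript; the URL, selectors, data and expected text are all invented for illustration. The comments map each block back to the manual work described above; none of these lines could be written confidently without it.

import { test, expect } from '@playwright/test';

test('registered user can log in', async ({ page }) => {
  // Test data: exploring the system told us where accounts come from
  // (a hypothetical seeded user here).
  const user = { email: 'tester@example.com', password: 'S3cret!' };

  // Steps / actions: the instructions we first walked through manually.
  await page.goto('https://example.com/login');
  await page.fill('#email', user.email);
  await page.fill('#password', user.password);
  await page.click('button[type=submit]');

  // Expected outcome: what "working" looked like when we observed it by hand.
  await expect(page.locator('h1')).toHaveText('Welcome back');
});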

I know this to be true because I do it myself, and I’ve observed hundreds of engineers do it too. We manually trigger the behaviour under test, often in its entirety, to get a feel for what we are automating. Then we manually complete the flow as we build up the code, just like you would when writing the steps in a test case. This is arguably the most critical part of writing automated tests, because it is the exact point where our observations are codified; a strong, observant testing mindset here will ensure the code is accurate, repeatable and exercising the desired behaviour, especially when the test drives the user interface.

Wider Skills of Automating

When we create an automated test, we’re engaging in manual testing, test design, exploration, coding, and execution, all at once. The order is fluid and continuous, but they are all there.

To maximise the value from our automation efforts we need to ensure our engineers, or the team as a collective, are proficient in all of these skills. A shortcoming in any of them will diminish the value of your automation. You may end up in the infamous red-green infinite loop, where your engineers are just fixing flaky tests all day. Or perhaps the illusion of green, where your tests never fail but you still have high-severity bugs in production. Or trust in the tests goes completely, and they fall into disrepair. I’ve seen them all, which is why I’m more convinced than ever that the real skill in automation is not in the code or the tool; it’s in the testing thinking that goes into the test. Yet that isn’t what we focus on as an industry: we fixate on the code and the tools, and not on the problems we are trying to solve with them.
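The illusion of green usually looks harmless in the code itself. A hypothetical example, again in Playwright with invented selectors: the test below goes through the motions but asserts almost nothing, so it stays green even when the feature is broken.

import { test, expect } from '@playwright/test';

// Illusion of green: the flow runs, but the only assertion checks the URL,
// so this passes even if the discount is never applied.
test('apply discount', async ({ page }) => {
  await page.goto('https://example.com/checkout'); // hypothetical page
  await page.click('#apply-discount');
  expect(page.url()).toContain('checkout'); // true before and after the click
});

No amount of coding or tooling expertise catches this; only testing thinking, knowing what the expected result actually is, does.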

The Infinite Loop

Automation, exploration and manual testing are an infinite loop; they support and rely on each other. Want to design a high-value automated test? You need to know what you are automating: manually test the system, explore it, learn about it. Want a maintainable, deterministic test? You need to code it well, follow good practices, and implement patterns. Want to identify why a test is failing? You need to manually test and explore the system to see whether the test is wrong or a bug has been found. Want to understand more about why it went wrong? You need to explore the system manually. Then, with all that knowledge, you can improve your tests, and even delete some that are no longer fit for purpose. It’s a continuous loop of checking, confirming, learning, reporting and improving.

The future of automated testing, especially in an AI landscape, is to focus on testing skills. Improve the thinking and observations that go into a test, and their value will compound. You’ll have tests you trust, tests you understand, and tests that are going to help your team make crucial decisions. Manual testing is not dead; it’s not automated versus manual, and it’s not all about exploratory testing. These are just umbrella terms. It’s about the skills under those terms: test design, oracles, risk identification, system modelling, curiosity, technical skills, coding, code design and many more.

Conclusion

If your automation engineers aren’t actively practising these skills, or aren’t even aware they have them, it’s time to invest in them, because without them, it’s garbage in, garbage out. If your manual or exploratory testers feel like automation is a leap too far, inform them they already have all the foundations, and their foundations are probably stronger than most automation engineers’. Or even better, since mastering all these skills is asking a lot of one person, combine the people on your team and encourage collaboration, because, after all, testing and quality are the whole team’s responsibility.

About Automated Testing series

You're reading the first part of our Automated Testing series, a collaboration with Richard Bradshaw. The series will explore many facets of test automation and its implementations.

The next part, coming out on 17 June 2025, will explore different types of UI automated tests and their best use cases. Stay tuned!
