Notes From James Bach's Lecture on Testing

Takes about 5 minutes to read.

IT College had the privilege of hosting another public lecture, where an international expert took the stage and shared his knowledge.


James Bach, "The Consulting Software Tester" spoke about what testing really is and gave tips on becoming a professional skeptic.

I'm pleased to say that, in my opinion, Mr. Bach is an excellent speaker who can keep the audience engaged with occasional jokes and frequent real-world examples mixed in.

The topic of testing is a must for any developer, and I'm glad to say I learned something to make my software better, even if it's as simple as including a maxlength attribute on every input field.

Lecture Notes

Knowing the background

You have a simple if / else block, for example:

if (a < 75) {
    light_bulb();
} else {
    sound_alarm();
}

How many test cases would you need to cover all the possible scenarios? The obvious answer would be 2: one where a is smaller than 75 and the other where a is, say, 100.

Wrong. First question: what the hell is a? Is it the number of golf balls or the speed of a vehicle? A tester should always have as much background information as possible, and (s)he should ask additional questions if something is vague. If someone shows you a flowchart of a program, it's a cue to seek out the underlying complexity. A flowchart item saying "Read input" might involve anything from creating a new instance of a class to trying to establish a database connection to a remote server that is prone to failing.
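To make that concrete, here is a minimal sketch of what a single "Read input" box might hide. The function and its details are hypothetical, not from the lecture:

#include <stdio.h>
#include <stdlib.h>

/* A plain "Read input" flowchart box, expanded. Every step below is
   a separate place where the program can fail and needs a test. */
char *read_input(void) {
    char *buffer = malloc(256);               /* allocation can fail */
    if (buffer == NULL)
        return NULL;
    if (fgets(buffer, 256, stdin) == NULL) {  /* stream may be closed or empty */
        free(buffer);
        return NULL;
    }
    /* If the input came from a remote database instead of stdin, add
       timeouts, authentication errors and dropped connections to the
       list of things that can go wrong. */
    return buffer;
}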

It is the tester's job to account for all of that and not take the word of a programmer or even the designer of the system at face value. If you have an electric engine designed for voltages from 90 V to 250 V, you still test it at 80 V, even though you know it will fail, to test for the unwritten specifications (for example, to see whether it fails catastrophically and explodes, quite possibly killing the user). However, there's no point in testing whether a sailboat sails on dry land; that ability is not on the requirements list.

Test coverage

Let's say a (in the code excerpt above) is the RPM of the blades in a coffee mixer. If the speed is smaller than or equal to 75, we want the indicator light to be on, and the alarm should go off when the speed rises above 75.

Notice that I said smaller than or equal to. The code above doesn't reflect the "equal to" case, so the programmer's intention and the actual code differ. That should be one of the tests: what happens when the speed is neither smaller nor greater than 75, but exactly 75? Does the result match the expected value?

The tests should also cover the possibilities of a being a letter, a negative number, or perhaps a null value altogether.
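As a sketch of what those checks might look like, here's one way to probe the boundary. The light_is_on() helper is hypothetical, introduced only to make the snippet above testable:

#include <assert.h>

/* Hypothetical wrapper around the snippet above: returns 1 when
   the indicator light should be on, 0 when the alarm should sound. */
static int light_is_on(int rpm) {
    return rpm < 75;
}

int main(void) {
    assert(light_is_on(74) == 1);   /* just below the boundary: light on */
    assert(light_is_on(75) == 1);   /* the boundary itself: the spec wants the
                                       light on, but the code sounds the alarm;
                                       this check fails and exposes the bug */
    assert(light_is_on(76) == 0);   /* just above the boundary: alarm */
    assert(light_is_on(-1) == 1);   /* negative RPM: the spec doesn't even
                                       say what should happen here */
    return 0;
}

(In C, the int type already rules out a being a letter at runtime; in a loosely typed language, that would be yet another case to cover.)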

So how many test cases would it take to test all of that? What a silly question. A test case could be short or long, complex or simple, or even reuse other test cases. A better question to ask is: "How should I test this?"

Testing offers a constant intellectual challenge. A tester should be creative, think outside the box, and make assumptions about how and where a product might fail. Testers are on a path of continuous self-education because of the variety of products and associated information they have to study.

A "good" tester

"I dropped my phone, I think it might be broken. How do I test it?"

Stop. Don't give any advice when you aren't familiar with the situation. What kind of phone was it? If it was a military one, they're quite durable. If it's an expensive little thing, it's most likely injured. Why did you drop it? Was it to test whether it breaks? What did you drop it on: water, a bed, carpet, a concrete floor? Does testing it require any special equipment?

  • Such questions should come as second nature to a professional skeptic.

There is no "right" way to test something, but there is a practice to avoid: translating specifications and requirements to tests 1:1. A good tester does not copy-paste the documentation to test cases but rather models the system in his/her head to understand it and go beyond it. The piece of paper describing the system doesn't correspond to the actual world and it's the tester who checks the intended and imagined result against what's really happening.

  • Imagine what might happen in the real world
  • Learn new concepts rapidly and put them to use
  • Question everything and be ready to justify your reasoning
  • Focus on parts of the system, but be able to see the bigger picture

The guys who search for bugs are not the enemy of the programmers. On the contrary: a tester should help the programmer by giving feedback and educating the coders so that they write better software (and correct some thought patterns).

Parting thoughts

Feeling out the weak spots of a system is a skill developed over time. It's like detective work, only there are loads of suspects.

Writing tests down (documenting them) is economically dangerous and limits the number and variety of tests run on the product.

Testing cannot be automated. You can check the load of a server, but an automated script can never learn, and thus lacks the creativity, intuition and humanity of a real tester. Sure, automated checks help testers by gathering information, but don't make the mistake of writing a bunch of automated tests instead of hiring a tester.

As a tester, it's your responsibility to learn new things quickly, be excellent at asking questions, and model things in your mind… but you also have to be able to talk about what you did and carry the meaning across without a load of technical jargon. A tester is not the sole person responsible for catching all the bugs; that's for the whole team to do. Any developer should be only too happy to have a tester on the team because, ultimately, the systems that developers build are the ones that go into use and fail.

Highly recommended: watch the recording of the lecture.