Why You're Not Writing UI Tests

It's Friday afternoon.

Your tasks for the sprint are complete, except for one.

You've finally convinced your PM that giving you a little free time to look into automated UI tests could really reduce bugs.

[Image: Taskboard with a single task in the to-do column: 'Spike: UI testing functionality']

And as you sit there, looking at the single item on your task board, there's a gnawing feeling inside you about the whole thing.

You have test anxiety.

You're excited to have the chance to try out automated testing, something you feel has a lot of benefit, but you're terrified you're going to waste this opportunity.

What if you spend all this time setting up your tests, but in the end nothing really comes of it?

There are so many ways it can go wrong:

  • The tests might be too fragile, breaking with every little change, to the point that you just stop running them.
  • Even if they're reliable, they may not test anything useful.
  • Or, even after you figure out how to make them reliable and useful, your teammates may never adopt them, leaving you in charge of every update.
  • Maybe the functionality you're testing is scrapped, rendering your new tests useless.
  • And just because you've been given the time today doesn't mean you'll have the time tomorrow to do the proper maintenance on the test suite.

That's a lot of ways for this testing train to derail.

No wonder you're feeling anxiety about all of this.

I know from personal experience how debilitating these fears can be, and that they aren't irrational to have.

It takes a lot of work to get a good automated test suite built out. It's not something you can do in the course of an afternoon.

Plus, it's really easy to take the wrong path, chasing endless trails to solve a problem that isn't all that important in the long run.

To have any hope of avoiding all of this, the test anxiety and the wasted time, it's important to define what a "good test" is:

  1. Easy to write
    Tests are only valuable when they save you time in the long run. If it takes two weeks to write a single test, you're unlikely to see that time back.
  2. Easy to run
    The chances of you being successful with tests are directly related to how integrated they are into your development process. If running your tests requires a bunch of extra work, you and your team will quickly lose the willpower to keep running them. Ideally, tests take no effort to run because they're fully integrated with your development environment (see the sketch below).
  3. Highly Visible
    Make sure you know when and why tests fail. It does no good to spend time on tests if you ignore the results.
  4. Useful
    A test can do a lot of things, but if it doesn't actually help, what's the point? Good tests catch bugs, reduce manual testing effort, and improve your overall test coverage.

Those are my criteria for good tests.
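
To make criteria #2 and #3 a little more concrete, here's a rough sketch of what that integration could look like. It assumes a WebdriverIO + Mocha setup; the paths, baseUrl, and browser choice are placeholders, not a recommendation.

    // wdio.conf.ts — a minimal sketch, assuming a WebdriverIO + Mocha setup.
    // Every path and value below is a placeholder for illustration.
    export const config: WebdriverIO.Config = {
        runner: 'local',
        specs: ['./test/specs/**/*.ts'],
        capabilities: [{ browserName: 'chrome' }],
        baseUrl: 'http://localhost:3000',    // criterion #2: tests point at the same local dev server you already run
        framework: 'mocha',
        reporters: ['spec'],                 // criterion #3: every run says exactly which test failed and why
    }

With something like this in place, running the whole suite is a single command (npx wdio run wdio.conf.ts), which is about as "easy to run" as it gets.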

With this list, we can now look test anxiety square in the eye and tell it to shut the hell up.

Here's what we'd say:

I'm limiting my optimism

I'm confident these tests will help, but I also know that trying to test everything is a poor plan.

I'm going to try writing automation, and I'm going to run into issues that just aren't worth the cost of fixing.

I'm going to be okay with burning some of that effort in order to avoid endless rabbit holes of debugging.

I'm writing one test

To start my test automation effort, I have to think "vertically".

[Image: Depiction of all the parts of a test architecture, including the build system and test reporting]

If I started off by writing a dozen tests, they would all fail criteria #2 and #3, increasing the chance they'd become useless.

Instead, I'm going to zero in on a single test, write it, then fully integrate it into my system before writing any more.

This way I know that one test meets all the criteria of what a "good test" is.

Only then will I get back to writing more tests, which now automatically come with those core "good test" features built in.
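
In practice, that one vertical slice could be as small as the sketch below. It leans on the hypothetical config above, and the route and the 'nav' selector are made up.

    // test/specs/smoke.spec.ts — the one test, written and wired in before any others.
    // The route and selector are hypothetical; swap in whatever your app actually renders.
    describe('home page', () => {
        it('loads and shows the main navigation', async () => {
            await browser.url('/')                      // resolved against baseUrl from the config
            await expect($('nav')).toBeDisplayed()      // surfaces in the reporter the moment it breaks
        })
    })

Once this single spec runs green from one command and its failures show up where the team will actually see them, every later spec inherits that plumbing for free.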

I'm skeptical about what tests are worth it

Since I know that I won't be able to test everything, I also know that I need to be selective with what tests I write.

It may be easy to write a test for a simple static page, but that functionality is unlikely to break and is usually non-critical.

Instead, I'll focus on the "money-maker" pages: the user flows that keep the business alive and that would cause major pain (for both the user and the company) if broken.

I'll also balance the gain of a test against the pain of writing it.

I'm not saying I won't make exceptions for important pages, but if a page has a lot of dynamic elements that will require weeks of work, I'm going to weigh that cost against how important that page really is.
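
When a flow does clear that bar, the test itself might look roughly like this hypothetical checkout spec. It assumes the same WebdriverIO setup as above, and every route, selector, and credential in it is invented for illustration.

    // test/specs/checkout.spec.ts — a sketch of a "money-maker" flow test.
    // All selectors, routes, and credentials below are placeholders.
    describe('checkout flow', () => {
        it('lets a signed-in user complete a purchase', async () => {
            await browser.url('/login')
            await $('#email').setValue('shopper@example.com')
            await $('#password').setValue('not-a-real-password')
            await $('button[type="submit"]').click()

            await browser.url('/cart')
            await $('button=Checkout').click()          // WebdriverIO text selector: a <button> whose text is "Checkout"

            await expect($('.order-confirmation')).toBeDisplayed()
        })
    })

If a flow like this breaks, someone is losing money, which is exactly the kind of failure worth paying a maintenance cost to catch.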

I'm not writing automation for unreleased functionality

While test-driven development may work well for unit tests, it's a lot more difficult for UI tests.

I understand that writing UI tests too early can lead to a lot of rework, as these new pages will likely undergo various changes in the first few months of life.

Instead, I will think about how the new functionality impacts existing pages, and shore up my tests for those existing pages to ensure the new functionality doesn't interfere with them.

Then, once the new flows have settled out, I can go back and write my tests, confident they will stay.

I won't forget the side-benefits

With all these promises made, I know that there are still no guarantees in life. Technology may fail me, and the support that I had planned on just may not be there.

But I'm reassured in knowing that automated tests have several benefits outside of the automation itself:

They clarify requirements

In order to write an effective test, you have to know your expected inputs and outputs.

Sadly, many projects can go months into development without anyone asking these questions.

Mock-ups are made and code is written, yet no one has asked the basic question: "How will the user get to what they need?"

By writing tests, you have to clarify exactly what you're expecting your user to do. When this happens, you may discover that your expected interaction path is actually a poor experience, or that you've never defined an expected interaction path at all.

Tests expose assumptions

Along that line of thought, tests require you to be straightforward. For a UI test to be successful, you must have a definitive list of instructions for the computer to run.

Because of this, assumptions that the team has made in how the software works are often exposed via automated tests.

Many assumptions are made from an accessibility standpoint. We unknowingly assume a user can visually see a website, forgetting that at least one out of every five people has a physical impairment.

It's easy to define a flow based on how you would use the software, but how an actual user does it can be entirely different.

By testing the code outside of the human operating system, you're able to step away from your internal biases and preferences and experience the system from a different viewpoint.

You're simply testing the code more

No matter what, you're testing the website.

By running your automation, you're adding to the amount of time the code has been interrogated, giving your team one more chance to catch that sneaky bug that only appears every 15 runs.

And even if you don't get your automation running, you're still inspecting the code to figure out how you want your test to run. Even doing this can expose hidden flaws or minor bugs that would go unnoticed otherwise.

It's certainly not the most efficient, but it can still be effective.

You've got this

The key to all of this is remembering the core values of a good test.

They are:

  1. Easy to write
  2. Easy to run
  3. Highly Visible
  4. Useful

This means you'll need to spend time thinking instead of doing. Thinking about what's important to test and how the tests are going to operate in the wild.

It may feel wasteful, but this contemplation can be the key to getting a good start at testing.

If you're suffering from test anxiety, understand the fears behind it and build confidence that you can avoid them.

Talk with your doctor about how my WebdriverIO course can help you. Automated testing is not recommended for people who suffer from unnecessary CAPTCHAs or micromanaging bosses.