After years of working in test automation across different businesses, I have noticed people running into the same issues over and over again. I am collecting the solutions I have found here, hoping to save someone some headaches.

# Conceptual understanding / Strategic layer

# Be clear on what you are trying to achieve

Before you jump into automating scenarios, make sure you have a strategy. Ask yourself what makes sense to automate. You need to come up with (seemingly obvious) answers to questions like:

  • "What are the key flows of my application/website?"
  • "Which functionality has the highest chance of breaking? What has broken in the past?"
  • "How are users accessing my application/website? Which browsers and OSs are they using?"

Once you know, prioritise and start writing automated scripts for the most valuable flows first.
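To make this concrete, here is a minimal sketch (in Python, with made-up flows and weights) of turning those answers into an ordering: score each flow by business value and likelihood of breaking, and automate from the top of the list down.

```python
# A minimal sketch of risk-based prioritisation: score each flow by
# business value and likelihood of breaking, then automate the
# highest-scoring flows first. The flows and numbers are invented
# for illustration.

flows = [
    # (flow name, business value 1-5, likelihood of breaking 1-5)
    ("checkout", 5, 4),
    ("login", 5, 2),
    ("search", 3, 3),
    ("newsletter signup", 1, 2),
]

def score(flow):
    _, value, breakage_risk = flow
    return value * breakage_risk

for name, value, risk in sorted(flows, key=score, reverse=True):
    print(f"{name}: value={value}, risk={risk}, score={value * risk}")
```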

# Test automation != machines performing manual tests

Understand that the purpose of automation is not to have a machine do manual testing for you. With few, limited exceptions, automated tests nowadays still check only what you explicitly ask them to check. This does not make them better or worse, but rather makes them a tool for a different set of tasks, with regression testing being the chief one. Get familiar with just how much narrower the scope of a single automated test is, or should be, and avoid writing long automated tests that try to do everything.

# Automated testing is part of your delivery process - know where it fits

Know how you will be running your tests and let that inform your decisions. Will you be running them on a schedule? Will they be triggered as part of a CI/CD pipeline every time a developer commits changes? What needs to happen when they fail/pass?
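As a rough sketch of what "what needs to happen when they fail/pass" can look like, here is a small Python wrapper that a pipeline or scheduler could invoke: it runs the suite, notifies the team, and propagates the exit code so the pipeline can decide whether to block. The webhook URL and the pytest command are assumptions for the example.

```python
# A minimal sketch of acting on pass/fail: run the suite, notify the team,
# and set the exit code so a CI/CD pipeline (scheduled or triggered on every
# commit) can block or proceed. The webhook URL is hypothetical.

import json
import subprocess
import sys
import urllib.request

WEBHOOK_URL = "https://chat.example.com/hooks/test-results"  # hypothetical

def notify(message: str) -> None:
    data = json.dumps({"text": message}).encode("utf-8")
    req = urllib.request.Request(
        WEBHOOK_URL, data=data, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req)

if __name__ == "__main__":
    result = subprocess.run(["pytest", "tests/", "-q"])
    if result.returncode == 0:
        notify("UI test suite passed.")
    else:
        notify("UI test suite FAILED - see the pipeline logs.")
    sys.exit(result.returncode)  # let the pipeline decide what happens next
```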

# Do not aim to have an automated test for everything

Do not fall into the trap of "automating everything." Automate what is valuable. Every test you write is a test you need to maintain, and if the target app is changing rapidly enough, this, combined with the need to keep tests reliable, will take you to maintenance hell. Maintenance hell is a bad place and you do not want to go there. Having a small set of stable, reliable and meaningful tests is always better than having a large number of flaky tests.

# Implementational understanding / Tactical layer

# Keep visibility high for stakeholders

Who stands to gain from knowing whether your tests are passing or failing? How should they be alerted? A couple of color-coded dashboards around the office (or available online to the whole team) can be way more useful than most people think.
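A dashboard does not need to be fancy. The sketch below assumes the suite writes a JUnit XML report (for example via `pytest --junitxml=results.xml`) and turns it into a single red/green HTML page you can put on a screen; the file names are assumptions for the example.

```python
# A minimal sketch of a color-coded status page: parse a JUnit XML report
# and write a single red/green HTML page that can be shown on an office
# screen or shared with the team.

import xml.etree.ElementTree as ET

def build_dashboard(report_path: str = "results.xml",
                    output_path: str = "status.html") -> None:
    root = ET.parse(report_path).getroot()

    total = failed = 0
    for suite in root.iter("testsuite"):
        total += int(suite.get("tests", 0))
        failed += int(suite.get("failures", 0)) + int(suite.get("errors", 0))

    color = "green" if failed == 0 else "red"
    html = (
        f"<html><body style='background:{color};color:white;"
        f"font-size:4em;text-align:center'>"
        f"<p>{total - failed}/{total} tests passing</p></body></html>"
    )
    with open(output_path, "w") as f:
        f.write(html)

if __name__ == "__main__":
    build_dashboard()
```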

# Cultivate trust in your automation framework

UI tests can be flaky (unreliable, producing false positives). If they get too flaky, whoever is supposed to interpret the results will slowly lose confidence in their value. In order to keep moving and make progress towards their goals, people will start ignoring failed tests, and this will eventually result in actual bugs not being caught in time. Be careful, as this is one of the main reasons why test automation projects fail in the long run. Quarantine tests that produce too many false positives; try to fix them, but discard them if you cannot. It is better to have a few reliable tests than a lot of unreliable ones.
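One lightweight way to do this with pytest is a custom quarantine marker; the marker name and the tests below are just an example.

```python
# A minimal sketch of quarantining with a custom pytest marker. Tests that
# have proven flaky are tagged and excluded from the main run, so the signal
# the team sees stays trustworthy while the flaky tests are fixed (or
# eventually deleted). Test names are invented for the example.

import pytest

@pytest.mark.quarantine
def test_drag_and_drop_widget():
    # Known to fail intermittently; under investigation.
    ...

def test_login_with_valid_credentials():
    # Stable test, always part of the trusted suite.
    ...
```

The main, trusted suite then runs with `pytest -m "not quarantine"`, and the quarantined tests can be run separately with `pytest -m quarantine` while they are being investigated; registering the marker in `pytest.ini` keeps pytest from warning about it.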

# Build tests for speed and parallelisation

There is huge value in being able to run a reliable test suite very quickly. It enables higher quality and faster delivery, which together become a massive competitive advantage. Keep tests short and independent. This enables parallelisation and helps a ton with maintenance.
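Here is a minimal sketch of what "short and independent" looks like in practice with pytest: each test gets its own fresh fixture and asserts one thing, so no test depends on another's state or ordering. The fixture and test bodies are placeholders.

```python
# A minimal sketch of short, independent tests. Each test gets its own fresh
# fixture (its own data, its own session) and checks one thing, so tests
# never rely on each other's state or execution order.

import pytest

@pytest.fixture
def fresh_user():
    # Create an isolated test user (and clean up afterwards) so that no two
    # tests share state.
    user = {"name": "test-user", "cart": []}
    yield user
    user["cart"].clear()

def test_add_item_to_cart(fresh_user):
    fresh_user["cart"].append("book")
    assert fresh_user["cart"] == ["book"]

def test_new_user_has_empty_cart(fresh_user):
    assert fresh_user["cart"] == []
```

With tests like these, a plugin such as pytest-xdist can fan them out across workers (`pytest -n auto`), and a failure in one test never cascades into the others.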

# Proceed in small steps to avoid getting overwhelmed

This is a general best practice. It is true for writing tests, which you should execute step by step as they are being written so they fail as quickly as possible - on average, this will also make you succeed as quickly as possible. It is true for implementation strategy: get to know your tests, see how they behave over time, test them. Do not aim to implement as many as possible as quickly as possible and expect good things to happen. It is true when debugging failures: review the test step by step, make sure you know what the script is supposed to do and how the application under test is supposed to behave, and then start isolating the issue, one change at a time.
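As a small illustration of the "fail as quickly as possible" idea, the sketch below breaks a flow into steps and asserts after each one, so a failure points at the exact step that broke rather than at the end of a long scenario. The tiny Cart class stands in for the application under test.

```python
# A minimal sketch of step-by-step assertions: each step of the flow checks
# its own outcome before the next step runs, so a failure is localised to
# the step that broke. The Cart class is a stand-in for the real app.

class Cart:
    def __init__(self):
        self.items = []

    def add(self, item: str) -> None:
        self.items.append(item)

    def total_items(self) -> int:
        return len(self.items)

def test_cart_flow():
    cart = Cart()
    assert cart.total_items() == 0, "new cart should start empty"

    cart.add("book")
    assert cart.total_items() == 1, "adding one item should be reflected"

    cart.add("pen")
    assert cart.total_items() == 2, "adding a second item should be reflected"
```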

Last updated: July 28th, 2021