In a previous post, I wrote about how proper use of Capybara's APIs can dramatically cut back on the number of flaky and slow tests in your test suite. But there are several other things you can do to further reduce flaky and slow tests, and to debug flaky tests when you encounter them.
Use have_field("some-text-field", with: "X") to check text field contents
Your test may need to ensure that a text field contains a particular value. Such an expectation can be written as:
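For example, a naive version might read the field's value directly (the field id and expected value here are illustrative):

```ruby
# Grabs the field's value immediately, without waiting for any AJAX update.
expect(find("#some-text-field").value).to eq("X")
```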
This can be a flaky expectation, especially if the contents of #some-text-field are loaded via an AJAX request. The problem is that the expectation checks the value of the text field as soon as execution reaches this line. If the AJAX request has not yet come back and populated the field, the test will fail.
A better way to write this expectation is to use have_field("some-text-field", with: "X"):
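A sketch of the improved expectation (the locator is assumed to match the field's id):

```ruby
# Waits up to Capybara.default_wait_time for the field to hold the value.
expect(page).to have_field("some-text-field", with: "X")
```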
This expectation will wait Capybara.default_wait_time (renamed default_max_wait_time in Capybara 2.5) for the text field to contain the specified content, giving any asynchronous responses time to complete and change the DOM accordingly.
Disable animations while running the tests
Animations can be a constant source of frustration for an automated test suite. Sometimes you can get around them with proper use of Capybara’s find functionality by waiting for the end state of the animation to appear, but sometimes they can continue to be a thorn in your side.
At UrbanBound, we disable all animations when running our automated test suite. We have found that disabling animations has stabilized our test suite and made our tests clearer, as we no longer have to write code to wait for animations to complete before proceeding with the test. It also speeds up the suite a bit, as the tests no longer need to wait for animations to finish.
Here is how we went about disabling animations in our application: https://gist.github.com/jwood/90e7bf6873774055b169
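As an illustration of the general idea (not necessarily what the gist above does), one common approach is a small Rack middleware, enabled only in the test environment, that injects a CSS override into every HTML response. The class name here is hypothetical:

```ruby
# Hypothetical middleware: turns off CSS transitions and animations by
# injecting a <style> override into every HTML response.
class DisableAnimations
  DISABLE_CSS =
    "<style>* { transition: none !important; animation: none !important; }</style>".freeze

  def initialize(app)
    @app = app
  end

  def call(env)
    status, headers, response = @app.call(env)
    if headers["Content-Type"].to_s.include?("text/html")
      body = +""
      response.each { |part| body << part }
      # Insert the override just before </head> and fix up Content-Length.
      body = body.sub("</head>", "#{DISABLE_CSS}</head>")
      headers["Content-Length"] = body.bytesize.to_s
      response = [body]
    end
    [status, headers, response]
  end
end
```

In a Rails app, this would be registered with `config.middleware.use DisableAnimations` in config/environments/test.rb only.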
Use has_no_X instead of !have_X
Testing to make sure that an element does not have a class, or that a page does not contain an element, is a common test to perform. Such a test will sometimes be implemented as:
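For example (the selector is illustrative):

```ruby
# Waits the full Capybara.default_wait_time before returning false,
# then negates the result -- slow, even though it passes.
expect(!page.has_css?(".some-class")).to be(true)
```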
has_css? will wait for the element to appear on the page. When it does not, the expression returns false, which is negated to true, allowing the expectation to pass. However, there is a big problem with the above code: has_css? will wait Capybara.default_wait_time for the element to appear. So, with the default settings, this expectation will take two whole seconds to run!
A better way to check for the non-existence of an element or a class is to use the has_no_X version of the check:
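A sketch using the negative form (selector illustrative):

```ruby
# Returns as soon as the element is absent -- no unnecessary waiting.
expect(page).to have_no_css(".some-class")
```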
Using to_not with the positive matcher will also behave as expected, without waiting unnecessarily:
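For example:

```ruby
# Capybara's RSpec matchers translate to_not into the has_no_css? check,
# so this does not burn the full wait time either.
expect(page).to_not have_css(".some-class")
```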
No sleep for the fast feature test
Calls to sleep are often used to get around race conditions, but they can considerably increase the amount of time it takes to run your suite. In almost all cases, the sleep can be replaced by waiting for some element or some content to appear on the page. Waiting with Capybara's built-in wait functionality is faster, because you only wait as long as it takes for that element or content to appear. With a sleep, your test will wait the full amount of time, regardless.
So, really scrutinize any use of sleep in a feature test. There are a very small number of cases where, for one reason or another, we have not been able to replace a call to sleep with something better. However, these cases are the exception, not the rule. Most of the time, Capybara's wait functionality is a much better option.
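For instance (the button label and message are illustrative), a typical replacement looks like:

```ruby
# Before: the test always pays the full second, regardless of when
# the page actually updates.
click_button "Save"
sleep 1
expect(page).to have_content("Record saved")

# After: have_content waits only as long as needed, up to the default
# wait time, so the test proceeds the moment the content appears.
click_button "Save"
expect(page).to have_content("Record saved")
```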
Reproduce race conditions
Flaky tests are usually caused by race conditions. If you suspect a race condition, one way to reproduce the race condition is to slow down the server response time.
We used the following filter in our Rails application's ApplicationController to slow all requests down by 0.25 seconds:
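A sketch of such a filter (the method and environment-variable names are ours; enable it only while hunting race conditions):

```ruby
class ApplicationController < ActionController::Base
  # Delay every request to flush out race conditions in feature tests.
  before_action :simulate_slow_server, if: -> { ENV["SLOW_SERVER"] }

  private

  def simulate_slow_server
    sleep 0.25
  end
end
```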
Intentionally slowing down all requests by 0.25 seconds flushed out a number of race conditions in our test suite, which we were then able to reliably reproduce and fix.
Capture screenshots on failure
A picture is worth a thousand words, especially when you have no friggin idea why your test is failing. We use the capybara-screenshot gem to automatically capture screenshots when a Capybara test fails. This is especially useful when running on CI, where we don't have an easy way to actually watch the test run. The screenshots will often provide clues as to why the test is failing, and at a minimum, give us some ideas as to what might be happening.
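Per the gem's documented RSpec setup, the basic wiring is roughly:

```ruby
# Gemfile
group :test do
  gem "capybara-screenshot"
end

# spec/rails_helper.rb (or spec_helper.rb)
require "capybara-screenshot/rspec"
```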
Write fewer, less granular tests
When writing unit tests, it is considered best practice to make the tests as small and as granular as possible. It makes the tests much easier to understand if each test only tests a specific condition. That way, if the test fails, there is little doubt as to why it failed.
In a perfect world, this would be the case for feature tests too. However, feature tests are incredibly expensive (slow) to setup and run. Because of this, we will frequently test many different conditions in the same feature test. This allows us to do the expensive stuff, like loading the page, only once. Once that page is loaded, we’ll perform as many tests as we can. This approach lets us increase the number of tests we perform, without dramatically blowing out the run time of our test suite.
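A sketch of the pattern (the path and selectors are illustrative):

```ruby
scenario "dashboard shows summary, navigation, and notifications" do
  # The expensive part -- loading the page -- happens only once.
  visit dashboard_path

  # Then we make as many checks as we can against the loaded page.
  expect(page).to have_css("#summary")
  expect(page).to have_link("Account Settings")
  expect(page).to have_content("Notifications")
end
```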
Sharing is caring
Have any tips or tricks you'd like to share? We'd love to hear them in the comments!