Testing is a very large part of how we build UrbanBound. We test at various phases in our software development lifecycle, and we have tests that target different levels of the application’s stack. Testing reassures us that the application behaves as we expect, ensures that it continues to do so as we change it, and catches problems early, when they are easier and cheaper to fix.
User Testing Our Designs
The product team here at UrbanBound strives to design a product that is intuitive, easy to use, and that solves the problem at hand. This is very easy to get wrong. Teams often make assumptions about how much knowledge users have of the problem, or about how they will interact with the product, and many of these assumptions turn out to be incorrect.
By testing our designs with people who are, or could be, users of our application, we find out if we’re wrong at the earliest possible phase in the process. This lets us quickly correct our mistakes while a feature is still in the design phase, instead of spending time and money developing the solution, only to find out after launch that the design is a failure.
Backend Unit Tests
UrbanBound is powered by a Rails application on the backend. Like other Rails apps, the application consists of controllers, models, and a host of support classes (such as service objects, policies, and serializers). We write unit tests to verify that each of these classes behaves as expected in isolation. Unit tests are great to write when creating or modifying a class, because they let you know immediately whether your code is working properly. They also give you feedback about how other modules will interact with your code, which helps you define a public interface that is clear and simple. In addition, unit tests act as a great safety net for catching regressions introduced into the application.
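As a concrete sketch, here is what a unit test for a small service object might look like. The `DiscountCalculator` class and its spec are hypothetical examples, not code from the UrbanBound codebase; the spec is written for RSpec, which we use throughout.

```ruby
# Hypothetical service object; the class and its rules are illustrative only.
class DiscountCalculator
  def initialize(subtotal)
    @subtotal = subtotal
  end

  # 10% off orders of $100 or more, otherwise no discount.
  def call
    @subtotal >= 100 ? (@subtotal * 0.10).round(2) : 0
  end
end

# An isolated unit spec for it (run with `rspec`):
if defined?(RSpec)
  RSpec.describe DiscountCalculator do
    it "discounts large orders" do
      expect(DiscountCalculator.new(150).call).to eq(15.0)
    end

    it "leaves small orders alone" do
      expect(DiscountCalculator.new(50).call).to eq(0)
    end
  end
end
```

Because the class has no dependencies on the database or the rest of the app, specs like these run fast and pinpoint failures precisely.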
Feature Tests
Feature tests can be difficult to write and maintain. Because feature tests usually interact with a web server running in a separate process or thread, timing issues can pop up. If you’re not careful, your test could check for the existence of an element on the page before the browser has finished processing the server’s response and inserting that element into the DOM. This is especially problematic in frontends that make a lot of asynchronous requests to the server. However, there are some very mature tools out there that help with this problem, so you just need to invest some time in learning how to use them properly.
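The usual fix is to poll for the expected state rather than checking it once; Capybara’s matchers do this for you automatically. The idea can be sketched in plain Ruby like this (a simplified illustration, not Capybara’s actual implementation):

```ruby
# Simplified sketch of Capybara-style waiting: poll a condition until it
# becomes true or a timeout expires, instead of checking it exactly once.
def wait_until(timeout: 2, interval: 0.05)
  deadline = Time.now + timeout
  loop do
    return true if yield
    raise "condition not met within #{timeout}s" if Time.now > deadline
    sleep interval
  end
end

# Usage: simulate a DOM element that appears asynchronously.
element_present = false
Thread.new { sleep 0.2; element_present = true }
wait_until { element_present } # succeeds once the "element" shows up
```

A naive single check would fail here, because the “element” only appears 200ms later; polling with a timeout makes the test robust to that kind of asynchrony.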
We write our feature tests with RSpec and Capybara, in combination with either Selenium WebDriver or Poltergeist as the test driver. We prefer Poltergeist because it runs the tests in a headless browser, but we sometimes need to fall back to Selenium for specific tests that don’t run quite right in Poltergeist. We also use SitePrism to model the pages in our application as Page Objects, which makes tests easier to read and write.
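To illustrate the Page Object pattern itself, here is a minimal plain-Ruby sketch. SitePrism gives you a declarative DSL for the same idea; the `LoginPage` class, its URL, and the field names below are hypothetical, not taken from our application.

```ruby
# Minimal page-object sketch: the page class hides selectors and navigation
# behind intention-revealing methods, so tests read as user actions.
class LoginPage
  URL = "/login"

  def initialize(session)
    @session = session # e.g. a Capybara session in a real feature test
  end

  def load
    @session.visit(URL)
    self
  end

  def sign_in(email, password)
    @session.fill_in("email", with: email)
    @session.fill_in("password", with: password)
    @session.click_button("Sign In")
  end
end
```

A test then reads `LoginPage.new(page).load.sign_in("user@example.com", "secret")` instead of a series of raw selectors, and when the markup changes, only the page object needs updating.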
Continuous Integration

What’s the point of having an automated test suite if you don’t run it every chance you get? We use a continuous integration service to run our entire suite each time a change is pushed to GitHub. If any tests fail, we are notified immediately so we can investigate the failure and fix the issue.
We use CircleCI for our CI server. Although it can be a bit pricey for more than a few workers, it really is a fantastic service. It is simple to set up, incredibly flexible, and supports all sorts of testing setups. It also provides the ability to SSH into the machine that is running your tests, which has proven incredibly useful for tracking down the odd case where a test passes locally but fails consistently on the CI server.
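For reference, a minimal CircleCI configuration for a Rails test suite might look something like this (a hypothetical sketch using CircleCI 2.x syntax; the Docker image and commands are illustrative, not our actual configuration):

```yaml
# Hypothetical .circleci/config.yml sketch; image and commands are illustrative.
version: 2.1
jobs:
  test:
    docker:
      - image: cimg/ruby:3.2
    steps:
      - checkout
      - run: bundle install
      - run: bundle exec rspec
workflows:
  build:
    jobs:
      - test
```

The key point is that the whole suite runs on every push, so a failing test is surfaced within minutes of the commit that broke it.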
Manual Testing

Automated testing can never completely replace manual testing; it can only keep the manual testing focused on the things that can’t be easily automated. We have a team of Quality Assurance Engineers who, in addition to writing feature tests, manually test features before they are cleared for deployment and merged into the master branch. Our QA team, with their attention to detail, is able to find issues that automated testing alone would never catch. Issues with the visual layout of a page, or with the “feel” of the application, are prime examples of where manual testing shines.
Our Testing Philosophy
Our goal is to make sure our application works, and continues to work as we change it. Our approach to testing helps us make this happen. But we try never to forget that testing has a cost: it takes time and money to maintain a test suite, so there must always be a return on that investment to the business. Though it doesn’t happen often, there are certainly times when automating the testing of a certain feature just doesn’t make sense. We’re not dogmatic about covering 100% of our codebase with automated tests. That said, we automate as much as we practically can.