John Wood

Lead a team of software engineers and quality assurance engineers in building out a self-service employee relocation application. Perform product research to analyze the problems our customers face with relocation, and design and develop solutions to help solve those problems. Strongly encourage and enforce software development best practices amongst the team (TDD, code reviews, SOLID design, continuous integration, etc). Pair with members of the team to help them tackle tough problems, or to explain certain aspects of the application. Mentor engineers at all levels to ensure they are on a trajectory to reach their career goals and grow as engineers. Participate in the design, coding, and testing of the application, alongside the rest of the team. Help build the necessary software development processes to keep the team running smoothly and communicating effectively. Work with employees outside of the product team to effectively communicate what we are working on, and when it will be delivered. Actively recruit engineers who would be a great addition to the team.

Recent Posts

Rails Validations Alone Won't Save You

Posted by John Wood on Jul 24, 2015 8:24:49 AM

ActiveRecord validations are a very powerful tool. They allow you to clearly and concisely declare rules for your model that must be met in order for any instance of that model to be considered “valid”. And using them gives you all sorts of nifty error handling and reporting out of the box.
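Take a uniqueness validation, for example. A minimal sketch (the Account model and email attribute come from the discussion below) looks like this:

```ruby
class Account < ActiveRecord::Base
  # Reject any account whose email address matches an existing account.
  validates :email, uniqueness: true
end
```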

This validation does exactly what it says. It won’t consider an instance of Account valid unless the email address associated with it is unique across all accounts.

Or will it?

This particular validation will perform a SELECT to see if there are any other accounts with the same email address as the one being validated. If there are, then the validation will trip and an error will be recorded. If not, you will be allowed to save the record to the database.

But, here is the problem. Somewhat obviously, ActiveRecord validations are run within the application. There is a gap in time from when ActiveRecord checks to see if an account exists with the same email address, and when it saves that account to the database. This gap is small, but it most certainly exists.

Let’s say we have two processes running our application, serving requests concurrently. At roughly the same time, both processes get a request to create an account with the email address “”. Consider the following series of events:

Process 1: Receive request
Process 2: Receive request
Process 1: Verify no account exists with the email “”
Process 2: Verify no account exists with the email “”
Process 1: Save the record to the database
Process 2: Save the record to the database => whoops!

At the end of this series of events, we will have two accounts in the database with an email of “”.
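The interleaving above can be simulated deterministically in plain Ruby. This is only an illustrative sketch, and the placeholder email address is mine; the "database" is just an array, and each "process" runs its uniqueness check before either one saves:

```ruby
EMAIL = "jane@example.com" # placeholder address, for illustration only

accounts = []

# Process 1 and Process 2 both run the validation's SELECT first...
process_1_unique = accounts.none? { |account| account[:email] == EMAIL }
process_2_unique = accounts.none? { |account| account[:email] == EMAIL }

# ...so both checks pass, and both records get saved.
accounts << { email: EMAIL } if process_1_unique
accounts << { email: EMAIL } if process_2_unique

puts accounts.count { |account| account[:email] == EMAIL } # => 2
```

Both checks ran against an empty table, so neither process saw the other's record, and we end up with two duplicates.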

Relational database constraints exist for this exact reason. Unlike the application processes, the database knows exactly what it contains when it processes each request. It is the only one that can reliably determine if it is about to create a duplicate record.

Some Rails developers feel that database constraints, including NOT NULL constraints, foreign keys, and unique indexes, are not necessary because of the validations performed by ActiveRecord. As the example above shows, this is certainly not the case. It is almost a certainty that columns which should never be null, but lack a NOT NULL constraint, will eventually contain null data; foreign key columns without a foreign key constraint will eventually contain ids of records that do not exist; and unique columns without a unique index will eventually contain duplicate data. I’ve seen many projects that relied solely on ActiveRecord validations to ensure the integrity of their data. Every single one of those projects had junk data in the database.

A database with inconsistent or bad data can be incredibly difficult to work with. Relational databases have many tried and true tools that can be used to ensure that the integrity of your data remains intact. All of them can easily be set up in a Rails migration. So, be sure to use them when creating or altering tables or columns.
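For instance, a migration enforcing the uniqueness rule from the earlier example might look like this (a sketch; the accounts table name is assumed from the Account model):

```ruby
class AddConstraintsToAccountsEmail < ActiveRecord::Migration
  def change
    # The database will now reject duplicate emails, even when two
    # processes pass the ActiveRecord validation at the same time.
    add_index :accounts, :email, unique: true

    # A NOT NULL constraint guards against missing data in the same way.
    change_column_null :accounts, :email, false
  end
end
```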

The Road to Deploy When Ready

Posted by John Wood on Jul 7, 2015 8:52:13 AM

Our deployment process at UrbanBound has matured considerably over the past year. In this blog post, I’d like to describe how we moved from prescribed deployment windows with downtime, to a deploy-when-ready process that could be executed at any point in time.

The Early Days

About a year ago, UrbanBound was in the middle of transitioning from the “get it up and running quickly to validate the idea” stage to the “the idea works, let’s make sure we can continue to push it forward reliably” stage. Up until that point, we had accrued a significant amount of technical debt, and we didn’t have much in the way of a test suite. As a result, deploys were unpredictable. Sometimes new code would deploy cleanly, sometimes not. Sometimes we would introduce regressions in other areas of the application, sometimes not. Sometimes deploys would interfere with users currently using the app, sometimes not. Our deployment process was simply not reliable.

Stopping the Bleeding

The first order of business was to stop the bleeding. Before we could focus on improving the process, we first needed to stop it from being a problem. We accomplished this with some process changes.

First, we decided to limit the number of releases we did. We would deploy at the end of each two-week sprint, and to push out critical bug fixes. That’s it. We made some changes to our branching strategy in git to support this workflow, which looked something like this:

  • All feature branches would be based off of an integration branch. When features were completed, reviewed, and had passed QA, they would be merged into this integration branch.

  • At the end of every two-week sprint, we would cut a new release branch off of the integration branch. Our QA team would spend the next few days regression testing the release branch to make sure everything looked good. From this point on, any changes made to the code being released, a fix for a bug QA found for example, would be made on this release branch, and then cherry-picked over to the integration branch.

  • When QA was finished testing, they would merge the release branch into master, and deploy master to production.

  • Emergency hotfixes would be done on a branch off of master, and then merged into master and deployed when ready. This change would then have to be merged upstream into the integration branch, and possibly a release branch if one was in progress.

A very similar workflow to the one described above can be found at

This process change helped us keep development moving forward while ensuring that we were releasing changes that would not break production. But, it did introduce a significant amount of overhead. Managing all of the branches proved challenging. And, it was not a great use of QA’s time to spend 2-3 days regression testing the release branch when they had already tested each of the features individually before they were merged into the integration branch.

Automated Acceptance Testing

Manually testing for regressions is a crappy business to be in. But, at the time, there was no other way for us to make sure that what we were shipping would work. We knew that we had to get in front of this. So, we worked to identify the critical paths through the application: the minimum amount of functionality that we would want covered by automated tests in order to feel comfortable cutting out the manual regression testing step of the deployment process.

Once we had identified the critical paths through the application, we started writing Capybara tests to cover those paths. This step took a fair amount of time, because we had to do this while continuing to test new features and performing regression testing for new releases every two weeks. We also had to flesh out how we wanted to do integration tests, as integration testing was not a large part of our testing strategy at this point in time.

Eventually, we had enough tests in place, and passing, that we felt comfortable ditching the manual regression testing effort. Now, after QA had passed a feature, all we needed to see was a successful build in our continuous integration environment to deem the code ready for deployment.

Zero Downtime Deploys

We deploy the UrbanBound application to Heroku. Personally, I love Heroku as a deployment platform. It is a great solution for those applications that can work within the limitations of the platform. However, one thing that is annoying with Heroku is that, by default, your application becomes totally unresponsive while it reboots after a deploy. The amount of time it is down depends on the application, and how long it takes to boot. But, this window was large enough for us that we felt it would be disruptive to our users if we were deploying multiple times per day.

Thankfully, Heroku offers a rolling reboot feature called preboot. Instead of stopping the web dynos and then starting the new ones, preboot changes the order so that it first starts the new web dynos, and makes sure they have started successfully and are receiving traffic before shutting down the old dynos. This means that the application stays responsive during the deploy.
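Preboot is toggled per application via the Heroku CLI. At the time of writing, the command looks like this (the app name is a placeholder):

```shell
heroku features:enable preboot --app your-app-name
```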

However, preboot adds a fair amount of complexity to the deployment process. With preboot, the old version of the application will run side-by-side with the new version of the application, the new worker dynos, and the newly migrated database for at least a few minutes. If any of your changes are not backwards compatible with the older version of the application (a deleted or renamed column in the database, for example), the old version of the application will begin experiencing problems during the deploy. There are also a few potential gotchas with some of the add-ons.

In our case, the backwards compatibility issue can be worked around fairly easily. When we have changes that are not backwards compatible, we simply deploy these changes off hours with the preboot feature disabled. The challenge then becomes recognizing when this is necessary (when there are backwards incompatible changes going out). We place the responsibility for identifying this on the author of the change and the person who performs the code review. Both of these people should be familiar enough with the change to know if it will be backwards compatible with the version of the application currently running in production.

The End Result

With the automated acceptance testing and zero downtime deploys in place, we were finally ready to move to a true “deploy when ready” process. Today, we deploy several times a day, all without the application missing a step. No more big integration efforts, or massive releases. We keep the deploys small, because doing so makes it much easier to diagnose problems when they happen. This deployment process also allows us to be much more responsive to the needs of the business. In the past, it could be up to two weeks before a minor change made it into production. Today, we can deploy that change as soon as it is done, and that’s the way it should be.

Topics: rails, Development & Deployment

Using Page Objects for More Readable Feature Specs

Posted by John Wood on Jun 1, 2015 5:24:36 PM

Feature specs are great for making sure that a web application works from end to end. But feature specs are code, and like all other code, the specs must be readable and maintainable if you are to get a return on your investment. Those who have worked with large Capybara test suites know that they can quickly become a stew of CSS selectors, and selectors sprinkled throughout the suite make the tests difficult to read and maintain. This is even more problematic when the name of a CSS selector doesn’t accurately reflect what the element actually is or does.

Introducing Page Objects

This is where Page Objects come in. Per Martin Fowler:

A page object wraps an HTML page, or fragment, with an application-specific API, allowing you to manipulate page elements without digging around in the HTML.

Page Objects are great for describing the page structure and elements in one place, a class, which can then be used by your tests to interact with the page. This makes your tests easier to read and understand, which in turn makes them easier to maintain. If used correctly, changing the id or class of an element that is used heavily in your feature specs should only require updating the page object, and nothing else.

Let’s look at an example. Here is a basic test that uses the Capybara DSL to interact with a page:
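A basic spec in this style might look something like the following sketch (the page, selectors, and labels here are illustrative, not taken from a real app):

```ruby
describe "the company office page", type: :feature do
  it "adds a new office" do
    visit "/company/offices"

    click_link "Add Office"
    fill_in "office_name", with: "Chicago HQ"
    fill_in "office_address", with: "220 W Huron St"
    click_button "Create Office"

    # CSS selectors leak directly into the test.
    expect(page).to have_css("#office-list .office-name", text: "Chicago HQ")
  end
end
```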

Let’s take a look at that same spec re-written to use SitePrism, an outstanding implementation of Page Objects in Ruby:
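Rewritten with SitePrism, that same hypothetical spec could look like this (the CompanyOfficePage name is from the discussion below; the URL and selectors are illustrative assumptions):

```ruby
class CompanyOfficePage < SitePrism::Page
  set_url "/company/offices"

  element :add_office_link, "#add-office"
  element :name_field, "#office_name"
  element :address_field, "#office_address"
  element :create_button, "#create-office"
  elements :office_names, "#office-list .office-name"
end

describe "the company office page", type: :feature do
  let(:office_page) { CompanyOfficePage.new }

  it "adds a new office" do
    office_page.load

    office_page.add_office_link.click
    office_page.name_field.set "Chicago HQ"
    office_page.address_field.set "220 W Huron St"
    office_page.create_button.click

    expect(office_page.office_names.map(&:text)).to include("Chicago HQ")
  end
end
```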

In the SitePrism version of the spec, the majority of the CSS selectors have moved from the test into the CompanyOfficePage class. This not only makes the selectors easier to change, but it also makes the test easier to read now that the spec is simply interacting with an object. In addition, it makes it easier for other specs to work with this same page since they no longer need knowledge about the structure of the page or the CSS selectors on the page.

While SitePrism’s ability to neatly abstract away most knowledge about CSS selectors from the spec is pretty handy, that is only the beginning.

Repeating Elements, Sections, and Repeating Sections

Page Objects really shine when you completely model the page elements that your spec needs to interact with. Not all pages are as simple as the one in the above example. Some have repeating elements, some have sections of related elements, and some have repeating sections of related elements. SitePrism contains functionality for modeling all of these. Defining the proper repeating elements, sections, and repeating sections in your page object allows you to interact with those elements and sections via SitePrism APIs that make your specs much easier to write, read, and understand.
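As a sketch of what that modeling can look like (all names and selectors here are hypothetical):

```ruby
class OfficesPage < SitePrism::Page
  # A repeating element: every office name on the page.
  elements :office_names, ".office .name"

  # A section: a group of related elements with its own root selector.
  section :search_form, "#office-search" do
    element :query_field, "input.query"
    element :submit_button, "button[type='submit']"
  end

  # A repeating section: one sub-object per matching node on the page.
  sections :offices, ".office" do
    element :name, ".name"
    element :address, ".address"
  end
end
```

In a spec, this reads naturally: offices_page.search_form.query_field.set "Chicago", or offices_page.offices.first.name.text.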

Testing for Existence and Visibility

SitePrism provides a series of methods for each element defined in the page object. Amongst the most useful of these methods are easy ways to test for the existence and visibility of an element.

Using the Capybara DSL, the best way to check for the existence of an element is to expect that a CSS selector that uniquely identifies the element can be found on the page:
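For example, with a hypothetical #some-element selector:

```ruby
expect(page).to have_css("#some-element")
```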

This becomes a bit easier with SitePrism, as we no longer have to use the CSS selector if our page object has an element defined:
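Assuming the page object declares element :some_element, "#some-element", the check becomes:

```ruby
expect(page_object).to have_some_element
```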

Visibility checks are also simpler. Asserting that an element exists in the DOM, and is visible, can be a bit convoluted using the Capybara DSL:
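Something along these lines, using the #blah element discussed below:

```ruby
# Existence alone is not enough; visibility must be requested explicitly.
expect(page).to have_css("#blah", visible: true)
```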

Using SitePrism, this simply becomes:
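Assuming element :blah, "#blah" is defined on the page object:

```ruby
page_object.wait_until_blah_visible
```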

The above code will wait until #blah is added to the DOM and becomes visible. If that does not happen in the time specified by Capybara.default_wait_time, then an exception will be raised and the test will fail.

Waiting for Elements

Using the above visibility check is a great way to ensure that it is safe to proceed with a portion of a test that requires an element be in the DOM and visible, like a test that would click on a given element. Waiting for an element to be not only in the DOM, but visible, is a great way to eliminate race conditions in tests where the click target is dynamically added to the DOM.

Using Capybara Implicit Waits

By default, SitePrism will not implicitly wait for an element to appear in the DOM when it is referenced in the test. This can lead to flaky tests. Because of this, we highly recommend telling SitePrism to use implicit waits in your project.
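Enabling implicit waits is a one-time configuration step, typically placed in spec_helper.rb:

```ruby
SitePrism.configure do |config|
  config.use_implicit_waits = true
end
```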


SitePrism is a fantastic tool. If you write feature specs, I highly recommend you check it out. It will make your specs easier to write and maintain.


Topics: rails, testing, capybara

Tips and Tricks for Debugging and Fixing Slow/Flaky Capybara Specs

Posted by John Wood on Apr 23, 2015 2:23:20 PM

In a previous post, I wrote about how the proper use of Capybara’s APIs can dramatically cut back on the number of flaky/slow tests in your test suite. But, there are several other things you can do to further reduce the number of flaky/slow tests, and also debug flaky tests when you encounter them.

Use have_field(“some-text-field”, with: “X”) to check text field contents

Your test may need to ensure that a text field contains a particular value. Such an expectation can be written as:
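For example (the expected value here is a placeholder):

```ruby
# Checks the field's value immediately, with no waiting.
expect(find("#some-text-field").value).to eq("expected value")
```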

This can be a flaky expectation, especially if the contents of #some-text-field are loaded via an AJAX request. The problem here is that this expectation will check the value of the text field as soon as it hits this line. If the AJAX request has not yet come back and populated the value of the field, this test will fail.

A better way to write this expectation would be to use have_field(“some-text-field”, with: “X”):
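Continuing the sketch above:

```ruby
# have_field locates the field by label text, name, or id (no CSS "#"
# prefix), and waits for it to contain the expected value.
expect(page).to have_field("some-text-field", with: "expected value")
```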

This expectation will wait Capybara.default_wait_time for the text field to contain the specified content, giving time for any asynchronous responses to complete, and change the DOM accordingly.

Disable animations while running the tests

Animations can be a constant source of frustration for an automated test suite. Sometimes you can get around them with proper use of Capybara’s find functionality by waiting for the end state of the animation to appear, but sometimes they can continue to be a thorn in your side.

At UrbanBound, we disable all animations when running our automated test suite. We have found that disabling animations has stabilized our test suite and made our tests more clear, as we no longer have to write code to wait for animations to complete before proceeding with the test. It also speeds up the suite a bit, as the tests no longer need to wait for animations to complete.

Here is how we went about disabling animations in our application:
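One common way to do this in a Rails app (a representative sketch, not necessarily our exact code) is to serve a test-only snippet from the application layout that switches off CSS and jQuery animations:

```erb
<% if Rails.env.test? %>
  <style>
    /* Disable CSS transitions and animations during tests */
    * {
      transition: none !important;
      animation-duration: 0s !important;
    }
  </style>
  <script>
    // Disable jQuery animations (fadeIn, slideDown, etc.)
    if (window.jQuery) { jQuery.fx.off = true; }
  </script>
<% end %>
```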

Use has_no_X instead of !have_X

Testing to make sure that an element does not have a class, or that a page does not contain an element, is a common test to perform. Such a test will sometimes be implemented as:
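For example, with a hypothetical .some-element selector:

```ruby
# Waits the full Capybara.default_wait_time before the negation succeeds.
expect(!page.has_css?(".some-element")).to eq(true)
```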

Here, have_css will wait for the element to appear on the page. When it does not, and the expression returns false, it will be negated to true, allowing the expectation to pass. However, there is a big problem with the above code. have_css will wait Capybara.default_wait_time for the element to appear on the page. So, with the default settings, this expectation will take 2 whole seconds to run!

A better way to check for the non-existence of an element or a class is to use the has_no_X matcher:
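Continuing the hypothetical example:

```ruby
# has_no_css? returns as soon as the element is absent from the page.
expect(page).to have_no_css(".some-element")
```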

Using to_not will also behave as expected, without waiting unnecessarily:
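```ruby
# RSpec's negation routes to has_no_css? under the hood, so this
# does not wait the full timeout either.
expect(page).to_not have_css(".some-element")
```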

No sleep for the fast feature test

Calls to sleep are often used to get around race conditions. But, they can considerably increase the amount of time it takes to run your suite. In almost all cases, the sleep can be replaced by waiting for some element or some content to exist on the page. Waiting for an element or some content using Capybara’s built in wait functionality is faster, because you only need to wait the amount of time it takes for that element/content to appear. With a sleep, your test will wait that full amount of time, regardless.

So, really scrutinize any use of sleep in a feature test. There are a very small number of cases, for one reason or another, where we have not been able to replace a call to sleep with something better. However, these cases are the exception, and not the rule. Most of the time, using Capybara’s wait functionality is a much better option.

Reproduce race conditions

Flaky tests are usually caused by race conditions. If you suspect a race condition, one way to reproduce the race condition is to slow down the server response time.

We used the following filter in our Rails application’s ApplicationController to slow all requests down by .25 seconds:
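It was presumably something along these lines (the filter name here is an assumption):

```ruby
class ApplicationController < ActionController::Base
  # Artificially slow every response down by .25 seconds while running
  # the test suite, to flush out race conditions in feature specs.
  before_action :slow_down_response if Rails.env.test?

  private

  def slow_down_response
    sleep 0.25
  end
end
```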

Intentionally slowing down all requests by .25 seconds flushed out a number of race conditions in our test suite, which we were then able to reliably reproduce, and fix.

Capture screenshots on failure

A picture is worth a thousand words, especially when you have no friggin idea why your test is failing. We use the capybara-screenshot gem to automatically capture screenshots when a capybara test fails. This is especially useful when running on CI, when we don’t have an easy way to actually watch the test run. The screenshots will often provide clues as to why the test is failing, and at a minimum, give us some ideas as to what might be happening.

Write fewer, less granular tests

When writing unit tests, it is considered best practice to make the tests as small and as granular as possible. It makes the tests much easier to understand if each test only tests a specific condition. That way, if the test fails, there is little doubt as to why it failed.

In a perfect world, this would be the case for feature tests too. However, feature tests are incredibly expensive (slow) to setup and run. Because of this, we will frequently test many different conditions in the same feature test. This allows us to do the expensive stuff, like loading the page, only once. Once that page is loaded, we’ll perform as many tests as we can. This approach lets us increase the number of tests we perform, without dramatically blowing out the run time of our test suite.

Sharing is caring

Have any tips or tricks you'd like to share? We'd love to hear them in the comments!

Topics: rails, testing, capybara

Fix Flaky Feature Tests by Using Capybara's APIs Properly

Posted by John Wood on Apr 9, 2015 7:38:01 AM

A good suite of reliable feature/acceptance tests is a very valuable thing to have. It can also be incredibly difficult to create. Test suites that are driven by tools like Selenium or Poltergeist are usually known for being slow and flaky. And, flaky/erratic tests can cause a team to lose confidence in their test suite, and question the value of the specs as a whole. However, much of this slowness and flakiness is due to test authors not making use of the proper Capybara APIs in their tests, or by overusing calls to sleep to get around race conditions.

The Common Problem

In most cases flaky tests are caused by race conditions, where the test expects an element or some content to appear on the page, but that element or content has not yet been added to the DOM. This problem is very common in applications that use JavaScript on the front end to manipulate the page by sending an AJAX request to the server, and changing the DOM based on the response it receives. The time that it takes to respond to a request and process the response can vary. Unless you write your tests to account for this variability, you could end up with a race condition. If the response just happens to come back quickly enough, and there is time to manipulate the DOM, then your test will pass. But, should the response come a little later, or the rendering take a little longer, your test could end up failing.

Take the following code for example, which clicks a link with the id "foo", and checks to make sure that the message "Loaded successfully" displays in the proper spot on the page.
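Based on the discussion that follows, the example presumably looked something like this (a sketch):

```ruby
first("#foo").click
expect(first(".message").text).to eq("Loaded successfully")
```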

There are a few potential problems here. Let’s talk about them below.

Capybara’s Finders, Matchers, and Actions

Capybara provides several tools for working with asynchronous requests.

Finders

Capybara provides a number of finder methods that can be used to find elements on a page. These finder methods will wait up to the amount of time specified in Capybara.default_wait_time (defaults to 2 seconds) for the element to appear on the page before raising an error that the element could not be found. This functionality provides a buffer, giving time for the AJAX request to complete and for the response to be processed before proceeding with the test, and helps eliminate race conditions if used properly. It will also only wait the amount of time it needs to, proceeding with the test as soon as the element has been found.

In the example above, it should be noted that Capybara’s first method will not wait for .message to appear in the DOM. So if it isn’t already there, the test will fail. Using find addresses this issue.
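Swapping first for find in the sketch above:

```ruby
expect(find(".message").text).to eq("Loaded successfully")
```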

The test will now wait for an element with the class .message to appear on the page before checking to see if it contains "Loaded successfully". But, what if .message already exists on the page? It is still possible that this test will fail because it is not giving enough time for the value of .message to be updated. This is where the matchers come in.

Matchers

Capybara provides a series of Test::Unit / Minitest matchers, along with a corresponding set of RSpec matchers, to simplify writing test assertions. However, these matchers are more than syntactical sugar. They have built in wait functionality. For example, if has_text does not find the specified text on the page, it will wait up to Capybara.default_wait_time for it to appear before failing the test. This makes them incredibly useful for testing asynchronous behavior. Using matchers will dramatically cut back on the number of race conditions you will have to deal with.

Looking at the example above, we can see that the test is simply checking to see if the value of the element with the class .message equals "Loaded successfully". But, the test will perform this check right away. This causes a race condition, because the app may not have had time to receive the response and update the DOM by the time the assertion is run. A much better assertion would be:
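```ruby
# have_text waits up to Capybara.default_wait_time for the text to appear.
expect(find(".message")).to have_text("Loaded successfully")
```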

This assertion will wait Capybara.default_wait_time for the message text to equal "Loaded successfully", giving our app time to process the request, and respond.

Actions

The final item we’ll look at is Capybara’s Actions. Actions provide a much nicer way to interact with elements on the page. They also take into account a few different edge cases that you could run into with some of the different input types. But in general, they provide a shortened way of interacting with page elements, as the action will take care of performing the find.

Looking at the example above, we can re-write the test as such:
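Continuing the same sketch, with an action replacing the explicit find-and-click:

```ruby
click_link "foo"
expect(find(".message")).to have_text("Loaded successfully")
```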

click_link will not just look for something on the page with the id foo; it will restrict its search to links. It will perform a find, and then call click on the element that find returns.


If you write feature/acceptance tests using Capybara, then you should spend some time getting familiar with Capybara’s Finders, Matchers, and Actions. Learning how to use these APIs effectively will help you steer clear of flaky tests, saving you a whole lot of time and aggravation.

Topics: rails, testing, capybara

Testing at UrbanBound

Posted by John Wood on Mar 26, 2015 9:57:46 AM

Testing is a very large part of how we build UrbanBound. We test at various phases in our software development lifecycle, and we have tests that target different levels of the application’s stack. Testing reassures us that the application will behave as we expect it to, ensures that it continues to behave as we expect it to as we change it, and catches problems early, making them easier and cheaper to fix.

User Testing Our Designs

The product team here at UrbanBound strives to design a product that is intuitive, easy to use, and solves the problem at hand. This is very easy to get wrong. Often, teams will make assumptions around the knowledge that the user has of the problem, or how they will interact with the product. Many times, these assumptions are incorrect.

By testing our designs with people who are, or potentially could be users of our application, we find out if we’re wrong at the earliest possible phase in the process. This lets us quickly correct our mistakes while still in the design phase of a feature, instead of spending time and money developing the solution, only to find out after the feature has been launched that the design is a failure.

How we conduct user testing, and the people we recruit as testers, both differ depending on the feature being designed. Often, the prototype being tested can simply consist of a series of static screens that are linked together. This sort of prototype is cheap to build, and is great for testing a user’s understanding of the product and some simple flows through the product. For features that involve a large amount of user interaction, we’ll build a fully interactive prototype in JavaScript. This sort of prototype lets us test the user’s interaction with the product, in addition to their understanding of it and the product’s flow. While it’s much more expensive to build, the costs pale in comparison to the costs of building and delivering a bad solution, and having to go back to the drawing board after a feature has launched and failed.

Backend Unit Tests

UrbanBound is powered by a Rails application on the backend. Like other Rails apps, the application consists of controllers, models, and a host of support classes (such as service objects, policies, serializers, etc). We write unit tests to verify that each of these classes behaves as expected in isolation. Unit tests are great to write when creating or modifying a class, as it lets you know immediately if your code is working properly. It also gives you feedback with regards to how other modules will interact with your code. This feedback helps you define a public interface that is clear and simple. In addition, unit tests act as a great safety net for catching regressions introduced into the application.

We use rspec as the framework for our unit tests, factory_girl for setting up test data, and database_cleaner to help us clean up after each test runs.

Frontend Unit Tests

Like many modern web applications, UrbanBound’s frontend is a single page JavaScript application. Because our frontend is an app itself with a wide array of classes and components, it is important that we test it like we test our backend application. We do this for the same reasons we unit test our backend code.

Our frontend tests are driven by karma, and written using the Jasmine testing framework. We also use the great set of extensions to Jasmine provided by jasmine-jquery.

Feature Tests

Unit tests are great for making sure that modules work in isolation. But, simply making sure the wheels turn and the engine starts doesn’t mean the car will drive. We feel that, especially with a single page JavaScript app on the frontend, it is important to have a suite of tests that exercise the entire stack. This is where our feature tests come in.

Feature tests can be difficult to write and maintain. Because feature tests usually interact with a web server running in a different process or thread, issues related to timing can pop up. If you’re not careful, your test could be checking for the existence of an element on the page before the server has had time to process the response and insert that element into the DOM. This is especially problematic in frontends that make a lot of asynchronous requests to the server. However, there are some very mature tools out there that help with this problem, so you just need to invest some time into learning how to use them properly.

We use rspec + capybara to write our feature tests, in combination with either Selenium WebDriver or Poltergeist as the test driver. We prefer Poltergeist because it runs the tests in a headless browser, but sometimes we need to fall back to Selenium for specific tests that don’t run quite right in Poltergeist. We also use SitePrism to model the pages in our application as Page Objects, which makes tests easier to read and write.

Continuous Integration

What’s the point in having an automated test suite if you don’t run it every chance you get? We use a continuous integration service to run our entire suite each time a change is pushed to GitHub. If any tests fail, we are notified immediately so we can investigate why the test failed and fix the issue.

We use CircleCI for our CI server. Although it can be a bit pricey for more than a few workers, it really is a fantastic service. It is simple to set up, incredibly flexible, and supports all sorts of testing setups. CircleCI also provides the ability to SSH into the machine that is running your tests, which has proven incredibly useful for tracking down the odd case where a test passes locally but fails consistently on the CI server.
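As an illustration, a circa-2015 `circle.yml` for a setup like ours might look something like this. The version number and commands are hypothetical, not our actual configuration:

```yaml
machine:
  ruby:
    version: 2.2.0
test:
  override:
    # Backend specs, including the rspec + capybara feature tests
    - bundle exec rspec
    # Frontend unit tests, run once through karma
    - karma start --single-run
```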

Manual Testing

Automated testing can never completely replace manual testing. It can only keep the manual testing focused on the things that can’t be easily automated. We have a team of Quality Assurance Engineers who, in addition to writing feature tests, manually test features before they are cleared for deployment and merged into the master branch. Our QA team, with their attention to detail, is able to find issues that automated testing alone would never catch. Issues with the visual layout of a page, or the “feel” of the application, are prime examples of where manual testing shines.

Our Testing Philosophy

Our goal is to make sure our application works, and continues to work as we change it. Our approach to testing helps us make this happen. But we try to never forget that testing has a cost. It takes time and money to maintain a test suite, so there must always be a return on that investment to the business. Though it doesn’t happen often, there are certainly times when automating the testing of a certain feature just doesn’t make sense. We’re not dogmatic about making sure that 100% of our codebase is covered by an automated test. That being said, we automate as much as we practically can.

Topics: testing, Development & Deployment

How we make Remote Work *Work* [Part 2]

Posted by John Wood on Mar 9, 2015 9:27:00 AM

In Part 1 we talked about the benefits of remote work, and ways to successfully roll it out in an organization that is not yet accustomed to having remote workers. In Part 2, we will talk about the tools we use to make remote work *work*.

Asynchronous Communication Tools

Most companies already have a few ways to communicate asynchronously, that is, without expecting an immediate response. Email and chat (we use Slack) are two prime examples. Chat tools are optimal for short communications, while email is generally a better tool for longer, more detailed communications. Asynchronous communication is ideal when you don’t want to interrupt somebody, and would rather hear back from them when they take a break from what they’re working on at the moment.

When working with a remote team, diligent use of these tools becomes even more important. Remote employees miss out on chats around the water cooler, and don’t have the ability to overhear discussions at the next desk. So if a topic can benefit from the entire team’s input, it is important to communicate it in a way that the entire team can see it and respond accordingly.

Moving these communications into email and chat has the added benefit of making them searchable at a later point in time. This can benefit employees that regularly come into the office as well as remote employees. If you take a week off, it is possible to catch up on the discussions that took place and the decisions that were made while you were out by reviewing the written history.

Note that not all chat solutions are created equal. Some organizations rely on chat solutions that only offer one-to-one communication or ad-hoc group chats. What we find essential is that the chat tool allows for dedicated rooms. These dedicated rooms form a collective consciousness that lets team members stay plugged into topics of interest. When chat software only allows for ad-hoc groups, two problems arise: the overhead of setting up the group often prohibits its use, and anybody who is accidentally left off the invite misses out and can’t contribute. Dedicated rooms are also very useful for getting information out to a distributed team, as they substitute for the in-person equivalent of someone standing up and making an announcement. To put a finer point on it, we find that AOL Instant Messenger-type products, supposed “enterprise chat products” like Microsoft Lync, and even Gmail Chat are insufficient because they don’t support dedicated channels. Go with something like Slack, HipChat, or even IRC.

The main challenge with asynchronous communication is recognizing when it is a poor fit, and when the conversation needs to shift to something more synchronous. Conversations with a lot of back and forth are usually a poor fit for email or chat. They generally move along much quicker, and with less confusion, via a synchronous tool like video chat. Many conversations start out with only a question or two, and quickly evolve into a much larger conversation. When this happens, ditch the chat and fire up a Google Hangout.

Synchronous Communication Tools

If the communication requires the undivided attention of two or more parties, then a more synchronous form of communication, one that provides an immediate response, is the way to go. This keeps the conversation moving, saves everybody a lot of time waiting for responses, and minimizes miscommunication.

Google Hangouts

We use Google Hangouts extensively for video chat. It is a fantastic tool, and free! The video and voice quality is pretty darn good if participants have a decent internet connection. And if you happen to be using the coffee shop’s crappy WiFi, you can adjust your quality settings to degrade the video so the audio continues to come through just fine. It is also trivial for users on the Hangout to share their screens. If you are a Google Apps user, you can create persistent hangout links for recurring meetings, like your daily stand-up, and you have the ability to prevent people outside your organization from joining the Hangout.

Google Hangouts isn’t perfect. Hangouts are limited to 15 connections, which could be a problem for larger teams/meetings. There are also a few bugs, like the one that boots you out of a meeting if Google expires your session. But, for the most part, Hangouts is an excellent, free, tool for video chat and screen sharing.


ScreenHero

For remote pairing, we use ScreenHero. ScreenHero (recently acquired by Slack) is by far the best remote pairing tool I have ever used. With ScreenHero, each person in the pair gets their own mouse. This makes a huge difference, as it gives both participants the ability to say “this code, over here,” moving the mouse to the part of the code they want to draw attention to. You can also use your mouse to interact with the host’s screen, as you would if you were pairing side by side with somebody. You don’t have to “pass the ball” or perform some additional step in order to interact with the host’s machine. The audio and video quality is fantastic as well.

Tips for Improving Video Conferencing

Good hardware is important in any video conferencing setup. You want to do your best to make the remote employees feel like they are in the same room with everybody else, and vice versa.

For remote employees, a good headset with a unidirectional microphone is key. It lets you hear the others well, and the unidirectional microphone limits background noise so the others can hear you well too. Many of our developers use gaming headsets, which work out really well.

Conference rooms should be equipped specifically for hosting remote calls. This means a TV, a separate conference camera, and a good multi-directional microphone. It helps tremendously for remote employees to see everybody in the room and hear everyone clearly, as they would if they were sitting in the room themselves.

For longer meetings, or meetings that are highly collaborative, multiple camera angles really help. This is accomplished easily in Google Hangouts by having a few people in the room join the hangout and position their laptop camera from a different angle. When there are notes on the wall, or other items in the room that participants need to walk around and interact with, we have a person in the room join the hangout on their mobile phone, and act as a surrogate for the remote employees. This works really well.



Remote Collaboration Tools

Google Docs / Google Drive

Google Drive (Google Docs, Google Sheets, Google Slides, etc.) provides an impressive suite of office tools. The apps themselves are responsive, reliable, and full featured. But the true strength of these tools is the collaboration features. It is very easy to invite others to collaborate, in either edit or read-only mode. When a collaborator joins in editing a document, you can see that they have joined, as well as any updates they make, in real time. These capabilities make it really easy to co-author documents while potentially sitting a world away.

Pivotal Tracker

We use Pivotal Tracker for project management. It’s a powerful tool, and is a great fit for our development process. So, why is Pivotal Tracker being mentioned on a blog about working remotely? Because some companies still manage their projects with notecards on a wall. This can be a fantastic way to manage things for co-located teams, but simply does not work for anybody who works outside of the office. The information contained on these cards is important, and it is equally important that everybody on the team have access to that information.

GitHub Pull Requests for Code Reviews

Our source code lives in GitHub. GitHub was built with remote collaboration in mind. There are many features that exhibit this fact. But I just want to talk about pull requests for a moment.

We do code reviews for all of our work. Not only does a review give the developer a chance to collect feedback from their peers, it is also an excellent tool for sharing knowledge within the team. We drive our code review process with pull requests. When a developer finishes work on a story, they open a pull request and find somebody to review it. The pull request provides a superb visual diff of the changes, and allows the author and the reviewer to go back and forth via comments in the pull request itself. When a section of code that has been commented on is updated, GitHub automatically hides the comment, assuming it has been addressed. This makes it easy for the reviewer to see whether all of their comments have been addressed. It is also easy to mention other team members (via @username) not currently involved in the review to call their attention to a certain piece of code. This back and forth continues until the reviewer signs off and sends the pull request to one of our QA engineers for testing.


While good tools alone cannot make a distributed team succeed, bad tools will certainly make a distributed team fail. It is very important that you find a great set of tools. New, better tools are always coming along. So, always keep an eye out and an ear open for new tools to make remote work even easier. The best is yet to come, I’m sure of it.

How we make Remote Work *Work* [Part 1]

Posted by John Wood on Mar 6, 2015 2:41:00 PM

The ability to work remotely is a tremendous benefit. There is certainly something to be said about getting everybody in the same physical space. Great things happen when a bunch of slightly over-caffeinated engineers pile into a conference room with a whiteboard and a collection of multi-colored dry erase markers. And pairing is certainly most effective when your pair is right next to you. However, there is also something to be said for giving people the option to work where they feel they are most productive, or the option to be more involved with their families. Remote work also opens the door to work with talented people outside of your geographic area.



Supporting a distributed team is not easy. Tools alone cannot make it work. It requires a big change in how a team communicates. And, it requires the support of the organization.

At UrbanBound, we have two full time remote members of the product team, a handful of part time remote employees that live out in the boonies, and a few employees that like to work remotely every now and again to either wait for the FedEx guy, or get heads down on a problem for a while. Remote work is a relatively new thing at UrbanBound, and it took a little while for the organization and the team to adjust. Thankfully, we have several team members who have remote working experience, and they were able to help in making that transition a success.

Gaining Organizational Support

Communicating the Benefits

Remote work is fairly common amongst those who work in the tech field. Working remotely is made possible by a wide array of internet based tools that those in the tech field are naturally more familiar with. In most cases, we have been using tools like internet based chat and video conferencing for much longer than those in other professions. This is not new to us. But for others, working with people who are not in the same office requires a big adjustment, which can sometimes be difficult. Others may be reluctant to make that adjustment unless they can clearly see why making this shift is in the organization's best interest.

It is important to communicate that remote work can result in happier, more productive employees. It allows the company to hire from outside of their geographic area, widening the pool of qualified candidates. It is also important to note that today’s tools make it cheap and easy to support remote workers. It no longer requires a special video conferencing system; a modern laptop can be its own video conferencing system.

Training on the Tools

As I mentioned above, many of the tools used to support remote work may be new to people outside of the tech industry. Therefore, it is important that people unfamiliar with the tools be trained on how to use them. Letting folks figure out the tools for themselves could take a lot of time and cause a lot of frustration, which may cause people to question the benefit of using such tools and supporting remote workers.

Training somebody to use a tool like Google Hangouts does not require an expensive one week class. A one hour meeting is more than enough to familiarize folks with the capabilities of a tool, and how to use it effectively. It is important that somebody be available for any questions that come up about the tools after the training is over. You’ll want to provide as much support as you can in helping these tools gain adoption. Failure to adopt these tools could result in a failed remote worker program.

Helping Make the Adjustment

The first remote worker usually has it the hardest. When a company is not used to having remote workers, it is very easy to forget about the person who is not in the office. In-office employees will often host meetings and forget to set up some way for the remote worker to participate. In addition, remote workers are often left out of company events and activities.

Supporting remote workers requires the company to make small adjustments to how it communicates and plans. Any important meeting should have some way for remote employees to participate...not just “call in”, but actually participate. It takes a while to figure out the right setup to foster this participation. The remote employee should be able to hear everybody talking (usually enabled by a good microphone), and everybody should be able to hear the remote employee talk (usually enabled by a good set of speakers). The goal is to make it feel like the remote employee is in the room with everybody else.

In addition to being able to hear and be heard, it is a huge plus if the remote worker can both see and be seen. We have two main conference rooms set up with a microphone, a camera, and a TV. If the remote participants can see the people in the meeting, they can observe body language, which is often a very good way to judge reactions.

Get some actual face time

It helps to bring the employee into the office every once in a while. Seeing the remote employee in person gives the team the ability to get to know that person a little better. It provides the opportunity to grab lunch, and talk about things other than work for a while. Building these bonds helps people go the extra mile to make sure the remote employee really feels like part of the company. Company outings and events are an ideal time to bring remote employees into the office, allowing them to participate in person, and reminding them that they are a part of the company.

Be patient

Patience is absolutely required when making this transition. This sort of adjustment does not happen quickly. It takes time for people to change the way they communicate. It takes time for them to learn how to use the tools effectively. You need to gently keep nudging them in the right direction. If somebody forgot to hook up the multi-directional microphone for a meeting and you couldn’t hear well as a result, gently remind them of how much better you can hear with that microphone instead of berating them for “forgetting about you”. It takes time, and patience, but this sort of adjustment is far from insurmountable. Stay positive, be persistent, and the organization will get there.

Other Requirements for Working Remotely Successfully

Fast Internet Connections

I think it goes without saying that fast, reliable internet connections are a must for all parties involved. Pretty much all communication is done over the internet. It is very difficult to video chat or remote pair with somebody when one of the parties has a poor internet connection. In order for the communication to be seamless, as it would be if you were sitting right there with the person, it is important that the audio be of high quality and not cut out during the conversation.

Quiet / Private Places to Chat

Perhaps not quite as important as a good internet connection, but having a quiet, private place to work matters. Even with directional microphones, it can sometimes be difficult to hear people if there is a lot of background noise (like in a loud office or a coffee shop). It is always best if you can duck into a quiet room when doing a video chat or having a remote pairing session. If you manage remote employees, or vice versa, then having a private place to chat is especially important.


It takes some work and a commitment from the organization to make working remotely a success. But I think that most companies are capable of making the transition, and the benefits far outweigh the costs. Granted I’m biased, as I work remotely a few days each week from the boonies. But at this stage in my life, I’m not sure that I could commit to commuting three hours a day, every day. The ability to work from home a few times a week allows me to put in a full day’s work, and still be very active in my kids’ lives -- all of which makes me a very happy and productive employee.

In Part 2 of this two part series, we will discuss some of the tools we use to make remote work *work*. Stay tuned!

Migrating Data - Rails Migrations or a Rake Task?

Posted by John Wood on Feb 25, 2015 11:41:00 AM

I’ve always thought that Migrations were one of Rails’ best features. In one of the very first projects I worked on as a n00b software engineer straight out of college, schema and data migrations were a very large, and painful, part of that project’s deployment process. Being able to specify how the schema should change, and being able to check in those changes along with the code that depends on those changes, was a huge step forward. The ability to specify those changes in an easy to understand DSL, and having the ability to run arbitrary code in the migration to help make those changes, was simply icing on the cake.

But, the question of how to best migrate data, not the schema, often comes up. Should data migrations be handled by Rails migrations as well? Should they be handled by a script, or a rake task instead? There are pros and cons to both approaches, and it depends on the situation.

Using Rails Migrations

One of the features of Rails Migrations is that the app will fail to start up if you have a migration that has not been run. This is good, because running your app with an out-of-date schema could lead to all sorts of problems. Because of this, most deployment processes run the migrations after the code has been deployed, but before the server is started. If your data migration needs to run before the app starts up, then you can use this to your advantage by using a migration to migrate your data. In addition, if your data migration can be reversed, then that code can be placed in the migration’s down method, fitting nicely into the “migration way” of doing things.

However, there are some pitfalls to this approach. It is bad practice to use code that exists elsewhere in your application inside of a Rails Migration. Application code evolves over time. Classes come and go, and their interfaces can change at any time. Migrations, on the other hand, are intended to be written once and never touched again. You should not have to update a migration you wrote three months ago to account for the fact that one of your models no longer exists. So, if migrating your data requires the use of your models, or any other application code, it’s probably best that you not use a migration. But if you can migrate your data using nothing but SQL statements, then this is a perfectly valid approach.
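For example, a SQL-only data migration might look like the sketch below. The table and column names are made up, and the tiny `ActiveRecord` stub at the top exists only so the sketch runs outside a Rails app, where the real `ActiveRecord::Migration` provides `execute`; in an actual migration file you would write just the migration class.

```ruby
# Minimal stand-in for ActiveRecord::Migration so this sketch is runnable on
# its own. A real Rails app provides execute, which runs the SQL directly.
module ActiveRecord
  class Migration
    @@sql_log = []

    def self.sql_log
      @@sql_log
    end

    def execute(sql)
      @@sql_log << sql.strip
    end
  end
end

# Hypothetical data migration: because it uses nothing but SQL, it never
# references model classes and so never breaks as application code evolves.
class BackfillAccountSlugs < ActiveRecord::Migration
  def up
    execute <<-SQL
      UPDATE accounts SET slug = lower(email) WHERE slug IS NULL
    SQL
  end

  # The data change is reversible, so down fits the "migration way" too.
  def down
    execute "UPDATE accounts SET slug = NULL"
  end
end
```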

Using a Rake Task

Another way to migrate production data is to write a rake task to perform the migration. Using a rake task to perform the migration provides several clear advantages over using a Rails Migration.

First, you are free to use application code to help with the data migration. Since the rake task is essentially “throw away”, it can simply be deleted after it has been run in production. There is no need to change the rake task in response to application code changing. Should you ever need to view the rake task after it has been deleted, it is always available via your source control system. If you’d like to keep it around, that’s fine too. Since the rake task won’t be run again after it has been run in production, it can continue to reference classes that no longer exist, or use APIs that have changed.

Second, it is easier to perform test runs of the rake task. We usually wrap the rake task code within an ActiveRecord transaction, to ensure that if something bad happens, any changes will be rolled back. We can take advantage of this design by conditionally raising an exception at the end of the rake task, rolling back all of the changes, if we are in “dry run” mode (usually determined by an environment variable we pass to the task). This allows us to perform dry runs of the rake task, and use logging to see exactly what it will do, before allowing it to modify any data. With Rails Migrations this is more difficult, as you need to roll back the migration as a separate step, and that is only possible for migrations that are reversible.
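The shape of such a task, reduced to plain Ruby so the pattern stands on its own: in the real task, `ActiveRecord::Base.transaction` does the rolling back when we raise at the end, and the method, record, and field names below are hypothetical. Here a rescue that restores a snapshot plays the role of the database ROLLBACK.

```ruby
# Raised at the end of a dry run so every change above is undone; in the real
# task, raising inside ActiveRecord::Base.transaction triggers a ROLLBACK.
DryRunRollback = Class.new(StandardError)

def normalize_emails!(accounts, dry_run: false)
  snapshot = accounts.map(&:dup) # what the database would restore on ROLLBACK

  accounts.each do |account|
    account[:email] = account[:email].downcase # the actual data migration
  end

  # In dry-run mode, raise after doing (and logging) all the work, so you can
  # see exactly what the task would change before letting it modify anything.
  raise DryRunRollback if dry_run
  accounts
rescue DryRunRollback
  accounts.each_with_index { |account, i| account.replace(snapshot[i]) }
  accounts
end
```

The real task would read the flag from an environment variable, something like `DRY_RUN=1 bundle exec rake data:normalize_emails`, and take the `dry_run: true` path.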

Finally, you can easily run the rake task whenever you want. It does not need to happen as part of the deployment, and you don’t need to alter your existing deployment process to push the code without running the migrations or restarting the server. This gives you some flexibility, and lets you pick the best time to perform the migration.

Our Approach

Generally, we use Rails Migrations to migrate our application's schema, and Rake tasks to migrate our production data. There have only been a few cases where we have used Rails Migrations to ensure that a data migration took place as a part of the deployment. In all other cases, using a Rake task provides us with more flexibility, and less maintenance.

Another Approach?

Do you have another approach for migrating production data? If so, we’d love to hear it. Feel free to drop it in the comments.

Topics: rails, Development & Deployment

Why we're sticking with Heroku as long as possible

Posted by John Wood on Feb 25, 2015 11:33:00 AM

These days companies have a lot of choices when it comes to where to host their web applications. Not only are there many different providers to choose from, there are many different types of hosting to choose from. Do you need the raw performance of running on bare metal? Do you need total customization or control of your environment? Or do you simply need to deploy your application somewhere, and let somebody else manage all of the finer details?

There is no one right answer for everybody. It all depends on the needs of your application. However, at UrbanBound, we’re going to stay on Heroku as long as humanly possible.

There are many benefits to using a Platform as a Service (PaaS) provider such as Heroku or OpenShift. You give up a lot of control over how your environment is configured. However, in many cases, this is a very, very good thing.


I’m a nerd. I have an old-as-dirt dual Pentium III box in my basement, with a whopping 512MB of RAM, running the latest Ubuntu Server LTS release. I also have a VPS (Virtual Private Server) that I use to host a couple of applications that I have built over the years. There is nothing special on either of these machines. No credit card numbers. No proprietary information. However, that does not stop the flood of vulnerability scans and brute force login attempts that hit these machines daily. I know for a fact that, should I lapse on keeping my machines patched, up-to-date, and locked down, it would only be a matter of time before somebody gained access and did who knows what to my data or my websites/webapps.

The nice thing about using a PaaS is that I don’t have to worry about security as much. I still worry about it, but I worry about the things that are in my control… like making sure our application doesn’t have any security vulnerabilities, or isn’t using versions of any libraries or frameworks with known security holes. But, I don’t have to worry about stuff like the recent Ghost or Shellshock vulnerabilities. The great team at Heroku is all over these issues. We have come to rely on Heroku to quickly address new security vulnerabilities at all levels of the tech stack below our application.

I also don’t need to worry about firewall configuration, intrusion detection systems, malware scanning, network configuration, configuration of other services running on the servers, etc. The list goes on and on.


Scaling out is getting easier and easier as time goes on. The days of provisioning a new server from scratch, making sure all of the right versions of the right packages are installed, and manually adding the server to the load balancer are long gone. Even the hosting services that require you to manage the servers yourself provide an easy way to clone an existing server and have the new machine up and handling requests in minutes. And tools like Chef and Puppet are great for keeping you from ending up with a batch of snowflake servers.

But, simply using a slider in a dashboard to tell Heroku how many instances I’d like running, and then poof? Are you kidding? Short of reading my mind (or implementing a fully automated scaling solution with an unlimited budget), I don’t know how much easier it could be.

Granted, this does not come cheap. Heroku can be a very expensive option when you start scaling out. But for the time being, it is stupid-simple to handle occasional spikes in traffic.


One potential reason for wanting to manage your own servers is to have the ability to deploy the services you need to get the job done. When you have access to the machine itself, you are free to run whatever database, queuing system, or cache you wish.

Heroku supports a wide variety of add-ons. The list includes data stores, caching utilities, error reporting tools, logging and monitoring tools, notification services, network services, queuing systems, and more. It is very diverse, and likely includes at least one add-on to handle whatever need you have.

The best part is that these add-ons are simple to use. There is nothing to install, and most require only running a single command, and maybe setting an environment variable or two, before you can begin using them. That’s it.


At UrbanBound, we have a team of people who specialize in building software. Though we all know our way around the command line, and a few of us administer our own personal servers, none of us do it full time. Using a PaaS, we can leave all of the server administration to somebody else...somebody who knows much more about server administration than we do.

The focus on DevOps tooling over the past few years has resulted in tools that make it much easier to administer clusters of servers. However, creating and maintaining Chef recipes and Puppet resource files takes a fair amount of work. It is also not easy to keep up to date with the latest versions of all the software you’re running on the server. Nor is it trivial to ensure that all of the software you are running on your server is configured, tuned, and optimized properly.

Leaving the server administration in the hands of the PaaS provider doesn’t only mean that we will end up with a better configured and more secure server. It also means we can focus on what we do best, building our app.


Not everybody can get away with using a PaaS service like Heroku. Some may need tighter control over how things are configured. Some may need to run in a private data center for legal reasons. And for those who need to scale out wide, it can become very expensive. But, if you don’t need the raw performance of running on bare metal, or the ultimate flexibility to configure every last thing in your environment, then I would encourage you to seriously consider a PaaS provider such as Heroku. Focus on what you do best. For us, that’s continuing to make the UrbanBound application great.

Topics: Development & Deployment
