Throughout our years at AppGyver, we’ve worked hard to keep our complex and intertwined tech stack highly testable. When you have literally dozens of interplaying components – proxies, data stores, data connectors, end user web apps, native iOS and Android clients, a visual app builder, a cloud build service for creating app binaries, user management, payments, analytics and much more – it takes time and effort to keep your test coverage up to par.
Unit tests are straightforward, of course – an individual developer can write code with good test coverage, and it’s easy for contributors to ensure the tests keep passing. Measuring coverage is relatively simple, too. Unit tests are great for designing code, but as your system grows in size and complexity, you can rely on them less and less as an indicator of the health of the system as a whole. You can have 100% unit test coverage and green bars everywhere, and the system as a whole can still fail.
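To make that claim concrete, here is a minimal, hypothetical sketch (the function names are invented for illustration): two components whose unit tests are both green, yet whose integration fails because each side was tested against its own assumptions about the shared contract.

```python
# Producer: emits a user record. Its unit tests pass.
def serialize_user(name: str) -> dict:
    return {"username": name}

# Consumer: expects the key "name". Its unit tests also pass,
# because they feed it a hand-built fixture instead of real output.
def greet(record: dict) -> str:
    return f"Hello, {record['name']}!"

# Unit tests in isolation: both green.
assert serialize_user("ada") == {"username": "ada"}
assert greet({"name": "ada"}) == "Hello, ada!"

# End-to-end: the components disagree on the field name and fail.
try:
    greet(serialize_user("ada"))
    print("integration passed")
except KeyError:
    print("integration failed despite green unit tests")
```

No amount of additional unit coverage on either function would catch this; only a test that exercises the real path from producer to consumer does.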
This makes end-to-end tests extremely important. When you are building a product as complex as ours and deploying to production daily, you simply can’t rely on just unit tests – no matter their coverage. We need to be certain that the user can perform all the intended combinations of actions, in production, in the order intended. And with our main tool Composer 2 being web-based, the only way to make sure everything works is for the test suite to emulate someone actually clicking around in the browser.
Browser-Based E2E Testing Sucks
Traditionally, E2E testing of complex web apps has been horrible. We’ve used many different tools and frameworks – Protractor, Ghost Inspector, PhantomJS, vanilla Selenium, and so on – and even built some of our own. None of them have gotten us to where we want to be.
The problems have been numerous. Tests have been difficult to create and even harder to maintain; tests have worked only with certain browsers; weird bugs have kept popping up. Cloud-based automated runs and reports were especially hard to set up. At one point, we coded a tool that launched Protractor tests on a remote machine, recorded the output and uploaded it to Flowdock. To get rid of the inevitable hangups, we set the tool to periodically reboot the whole system. Not very elegant. Iframes proved another Achilles’ heel for many frameworks – they simply couldn’t navigate them.
In our experience, a great E2E testing framework/tool should meet the following criteria:
- Runs in the cloud
- Tests can be created (mostly) without coding
- Supports multiple browsers
- Easy to set up automated, periodic tests
- Ability to easily run the same test against different environments
- High maintainability: fixing tests that break due to product changes is easy
- Great, visual reports of breaking changes
- Handles complex SPAs and network requests
There are other niceties, of course, but we failed to find a tool that would meet even just the above criteria – until we came across Usetrace.
Usetrace Takes It Home
Usetrace is the first E2E testing tool/framework we’ve found to hit all the right spots. It supports multiple browsers. Tests are easy to schedule and run from the web, and the reports are clear. Building tests is easy with the web editor. Usetrace effortlessly handles multiple pages and domains, navigates complex SPAs and waits for network requests to complete. It’s quick to fix up misbehaving selectors by diving into the code.
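As a rough illustration of why some selectors misbehave and others don’t (this is a generic sketch, not Usetrace’s actual selector engine): a selector keyed to a stable, dedicated attribute survives markup refactors that would break one tied to the document’s structure or styling classes.

```python
# Sketch: locate an element by a stable attribute across two versions
# of the same UI, using only the standard library's HTML parser.
from html.parser import HTMLParser

class FindByAttr(HTMLParser):
    """Records the tag of the first element carrying attr=value."""
    def __init__(self, attr: str, value: str):
        super().__init__()
        self.attr, self.value, self.found = attr, value, None

    def handle_starttag(self, tag, attrs):
        if self.found is None and dict(attrs).get(self.attr) == self.value:
            self.found = tag

# The same "Save" button before and after a markup refactor.
OLD = '<div><span><button data-test="save">Save</button></span></div>'
NEW = '<section><button class="btn-primary" data-test="save">Save</button></section>'

for markup in (OLD, NEW):
    parser = FindByAttr("data-test", "save")
    parser.feed(markup)
    # A structure-based selector (div > span > button) would only match OLD;
    # the attribute-based lookup finds the button in both versions.
    assert parser.found == "button"
```

A selector expressed as a tag path breaks the moment the wrappers change, which is exactly the kind of maintenance burden the next paragraphs describe.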
More importantly though, Usetrace is working on the right abstraction level: it’s all tied to just the HTML UI of the page. This leaves us free to change the underlying technologies as the product evolves, without messing with our tests. And when a breaking change is made, the failing tests are easy to identify and fix.
Often it’s just one step in a long chain that’s broken. Since Usetrace lets you compose longer E2E tests from smaller test snippets, we need to fix the problem just once, after which all tests start using the new version.
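The snippet-reuse model can be pictured roughly like this (the step names and the `run` helper are hypothetical, purely to illustrate the idea): each step is defined once, longer traces are ordered compositions of shared steps, and fixing a step repairs every trace that includes it.

```python
# Sketch: long E2E "traces" composed from shared, single-definition steps.

def login(actions: list) -> None:
    actions.append("login")          # fix this once...

def create_project(actions: list) -> None:
    actions.append("create_project")

def deploy(actions: list) -> None:
    actions.append("deploy")

# ...and every trace that reuses it picks up the fix automatically.
TRACES = {
    "smoke":   [login, create_project],
    "release": [login, create_project, deploy],
}

def run(trace_name: str) -> list:
    """Execute a trace by running its steps in order."""
    actions = []
    for step in TRACES[trace_name]:
        step(actions)
    return actions

print(run("release"))  # ['login', 'create_project', 'deploy']
```

The payoff is the one described above: when a product change breaks the login step, there is a single place to update, not one copy per test.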
This maintainability is critical for us. Building good E2E test coverage is a relatively big upfront investment, and if it takes too much effort to keep everything up and running, the ROI falls too low. In the past, the cost of maintenance has even led us to abandon some E2E testing frameworks and fall back to manual testing, leaving the tests we had created to deteriorate.
Now, as we develop a new feature, we can always create the necessary new E2E tests for it (in addition to any unit tests, of course). Then, we can run our whole library of existing Usetrace tests against the new version, giving us very effective, automated regression testing.
We’re looking forward to expanding our E2E test coverage towards 100% (even if that’s an unreachable goal) with Usetrace. We’re definitely enjoying that extra layer of security and responsiveness to incidents that it brings.