Written in collaboration with Gareth Bowen (Chief Architect) and Bede Ngaruko (Quality Assurance Engineer).
Software defects, also known as “bugs”, are unexpected behaviors in software caused by inadvertent errors in the code. Their severity can range from minor visual glitches with easy workarounds to critical failures that severely impact a person’s life. Although it is widely believed that no software is truly free from defects, the earlier a defect is found, the less time and fewer headaches it will cost our team, deployment partners, and users. As a bonus, we can then spend more time on new and improved features for community health applications.
In this two-part blog series, we will focus on how the Medic team weeds out defects before releasing new versions of our framework and apps. In this post, we’ll start with how new code and features are tested as part of our software development workflow. In part two of this series, we’ll cover the testing process used for preparing releases.
Automated Testing
Our software development workflow is designed to prevent the introduction of defects into our code, and then to find any defects before we release updates to our framework and apps. Detecting bugs early is made significantly easier by automated testing, which includes static analysis, unit tests, integration tests, and end-to-end tests. These tests run as part of continuous integration with Travis CI every time the code changes.
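To give a feel for how those stages chain together, here is a simplified sketch of a Travis CI configuration. It is illustrative only; the npm script names are assumptions rather than our actual setup.

```yaml
# .travis.yml — an illustrative sketch; the script names are hypothetical
language: node_js
node_js:
  - "8"
script:
  - npm run lint          # static analysis with ESLint
  - npm run unit          # unit tests with Karma (browser) and Mocha (server)
  - npm run integration   # integration tests against a real database
  - npm run e2e           # end-to-end tests with Protractor
```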

Static Analysis
The first step in each build is static analysis, which scans the code much like a spell checker runs through an essay, highlighting what look like spelling or grammar mistakes. At Medic, we use the ESLint tool on all our production code, configured with a specific set of rules to follow. The main benefit of static analysis is that it keeps the code consistent and quickly flags problematic code without our writing a single line of test code.
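As a rough illustration, an ESLint configuration file looks something like the sketch below; the specific rules shown are examples, not Medic’s actual rule set.

```js
// .eslintrc.js — a minimal illustrative sketch, not Medic’s actual configuration
module.exports = {
  env: {
    browser: true,
    node: true,
    mocha: true
  },
  rules: {
    'no-unused-vars': 'error',   // flag variables that are declared but never used
    'eqeqeq': 'error',           // require === and !== instead of == and !=
    'semi': ['error', 'always'], // require a semicolon at the end of every statement
    'no-console': 'warn'         // discourage stray console.log calls
  }
};
```

With a shared configuration like this, every developer and every CI build applies exactly the same checks.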

Unit Tests
As part of our software development process, we aim for one or more unit tests to verify that new code or a fix works as expected. These are great for testing that the logic in a small “unit” of code works, but they can’t check that the units work together. We have around 2,000 such unit tests, run by Karma for code that runs in a web browser and by Mocha for code that runs on the server.
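For example, a server-side unit test written with Mocha and the chai assertion library might look like this; `formatPhoneNumber` and its module path are hypothetical, used purely for illustration.

```js
// An illustrative Mocha unit test; formatPhoneNumber is a hypothetical function.
const { expect } = require('chai');
const { formatPhoneNumber } = require('../src/phone-utils'); // hypothetical module

describe('formatPhoneNumber', () => {
  it('strips spaces and dashes from the input', () => {
    expect(formatPhoneNumber('072 123-4567')).to.equal('0721234567');
  });

  it('returns an empty string when given no input', () => {
    expect(formatPhoneNumber(undefined)).to.equal('');
  });
});
```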

Integration Tests
Going beyond individual units, we have integration tests, which check that units work well together. For us, the focus is on the interaction points between components and various APIs, including those of databases. For instance, these tests can ensure that the code works with an actual CouchDB database, which is useful for confirming that our app and server APIs respond as expected to specific requests.
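Below is a sketch of what such a test can look like, assuming a locally running API backed by CouchDB; the URL and the /api/info endpoint are assumptions made for illustration, not our actual test code.

```js
// An illustrative integration test; the URL and endpoint are assumptions.
const { expect } = require('chai');
const request = require('request-promise-native');

describe('api/info endpoint', () => {
  it('responds with a version when the database is available', async () => {
    const response = await request.get({
      url: 'http://localhost:5988/api/info', // hypothetical local API address
      json: true
    });
    expect(response).to.have.property('version');
  });
});
```
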
End-to-end Tests
End-to-end testing exercises the entire application, covering critical functionality such as communication with other systems, interfaces, databases, networks, and other applications. At Medic, our end-to-end tests run using Protractor, an automated testing framework for testing AngularJS applications in the browser. It simulates a user interacting with the website by filling out forms, clicking around the page, and navigating from page to page. End-to-end tests are particularly useful for showing we haven’t introduced regressions, meaning we haven’t broken any existing functionality.
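As a simplified sketch, a Protractor test drives the browser as shown below; the page, selectors, and contact name are hypothetical rather than taken from our actual suite.

```js
// An illustrative Protractor end-to-end test; selectors and URLs are hypothetical.
describe('contact creation', () => {
  it('shows the new contact in the list after the form is submitted', async () => {
    await browser.get('/#/contacts');                        // navigate to the contacts page
    await element(by.css('.add-contact')).click();           // open the new-contact form
    await element(by.css('input[name="name"]')).sendKeys('Amina');
    await element(by.css('button[type="submit"]')).click();  // submit the form
    expect(await element(by.css('.contact-list')).getText()).toContain('Amina');
  });
});
```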

Acceptance Testing
Whether it is a bug fix, an improvement, or a feature request, all software development work is based on requests that we track as GitHub issues. These issues are prioritized to meet the needs of our partners and users and grouped into releases. Within each release, each issue moves through four stages: To do, In progress, In AT (Acceptance Testing), and Done.

An issue is “In progress” while the product team is actively working on it; this includes collaborative design, development, and code review, all in an isolated branch. To make sure an issue is ready for inclusion in our framework, it must then pass our acceptance testing process. That is where our Quality Assurance Engineers work with teammates and users to make sure the fix or feature works as expected. This includes learning about the feature from new documentation, configuring the app as our technical partners would, and then using the feature as a health worker would, which is not necessarily how the developer expected! Any feedback or problems are reported back to developers, then fixed, reviewed, and tested again.
Once an issue has sign-off, its code can be brought into the main codebase. This is known as merging the feature branch into the master branch, and it is the final step before the issue can truly be marked as “Done”.

Learning As We Go
As our product team grows and learns, our software development process and testing methods are also evolving. We regularly revisit our process to see what is working and what can be improved. For instance, we wanted to get completed features out to users more rapidly and realized that unrelated issues could block their release. To fix this, we started merging the feature branch only after it passes acceptance testing, and we’ll reevaluate in the coming months to see how that’s working out.
How have you incorporated testing into your software development life cycle? What testing practices do you find helpful? We’d love to hear from you, and have you contribute to our tools and process!
Stay tuned for our next post on testing, where we’ll cover the Release Testing process, which helps make sure that new versions of our framework and apps work as expected.