Yes, strategic software testing may not scream "interesting" to everybody, but those who work with software know how important testing efforts are. When you build a product, you want to ensure the highest quality. And for that, you need to take testing seriously.
It’s not just about quality. It used to be acceptable for updates and releases to take ages (anywhere from 18 to 24 months between them). That’s completely unacceptable now.
Enterprise customers expect releases at least every two months. And these releases need to adhere to a wide variety of requirements across browsers, mobile applications, and operating systems.
The endless list of requirements is dizzying. So how can you regularly ship high-quality software for your enterprise clients that meets all of these requirements? The answer lies in automation.
Project Moss to the rescue
We run automated tests every time new features, improvements, or bug fixes are created for the Open Social distribution. We do the same for our custom projects. This ensures high quality and requires less time and effort than manual tests. Handy stuff.
The question is, how do we handle automated tests for both the Open Social distribution and our custom Enterprise projects? It’s a lot to juggle. That’s why we created Project Moss.
Moss from the TV series ‘The IT Crowd’
Project Moss, named after a famous character from the TV series ‘The IT Crowd’, is a Continuous Integration tool that serves as a bridge between the Open Social distribution and Enterprise projects.
It continuously checks the integration of the Open Social distribution code with the custom projects. We rely on it to take care of running automated tests whenever we change the distribution.
What are we trying to solve
It’s not always easy to know how changes to the Open Social distribution will affect the sites using it. In some cases, a small change might have a large impact on Enterprise projects.
For example, when we added the join button for groups, we placed it permanently in the hero area of the group. Only after the release did we realize that this doesn’t work well for projects that would rather replace or remove that button.
Or, an even more impactful example: we added visibility settings (public, community, or group members only) for groups, but built them only for the default Open Social group types. Any project with other group types would have to duplicate the code for the visibility settings, which in turn means more maintenance.
We try to accommodate any possible scenario when we create features or fix bugs but in reality, sometimes we fall short. We’re human, after all.
These scenarios are often discovered only when other Open Social teams update the Enterprise sites. That team then has to fix the bug and contribute the fix back to the distribution. Obviously, we want to prevent such scenarios, or any kind of bug, from getting into a release in the first place, because this flow costs a lot of time and money.
Previously, we only tested the distribution functionality itself. But there are so many ways the distribution can be used that testing all of these variations manually is nearly impossible.
- Problem: we couldn’t prevent distribution updates from causing regressions in our Enterprise projects.
- Goal: we want to reduce the amount of work for updates and be more confident about the changes we implement during Open Social releases.
- Solution: Project Moss! This project increases the quality and flexibility of the Open Social distribution, reduces update time for projects, and gives us more confidence.
So, how does this miracle project work?
Project Moss: ins and outs
This part of the article provides an overview of how Project Moss works to increase the quality and flexibility of Open Social updates.
Whenever changes are made to the Open Social distribution, we create a pull request on the GitHub repository. The pull request describes the changes we are making, what we are trying to solve, and how the result can be verified manually.
The moment the pull request is created, automated tests run on Travis CI against a clean installation of just the distribution.
Project Moss is also triggered (1) and creates a new testing branch for every Open Social Enterprise project, based on the branch of the just-opened pull request (3). When the pull request is updated, the existing testing branches are updated with the changes as well.
This testing branch is then pushed to the repository of the Enterprise projects (4). A webhook is used to inform Shippable CI about the new branch (5).
Now, in each of the projects, the automated tests for the distribution as well as any custom tests are run and the result is posted back to Project Moss (6).
We see an overview of the status per project for each pull request: whether the tests succeeded, failed, or are still building. We also have the option to retrigger a build or spin up an environment to test the changes manually.
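Rolling the per-project results up into one status for the pull request could look like the sketch below. The aggregation rule (any failure fails the PR, otherwise wait for builds to finish) is an illustrative assumption, not the actual Project Moss logic.

```python
def overall_status(project_statuses: dict[str, str]) -> str:
    """Collapse per-project CI statuses into one overall PR status.

    Per-project statuses are assumed to be one of 'success',
    'failure', or 'building', mirroring the dashboard states.
    """
    statuses = set(project_statuses.values())
    if "failure" in statuses:
        return "failure"   # any failing project fails the pull request
    if "building" in statuses:
        return "building"  # still waiting on at least one project
    return "success"       # every project passed


print(overall_status({"acme-community": "success", "globex-intranet": "building"}))
```

This is the same "worst status wins" convention most CI dashboards use, which is why a single red project is enough to block a release.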
In case of a failure, we fix the issue in the testing branch of the project. And after the pull request is merged, the fixes are automatically put in a fix-distro branch of each project.
Whenever we start updating an Enterprise project to the new version of Open Social we can just use the fix-distro branch as a base. It already contains the fixes for each pull request that was merged and landed in the release.
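Starting an update from the fix-distro branch might look like the following sketch. The `update/<version>` branch name and the `update_commands` helper are hypothetical; only the `fix-distro` branch name comes from the workflow described above.

```python
def update_commands(project: str, version: str) -> list[list[str]]:
    """Git commands to start an Enterprise project update from the
    fix-distro branch. Commands are built for illustration, not run."""
    update_branch = f"update/{version}"  # assumed naming scheme
    return [
        # Fetch the latest fix-distro branch from the project's remote.
        ["git", "-C", project, "fetch", "origin"],
        # Base the new update branch on fix-distro, so all fixes from
        # merged pull requests are already included.
        ["git", "-C", project, "checkout", "-b", update_branch, "origin/fix-distro"],
    ]


if __name__ == "__main__":
    for cmd in update_commands("acme-community", "8.x-5.0"):
        print(" ".join(cmd))
```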
It’s a serious time saver.
How does this benefit you?
Although we covered many benefits of Project Moss already, there’s one that clearly stands out for all users of Open Social: increased quality!
Thanks to Project Moss, we have already prevented numerous minor issues and a handful of serious ones from getting into an Open Social release, issues we would otherwise not have detected.
By testing changes to the distribution in custom projects, we have fewer bugs, increased flexibility, and a better understanding of how the distribution can be used.
This doesn’t only work for Open Social. The same workflow works with your own modules or features: you can, and should, test how they work in a vanilla Drupal installation, and it’s beneficial to test them in real-life projects as well.
Our experience shows automated testing is worth it.