Every developer has experienced the struggle of building a complex product while maintaining software quality. Perhaps you’re working on a new feature when something unrelated breaks and goes unnoticed. Open Social is no different: it’s an elaborate social platform with complex features. We feel your pain.
What’s the problem? We want to deliver high-quality code without manually testing every single feature over and over again. So, we at Open Social put our heads together and implemented an automated testing system that runs on every code change and reports whether everything still works.
Join us for a quick dive into how we set up automated testing in Open Social to ensure the quality of our product - without the headaches and late-night debugging sessions.
Automated testing framework
Even the best manual testing processes don’t catch every defect in the software; bugs creep in and often reappear. The best way to increase the effectiveness and efficiency of software testing is to set up automated tests.
Manual testing is when a human carefully goes through application screens, trying various usage and input combinations. Automated testing, on the other hand, plays back predefined, pre-recorded actions, compares the results to the expected behavior, and reports whether the test passed or failed. This makes it invaluable on long-running projects.
Automated testing requires tools that execute your tests and generate a report. You therefore need a test automation framework that bundles all the tools you need.
There are many frameworks on the market, and you need to decide for yourself what works best. We chose Behat for Open Social.
Behat is an open-source Behavior-Driven Development (BDD) framework for PHP. It focuses on the behavioral aspect of testing: why and how you want to achieve your goal. It uses a language called Gherkin that lets you write human-readable stories that are understandable and clear to everyone.
In the screenshot above, you can clearly see what the test checks and which outcome is expected. The process is pretty simple: you define a feature, set a goal, and write the scenarios that should reach that goal. The next screenshot is an example of a test we wrote for Open Social: users should reach a specific page after logging in to the platform. As you can see, this test is readable by everyone.
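For readers who cannot see the screenshots, a minimal Gherkin scenario in this style could look like the sketch below. The feature name, role, and step wording are illustrative, not our exact test:

```gherkin
Feature: Login
  In order to use the platform
  As a user
  I need to land on a specific page after logging in

  Scenario: Reach the stream page after logging in
    Given I am logged in as a user with the "authenticated user" role
    When I go to the homepage
    Then I should see the text "Explore"
```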
How does Behat understand our tests?
You might wonder how Behat is able to understand the human language in these tests. This is because each step of the language is implemented in so-called contexts.
A context class is used by Behat to test your feature and ensure it behaves as expected. Each ‘context’ consists of step definitions: a sentence pattern describing an action that can be taken, plus the underlying code that uses its arguments. In a sense, it acts as the bridge between our language and the code executed by the computer. It can be reused whenever you need to test a feature with the same context.
For example, the image below shows a context we created for Open Social that tests whether a link with a given text is working. This ‘context’ can now be used in any test that needs to check whether links work, by using the following statement: When I click the xth "1" link with the text "Private message". Having different contexts ready helps you make your tests as flexible as you want.
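For those without the image, a step definition of this kind might be sketched as follows. This assumes the Mink extension for Behat; the class name and error handling are illustrative, and Open Social’s actual implementation may differ:

```php
<?php

use Behat\MinkExtension\Context\RawMinkContext;

class LinkContext extends RawMinkContext {

  /**
   * Clicks the xth link on the page that has the given text.
   *
   * @When I click the xth :position link with the text :text
   */
  public function iClickTheXthLinkWithTheText($position, $text) {
    // Find all links whose visible text matches, using Mink's
    // "named" selector.
    $links = $this->getSession()->getPage()->findAll('named', ['link', $text]);
    if (empty($links[$position])) {
      throw new \Exception(sprintf('Link "%s" at position %s not found.', $text, $position));
    }
    $links[$position]->click();
  }

}
```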
There are already many available contexts you can use in Behat, and if you have custom functional statements then you can add your own contexts as well.
Executing the tests
Once you have created the tests (and contexts) you need, you can execute them and view the results. We run our tests by calling Behat and passing our config and tags:
/var/www/vendor/bin/behat $PROJECT_FOLDER --config $PROJECT_FOLDER/config/behat.yml --tags $TAGS
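The behat.yml referenced above wires the contexts, test paths, and browser sessions together. A minimal sketch could look like the following; the paths, URLs, and context names are placeholders, not Open Social’s actual configuration:

```yaml
# Illustrative behat.yml sketch - paths, URLs and contexts are placeholders.
default:
  suites:
    default:
      paths:
        - '%paths.base%/tests/behat/features'
      contexts:
        - FeatureContext
  extensions:
    Behat\MinkExtension:
      base_url: 'http://localhost'
      browser_name: chrome
      javascript_session: selenium2
      goutte: ~
      selenium2:
        wd_host: 'http://localhost:4444/wd/hub'
```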
If we run the Behat test mentioned earlier for Open Social (users need to reach a specific page after logging in), we get the outcome shown in the image below. You can see that all the tests are green because they have passed!
If we deliberately let this test fail, the output would appear in red instead. When a step definition is written correctly, the test gives a clear error message whenever the result is not as expected.
Sometimes, when an error is hard to reproduce, you can debug the issue in another way - for example, by adding the step “And I break”, which pauses the scenario until a key is pressed.
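A step like this is easy to add yourself. A minimal sketch, assuming a custom debug context (the class name is hypothetical), could be:

```php
<?php

use Behat\Behat\Context\Context;

class DebugContext implements Context {

  /**
   * Pauses the scenario so you can inspect the browser state.
   *
   * @Then I break
   */
  public function iBreak() {
    fwrite(STDOUT, "\nPaused. Press Enter to continue...\n");
    // Block on STDIN until the user presses Enter.
    fgets(STDIN);
  }

}
```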
And now comes the cool part. Open Social has integrated Selenium, a web-browser automation tool, with Behat, so we can visually watch the tests run in a browser of our choice. Selenium provides a playback (formerly also recording) tool for authoring tests. We also use a VNC viewer to connect to this Selenium container.
So, why is this useful? Imagine we run a test and it fails. First, in the Behat output, we can see that it has paused at our breakpoint.
Then we can check our Selenium container to see exactly what the test sees - for example, that the expected text is missing from the page.
Gathering test results using external services
At this point in our post, and with the tools we’ve mentioned, you are able to set up and run Behat locally. But it can be frustrating to run these tests locally every time the code changes. And honestly, who has the time for that?
To avoid this, we use the tools Travis CI and Shippable (and Travis is free). Both are hosted, distributed continuous integration services used to build and test software projects in combination with GitHub or Bitbucket.
We decided to integrate Travis with GitHub so that a pull request can’t be merged until all its tests are green.
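A Travis setup for this kind of pipeline can be sketched in a few lines. The PHP version, services, and script below are illustrative assumptions, not Open Social’s actual .travis.yml:

```yaml
# Illustrative .travis.yml sketch - versions and paths are placeholders.
language: php
php:
  - '7.1'
services:
  - mysql
install:
  - composer install
script:
  - vendor/bin/behat --config config/behat.yml --tags "$TAGS"
```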
A downside to running the tests externally is that you cannot connect to them with your VNC viewer. We solved this by making the test run generate a screenshot each time a test fails and upload it to a server. This also gives developers visual feedback they can happily use to solve the problem.
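Capturing a screenshot on failure can be done with a Behat hook. A minimal sketch, assuming a Mink session backed by Selenium (the class name, file path, and omitted upload step are illustrative):

```php
<?php

use Behat\Behat\Hook\Scope\AfterStepScope;
use Behat\MinkExtension\Context\RawMinkContext;

class ScreenshotContext extends RawMinkContext {

  /**
   * Takes a screenshot whenever a step fails.
   *
   * @AfterStep
   */
  public function screenshotOnFailure(AfterStepScope $scope) {
    if ($scope->getTestResult()->isPassed()) {
      return;
    }
    // getScreenshot() returns PNG data for Selenium-backed sessions.
    $png = $this->getSession()->getScreenshot();
    $file = '/tmp/failure-' . date('YmdHis') . '.png';
    file_put_contents($file, $png);
    // Uploading $file to a server is left out; any file store works.
  }

}
```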
Last but not least: this article described how to use Behat in combination with Travis, but Travis is not just for Behat. You can extend Travis with multiple checks to assure that the quality of your code stays high. For example, Open Social added a coding-standards test and an accessibility test.
The time is worth it!
We took the time to create multiple automated tests for Open Social. Was it worth it? 100%.
All these tests ensure that we at Open Social deliver the best possible quality before we ship new releases. Although setting it up is a lot of work at the beginning, you can now detect even the smallest bugs that would otherwise have slipped through. That saves a lot of time that would otherwise go into hotfixes.
After all, anything that increases efficiency and saves time is worth it. And we’re left with a product with high-quality code. Win-win!
Open Social Github
If you are interested in the tests written for Open Social, check out our GitHub repository at https://github.com/goalgorilla/open_social/tree/8.x-3.x/tests/behat.
Thanks for reading! If you have any questions or comments, leave them below or mention @OpenSocialHQ in a tweet.