We ain’t afraid of no bugs! While software bugs aren’t anything to joke about, we do run into some that make us chuckle. Without trying to make our awesome development team feel bad (we all make mistakes!), here’s an example of a bug we’ve encountered that was just a little funny to us.
Most people prefer not to walk (or run) backwards in time when attempting to select a date.
But seriously, we care a lot about the quality of our software here at Rustici. As a member of the QA team, it’s my responsibility to track down issues in our software before they become actual issues for our customers. Believe it or not, we don’t always spend our entire day (or week) clicking on each particular part of an application to find these bugs. In fact, there are a couple of different methods we use for finding and preventing bugs from getting into our products and out to our customers.
Keeping bugs out of our products
Currently, the QA team works on the SCORM Cloud, Content Controller, Rustici Engine, and Rustici Dispatch products. If you’re at all familiar with any of them, you know there’s a lot of functionality packed into each one, which means there’s plenty of opportunity for things to go wrong! There’s certainly a lot to test, but we’ve created a couple of tools to help us out and make the bug-finding process easier and more efficient.
One of the best ways to catch a bug is to do so before the bug can even exist. In some circles this is called a “shift left” approach to QA. The way we handle this is twofold. First, the QA team is involved in the feature creation process. This happens before any code is ever written and is where a feature is fleshed out. That might look like a feature ticket being made, or even go as far as a tech spec document being created, which is then broken down into several tickets. Either way, the QA team is part of that process and can review the ticket or tech spec and provide feedback. This lets us start poking and prodding at a feature, thinking about how we’ll push it to its limits in various ways before we can actually touch it. It also allows the development team to take those ideas into consideration as they begin writing the code that will implement the feature. In this way, we get ahead of the bug and catch it before it ever becomes a thing.
As part of this process, the QA team writes test cases that we know will need to be run against the new feature. We track all of our test cases in a popular test management tool called TestRail. Test cases are added to a release suite so they can be run once the feature is out of development and again as part of the release process. This lets us confirm that the feature works as expected once its development is complete, and that it still works as intended after all the development work for a release has wrapped up.
Manual and automated testing 1, 2, 3
Once a feature, or part of a feature, is completed by development, it moves over to the testing phase. This happens as soon as the developer finishes their work, so the feature is still relatively fresh in their mind. The QA team then takes that work and runs tests against it to ensure the functionality works as described in the ticket or tech spec. We also run tests related to the feature to help ensure that existing functionality isn’t broken by the new addition. This all happens before release testing, with the idea that the sooner we catch a bug, the quicker it’ll be fixed, since the developer is still relatively familiar with the code that created it.
After a feature has passed manual testing, we work to automate as much of the testing around the feature as possible. If the feature is an addition to our UI, we’ll add automated tests to our WebdriverIO test framework. If it’s an addition to one of our public APIs, we’ll add tests to our API test suite, which uses Mocha.js. Both our UI and API test suites run whenever a new build is created. This helps us stay on top of any regressions that might result from a bug fix or a new feature.
This all leads to the best, and probably the most interesting, part of a QA’s job: exploratory testing. Knowing that all the major use cases are covered allows us to spend more time in the product attempting to do interesting or unusual things. This might mean attempting unusual workflows within the application to see if they cause any confusion, or trying out unusual settings combinations that could potentially lead to unruly results. Out-of-the-box thinking is encouraged. Much like the bug at the beginning of this post, we’re trying to see what actions or inputs might cause the system to respond poorly.
We’re proud bug hunters, and if working alongside us as a developer sounds like something you’d enjoy, we’re always looking for awesome people.
And since it’s April Fools’ Day, we couldn’t help but end with one of our favorite jokes:
A software tester walks into a bar. He orders a beer. He orders 99999999999999999 beers. He orders -1 beers. He orders hkljasfjkhdsfuuhrgh beers. He orders <script>alert("test")</script> beers. The first real customer walks in and asks where the bathroom is. The bar bursts into flames.