8 Reasons Why Enterprise Software Testing Is Best Done with Crowdsourced Testers


In a world flooded with software products and apps, quality can make or break your customers' user experience. In a fragmented and highly competitive market where customer attention spans are minimal, even a seemingly insignificant bug can send app adoption into a downward spiral and cost the company huge opportunities. Effective, foolproof software testing is the solid base that enables software products to be smart, agile and user-friendly.

Experts claim that about 65% of software product companies fail because they were not well prepared on quality assurance before launch.

But sadly, the QA function in big software companies today is about doing too many things with minimal budgets and teams. For many CIOs, QA isn't 'mission critical', and QA teams get second-class treatment in the eyes of decision makers. QA leaders are perpetually in firefighting mode, spending more effort delaying the inevitable (a software crash) than actually hunting down and fixing critical bugs.

As QA evolved from desktop-based to cloud, many assumed testing would get simpler. In reality, these apps had to be tested across platforms and devices, and given tight timelines and budgets, QA teams could not cope with this drastic expansion of the testing matrix.

Enter crowd testing

“If you are testing software that all kinds of strangers are going to use, then why not use a bunch of strangers to test it?”

Cloud adoption over the past decade also saw the rise of community-driven, multi-device, multi-platform testing platforms that brought the best testers from around the world together to collaborate on finding the best solutions for your software bugs. Collaborative testing became the 'in thing', with crowdsourced teams pulling off the 'impossible':

  1. Maintaining app quality
  2. Delivering on time
  3. Working around the clock on tight budgets
  4. Providing broad testing coverage
  5. Communicating transparently
  6. Maintaining control & compliance

Instead of hiring 5 in-house testers, you can engage 50 part-time testers through crowdsourcing who can produce 300% more work in the same time, with 70% less fatigue and 10 times greater coverage across devices and platforms.

As a CIO who wants to release bug-free apps and products of the highest quality under tight control and budgets, here are 8 reasons why enterprise software testing should be done by a crowdsourced team.

      1. Lack of product or app bias

A crowdsourced environment has people who approach apps with an open, fresh mindset. A typical in-house team includes people drafted from the product team who may be reluctant to report bugs because the product is 'their baby'.

Internal teams tend to hesitate to report bugs in the initial phase of a product launch. This can spoil the user experience for early adopters, and product growth can turn downward thanks to poor word of mouth.

Crowd testers also bring experience from testing multiple similar products. This makes it far more likely that the first bugs an early adopter would encounter have already been spotted by a crowd tester and fixed.

      2. The power of ramping up & breaking ‘entry barriers’

An in-house testing team simply doesn't scale. Given tight budgets and, in some cases, a lack of autonomy, in-house testing descends into the rigmarole of blame games and actions that merely delay the inevitable product failure rather than solving actual problems.

A crowdsourced team scales up faster because a ready pool of knowledgeable testers, with a well-practised process for approaching software testing, already exists. Many SMBs struggle to access multiple devices and platforms given their small scale. Crowdsourcing solves this by providing a pool of testers collectively armed with nearly every hardware-software combination, making testing watertight. So instead of testing across 5 devices, you can test across 50 devices at roughly the same cost.

      3. Control over delivery

It must be reiterated that crowd testing still operates within the boundaries drawn by the client. Though testers are free to play with test cases as much as they want, the final deliverable is a yes or a no: you either accept a bug and pay for it, or reject it and pay nothing. The client always has the final say.

      4. OPS – Organized, Professionalized & Standardized

Crowd testers are a community, and they behave like one. They compete against each other for the best work and are professionals in their own right. Since it's a self-regulating marketplace, the community ensures that testers behave professionally and maintain the highest standards of quality and professional integrity.

As with every other software process, crowd testing processes become standardized over time. What works for one client can be replicated with minimal changes for the next.

      5. Real-world conditions & real users – clean lab setups

Another challenge most companies face is the lack of a 'clean lab': an ideal environment that mirrors what the customer will see and experience. Since in-house teams find it tough to reproduce such a setup, they end up chasing customers after the bug is logged. This does not sit well with serious enterprise customers, who want bug-free software from the first launch.

Since the testers are customers themselves (in the case of consumer apps), they are in a better position to test and deliver results.

      6. Complement, not compete with internal teams & processes

A crowdsourced team should be seen as a 'testing team on call', not as a threat to the in-house team's very existence. After all, a crowdsourced team specializes only in testing and cannot replace the wisdom of in-house teams with a strong product background.

      7. Quality & Quantity of improvements

Our experience shows that crowd testing platforms significantly improve both the quality and the quantity of improvements reported. A direct reason could be that a group of people with different approaches can dig up and recommend more improvements than a small in-house team.

      8. Tapping a wide knowledge bank

Ultimately, communities are known for their wide reach and ever-growing appetite for knowledge. Every enterprise should tap this knowledge to get the best out of its testing effort, and to ensure its own testing teams become smart, agile and well prepared for future issues.

 

Happy testing!

Advice for Software Testers from Bug Validation – By Krishnaveni

 

I recently got an opportunity to validate two contests on 99tests.com.

This experience was quite refreshing, since it was like a refresher course for me to re-learn and revisit things I had learnt when I started out as a newbie in software testing.

The bug report is a crucial aspect of software testing. For bugs to be of value, effective bug reporting is a must.

Here are the lessons I learnt yet again.

 

Scope of Testing

First and foremost, consider the stated scope of testing. Any bug that does not fall within the given scope will be deemed invalid.

 

Bug Logging Guidelines

If the client or contest owner has stated specific guidelines for logging bugs, make sure to go through them before logging anything. A tester who fails to read them is likely to report bugs that lack key information or don't follow the format the client expects. Be attentive to what is expected of you.

 

Avoiding Duplicates

Before logging a bug, check whether a similar bug has already been raised by someone else. Searching first helps prevent duplicates. To search effectively, try multiple synonyms and rephrasings of what the bug might have been called.

By not looking out for duplicates, the tester loses the chance to log another valid bug. The obvious drawback of duplicates is redundancy; the impact on the tester is higher still when a contest caps the number of bugs per tester.

Bugs rejected as duplicates bring down a tester's credibility.
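
For illustration, here is a minimal sketch of such a synonym-aware search over existing bug titles. This is my own example, not a 99tests feature; the synonym table and titles are invented:

```python
# Hypothetical sketch: search existing bug titles for likely duplicates
# before filing. The synonym table and bug titles are invented examples.
existing_titles = [
    "Sign-in button unresponsive on checkout page",
    "Cart total not updated after removing item",
]

SYNONYMS = {
    "login": ["login", "log in", "sign in", "sign-in"],
    "crash": ["crash", "freeze", "hang", "unresponsive"],
}

def possible_duplicates(terms, titles):
    """Return titles containing any search term or a known synonym of it."""
    expanded = set()
    for term in terms:
        expanded.update(SYNONYMS.get(term, [term]))
    return [t for t in titles if any(word in t.lower() for word in expanded)]

print(possible_duplicates(["login"], existing_titles))
# -> ['Sign-in button unresponsive on checkout page']
```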

 

Assigning Severity

Testers should assign the right severity to a bug. They need a clear understanding of what distinguishes a show-stopper from a high-severity issue.

I observed that many testers do not know the difference. Bugs like a field not accepting input or a drop-down not populating values get marked as show-stoppers, and some UI issues are labelled the same way.

Show-stopper bugs are those that prevent the user from accessing the application any further. In the examples above, the user can still access other areas of the application.
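
As a rule of thumb, the distinction can be captured in a tiny decision helper. This is a toy sketch of the rule just described, not any platform's actual triage logic; the labels are illustrative:

```python
# Toy sketch of the rule above: a bug is a show-stopper only if it blocks
# the user from accessing the application any further.
def classify(blocks_further_access: bool, ui_only: bool) -> str:
    if blocks_further_access:
        return "show-stopper"  # e.g. the app crashes on launch
    if ui_only:
        return "low"           # cosmetic issues
    return "high"              # a feature is broken, but the app is usable

# A drop-down that fails to populate is serious, but not a show-stopper:
print(classify(blocks_further_access=False, ui_only=False))  # -> high
```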

 

Bug Type

Be clear about what kind of bug it is. Is it a functional, GUI or technical bug? Incorrectly marking the bug type is a sure-shot way to get the bug marked invalid.

 

Managing Timeline

To squeeze more bugs into the specified testing timeframe, many testers log bugs in huge numbers without paying attention to how they log them. In the hurry, they enter a one-liner bug and submit it, failing to realize that working this way won't earn them any credibility. Testers should never compromise on quality; with practice, one learns to manage the time.

A bug report should be complete: a one-line bug description, steps to reproduce, expected results, actual results, a screenshot or video, and the environment details on which the testing was carried out. A time limit on testing is no excuse to log incomplete bugs; that shortcut eventually leads to bugs being marked invalid or rejected for lack of detail.

 

Quality vs. quantity

The quality of the bugs matters more than their quantity. It serves no purpose to keep filing bugs blindly. Though it may appear that the tester is logging many bugs, their credibility will come down, since the bugs will be lacking in content and quality.

 

Choice of Browsers

Make sure not to test on outdated or incompatible browsers. It was surprising to see testers test the application on IE 6.

 

Environment Details

Environment details are one of the key aspects of a bug report that help in reproducing the issue.

Many testers fail to include them while reporting a bug. This makes it very tedious for the client or the bug validator to figure out in which environment the issue occurred.

 

Bug Description

A bug description is a sneak peek into what the bug is all about. It needs to be short, crisp and confined to a single line. An inappropriate description that doesn't convey the right message may not catch the attention of the client or validator.

 

Communication Skills

Effective communication skills, with good proficiency in English, are a must.

 

Spelling Mistakes

Before submitting a bug, proofread it and check for spelling errors. It is quite pathetic that testers, who should be finding flaws, are unable to see the flaws in their own work.

Example: 'Sing in page not working'

What the tester meant to write was 'Sign in page is not working'. There is a sea of difference between 'Sing' and 'Sign', so care should be taken to avoid spelling errors.

 

Usage of short forms of words

Bug reports are formal communications that tell the client about issues in their product or application. As with e-mail etiquette, short forms of words should not be used in formal communication.

 

Example: one tester had written, 'the reset password page is not aval', using 'aval' as a short form of 'available'. This is not good practice.

 

Expecting clients or validators to read minds

Don't expect the client or validators to piece together the entire issue from a bug logged with minimal information.

 

Examples:

(a) 'Section car accepts invalid inputs.' What is missing here: where is the car section? And on which page does it occur, if the 'car' section appears in multiple places?

(b) 'Field is not accepting input in that form.' Which form? How is the client or stakeholder supposed to figure out which form in the entire application the tester is talking about?

 

All about screenshots

  • Do not save screenshots in .bmp format, since they occupy a lot of disk space. This might irk the client, as the screenshots could take a very long time to download (a small conversion sketch follows this list).

 

  • When multiple screenshots are included in a Word document, it is good practice to add a one-line description above each image, so the client understands what the image shows.

 

  • When the bug report asks the reader to refer to a screenshot, ensure the screenshot is actually attached to the bug. Testers often forget to include the attachment.

 

  • If a particular feature works fine in one browser but not in another, it is good practice to attach screenshots of both: the browser where it works and the browser where it fails.
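
Converting a capture before attaching it takes one line. Here is a minimal sketch using the Pillow imaging library (my choice for illustration; any image editor does the job), re-saving a .bmp screenshot as a much smaller .png:

```python
# Minimal sketch: convert a .bmp screenshot to .png before attaching it.
# Assumes Pillow is installed (pip install Pillow); the file name is invented.
from PIL import Image

bmp_path = "login_error.bmp"  # hypothetical screenshot
png_path = bmp_path.rsplit(".", 1)[0] + ".png"

# PNG is losslessly compressed, so quality is preserved at a fraction
# of the BMP file size.
Image.open(bmp_path).save(png_path)
print(f"Saved {png_path}")
```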

 

Do not live with the bug

At times testers tend to ignore a bug if it is a very trivial GUI issue. They are so focused on reporting the major stuff that such issues get missed.

 

It is good practice not to skip such bugs just because of their low severity.

 

Example:

In the application whose bugs I was validating, the logout button did not have a tooltip, and the icon looked just like a refresh button. It was a good thought on the tester's part to log a low-severity GUI defect highlighting this.

 

Bug Report Contents

A bug report should essentially contain the proper steps to reproduce, expected result, actual result, environment details and screenshots.

 

Steps to reproduce should start with the URL that was tested. If the application navigates to a different URL owing to the issue, it is worth mentioning that URL in the bug report as well.
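
To make this concrete, here is a hypothetical example of a complete bug report laid out with these fields. The application, URL and environment details are invented for illustration:

```
Description : Sign-in page shows a blank screen after submitting valid credentials
URL         : https://example.com/signin
Steps       : 1. Open https://example.com/signin
              2. Enter a valid email address and password
              3. Click "Sign in"
Expected    : The user is redirected to the account dashboard
Actual      : A blank white page is displayed; no error message appears
Severity    : High (the rest of the site is still accessible)
Environment : Windows 10, Chrome 96, 1920x1080
Attachment  : signin_blank.png
```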

 

Avoiding invalid bugs

Before reporting a bug, testers should reproduce it themselves to confirm it.

Also, when claiming that error messages are not fired for an invalid input, testers should actually submit the form or page to confirm that the issue exists.

 

A key trait for a tester

Learnability is an important trait for anyone who wishes to improve.

If validators have commented on bugs with tips or suggestions for improvement, testers should take them in a positive stride and use them constructively to improve their skills.

 

Further reading:

  • http://testertested.blogspot.in/2010/02/coaching-testers-on-bug-reports.html

 

  • http://enjoytesting.blogspot.in/2011/11/release-of-my-ebook-what-if-50-tips-to.html

 

  • Bug Reporting etiquette from bugzilla: https://bugzilla.mozilla.org/page.cgi?id=etiquette.html


The Open Bug Experience by Krishnaveni

Guest Post: By Krishnaveni, one of the Top Testers at 99tests


 

By nature of practice, a tester can spot a flaw in a product or an application almost effortlessly. And when this happens, it is very tempting to voice it. But the question that nags is: will the issue the tester voiced reach the right contact so it can be resolved?

A tester knows the impact, risk and consequence of a bug when it is present in a product or an application that is massively used by the public.

How about a platform where we can voice the bugs we come across in our daily activities, be sure they reach the right contacts, and be rewarded for it too? Too good to resist, isn't it?

The recently launched 'Open Bug' feature from 99tests facilitates exactly this.

 

About Open Bug

Open Bug is an innovative initiative by 99tests wherein testers registered on 99tests can log bugs they come across anywhere. Bugs that are creative and worthy can be voted up, and the tester is rewarded for them.

My Experience with Open Bug

The approach of Open Bug kindled my curiosity to give it a try.

Unlike the conventional process, where we are accustomed to testing a suggested app with a given mission, the interesting factor here is that a tester can choose any app and log its bugs. 99tests has always been a perfect platform for me to learn new things, and I was sure the Open Bug feature would have even more to teach me.

What I learnt

Lesson #1

The first thing I learnt was to leverage the skills I already had in testing a particular genre of website (e-commerce, travel, etc.) and to look for apps belonging to that genre.

Lesson #2

Secondly, I learnt to adapt my bug-filing style to the context of the Open Bug feature. In the conventional setting, where the task is to test a single app and log all its bugs, it makes sense to file each bug individually. But the objective of Open Bug is to test any app and expose its issues.

To serve this objective, I grouped bugs by quality characteristic (usability, UI, etc.) and logged each group as a single bug. This would not apply in every case, though; some bugs, such as functional ones, had to be logged separately.

Lesson #3

I was encouraged to try out genres of websites that I hadn't tried before.

Lesson #4

I got a couple of lessons from my own mistakes. To err is human, they say, and I was no exception. Honestly, it was ironic to realize that though I ventured to find flaws in other websites and products, I failed to catch the flaws in my own work.

I have to admit it was purely an oversight, and I am not ashamed to admit it. I learnt what mistakes I made and shall take care to avoid them going forward. And how did I realize them? Not by figuring them out myself; other testers viewed my bugs and shared their comments.

I guess at times one simply fails to notice one's own mistakes. Here are the ones I made.

Mistake #1: I had incorrectly keyed in the name of the website I tested.

Here, the bugs logged for the website were worthy, but how could anyone evaluate them when an incorrect website name was specified?

Mistake #2: For one bug, I had inadvertently included a link to a report generated for another website.

Though the bug was meant for one website and was valid, the incorrect report link I included rendered it invalid.

Lesson #5

The Open Bug criterion was to log a minimum of 20 bugs, so I went on a bug-hunting spree and started logging the bugs I unearthed.

But after I reached a good number, I paused and pondered: did working this way really serve the purpose Open Bug was meant for? This is not a contest where I can keep filing all the bugs of a single app in one go.

A question then arose in my mind: what catches a business owner's attention and convinces them to have their app tested? A few bugs scattered across parts of their app, or a handful of major-severity issues?

This thought inspired me to focus on high-priority issues that affect the business rather than just technical bugs. In other words, it's important to log the most critical bugs first.

It is motivating that anything new you attempt has much to teach you. Testing is an ongoing process of learning and improvising. Come give the Open Bug feature a try and experience it for yourself; it's worth every bit of the effort and time spent on it.

Research Paper – Effect of Crowd Size in Software Testing

Source: More Testers – The Effect of Crowd Size and Time Restriction in Software Testing

Crowd Testing Team Size

At 99tests we have seen more than 23,000 bugs logged by over 5,000 testers in the past two years. Here are some of the key findings we have uncovered over that period, especially on the effect a crowd testing team has on finding high-quality bugs in a short time.

When does the performance of a crowd testing team overtake that of a normal QA team?

We have seen that a crowd testing team has to comprise a minimum of 20 testers of diverse backgrounds before the positive effects of the wisdom of crowds appear. In The Wisdom of Crowds, James Surowiecki states that for a group of diverse individuals to reach an accurate prediction or decision, four conditions must be met: diversity of the crowd, independence of thinking, decentralization, and aggregation.

Coming back to software testing: one inherent issue with in-house QA teams is familiarity with the product. Having watched the product evolve over time, they become resistant to looking at it with a fresh set of eyes, so traditional QA teams are very good at executing test cases and telling developers what works. This is where a crowd testing team outperforms: it finds the bugs a new user would find. Beyond that fresh perspective, four traits make a crowd testing team effective:

  1. Team size – once the number of testers is two to three times the size of a normal QA team, we start seeing the kinds of bugs only a crowd can produce.
  2. Diversity – the team of 20-30 testers needs to mix highly skilled exploratory testers, average testers and novice testers. The top testers set a high bar for bug quality, and the team follows their lead in logging high-quality bugs.
  3. Time pressure – a fixed window in which to find the maximum number of bugs is an important condition for getting the best results.
  4. Incentives – rewards need to be matched to the testers who find the most, and the highest-quality, bugs.

Pre-print PDF version (open access)

Is a Risk-Based Approach to Software Testing Better?

Software quality has a unique requirement: testers need to report to development teams on what works and what does not. Why does this core requirement push testers toward contradictory approaches to software testing? The first requirement of most development teams is to find out which features work well. The second is to then find the bugs in those features.

The first approach is the one taken in most team or company environments, as it has higher priority than actually measuring the quality of the software. There are metrics like Defect Removal Efficiency (DRE), which is essentially the fraction of all bugs (those caught in-house plus those reported by customers) that were caught in-house. But how many teams actually use DRE for releases? Most Agile teams are more concerned about their release that week, so the entire focus becomes validating what works, and perhaps fixing the few show-stopper bugs found by a small testing effort.
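
As a quick worked example of DRE under this standard definition (a sketch; the numbers are invented):

```python
# Defect Removal Efficiency: the share of all defects caught before release.
def dre(found_in_house: int, found_by_customers: int) -> float:
    """DRE = in-house / (in-house + customer-reported), as a percentage."""
    return 100.0 * found_in_house / (found_in_house + found_by_customers)

# Hypothetical release: 180 bugs caught in-house, 20 reported by customers.
print(f"DRE = {dre(180, 20):.1f}%")  # -> DRE = 90.0%
```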

Now why is this not very helpful for software testers?

The intelligence, or the thinking, is supposed to happen while building the test plan. The plan is then converted into test cases that software testers merely execute. Only about 5% of the test effort goes to exploratory testing, where testers can use their intelligence and carry out risk-based testing. One major issue is that developers and testers share no language to communicate what is important for that release cycle. So the two needs, validating the application and finding bugs through a risk-based approach, pull in opposite directions.

For example, take the case of testing a script that reads an input data file and whose output must be validated against a format. Taking the risk-based approach, we first test a well-defined input file and check the output format; this would match, and the test would pass. How, then, does the risk-based approach differ from the test-case-driven one? One can systematically introduce randomness into the input file to find out where and when the script breaks and fails to produce the expected output format.

Testers can then quantify how likely each change to the input file is, and what it does to the generated output file. Most of the time, testers simply vary the input file until they produce a bug, without asking how probable that event is, whereas the developer knows that a completely junk input file may never occur. So the developer's knowledge of what matters, and of how likely each input variation is, never reaches the testers. Testers end up just executing test cases and finding a few bugs, and may totally miss the bugs that could have been uncovered by systematically breaking the app and asking the developer how likely each random input is to occur in production.
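
To make the idea concrete, here is a minimal sketch of systematically mutating a known-good input to find where a script breaks. This is my own illustration; the file format, the stand-in script and the mutations are all invented:

```python
# Hypothetical sketch of risk-based input mutation: start from a known-good
# input, apply one mutation at a time, and record which mutations break the
# script's output format.
def run_script(lines):
    """Stand-in for the script under test: expects 'name,YYYY-MM-DD' rows."""
    out = []
    for line in lines:
        name, date = line.split(",")        # breaks on a missing/extra comma
        year, month, day = date.split("-")  # breaks on a malformed date
        out.append(f"{name}: {year}/{month}/{day}")
    return out

MUTATIONS = [
    ("uppercase text", lambda s: s.upper()),  # benign change
    ("wrong delimiter", lambda s: s.replace(",", ";")),
    ("wrong date separator", lambda s: s.replace("-", "/")),
    ("trailing junk field", lambda s: s + ",extra"),
    ("empty row", lambda s: ""),
]

good_input = ["alice,2024-01-31", "bob,2023-12-05"]

for label, mutate in MUTATIONS:
    mutated = [mutate(line) for line in good_input]
    try:
        run_script(mutated)
        print(f"{label}: output format still produced")
    except ValueError as exc:
        print(f"{label}: script breaks ({exc})")
```

Each mutation can then be weighed by how likely it is to occur in production, which is exactly the information the developer holds.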