Beginning with Quality Assurance

Software Quality Assurance is not the last step before the product is released; that is software testing. Software Quality Assurance (SQA) consists of the software engineering processes and methods used to ensure quality. SQA encompasses the entire software development process, which may include activities such as reviewing requirements documents, source code control, code reviews, change management, configuration management, release management and, of course, software testing. It is a parallel process that begins the moment a product is conceptualized.

This document aims to provide its readers with the basics of setting up a QA process. The setup described here is ideal for small projects that do not have frequent build cycles. A CMS is a good example, as it will not undergo functional changes once it is set up for the customer, unless a major module is added.

The heart of SQA is the testing of the actual product, but a lot of groundwork needs to be done before one can actually begin with this. Software testing is broken down into two methodologies: manual and automated testing.

Manual Testing

As the name suggests, this involves members of the software testing group manually executing the set of test cases and recording their results and observations. This is an essential part of the whole software testing cycle.

Test Automation

This is the next step after the first manual cycle. It is generally advised for systems that require a regression cycle[1] and for projects that involve performance testing. One thing to note is that we can only automate existing manual test cases. Identifying which manual test cases are candidates for automation is an art that one learns from experience. At first it may be tempting to automate everything, but one has to strike a balance between the effort required to automate a test case and the effort saved by not having to execute it manually.
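To make the trade-off concrete, here is a minimal sketch of how one automated, parameterized check can replace several repetitive manual executions. The `create_user` function is a hypothetical stand-in for the system under test, not a real API:

```python
# Minimal sketch: one parameterized test covers several manual executions.
# create_user is a hypothetical stand-in for the system under test.

def create_user(name, role):
    """Hypothetical system call: returns the created user record."""
    if not name:
        raise ValueError("name is required")
    if role not in ("admin", "editor", "viewer"):
        raise ValueError("unknown role: %s" % role)
    return {"name": name, "role": role}

def test_create_user_roles():
    # One loop covers what would otherwise be three separate manual test cases.
    for role in ("admin", "editor", "viewer"):
        user = create_user("alice", role)
        assert user["role"] == role

test_create_user_roles()
print("all role variations passed")
```

Once written, the same test runs unchanged in every regression cycle, which is exactly where the automation effort pays for itself.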
Generally, repetitive tasks with slight variations in execution are excellent candidates for automation. Simple examples would be creating multiple types of users in a system, or testing various templates of an email. Projects with multiple build cycles, and long-term projects in general, are also good candidates for automated testing. Performance testing is another type of testing that can be automated, and it can be achieved in different ways: simulating multiple requests to the server, for instance, or creating multiple users from many machines.

Now we will look at the manual process in depth.

The process starts the moment the client presents the requirements to the software development team. The QA team needs to read the market requirements and understand how the system works at present. Then they need to understand how the proposed product will either replace the existing system or make the existing system more productive and faster. It is very important for the QA team to understand the value that the product will add for the customer, as this clarity helps align the test plan with the customer's vision and goals for the new product. A few questions that need to be answered in this phase are:

- How does the existing system function?
- How will the proposed system function?
- What are the clear benefits: financial savings, time saved, ease of use?

Once the requirements are understood, the next step is to define precisely what will be delivered to the customer. This generally includes the UI mockups. Then we need to define the scope of testing. For example, on an e-commerce site where the end user is required to enter a credit card number, to what level will we test that input: blank input, a 16-digit numeric input, or actual validation of the credit card number?

The next step is to create a test plan.
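The credit card scoping question above can be made concrete: the three levels correspond to checks of increasing depth. The sketch below uses the Luhn checksum, the standard validity check for card numbers; the `level_*` function names are illustrative, not from any particular framework:

```python
def luhn_valid(number: str) -> bool:
    """Luhn checksum, the standard validity check for card numbers."""
    digits = [int(d) for d in number]
    # Double every second digit from the right; subtract 9 if the result > 9.
    for i in range(len(digits) - 2, -1, -2):
        digits[i] *= 2
        if digits[i] > 9:
            digits[i] -= 9
    return sum(digits) % 10 == 0

# Three possible scoping levels for the credit card field:
def level_1(number):  # reject blank input
    return number != ""

def level_2(number):  # require a 16-digit numeric input
    return number.isdigit() and len(number) == 16

def level_3(number):  # actual validation of the number
    return level_2(number) and luhn_valid(number)

assert not level_1("")
assert level_2("4111111111111111")       # a well-known test card number
assert level_3("4111111111111111")
assert not level_3("4111111111111112")   # fails the Luhn check
print("scope checks passed")
```

Deciding up front which of these levels is in scope is exactly the kind of agreement the test plan should record.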
This is generally a Word document that defines the path for the whole QA cycle of the project. It is a very high-level map. It should include details such as the time frame for a QA cycle after every deliverable from the development team; the team size, structure and hierarchy; and the platform on which the tests will be carried out: the database used, the OS, the browsers to be supported.

Then we break the project down into modules and define the order in which they will be released to the QA team. We then define which features are to be tested for each module and, more importantly, which features will not be tested. The latter is more likely when a new module is added to an already existing system; in such a case QA assumes certain existing features work in order for the new module to work.

The QA team also defines the types of testing that will be undertaken:

- Integration Testing[2]
- Regression Testing
- Security Testing[3]
- Performance Testing[4]
- UI Automation Testing[5]
- User Acceptance Testing[6]
- Exploratory Testing[7]
- Installation Testing[8]
- Documentation Testing[9]

We also define the criteria under which testing can be suspended, generally when a bug of high priority or severity is encountered (e.g., the login is not working, or installation fails), and the conditions under which it will resume. It is also important to define what the deliverables from the QA team will be. Deliverables generally include the test plan, the test case document and bug reports.

Test Cases

Creating test cases begins after the specs have been defined and frozen. The QA team then reviews the specs and starts to create scenarios for testing. It is important to realize that at this stage there is no product, yet the test cases can already be formulated. The advantage is that this makes the QA team think without the constraints of the actual product. The test cases are then added to and refined as the UI mockups of the product become available.
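Whether the test cases end up in Excel or in a web tool, each record needs the same few fields: a unique ID, a brief summary, a classification, and the result of each cycle. A minimal sketch of such a record (the field names are illustrative, not any particular tool's schema):

```python
from dataclasses import dataclass, field

# Minimal sketch of a test case record; field names are illustrative.
@dataclass
class TestCase:
    case_id: str          # e.g. module name + number: "login-001"
    summary: str          # brief description of what is being tested
    category: str         # Functional, Performance, Regression, Security, ...
    # cycle/version -> "Pass", "Fail" or "Not Run"
    results: dict = field(default_factory=dict)

tc = TestCase("login-001", "Login with a valid username and password", "Functional")
tc.results["v1.0-cycle1"] = "Pass"
print(tc.case_id, tc.results["v1.0-cycle1"])
```

Keeping the per-cycle results on the record itself gives the version history the document asks for.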
There are general guidelines to be followed while writing a test case:

- Every test case should have a unique ID. The format is up to the QA managers to decide; it could be numeric or alphanumeric, generally a combination of the module name and a number.
- It should have a brief summary describing the test case.
- The test case should be classified as Functional, Performance, Regression, Security, UAT, Database, etc.
- Define a group of test cases that must pass for the product to be released.
- When executing the test cases, the tester must note whether each test case passed, failed, or was not executed because it depended on some other feature that was either not tested or failed.
- If a test case failed, the tester should note the environment under which it failed, so that he can enter those details when filing the bug.

Once the test cases have been executed, the next phase of testing, called exploratory testing, is undertaken. This generally involves the tester trying some actions on the fly; in case of any failure, he must document the failure as a bug. The important thing is to document this phase as thoroughly as possible, since a significant number of bugs can be uncovered here. Many new scenarios also get tested, and these may then be added as test cases.

Test cases can be maintained in Excel format or hosted on a web server. What is required is the ability to add new test cases with ease and to record the results of each test cycle, with the version, in the document itself.

Bug Filing

This is perhaps the most important task that a tester has to do. The following details need to be included by the tester while filing a bug:

- Every bug should have a unique ID.
- A brief description of the bug.
- The tester must define the severity of the bug. Severity is the effect of the bug on the system; a bug that causes the system to crash is a high-severity bug.
- The tester must define the priority of the bug.
  Priority is defined by the effect the bug has on the customer and on the functionality of the product. If the user cannot access the login page, that is a high-priority bug.
- The tester must document precisely the steps needed to recreate the bug.
- The expected result.
- The actual result.
- The component that is failing, e.g. user management, creating an article.
- The system environment under which the bug occurs, e.g. the OS, browser, user details, rights, etc.
- The frequency of the bug: whether it occurs every time, rarely, or randomly.
- The exact error message; a screenshot is advisable.

Bugs should be filed in a bug tracking system such as Bugzilla. It is important to use a standard utility, as these provide features like reporting, notifications, etc. Filed bugs should be available to the whole team so that inputs can be added to a bug and its status updated as progress is made on it. A bug is closed when the dev team provides a fix in the code or an acceptable workaround; this solution is verified by a member of the QA team, and the bug is then marked as closed. Sometimes a bug may be classified as a known issue; in such a case the customer is made aware of it.

What makes a good QA?

I have been in this field for a year now, and these are my observations and views. A good QA engineer should have a calm nature and be detail-oriented and patient. He should have the ability to log all his actions and put them into clear words. He should be curious and try out new scenarios; generally, people with a knack for breaking systems. He should not assume the obvious but test it.

QA today has attained great importance. Companies invest huge amounts of resources, both human and financial, to build these systems. And every day, as more and more data gets added to them, the value of the data itself increases, and any corruption or loss of such a repository is not acceptable.
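The bug-filing fields listed above map naturally onto a simple record. A minimal sketch follows; the field names are illustrative and not a Bugzilla schema:

```python
from dataclasses import dataclass, field
from typing import List

# Minimal sketch of a bug report record mirroring the fields described
# in this document; names are illustrative, not a Bugzilla schema.
@dataclass
class BugReport:
    bug_id: str
    description: str
    severity: str         # effect on the system, e.g. "high" for a crash
    priority: str         # effect on the customer, e.g. "high" if login fails
    steps: List[str] = field(default_factory=list)  # steps to recreate the bug
    expected: str = ""
    actual: str = ""
    component: str = ""   # e.g. "user management"
    environment: str = "" # OS, browser, user details, rights
    frequency: str = ""   # "always", "rarely", "random"
    status: str = "open"  # open -> fixed -> verified -> closed

bug = BugReport(
    bug_id="BUG-101",
    description="Login page returns a blank screen",
    severity="high",
    priority="high",
    steps=["Open the login page", "Enter valid credentials", "Click Submit"],
    expected="User is taken to the home page",
    actual="Blank page",
    component="login",
    environment="Windows, IE 6",
    frequency="always",
)
bug.status = "closed"  # set only after the dev fix is verified by QA
print(bug.bug_id, bug.status)
```

A real tracker adds the reporting, notification and team-visibility features mentioned above; the point here is only which fields a complete report carries.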
QA is a very important step in the product development cycle, one that reduces the possibility of a disaster to a very large extent. QA today is not an afterthought to the whole development process but a parallel one that has proven its value. I hope this provides you with enough insight into the world of testing.

[1] Re-testing of a previously tested product following modification, to ensure that faults have not been inadvertently introduced into unchanged functionality.
[2] Integration testing is the phase of software testing in which individual software modules are combined and tested as a group. It follows unit testing and precedes system testing.
[3] This is a vast topic in itself and could have different interpretations, from validating the rights and roles of users in a system to detecting access violations.
[4] In software engineering, performance testing is performed, from one perspective, to determine how fast some aspect of a system performs under a particular workload. It can also serve to validate and verify other quality attributes of the system, such as scalability and reliability.
[5] Identifying test cases for automation, and implementing them.
[6] User Acceptance Testing (UAT) is a process to obtain confirmation by a Subject Matter Expert (SME), preferably the owner or client of the object under test, through trial or review.
[7] This phase of testing is done after the manual test cases have been executed. It involves exploring and documenting the different actions and scenarios that a user may try on the fly once he has the actual system.
[8] Testing whether the product installs and uninstalls cleanly by following the documented procedure.
[9] Verifying the content of the help files, if any.

Prepared by Rohit N, July 13, 2007