Software Quality Assurance Practice

The FootPath team is consistently focused on delivering the highest-quality end product. We use effective project management, risk management, and an ISO 9001-compliant software development process. ISO 9001-compliant processes govern Development, QA, Configuration Management (CM), Documentation, and Technical Support, and they are directly linked to achieving the highest product quality.

FootPath Inc. provides a complete array of software application testing services, ranging from QA process review, software testing, and test plan preparation (unit, integration, acceptance, functional) to test management and QA staff augmentation. Our testing methodology includes the following phases:

  Test Planning

The first phase produces the test plan, which defines the levels and categories of testing to be conducted (e.g., functional, end-user interface, usability, configuration, and regression). It also includes a list of the following (a structured sketch follows the list):

  • Requirements to be tested or verified
  • Related acceptance criteria that are agreed to by the business sponsor (including performance considerations)
  • Roles and responsibilities
  • Tools and techniques to be used
  • A testing schedule
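
For illustration, the elements above could be captured in a small structured record. This is a hypothetical sketch; the field names and sample values are ours, not part of any FootPath tooling:

    from dataclasses import dataclass

    @dataclass
    class TestPlan:
        """Minimal record of the test-plan elements listed above."""
        levels: list[str]                    # e.g., unit, integration, acceptance
        categories: list[str]                # e.g., functional, usability, regression
        requirements: list[str]              # requirements to be tested or verified
        acceptance_criteria: dict[str, str]  # requirement -> agreed criterion
        roles: dict[str, str]                # role -> responsible person
        tools: list[str]                     # tools and techniques to be used
        schedule: dict[str, str]             # milestone -> date

    plan = TestPlan(
        levels=["unit", "integration", "acceptance"],
        categories=["functional", "end-user interface", "usability",
                    "configuration", "regression"],
        requirements=["REQ-001 login", "REQ-002 report export"],
        acceptance_criteria={"REQ-001 login": "response under 2 seconds"},
        roles={"test lead": "J. Doe"},
        tools=["pytest", "capture/playback recorder"],
        schedule={"test execution": "2024-06-01"},
    )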

  Test Case Design

The design phase, where application requirements and expected behavior, as specified by the business sponsor, are transformed into documented test cases. Test cases should not only address business functions but should also cover end-user interface verification, field validation, performance levels, and stress conditions.
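
As a sketch, a documented test case might look like the following; the fields and values are hypothetical, shown only to illustrate that one case can cover expected behavior, field validation, and performance and stress conditions:

    test_case = {
        "id": "TC-042",
        "requirement": "REQ-002 report export",
        "steps": [
            "Open the reports screen",
            "Enter date range 2024-01-01 to 2024-01-31",
            "Click Export",
        ],
        # Field validation: inputs the end-user interface must reject.
        "invalid_inputs": {"date_from": "31-13-2024"},
        "expected_result": "CSV file downloaded with 31 daily rows",
        # Performance and stress conditions attached to the same case.
        "max_response_seconds": 5,
        "concurrent_users_for_stress": 100,
    }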

  Test Creation

The construction phase, where test cases are transformed into reusable test scripts or programs that will exercise the application under test (AUT). Test data must also be created. Scripting tools and test-data generators may be used, but the more common approach is to employ capture/playback technology that can generate scripts and data by recording keystrokes and mouse movements.
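
A reusable, hand-written test script might look like the pytest sketch below; AutClient and create_order are hypothetical stand-ins for the interface of the application under test:

    import pytest

    class AutClient:
        """Hypothetical client that drives the AUT's API or UI."""
        def create_order(self, customer: str, quantity: int) -> dict:
            if quantity <= 0:
                raise ValueError("quantity must be positive")
            return {"customer": customer, "quantity": quantity, "status": "OPEN"}

    @pytest.fixture
    def aut():
        return AutClient()

    # Generated test data lets one script exercise many cases.
    @pytest.mark.parametrize("quantity", [1, 10, 9999])
    def test_create_order_accepts_valid_quantities(aut, quantity):
        order = aut.create_order("ACME", quantity)
        assert order["status"] == "OPEN"

    def test_create_order_rejects_zero_quantity(aut):
        with pytest.raises(ValueError):
            aut.create_order("ACME", 0)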

  Automated Testing Cycle

  Test Execution and Results Analysis

The system/application is exercised by a battery of test scripts, and the results are stored and later evaluated against the baseline of expected results. Comparative and exception reports are generated, along with visual graphs. Determinations are made as to whether all test requirements and completion criteria have been met and whether further testing is required. In this phase, linking bug data to a metrics database is necessary for total quality management and continuous process improvement initiatives.
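
Comparing actual results against a stored baseline can be as simple as the following sketch; the JSON file format and names are assumptions made for illustration:

    import json

    def compare_to_baseline(results_path: str, baseline_path: str) -> list[str]:
        """Return exception-report lines for every result that deviates
        from the stored baseline of expected results."""
        with open(results_path) as f:
            actual = json.load(f)      # e.g., {"TC-042": "PASS", ...}
        with open(baseline_path) as f:
            expected = json.load(f)
        exceptions = []
        for test_id, expected_value in expected.items():
            actual_value = actual.get(test_id, "<missing>")
            if actual_value != expected_value:
                exceptions.append(
                    f"{test_id}: expected {expected_value!r}, got {actual_value!r}"
                )
        return exceptions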

Enterprises are encouraged to acquire a testing methodology, build one of their own if they have the requisite experience, or work with external consultants to develop one. In addition, for client/server and rapid application development (RAD) projects, it is imperative that the testing process become iterative, in a fashion similar to the AD process (see Figure 6). In conducting such projects, successful AD organizations drive through several iterations and exploit techniques such as joint application development (JAD), prototyping, and time-boxing to define and cement requirements and to ensure timely delivery. Each iteration should produce a deliverable, and that deliverable (e.g., an application build) should be a target for testing.

Bug Tracking, Categorization and Reporting

Bug tracking is done through a bug tracking system linked to the source code control system. All changes and modifications must be routed through the bug tracking system.

Bug categories can differ from one organization to another; FootPath generally uses the following four categories:

Category | Description | Example
P1 | Priority 1: requires immediate attention and must block the release until fixed. Related to stability. | Segmentation fault, memory protection error, unusable system, file-system corruption.
P2 | Priority 2: severe bug; functionality does not work as it is supposed to. Related to performance and functionality. | Migration from version 1.0 to 2.0 does not work properly.
P3 | Priority 3: annoying bugs; inconsistent data or GUI. Related to usability, correctness, and consistency. | Report programs display incorrect data; very poor GUI.
P4 | Priority 4: enhancements, improvements, and additions. Related to customer requests. | Support reports in Excel format.

Table: Typical Bug Categories
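
These categories map naturally onto a small enumeration. The sketch below uses our own naming, not FootPath's tracker, and shows how a bug's priority could drive release blocking:

    from enum import IntEnum

    class Priority(IntEnum):
        P1 = 1  # stability: must block the release until fixed
        P2 = 2  # severe functionality or performance bug
        P3 = 3  # annoying usability, correctness, or consistency bug
        P4 = 4  # enhancement or customer request

    def blocks_release(priority: Priority) -> bool:
        # Per the table above and the sign-off rule below, open P1 and
        # P2 bugs must be resolved before release.
        return priority <= Priority.P2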

  When to Stop Testing

When should we stop testing? Typically, it depends on numerous factors, including defined coverage requirements, bugs discovered per line of code (LOC) or function point (FP), agreed acceptance test criteria, and mean time to failure (MTTF) or mean time between failures (MTBF). MTTF is the expected time until a product fails; MTBF is the expected average time between successive failures of a repairable system. Both are usually measured in hours, days, or cycles.
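
For example, MTBF is simply total operating time divided by the number of failures observed over that time; a minimal sketch:

    def mtbf(total_operating_hours: float, failure_count: int) -> float:
        """Mean time between failures, in hours."""
        if failure_count == 0:
            raise ValueError("no failures observed; MTBF is undefined")
        return total_operating_hours / failure_count

    # 1,000 hours of operation with 4 failures gives an MTBF of 250 hours.
    print(mtbf(1000.0, 4))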

QA Sign-off and Release Determination: Most clients monitor bugs and produce daily, weekly, and monthly bug reports, typically as bar charts broken down by bug category. Many organizations freeze source code several months before the planned release date and postpone all P4 bugs to the next release. FootPath recommends signing off on a release only when all of the following conditions hold (see the sketch after this list):

  • Count(P1 bugs) = 0
  • Count(P2 bugs) = 0
  • The average bugs per quarter, per month, per week, and per day have been decreasing for several successive months
  • The average bugs per week is at its lowest point and at an acceptable level
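
A hypothetical sketch of this sign-off rule, assuming weekly bug-count averages are supplied oldest first:

    def ready_for_release(open_p1: int, open_p2: int,
                          weekly_averages: list[float],
                          acceptable_weekly_average: float) -> bool:
        """Sign-off rule listed above: no open P1/P2 bugs, a falling
        bug trend, and the latest weekly average at an acceptable level."""
        trend_decreasing = all(
            earlier > later
            for earlier, later in zip(weekly_averages, weekly_averages[1:])
        )
        return (open_p1 == 0 and open_p2 == 0
                and trend_decreasing
                and weekly_averages[-1] <= acceptable_weekly_average)

    # Example: no open P1/P2 bugs and a steadily falling weekly average.
    print(ready_for_release(0, 0, [12.0, 9.0, 5.5, 3.0], 4.0))  # True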

  Testing Stages: "White-Box" vs. "Black-Box"

Testing stages typically include unit, integration, systems and acceptance testing. Within each testing stage, various types of testing can be performed, including functional, usability, configuration, performance, regression and stress testing.

Unit Testing

The process of testing individual application components or modules. Testing done here is described as "white-box testing" because developers are concerned with ensuring that the code behaves as it should.
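
A minimal white-box unit test in the pytest style; parse_amount is a hypothetical unit under test, included so the sketch is self-contained:

    import pytest

    def parse_amount(text: str) -> float:
        """Hypothetical unit under test: parses a currency amount."""
        cleaned = text.replace(",", "").strip()
        if not cleaned:
            raise ValueError("empty amount")
        return float(cleaned)

    def test_parse_amount_strips_thousands_separator():
        assert parse_amount("1,234.50") == 1234.50

    def test_parse_amount_rejects_empty_input():
        with pytest.raises(ValueError):
            parse_amount("   ")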

Integration Testing

The process of testing the interfaces between individual application components or modules. This is still considered white-box testing.

Systems or QA Testing

The process of testing a suite of application components or modules that constitutes the complete application. Testing done here is usually described as "black-box testing" because QA personnel are concerned with ensuring that the application behaves as it should (i.e., the application functions according to end-user requirements). Stress testing can be done here.
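
By contrast with the white-box example above, a black-box test exercises the application only through its external interface, with no knowledge of the code inside. A sketch against a hypothetical command-line AUT (the report-tool binary and its flags are assumptions):

    from subprocess import run

    def test_report_tool_exports_csv(tmp_path):
        out_file = tmp_path / "report.csv"
        # Drive the application exactly as an end user would.
        result = run(["report-tool", "--from", "2024-01-01",
                      "--to", "2024-01-31", "--out", str(out_file)])
        assert result.returncode == 0
        assert out_file.exists()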

Acceptance Testing

The end-user organization tests the complete application against the functional acceptance requirements (i.e., black-box testing). Stress testing can be done here as well.

Static vs. Dynamic Testing

Static testing takes a piece of the application and analyzes it without executing it. Inspections, code analysis (using tools such as code parsers and complexity analyzers), and "desk checks" are examples of static testing. Dynamic testing requires execution of one or more of the application components (e.g., program logic or SQL calls) to detect and remove errors. It is an iterative process because, once bugs have been removed, subsequent testing must be done.
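
The difference can be shown with Python's standard ast module: the first half of this sketch analyzes source text without running it (static), while the second half executes the code and observes its behavior (dynamic):

    import ast

    source = "def divide(a, b):\n    return a / b\n"

    # Static testing: inspect the code without executing it.
    tree = ast.parse(source)
    divisions = [n for n in ast.walk(tree) if isinstance(n, ast.Div)]
    print(f"static analysis found {len(divisions)} division(s) to review")

    # Dynamic testing: execute the code and observe its behavior.
    namespace: dict = {}
    exec(source, namespace)
    try:
        namespace["divide"](1, 0)
    except ZeroDivisionError:
        print("dynamic test exposed a division-by-zero bug")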

  Inspections

Inspections are a more efficient means of removing design and coding bugs than testing alone, often at least twice as effective. They also remove bugs earlier in the project, thereby reducing bug-removal costs significantly. Consequently, they represent a high-efficiency, lower-cost option for bug removal for organizations with limited time and money. Because of their effectiveness, inspections should play a key role in any organization's bug-removal efforts. FootPath can contribute to this effort.

Constructing the Testing Tool Chest

As is the case with any effort of significant size and criticality, testing involves people, process, and technology. This report has so far focused on the people and the process. The technology manifests itself in the form of automated testing tools. It is therefore important for AD organizations testing client/server applications to construct a testing tool chest containing, at a minimum, the following categories (see Figure 7).

  Glossary

GUI tester
Provides capture/playback, scripting, and a test execution engine (i.e., a harness). Leading tools also provide test planning and management, test results analysis and reporting, and bug tracking. Commonly used for repeated regression testing.

Test Repository
Stores test assets (e.g., test plans, cases, scripts, results, and bug history).

Load tester
Provides multi-user, stress, volume and performance testing

Coverage Analyzer
Ensures testing completeness by monitoring code, path and branch coverage

Runtime Error Detector
Detects memory-related errors (e.g., leaks, uninitialized memory, and reading or writing beyond array bounds) and third-party library problems.

Bug tracker
Captures, assigns, and tracks bugs throughout the testing process.

Software configuration management (SCM)
Provides links to source code through library management, version control and configuration management.
