QA and Testing Definitions


This will help you prepare for interviews for QA / Testing profiles.

We have plenty of job opportunities in Testing for candidates with more than 4 years of experience.

Most of the jobs are in Bangalore. See the blog at

http://joblagao.com/blog/index.php/category/employee-referral-jobs/

DEFINITIONS RELATED TO TESTING

Testing: Testing is the process of executing a program with the intent of finding errors. The purpose is to reduce post-release expenses.

Primary Benefit of Testing: A good test has a high probability of finding an as-yet-undiscovered error.

Secondary Benefit:
To check whether the functions appear to work according to specifications.

Debugging: The process of locating the exact cause of an error and removing that cause.

QA: QA spans the entire software development process – monitoring and improving the process, making sure any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with.

QA is about having an overall development and management process that provides the right environment for ensuring the quality of the final product.

QA is best carried out on processes.

QA should take place at every stage of SDLC.

QC: QC is about checking, at the end of some development activity (e.g., a design activity), that we have built quality in, i.e., that we have achieved the required quality with our methods.

QC is like testing a module against its requirement specification or design document, or measuring response time, throughput, etc.

QC is best carried out on products.

QC should be done at the end of the SDLC, i.e., when product building is complete.

Verification: Typically involves reviews and meetings to evaluate plans, documents, code, requirements, and specifications. This can be done with checklists, issue lists, walkthroughs, and inspections.

Validation: Typically involves actual testing after verification. Validation checks that the system meets the customer’s requirements at the end of the life cycle.

The major purpose of verification and validation activities is to ensure that software design, code, and documentation meet all the requirements imposed on them.

Walkthrough: An informal review meeting held for evaluation purposes.

Inspection: A formal walkthrough.

 

Test Design:

This document specifies what needs to be tested in the product. It is abstracted from the Requirements document.

Test Procedures:

Implement a set of test cases as a procedure, specifying the exact process to be followed to conduct each test case.

Test Case: A document describing an input, action, or event and the expected response, used to determine whether a feature of an application is working correctly.

Developing Test Cases:

 

– Design test cases for unit testing based on the objectives of the test

– Tests should be derived from specifications

– Consider equivalence partitioning

– Consider state transition testing

– Design test cases that include abnormal situations and data

– Design test cases for all known and expected error conditions

– Design test cases to be easily duplicated

– Each test case must describe the input, the predicted result, and the conditions under which the test is run
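
As an illustration of these guidelines, here is a minimal pytest sketch in Python. The discount() function and its 0–120 age range are hypothetical, invented only to show specification-derived tests, boundary values, and abnormal data together:

# A minimal sketch of the guidelines above, assuming a hypothetical
# discount() function whose spec accepts ages 0-120.
import pytest

def discount(age):
    # Hypothetical unit under test: seniors (65+) get 10% off.
    if not 0 <= age <= 120:
        raise ValueError("age out of range")
    return 0.10 if age >= 65 else 0.0

@pytest.mark.parametrize("age,expected", [
    (30, 0.0),    # normal value
    (65, 0.10),   # boundary where the behavior changes
    (120, 0.10),  # upper boundary of the valid range
])
def test_discount_valid(age, expected):
    assert discount(age) == expected

@pytest.mark.parametrize("age", [-1, 121])  # abnormal data
def test_discount_rejects_invalid(age):
    with pytest.raises(ValueError):
        discount(age)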

Test Case Design

Test Case ID: A unique number given to the test case so that it can be identified.

Test Description: A description of what the test case is going to test.

Revision History: Each test case has to have a revision history in order to know when and by whom it was created or modified.

Function to be tested: The name of the function to be tested.

Environment: The environment in which you are testing.

Test Setup: Anything you need to set up outside of your application, for example printers, the network, and so on.

Test Execution: A detailed description of every step of execution.

Expected Results: The description of what you expect the function to do.

Actual Results: Pass / Fail

If it passed – record what actually happened when you ran the test.

If it failed – describe what you observed.
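
To make these fields concrete, here is a small sketch of a test case record in Python; the field names mirror the headings above and the sample values are invented:

from dataclasses import dataclass

@dataclass
class TestCase:
    # Fields mirror the headings above.
    test_case_id: str
    description: str
    revision_history: list
    function_to_test: str
    environment: str
    test_setup: str
    execution_steps: list
    expected_result: str
    actual_result: str = ""  # filled in after the run: pass/fail plus notes

tc = TestCase(
    test_case_id="TC-001",
    description="Login succeeds with a valid username and password",
    revision_history=["2024-01-05 created by A. Tester"],
    function_to_test="login",
    environment="staging, Chrome 120",
    test_setup="test user account provisioned",
    execution_steps=["open login page", "enter credentials", "submit"],
    expected_result="user lands on the dashboard",
)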

Testing Techniques:

White box techniques: Branch Testing, Condition Testing.

Black box testing techniques: Boundary Value Analysis, Equivalence Partitioning.

 

Equivalence Partitioning:

This is the process of taking all of the possible test values and partitioning them into classes (groups). Test cases should be designed to test one value from each group, so that the fewest test cases cover the maximum input requirements.
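
A minimal sketch in Python, assuming a hypothetical input field that accepts integers from 1 to 100: the input domain splits into three classes, and one representative value per class is enough:

# Hypothetical spec: the field accepts integers from 1 to 100.
partitions = {
    "below range (invalid)": -5,   # any value < 1 represents this class
    "in range (valid)":      50,   # any value in 1..100 represents this class
    "above range (invalid)": 150,  # any value > 100 represents this class
}

def accepts(value):
    # Hypothetical unit under test.
    return 1 <= value <= 100

for name, representative in partitions.items():
    print(name, "->", accepts(representative))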

 

Boundary Value Analysis:

This is a selection technique where the test data lie along the boundaries of the input domain.
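
Continuing the same hypothetical 1–100 field, a boundary value sketch picks test data at and immediately around each edge of the input domain:

# Values on and around the boundaries of the hypothetical 1..100 domain.
boundary_values = [0, 1, 2, 99, 100, 101]

def accepts(value):
    return 1 <= value <= 100

for v in boundary_values:
    print(v, "accepted" if accepts(v) else "rejected")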

 

Branch Testing:

In branch testing, test cases are designed to exercise control-flow branches or decision points in a unit.
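
A small sketch of branch testing with a hypothetical classify() function: two inputs are chosen so that the single decision point is exercised on both its true and false branches:

def classify(n):
    # One decision point -> two branches to cover.
    if n < 0:
        return "negative"
    return "non-negative"

# One test case per branch outcome.
assert classify(-3) == "negative"      # takes the true branch
assert classify(7) == "non-negative"   # takes the false branch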

 

Characteristics of a Good Test: A good test is likely to catch bugs, is not redundant, and is neither too simple nor too complex. A test case becomes complex when it has more than one expected result.

 

Test Plan:

 

Contents

  1. Introduction
  2. Scope – Test Items, Features To be tested, Features Not to be Tested
  3. Approach
  4. Test Environment
  5. Schedule and Testing Tasks
  6. Item Pass/Fail Criteria
  7. Suspension Criteria
  8. Risk and Contingencies
  9. Test Deliverables
  10. Approvals of the Plan

 

Introduction:

Summary of the items and features to be
tested. References to related documents.

 

Scope:

Test Items: Names of all the items (software) being tested are listed here, along with their versions. These include programs, batch files, configuration files, and control tables.

 

Features To Be Tested: A detailed list of functions or features will be prepared by reviewing the Detailed Design Document, Functional Specification, and Requirement Specifications.

 

Features Not To Be Tested: If any feature is not being tested, it should be clearly mentioned here, with the reason for not testing it. E.g., sending financial transactions over a private network may not be tested due to non-availability of infrastructure.

 

 

Approach:

For each major group of features, specify the types of tests, such as regression or stress testing. Specify major activities, tools, and techniques.

 

Test Environment:

Describe the test environment's minimal and optimal needs. This would include hardware, network, communications, OS, printers, security, and other tools.

 

Schedule and Tasks:

Provide a detailed schedule of testing
activities and responsibilities. Indicate dependencies and time frames for
testing activities.

 

Item Pass/Fail criteria:

Specify the criteria to be used to determine whether each
test item has passed or failed.

 

Suspension criteria:

Criteria to be used to suspend the testing
activity.

Criteria to be used to resume the testing activity.

 

Risk and Contingencies:

Identify the high-risk assumptions of the test plan.

Specify the contingency plans for each.

 

Test Deliverables:

Identify the deliverable documents: test plan, test design specifications, test case specifications, test procedure documents, test item transmittal reports, test logs, test incident reports, and test summary reports. Also identify test input and output data.

 

Approvals:

Specify the names and titles of all persons who must approve the plan.

Provide space for signatures and dates.

 

Testing Techniques:

Black box Testing: This testing is based entirely on specifications and requirements and is not concerned with the internal logic of the code.

 

White box Testing: This is based entirely on the internal logic of the code. Tests are based on coverage of code statements, paths, branches, and conditions.

 

Test Phase Levels:

 

Unit Testing: This is the most micro-scale of testing, used to test a particular function or module. It is typically done by programmers only.

 

Incremental Integration:
Continuous testing of the application as new functionality is added.

 

Integration testing: Testing the combined parts of the application to determine whether they function correctly together. The parts can be code modules, individual applications, or client-server applications.

 

Functional Testing: Functional testing is performed to verify that a software application performs and functions correctly according to design specifications. By inputting both valid and invalid data, we verify that the output of these features and functional areas meets predefined expectations.

 

System Testing: Black box testing based on the overall requirement specifications.

 

Acceptance Testing:

Final testing based on the end user's or customer's acceptance criteria. It determines whether the software is satisfactory to the customer. It is of two types: alpha testing and beta testing.

 

Alpha Testing:

This type of testing is conducted at the
developer’s site by a customer. The developer makes note of the errors and
usage problems.

 

Beta Testing:

Conducted at one or more customer sites by
the end user(s) of the software. Unlike alpha testing, the developer is
generally not present.

 

Ad hoc Testing:

This is also called monkey testing. Testing is not planned in advance; what is tested next is decided from the results obtained so far, and there are no repeatable tests.

 

End to End Testing: This is similar to system testing, performed in an environment that mimics real-world use, such as interacting with network communications, databases, other hardware, applications, or systems.

 

Recovery Testing:

Forces
the software to fail in a variety of ways and verifies that recovery is
properly performed. If recovery is
automatic, re-initialization, check-pointing mechanisms, data recovery, and
restart are each evaluated for correctness. If recovery requires human
intervention, the mean time to repair is evaluated to determine whether it is
within acceptable limits.

 

Sanity Testing/Smoke Testing: This is an initial testing effort to determine whether the new software build is stable enough to accept major testing.
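
In practice a smoke suite is often just a tagged subset of the full test suite. A sketch using pytest markers (the marker name and the test body are illustrative):

# pytest.ini
# [pytest]
# markers =
#     smoke: fast checks that decide whether deeper testing is worthwhile

import pytest

@pytest.mark.smoke
def test_sanity():
    # Trivial placeholder: replace with a check that the build starts,
    # e.g. the main page loads or the service answers a ping.
    assert 1 + 1 == 2

# Run only the smoke subset with:  pytest -m smoke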

 

Regression Testing: Retesting after bug fixes or modification of the software or environment. Since it is hard to know how much retesting is needed, especially near the end of development, automated tools are often used.

 

Load Testing: Testing the application under heavy loads, such as testing a web application under a range of loads to determine at what point the system's response time degrades or fails.
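
A minimal load testing sketch in Python using only the standard library; the URL is a placeholder for the application under test, and real load testing would normally use a dedicated tool:

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8000/"  # placeholder: the application under test

def timed_request(_):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

# Ramp up the number of concurrent users and watch response times degrade.
for users in (1, 10, 50):
    with ThreadPoolExecutor(max_workers=users) as pool:
        times = list(pool.map(timed_request, range(users)))
    print(f"{users:>3} concurrent users: worst response {max(times):.3f}s")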

 

Stress Testing: This can be described as functional testing of the system under unusually heavy loads, heavy repetition of the same actions or inputs, input of large numerical values, and large, complex queries to the database.

 

Performance Testing: Load and stress testing are collectively called performance testing.

 

Compatibility Testing:

Compatibility testing verifies the functionality and performance of a software application across multiple platform configurations. This type of testing typically uncovers compatibility issues with operating systems, other software applications, and hardware components.

 

Portability Testing:

Testing to ensure compatibility of an application or Web
site with different browsers, OSs,
and hardware platforms. Portability testing can be performed manually or can be
driven by an automated functional or regression test suite.

 

Software Quality:

Quality software is reasonably bug-free, delivered on time and within budget, meets requirements, and is maintainable.

 

CMM:

The Capability Maturity Model, developed by the SEI. It is a model of five levels of organizational maturity that determine the effectiveness of delivering quality software.

 

CMM1: Characterized by chaos, periodic panics, and heroic efforts by individuals required to successfully complete projects. Success may not be repeatable.

 

CMM2: Software project tracking, requirements management, realistic project planning, and configuration management processes are in place. Success is repeatable.

 

CMM3: Standard software development and maintenance processes are integrated throughout the organization. A software engineering process group oversees the process and the training programs in the organization.

 

CMM4:  Metrics
are used to track productivity, processes and products. Project performance is
predictable and quality is consistently high.

 

CMM5: Focuses on continuous process improvement. The organization can predict the impact of new technologies and new processes and can implement them when required.

 

 

ISO:

International Organization for Standardization (ISO 9001:2000).

This concerns quality systems assessed by outside auditors.

 

IEEE:

Institute of Electrical and Electronics Engineers.

 

IEEE/ANSI 829 – Software Test Documentation

 

IEEE/ANSI 1008 – Software Unit Testing

 

IEEE/ANSI 730 – Software Quality Assurance Plans

 

 

Software Life Cycle:

It includes aspects such as initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, documentation, testing, retesting, and maintenance.

 

Configuration Management:

It covers the processes used to control, coordinate and
track: Code, documentation, requirements, problems and change requests, designs
and changes made to them, and who makes changes to them.

 

The purpose of software configuration management is to
identify all the interrelated components of software and to control their
evolution throughout the various life cycle phases.

 

Version Control

As an application evolves over time, many different versions of its software components are created, and there needs to be an organized process to manage changes in the software components and their relationships.

 

Change Control

Change control is the process by which a modification to a
software component is proposed, evaluated, approved or rejected, scheduled, and
tracked.

 

Business Requirements:

This document describes the users' needs for the application.

 

Functional Requirements:

These are primarily derived from the Business Requirements and describe the functional needs.

 

Prototype:

This is a look-and-feel representation of the proposed application. It basically shows the placement of fields and modules and the generic flow of the application.

 

Test Strategy:

A test strategy is a statement of the overall approach to testing, identifying what levels of testing are to be applied and the techniques, methods, and tools to be used. A test strategy should ideally be organization-wide, applicable to all of an organization's projects.

 

Here, levels refers to unit, integration, system, etc.; methods refers to dynamic and static testing; techniques refers to white box and black box testing.

 

Test Methodology:

This is the actual implementation used for each project, and it may differ from project to project. For example, the test strategy may mention load testing, but the methodology of a particular project may omit it.

 

Test Policy:

A test policy is a management definition of testing for a department. It specifies the milestones to be achieved.

 

A test policy contains these headings:

a. Definition of testing

b. Testing System:
Methods through which testing will be achieved

c. Evaluation: How
information services management will measure and evaluate testing

d. Standards: The
standards against which testing will be measured

 

Test Life Cycle:

Test Analysis (Risk Analysis), Test Planning, Designing Test Cases, Executing Test Cases, Maintaining Test Logs, Reporting.

 

Risk Analysis:

Schedule Risk: Factors that may affect the schedule of
testing

Technology Risk: Risks on hardware and software of the
application

Resource Risk: Test team availability when project schedules slip.

Support Risk: Clarifications required on specifications and the availability of personnel to provide them.

 

Bug: Any deviation from the expected behavior is called a bug.

 

Error: A software bug found before the software release is called an error.

 

Defect: A software bug found after the software is released to the customer is called a defect.

 

Fault: A defect is a software bug found in production, and every defect leads to a fault in the system.

 

SDLC:

 

Waterfall Model:

This approach breaks the development cycle down into discrete phases, each with a rigid sequential beginning and end. Each phase is fully completed before the next is started, and once a phase is completed, the process never goes back to change it. The phases are Requirements, Design, Coding, Testing, and Maintenance.

 

Iterative Model:

This model does not start with a full specification of requirements. Instead, development begins by specifying and implementing just part of the software, which can then be reviewed to identify further requirements. This process is repeated, producing a new version of the software for each cycle of the model.

 

Progressive Model:

 

Prototype Model:

 

STLC:

Test life cycle includes:

Test Analysis (Risk Analysis)

Test plan

Designing Test Cases

Executing Test Cases

Maintaining the Test log

 

Testing Methods:

 

Static Testing:

Static testing approaches are time-independent. They do not necessarily involve either manual or automated execution of the product. Examples include syntax testing, structured walkthroughs, and inspections.

 

Dynamic Testing:

Dynamic testing techniques are time-dependent and involve executing a specific sequence of instructions on the computer. For example, boundary testing is a dynamic testing technique that requires the execution of test cases on the computer, with a specific focus on the boundary values associated with the inputs or outputs of the program.

 

Test Metrics:

A test metric is a measurable unit used to measure the productivity and quality of the system produced. Metrics are generally classified into:

Size-oriented metrics – based on lines of code (KLOC)

Function-oriented metrics – the function point (FP) metric is used to calculate the functionality delivered by a system

 

For example, these are common test metrics:

 

Test Coverage = Number of units (KLOC/FP) tested / total size of the system

Test Cost (in %) = Cost of testing / total cost * 100

Cost to Locate a Defect = Cost of testing / number of defects located

Defects Detected in Testing (in %) = Defects detected in testing / total system defects * 100

Acceptance Criteria Tested = Acceptance criteria tested / total acceptance criteria
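
With invented figures, a quick check of the first two formulas:

# Invented figures, purely to exercise the formulas above.
units_tested, total_units = 700, 1000          # function points
cost_of_testing, total_cost = 20_000, 100_000  # currency units

test_coverage = units_tested / total_units          # 0.7
test_cost_pct = cost_of_testing / total_cost * 100  # 20.0
print(test_coverage, test_cost_pct)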

 

To describe the performance of a company's testing department, we use Defect Removal Efficiency, with the formula:

DRE = (No. of defects found by testing team / (No. of defects found by testing team + No. of defects found by customer)) * 100

 

For example, if the testing department has found 90 defects and the customer has found 10 defects, then DRE = (90 / 100) * 100 = 90% defect removal efficiency.

If the case is reversed, the defect removal efficiency will be only 10%.
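
The same calculation as a small Python function:

def defect_removal_efficiency(found_by_testing, found_by_customer):
    # DRE = defects caught before release as a percentage of all defects.
    total = found_by_testing + found_by_customer
    return found_by_testing / total * 100

print(defect_removal_efficiency(90, 10))  # 90.0, as in the example above
print(defect_removal_efficiency(10, 90))  # 10.0, the reversed case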

 

 

Cyclomatic Complexity:

This metric is an indication of the number of ‘linear’
segments in a method (i.e. sections of code with no branches) and therefore can
be used to determine the number of tests required to obtain complete coverage.
It can also be used to indicate the psychological complexity of a method.

 

A method with no branches has a Cyclomatic Complexity of 1, since there is a single path through it. This number is incremented whenever a branch is encountered. In this implementation, statements that represent branching are defined as: ‘for’, ‘while’, ‘do’, ‘if’, ‘case’ (optional), ‘catch’ (optional), and the ternary operator (optional). The sum of Cyclomatic Complexities for methods in local classes is also included in the total for a method.
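
A sketch of this counting rule in Python, using the standard ast module; the exact set of node types counted is a simplification, since Python has no 'do' or 'case' statements:

import ast

# Node types counted as decision points (a simplification of the list above).
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source):
    # Start at 1 for the straight-line path, add 1 per decision point.
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

sample = '''
def grade(score):
    if score >= 90:
        return "A"
    if score >= 75:
        return "B"
    return "C"
'''
print(cyclomatic_complexity(sample))  # 3: two decision points plus one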

 
