Friday, April 27, 2012

Testing Certifications

About testing certifications: the following are the testing certifications I know of, unless there are newer ones by now :)

1. ISTQB (International Software Testing Qualifications Board) { 3 levels : Foundation - Advanced - Expert }
2. CAST (Certified Associate in Software Testing)
3. CSTE (Certified Software Test Engineer)
4. CMST (Certified Manager of Software Testing)

Sunday, July 5, 2009

Levels of Testing

  1. Unit Testing.
    • Unit Testing is primarily carried out by the developers themselves.
    • Deals with the functional correctness and completeness of individual program units.
    • White box testing methods are employed (see the short sketch at the end of this post).


  2. Integration Testing.
    • Integration Testing: Deals with testing when several program units are integrated.
    • Regression Testing : A change in behaviour due to a modification or addition is called ‘regression’. Regression testing re-runs existing tests to make sure such changes have not broken previously working functionality.
    • Incremental Integration Testing : Checks for bugs that appear when a new module is integrated with the existing ones.
    • Smoke Testing : A battery of tests that checks the basic functionality of the program. If it fails, the build is not sent for further testing.


  3. System Testing.
    • System Testing - Deals with testing the whole program system for its intended purpose.
    • Recovery Testing : The system is forced to fail, and we check how well it recovers from the failure.
    • Security Testing : Checks the system’s capability to defend itself against hostile attacks on programs and data.
    • Load & Stress Testing : The system is tested at maximum load and its extreme stress points are identified.
    • Performance Testing : Used to determine the processing speed.
    • Installation Testing : Installation and uninstallation are checked on the target platform.


  4. Acceptance Testing.
    • UAT ensures that the project satisfies the customer requirements.
    • Alpha Testing : It is the test done by the client at the developer’s site.
    • Beta Testing : This is the test done by the end users at their own site(s).
    • Long Term Testing : Checks for faults that occur during long-term usage of the product.
    • Compatibility Testing : Determines how well the product works across different hardware, operating system, and software environments.
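
A minimal sketch of what a unit test looks like in practice, assuming Python and its standard unittest module; the word_count function here is a made-up example for illustration, not code from any real project.

import unittest


def word_count(text):
    """Hypothetical unit under test: counts whitespace-separated words."""
    if not text or text.isspace():
        return 0
    return len(text.split())


class WordCountUnitTest(unittest.TestCase):
    """A unit test exercises one program unit in isolation."""

    def test_normal_sentence(self):
        self.assertEqual(word_count("software testing is fun"), 4)

    def test_empty_and_whitespace_input(self):
        # White-box view: these inputs exercise the early-return branch.
        self.assertEqual(word_count(""), 0)
        self.assertEqual(word_count("   "), 0)

    def test_multiple_spaces_between_words(self):
        self.assertEqual(word_count("unit    testing"), 2)


if __name__ == "__main__":
    unittest.main()

Running the file directly (or via python -m unittest) executes the tests and reports any failures.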

Thursday, July 2, 2009

Interview Questions..

What is the testing that a tester performs at the end of unit testing?

Integration Testing is performed after unit testing, and it tests how the modules of the application work together.

What is positive and negative testing? Give an example.

Positive Testing: Testing the build with permitted values, i.e. values allowed by the requirement specification.

Negative Testing: Testing the build with wrong inputs, i.e. values not permitted by the requirement specification.
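
A small sketch of positive vs. negative test cases, assuming Python with pytest; the validate_age function and its 18-60 permitted range are invented for this example, not taken from any real specification.

import pytest


def validate_age(age):
    """Hypothetical build code: this sketch assumes the spec permits ages 18-60."""
    if isinstance(age, bool) or not isinstance(age, int):
        raise TypeError("age must be an integer")
    if age < 18 or age > 60:
        raise ValueError("age outside the permitted range")
    return True


# Positive testing: permitted values, as per the (assumed) requirement spec.
@pytest.mark.parametrize("age", [18, 35, 60])
def test_validate_age_positive(age):
    assert validate_age(age) is True


# Negative testing: non-permitted values and invalid input types.
@pytest.mark.parametrize("age", [17, 61, -5])
def test_validate_age_rejects_out_of_range(age):
    with pytest.raises(ValueError):
        validate_age(age)


def test_validate_age_rejects_wrong_type():
    with pytest.raises(TypeError):
        validate_age("thirty")

The positive tests pass only if the permitted values are accepted, and the negative tests pass only if the non-permitted values are rejected.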

What is Compatibility Testing?

In compatibility testing we test the application with a wide variety of operating systems, browsers, databases, servers, clients, and hardware, as per the client's requirements/specification.

What are your strengths (in testing)?

Test-to-break attitude.
Motivation towards quality.
Continuous process improvement.
Good test case development and test script execution skills.
The habit of always thinking about how the application could fail (a ‘negative’ testing mindset).

Difference between Test Plan & Test Strategy?

1. Test Plan : The test plan is the document that contains elements such as:

1. Items to be tested
2. Items that need not be tested
3. Test schedule, test case design, and execution
4. Roles and responsibilities of the test team
5. Defect management

2. Test Strategy : The overall approach to test execution is called the test strategy.

What is validation and what is verification?

Validation: Does the system satisfy the user's requirements? (Are we building the right product?)

Verification: Are we building the system correctly? (Does each phase's output meet the conditions imposed at its start?)

After a bug is returned as fixed, what types of testing do we continue with?

We will go for both Retesting and Regression testing.

In retesting we check whether the bug is actually fixed.

In regression testing we check whether the fix has introduced any new problems in the surrounding functionality.
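
A hedged sketch of the difference, assuming Python with pytest; the apply_discount function and the zero-percent bug described in the comments are invented purely for illustration.

import pytest


def apply_discount(price, percent):
    """Hypothetical fixed code: the (invented) bug was that a 0% discount
    raised an error instead of returning the original price."""
    if percent < 0 or percent > 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)


# Retesting: re-run the exact scenario from the bug report to confirm the fix.
def test_reported_bug_zero_percent_discount():
    assert apply_discount(50.0, 0) == 50.0


# Regression testing: re-run the existing tests around the change to make sure
# the fix has not broken previously working behaviour.
@pytest.mark.parametrize(
    "price, percent, expected",
    [(100.0, 10, 90.0), (80.0, 25, 60.0), (19.99, 100, 0.0)],
)
def test_existing_discount_behaviour(price, percent, expected):
    assert apply_discount(price, percent) == expected

The first test re-checks the exact reported scenario (retesting); the parametrized tests re-run the existing behaviour around the change (regression testing).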

Wednesday, July 1, 2009

Functional & Non Functional Testing

Functional Testing : Testing the application against business requirements. Functional testing is done using the functional specifications provided by the client or by using the design specifications like use cases provided by the design team.

Functional Testing covers :

Unit Testing
Smoke testing / Sanity testing
Integration Testing (Top Down, Bottom Up Testing)
Interface & Usability Testing
System Testing
Regression Testing
Pre User Acceptance Testing (Alpha & Beta)
User Acceptance Testing
White Box & Black Box Testing
Globalization & Localization Testing

Non-Functional Testing : Testing the application against the client's performance and other non-functional requirements. Non-functional testing is done based on the requirements and test scenarios defined by the client.

Non-Functional Testing covers :

Load and Performance Testing
Ergonomics Testing
Stress & Volume Testing
Compatibility & Migration Testing
Data Conversion Testing
Security / Penetration Testing
Operational Readiness Testing
Installation Testing
Security Testing (Application Security, Network Security, System Security)

Sunday, May 31, 2009

Validation & Verification

Validation : The process of evaluating software during or at the end of the development process to determine whether it satisfies specified requirements.

Verification : The process of evaluating software to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase.

From this we can see that quality control is a validation process, while quality assurance is a verification process.

Difference between Quality Assurance and Quality Control

• Quality control relates to a specific product or service.
• Quality control verifies whether specific attribute(s) are in, or are not in, a specific
product or service.
• Quality control identifies defects for the primary purpose of correcting defects.
• Quality control is the responsibility of the team/worker.
• Quality control is concerned with a specific product.

• Quality assurance helps establish processes.
• Quality assurance sets up measurement programs to evaluate processes.
• Quality assurance identifies weaknesses in processes and improves them.
• Quality assurance is a management responsibility, frequently performed by a staff
function.
• Quality assurance is concerned with all of the products that will ever be produced by a
process.
• Quality assurance is sometimes called quality control over quality control because it
evaluates whether quality control is working.
• Quality assurance personnel should never perform quality control unless it is to
validate quality control.

Wednesday, May 27, 2009

Types of software Testing

Software Testing Types:

1 - Black box testing : Internal system design is not considered in this type of testing. Tests are based on requirements and functionality.

2 - White box testing : This testing is based on knowledge of the internal logic of an application’s code. Also known as Glass box Testing. Internal software and code working should be known for this type of testing. Tests are based on coverage of code statements, branches, paths, conditions.


3 - Unit testing : Testing of individual software components or modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. May require developing test driver modules or test harnesses.

4 - Incremental integration testing : Bottom-up approach for testing, i.e. continuous testing of an application as new functionality is added. Application functionality and modules should be independent enough to test separately. Done by programmers or by testers.

5 - Integration testing : Testing of integrated modules to verify combined functionality after integration. Modules are typically code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

6 - Functional testing : This type of testing ignores the internal parts and focuses on whether the output is as per the requirements. Black-box type testing geared to the functional requirements of an application.

7 - System testing : Entire system is tested as per the requirements. Black-box type testing that is based on overall requirements specifications, covers all combined parts of a system.

8 - End-to-end testing : Similar to system testing, involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

9 - Sanity testing : Testing to determine if a new software version is performing well enough to accept it for a major testing effort. If the application crashes during initial use, the system is not stable enough for further testing and the build or application is sent back to be fixed.

10 - Regression testing : Testing the application as a whole after a modification to any module or functionality. It is difficult to cover the entire system in regression testing, so automation tools are typically used for this type of testing.

11 - Acceptance testing : Normally this type of testing is done to verify that the system meets the customer-specified requirements. Users or customers do this testing to determine whether to accept the application.

12 - Load testing : It is performance testing to check system behaviour under load. Testing an application under heavy loads, such as testing a web site under a range of loads to determine at what point the system’s response time degrades or fails (a small script sketch appears after this list).

13 - Stress testing : The system is stressed beyond its specifications to check how and when it fails. Performed under heavy load, for example inputting data beyond storage capacity, running complex database queries, or feeding continuous input to the system or database.

14 - Performance testing : Term often used interchangeably with ‘stress’ and ‘load’ testing. Checks whether the system meets its performance requirements. Different performance and load tools are used for this.

15 - Usability testing : User-friendliness check. The application flow is tested: can a new user understand the application easily, and is proper help documented wherever the user gets stuck? Basically, system navigation is checked in this testing.

16 - Install/uninstall testing : Full, partial, or upgrade install/uninstall processes are tested on different operating systems under different hardware and software environments.

17 - Recovery testing : Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

18 - Security testing : Can the system be penetrated by any form of hacking? Testing how well the system protects against unauthorized internal or external access, and whether the system and database are safe from external attacks.

19 - Compatibility testing : Testing how well the software performs in a particular hardware/software/operating system/network environment, and in different combinations of the above.

20 - Comparison testing : Comparison of product strengths and weaknesses with previous versions or other similar products.

21 - Alpha testing : An in-house virtual user environment can be created for this type of testing. Testing is done at the end of development; minor design changes may still be made as a result of such testing.

22 - Beta testing : Testing typically done by end users or others. It is the final testing before releasing the application for commercial use.

23 - Exploratory testing : Simultaneous learning, test design, and test execution. In other words, exploratory testing is any testing in which the tester actively controls the design of the tests as those tests are performed, and uses information gained while testing to design new and better tests.
Another definition: exploratory tests do not involve a test plan, checklist, or assigned tasks. The strategy here is to use past testing experience to make educated guesses about places and functionality that may be problematic. Testing is then focused on those areas. Exploratory testing can be scheduled, or it can be reserved for unforeseen downtime that presents itself during the testing process.

24 - Rapid software testing : Rapid testing is a way to scale thorough testing methods to fit an arbitrarily compressed schedule. Rapid testing doesn't mean "not thorough", it means "as thorough as is reasonable and required, given the constraints on your time." A good rapid tester is a skilled practitioner who can test productively under a wider variety of conditions than conventionally trained (or untrained) testers.

25 - Smoke testing : Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch fire and smoke.
Another definition: just before releasing the build, the developers check whether the build is proper or not.


26 - Compliance testing : Checking the behaviour of the system at run time to determine whether it behaves as desired. Desired behaviour may mean compliance with some normative specification.
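
To make the load / stress / performance ideas in items 12-14 a bit more concrete, here is a minimal sketch using only Python's standard library. The target URL is a placeholder assumption, and a real load test would normally use a dedicated tool (JMeter, LoadRunner, etc.) rather than a hand-rolled script like this.

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://localhost:8080/health"  # placeholder: point this at a system you own


def timed_request(url):
    """Issue one request and return its response time in seconds (None on failure)."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=10):
            return time.perf_counter() - start
    except Exception:
        return None


def run_load_step(url, concurrent_users, requests_per_user):
    """Fire a batch of requests at the given concurrency and summarise the results."""
    urls = [url] * (concurrent_users * requests_per_user)
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        results = list(pool.map(timed_request, urls))
    ok = [r for r in results if r is not None]
    failures = len(results) - len(ok)
    average = sum(ok) / len(ok) if ok else float("nan")
    print(f"{concurrent_users:4d} users: average {average:.3f}s, {failures} failures")


if __name__ == "__main__":
    # Step the load up gradually to find where response time degrades or requests fail.
    for users in (1, 5, 10, 25, 50):
        run_load_step(TARGET_URL, users, requests_per_user=4)

Stepping the number of concurrent users up in stages is what lets you see the point at which response times degrade or requests start to fail, which is exactly what the load and stress definitions above describe.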