
Sunday, August 21, 2011

Regression Vs Retesting

Differences between Regression Testing and Retesting.


a) Retesting is carried out to verify a defect fix or fixes. Regression testing is done to check that the defect fixes have not impacted other functionality of the application that was working fine before the code changes were applied.
b) Retesting is planned based on the defect fixes listed in the build notes. Regression testing is generic, may not always be specific to any defect fix or code change, and can be planned as regional or full regression testing.
c) Retesting involves executing test cases that failed earlier; regression testing involves executing test cases that passed in earlier builds, i.e., functionality that was working in earlier builds.
d) Retesting involves rerunning the failed test cases associated with the defect fix(es) being verified. Regression testing does not involve verifying a defect fix but only executing regression test cases.
e) Retesting always takes higher priority than regression testing, i.e., regression testing is done after retesting is complete. In some projects with ample testing resources, regression testing is carried out in parallel with retesting.
f) Though retesting and regression testing have different objectives and priorities, they are equally important for a project's success.
ANSWER(2):

Regression Testing: Whenever changes are made to existing code (for example, while fixing defects), a set of test cases is run to ensure that the code changes have not introduced any new failures in the existing code. This is known as Regression Testing.

The bug that was recently fixed by the developers (after making some code changes) may have caused new bugs in functionality that had already been tested. In this case, you identify all other functionality that is linked with this bug (or functionality) and execute those scenarios (test cases). This is called Regression Testing.

Take a simple example: you have an application to test and you find a bug. The developers fix it. Now you need to test the entire application to see that the bug fix has no effect on the application as a whole. Or, let's say you have a bug in a feature which you found on Vista OS (English). Now you test for the same bug in all the different languages your application supports. This is Regression Testing.
If you have bugs in your previous versions or builds, checking for them on the latest builds and other versions can also be called Regression Testing.

Re-testing: This is very simple. Whenever a defect is fixed by the developer, the tester verifies the defect to make sure that it is actually fixed; this is known as Re-testing. The functionality is tested again for the fix here. This does not include checking the system as a whole. TEST ONLY THE FIX. Here, you are concerned with only the fix.
An example of this would be: you find a bug. The developer fixes it and sends it to you to test. You retest it to make sure that it has been fixed. If it is still not fixed, you send it back to the developers, and they return it to you after fixing it again. This process of retesting goes on until the bug is fixed. This is called Re-testing.

Re-testing is also called Confirmation Testing. Confirmation Testing is done to make sure that the test cases which failed in the last execution pass after the defects behind those failures are fixed.
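A minimal Python sketch of the difference (the discount function and defect number are invented purely for illustration): the retest reruns the one case that failed before the fix, while the regression suite reruns cases that were already passing.

```python
def discount(price, is_member):
    """Members get 10% off; price must be non-negative."""
    if price < 0:
        raise ValueError("price must be non-negative")
    return price * 0.9 if is_member else price

# RETESTING: rerun the test case that FAILED before the fix,
# to confirm the specific defect is actually fixed.
def retest_defect_123():
    assert discount(100, is_member=True) == 90  # previously returned 100

# REGRESSION TESTING: rerun test cases that PASSED earlier,
# to confirm the fix did not break existing behavior.
def regression_suite():
    assert discount(100, is_member=False) == 100
    assert discount(0, is_member=True) == 0
    try:
        discount(-1, is_member=False)
        assert False, "expected ValueError"
    except ValueError:
        pass

retest_defect_123()
regression_suite()
print("retest and regression suite passed")
```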
We would love to hear any other examples which you have encountered in real time which would help us more in understanding these two concepts…


Black Box Vs White Box Testing



1. Black Box: Focuses on the functionality of the system. White Box: Focuses on the structure (program) of the system.
2. Black Box techniques used are: Equivalence partitioning, Boundary-value analysis, Error guessing, Race conditions, Cause-effect graphing, Syntax testing, State transition testing, Graph matrix. White Box techniques used are: Basis Path Testing, Flow Graph Notation, Control Structure Testing (Condition Testing, Data Flow Testing), Loop Testing (Simple Loops, Nested Loops, Concatenated Loops, Unstructured Loops).
3. Black Box: The tester can be non-technical. White Box: The tester should be technical.
4. Black Box: Helps to identify vagueness and contradictions in functional specifications. White Box: Helps to identify logical and coding issues.
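As an example of one black-box technique from the table, boundary-value analysis can be sketched in Python (the grade function and its 0-100 specification are hypothetical): the tests are derived purely from the specification, with no knowledge of the implementation.

```python
# Hypothetical specification: marks 0-39 -> "fail", 40-100 -> "pass";
# anything outside 0-100 is an error.
def grade(mark):
    if not 0 <= mark <= 100:
        raise ValueError("mark out of range")
    return "pass" if mark >= 40 else "fail"

# Boundary-value analysis: test at each boundary of the partitions.
boundary_cases = {0: "fail", 39: "fail", 40: "pass", 100: "pass"}
for mark, expected in boundary_cases.items():
    assert grade(mark) == expected

# And just outside the valid partition.
for invalid in (-1, 101):
    try:
        grade(invalid)
        assert False, "expected ValueError"
    except ValueError:
        pass

print("all boundary cases behave as specified")
```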

Saturday, August 20, 2011

STATIC Vs DYNAMIC TESTING(2)

Difference between Static Testing and Dynamic Testing? 

Static Testing 
Static Testing is a white-box testing technique in which developers verify or test their code with the help of a checklist to find errors in it. This type of testing is done without running the developed application or program. Code reviews, inspections, and walkthroughs are mostly done in this stage of testing.

Dynamic Testing 

Dynamic Testing is done by executing the actual application with valid inputs and checking the actual output against the expected output. Examples of dynamic testing methodologies are Unit Testing, Integration Testing, System Testing, and Acceptance Testing.

Some differences between Static Testing and Dynamic Testing are:

· Static Testing is more cost-effective than Dynamic Testing because it is done at an earlier stage.
· In terms of statement coverage, Static Testing covers more areas than Dynamic Testing in a shorter time.
· Static Testing is done before code deployment, whereas Dynamic Testing is done after code deployment.
· Static Testing is done in the verification stage, whereas Dynamic Testing is done in the validation stage.
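As a rough illustration in Python (a toy sketch, not tied to any real project): a static check parses the source without executing it, while a dynamic check actually runs the code and verifies its behavior.

```python
import ast

# STATIC: inspect the source without executing it.
# Syntax errors would surface at parse time, before any run.
source = "def area(r):\n    return 3.14159 * r * r\n"
tree = ast.parse(source)
funcs = [n.name for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
print("statically found functions:", funcs)

# DYNAMIC: execute the code and verify actual behavior against
# the expected output.
namespace = {}
exec(compile(tree, "<sketch>", "exec"), namespace)
assert abs(namespace["area"](2) - 12.56636) < 1e-6
print("dynamic test passed")
```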


STATIC Vs DYNAMIC TESTING


1. Static testing is a form of software testing where the software isn't actually executed. In dynamic testing, the software must actually be compiled and run.
2. Static testing is generally not detailed testing; it checks mainly for the sanity of the code, algorithm, or document. It is primarily syntax checking of the code, or manually reading the code or document to find errors. Dynamic analysis refers to examining the system's actual response to inputs that are not constant and change with time.
3. Static testing can be used by the developer who wrote the code, in isolation; code reviews, inspections, and walkthroughs are also used. Some dynamic testing methodologies include unit testing, integration testing, system testing, and acceptance testing.
4. Static testing is the verification portion of Verification and Validation; code reviews, inspections, and walkthroughs are a few of the static testing methodologies. Dynamic testing is the validation portion of Verification and Validation; unit tests, integration tests, system tests, and acceptance tests are a few of the dynamic testing methodologies.

Alpha Vs Beta Testing


What is Software Testing? Difference between Alpha and Beta Testing?

Basically, people are not perfect. We make errors in design and code. Hence testing is an essential activity in the software development life cycle, used to uncover as many errors as possible. Testing is an important activity carried out to improve quality, finding all possible errors by generating test cases and systematically applying disciplined techniques.
According to the organization's needs, the testing technique is chosen with an eye on the customer's requirements. Dynamic testing is a method in which actual testing is done to uncover all possible errors with the user's interaction and feedback.
There are different types of testing used by many software companies:
  • Black Box Testing.
    • Graph-based testing methods
    • Equivalence partitioning
    • Boundary value analysis
    • Orthogonal Array Testing
    • Comparison testing
  • White Box Testing.
    • Basic Path Testing
    • Flow Graph Testing
    • Cyclomatic Complexity
  • Control Structure testing.
    • Conditional Testing
    • Data Flow Testing
    • Loop Testing
  • Integration Testing.
    • Top-down Integration Testing
    • Bottom-up Integration Testing
    • Regression Testing
    • Smoke Testing
  • Validation Testing.
    • Acceptance Testing
      • Alpha Testing
      • Beta Testing
  • System Testing.
    • Recovery Testing
    • Security Testing
    • Stress Testing
    • Performance Testing
Of all these, the two major acceptance testing methods used are Alpha and Beta Testing.
Alpha Testing: A testing method in which a version of the complete software is tested by the customer under the supervision of the developer. It is performed at the developer's site in a controlled environment.
Beta Testing: A testing method in which a version of the software is tested by the customer without the developer being present. The testing is performed at the customer's site in an uncontrolled environment. The end user records the problems and reports them to the developer.
ANSWER(2):
Alpha vs Beta Testing
In the development of any application, it is not enough to simply build the program and release it right away. It needs to undergo a series of rigorous tests to ensure that the program meets the client's requirements and has no bugs that could cause minor glitches or even serious problems later on. Alpha and beta testing are two of the stages a software product must pass through. Alpha testing occurs first, and when the software passes it, beta testing can be undertaken. If the software fails alpha testing, changes are made and the tests are repeated until the software passes.
Alpha testing is carried out by a small team of experts who know how to find software faults. Although the team is composed of only a few members, their expertise allows them to catch the majority of the problems by putting the software through every scenario they can devise and trying any combination of inputs to coax the software into an error. With beta testing, the testers are no longer experts, but the lack of expertise is made up for by sheer numbers. Depending on what the client wants, the beta version of the program can be released to a limited number of participants or to anybody who wants to take part. Participants in a beta test report errors along with what they were doing or attempting to do at that instant, so that the developers can try to replicate the error and then find a fix for it.
During alpha testing, the program is still relatively rough, and there may still be serious problems that can cause the program to crash. The limited number of alpha testers also means that the program can only be tested on a limited number of hardware configurations. The program may seem to work flawlessly during alpha testing, but users' different configurations can cause errors within it. In beta testing, the task is more about polishing the program so that it works nicely for everyone, rather than ensuring that it works at all. Problems are then patched prior to the release of the final version of the software.

Difference between Verification and Validation

Verification ensures the product is designed to deliver all functionality to the customer; it typically involves reviews and meetings to evaluate documents, plans, code, requirements and specifications; this can be done with checklists, issues lists, walkthroughs and inspection meetings.

Validation ensures that functionality, as defined in requirements, is the intended behavior of the product; validation typically involves actual testing and takes place after verifications are completed.

Difference between Verification and Validation:

Verification takes place before validation, and not vice versa. Verification evaluates documents, plans, code, requirements, and specifications. Validation, on the other hand, evaluates the product itself. The inputs of verification are checklists, issues lists, walkthroughs and inspection meetings, reviews and meetings. The input of validation, on the other hand, is the actual testing of an actual product. The output of verification is a nearly perfect set of documents, plans, specifications, and requirements document. The output of validation, on the other hand, is a nearly perfect, actual product.
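The distinction can be illustrated with a toy Python sketch (all artifact and function names are hypothetical): verification reviews the work products, here a requirements list against a design description, without running anything, while validation executes the actual product against the requirements.

```python
# Hypothetical requirements and design artifacts.
requirements = {"login", "search", "checkout"}
design_doc = {"login": "form with password hash",
              "search": "keyword index",
              "checkout": "cart total in cents"}

# VERIFICATION: review the documents -- every requirement must be
# covered by the design before any code is executed.
missing = requirements - design_doc.keys()
assert not missing, f"design does not cover: {missing}"
print("verification passed: design covers all requirements")

# VALIDATION: exercise the actual product to confirm intended behavior.
def checkout(cart_cents):          # the "actual product" in this sketch
    return sum(cart_cents)

assert checkout([199, 250]) == 449
print("validation passed: checkout behaves as required")
```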

Friday, August 12, 2011

Smoke Testing Vs. Sanity Testing:

a) Taking a test ride to test the basic features (functionalities) of the bike can be compared to “Smoke Testing” a product. In the above story, while taking the test ride, Mr. Tester was determining if the basic features of the bike were stable and acceptable. In a typical testing environment, when a build is received for testing, a smoke test is run to determine if the build is stable and can be considered for further testing. Testers usually do a Smoke Testing before accepting the build for further testing. The tester "touches" all areas of the application without getting too deep into the functionality.

b) Testing the bike performance in detail after bringing it home can be compared to “Sanity Testing” a product. Testing those features in detail was not possible in the showroom or while taking test ride. In a typical testing environment, when a new build is received with minor modifications, instead of running a thorough regression test suite, a sanity test is performed so as to determine that the build has actually fixed the issues and no further issue has been introduced by the fixes. Sanity testing is generally a subset of regression testing and a group of test cases are executed that are related with the changes made to the product.

Differences:
1) “Smoke Testing” is usually done on the nightly/interim build to test its stability. Therefore “Smoke Testing” is often called “Build Verification Testing” too. In contrast, “Sanity Testing” is usually done in later cycles, after thorough regression cycles are over. When multiple cycles of testing are executed, “Sanity Testing” is done towards the product release phase.
2) “Smoke Testing” is done following a shallow and wide approach where all the basic and major areas are tested without going too deep into the functionality. In contrast, “Sanity Testing” is usually a focused but limited form of regression testing, which follows a deep and narrow approach to test a particular functionality in detail.
3) “Smoke Testing” is done by developers before the build is released or by testers before accepting a build for further testing. On the other hand, “Sanity Testing” is done mostly by the testers.
4) “Smoke Tests” are mostly scripted (either written test cases or automated test scripts) whereas “Sanity Tests” are mostly non-scripted!
5) “Smoke Testing” can be compared with a normal health check-up of the product, whereas “Sanity Testing” can be compared with specialized tests to reveal possible problems with a particular functionality of the product!
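The shallow-and-wide versus deep-and-narrow distinction can be sketched in Python (all three "areas" below are toy stand-ins for a real application):

```python
# Hypothetical application with three major areas; names are illustrative.
def login(user): return user == "admin"
def search(q): return [q] if q else []
def checkout(items): return sum(items)

# SMOKE: shallow and wide -- touch every major area once to decide
# whether the build is stable enough to accept for further testing.
def smoke_test():
    assert login("admin")
    assert search("book") == ["book"]
    assert checkout([1, 2]) == 3

# SANITY: deep and narrow -- after a fix to checkout, exercise only
# that area in detail instead of rerunning the full regression suite.
def sanity_test_checkout():
    assert checkout([]) == 0
    assert checkout([5]) == 5
    assert checkout([1, 2, 3, 4]) == 10

smoke_test()
sanity_test_checkout()
print("build accepted: smoke and sanity checks passed")
```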



Differences Between QA and QC

Many people and organizations are confused about the difference between quality assurance (QA), quality control (QC), and testing. They are closely related, but they are different concepts. Since all three are necessary to effectively manage the risks of developing and maintaining software, it is important for software managers to understand the differences. They are defined below:
  • Quality Assurance: A set of activities designed to ensure that the development and/or maintenance process is adequate to ensure a system will meet its objectives.
  • Quality Control: A set of activities designed to evaluate a developed work product.
  • Testing: The process of executing a system with the intent of finding defects. (Note that the "process of executing a system" includes test planning prior to the execution of the test cases.)
QA activities ensure that the process is defined and appropriate. Methodology and standards development are examples of QA activities. A QA review would focus on the process elements of a project - e.g., are requirements being defined at the proper level of detail. In contrast, QC activities focus on finding defects in specific deliverables - e.g., are the defined requirements the right requirements. Testing is one example of a QC activity, but there are others such as inspections. Both QA and QC activities are generally required for successful software development.
Controversy can arise around who should be responsible for QA and QC activities -- i.e., whether a group external to the project management structure should have responsibility for either QA or QC. The correct answer will vary depending on the situation
  • While line management should have the primary responsibility for implementing the appropriate QA, QC and testing activities on a project, an external QA function can provide valuable expertise and perspective.
  • The amount of external QA/QC should be a function of the project risk and the process maturity of an organization. As organizations mature, management and staff will implement the proper QA and QC approaches as a matter of habit. When this happens, only minimal external guidance and review are needed.
Answer(2):

• Test Engineer (Quality Control): Testing the software, finding defects, and ensuring the application works as per requirements is the responsibility of Quality Control (the Test Engineer).
• Quality Assurance: Defining the process in the company, and redefining the process if required, is the responsibility of Quality Assurance.
• Quality Control works at the project level; Quality Assurance is responsible at the organization level.
• The main difference between QA and QC is that their goals are different: the goal of QA is working towards the prevention of errors, whereas the goal of QC is finding defects.
• QA is process oriented, whereas QC is product oriented.
• QA is involved in each and every phase of the SDLC, whereas QC is involved only in the testing phase of the SDLC.