Characteristics of Good Software

Software Quality Characteristics

While developing any kind of software product, the first question in a developer’s mind is, “What qualities should good software have?” Before going into the technical characteristics, I would like to state the obvious expectations one has of any software. First and foremost, a software product must meet all the requirements of the customer or end user. The cost of developing and maintaining the software should be low, and development should be completed within the specified time frame.

These were the obvious expectations from any project (and software development is a project in itself). Now let’s take a look at the software quality factors. These factors can be explained by the Software Quality Triangle. The three sets of characteristics of good application software are:
1)  Operational Characteristics
2)  Transition Characteristics
3)  Revision Characteristics

Software Quality Triangle

[Figure: Software Quality Triangle with its three sets of characteristics]

The 16 characteristics of good software are described below.

What Operational Characteristics should software have?

These are functionality-based factors, related to the ‘exterior quality’ of the software. The various operational characteristics of software are:

a) Correctness: The software should meet all the specifications stated by the customer.
b) Usability/Learnability: The effort or time required to learn how to use the software should be low. This makes the software user-friendly even for people with little IT experience.
c) Integrity: Just as medicines can have side effects, software may have a side effect: it may affect the working of another application. Quality software should not have such side effects.
d) Reliability: The software product should be free of defects, and it should not fail during execution.
e) Efficiency: This characteristic relates to the way the software uses the available resources. The software should make effective use of storage space and execute commands within the desired timing requirements.
f) Security: With the increase in security threats nowadays, this factor is gaining importance. The software should not have ill effects on data or hardware, and proper measures should be taken to keep data secure from external threats.
g) Safety: The software should not be hazardous to the environment or to life.

What are the Revision Characteristics of software?

These engineering-based factors relate to the ‘interior quality’ of the software, such as efficiency, documentation, and structure. They should be built into any good software. The various revision characteristics of software are:

a) Maintainability: The software should be easy to maintain for any kind of user.
b) Flexibility: Changes to the software should be easy to make.
c) Extensibility: It should be easy to add to the functions the software performs.
d) Scalability: It should be easy to upgrade the software for more work or for a greater number of users.
e) Testability: Testing the software should be easy.
f) Modularity: Software is said to be made of units or modules that are independent of each other; these modules are then integrated to build the final product. If the software is divided into separate, independent parts that can be modified and tested separately, it has high modularity.
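The modularity described above can be sketched with a minimal, hypothetical example: two independent functions, each testable in isolation, which are then integrated into a larger routine.

```python
# Two independent "modules": each can be tested on its own.
# All names here are illustrative, not from any real system.

def validate_age(age):
    """Validation module: returns True for a plausible human age."""
    return isinstance(age, int) and 0 <= age <= 150

def format_greeting(name):
    """Formatting module: builds a greeting string."""
    return "Hello, " + name + "!"

def register_user(name, age):
    """Integration point: combines the independent modules."""
    if not validate_age(age):
        raise ValueError("invalid age")
    return format_greeting(name)

# Each module is verified separately, then the integrated whole.
assert validate_age(30) is True
assert validate_age(200) is False
assert format_greeting("Ada") == "Hello, Ada!"
assert register_user("Ada", 30) == "Hello, Ada!"
```

Because `validate_age` and `format_greeting` have no dependency on each other, either one can be changed or replaced without touching the other.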

What are the Transition Characteristics of software?

a) Interoperability: The ability of the software to exchange information with other applications and make use of that information transparently.
b) Reusability: If the software’s code can be used, with some modification, for a different purpose, the software is said to be reusable.
c) Portability: The ability of the software to perform the same functions across all environments and platforms demonstrates its portability.

The importance of each of these factors varies from application to application. In systems where human life is at stake, integrity and reliability must be given prime importance. In business-related applications, usability and maintainability are the key factors to consider. Always remember that in software engineering, the quality of the software is everything, so try to deliver a product that has all of these characteristics and qualities.
Reference: http://www.ianswer4u.com/2011/10/characteristics-of-good-software.html

Software testing


Software testing is an investigation conducted to provide stakeholders with information about the quality of the product or service under test.[1] Software testing can also provide an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation.

Test techniques include, but are not limited to, the process of executing a program or application with the intent of finding software bugs.

1. Defects and failures

Not all software defects are caused by coding errors. One common source of expensive defects is requirement gaps, e.g., unrecognized requirements, that result in errors of omission by the program designer.[6] A common source of requirements gaps is non-functional requirements such as testability, scalability, maintainability, usability, performance, and security.

 

2. Testing methods

2.1   Static and Dynamic Testing

There are many approaches to software testing. Reviews, walkthroughs, or inspections are referred to as static testing, whereas actually executing programmed code with a given set of test cases is referred to as dynamic testing.

2.2 Black-Box and White-Box Testing

White-box testing (also known as clear box testing, glass box testing, transparent box testing, and structural testing) tests the internal structures or workings of a program.

Black-box testing treats the software as a “black box”, examining functionality without any knowledge of internal implementation. The tester is only aware of what the software is supposed to do, not how it does it.
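The contrast can be illustrated with a small hypothetical function. A black-box test only checks the specified behavior; a white-box test chooses inputs so that every internal branch of the code is exercised.

```python
def absolute_value(x):
    # Two internal branches: white-box tests aim to cover both.
    if x < 0:
        return -x
    return x

# Black-box view: verify the behavior stated in the specification,
# with no knowledge of how absolute_value is implemented.
assert absolute_value(5) == 5
assert absolute_value(-5) == 5

# White-box view: pick inputs that exercise each internal branch.
assert absolute_value(-1) == 1   # covers the x < 0 branch
assert absolute_value(0) == 0    # covers the fall-through branch
```

The black-box assertions would stay valid even if the implementation changed; the white-box assertions are chosen from the code's structure and may need updating when it changes.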

               

3. Testing levels

3.1 Unit testing

Unit testing, also known as component testing, refers to tests that verify the functionality of a specific section of code, usually at the function level.
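A minimal unit test, using Python's standard `unittest` module, might look like this (the `add` function is a hypothetical unit under test):

```python
import unittest

def add(a, b):
    """The unit under test (a hypothetical example function)."""
    return a + b

class TestAdd(unittest.TestCase):
    """Each test method verifies one behavior of the unit."""

    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)
```

Saved in a module, these tests can be run with `python -m unittest <module>`.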

3.2 Integration testing

Integration testing is any type of software testing that seeks to verify the interfaces between components against a software design.

3.3 System testing

System testing tests a completely integrated system to verify that it meets its requirements

4. Testing approach

4.1 Top-down and bottom-up

5. Objectives of testing

5.1 Installation testing

5.2 Compatibility testing

5.3 Smoke and sanity testing

Sanity testing determines whether it is reasonable to proceed with further testing.

Smoke testing is used to determine whether there are serious problems with a piece of software, for example as a build verification test.
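A build verification smoke test can be sketched as a short list of "does it run at all?" checks. The application interface below (`start()`, `ping()`) is hypothetical, chosen only to make the idea concrete:

```python
def smoke_test(app):
    """Build verification: does the application start and respond at all?
    `app` is assumed to expose hypothetical start() and ping() methods."""
    checks = [
        ("starts", app.start()),
        ("responds", app.ping() == "pong"),
    ]
    failed = [name for name, ok in checks if not ok]
    return failed  # an empty list means the build passes the smoke test

# Minimal stand-in application used for demonstration.
class DummyApp:
    def start(self):
        return True
    def ping(self):
        return "pong"

assert smoke_test(DummyApp()) == []
```

If `smoke_test` returns a non-empty list, the build is rejected before any deeper testing is attempted.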

5.4 Regression testing

5.5 Alpha testing

5.6 Beta testing

A sample testing cycle

Although variations exist between organizations, there is a typical cycle for testing. The sample below is common among organizations employing the Waterfall development model.

  • Requirements analysis: Testing should begin in the requirements phase of the software development life cycle. During the design phase, testers work with developers in determining what aspects of a design are testable and with what parameters those tests work.
  • Test planning: Test strategy, test plan, and testbed creation. Since many activities will be carried out during testing, a plan is needed.
  • Test development: Test procedures, test scenarios, test cases, test datasets, and test scripts to use in testing the software.
  • Test execution: Testers execute the software based on the plans and test documents then report any errors found to the development team.
  • Test reporting: Once testing is completed, testers generate metrics and make final reports on their test effort and whether or not the software tested is ready for release.
  • Test result analysis: Or defect analysis, done by the development team, usually along with the client, in order to decide which defects should be assigned, fixed, rejected (i.e., the software is found to be working properly), or deferred to be dealt with later.
  • Defect retesting: Once a defect has been dealt with by the development team, it is retested by the testing team. This is also known as resolution testing.
  • Regression testing: It is common to have a small test program built from a subset of tests, for each integration of new, modified, or fixed software, in order to ensure that the latest delivery has not broken anything and that the software product as a whole is still working correctly.
  • Test closure: Once the test meets the exit criteria, activities such as capturing the key outputs, lessons learned, results, logs, and documents related to the project are archived and used as a reference for future projects.
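The regression-testing step above, selecting a small subset of the full suite for each delivery, can be sketched as follows. The test names, tags, and selection rule are all illustrative assumptions:

```python
# Sketch: pick a fast regression subset from a full test suite.
# Test names and tags are hypothetical.

FULL_SUITE = {
    "test_login":         {"tags": {"auth", "smoke"}},
    "test_checkout":      {"tags": {"payment"}},
    "test_search":        {"tags": {"smoke"}},
    "test_report_export": {"tags": {"reports"}},
}

def regression_subset(suite, changed_areas):
    """Return tests whose tags overlap the areas touched by the change,
    plus everything tagged 'smoke' as a baseline."""
    relevant = changed_areas | {"smoke"}
    return sorted(
        name for name, meta in suite.items()
        if meta["tags"] & relevant
    )

# A change to the payment area triggers the payment test plus the
# smoke-tagged baseline, but skips the unrelated reports test.
assert regression_subset(FULL_SUITE, {"payment"}) == [
    "test_checkout", "test_login", "test_search",
]
```

Real test runners (for example, tag/marker selection in common frameworks) provide this kind of filtering; the point here is only the idea of running a targeted subset on every delivery.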

 

 

 

 

Reference:

http://en.wikipedia.org/wiki/Software_testing

 

 

 

 

 

Defect Priority

 

Defect Priority (Bug Priority) indicates the importance or urgency of fixing a defect. Though priority may be initially set by the Software Tester, it is usually finalized by the Project/Product Manager.

Priority can be categorized into the following levels:

  • Urgent: Must be fixed in the next build.
  • High: Must be fixed in one of the upcoming builds and must be included in the release.
  • Medium: May be fixed after the release / in the next release.
  • Low: May or may not be fixed at all.

Priority is also denoted as P1 for Urgent, P2 for High and so on.
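The P1 to P4 labels above can be sketched as a small enumeration. This is purely illustrative; real trackers define their own levels:

```python
from enum import IntEnum

class Priority(IntEnum):
    """Defect priority levels; lower number = more urgent (illustrative)."""
    URGENT = 1  # P1: must be fixed in the next build
    HIGH   = 2  # P2: must be fixed before the release
    MEDIUM = 3  # P3: may be fixed after the release / in the next release
    LOW    = 4  # P4: may or may not be fixed at all

def label(priority):
    """Render the conventional P1..P4 shorthand."""
    return f"P{priority.value}"

assert label(Priority.URGENT) == "P1"
assert Priority.HIGH < Priority.MEDIUM  # orderable: 2 is more urgent than 3
```

Using `IntEnum` makes the levels directly comparable, which is convenient when sorting a defect backlog by urgency.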

NOTE: Priority is quite a subjective decision; do not take the categorizations above as authoritative. However, at a high level, priority is determined by considering the following:

  • Business need for fixing the defect
  • Severity/Impact
  • Probability/Visibility
  • Available Resources (Developers to fix and Testers to verify the fixes)
  • Available Time (Time for fixing, verifying the fixes and performing regression tests after the verification of the fixes)

ISTQB Definition:

  • priority: The level of (business) importance assigned to an item, e.g. defect.

Defect priority needs to be managed carefully in order to avoid product instability, especially when there is a large number of defects.


Defect Severity

Defect Severity or Impact is a classification of a software defect (bug) that indicates the degree of negative impact on the quality of the software.

ISTQB Definition

  • severity: The degree of impact that a defect has on the development or operation of a component or system.

DEFECT SEVERITY CLASSIFICATION

The actual terminology, and its meaning, can vary depending on people, projects, organizations, or defect-tracking tools, but the following is a commonly accepted classification.

  • Critical: The defect affects critical functionality or critical data. It does not have a workaround. Example: Unsuccessful installation, complete failure of a feature.
  • Major: The defect affects major functionality or major data. It has a workaround, but the workaround is not obvious and is difficult. Example: A feature is not functional in one module, but the task is achievable if 10 complicated indirect steps are followed in another module.
  • Minor: The defect affects minor functionality or non-critical data. It has an easy workaround. Example: A minor feature that is not functional in one module but the same task is easily doable from another module.
  • Trivial: The defect does not affect functionality or data. It does not even need a workaround. It does not impact productivity or efficiency. It is merely an inconvenience. Example: Petty layout discrepancies, spelling/grammatical errors.
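The classification above turns on two questions: is there a workaround, and how hard is it? That decision can be sketched as a small helper (the rules below simply transcribe the definitions and are illustrative, not a standard):

```python
# Sketch: map the classification axes described above to a severity
# label. The decision rules are illustrative, not from any standard.

def classify_severity(affects_function, has_workaround, workaround_easy):
    if not affects_function:
        return "Trivial"   # no functional or data impact at all
    if not has_workaround:
        return "Critical"  # functionality blocked, no way around it
    if workaround_easy:
        return "Minor"     # an easy workaround exists
    return "Major"         # a workaround exists but is difficult

assert classify_severity(True, False, False) == "Critical"
assert classify_severity(True, True, False) == "Major"
assert classify_severity(True, True, True) == "Minor"
assert classify_severity(False, True, True) == "Trivial"
```

In practice, severity is a judgment call made by the tester; a helper like this only makes the decision criteria explicit and consistent.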

Reference : http://softwaretestingfundamentals.com/defect-severity/