
Thread: Reliability testing

  1. #1
    Geek_Guest
    Guest

    Reliability testing

    I am interested in knowing how reliability testing is organized and performed, and what documentation is required for it.

    Question asked by visitor iravski


  2. #2
    Contributing Member
    Join Date: Feb 2006
    Answers: 85

    Re: Reliability testing

    Well, reliability testing is mostly performed on the system side, and much of it depends on the hardware and the system under test.

    The purpose of reliability testing is to discover potential problems with the design as early as possible and, ultimately, provide confidence that the system meets its reliability requirements.

    Reliability testing may be performed at several levels. Complex systems may be tested at component, circuit board, unit, assembly, subsystem and system levels. (The test level nomenclature varies among applications.) For example, performing environmental stress screening tests at lower levels, such as piece parts or small assemblies, catches problems before they cause failures at higher levels.

    Most hardware unreliability is the result of a component or material failure that results in the system not performing its intended function. Repairing or replacing the hardware component restores the system to its original unfailed state. However, software does not fail in the same sense that hardware fails. Instead, software unreliability is the result of unanticipated results of software operations. Even relatively small software programs can have astronomically large combinations of inputs and states that are infeasible to exhaustively test. Restoring software to its original state only works until the same combination of inputs and states results in the same unintended result.
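
    To make the point about input combinations concrete, here is a rough Python sketch (the parameter counts are hypothetical, chosen only for illustration) of how quickly the input space of even a small routine grows:

        # Hypothetical example: a small routine with a handful of bounded inputs.
        # Even modest parameter ranges multiply into an input space that is
        # infeasible to test exhaustively.

        num_booleans = 10        # ten on/off flags      -> 2**10 states
        int_range = 65_536       # one 16-bit integer    -> 65,536 values
        enum_values = 5          # one 5-value enum      -> 5 values

        inputs = (2 ** num_booleans) * int_range * enum_values
        print(f"Input combinations: {inputs:,}")            # 335,544,320

        # Combine those inputs with a single 32-bit internal state variable
        # and the combined input/state space explodes:
        states = inputs * (2 ** 32)                         # ~1.4e18
        years = states / 1_000 / 31_557_600                 # at 1,000 tests/sec
        print(f"Exhaustive testing: ~{years / 1e6:.1f} million years")  # ~45.7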

    Some in the software reliability engineering community believe the statistical models used in hardware reliability are nevertheless useful as a measure of software reliability, because they describe what we experience with software: the longer you run software, the higher the probability you will eventually use it in an untested manner and find a latent defect that results in a failure.
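
    As one small illustration of how such a model behaves, here is a rough Python sketch of the simplest case, a constant failure intensity lambda with exponential reliability R(t) = exp(-lambda * t); the value of lambda is hypothetical:

        import math

        def reliability(failure_intensity: float, execution_time: float) -> float:
            """Probability of failure-free operation for execution_time hours,
            assuming a constant failure intensity (exponential model):
                R(t) = exp(-lambda * t)
            """
            return math.exp(-failure_intensity * execution_time)

        lam = 0.002  # hypothetical: 0.002 failures per CPU-hour
        for hours in (10, 100, 1_000):
            print(f"R({hours:>5} h) = {reliability(lam, hours):.4f}")

        # R(   10 h) = 0.9802   Reliability falls as execution time grows,
        # R(  100 h) = 0.8187   matching the intuition above: the longer you
        # R( 1000 h) = 0.1353   run the software, the likelier a failure.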

    A common reliability metric is the number of software faults, usually expressed as faults per thousand lines of code. This metric, along with software execution time, is key to most software reliability models and estimates. The theory is that the software reliability increases as the number of faults (or fault density) goes down. Establishing a direct connection between fault density and mean-time-between-failure is difficult, however, because of the way software faults are distributed in the code, their severity, and the probability of the combination of inputs necessary to encounter the fault. Nevertheless, fault density serves as a useful indicator for the reliability engineer. Other software metrics, such as complexity, are also used.
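
    Computing the fault-density metric itself is straightforward. A rough sketch (the module names and counts are made up for illustration):

        def fault_density(faults_found: int, lines_of_code: int) -> float:
            """Faults per thousand lines of code (faults/KLOC)."""
            return faults_found / (lines_of_code / 1_000)

        # Hypothetical per-module data: (name, faults found in test, LOC)
        modules = [
            ("parser",    14, 12_500),
            ("scheduler",  3,  4_200),
            ("driver",    22,  9_800),
        ]

        for name, faults, loc in modules:
            print(f"{name:<10} {fault_density(faults, loc):5.2f} faults/KLOC")

    Tracked across builds, a falling fault density serves as the "useful indicator" mentioned above, even though it does not translate directly into a mean-time-between-failure figure.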

    Thanks,
    Kunal.

