Assignment #01 :: Definitions and Differences of Validation and Verification (collected material), 2008-03-22 22:03

Validation
Confirmation, by examination and through the provision of objective evidence, that the requirements for a specific intended use or application of a component or system have been fulfilled. [ISO 9000]
Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled. [ISO 9000]
Verification
Confirmation, by examination and through the provision of objective evidence, that specified requirements have been fulfilled. [ISO 9000]
Confirmation by examination and through the provision of objective evidence that specified requirements have been fulfilled. [ISO 9000]
Validation vs. Verification
Validation is typically confirmed through dynamic testing, whereas verification is typically confirmed through static testing such as reviews and inspections.
This is a field whose concepts are not well settled, which makes it difficult. Still, if you think through each scholar's definition, they converge on a single idea, and knowing that core concept should be enough. In my personal opinion, each organization can redefine the terms to fit its own work and tailor them to its development process.
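The static/dynamic split described above can be made concrete with a small sketch. The example source and the docstring rule are invented for illustration: a "verification-style" static check examines the work product without running it, while a "validation-style" dynamic check executes it.

```python
import ast

# Invented example source; "discount" is a hypothetical function under test.
SOURCE = '''
def discount(price, rate):
    """Apply a percentage discount to a price."""
    return price * (1 - rate / 100)
'''

# Verification (static testing): examine the work product without running it,
# e.g. a review-style rule that every function must carry a docstring.
tree = ast.parse(SOURCE)
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        assert ast.get_docstring(node), f"{node.name} lacks a docstring"

# Validation (dynamic testing): execute the code and check its behaviour
# against the intended use: a 10% discount on 200 should yield 180.
namespace = {}
exec(SOURCE, namespace)
assert namespace["discount"](200, 10) == 180
```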
<Source 1> "The Free On-line Dictionary of Computing", by Denis Howe
Verification:
The process of determining whether or not the products of a given phase in the life-cycle fulfil a set of established requirements.
Validation:
The stage in the software life-cycle at the end of the development process where software is evaluated to ensure that it complies with the requirements.
<Source 2> "Software Engineering: A Practitioner's Approach", by Roger S. Pressman:
Software Testing is one element of a broader topic that is often referred to as verification and validation.
Verification refers to the set of activities that ensure that software correctly implements a specific function.
Validation refers to a different set of activities that ensure that the software that has been built is traceable to customer requirements.
Boehm states this another way:
Verification: "Are we building the product right?"
Validation: "Are we building the right product?"
<Source 3> "Effective Methods for Software Testing", by William E. Perry:
A tester uses verification methods to ensure the system complies with an organization's standards and processes, relying on review or nonexecutable methods.
Validation physically ensures that the system operates according to plan by executing the system functions through a series of tests that can be observed and evaluated.
Verification answers the question, "Did we build the system right?", while validation addresses, "Did we build the right system?".
<Source 4> "Software Engineering...", by Boehm:
The definition of validation here implies that it involves only dynamic testing, which is untrue.
In the book "Software Engineering: Theory and Practice", Shari Lawrence Pfleeger says:
"Validation ensures that the system has implemented all of the requirements, so that each system function can be traced back to a particular requirement; system test also verifies the requirements. Verification ensures that each function works correctly. That is, validation makes sure that the developer is building the right product, and verification checks the quality of the implementation."
In "Testing Computer Software", by Cem Kaner, Jack Falk, and Hung Q. Nguyen:
"You verify a program by checking it against the most closely related design documents or specifications; if there is an external specification, that is what the function test verifies against."
"You validate a program by checking it against the published user or system requirements. System testing and integration testing are examples of validation tests."
Here they talk about it in an entirely different manner.
In the book "Software Testing in the Real World", Ed Kit says:
"Verification is a human examination or review of the work product."
And for validation he quotes the IEEE/ANSI standard definition:
"Validation is the process of evaluating a system or component at the end of the development process to determine whether it satisfies specified requirements."
Ed Kit also classifies activities like inspections and code reviews as verification activities, whereas unit, integration, functional, system, and acceptance testing are validation activities.
But Watts Humphrey, in "Managing the Software Process", refers to Glenford Myers' book "Software Reliability: Principles and Practices", where the terms are defined as follows:
"Verification: an attempt to find errors by executing a program in a test or simulated environment (it is now preferable to view verification as the process of proving the program's correctness)."
"Validation: an attempt to find errors by executing a program in a real environment."
According to this, everything except acceptance testing is verification.
In "The Handbook of Software Quality Assurance", G. Gordon Schulmeyer and James I. McManus discuss V&V in more depth. All the different definitions are given, and the authors note that there is little difference between them and that all are acceptable within the handbook:
IEEE Glossary definition:
Verification is the process of evaluating a system to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase.
Validation is the process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements.
EIA/IEEE Draft Standard 12207-1996 definition:
Verification is the confirmation by examination and provision of objective evidence that specified requirements have been fulfilled.
Note: In design and development, verification concerns the process of examining the result of a given activity to determine conformity with the stated requirements for that activity.
Validation is the confirmation by examination and provision of objective evidence that the particular requirements for a specified intended use are fulfilled.
Note: In design and development, validation concerns the process of examining a product to determine conformity with user needs.
They also treat testing and evaluation as separate activities and relate each of them to the verification and validation activities.
In the same book they also separately mention the famous definitions by Boehm:
Verification: "Are we building the product right?"
Validation: "Are we building the right product?"
I feel William Perry says the same, and all the authors quoted earlier point to the same idea.
I feel verification is the activity where one checks that standards and the like are being followed; one of the major tools used during this phase is the checklist. Validation, on the other hand, is the activity where one checks whether the product works the way it should.
But I think all these definitions are mostly of academic interest. It is just like the separation between the producer's view of quality and the customer's view of quality. We should be concentrating on improving testing efficiency, tracking defects better, and working in coordination with development.
[Source] "Validation & Verification: definitions compiled" | Author: 날뫼
----------------------------------------------------------------------------------------

Verification and Validation (software)
From Wikipedia, the free encyclopedia
In software project management, software testing, and software engineering, Verification and Validation (V&V) is the process of checking that a software system meets specifications and that it fulfils its intended purpose. It is normally part of the software testing process of a project. In the pharmaceutical industry, verification involves testing the suitability of well-established procedures or (compendial) methods, whereas validation ranges from cross validation, empirical validation, periodic partial validation, internal/external validation, and competence validation by nature, to cleaning validation, process validation, equipment validation, and documentation validation by task.
Definitions
Also known as software quality control
Verification ensures that the final product satisfies or matches the original design (low-level checking) — i.e., you built the product right. This is done through static testing.
Validation checks that the product design satisfies or fits the intended usage (high-level checking) — i.e., you built the right product. This is done through dynamic testing and other forms of review.
According to the Capability Maturity Model (CMMI-SW v1.1), “Validation - The process of evaluating software during or at the end of the development process to determine whether it satisfies specified requirements. [IEEE-STD-610] Verification- The process of evaluating software to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase. [IEEE-STD-610]."
In other words, verification is ensuring that the product has been built according to the requirements and design specifications, while validation ensures that the product actually meets the user's needs, and that the specifications were correct in the first place. Verification ensures that ‘you built it right’. Validation confirms that the product, as provided, will fulfill its intended use. Validation ensures that ‘you built the right thing’.
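A tiny illustration of "built it right" versus "built the right thing", with an invented function: verification checks the implementation against its specification, while validation asks whether that specification served the user's need at all.

```python
# Hypothetical function; its "spec" calls for US-dollar formatting.
def format_price(cents):
    """Spec: render a non-negative integer number of cents as '$D.CC'."""
    return f"${cents // 100}.{cents % 100:02d}"

# Verification ("you built it right"): the implementation matches its spec.
assert format_price(1999) == "$19.99"
assert format_price(5) == "$0.05"

# Validation ("you built the right thing") asks a different question: if the
# users actually needed euro prices, the spec itself was wrong, and the
# product fails validation even though verification passed.
```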
Related concepts
Both verification and validation are related to the concepts of quality and of software quality assurance. By themselves, verification and validation do not guarantee software quality; planning, traceability, configuration management and other aspects of software engineering are required.
Classification of methods
In mission-critical systems where flawless performance is absolutely necessary, formal methods can be used to ensure the correct operation of a system. However, often for non-mission-critical systems, formal methods prove to be very costly and an alternative method of V&V must be sought out. In this case, syntactic methods are often used.
Test cases
Test cases prepared by the QA team are used for verification: was the process followed to develop the final product right? Test cases prepared by the QC team are used for validation: was the product built according to the user's requirements? Other methods, such as reviews used early in the software development life cycle, also provide for validation. Verification can be regarded as part of the validation process.
Independent Verification and Validation
Verification and validation often is carried out by a separate group from the development team; in this case, the process is called "Independent Verification and Validation", or IV&V.
See also
- Cross-validation
- Independent Verification and Validation Facility
- International Software Testing Qualifications Board
- Software verification
- Formal verification
- Validation
- Validation (drug manufacture)
- Verification and Validation - General
Title: [General] The V&V Concept (Verification & Validation) and the V-Model
Source: http://sten.or.kr/bbs/board.php?bo_table=column&wr_id=2&page=2
[Summary]
The V&V concept is a basic and central concept in software testing, yet one that even test engineers frequently confuse. It is explained through the V-model, and studying it carefully is a great help in establishing the fundamentals of software testing. This article reproduces, almost verbatim, a piece published in the STEN Journal. [1]
<"Verification" and "Validation">
"Verification" and "Validation" are easy to confuse. To elaborate, "verification" is the process of evaluating a component or system to determine whether the products of a given development phase satisfy the conditions established at the start of that phase (IEEE/ANSI).
[Figure 1] below shows the V-Model Testing Framework. "Verification" is often called human testing, because it mostly takes the form of reviews of work products. Verification is the process of evaluating, reviewing, and inspecting artifacts such as the requirements specification, design specification, and code before full-scale implementation begins; concrete techniques include inspections, walkthroughs, and buddy checks.
"Validation" is the process of evaluating a component or system during or at the end of development to determine whether it satisfies the specified requirements (IEEE/ANSI). Because validation actually executes the software, it is called computer-based testing. Validation activities divide broadly into low-level testing and high-level testing.
Low-level testing exercises individual program units, one at a time or in combination, and requires detailed knowledge of the program's internal structure; unit testing and integration testing belong here.
High-level testing exercises the completed product as a whole. Unlike low-level testing, which is generally performed by the internal development staff, high-level testing is performed, for objectivity, by a third-party organization independent of the development team. By purpose, high-level testing is divided into usability testing, function testing, system testing, and acceptance testing. [1]
[Figure 1] V-Model Testing Framework
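As a rough sketch of the low-level side, assuming two invented units: unit testing exercises each unit on its own, and integration testing exercises them together.

```python
# Two invented units: a parser and an adder for amounts in cents.
def parse_amount(text):
    """Convert a decimal string like '12.50' to integer cents."""
    whole, _, frac = text.partition(".")
    return int(whole) * 100 + int((frac + "00")[:2])

def total(amounts):
    """Sum a sequence of cent amounts."""
    return sum(amounts)

# Low-level, unit testing: each unit exercised in isolation, with
# knowledge of its internals.
assert parse_amount("12.50") == 1250
assert parse_amount("3") == 300
assert total([1250, 300]) == 1550

# Low-level, integration testing: the units combined.
assert total(parse_amount(t) for t in ["12.50", "3"]) == 1550
```

High-level testing (system and acceptance) would instead exercise the finished product as a whole, typically by people independent of the developers.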
<The V-Model>
The types and process of test activities are closely tied to the software development phases. Testing is not merely a matter of finding simple coding errors in a program; it also covers errors in requirements analysis, design, and the other development-phase activities, so it needs to be mapped onto the development process.
The V-model, developed with this in mind, is a model of the software life cycle that takes the existing waterfall model and adds emphasis on system verification and testing. In [Figure 1], the phases form a V-shaped symmetry centered on coding. The V-model is read as follows: unit tests are planned, designed, and performed against the implementation; integration tests against the design; system tests against the analysis; and user acceptance tests against the requirements. Beyond this, the model also guarantees traceability: when an error is found at a given test level, one can go back to the corresponding requirements specification, analysis, design, or implementation on the left-hand side. The V-model thus focuses on the execution activities and on verifying their results. [2]
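The phase-to-test-level pairing the V-model describes can be sketched as a small lookup table; the phase names follow the text, not any particular standard.

```python
# Left-hand development phases mapped to the right-hand test levels,
# as the V shape pairs them.
V_MODEL = {
    "requirements": "acceptance (user) testing",
    "analysis": "system testing",
    "design": "integration testing",
    "implementation": "unit testing",
}

def phase_for(test_level):
    """Traceability: find the phase whose output a given test level checks."""
    for phase, level in V_MODEL.items():
        if level == test_level:
            return phase
    raise KeyError(test_level)

# A failure found during system testing traces back to the analysis phase.
assert phase_for("system testing") == "analysis"
```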
<The Multiple V-Model>
The Multiple V-model shown in [Figure 2] is a model well suited to embedded systems development; it links three instances of the conventional V-model, covering the model, the prototype, and the final product. In the first stage, the model stage, the required system behavior is simulated on a PC. Once the model stage is judged adequate, the prototype stage generates source code from the model, embeds it in experimental hardware, and gradually migrates to the real hardware, developing toward the final product. Each stage contains the sequential development steps of design, implementation, and test.
[Figure 2] The Multiple V-Model
The Multiple V-model is well suited to defining such work for embedded systems because, as described above, it divides development into the same three stages as embedded development (model, prototype, final product), allowing test design techniques and test levels to be applied according to the characteristics of each stage. Moreover, when requirements change or errors force modifications, making the change at the prototype stage rather than in the final product, and at the model stage rather than in the prototype, can be expected to reduce both time and cost. [2]
<Related Issues>
Although the V-model is a central concept in software testing, it has limitations because it is tightly coupled to the waterfall development methodology. How to apply the V-model when developing with the UP (Unified Process) has not been studied, which may confuse developers and test engineers.
This post will be updated on this issue as further material becomes available.
[References]
[1] 이범우, "The Evolution of Software Testing" (소프트웨어 테스팅의 진화과정), STEN Journal, Vol. II, January 2005, pp. 8-11.
[2] 서광익, "The Multiple V-Model for Embedded Software Testing and Its Application" (임베디드 소프트웨어 테스트를 위한 Multiple V-모델과 적용), STEN Journal, Vol. II, January 2005, pp. 34-38.
[3] Gelperin, D., and B. Hetzel, "The Growth of Software Testing," CACM, Vol. 31, No. 6, 1988, pp. 687-695.
[Summary]
The V&V concept is a basic and central concept in software testing and can be explained through the V-Model Testing Framework. Being so central, it demands a precise understanding. If you understand the kinds of tests described above in the course of explaining V&V, you have already understood a large part of software testing.
The next article will cover the concept of the software test process. As the title suggests, testing, like development, follows a defined process, and following that process and managing its work products is what yields good test results.
Once these conceptual explanations have progressed to the point where you can state precisely what testing is, concrete test techniques will be introduced in earnest.
For reference, developer-focused testing occupies an important place in the overall testing landscape; overseas there are even training courses called "testing for developers". I hope you will keep up an interest in testing and become a more accomplished developer for it. Please always keep in mind that the goal of your organization is not coding-level development but the development of high-quality software products.
Source: http://satc.gsfc.nasa.gov/assure/agbsec5.txt

V. VERIFICATION AND VALIDATION

A. Concepts and Definitions

Software Verification and Validation (V&V) is the process of ensuring that software being developed or changed will satisfy functional and other requirements (validation) and each step in the process of building the software yields the right products (verification). The differences between verification and validation are unimportant except to the theorist; practitioners use the term V&V to refer to all of the activities that are aimed at making sure the software will function as required.

V&V is intended to be a systematic and technical evaluation of software and associated products of the development and maintenance processes. Reviews and tests are done at the end of each phase of the development process to ensure software requirements are complete and testable and that design, code, documentation, and data satisfy those requirements.

B. Activities

The two major V&V activities are reviews, including inspections and walkthroughs, and testing.

1. Reviews, Inspections, and Walkthroughs

Reviews are conducted during and at the end of each phase of the life cycle to determine whether established requirements, design concepts, and specifications have been met. Reviews consist of the presentation of material to a review board or panel. Reviews are most effective when conducted by personnel who have not been directly involved in the development of the software being reviewed.

Informal reviews are conducted on an as-needed basis. The developer chooses a review panel and provides and/or presents the material to be reviewed. The material may be as informal as a computer listing or hand-written documentation.

Formal reviews are conducted at the end of each life cycle phase. The acquirer of the software appoints the formal review panel or board, who may make or affect a go/no-go decision to proceed to the next step of the life cycle.
Formal reviews include the Software Requirements Review, the Software Preliminary Design Review, the Software Critical Design Review, and the Software Test Readiness Review.

An inspection or walkthrough is a detailed examination of a product on a step-by-step or line-of-code by line-of-code basis. The purpose of conducting inspections and walkthroughs is to find errors. The group that does an inspection or walkthrough is composed of peers from development, test, and quality assurance.

2. Testing

Testing is the operation of the software with real or simulated inputs to demonstrate that a product satisfies its requirements and, if it does not, to identify the specific differences between expected and actual results. There are varied levels of software tests, ranging from unit or element testing through integration testing and performance testing, up to software system and acceptance tests.

a. Informal Testing

Informal tests are done by the developer to measure the development progress. "Informal" in this case does not mean that the tests are done in a casual manner, just that the acquirer of the software is not formally involved, that witnessing of the testing is not required, and that the prime purpose of the tests is to find errors. Unit, component, and subsystem integration tests are usually informal tests.

Informal testing may be requirements-driven or design-driven. Requirements-driven or black box testing is done by selecting the input data and other parameters based on the software requirements and observing the outputs and reactions of the software. Black box testing can be done at any level of integration. In addition to testing for satisfaction of requirements, some of the objectives of requirements-driven testing are to ascertain:

- Computational correctness.
- Proper handling of boundary conditions, including extreme inputs and conditions that cause extreme outputs.
- State transitioning as expected.
- Proper behavior under stress or high load.
- Adequate error detection, handling, and recovery.

Design-driven or white box testing is the process where the tester examines the internal workings of code. Design-driven testing is done by selecting the input data and other parameters based on the internal logic paths that are to be checked. The goals of design-driven testing include ascertaining correctness of:

- All paths through the code. For most software products, this can be feasibly done only at the unit test level.
- Bit-by-bit functioning of interfaces.
- Size and timing of critical elements of code.

b. Formal Tests

Formal testing demonstrates that the software is ready for its intended use. A formal test should include an acquirer-approved test plan and procedures, quality assurance witnesses, a record of all discrepancies, and a test report. Formal testing is always requirements-driven, and its purpose is to demonstrate that the software meets its requirements.

Each software development project should have at least one formal test, the acceptance test that concludes the development activities and demonstrates that the software is ready for operations. In addition to the final acceptance test, other formal testing may be done on a project. For example, if the software is to be developed and delivered in increments or builds, there may be incremental acceptance tests. As a practical matter, any contractually required test is usually considered a formal test; others are "informal."

After acceptance of a software product, all changes to the product should be accepted as a result of a formal test. Post acceptance testing should include regression testing. Regression testing involves rerunning previously used acceptance tests to ensure that the change did not disturb functions that have previously been accepted.

C. Verification and Validation During the Software Acquisition Life Cycle

The V&V Plan should cover all V&V activities to be performed during all phases of the life cycle.
The V&V Plan Data Item Description (DID) may be rolled out of the Product Assurance Plan DID contained in the SMAP Management Plan Documentation Standard and DID.

1. Software Concept and Initiation Phase

The major V&V activity during this phase is to develop a concept of how the system is to be reviewed and tested. Simple projects may compress the life cycle steps; if so, the reviews may have to be compressed. Test concepts may involve simple generation of test cases by a user representative or may require the development of elaborate simulators and test data generators. Without an adequate V&V concept and plan, the cost, schedule, and complexity of the project may be poorly estimated due to the lack of adequate test capabilities and data.

2. Software Requirements Phase

V&V activities during this phase should include:

- Analyzing software requirements to determine if they are consistent with, and within the scope of, system requirements.
- Assuring that the requirements are testable and capable of being satisfied.
- Creating a preliminary version of the Acceptance Test Plan, including a verification matrix, which relates requirements to the tests used to demonstrate that requirements are satisfied.
- Beginning development, if needed, of test beds and test data generators.
- The phase-ending Software Requirements Review (SRR).

3. Software Architectural (Preliminary) Design Phase

V&V activities during this phase should include:

- Updating the preliminary version of the Acceptance Test Plan and the verification matrix.
- Conducting informal reviews and walkthroughs or inspections of the preliminary software and data base designs.
- The phase-ending Preliminary Design Review (PDR) at which the allocation of requirements to the software architecture is reviewed and approved.

4. Software Detailed Design Phase

V&V activities during this phase should include:

- Completing the Acceptance Test Plan and the verification matrix, including test specifications and unit test plans.
- Conducting informal reviews and walkthroughs or inspections of the detailed software and data base designs.
- The Critical Design Review (CDR) which completes the software detailed design phase.

5. Software Implementation Phase

V&V activities during this phase should include:

- Code inspections and/or walkthroughs.
- Unit testing software and data structures.
- Locating, correcting, and retesting errors.
- Development of detailed test procedures for the next two phases.

6. Software Integration and Test Phase

This phase is a major V&V effort, where the tested units from the previous phase are integrated into subsystems and then the final system. Activities during this phase should include:

- Conducting tests per test procedures.
- Documenting test performance, test completion, and conformance of test results versus expected results.
- Providing a test report that includes a summary of nonconformances found during testing.
- Locating, recording, correcting, and retesting nonconformances.
- The Test Readiness Review (TRR), confirming the product's readiness for acceptance testing.

7. Software Acceptance and Delivery Phase

V&V activities during this phase should include:

- By test, analysis, and inspection, demonstrating that the developed system meets its functional, performance, and interface requirements.
- Locating, correcting, and retesting nonconformances.
- The phase-ending Acceptance Review (AR).

8. Software Sustaining Engineering and Operations Phase

Any V&V activities conducted during the prior seven phases are conducted during this phase as they pertain to the revision or update of the software.

D. Independent Verification and Validation

Independent Verification and Validation (IV&V) is a process whereby the products of the software development life cycle phases are independently reviewed, verified, and validated by an organization that is neither the developer nor the acquirer of the software. The IV&V agent should have no stake in the success or failure of the software.
The IV&V agent's only interest should be to make sure that the software is thoroughly tested against its complete set of requirements.

The IV&V activities duplicate the V&V activities step-by-step during the life cycle, with the exception that the IV&V agent does no informal testing. If there is an IV&V agent, the formal acceptance testing may be done only once, by the IV&V agent. In this case, the developer will do a formal demonstration that the software is ready for formal acceptance.

E. Techniques and Tools

Perhaps more tools have been developed to aid the V&V of software (especially testing) than any other software activity. The tools available include code tracers, special purpose memory dumpers and formatters, data generators, simulations, and emulations. Some tools are essential for testing any significant set of software, and, if they have to be developed, may turn out to be a significant cost and schedule driver.

An especially useful technique for finding errors is the formal inspection. Formal inspections were developed by Michael Fagan of IBM. Like walkthroughs, inspections involve the line-by-line evaluation of the product being reviewed. Inspections, however, are significantly different from walkthroughs and are significantly more effective. Inspections are done by a team, each member of which has a specific role. The team is led by a moderator, who is formally trained in the inspection process. The team includes a reader, who leads the team through the item; one or more reviewers, who look for faults in the item; a recorder, who notes the faults; and the author, who helps explain the item being inspected.

This formal, highly structured inspection process has been extremely effective in finding and eliminating errors. It can be applied to any product of the software development process, including documents, design, and code.
One of its important side benefits has been the direct feedback to the developer/author, and the significant improvement in quality that results.
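The regression-testing practice recommended above (rerunning previously accepted tests after every change) can be sketched in a few lines; the function and test cases here are invented for illustration.

```python
# Invented unit under maintenance.
def area(width, height):
    return width * height

# Test cases recorded at acceptance time: (inputs, expected result).
ACCEPTED_CASES = [((3, 4), 12), ((0, 9), 0), ((7, 7), 49)]

def regression_suite(fn, cases):
    """Rerun previously accepted cases; return any that now fail."""
    return [(args, expected, fn(*args))
            for args, expected in cases
            if fn(*args) != expected]

# After a change, an empty failure list means the change did not
# disturb previously accepted behaviour.
assert regression_suite(area, ACCEPTED_CASES) == []
```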
Source: http://geekswithblogs.net/srkprasad/archive/2004/11/30/16490.aspx
Validation: Am I building the right product?
Verification: Am I building the product right?

Validation: Determining if the system complies with the requirements and performs the functions for which it is intended, meeting the organization's goals and user needs. It is traditional and is performed at the end of the project.
Verification: The review of interim work steps and interim deliverables during a project to ensure they are acceptable; determining whether the system is consistent, adheres to standards, uses reliable techniques and prudent practices, and performs the selected functions in the correct manner.

Validation: Am I accessing the right data (in terms of the data required to satisfy the requirement)?
Verification: Am I accessing the data right (in the right place, in the right way)?

Validation: A high-level activity.
Verification: A low-level activity.

Validation: Performed after a work product is produced, against established criteria, ensuring that the product integrates correctly into the environment.
Verification: Performed during development on key artifacts, through walkthroughs, reviews and inspections, mentor feedback, training, checklists, and standards.

Validation: Determination of the correctness of the final software product, by a development project, with respect to user needs and requirements.
Verification: Demonstration of consistency, completeness, and correctness of the software at each stage and between each stage of the development life cycle.
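Both columns ultimately trace back to requirements. A verification (traceability) matrix, as mentioned in the NASA material above, records which tests cover which requirements so that gaps become visible; the requirement and test-case IDs below are invented.

```python
# Requirements mapped to the test cases that demonstrate them; an empty
# list exposes a requirement with no covering test.
MATRIX = {
    "REQ-1 login with email": ["TC-01", "TC-02"],
    "REQ-2 lock account after 3 failed attempts": ["TC-03"],
    "REQ-3 password reset": [],  # coverage gap
}

uncovered = [req for req, tests in MATRIX.items() if not tests]
assert uncovered == ["REQ-3 password reset"]
```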