UNIT III
Project Scope
Estimates
Software Risk
Project Schedule Plan
Control Strategy
Software Scope
● Software scope describes
○ The functions and features that are to be delivered to end users. Functions are evaluated and sometimes refined because cost and schedule estimates are functionally oriented.
○ The data that are input to and output from the system
○ The "content" that is presented to users as a consequence of using
the software
○ The performance (processing and response-time requirements), constraints (limits placed on the software by external hardware, available memory, or existing systems), interfaces, and reliability that bound the system
● Scope can be defined using two techniques:
○ A narrative description of software scope is developed after
communication with all stakeholders
○ A set of use cases is developed by end users
● After the scope has been identified, two questions are asked
○ Can we build software to meet this scope?
○ Is the project feasible?
Obtaining Project Scope
The first set of questions focuses on the customer and the overall goals and benefits. For example, the analyst might ask:
• Who is behind the request for this work?
• Who will use the solution?
• What will be the economic benefit of a successful solution?
• Is there another source for the solution?
The next set of questions enables the analyst to gain a better understanding
of the problem and the customer to voice any perceptions about a solution:
• How would you (the customer) characterize "good" output that would be
generated by a successful solution?
• What problem(s) will this solution address?
• Can you show me (or describe) the environment in which the solution will
be used?
• Will any special performance issues or constraints affect the way the
solution is approached?
The final set of questions focuses on the effectiveness of the meeting. These are called meta-questions:
• Are you the right person to answer these questions? Are your
answers official?
• Are my questions relevant to the problem you have?
• Am I asking too many questions?
• Is there anyone else who will provide additional information?
• Is there anything else that I should be asking you?
Feasibility
● After the scope is resolved, feasibility is addressed
● Software feasibility has four dimensions
○ Technology – Is the project technically feasible? Is it within the state
of the art? Can defects be reduced to a level matching the
application's needs?
○ Finance – Is it financially feasible? Can development be completed at
a cost that the software organization, its client, or the market can
afford?
○ Time – Will the project's time-to-market beat the competition?
○ Resources – Does the software organization have the resources
needed to succeed in doing the project?
Another view recommends the following feasibility dimensions: technological, economic, legal, operational, and schedule issues.
Scoping Example- Case Study
The conveyor line sorting system (CLSS) sorts boxes
moving along a conveyor line. Each box is identified
by a barcode that contains a part number and is
sorted into one of six bins at the end of the line.
The boxes pass by a sorting station that contains a bar
code reader and a PC. The sorting station PC is
connected to a shunting mechanism that sorts the
boxes into the bins. Boxes pass in random order and
are evenly spaced. The line is moving at five feet
per minute.
Scoping Example- Further Requirements
• CLSS software shall receive input information from a barcode reader at time intervals that conform to the conveyor line speed.
• Barcode data will be decoded into box identification format.
• The software will perform a look-up in a part number database containing a maximum of 1000 entries to determine the proper bin location for the box currently at the reader (sorting station).
• The proper bin location is passed to a sorting shunt that will position the boxes in the appropriate bin.
• A record of the bin destination for each box will be maintained for later recovery and reporting.
• CLSS software will also receive input from a pulse tachometer that will
be used to synchronize the control signal to the shunting mechanism.
i.e. Based on the number of pulses that will be generated between the
sorting station and the shunt, the software will produce a control signal
to the shunt to properly position the box.
The project planner examines the statement of scope and extracts all important software functions. This process is called decomposition. The following functions are extracted (a minimal code sketch of the resulting processing cycle appears after the list):
• Read bar code input.
• Read pulse tachometer.
• Decode part code data.
• Do database look-up.
• Determine bin location.
• Produce control signal for shunt.
• Maintain record of box destinations.
• Performance is dictated by the conveyor line speed. Processing for each box must be completed before the next box arrives at the barcode reader.
• Constraints: The CLSS software is constrained by the hardware it must access (the barcode reader, the shunt, the PC), the available memory, and the overall conveyor line configuration (evenly spaced boxes).
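The decomposed functions above map naturally onto one processing cycle per box. The following is a minimal Python sketch of that cycle; the device-access functions, sample part numbers, and bin assignments are hypothetical placeholders, not part of the original case study.

part_database = {"P-1001": 3, "P-2044": 5}      # part number -> bin number (up to 1000 entries)
box_log = []                                    # record of bin destinations for later reporting

def read_barcode():                             # placeholder for the bar code reader input
    return "p-1001"

def read_pulse_count():                         # placeholder for the pulse tachometer input
    return 120

def signal_shunt(bin_no, pulses):               # placeholder for the shunt control output
    pass

def process_box():
    part_number = read_barcode().strip().upper()    # read and decode part code data
    bin_no = part_database.get(part_number, 0)      # database look-up; 0 = unknown/reject bin
    pulses = read_pulse_count()                     # pulses between sorting station and shunt
    signal_shunt(bin_no, pulses)                    # produce control signal for the shunt
    box_log.append((part_number, bin_no))           # maintain record of box destinations

process_box()
print(box_log)                                      # [('P-1001', 3)]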
Resources
• Environmental Resources (the development environment):
• A software engineering environment (SEE) incorporates hardware,
software, and network resources that provide platforms and tools to
develop and test software work products
• Most software organizations have many projects that require access to
the SEE provided by the organization
• Planners must identify the time window required for hardware and
software and verify that these resources will be available
Software Size Estimation Techniques
1. Lines of Code (LOC): As the name suggests, LOC counts the total number of lines of source code in a project. The units of LOC are:
● KLOC-Thousand lines of code
● NLOC- Non-comment lines of code
● KDSI- Thousands of delivered source instruction
The size is estimated by comparing it with existing systems of the same kind. Experts use this technique to predict the required size of the various components of the software and then add them to get the total size.
Advantages:
● Universally accepted and is used in many models like COCOMO.
● Estimation is closer to the developer’s perspective.
● Simple to use.
Disadvantages:
● Different programming languages require a different number of lines for the same functionality.
● No proper industry standard exists for this technique.
● It is difficult to estimate the size using this technique in the early stages
of the project.
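As a small illustration of the component-wise sizing described under technique 1, the following sketch simply sums expert estimates for the individual components; the component names and KLOC figures are hypothetical.

components_kloc = {"user interface": 4.5, "database layer": 6.0, "reporting": 3.2}   # expert estimates
total_kloc = sum(components_kloc.values())    # total estimated project size in KLOC
print(round(total_kloc, 1))                   # 13.7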
2. Number of entities in ER diagram: The number of entities appearing in the Entity-Relationship (ER) diagram is used to predict the size of the software.
Disadvantages:
● No fixed standards exist. Some entities contribute more project size
than others.
● Just like FPA, it is less used in the cost estimation model. Hence, it
must be converted to LOC.
3. Total number of processes in detailed data flow diagram: Data Flow
Diagram(DFD) represents the functional view of software.
The model depicts the main processes/functions involved in software and the
flow of data between them.
The number of processes/functions in the DFD is used to predict software size.
Already existing processes of a similar type are studied and used to estimate the size of each process.
The sum of the estimated sizes of all processes gives the final estimated size.
Advantages:
● It is independent of the programming language.
● Each major process can be decomposed into smaller processes. This will
increase the accuracy of estimation
Disadvantages:
● Studying similar kinds of processes to estimate size takes additional time
and effort
● Construction of a DFD is not required for all software projects.
4. Function Point Analysis: In this method, the number and types of functions supported by the software are used to find the function point count (FPC).
The steps in function point analysis are listed below, followed by a small numeric sketch:
● Count the number of functions of each proposed type.
● Compute the Unadjusted Function Points(UFP).
● Find Total Degree of Influence(TDI).
● Compute Value Adjustment Factor(VAF).
● Find the Function Point Count(FPC).
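A minimal numeric sketch of the five steps listed above. It assumes the standard IFPUG weights for average-complexity function types and the usual VAF formula (VAF = 0.65 + 0.01 * TDI); the function counts and the fourteen general system characteristic (GSC) ratings are hypothetical.

WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}    # average-complexity weights

def function_point_count(counts, gsc_ratings):
    ufp = sum(counts[t] * WEIGHTS[t] for t in counts)   # Step 2: Unadjusted Function Points
    tdi = sum(gsc_ratings)                              # Step 3: Total Degree of Influence (14 ratings, 0-5)
    vaf = 0.65 + 0.01 * tdi                             # Step 4: Value Adjustment Factor
    return ufp * vaf                                    # Step 5: Function Point Count

counts = {"EI": 10, "EO": 8, "EQ": 6, "ILF": 4, "EIF": 2}    # Step 1: counted functions per type
print(function_point_count(counts, [3] * 14))                # 158 * 1.07 = 169.06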
Advantages:
● It can be easily used in the early stages of project planning.
● It is independent of the programming language.
● It can be used to compare different projects even if they use different
technologies(database, language, etc).
Disadvantages:
Object Point Estimation Example
Example: A project consists of 4 screens and 2 reports, with 10% reuse of object points; the developers have low experience and capability. Estimate the effort using object points.
Step-1:
Number of screens = 4
Number of reports = 2
Step-2:
For screens,
Number of views = 4
Number of data tables = 7
Number of servers = 3
Number of clients = 4
Using the above information and the complexity table for screens,
Complexity level for each screen = medium
For reports,
Number of sections = 6
Number of data tables = 7
Number of servers = 2
Number of clients = 3
Using the above information and the complexity table for reports,
Complexity level for each report = difficult
Step-3:
Using the complexity-weight table for object points, each medium screen has weight 2 and each difficult report has weight 8.
Step-4:
Object point count = (4 screens * 2) + (2 reports * 8) = 8 + 16 = 24
Step-5:
%reuse of object points = 10% (given)
NOP = [object points * (100 - %reuse)]/100
= [24 * (100 -10)]/100 = 21.6
Step-6:
Developer's experience and capability are low (given).
Using the information given about the developer and the productivity rate table,
Productivity rate (PROD) of the given project = 7
Step-7:
Effort
= NOP/PROD
= 21.6/7
= 3.086 person-month
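A minimal sketch reproducing the calculation in Steps 3–7 above, using the standard COCOMO II application-composition complexity weights and productivity-rate table.

SCREEN_WEIGHTS = {"simple": 1, "medium": 2, "difficult": 3}
REPORT_WEIGHTS = {"simple": 2, "medium": 5, "difficult": 8}
PROD = {"very low": 4, "low": 7, "nominal": 13, "high": 25, "very high": 50}     # NOP per person-month

object_points = 4 * SCREEN_WEIGHTS["medium"] + 2 * REPORT_WEIGHTS["difficult"]   # = 24
nop = object_points * (100 - 10) / 100        # new object points after 10% reuse = 21.6
effort = nop / PROD["low"]                    # = 3.086 person-months
print(object_points, nop, round(effort, 3))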
Process Based Estimation
● The most common technique for estimating a project is to base the estimate on
the process that will be used.
● That is, the process is decomposed into a relatively small set of tasks and the
effort required to accomplish each task is estimated.
● Like the problem-based techniques, process-based estimation begins with a
delineation of software functions obtained from the project scope. A series of
software framework activities must be performed for each function. Functions
and related software framework activities may be represented as below:
● Once problem functions and process activities are melded, the planner
estimates the effort (e.g., person-months) that will be required to
accomplish each software process activity for each software function.
● Average labor rates (i.e., cost/unit effort) are then applied to the effort
estimated for each process activity.
● It is very likely the labor rate will vary for each task. Senior staff heavily
involved in early activities are generally more expensive than junior staff
involved in later design tasks, code generation, and early testing.
● Costs and effort for each function and software process activity are
computed as the last step.
● If process-based estimation is performed independently of LOC or FP
estimation, we now have two or three estimates for cost and effort that
may be compared and reconciled.
● If both sets of estimates show reasonable agreement, there is good
reason to believe that the estimates are reliable.
● If, on the other hand, the results of these decomposition techniques
show little agreement, further investigation and analysis must be
conducted.
● From the above table it is clear that 53% of the total effort is devoted to analysis and design.
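A minimal sketch of process-based estimation: effort (person-months) is recorded per function and framework activity, then multiplied by labor rates. The functions, activities, effort figures, and rates below are hypothetical.

effort_pm = {   # person-months per (function, framework activity)
    "read bar code":    {"analysis": 0.5, "design": 1.0, "code": 0.4, "test": 0.5},
    "database look-up": {"analysis": 0.7, "design": 1.5, "code": 0.6, "test": 0.8},
}
labor_rate = {"analysis": 9000, "design": 9000, "code": 6000, "test": 6000}   # cost per person-month

total_effort = sum(pm for activities in effort_pm.values() for pm in activities.values())
total_cost = sum(pm * labor_rate[act]
                 for activities in effort_pm.values()
                 for act, pm in activities.items())
print(total_effort, total_cost)   # 6.0 person-months, 47100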
Estimation with Use Cases
● Use cases provide a software team with insight into software scope and
requirements. However, developing an estimation approach with use cases is
problematic for the following reasons
○ Use cases are described using many different formats and styles—there
is no standard form.
○ Use cases represent an external view (the user’s view) of the software and
can therefore be written at many different levels of abstraction.
○ Use cases do not address the complexity of the functions and features
that are described.
○ Use cases cannot describe complex behavior (e.g., interactions) that involves many functions and features.
● Smith [Smi99] suggests that use cases can be used for estimation, but only if
they are considered within the context of the “structural hierarchy” that they
are used to describe.
● Before use cases can be used for estimation:
○ The level within the structural hierarchy is established,
○ The average length (in pages) of each use case is determined
○ The type of software (e.g., real-time, business, engineering/scientific,
WebApp, embedded) is defined
○ A rough architecture for the system is considered.
● Once these characteristics are established, empirical data may be used
to establish the estimated number of LOC or FP per use case (for each
level of the hierarchy).
● Historical data are then used to compute the effort required to
develop the system. To illustrate how this computation might be
made, consider the following relationship
● LOC estimate = N * LOCavg + [(Sa/Sh − 1) + (Pa/Ph − 1)] * LOCadjust
● where
N = actual number of use cases
LOCavg = historical average LOC per use case for this type of subsystem
LOCadjust = an adjustment based on n percent of LOCavg, where n is defined locally and represents the difference between this project and "average" projects
Sa = actual scenarios per use case
Sh = average scenarios per use case for this type of subsystem
Pa = actual pages per use case
Ph = average pages per use case for this type of subsystem
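A minimal sketch of the relationship above; all input values are hypothetical.

def use_case_loc_estimate(N, loc_avg, loc_adjust, Sa, Sh, Pa, Ph):
    # LOC estimate = N * LOCavg + [(Sa/Sh - 1) + (Pa/Ph - 1)] * LOCadjust
    return N * loc_avg + ((Sa / Sh - 1) + (Pa / Ph - 1)) * loc_adjust

# e.g. 6 use cases, historical average of 560 LOC per use case, adjustment of 56 LOC,
# 10 actual vs. 8 average scenarios, 6 actual vs. 5 average pages per use case
print(use_case_loc_estimate(6, 560, 56, 10, 8, 6, 5))   # 3385.2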
Reconciling Estimates
The estimation techniques discussed in the preceding sections result in
multiple estimates that must be reconciled to produce a single estimate of
effort, project duration, or cost.
The variation from the average estimate is approximately 18 percent on the
low side and 21 percent on the high side
Widely divergent estimates can often be traced to one of two causes:
(1) The scope of the project is not adequately understood or has been
misinterpreted by the planner, or
(2) Productivity data used for problem-based estimation techniques is inappropriate for the application, obsolete, or has been misapplied.
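A minimal sketch of how multiple estimates can be compared against their average during reconciliation; the three effort estimates below are hypothetical.

estimates = {"FP-based": 56.6, "LOC-based": 58.0, "process-based": 46.0}   # person-months
average = sum(estimates.values()) / len(estimates)
for name, value in estimates.items():
    deviation = (value - average) / average * 100        # percent deviation from the average
    print(f"{name}: {value} PM ({deviation:+.1f}% from average)")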
Empirical Estimation Models
● An estimation model for computer software uses empirically
derived formulas to predict effort as a function of LOC or FP.
● The empirical data that support most estimation models are
derived from a limited sample of projects.
● For this reason, no estimation model is appropriate for all classes
of software and in all development environments
● The model should be tested by applying data collected from
completed projects, plugging the data into the model, and then
comparing actual to predicted results.
● If agreement is poor, the model must be tuned and retested
before it can be used.
The Structure of Estimation Models
A typical estimation model is derived using regression analysis on data collected from past software projects. The overall structure of such models takes the form:
E = A + B * (ev)^C
where A, B, and C are empirically derived constants, E is effort in person-months, and ev is the estimation variable (either LOC or FP).
Like all estimation models for software, the COCOMO II models require
sizing information. Three different sizing options are available as part of the
model hierarchy: object points, function points, and lines of source code.
{For elaboration on the COCOMO model in the exam, you can write about the object point technique for software sizing along with the above information.}
Preparing Requirement Traceability Matrix
Requirement Traceability Matrix (RTM) is used to trace the requirements to the tests that are needed to verify whether the requirements are fulfilled (a small illustrative sketch follows the list of advantages).
Advantage of RTM
1. 100% test coverage
2. It helps to identify missing functionality easily
3. It helps to identify the test cases which need to be updated in case of a change in requirement
4. It is easy to track the overall test execution status
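A minimal sketch of an RTM represented as a mapping from requirement IDs to the test cases that verify them; all IDs are hypothetical. A requirement with no mapped test case is flagged, which is how missing coverage is identified.

rtm = {
    "REQ-001": ["TC-01", "TC-02"],   # requirement verified by two test cases
    "REQ-002": ["TC-03"],
    "REQ-003": [],                   # no test case yet -> missing coverage
}
untested = [req for req, test_cases in rtm.items() if not test_cases]
print("Requirements without test coverage:", untested)   # ['REQ-003']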
Intermediate COCOMO Model
The Basic COCOMO model assumes that the effort is only a function of the number of lines of code and some constants evaluated according to the type of software system. However, in reality, no system's effort and schedule can be calculated solely on the basis of lines of code. Various other factors, such as reliability, experience, and capability, must also be considered. These factors are known as cost drivers, and the Intermediate Model utilizes 15 such drivers for cost estimation.
The effort is calculated as a function of program size and a set of cost drivers given according to each phase of the software life cycle.
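A minimal sketch of the Intermediate COCOMO idea: the nominal effort from program size is multiplied by an Effort Adjustment Factor (EAF), the product of the selected cost-driver multipliers. The constants are the standard Intermediate COCOMO organic-mode values; the KLOC figure and the three driver ratings shown are hypothetical.

a, b = 3.2, 1.05                                            # Intermediate COCOMO organic-mode constants
kloc = 40
cost_drivers = {"RELY": 1.15, "CPLX": 1.30, "ACAP": 0.86}   # sample subset of the 15 drivers

eaf = 1.0
for multiplier in cost_drivers.values():                    # EAF = product of cost-driver multipliers
    eaf *= multiplier
effort = a * kloc ** b * eaf                                # effort in person-months
print(round(eaf, 3), round(effort, 1))                      # 1.286, ~197.9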
Examples of COCOMO Model
Example 1: Suppose a project was estimated to be 400 KLOC. Calculate the effort and development time for each of the three modes, i.e., organic, semi-detached, and embedded.
Solution: The basic COCOMO equation takes the form:
Effort = a1 * (KLOC)^a2 PM
Tdev = b1 * (Effort)^b2 Months
Estimated Size of project= 400 KLOC
(i) Organic Mode
E = 2.4 * (400)^1.05 = 1295.31 PM
D = 2.5 * (1295.31)^0.38 = 38.07 Months
(ii) Semi-detached Mode
E = 3.0 * (400)^1.12 = 2462.79 PM
D = 2.5 * (2462.79)^0.35 = 38.45 Months
(iii) Embedded Mode
E = 3.6 * (400)^1.20 = 4772.81 PM
D = 2.5 * (4772.8)^0.32 = 38 Months
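A minimal sketch reproducing Example 1 with the standard Basic COCOMO constants for the three modes.

MODES = {   # mode: (a, b, c, d) where Effort = a*(KLOC)^b and Tdev = c*(Effort)^d
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}
kloc = 400
for mode, (a, b, c, d) in MODES.items():
    effort = a * kloc ** b          # effort in person-months
    tdev = c * effort ** d          # development time in months
    print(f"{mode}: E = {effort:.2f} PM, D = {tdev:.2f} months")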
Example 2