
UNIT III

Unit 3 focuses on software project planning, emphasizing estimation, scheduling, risk analysis, quality management, and change management. It outlines the importance of accurate estimation and scheduling to avoid project delays and client dissatisfaction, while also detailing the steps for effective project planning and scope definition. The document discusses various techniques for estimation, including decomposition and sizing methods, to ensure reliable project outcomes.


Unit 3 Estimation and Scheduling

Contents Of Unit III


Software Project Planning
● Software project planning encompasses five major activities
❏ Estimation
❏ Scheduling
❏ Risk analysis
❏ Quality management planning
❏ Change management planning
● Estimation determines how much money, effort, resources, and time it
will take to build a specific system or product
● The software team first estimates
❏ The work to be done
❏ The resources required
❏ The time that will elapse from start to finish
● Then they establish a project schedule that
❏ Defines tasks and milestones
❏ Identifies who is responsible for conducting each task
❏ Specifies the inter-task dependencies
Observations on Estimation
● Planning requires technical managers and the software team to make an
initial commitment
● Process and project metrics can provide a historical perspective and
valuable input for generation of quantitative estimates
● Past experience can aid greatly
● Estimation carries inherent risk, and this risk leads to uncertainty
● The availability of historical information has a strong influence on
estimation risk
● When software metrics are available from past projects
○ Estimates can be made with greater assurance
○ Schedules can be established to avoid past difficulties
○ Overall risk is reduced
● Estimation risk is measured by the degree of uncertainty in the
quantitative estimates for cost, schedule, and resources
● Nevertheless, a project manager should not become obsessive about
estimation
○ Plans should be iterative and allow adjustments as time passes and
more is made certain
Sliding Window Planning:
● Project planning requires great care, since commitment to unrealistic time and resource estimates results in schedule slippage.
● Schedule delays can cause client dissatisfaction and adversely affect team morale. They can even cause project failure.
● Project planning is a very challenging activity. For large projects in particular, it is difficult to make accurate plans. A major issue is that the relevant parameters, such as the scope of the project and the project staff, may change during the span of the project.
● To overcome this problem, project managers often undertake project planning in stages. Planning a project over a number of stages protects managers from making large commitments too early. This staggered approach to planning is known as sliding window planning. In the sliding window technique, starting with an initial plan, the project is planned more accurately in successive development stages.
At the start of a project, project managers have incomplete information about the details of the project. Their knowledge base gradually improves as the project progresses through its different phases.
After the completion of each phase, the project managers can plan each subsequent phase more accurately and with increasing levels of confidence.
Project Planning Process
● The objective of software project planning is to provide a framework that
enables the manager to make reasonable estimates of resources, cost,
and schedule.
● These estimates are made within a limited time frame at the beginning of
a software project and should be updated regularly as the project
progresses.
● In addition, estimates should attempt to define best case and worst case
scenarios so that project outcomes can be bounded.
● The planning objective is achieved through a process of information
discovery that leads to reasonable estimates.
● Estimation begins with a description of the scope of the product. Until
the scope is “bounded” it’s not possible to develop a meaningful
estimate.
● The problem is then decomposed into a set of smaller problems and
each of these is estimated using historical data and experience as guides.
● Problem complexity and risk are considered before a final estimate is
made.
Task Set for Project Planning
1. Establish project scope
2. Determine feasibility
3. Analyze risks
4. Define required resources
a. Determine human resources required
b. Define reusable software resources
c. Identify environmental resources
5. Estimate cost and effort
a. Decompose the problem
b. Develop two or more estimates using different approaches
c. Reconcile the estimates
6. Develop a project schedule
a. Establish a meaningful task set
b. Define a task network
c. Use scheduling tools to develop a timeline chart
d. Define schedule tracking algorithm
Steps to Software Planning:
− Defining project Scope
− Estimation
− Risk Analysis
− Scheduling

[Figure: project scope and estimates, combined with risk analysis, a schedule, and a control strategy, feed into the software project plan]
Software Scope
● Software scope describes
○ The functions and features that are to be delivered to end users. Functions are evaluated and sometimes refined because cost and schedule estimates are functionally oriented.
○ The data that are input to and output from the system
○ The "content" that is presented to users as a consequence of using the software
○ The performance (processing and response-time requirements), constraints (limits placed on the software by external hardware, available memory, or existing systems), interfaces, and reliability that bound the system
● Scope can be defined using two techniques
○ A narrative description of software scope is developed after
communication with all stakeholders
○ A set of use cases is developed by end users
● After the scope has been identified, two questions are asked
○ Can we build software to meet this scope?
○ Is the project feasible?
Obtaining Project Scope
The first set of questions focuses on the customer, the overall goals
and benefits. For example, the analyst might ask:
• Who is behind the request for this work?
• Who will use the solution?
• What will be the economic benefit of a successful solution?
• Is there another source for the solution?
The next set of questions enables the analyst to gain a better understanding
of the problem and the customer to voice any perceptions about a solution:
• How would you (the customer) characterize "good" output that would be
generated by a successful solution?
• What problem(s) will this solution address?
• Can you show me (or describe) the environment in which the solution will
be used?
• Will any special performance issues or constraints affect the way the
solution is approached?
The final set of questions focuses on the effectiveness of the meeting.
These are called meta-questions:
• Are you the right person to answer these questions? Are your
answers official?
• Are my questions relevant to the problem you have?
• Am I asking too many questions?
• Is there anyone else who will provide additional information?
• Is there anything else that I should be asking you?
Feasibility
● After the scope is resolved, feasibility is addressed
● Software feasibility has four dimensions
○ Technology – Is the project technically feasible? Is it within the state
of the art? Can defects be reduced to a level matching the
application's needs?
○ Finance – Is it financially feasible? Can development be completed at
a cost that the software organization, its client, or the market can
afford?
○ Time – Will the project's time-to-market beat the competition?
○ Resources – Does the software organization have the resources
needed to succeed in doing the project?

● Another view recommends the following feasibility dimensions: technological, economical, legal, operational, and schedule issues.
Scoping Example- Case Study
The conveyor line sorting system (CLSS) sorts boxes
moving along a conveyor line. Each box is identified
by a barcode that contains a part number and is
sorted into one of six bins at the end of the line.
The boxes pass by a sorting station that contains a bar
code reader and a PC. The sorting station PC is
connected to a shunting mechanism that sorts the
boxes into the bins. Boxes pass in random order and
are evenly spaced. The line is moving at five feet
per minute.
Scoping Example- Further Requirements
• CLSS software inputs information from a barcode reader at time intervals that conform to the conveyor line speed.
• Barcode data will be decoded into box identification format.
• The software will look up a part number database containing a maximum of 1000 entries to determine the proper bin location for the box currently at the reader (sorting station).
• The proper bin location is passed to a sorting shunt that will position
boxes in appropriate bin.
• A record of the bin destination for each box will be maintained for later recovery and reporting.
• CLSS software will also receive input from a pulse tachometer that will
be used to synchronize the control signal to the shunting mechanism.
i.e. Based on the number of pulses that will be generated between the
sorting station and the shunt, the software will produce a control signal
to the shunt to properly position the box.
The project planner examines the statement of scope and extracts all important software functions. This process is called decomposition.
• Read bar code input.
• Read pulse tachometer.
• Decode part code data.
• Do database look-up.
• Determine bin location.
• Produce control signal for shunt.
• Maintain record of box destinations.
• Performance is depicted by conveyor line speed. Processing for each
box must be complete before the next box arrives at the barcode
reader.
• Constraints- The CLSS software is constrained by the hardware it must access (barcode reader, shunt, PC), the available memory, and the overall conveyor line configuration (evenly spaced boxes).
Resources

• Human Resources: Evaluate scope and select the skills required (both organizational position and specialty, e.g. database software engineer). The number of people required depends on the estimated effort (person-months).
• Planners need to select the number and the kind of people skills needed to complete the project
• Small projects of a few person-months may only need one individual
• Large projects spanning many person-months or years also require the location of each person to be specified
• The number of people required can be determined only after an estimate of the development effort

• Environmental Resources (Development Environment):
• A software engineering environment (SEE) incorporates hardware,
software, and network resources that provide platforms and tools to
develop and test software work products
• Most software organizations have many projects that require access to
the SEE provided by the organization
• Planners must identify the time window required for hardware and
software and verify that these resources will be available

• Reusable Software Resources:
− Off-the-shelf components (existing software acquired from a 3rd party with no modification required, or developed internally for a past project)
− Full-experience components (previous project code is similar and team
members have full experience in this application area)
− Partial-experience components (existing project code is related but
requires substantial modification and team has limited experience in
the application area)
− New components (must be built from scratch for this project)
The following guidelines should be considered by the software planner when
reusable components are specified as a resource:
1. If off-the-shelf components meet project requirements, acquire them. The cost for acquisition and integration of off-the-shelf components will almost always be less than the cost to develop equivalent software. Ready to use; fully validated and documented; virtually no risk.
2. If full-experience components are available, the risks associated with modification and integration are generally acceptable. The project plan should reflect the use of these components. Modification of components will incur relatively low risk.
3. If partial-experience components are available, their use for the current
project must be analyzed. If extensive modification is required before the
components can be properly integrated with other elements of the software,
proceed carefully—risk is high. The cost to modify partial-experience
components can sometimes be greater than the cost to develop new
components.
4. New components must be built from scratch by the software team specifically for the needs of the current project. The software team has no practical experience in the application area, so development of new components carries a high degree of risk.
Each resource is specified with
● A description of the resource
● A statement of availability
● The time when the resource will be required
● The duration of time that the resource will be applied
Precedence ordering among project planning activities:
Software Project Estimation
Need:
● Software is the most expensive element of virtually all computer-based
systems.
● For complex, custom systems, a large cost estimation error can make the
difference between profit and loss.
● Cost overrun can be disastrous for the developer.
● Too many variables—human, technical, environmental, political—can
affect the ultimate cost of software and effort applied to develop it.
A series of systematic steps that provide estimates with acceptable risk.
To achieve reliable cost and effort estimates, a number of options arise:
1. Delay estimation until late in the project (obviously, we can achieve 100%
accurate estimates after the project is complete!).
2. Base estimates on similar projects that have already been completed.
3. Use relatively simple decomposition techniques to generate project cost
and effort estimates.
4. Use one or more empirical models for software cost and effort estimation.
I.Decomposition Technique
● Software project estimation is a form of problem solving, and in
most cases, the problem to be solved (i.e., developing a cost and
effort estimate for a software project) is too complex to be
considered in one piece.
● For this reason, we decompose the problem, recharacterizing it as a
set of smaller and more manageable problems.
● The decomposition approach was discussed from two different
points of view:
○ Decomposition of the problem
○ Decomposition of the process

I.1 Software Sizing:
In the context of project planning, size refers to a quantifiable outcome of the software project.
If a direct approach is taken, size can be measured in LOC.
If an indirect approach is chosen, size is represented as FP.
The accuracy of a software project estimate is predicated on a number
of things:
(1) The degree to which the planner has properly estimated the size of
the product to be built;
(2) The ability to translate the size estimate into human effort, calendar
time, and dollars (a function of the availability of reliable software
metrics from past projects);
(3) The degree to which the project plan reflects the abilities of the
software team;
(4) The stability of product requirements and the environment that
supports the software engineering effort.
Four Different Approaches to the sizing problem:
1. "Fuzzy logic" sizing:
This approach uses the approximate reasoning techniques that are the cornerstone of fuzzy logic.
To apply this approach, the planner must identify the type of application,
establish its magnitude on a qualitative scale, and then refine the
magnitude within the original range.
2. Function Point Sizing:
These are derived by using empirical relationships based on countable measures of the software's information domain and assessments of software complexity.
3. Standard component sizing:
Software is composed of a number of different “standard
components” that are generic to a particular application area. For
example, the standard components for an information system are
subsystems, modules, screens, reports, interactive programs, batch
programs, files, LOC, and object-level instructions. The project
planner estimates the number of occurrences of each standard
component and then uses historical project data to determine the
delivered size per standard component.
E.g.: if the planner estimates 18 reports and historical data suggest 1000 LOC per report, then 18,000 LOC is the approximate size of that component.
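The standard-component calculation above can be sketched in a few lines of Python; the component type and per-unit size are the slide's illustrative figures, not real historical data.

```python
# Standard-component sizing: estimated occurrences of each component type
# multiplied by its historical delivered size, summed over all types.
def standard_component_size(counts, loc_per_unit):
    """Estimated LOC = sum over component types of (count x historical LOC per unit)."""
    return sum(n * loc_per_unit[kind] for kind, n in counts.items())

# Figures from the slide's example: 18 reports at ~1000 LOC each (illustrative).
size = standard_component_size({"report": 18}, {"report": 1000})
print(size)  # 18000
```

In practice the dictionaries would hold every standard component type for the application area (screens, modules, batch programs, etc.), each with its own historical LOC figure.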
4. Change Sizing:
This approach is used when a project encompasses the use of existing
software that must be modified in some way as part of a project. The
planner estimates the number and type (e.g., reuse, adding code,
changing code, deleting code) of modifications that must be
accomplished
Problem Based Estimation
● The project planner begins with a bounded statement of software scope
and from this statement attempts to decompose software into problem
functions that can each be estimated individually.
● LOC or FP (the estimation variable) is then estimated for each function.
● LOC and FP data are used in two ways during software project
estimation:
(1) As an estimation variable to "size" each element of the software and
(2) As baseline metrics collected from past projects and used in
conjunction with estimation variables to develop cost and effort
projections.
● Alternatively, the planner may choose another component for sizing
such as classes or objects, changes, or business processes affected.
● Baseline productivity metrics (e.g., LOC/pm or FP/pm) are then applied
to the appropriate estimation variable, and cost or effort for the
function is derived. Function estimates are combined to produce an
overall estimate for the entire project.
● In general, LOC/pm or FP/pm averages should be computed by project
domain. That is, projects should be grouped by team size, application
area, complexity, and other relevant parameters.
● Local domain averages should then be computed.
● When a new project is estimated, it should first be allocated to a domain,
and then the appropriate domain average for productivity should be
used in generating the estimate
● Regardless of the estimation variable that is used,(either LOC or FP) the
project planner begins by estimating a range of values for each function
or information domain value.
● Using historical data or (when all else fails) intuition, the planner
estimates an optimistic, most likely, and pessimistic size value for each
function or count for each information domain value.
● An implicit indication of the degree of uncertainty is provided when a
range of values is specified.
● A three-point or expected value can then be computed
● The expected value for the estimation variable (size), S, can be computed
as a weighted average of the optimistic (sopt), most likely (sm), and
pessimistic (spess) estimates.
● For example, S = (sopt + 4sm + spess)/6
● Above equation gives heaviest importance to the “most likely” estimate
and follows a beta probability distribution.
● There is a very small probability the actual size result will fall outside the
optimistic or pessimistic values.
● Once the expected value for the estimation variable has been
determined, historical LOC or FP productivity data are applied. Any
estimation technique, no matter how sophisticated, must be
cross-checked with another approach. Even then, common sense and
experience must prevail
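The three-point (beta-distribution) expected value described above can be computed directly; the sample optimistic / most likely / pessimistic sizes below are illustrative.

```python
def expected_size(s_opt, s_likely, s_pess):
    """Three-point estimate S = (sopt + 4*sm + spess) / 6, following a beta
    distribution that weights the 'most likely' value most heavily."""
    return (s_opt + 4 * s_likely + s_pess) / 6

# Illustrative LOC range for a single decomposed function:
s = expected_size(4600, 6900, 8600)
print(s)  # 6800.0
```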
Project Size Estimation Techniques
Estimation of the size of the software is an essential part of Software
Project Management.
It helps the project manager to further predict the effort and time
which will be needed to build the project.
Various measures are used in project size estimation. Some of these
are:
● Lines of Code
● Number of entities in ER diagram
● Total number of processes in detailed data flow diagram
● Function points

1. Lines of Code (LOC): As the name suggests, LOC counts the total number of lines of source code in a project. The units of LOC are:
● KLOC-Thousand lines of code
● NLOC- Non-comment lines of code
● KDSI- Thousands of delivered source instruction
The size is estimated by comparing it with the existing systems of
the same kind. The experts use it to predict the required size of
various components of software and then add them to get the
total size.
Advantages:
● Universally accepted and is used in many models like COCOMO.
● Estimation is closer to the developer’s perspective.
● Simple to use.
Disadvantages:
● Different programming languages contain a different number of lines.
● No proper industry standard exists for this technique.
● It is difficult to estimate the size using this technique in the early stages of the project.

2. Number of entities in ER diagram: The ER model provides a static view of the project. It describes the entities and their relationships. The number of entities in the ER model can be used to estimate the size of the project. The number of entities depends on the size of the project: more entities need more classes/structures, leading to more code.
Advantages:
● Size estimation can be done during the initial stages of planning.
● The number of entities is independent of the programming
technologies used.

Disadvantages:
● No fixed standards exist. Some entities contribute more to project size than others.
● Just like FPA, it is less used in cost estimation models. Hence, it must be converted to LOC.
3. Total number of processes in detailed data flow diagram: Data Flow
Diagram(DFD) represents the functional view of software.
The model depicts the main processes/functions involved in software and the
flow of data between them.
The number of processes in the DFD is used to predict software size.
Already existing processes of a similar type are studied and used to estimate the size of each process.
The sum of the estimated sizes of all processes gives the final estimated size.

Advantages:
● It is independent of the programming language.
● Each major process can be decomposed into smaller processes. This will
increase the accuracy of estimation
Disadvantages:
● Studying similar kinds of processes to estimate size takes additional time
and effort
● Not all software projects require the construction of a DFD.
4.Function Point Analysis: In this method, the number and type of functions
supported by the software are utilized to find FPC(function point count).
The steps in function point analysis are:
● Count the number of functions of each proposed type.
● Compute the Unadjusted Function Points(UFP).
● Find Total Degree of Influence(TDI).
● Compute Value Adjustment Factor(VAF).
● Find the Function Point Count(FPC).
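The five FPA steps above can be sketched as follows. The weight table uses the standard average-complexity weights for the five function types; the function counts and degree-of-influence ratings are hypothetical.

```python
# Function Point Analysis sketch. Weights are the standard average-complexity
# values for external inputs/outputs/inquiries and internal/external files;
# the counts and influence ratings below are made-up example data.
AVG_WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

def function_point_count(counts, influence_ratings):
    ufp = sum(counts[t] * AVG_WEIGHTS[t] for t in counts)  # Unadjusted Function Points
    tdi = sum(influence_ratings)                           # Total Degree of Influence (14 ratings, each 0-5)
    vaf = 0.65 + 0.01 * tdi                                # Value Adjustment Factor
    return ufp * vaf                                       # Function Point Count

counts = {"EI": 20, "EO": 10, "EQ": 15, "ILF": 5, "EIF": 2}
fpc = function_point_count(counts, [3] * 14)   # TDI = 42, VAF = 1.07
```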

Advantages:
● It can be easily used in the early stages of project planning.
● It is independent of the programming language.
● It can be used to compare different projects even if they use different
technologies(database, language, etc).

Disadvantages:
● It is not good for real-time systems and embedded systems.
● Many cost estimation models like COCOMO use LOC, and hence FPC must be converted to LOC.
An example of LOC based Estimation
Preliminary scope of problem statement:
Detailed calculations:
1. Expected size of one function:
{4600 + 4*(6900) + 8600}/6 = 40800/6 = 6800 LOC
Summing the expected sizes over all decomposed functions gives a total estimated size of 33,200 LOC.
2. Cost per line of code: 8000/620 = 12.90, approximately $13 (labor rate of $8,000 per person-month; historical productivity of 620 LOC/pm)
3. Estimated project cost = estimated LOC × cost per line of code = 33200 × 13 = $431,600
4. Estimated effort = cost/labor rate = 431600/8000 = 53.95, approximately 54 person-months
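The arithmetic above can be verified with a short script; note that the 6,800 figure is the expected size of a single function, while the 33,200 LOC total (given on the slide) is the sum over all decomposed functions.

```python
# LOC-based estimation, reproducing the slide's figures.
expected_loc = (4600 + 4 * 6900 + 8600) / 6      # expected size of one function
total_loc = 33200                                # given: sum over all functions
labor_rate = 8000                                # $ per person-month
productivity = 620                               # LOC per person-month (historical)

cost_per_loc = round(labor_rate / productivity)  # 12.90 -> $13
project_cost = total_loc * cost_per_loc          # $431,600
effort_pm = project_cost / labor_rate            # 53.95 -> ~54 person-months
```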
Example FP based estimation
Answers to these questions are rated from 0 to 5
Detailed calculations:
1. Count total = 320
2. Summation of all Fi (i = 1 to 14) = 52
3. Complexity adjustment factor = 0.65 + (0.01 × ΣFi) = 0.65 + (0.01 × 52) = 1.17
4. Estimated FP = count total × complexity adjustment factor = 320 × 1.17 ≈ 375
5. Cost per FP: 8000/6.5 = $1230 (labor rate of $8,000 per person-month; productivity of 6.5 FP/pm)
6. Estimated project cost = estimated FP × cost per FP = 375 × 1230 = $461,250, approximately $461,000
7. Estimated effort = cost/labor rate = 461000/8000 = 57.62, approximately 58 person-months
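The FP calculation can be checked the same way (the slide rounds 320 × 1.17 = 374.4 up to 375 FP):

```python
# FP-based estimation, reproducing the slide's figures.
count_total = 320                   # unadjusted function point count
sum_fi = 52                         # sum of the 14 complexity ratings
vaf = 0.65 + 0.01 * sum_fi          # complexity adjustment factor = 1.17
estimated_fp = 375                  # 320 * 1.17 = 374.4, rounded up on the slide
cost_per_fp = 8000 / 6.5            # labor rate / (FP per pm) ~= $1230 per FP
project_cost = estimated_fp * 1230  # $461,250
effort_pm = 461000 / 8000           # 57.6 -> ~58 person-months
```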
Once function points have been calculated, they are used in a manner analogous to LOC
as a way to normalize measures for software productivity, quality, and other attributes:
• Errors per FP.
• Defects per FP.
• $ per FP.
• Pages of documentation per FP.
• FP per person-month.
Object Point Estimation
● This approach was devised at the Leonard N. Stern School of Business, New York University
● Object points are an approach used in software development
effort estimation under some models such as COCOMO II.
● Object points are a way of estimating effort size, similar to Source Lines Of Code (SLOC) or Function Points.
● They are not necessarily related to objects in object-oriented programming; the objects referred to include screens, reports, and 3GL components (modules).
● The number of raw objects and complexity of each are estimated
and a weighted total Object-Point count is then computed and
used to base estimates of the effort needed.
● Object point estimation is a newer size estimation technique, well suited to the application composition sector.
Estimation Of Efforts
Step-1: Assess object counts
Estimate the number of screens, reports and 3GL components that will
comprise this application.
Step-2: Classify complexity levels of each object
We have to classify each object instance into simple, medium and difficult
complexity level depending on values of its characteristics.
Complexity levels are assigned according to the given table
Step-3: Assign complexity weights to each object
The weights are used for three object types i.e, screens, reports and
3GL components.
Complexity weight are assigned according to object’s complexity level
using following table
Step-4: Determine Object Points
Add all the weighted object instances to get one number and this is
known as object point count.
Object points = Σ (number of object instances) × (complexity weight of each object instance)
Step-5: Compute New Object Points (NOP)
We have to estimate the %reuse to be achieved in a project.
Depending on %reuse
NOP = [(object points) * (100 - %reuse)]/100
NOP is the number of object points that will actually need to be developed; it differs from the object point count because some object instances may be reused in the project.
Step-6: Calculate Productivity rate (PROD)
Productivity rate is calculated on the basis of information given about
developer’s experience and capability.
For calculating it, we use following table
Example of object point estimation
Step-7: Compute the estimated Effort
Effort to develop a project can be calculated as
Effort = NOP/PROD
Effort is measured in person-month.
Example:
Problem Statement:
Consider a database application project with
1. The application has four screens with four views each and seven data
tables for three servers and four clients.
2. Application may generate two reports of six section each from seven
data tables for two servers and three clients.
10% reuse of object points.
Developer’s experience and capability in similar environment is low.
Calculate the object point count, New object point and effort to develop
such project.

Step-1:
Number of screens = 4
Number of reports = 2
Step-2:
For screens,
Number of views = 4
Number of data tables = 7
Number of servers = 3
Number of clients = 4
by using above given information and table (For Screens),
Complexity level for each screen = medium
For reports,
Number of sections = 6
Number of data tables = 7
Number of servers = 2
Number of clients = 3
by using above given information and table (For Reports),
Complexity level for each report = difficult
Step-3 and Step-4:
Using the complexity weights from the table (medium screen = 2, difficult report = 8),
Object point count = (4 × 2) + (2 × 8) = 24
Step-5:
%reuse of object points = 10% (given)
NOP = [object points * (100 - %reuse)]/100
= [24 * (100 -10)]/100 = 21.6
Step-6:
Developer’s experience and capability is low (given)
Using information given about developer and productivity rate table
Productivity rate (PROD) of given project = 7
Step-7:
Effort
= NOP/PROD
= 21.6/7
= 3.086 person-month
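The whole worked example can be reproduced in a few lines. The screen and report weights used here (2 and 8) are the standard complexity-weight values for medium screens and difficult reports, consistent with the object point count of 24 used in Step-5.

```python
# Object point estimation for the worked example above.
screens, screen_weight = 4, 2    # four medium-complexity screens
reports, report_weight = 2, 8    # two difficult reports
object_points = screens * screen_weight + reports * report_weight  # 24

reuse_pct = 10                                  # given: 10% reuse of object points
nop = object_points * (100 - reuse_pct) / 100   # New Object Points = 21.6
prod = 7                                        # productivity rate for low experience/capability
effort_pm = nop / prod                          # ~3.09 person-months
```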
Process Based Estimation
● The most common technique for estimating a project is to base the estimate on
the process that will be used.
● That is, the process is decomposed into a relatively small set of tasks and the
effort required to accomplish each task is estimated.
● Like the problem-based techniques, process-based estimation begins with a
delineation of software functions obtained from the project scope. A series of
software framework activities must be performed for each function. Functions
and related software framework activities may be represented as below:
● Once problem functions and process activities are melded, the planner
estimates the effort (e.g., person-months) that will be required to
accomplish each software process activity for each software function.
● Average labor rates (i.e., cost/unit effort) are then applied to the effort
estimated for each process activity.
● It is very likely the labor rate will vary for each task. Senior staff heavily
involved in early activities are generally more expensive than junior staff
involved in later design tasks, code generation, and early testing.
● Costs and effort for each function and software process activity are
computed as the last step.
● If process-based estimation is performed independently of LOC or FP
estimation, we now have two or three estimates for cost and effort that
may be compared and reconciled.
● If both sets of estimates show reasonable agreement, there is good
reason to believe that the estimates are reliable.
● If, on the other hand, the results of these decomposition techniques
show little agreement, further investigation and analysis must be
conducted.
● From the above table it is clear that 53% of the total effort is devoted to analysis and design
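The process-based scheme described above can be sketched as an effort matrix over functions and framework activities; every number below is illustrative, not taken from the slides.

```python
# Process-based estimation: person-months of effort per framework activity
# for each software function, summed and costed at a labor rate.
# All figures are hypothetical example data.
effort = {
    "UI function": {"analysis": 0.5, "design": 2.5, "code": 0.4, "test": 5.0},
    "DB function": {"analysis": 0.7, "design": 3.0, "code": 0.6, "test": 2.0},
}
labor_rate = 8000  # $ per person-month (flat here; in practice it varies per task)

total_effort = sum(pm for acts in effort.values() for pm in acts.values())
total_cost = total_effort * labor_rate
```

A real table would also vary the labor rate by activity, since senior staff on early activities usually cost more than junior staff on later ones.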
Estimation with usecases
● Use cases provide a software team with insight into software scope and
requirements. However, developing an estimation approach with use cases is
problematic for the following reasons
○ Use cases are described using many different formats and styles—there
is no standard form.
○ Use cases represent an external view (the user’s view) of the software and
can therefore be written at many different levels of abstraction.
○ Use cases do not address the complexity of the functions and features
that are described.
○ Use cases cannot describe complex behavior (e.g., interactions) that involves many functions and features.
● Smith [Smi99] suggests that use cases can be used for estimation, but only if
they are considered within the context of the “structural hierarchy” that they
are used to describe.
● Before use cases can be used for estimation:
○ The level within the structural hierarchy is established,
○ The average length (in pages) of each use case is determined
○ The type of software (e.g., real-time, business, engineering/scientific,
WebApp, embedded) is defined
○ A rough architecture for the system is considered.
● Once these characteristics are established, empirical data may be used
to establish the estimated number of LOC or FP per use case (for each
level of the hierarchy).
● Historical data are then used to compute the effort required to
develop the system. To illustrate how this computation might be
made, consider the following relationship:
● LOC estimate = N * LOCavg + [(Sa/Sh – 1) + (Pa/Ph – 1)] * LOCadjust
● where
N = actual number of use cases
LOCavg = historical average LOC per use case for this type of
subsystem
LOCadjust = an adjustment based on n percent of LOCavg, where n is
defined locally and represents the difference between this project
and “average” projects
Sa = actual scenarios per use case
Sh = average scenarios per use case for this type of subsystem
Pa = actual pages per use case
Ph = average pages per use case for this type of subsystem
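As a quick sketch of this relationship (every input value below is hypothetical, chosen only to exercise the formula), the estimate can be computed directly:

```python
# Sketch of the use-case LOC estimate from the relationship above.
# All numbers are invented for illustration.

def usecase_loc_estimate(n, loc_avg, loc_adjust, sa, sh, pa, ph):
    """LOC estimate = N * LOCavg + [(Sa/Sh - 1) + (Pa/Ph - 1)] * LOCadjust."""
    return n * loc_avg + ((sa / sh - 1) + (pa / ph - 1)) * loc_adjust

# Hypothetical subsystem: 6 use cases, historical average of 560 LOC per
# use case, adjustment of 10% of LOCavg, slightly more scenarios and pages
# per use case than the historical averages.
estimate = usecase_loc_estimate(n=6, loc_avg=560, loc_adjust=0.10 * 560,
                                sa=10, sh=8, pa=6, ph=5)
print(round(estimate))  # 6*560 + (0.25 + 0.20)*56 = 3385
```

Because this project has more scenarios and pages per use case than the historical average, the adjustment term adds LOC on top of the N * LOCavg baseline.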
Reconciling Estimates
The estimation techniques discussed in the preceding sections result in
multiple estimates that must be reconciled to produce a single estimate of
effort, project duration, or cost.
The variation from the average estimate is approximately 18 percent on the
low side and 21 percent on the high side
Widely divergent estimates can often be traced to one of two causes:
(1) The scope of the project is not adequately understood or has been
misinterpreted by the planner, or
(2) Productivity data used for problem-based estimation techniques is
inappropriate for the application, obsolete, or has been misapplied.
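A minimal sketch of the comparison step, with four invented estimate values standing in for the outputs of the different techniques:

```python
# Hypothetical effort estimates (person-months) produced by four techniques.
estimates = {
    "FP-based": 46.5,
    "LOC-based": 40.5,
    "process-based": 46.0,
    "use-case-based": 42.0,
}

average = sum(estimates.values()) / len(estimates)

# Percent deviation of each estimate from the average; a wide spread suggests
# a scope misunderstanding or inappropriate productivity data.
for name, e in estimates.items():
    print(f"{name}: {100 * (e - average) / average:+.1f}%")
```

If all deviations fall within a modest band, the estimates can be reconciled into a single figure with some confidence; otherwise further investigation is needed, as noted above.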
Empirical Estimation Models
● An estimation model for computer software uses empirically
derived formulas to predict effort as a function of LOC or FP.
● The empirical data that support most estimation models are
derived from a limited sample of projects.
● For this reason, no estimation model is appropriate for all classes
of software and in all development environments
● The model should be tested by applying data collected from
completed projects, plugging the data into the model, and then
comparing actual to predicted results.
● If agreement is poor, the model must be tuned and retested
before it can be used.
The Structure of Estimation Models
A typical estimation model is derived using regression analysis on data
collected from past software projects. The overall structure of such
models takes the form:

E = A + B * (ev)^C

where A, B, and C are empirically derived constants, E is effort in
person-months, and ev is the estimation variable (either LOC or FP).
Some examples of LOC-oriented estimation models of this form are:

E = 5.2 * (KLOC)^0.91 (Walston-Felix model)
E = 5.5 + 0.73 * (KLOC)^1.16 (Bailey-Basili model)
E = 3.2 * (KLOC)^1.05 (Boehm simple model)
COCOMO II Model
A more comprehensive estimation model of the original COCOMO model is
called COCOMO II . Like its predecessor, COCOMO II is actually a hierarchy
of estimation models that address the following areas:
• Application composition model: Used during the early stages of software
engineering, when prototyping of user interfaces, consideration of software
and system interaction, assessment of performance, and evaluation of
technology maturity are paramount.

• Early design stage model: Used once requirements have been stabilized
and basic software architecture has been established.

• Post-architecture stage model: Used during the construction of the
software.

Like all estimation models for software, the COCOMO II models require
sizing information. Three different sizing options are available as part of the
model hierarchy: object points, function points, and lines of source code.
{For elaboration on the COCOMO model in the exam, you can write about the
object point technique for software sizing along with the above information.}
Preparing Requirement Traceability Matrix
Requirement Traceability Matrix (RTM) is used to trace the requirements to
the tests that are needed to verify whether the requirements are fulfilled.
Advantages of RTM
1. 100% test coverage
2. It makes it easy to identify missing functionality
3. It identifies the test cases which need to be updated in case of
a change in requirement
4. It is easy to track the overall test execution status
How To Prepare Requirement Traceability Matrix (RTM)
● Collect all the available requirement documents.
● Allot a unique Requirement ID for each and every Requirement
● Create Test Cases for each and every requirement and link Test Case IDs
to the respective Requirement ID.
Like all other test artifacts, the RTM format varies between organizations.
Most organizations use just the Requirement IDs and Test Case
IDs in the RTM.
It is possible to include other fields such as Requirement Description,
Test Phase, Test Case Result, Document Owner, etc.
It is necessary to update the RTM whenever there is a change in
requirement.
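The preparation steps above can be sketched as data; the requirement and test case IDs below are invented:

```python
# Minimal RTM sketch: each unique requirement ID is linked to the IDs of
# the test cases that verify it (all IDs here are made up).

rtm = {
    "REQ-001": ["TC-101", "TC-102"],  # requirement traced to two test cases
    "REQ-002": ["TC-103"],
    "REQ-003": [],                    # gap: no test case linked yet
}

# Coverage check: requirements with no linked test cases reveal missing
# functionality or missing tests (advantage 2 above).
uncovered = [req for req, tests in rtm.items() if not tests]
print(uncovered)  # ['REQ-003']

# Impact analysis: if REQ-001 changes, these are the test cases that need
# updating (advantage 3 above).
print(rtm["REQ-001"])  # ['TC-101', 'TC-102']
```

A spreadsheet with Requirement ID and Test Case ID columns expresses exactly the same mapping; the point is that the link is explicit in both directions.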
Project Scheduling: Reasons for Late Software Delivery
Root Causes for Late Software
• Unrealistic deadline established outside the team
• Changing customer requirements not reflected in schedule changes
• Underestimating the resources required to complete the project
• Risks that were not considered when the project began
• Technical difficulties that could not have been predicted in advance
• Human difficulties that could not have been predicted in advance
• Miscommunication among project staff resulting in delays
• Failure by project management to recognize that the project is falling
behind schedule, and failure to take corrective action
How to Deal With Unrealistic Schedule Demands
1. Perform a detailed project estimate for project effort and duration using
historical data.
2. Use an incremental process model that will deliver the critical functionality
imposed by the deadline, but delay other requested functionality.
3. Meet with the customer and explain why the deadline is unrealistic, using
your estimates based on prior team performance.
4. Offer an incremental development and delivery strategy as an alternative
to increasing resources or allowing the schedule to slip beyond the
deadline.

Software Project Scheduling Principles
● Compartmentalization - The product and process must be decomposed
into a manageable number of activities and tasks
● Interdependency - Tasks that can be completed in parallel must be
separated from those that must be completed serially
● Time allocation - Every task has start and completion dates that take the
task interdependencies into account
● Effort validation - Project manager must ensure that on any given day
there are enough staff members assigned to complete the tasks
within the time estimated in the project plan
● Defined Responsibilities - Every scheduled task needs to be assigned to a
specific team member
● Defined outcomes - Every task in the schedule needs to have a defined
outcome (usually a work product or deliverable)
● Defined Milestones - A milestone is accomplished when one or more
work products from an engineering task have passed quality review.
Scheduling Tools:
Task networks (activity networks) are graphic representations of task
interdependencies and can help define a rough schedule for a particular
project.
• Scheduling tools should be used to schedule any non-trivial project.
• PERT (program evaluation and review technique) and CPM (critical path
method) are quantitative techniques that allow software planners to identify the
chain of dependent tasks in the project work breakdown structure that
determines the project duration.
• Timeline (Gantt) charts enable software planners to determine what tasks will
need to be conducted at a given point in time (based on estimates for effort,
start time, and duration for each task).
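A minimal sketch of the CPM idea (task names, durations, and dependencies below are invented): a forward pass over the task network yields each task's earliest finish time, and the largest value is the project duration along the critical path.

```python
# Hypothetical task network: task -> (duration in days, predecessor tasks).
tasks = {
    "spec":   (3, []),
    "design": (5, ["spec"]),
    "code":   (7, ["design"]),
    "test":   (4, ["code"]),
    "docs":   (2, ["design"]),   # runs in parallel with coding
}

earliest_finish = {}

def ef(task):
    """Forward pass: earliest finish = duration + latest predecessor finish."""
    if task not in earliest_finish:
        duration, preds = tasks[task]
        earliest_finish[task] = duration + max((ef(p) for p in preds), default=0)
    return earliest_finish[task]

duration = max(ef(t) for t in tasks)
print(duration)  # 19 days: critical path spec -> design -> code -> test
```

Tasks off the critical path (here, "docs") have slack: delaying them slightly does not move the project end date, which is exactly what a planner reads off a task network.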
The best indicator of progress is the completion and successful review of a
defined software work product.
● Time-boxing is the practice of deciding a priori the fixed amount of time
that can be spent on each task. When the task's time limit is exceeded,
development moves on to the next task (with the hope that a majority of
the critical work was completed before time ran out)
Developing a project schedule using
Gantt chart
How to Use a Gantt Chart for Software Development Project
● Determine a Project Schedule
Gantt charts are a great way to visualize the development process and track
the progress after kickoff. The tool allows breaking a single large project into
smaller manageable chunks of work, setting start and end dates. As a result,
your team gets a comprehensive project schedule with tasks displayed as
horizontal bars across a timeline. Dependencies show the relationships
between tasks and allow setting the tasks in sequential order.
● Information Is More Accessible
Gantt charts present tons of complex project data in an intuitive and more
scannable way. The tool allows everyone involved in the engineering process
to look at the whole project from a bird’s eye view
The engineering team is able to more accurately determine the project’s
scope and better understand the responsibilities of each member. They know
at a glance:
● Who is going to complete which task.
● What are the time frames or deadlines for completing the task.
● How that individual task relates to the project as a whole.
● More Accurate Resource Planning
Gantt charts allow team members to distribute work more effectively. They
can visualize their current capacity by analyzing both ongoing and upcoming
project schedules mapped out across their relevant timelines. Thus, for
instance, QA specialists can rest easier knowing an approximate time frame of
when they’ll be needed. They can see from the Gantt chart what they’re
expected to test and where their assistance is not required.
● Team Productivity Is Improved
Gantt charts publicly display task assignments and progress information,
empowering team members to hold each other accountable. If a task is two
days away from the deadline but is only 30 percent complete, Gantt chart
software may alert the responsible parties. This allows corrective measures to
be taken, such as assigning another team member to help.
Defining a Task Set for the Software
Project
No single task set is appropriate for all projects
An effective software process should define a collection of task sets, each
designed to meet the needs of different types of projects.
A task set is a collection of software engineering work tasks, milestones, work
products, and quality assurance filters that must be accomplished to
complete a particular project.
The task set must provide enough discipline to achieve high software quality.
But, at the same time, it must not burden the project team with unnecessary
work.
In order to develop a project schedule, a task set must be distributed on the
project timeline.
The task set will vary depending upon the project type and the degree of
rigor with which the software team decides to do its work.
I. Project types (a few examples):
1. Concept development projects: initiated to explore some new
business concept or the application of some new technology.
2. New application development projects: undertaken as a
consequence of a specific customer request.
3. Application enhancement projects: occur when existing software
undergoes major modifications to function, performance, or interfaces that
are observable by the end user.
4. Application maintenance projects: correct, adapt, or extend existing
software in ways that may not be immediately obvious to the end user.
5. Reengineering projects: undertaken with the intent of rebuilding
an existing (legacy) system in whole or in part.
II. Degree of Rigor:
Many factors influence the task set to be chosen. These include: size of the
project, number of potential users, mission criticality, application longevity,
stability of requirements, ease of customer/developer communication,
maturity of applicable technology, performance constraints, embedded and
nonembedded characteristics, project staff, and reengineering factors.
When taken in combination, these factors provide an indication of the
degree of rigor with which the software process should be applied.
Task Set Example
Concept development projects are approached by applying the
following actions:
1.1 Concept scoping: Determines the overall scope of the project.
1.2 Preliminary concept planning: Establishes the organization’s
ability to undertake the work implied by the project scope.
1.3 Technology risk assessment: Evaluates the risk associated with
the technology to be implemented as part of the project scope.
1.4 Proof of concept: Demonstrates the viability of a new technology
in the software context.
1.5 Concept implementation: Implements the concept
representation in a manner that can be reviewed by a customer and
is used for “marketing” purposes when a concept must be sold to
other customers or management.
1.6 Customer reaction to the concept: Solicits feedback on a new
technology concept and targets specific customer applications.
Tracking Project Schedules
• Periodic status meetings
• Evaluation of the results of all work product reviews
• Comparing actual milestone completion dates to scheduled dates
• Comparing actual project task start dates to scheduled start dates
• Informal meetings with practitioners to have them subjectively assess
progress to date and future problems
• Using earned value analysis to assess progress quantitatively
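The last bullet can be sketched numerically. Earned value analysis compares the budgeted cost of work performed (BCWP, the "earned value") against the budgeted cost of work scheduled (BCWS) and the actual cost of work performed (ACWP); the values below are invented:

```python
# Hypothetical earned value figures at a status date (person-days).
bcws = 64.0   # budgeted cost of work scheduled by this date
bcwp = 58.0   # budgeted cost of work actually performed (earned value)
acwp = 62.0   # actual cost of the work performed

spi = bcwp / bcws   # schedule performance index (< 1.0: behind schedule)
cpi = bcwp / acwp   # cost performance index (< 1.0: over budget)
print(f"SPI = {spi:.2f}, CPI = {cpi:.2f}")
```

Here both indices fall below 1.0, quantifying that the project is slightly behind schedule and slightly over budget, which is exactly the early warning this tracking technique provides.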
END OF UNIT.
Extra slides with examples on the COCOMO model
are provided below.
Empirical Estimation Model
The COCOMO Model
COCOMO – Constructive Cost Model. COCOMO consists of a hierarchy of three
increasingly detailed and accurate forms
• Model 1 – The Basic COCOMO model computes software development effort as a
function of program size expressed in estimated lines of code. The first level,
Basic COCOMO, can be used for quick and slightly rough calculations of software
costs. Its accuracy is somewhat restricted due to the absence of sufficient factor
considerations.
• Model 2 – The Intermediate COCOMO model computes software development
effort as a function of program size and a set of “cost drivers” that include
subjective assessments of product, hardware, personnel, and project attributes.

• Model 3 – The Advanced COCOMO model incorporates all characteristics of the
intermediate version with an assessment of the cost drivers’ impact on each
step (analysis, design, etc.) of the software engineering process.
I. Basic COCOMO
– E = a(KLOC)^b
– D = c(E)^d
– Persons Required = Effort / Time = E/D

The effort is measured in Person-Months (PM) and, as evident from the formula,
is dependent on Kilo-Lines of Code (KLOC).
The development time is measured in months. These formulas are used as such
in the Basic Model calculations; since not much consideration is given to
different factors such as reliability and expertise, the estimate is rough.
Intermediate COCOMO Model
II. Intermediate Model –

The Basic COCOMO model assumes that the effort is only a function of the
number of lines of code and some constants evaluated according to the
different software systems. However, in reality, no system’s effort and
schedule can be solely calculated on the basis of lines of code. Various
other factors such as reliability, experience, and capability must also be
considered. These factors are known as Cost Drivers, and the Intermediate
Model utilizes 15 such drivers for cost estimation.

Classification of Cost Drivers and their attributes:

(i) Product attributes –
● Required software reliability extent
● Size of the application database
● The complexity of the product
(ii) Hardware attributes –
● Run-time performance constraints
● Memory constraints
● The volatility of the virtual machine environment
● Required turnabout time
(iii) Personnel attributes –
● Analyst capability
● Software engineering capability
● Applications experience
● Virtual machine experience
● Programming language experience

(iv) Project attributes –
● Use of software tools
● Application of software engineering methods
● Required development schedule
• Intermediate COCOMO

The project manager is to rate these 15 different parameters for a
particular project on a scale of one to three. Then, depending on
these ratings, appropriate cost driver values are taken from standard
tables (there are 4 different tables covering all the cost driver
attributes). These 15 values are then multiplied to calculate the EAF
(Effort Adjustment Factor).
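A sketch of the EAF computation follows. The four driver multipliers shown are illustrative values (drivers rated "nominal" contribute 1.0), and the organic-mode coefficients a = 3.2, b = 1.05 are the commonly quoted ones for the Intermediate model, in which effort is E = a(KLOC)^b * EAF:

```python
import math

# A few of the 15 cost-driver multipliers, with illustrative ratings.
drivers = {
    "required_reliability": 1.15,  # rated high
    "product_complexity":   1.30,  # rated very high
    "analyst_capability":   0.86,  # high capability lowers effort
    "language_experience":  1.00,  # nominal rating contributes 1.0
}

# EAF is the product of all rated driver multipliers.
eaf = math.prod(drivers.values())

# Intermediate COCOMO effort: E = a * (KLOC)^b * EAF (organic-mode a, b).
kloc = 50
effort = 3.2 * kloc ** 1.05 * eaf
print(f"EAF = {eaf:.3f}, effort = {effort:.1f} PM")
```

Note how capable personnel (multiplier below 1.0) partly offset a highly complex product (multiplier above 1.0); that interplay is the whole point of the cost drivers.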
III. Advanced COCOMO
The Six phases of detailed COCOMO are:
1. Planning and requirements
2. System design
3. Detailed design
4. Module code and test
5. Integration and test
6. Cost Constructive model

The effort is calculated as a function of program size and a set of cost drivers
are given according to each phase of the software lifecycle.
Examples OF COCOMO MODEL
Example: Suppose a project was estimated to be 400 KLOC. Calculate the effort and
development time for each of the three modes, i.e., organic, semidetached, and
embedded.
Solution: The Basic COCOMO equations take the form:
Effort = a1*(KLOC)^a2 PM
Tdev = b1*(Effort)^b2 Months
Estimated size of project = 400 KLOC
(i) Organic Mode
E = 2.4 * (400)^1.05 = 1295.31 PM
D = 2.5 * (1295.31)^0.38 = 38.07 Months
(ii) Semidetached Mode
E = 3.0 * (400)^1.12 = 2462.79 PM
D = 2.5 * (2462.79)^0.35 = 38.45 Months
(iii) Embedded Mode
E = 3.6 * (400)^1.20 = 4772.81 PM
D = 2.5 * (4772.8)^0.32 = 38 Months
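The three calculations above can be reproduced with a short script; the coefficients are taken from the standard Basic COCOMO tables:

```python
# Basic COCOMO coefficients per mode: (a, b, c, d).
COEFFS = {
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode):
    """Return (effort in person-months, development time in months)."""
    a, b, c, d = COEFFS[mode]
    effort = a * kloc ** b
    tdev = c * effort ** d
    return effort, tdev

for mode in COEFFS:
    e, t = basic_cocomo(400, mode)
    print(f"{mode}: E = {e:.2f} PM, D = {t:.2f} months")
```

Running this confirms the hand calculations (the embedded-mode development time comes out near 37.6 months, which the slide rounds to 38).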