Chapter 4: Software Engineering
Jump2Learn Publication
4.1 UML Diagrams
What is UML?
The Unified Modeling Language (UML) is a standard visual modeling language intended to be
used for modeling business and similar processes, analysis, design, and implementation of
software-based systems.
UML is a common language for business analysts, software architects and developers used to
describe, specify, design, and document existing or new business processes, structure and
behavior of software systems.
UML can be applied to diverse application domains (e.g., banking, finance, internet, aerospace,
healthcare, etc.)
It can be used with all major object and component software development methods and for
various implementation platforms (e.g., J2EE, .NET).
UML is a standard modeling language, not a software development process. Unlike a process, UML does not:
o provide guidance as to the order of a team’s activities,
o specify what artifacts should be developed,
o direct the tasks of individual developers or the team as a whole, or
o offer criteria for monitoring and measuring a project’s products and activities.
UML Diagrams
A UML diagram is a partial graphical representation (view) of a model of a system under design,
implementation, or already in existence.
A UML diagram contains graphical elements (symbols) - UML nodes connected with edges (also known as paths or flows) - that represent elements in the UML model of the designed system.
The UML model of the system might also contain other documentation such as use cases written
as templated texts.
The UML specification defines two major kinds of UML diagrams: structure diagrams and behavior diagrams.
Structure Diagrams
Structure diagrams show the static structure of the system and its parts on different abstraction
and implementation levels and how they are related to each other.
The elements in a structure diagram represent the meaningful concepts of a system, and may
include abstract, real world and implementation concepts.
Some of the Structure Diagrams are Class Diagram, Object Diagram, Package Diagram, Model
Diagram, etc.
Behavior Diagrams
Behavior diagrams show the dynamic behavior of the objects in a system, which can be
described as a series of changes to the system over time.
Some of the behavior diagrams are Use-Case Diagram, Activity Diagram, State Machine Diagram,
Sequence Diagram, etc.
Class Diagram:
A class diagram is a UML structure diagram that shows the structure of the designed system at the level of classes and interfaces, including their features, constraints, and relationships - associations, generalizations, dependencies, etc.
A class diagram describes the attributes and operations of a class and also the constraints imposed on the system.
Class diagrams are widely used in the modeling of object-oriented systems because they are the only UML diagrams that can be mapped directly to object-oriented languages; for this reason, they are also widely used at the time of construction.
UML diagrams such as the activity diagram and sequence diagram can only give the sequence flow of the application; the class diagram, by contrast, captures its static structure.
The purpose of the class diagram can be summarized as −
o Analysis and design of the static view of an application.
o Describe responsibilities of a system.
o Base for component and deployment diagrams.
o Forward and reverse engineering.
UML Class Notation
A class represents a concept which encapsulates state (attributes) and behavior (operations).
Each attribute has a type.
Each operation has a signature.
The class name is the only mandatory information.
UML class is represented by the following figure. The diagram is divided into four parts.
o The top section is used to name the class.
o The second one is used to show the attributes of the class.
o The third section is used to describe the operations performed by the class.
o The fourth section is optional to show any additional components.
To specify the visibility of a class member (i.e. any attribute or method), these notations must
be placed before the member's name: + (Public), #(Protected), -(Private).
Classes are used to represent objects. Objects can be anything having properties and
responsibility.
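The visibility markers above can be illustrated with a short sketch. Python does not enforce visibility, so this hypothetical Account class uses the usual naming conventions (plain name for + public, a single underscore for # protected, a double underscore for - private):

```python
class Account:
    """UML-style class: +owner, #_history, -__balance; +deposit(), #_log()."""

    def __init__(self, owner: str):
        self.owner = owner        # + public attribute
        self._history = []        # # protected by convention (single underscore)
        self.__balance = 0.0      # - private by convention (name mangling)

    def deposit(self, amount: float) -> float:  # + public operation
        self.__balance += amount
        self._log(f"deposit {amount}")
        return self.__balance

    def _log(self, entry: str) -> None:         # # protected operation
        self._history.append(entry)
```

The attribute types and operation signature above correspond to the UML rule that each attribute has a type and each operation has a signature.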
A class may be involved in one or more relationships with other classes. A relationship can be one of the following types: association, aggregation, composition, generalization (inheritance), dependency, or realization.
The above diagram is an example of an Ordering System of an application.
It describes a particular aspect of the entire application.
First of all, Order and Customer are identified as the two elements of the system. They have a
one-to-many relationship because a customer can have multiple orders.
Order class is an abstract class and it has two concrete classes (inheritance relationship)
SpecialOrder and NormalOrder.
The two inherited classes have all the properties of the Order class. In addition, they have operations of their own, such as dispatch () and receive ().
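The ordering example above can be sketched in code. The class names (Order, SpecialOrder, NormalOrder) and the dispatch() operation come from the text; the method bodies and the Customer association are assumptions for illustration (receive() is omitted for brevity):

```python
from abc import ABC, abstractmethod

class Order(ABC):
    """Abstract class from the diagram: cannot be instantiated directly."""

    def __init__(self, order_id: int):
        self.order_id = order_id

    @abstractmethod
    def dispatch(self) -> str: ...

class SpecialOrder(Order):      # concrete subclass (generalization)
    def dispatch(self) -> str:
        return f"special order {self.order_id} dispatched"

class NormalOrder(Order):       # concrete subclass (generalization)
    def dispatch(self) -> str:
        return f"normal order {self.order_id} dispatched"

class Customer:
    """One-to-many association: a customer can have multiple orders."""

    def __init__(self, name: str):
        self.name = name
        self.orders = []         # the "many" end of the association

    def place(self, order: Order) -> None:
        self.orders.append(order)
```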
Use-case Diagrams
Use case diagrams are behavior diagrams used to describe a set of actions (use cases) that some system or systems (the subject) should or can perform in collaboration with one or more external users of the system (actors).
Each use case should provide some observable and valuable result to the actors or other
stakeholders of the system.
Use-case Notation
Actor
The actor is an entity that interacts with the system.
A user is the best example of an actor.
An actor is an entity that initiates the use case from outside the
scope of a use case.
It can be any element that can trigger an interaction with the
use case.
One actor can be associated with multiple use cases in the
system.
Use-Case
Use cases are used to represent high-level functionalities and
how the user will handle the system.
A use case represents a distinct functionality of a system, a
component, a package, or a class.
It is denoted by an oval shape with the name of a use case
written inside the oval shape.
Subject
A subject of a use case defines and represents boundaries of a
business, software system, physical system or
device, subsystem, component or even single class in relation
to the requirements gathering and analysis.
Subject (sometimes called a system boundary) is presented by
a rectangle with subject's name, associated keywords and
stereotypes in the top left corner.
Generalization
This represents a relationship between actors or between use
cases.
If two actors are related through this relationship, then the
actor (or use case) at the tail end of the arrow (connected to
the base of the triangle) is a specialized version of the actor (or
use case) at the other end.
Association
This represents a two-way communication between an actor
and a use case and hence is a binary relation.
Since it is a two-way communication, for every use case
initiated by a primary actor, the actor must get a response back
from the use case.
An association may include cardinality information which
indicates how many instances of use cases or actors participate
in the communication.
Include
This is a special type of relationship between two use cases.
If a use case A includes another use case B, then the
implementation of A requires the implementation of B in order
to complete its task.
However, B is independent in its own right; B does not need to know anything about A, and B can also be included in any other use case.
Extends
This is another special type of relationship between two use
cases. If a use case B extends another use case A, then the
implementation of A may conditionally include the
implementation of B in order to complete its task. That is, A
may complete the task without B in some situations.
But depending on the condition stated, A may require B.
B in this case is dependent on A and cannot exist on its own.
For this reason, B cannot extend more than one use case.
The use case narrative of A will include the execution step at
which it requires B; this point is called an extension point.
Example of Use-case
A DFD starts with the most abstract definition of the system (lowest level) and at each higher
level DFD, more details are successively introduced. To develop a higher-level DFD model,
processes are decomposed into their sub-processes and the data flow among these sub-
processes is identified.
Therefore, the DFD provides a mechanism for functional modelling as well as information flow
modelling.
A level 0 DFD, also called a fundamental system model or a context model, represents the entire
software element as a single bubble with input and output data indicated by incoming and
outgoing arrows, respectively.
Additional processes (bubbles) and information flow paths are represented as the level 0 DFD is
partitioned to reveal more detail. For example, a level 1 DFD might contain five or six bubbles
with interconnecting arrows. Each of the processes represented at level 1 is a sub-function of the
overall system depicted in the context model.
The data flow diagram (DFD) serves two purposes:
1. to provide an indication of how data are transferred as they move through the system
2. to depict the functions that transform the data flow. The DFD provides additional
information that is used during the analysis of the information domain and serves as a
basis for the modelling of function.
Context Diagram
The context diagram is the most abstract data flow representation of a system.
It represents the entire system as a single bubble. This bubble is labelled according to the main
function of the system usually with the name of the software system being developed.
The various external entities with which the system interacts and the data flow occurring
between the system and the external entities are also represented using arrows with
corresponding data names.
The name ‘context diagram’ is well justified because it represents the context in which the
system is to exist, i.e. the external entities who would interact with the system and the specific
data items they would be supplying the system and the data items they would be receiving from
the system. The context diagram is also called as the level 0 DFD.
4. Data Flow: Data flow lines, sometimes called information flow lines, connect external entities, processes, and data store elements. These lines, always drawn with an arrowhead, trace the flow of information through the system. Information flow may be one-way or two-way; one or two arrows are drawn between boxes to show which way the information flows.
The main reason the DFD technique is so popular is probably that DFD is a very simple formalism: simple to understand and simple to use.
Data Dictionary
A data dictionary lists all data items appearing in the DFD model of a system. The data items
listed include all data flows and the contents of all data stores appearing on the DFDs in the
DFD model of a system.
Data dictionary is a set of meta-data which contains the definition and representation of
data elements. It gives a single point of reference of data repository of an organization. Data
dictionary lists all data elements but does not say anything about relationships between data
elements.
A data dictionary (or system catalog) is a database about the database.
o The contents of a data dictionary are commonly referred to as metadata.
o A data dictionary can be updated and queried much as a “regular” database can.
o The DBMS often maintains the data dictionary.
o It provides a summary of the structure of the database, which helps the DBA manage the database and informs users of the database scope.
A sample data dictionary entry:
Name: telephone number
Aliases: none
Where used / How used: dial phone (input)
Description: telephone number = [local extension | outside number]
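As a sketch of the idea that a data dictionary can be queried much like a regular database, the entry above could be stored as a plain metadata record. The structure below is a hypothetical illustration, not a real DBMS catalog:

```python
# Hypothetical in-memory data dictionary: each entry records the metadata
# a CASE tool or DBMS catalog would keep about one data item.
data_dictionary = {
    "telephone number": {
        "aliases": [],
        "where_used": ["dial phone (input)"],
        "description": "[local extension | outside number]",
    },
}

def lookup(name: str) -> dict:
    """Query the dictionary much as one would query a regular database."""
    return data_dictionary[name]
```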
Process Specification
The process specification (PSPEC) is used to describe all flow model processes that appear at the final level of refinement.
The content of the process specification can include narrative text, a program design language (PDL) description of the process algorithm, mathematical equations, tables, diagrams, or charts.
By providing a PSPEC to accompany each bubble in the flow model, the software engineer creates a “minispec” that can serve as a first step in the creation of the software requirements specification and as a guide for the design of the program component that will implement the process.
The PSPEC Narrative may provide a description of the following:
o Information that passes into and out of the module (an interface description);
o Information that is retained by a module, e.g., data stored in a local data structure;
o A procedural narrative that indicates major decision points and tasks; and
o A brief discussion of restrictions and special features (e.g., file I/O, hardware dependent
characteristics, special timing requirements).
E.g. for the show password process, the process specification would be PSPEC: process password. The process password transform performs all password validation for the system. Process password receives a four-digit password from the interact-with-user function. The password is first compared to the master password stored within the system. If the master password matches, valid id message = ‘TRUE’ is set and passed to the message-and-status-display function. If the master password does not match, a second and third trial are allowed. If the password is again wrong, the process exits from the system.
Symbolically it is described as:
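The original symbolic (PDL-style) description is not reproduced here; as a minimal sketch, the narrative above can be written out in code, with the master password and the trial limit as assumed values:

```python
MASTER_PASSWORD = "1234"   # assumed four-digit master password
MAX_TRIALS = 3             # first attempt plus second and third trials

def process_password(attempts: list) -> bool:
    """Return True (valid id message = 'TRUE') if any of up to
    three trials matches the stored master password."""
    for password in attempts[:MAX_TRIALS]:
        if password == MASTER_PASSWORD:
            return True    # pass message and status to display function
    return False           # third wrong trial: come out of the system
```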
Examples of DFD
Draw DFD for Railway Reservation System up to 2nd level. Also create data
dictionary and process specification.
Draw DFD for Online Banking System up to 2nd level. Also create data dictionary
and process specification.
4.3 Design model
Once software requirements have been analyzed and specified, software design is the first of three technical activities (design, code generation, and test) that are required to build and verify the software.
Each activity transforms information in a manner that ultimately results in validated computer
software.
Each of the elements of the analysis model provides information that is necessary to create the
four design models required for a complete specification of design.
The flow of information during software design is represented by the diagram.
Software requirements, along with the data, functional, and behavioral models, feed the design
task.
Using one of a number of design methods, the design task produces:
o Data design,
o Architectural design,
o Interface design, and
o Component-level design.
Data Design
The data design transforms the information domain model created during analysis into the data
structures that will be required to implement the software.
The data objects and relationships defined in the E-R diagram and the detailed data content depicted in the data dictionary provide the basis for the data design activity. Part of the data design may occur along with the design of the software architecture.
More detailed data design occurs as each software component is designed.
Architectural design
The architectural design defines the relationship between major structural elements of the
software, the “design patterns” that can be used to achieve the requirements that have been
defined for the system, and the constraints that affect the way in which architectural design
patterns can be applied.
The architectural design representation, the framework of a computer-based system, can be derived from the system specification, the analysis model, and the interaction of subsystems defined within the analysis model.
Interface design
The interface design describes how the software communicates within itself, with systems that
interoperate with it, and with humans who use it.
An interface implies a flow of information (e.g., data and/or control) and a specific type of
behavior.
Therefore, data and control flow diagrams provide much of the information required for
interface design.
Component-level design
The component-level design transforms structural elements of the software architecture into a procedural description of software components. Information obtained from the PSPEC, CSPEC, and STD serves as the basis for component design.
The importance of software design can be stated with a single word “quality”.
Design is the place where quality is implemented in software engineering.
Design provides us with representations of software that can be assessed for quality.
Design is the only way that we can accurately translate a customer's requirements into a
finished software product or system.
Software design serves as the foundation for all the software engineering and software support
steps that follow.
Without design, we risk building an unstable system—one that will fail when small changes are
made; one that may be difficult to test; one whose quality cannot be assessed until late in the
software process, when time is short and many dollars have already been spent.
External quality factors and internal quality factors
External quality factors are those properties of the software that can be readily observed by
users (e.g., speed, reliability, correctness, usability).
Internal quality factors are of importance to software engineers. They lead to a high-quality
design from the technical perspective. To achieve internal quality factors, the designer must
understand basic design concepts.
a) Abstraction
Abstraction is the act of representing essential features only without including the background
details or explanations.
The notion of abstraction permits one to concentrate on a problem at some level of generalization without regard to low-level details.
At the highest level of abstraction, a solution is described in terms of the problem environment
using normal understandable language.
At lower levels of abstraction, the solution is described from a procedure-oriented view.
At the lowest level of abstraction, the solution is stated in a manner that can be directly implemented.
Each step in the software process is a refinement in the level of abstraction of the software
solution.
As we move through different levels of abstraction, we work to create procedural and data
abstractions.
A procedural abstraction is a named sequence of instructions that has a specific and limited
function. An example of a procedural abstraction would be the word open for a door. Open
implies a long sequence of procedural steps (e.g., walk to the door, reach out and grasp knob,
turn knob and pull door, step away from moving door, etc.).
A data abstraction is a named collection of data that describes a data object. In the context of
the procedural abstraction open, we can define a data abstraction called door. Like any data
object, the data abstraction for door would encompass a set of attributes that describe the door
(e.g., door type, swing direction, opening mechanism, weight, dimensions). It follows that the
procedural abstraction open would make use of information contained in the attributes of the
data abstraction door.
Control abstraction is the third form of abstraction used in software design. Like procedural and
data abstraction, control abstraction implies a program control mechanism without specifying
internal details. An example of a control abstraction is the booting of a computer.
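The door example above can be sketched as code: Door is the data abstraction (a named collection of attributes that describe the object) and open_door is the procedural abstraction whose internal steps the caller never sees. The attribute names come from the text; everything else is illustrative:

```python
class Door:
    """Data abstraction: a named collection of attributes describing a door."""

    def __init__(self, door_type: str, swing_direction: str, weight_kg: float):
        self.door_type = door_type
        self.swing_direction = swing_direction
        self.weight_kg = weight_kg
        self.is_open = False

def open_door(door: Door) -> Door:
    """Procedural abstraction 'open': the caller never sees the steps."""
    # walk to the door, reach out and grasp knob, turn knob and pull,
    # step away from the moving door ...
    door.is_open = True
    return door
```

Note how the procedural abstraction open_door makes use of information contained in the attributes of the data abstraction Door, exactly as the text describes.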
b) Refinement
Refinement is a process of elaboration.
It causes the designer to elaborate on the original statement, providing more and more detail as
each successive refinement occurs.
We begin with a statement of functional description i.e. defined at a higher level of abstraction.
In further steps (of the refinement), one or several instructions of the given program are
decomposed into more detailed instructions.
This successive decomposition or refinement of specifications terminates when all instructions
are expressed in terms of any programming language.
As tasks are refined, data also have to be refined, decomposed, or structured, and it is natural to
refine the program and the data in parallel.
Every refinement step implies some design decisions so the programmer has to be aware
of the underlying criteria (for design decisions) and of the existence of alternative solutions.
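Stepwise refinement can be sketched with a small hypothetical payroll example: the top-level statement is written first at a high level of abstraction, and each of its instructions is then decomposed into more detailed ones. The pay rules (overtime threshold, deduction rate) are invented purely for illustration:

```python
# Step 1 -- highest abstraction: "compute net pay".
def net_pay(hours: float, rate: float) -> float:
    gross = gross_pay(hours, rate)       # instruction refined below
    return gross - deductions(gross)     # instruction refined below

# Step 2 -- each instruction decomposed into more detailed instructions.
def gross_pay(hours: float, rate: float) -> float:
    overtime = max(0.0, hours - 40.0)    # assumed 40-hour threshold
    return (hours - overtime) * rate + overtime * rate * 1.5

def deductions(gross: float) -> float:
    return gross * 0.2                   # assumed flat 20% deduction
```

Each refinement step here embodies a design decision (the overtime rule, the deduction rule), so the designer must be aware of the alternatives, as the text notes.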
Abstraction and refinement are complementary concepts.
Abstraction enables a designer to specify procedure and data while suppressing low-level details, while refinement helps the designer to reveal those low-level details.
Both concepts help the designer in creating a complete design model as the design evolves.
Refinement is done in a step-wise manner. This leads to a “divide and conquer” approach to the design: a complex problem is divided into smaller and more manageable pieces (components) that can be reviewed and inspected before moving to the next level of detail.
c) Modularity
Software is divided into separately named addressable components, called modules that are
integrated to satisfy problem requirements.
Modularity allows software to be manageable.
It is very difficult for the user (reader) to easily understand huge software.
This leads to a divide-and-conquer strategy and makes it easier to solve a complex problem
when it is broken down into manageable pieces.
If we subdivide software indefinitely, the effort required to develop each module becomes negligibly small.
Unfortunately, other forces come into the picture: the effort associated with integrating the modules increases. Hence, we should modularize, but optimally.
The graph shown refers to the relationship between modularity and software cost.
The cost of effort per module decreases as the number of modules increases, but at the same time the cost of integrating the module interfaces increases.
Thus, we must try to find an optimum solution by staying in the region of minimum cost M.
Five criteria define an effective modular system:
Modular decomposability: If a design method provides a systematic mechanism for
decomposing the problem into sub problems, it will reduce the complexity of the overall
problem, thereby achieving an effective modular solution.
Modular composability: If a design method enables existing (reusable) design components to be
assembled into a new system, it will result into a modular solution that does not reinvent the
wheel.
Modular understandability: If a module can be understood as a standalone unit (without
reference to other modules), it will be easier to build and easier to change.
Modular continuity: If small changes to the system requirements result in changes to individual
modules, rather than system wide changes, the impact of change-induced side effects will be
minimized.
Modular protection: If an unusual condition occurs within a module and its effects are
constrained within that module, the impact of error-induced side effects will be minimized.
d) Software Architecture
Software architecture refers to the overall structure of the software and the ways in which that structure provides conceptual integrity for a system.
Architecture is the hierarchy of program components (modules), the manner in which these
components interact and the structure of data that are used by the components.
However, components can be generalized to represent major system elements and their
interactions.
One goal of software design is to derive an architectural rendering of a system. This rendering
serves as a framework from which more detailed design activities are conducted.
A set of architectural patterns enables a software engineer to reuse design-level concepts. An architectural design should address:
Extra-functional properties: how the design architecture achieves requirements for performance, capacity, reliability, security, adaptability, and other system characteristics.
Families of related systems: repeatable patterns that are commonly found in the design of families of similar systems; the design should thus have the ability to reuse architectural building blocks.
The architectural design can be represented using one or more of a number of different models.
Structural models represent architecture as an organized collection of program components.
Framework models increase the level of design abstraction by attempting to identify repeatable
patterns that are found in similar types of applications.
Dynamic models represent the behavioral aspects of the program architecture, indicating how
the structure or system configuration may change as a function of external events.
Process models focus on the design of the business or technical process that the system must
include.
Finally, functional models can be used to represent the functional hierarchy of a system.
e) Control Hierarchy:
Control hierarchy, also called program structure, represents the organization of program components (modules) and implies a hierarchy of control.
The following terminologies are used for describing control hierarchy:
Fan-out: a measure of the number of modules that are directly controlled by another module. E.g. the fan-out of module M is 3.
Fan-in: indicates how many modules directly control a given module. E.g. the fan-in of module r is 4.
Depth: the number of levels of control.
Width: the overall span of control.
Superordinate: It is a module that controls another module.
Subordinate: It is a module that is controlled by another module. For example, referring to
Figure, module M is superordinate to modules a, b, and c. Module h is
subordinate to module e and is ultimately subordinate to module M.
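The fan-out and fan-in measures can be computed from a table recording which modules each module directly controls. The table below is a hypothetical structure that mirrors the figure's example values (fan-out of M is 3, fan-in of r is 4):

```python
# Hypothetical control hierarchy: module -> modules it directly controls.
calls = {
    "M": ["a", "b", "c"],
    "a": ["r"],
    "b": ["r"],
    "c": ["r"],
    "d": ["r"],
}

def fan_out(module: str) -> int:
    """Number of modules directly controlled by this module."""
    return len(calls.get(module, []))

def fan_in(module: str) -> int:
    """Number of modules that directly control this module."""
    return sum(module in subordinates for subordinates in calls.values())
```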
The control hierarchy also represents two other different characteristics of the software
architecture:
o Visibility: the set of program components that may be invoked or used as data by a given component, even if only indirectly.
o Connectivity: the set of components that are directly invoked or used as data by a given component.
f) Structural Partitioning
The program structure should be partitioned both horizontally and vertically. As shown in Figure
horizontal partitioning defines separate branches of the modular hierarchy for each major
program function.
Control modules, represented in a darker shade, are used to coordinate communication
between and execution of program functions.
Horizontal Partitioning
The simplest approach to horizontal partitioning defines three partitions - input, data
transformation (often called processing), and output.
Partitioning the architecture horizontally provides a number of distinct benefits:
o results in software that is easier to test
o leads to software that is easier to maintain
o results in propagation of fewer side effects
o results in software that is easier to extend
Because major functions are decoupled from one another, changes can be less complex and
extensions to the system are easier to achieve without side effects.
On the negative side, horizontal partitioning often causes more data to be passed across module
interfaces and can complicate the overall control of program flow.
Vertical partitioning
It is often called factoring. It suggests that control and work should be distributed top-down in
the program architecture.
Top-level modules should perform control functions and do little
actual processing work.
Modules that reside low in the architecture should be the workers, performing all
input, processing, and output tasks.
The only disadvantage is that a change in a control module at a higher level will propagate its
side effects to all subordinate modules.
A change to a worker module, given its low level in the structure, is
less likely to cause the propagation of side effects.
In general, changes to computer programs revolve around changes to input, computation or
transformation, and output.
The overall control structure of the program (i.e., its basic behavior) is far less likely to change.
For this reason vertically partitioned architectures are less likely to be susceptible to side effects
when changes are made and will therefore be more maintainable – a key quality factor.
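A minimal sketch of vertical partitioning; the worker functions are hypothetical, but the shape is the point: the control module at the top only coordinates, while the low-level worker modules do the input, processing, and output:

```python
# Worker modules low in the structure do the actual work.
def read_input() -> list:
    return [3, 1, 2]                    # stand-in for real input handling

def transform(data: list) -> list:
    return sorted(data)                 # the processing / transformation work

def write_output(data: list) -> str:
    return ",".join(str(x) for x in data)

# Top-level control module: coordinates workers, does little processing itself.
def main_control() -> str:
    return write_output(transform(read_input()))
```

A change to one worker (say, a new sort order in transform) stays local; a change to main_control would ripple into every subordinate, which is exactly the trade-off described above.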
g) Software Procedure
Program structure defines control hierarchy without regard to the sequence of processing and
decisions.
Software procedure focuses on the processing details of each module individually.
Software Procedure must provide a precise specification of processing, including sequence of
events, exact decision points, repetitive operations, and even data organization and structure.
There is a relationship between structure and procedure. The processing indicated for each
module must include a reference to all modules subordinate to the module being described.
h) Information Hiding
The principle of information hiding suggests that modules be characterized by design decisions
that each hides from others.
In other words, modules should be specified and designed so that information (procedure and
data) contained within a module is inaccessible to other modules that have no need for
such information.
Hiding implies that effective modularity can be achieved by defining a set of independent modules that communicate with one another only the information that is necessary to achieve the software function.
It provides the greatest benefits when modifications are required during testing and maintenance, because errors introduced during modification are less likely to propagate to other locations within the software when data and procedure are hidden from other parts of the software.
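A common illustration of information hiding (a hypothetical example, not from the text) is a stack whose internal representation is hidden behind push and pop, so that the representation can later change without side effects rippling into client modules:

```python
class Stack:
    """The choice of a list as the representation is a design decision
    hidden from clients; they see only push and pop, so the
    representation can change without affecting other modules."""

    def __init__(self):
        self.__items = []       # hidden: other modules cannot rely on it

    def push(self, value):
        self.__items.append(value)

    def pop(self):
        return self.__items.pop()
```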
i) Data Structures
Data structure is a representation of the logical relationship among individual elements of data.
It describes the organization, methods of access, degree of associativity, and processing
alternatives for information.
Sequential vector: When scalar items are organized as a list or contiguous group, a sequential
vector is formed. Vectors are the most common of all data structures.
N-dimensional space: When the sequential vector is extended to two, three, and ultimately an arbitrary number of dimensions, an n-dimensional space is created. The most common n-dimensional space is the two-dimensional matrix.
Array: In many programming languages, an n-dimensional space is called an array.
Items, vectors, and spaces may further be organized in a variety of formats.
o Linked list: A linked list is a data structure that organizes noncontiguous scalar items, vectors, or spaces (called nodes) in a manner that enables them to be processed as a list.
o Each node contains the data organization (e.g., a vector) and one or more pointers that
indicate the address in storage of the next node in the list.
o Nodes may be added at any point in the list by redefining pointers to accommodate the
new list entry.
o Other data structures incorporate or are constructed using the fundamental data
structures just described.
For example, a hierarchical data structure is implemented using multilinked lists
that contain scalar items, vectors, and possibly, n-dimensional spaces.
Data structures, like program structure, can be represented at different levels of
abstraction.
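The linked-list description above can be sketched directly: each node holds its data plus a pointer (here a Python reference) to the next node, and insertion only redefines pointers rather than moving data:

```python
class Node:
    """A node: the data organization plus a pointer to the next node."""

    def __init__(self, value, next_node=None):
        self.value = value
        self.next = next_node

class LinkedList:
    def __init__(self):
        self.head = None

    def insert_front(self, value):
        # Adding a node only redefines pointers; no existing data moves.
        self.head = Node(value, self.head)

    def to_list(self):
        """Walk the pointers to process the nodes as a list."""
        out, node = [], self.head
        while node:
            out.append(node.value)
            node = node.next
        return out
```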
1. Cohesion:
A good software design implies clean decomposition of the problem into modules, and the neat
arrangement of these modules in a hierarchy.
Cohesion is a measure of functional strength of a module. It is a natural extension of the
information hiding concept.
By the term functional independence, we mean that a cohesive module performs a single task or
function requiring little interaction with procedures being performed in other parts of a
program.
Classification of Cohesion
1. Coincidental cohesion:
A module is said to have coincidental cohesion, if it performs a set of tasks that relate to each other very loosely, if at all. Example:
1. Wash car
2. Fill out the application form
3. Have coffee
4. Go to the movie
5. Walk dog
The activities here are related neither by flow of data nor by flows of control.
Such modules make the system less understandable.
2. Logical cohesion:
A module is said to be logically cohesive if all elements of the module perform similar operations, e.g. error handling, data input, data output, etc.
All activities in a logically cohesive module fall into the same category, having some similarity as well as differences; logical cohesion is slightly better than coincidental cohesion.
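A common shape for a logically cohesive module is one routine that bundles all operations of the same category and selects among them with a flag. This is a hedged sketch — the `output` function and its category names are hypothetical:

```python
def output(kind, data):
    """Logically cohesive: every branch is an 'output' operation,
    selected at run time by the kind flag."""
    if kind == "screen":
        return f"SCREEN: {data}"
    elif kind == "printer":
        return f"PRINTER: {data}"
    elif kind == "file":
        return f"FILE: {data}"
    raise ValueError(f"unknown output kind: {kind}")
```

The branches are similar (all produce output) yet different (each targets a different device), which is exactly the mix the definition describes.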
3. Temporal cohesion:
When a module contains functions that are related by the fact that all the
functions must be executed in the same time span, the module is said to exhibit
temporal cohesion.
The set of functions responsible for initialization, start-up, shutdown of some
process, etc. exhibit temporal cohesion.
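An initialization routine is the classic instance: unrelated actions live together only because they all run at start-up. A minimal sketch (the specific actions are invented for illustration):

```python
def start_up():
    """Temporally cohesive: these actions are unrelated to one another,
    but all must execute in the same time span (program start-up)."""
    log = []
    log.append("counters reset")
    log.append("config loaded")
    log.append("log file opened")
    return log
```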
4. Procedural cohesion:
A module is said to possess procedural cohesion, if the set of functions of the
module are all part of a procedure (algorithm) in which certain sequence of steps
have to be carried out for achieving an objective, e.g. the algorithm for decoding
a message.
When functions are grouped together in a component just to ensure this order of execution, the component is said to be procedurally cohesive.
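Taking the message-decoding example, a procedurally cohesive routine might look like the sketch below. The three-step "cipher" (strip framing, reverse, normalise case) is invented purely to show steps that must run in a fixed sequence:

```python
def decode_message(raw):
    """Procedurally cohesive: the steps are grouped only to enforce
    a fixed sequence toward one objective."""
    stripped = raw.strip()           # step 1: remove framing whitespace
    reversed_text = stripped[::-1]   # step 2: undo the (hypothetical) reversal cipher
    return reversed_text.upper()     # step 3: normalise the case
```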
5. Communicational cohesion:
A module is said to have communicational cohesion, if all functions of the
module refer to or update the same data structure. Example:
If the activities CALCULATE GROSS-PAY, CALCULATE DEDUCTION and CALCULATE NET-PAY are combined into a single module NET-PAY, that module exhibits communicational cohesion. The activities are related because they all work on the same input data, EmployeeID, which makes the module communicationally cohesive.
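The payroll example can be sketched as one Python function whose every step reads or updates the same employee record (a dict here; field names and the tax formula are illustrative assumptions):

```python
def net_pay(employee):
    """Communicationally cohesive: every activity refers to or updates
    the same data structure, the employee record."""
    gross = employee["hours"] * employee["rate"]   # CALCULATE GROSS-PAY
    deduction = gross * employee["tax_rate"]       # CALCULATE DEDUCTION
    employee["net"] = gross - deduction            # CALCULATE NET-PAY
    return employee["net"]
```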
6. Sequential cohesion:
A module is said to possess sequential cohesion, if the elements of a module
form the parts of sequence, where the output from one element of the sequence
is input to the next.
For example, in a transaction processing system (TPS), the get-input, validate-input and sort-input functions are grouped into one module.
Here the output of one activity goes as input to the next, i.e. the elements/activities are sequentially bound. This group of activities cannot be summed up as a single function, which means the module is not functionally cohesive.
A sequentially cohesive module usually has good coupling and is easily maintained.
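The TPS example can be sketched as a pipeline in which each element's output becomes the next element's input (the sample data and helper names are invented):

```python
def get_input():
    """Element 1: fetch raw input items."""
    return ["7", "3", "5"]

def validate_input(items):
    """Element 2: keep only valid numeric items, converted to integers."""
    return [int(x) for x in items if x.isdigit()]

def sort_input(values):
    """Element 3: order the validated values."""
    return sorted(values)

def process():
    """Sequentially cohesive: output of each element feeds the next."""
    return sort_input(validate_input(get_input()))
```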
7. Functional cohesion:
Functional cohesion is said to exist, if different elements of a module cooperate
to achieve a single function.
For example, a module containing all the functions required to manage
employees’ pay-roll exhibits functional cohesion.
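A functionally cohesive element, by contrast, contributes every statement to one task. A minimal payroll-flavoured sketch (the overtime rule is an assumption for illustration):

```python
def compute_gross_pay(hours, rate, overtime_rate=1.5):
    """Functionally cohesive: every statement cooperates to achieve
    the single function of computing gross pay."""
    if hours <= 40:
        return hours * rate
    # Hours beyond 40 are paid at the (assumed) overtime multiplier.
    return 40 * rate + (hours - 40) * rate * overtime_rate
```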
2. Coupling:
Coupling between two modules is a measure of the degree of interdependence or
interaction between the two modules.
If two modules interchange large amounts of data, then they are highly interdependent.
The degree of coupling between two modules depends on their interface complexity.
The interface complexity is basically determined by the number of types of parameters
that are interchanged while invoking the functions of the module.
In software design, we strive for lowest possible coupling. Simple connectivity among
modules results in software that is easier to understand and less prone to a "ripple
effect", caused when errors occur at one location and propagate through a system.
Classification of coupling
1. Data coupling:
Two modules are data coupled, if they communicate through a parameter.
An example is an elementary data item passed as a parameter between two
modules, e.g. an integer, a float, a character, etc. This data item should be
problem related and not used for the control purpose.
We find that there are no extra complications. Hence a data coupling is narrow,
direct, local, obvious and flexible.
If we add any piece of information that shuffles aimlessly around the system with no particular use, it complicates the interfaces and violates the principle of good coupling.
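A data-coupled pair of modules might look like the sketch below: only an elementary, problem-related value crosses the interface (the temperature-conversion example is invented for illustration):

```python
def fahrenheit_to_celsius(f):
    """Receives a single elementary data item (a float) as its parameter."""
    return (f - 32) * 5 / 9

def report(temp_f):
    # Data coupled to fahrenheit_to_celsius: the modules communicate
    # only through one problem-related scalar parameter.
    c = fahrenheit_to_celsius(temp_f)
    return f"{c:.1f} C"
```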
2. Stamp coupling:
Two modules are stamp coupled, if they communicate using a composite data
item such as a structure in C.
With stamp coupling the data values, format and organization must be matched
between interacting components.
E.g.: if a composite data item like a CUSTOMER RECORD, comprising many fields, is passed between modules, then the two modules are stamp coupled.
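In Python, the C-structure role can be played by a dataclass. A sketch of the CUSTOMER RECORD case (field names are assumptions for the example):

```python
from dataclasses import dataclass

@dataclass
class CustomerRecord:
    """Composite data item, analogous to a struct in C."""
    name: str
    street: str
    city: str
    balance: float

def format_mailing_label(customer: CustomerRecord) -> str:
    # Stamp coupled: the whole record is passed even though only some
    # fields are used, so both modules must agree on the record's
    # format and organization.
    return f"{customer.name}\n{customer.street}, {customer.city}"
```

A change to the record's layout forces a matching change in every module it is stamped onto — the cost of this coupling level.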
3. Control coupling:
Control coupling exists between two modules if data from one module is used to direct the order of instruction execution in another.
An example of control coupling is a flag set in one module and tested in another
module.
Control coupling is very common in most software designs and is shown in the figure, where a “control flag” (a variable that controls decisions in a subordinate or superordinate module) is passed between modules d and e.
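The flag-set-here, tested-there pattern can be sketched as two Python functions standing in for modules d and e (the rewind example is an invented illustration):

```python
def read_record(rewind_flag):
    """Module e: control coupled to its caller, because the caller's flag
    directs which path executes here."""
    if rewind_flag:
        return "file rewound, first record read"
    return "next record read"

def caller():
    """Module d: sets the control flag and passes it down."""
    return read_record(rewind_flag=True)
```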
4. Common (Global) coupling:
Two modules are common coupled, if they share data through some global data
items.
Relatively high levels of coupling also occur when modules are tied to an environment external to the software.
For example, I/O couples a module to specific devices, formats, and communication protocols.
Such external coupling is essential, but should be limited to a small number of modules within a structure.
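Common coupling through a global data item can be sketched in a few lines; `producer` and `consumer` are invented names for two otherwise unrelated functions:

```python
# A global data item shared by two modules (common coupling).
GLOBAL_COUNT = 0

def producer():
    """Writes the shared global item."""
    global GLOBAL_COUNT
    GLOBAL_COUNT += 1

def consumer():
    """Reads the same shared item. A change in producer can ripple here
    without anything visible at either function's interface."""
    return GLOBAL_COUNT
```

The hidden dependency through `GLOBAL_COUNT` is what makes common coupling hard to trace: neither function's parameter list reveals it.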
5. Content coupling:
Content coupling exists between two modules if one refers to the inside of the
other in any way.
1. If one module refers to data within another.
2. If one module alters a statement in another.
3. If one module branches into another.
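Case 1 — one module referring to data within another — can be sketched in Python, where a leading underscore marks data intended to be internal (the `Account` example is invented for illustration):

```python
class Account:
    def __init__(self):
        self._balance = 0        # intended to be internal to Account

    def deposit(self, amount):
        self._balance += amount  # the sanctioned interface

def audit_hack(account):
    # Content coupled: this function bypasses Account's interface and
    # manipulates its internal data directly.
    account._balance = 999
    return account._balance
```

Any change to `Account`'s internal representation now silently breaks `audit_hack`, which is why content coupling is the worst level.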
Advantages of Modularity
Allows large programs to be written by several different people.
Encourages commonly used routines to be placed in a library and/or used by other programs.
Simplifies the overlaying procedure of loading large programs into main storage.
Provides more checkpoints to measure progress.
Simplifies design, making programs easy to modify and reducing maintenance costs.
Provides a framework for more complete testing; modules are easier to test.
Produces well-designed and more readable programs.
Disadvantages of Modularity
Execution time may be, but not necessarily, longer.
Storage size may be, but is not necessarily, increased.
Compilation and loading time may be longer.
Inter-module communication problems may be increased.
Demands more initial design time.
More linkage required, run-time may be longer, more source lines must be written and
more documentation has to be done.
Modular Design
Software architecture represents modularity; that is, software is divided into separately
named and addressable components, often called modules that are integrated to satisfy
problem requirements.
It has been stated that "modularity is the single attribute of software that allows a
program to be intellectually manageable".
A reader cannot easily grasp monolithic software. This leads to a divide-and-conquer strategy: it is easier to solve a complex problem when it is broken down into manageable pieces.
If we could subdivide software indefinitely, the effort required to develop each piece would seem to become negligibly small. Unfortunately, other forces come into play: the effort associated with integrating the modules increases. Hence, we should modularize, but optimally.
1. Evaluate the "First-Cut" program structure to reduce coupling and improve cohesion.
Once the program structure has been developed, modules may be exploded or
imploded with an eye toward improving module independence.
An exploded module becomes two or more modules in the final program structure; processing buried inside it is redefined as a separate, cohesive module.
An imploded module is the result of combining the processing implied by two or more highly coupled modules, which reduces interface complexity.
2. Attempt to minimize structures with high fan-out; strive for fan-in as depth increases.
The structure shown inside the cloud (figure below) does not make effective use of factoring. All modules are “packed” below a single control module. In general, a more reasonable distribution of control is shown in the upper structure.
A very high fan-out is not very desirable; as it means that the module has to control
and coordinate too many modules and may therefore be too complex. Fan-out can
be reduced by creating a subordinate and making many of the current subordinates
subordinate to the newly created module. In general the fan-out should not be
increased above five or six.
Whenever possible, the fan-in should be maximized. Of course, this should not be
obtained at the cost of increasing the coupling or decreasing the cohesion of
modules. Fan-in can often be increased by separating out common functions from
different modules and creating a module to implement that function.
3. Keep the scope of effect of a module within the scope of control of that module.
The scope of effect of a module M is the set of all other modules that are affected by a decision made in M.
The scope of control of M is the set of all modules that are subordinate, directly or ultimately, to M. If M makes a decision that affects a module lying outside its scope of control, we have a violation of this heuristic.
5. Define modules whose function is predictable, but avoid modules that are overly
restrictive.
A module is predictable when it can be treated as a black box; that is, the same
external data will be produced regardless of internal processing details.
Modules that have internal “memory” can be unpredictable unless care is taken in
their use.
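The contrast between a predictable (black-box) module and one with internal "memory" can be sketched as follows; both functions and the counter are invented for illustration:

```python
# Internal "memory" shared across calls makes a module unpredictable.
counter_memory = {"calls": 0}

def unpredictable(x):
    """Not a black box: the result depends on how many times
    the function has been called before."""
    counter_memory["calls"] += 1
    return x + counter_memory["calls"]

def predictable(x):
    """A black box: the same input always yields the same output,
    regardless of internal processing details."""
    return x + 1
```

Calling `predictable(1)` any number of times always gives 2, while repeated calls to `unpredictable(1)` give different results each time.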