Object Oriented Software Engineering - CCS356 - Notes
UNIT- I
SOFTWARE PROCESS AND AGILE DEVELOPMENT
Introduction to Software Engineering, Software Process, Perspective and Specialized
Process Models –Introduction to Agility – Agile process – Extreme programming – XP
Process.
1.1 INTRODUCTION TO SOFTWARE ENGINEERING
The Evolving Role of Software:
Software can be considered in a dual role: it is a product and, at the same time, a vehicle for delivering a product.
As a product, it delivers the computing potential embodied by computer hardware.
Example
A network of computers accessible by local hardware, whether it resides within a cellular phone
or operates inside a mainframe computer.
i) As a vehicle, software is used to deliver the product. Software delivers the most important product of our time: information.
ii) Software transforms personal data, manages business information to enhance competitiveness, provides a gateway to worldwide information networks (e.g., the Internet), and provides the means for acquiring information in all of its forms.
iii) Software acts as the basis for operating systems, networks, software tools and environments.
1.1.1 Software Characteristics
Software is a logical rather than a physical system element. Therefore, software has
characteristics that are considerably different than those of hardware:
1. Software is developed or engineered; it is not manufactured in the classical sense.
Although some similarities exist between software development and hardware manufacture, the
two activities are fundamentally different.
In both activities, high quality is achieved through good design, but the manufacturing phase for
hardware can introduce quality problems that are nonexistent (or easily corrected) for software.
2. Software doesn't "wear out."
https://play.google.com/store/apps/details?id=info.therithal.brainkart.annauniversitynotes&hl=en_IN
www.BrainKart.com
Hardware exhibits relatively high failure rates early in its life (failures often attributable to design or manufacturing defects); defects are corrected and the failure rate drops to a steady-state level (ideally, quite low) for some period of time.
As time passes, however, the failure rate rises again as hardware components suffer from the
cumulative effects of dust, vibration, abuse, temperature extremes, and many other environmental
maladies.
Stated simply, the hardware begins to wear out.
Expert systems, also called knowledge-based systems, pattern recognition (image and voice), artificial neural networks, theorem proving, and game playing are representative of applications within this category.
SOFTWARE ENGINEERING
In order to build software that is ready to meet these challenges, we must recognize a few simple realities:
It follows that a concerted effort should be made to understand the problem before a software
solution is developed.
It follows that design becomes a pivotal activity.
It follows that software should exhibit high quality.
It follows that software should be maintainable.
These simple realities lead to one conclusion: software in all of its forms and across all of its
application domains should be engineered.
Software engineering is the establishment and use of sound engineering principles in order to
obtain economically software that is reliable and works efficiently on real machines.
Software engineering encompasses a process, methods for managing and engineering software,
and tools.
1.1.3 Software Engineering: A Layered Technology
Software engineering is a layered technology, as shown in Figure 1.3 below.
Software engineering methods rely on a set of basic principles that govern each area of the
technology and include modelling activities and other descriptive techniques.
Software engineering tools:
Software engineering tools provide automated or semi-automated support for the process and the
methods.
When tools are integrated so that information created by one tool can be used by another, a system
for the support of software development, called computer-aided software engineering, is
established.
1.1.4 The Software Process
A process is a collection of activities, actions, and tasks that are performed when some work
product is to be created.
An activity strives to achieve a broad objective and is applied regardless of the application domain,
size of the project, complexity of the effort, or degree of rigor with which software engineering is
to be applied.
An action (e.g., architectural design) encompasses a set of tasks that produce a major work product
(e.g., an architectural design model).
A task focuses on a small, but well-defined objective (e.g., conducting a unit test) that produces a
tangible outcome.
A process framework establishes the foundation for a complete software engineering process by
identifying a small number of framework activities that are applicable to all software projects,
regardless of their size or complexity.
In addition, the process framework encompasses a set of umbrella activities that are applicable
across the entire software process. A generic process framework for software engineering
encompasses five activities:
The five generic process framework activities:
a) Communication:
The intent is to understand stakeholders' objectives for the project and to gather requirements that help define software features and functions.
b) Planning:
Software project plan—defines the software engineering work by describing the technical tasks to
be conducted, the risks that are likely, the resources that will be required, the work products to be
produced, and a work schedule.
c) Modelling:
A software engineer accomplishes this by creating models to better understand software requirements and the design that will achieve those requirements.
d) Construction:
This activity combines code generation (either manual or automated) and the testing that is
required to uncover errors in the code.
e) Deployment:
The software (as a complete entity or as a partially completed increment) is delivered to the customer, who evaluates the delivered product and provides feedback based on the evaluation.
Process flow—describes how the framework activities and the actions and tasks that occur within
each framework activity are organized with respect to sequence and time.
(a) A linear process flow executes each of the five framework activities in sequence,
beginning with communication and culminating with deployment.
(b) An iterative process flow repeats one or more of the activities before proceeding to the next.
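The two flows above can be sketched in a few lines of Python. The activity names come from the text; the functions themselves are an illustrative assumption, not part of any standard:

```python
# Sketch of linear vs. iterative process flow over the five generic
# framework activities named in the text.

ACTIVITIES = ["communication", "planning", "modeling",
              "construction", "deployment"]

def linear_flow():
    """Execute each framework activity once, in sequence."""
    return list(ACTIVITIES)

def iterative_flow(repeats=2):
    """Repeat the first four activities before culminating in deployment."""
    log = []
    for _ in range(repeats):
        log.extend(ACTIVITIES[:-1])   # revisit communication..construction
    log.append("deployment")          # deployment happens last
    return log

print(linear_flow()[0], linear_flow()[-1])   # communication deployment
```

Either way, the flow begins with communication and culminates with deployment; only the number of passes over the middle activities differs.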
(2) successful completion of a number of task patterns [specified] for the Communication pattern
has occurred; and
(3) the project scope, basic business requirements, and project constraints are known.
Problem: The specific problem to be solved by the pattern.
Solution:
Describes how to implement the pattern successfully.
This section describes how the initial state of the process (that exists before the pattern is
implemented) is modified as a consequence of the initiation of the pattern.
It also describes how software engineering information or project information that is available
before the initiation of the pattern is transformed as a consequence of the successful execution of
the pattern.
Resulting Context: Describes the conditions that will result once the pattern has been successfully
implemented. Upon completion of the pattern:
(1) What organizational or team-related activities must have occurred?
(2) What is the exit state for the process?
(3) What software engineering information or project information has been developed?
Related Patterns:
i) Provide a list of all process patterns that are directly related to this one. This may be represented
as a hierarchy or in some other diagrammatic form.
ii) For example, the stage pattern Communication encompasses the task patterns:
ProjectTeam,
CollaborativeGuidelines,
ScopeIsolation,
RequirementsGathering,
ConstraintDescription, and
ScenarioCreation.
Known Uses and Examples:
Indicate the specific instances in which the pattern is applicable.
For example, Communication is mandatory at the beginning of every software project, is
recommended throughout the software project, and is mandatory once the deployment activity is
under way.
i) In the second increment, more sophisticated document production and processing facilities and file management functions are provided.
Incremental process Model advantages
1. Produces working software early during the lifecycle.
2. More flexible as scope and requirement changes can be implemented at low cost.
3. Testing and debugging are easier, as the iterations are small.
4. Lower risk, as risks can be identified and resolved during each iteration.
Incremental process Model disadvantages
1. This model has phases that are very rigid and do not overlap.
2. Not all the requirements are gathered before starting the development; this could lead to
problems related to system architecture at later iterations.
1.3.2.1 The RAD Model
Rapid Application Development (RAD) is a linear sequential software development process model that emphasizes an extremely short development cycle.
Rapid development is achieved by using a component-based construction approach.
If requirements are well understood and project scope is constrained, the RAD process enables a development team to create a "fully functional system".
RAD phases:
Business modeling
Data modeling
Process modeling
Application generation
Testing and turnover
Business modelling:
What information drives the business process?
What information is generated?
Who generates it?
Where does the information go?
Who processes it?
Data modelling:
The information flow defined as part of the business modeling phase is refined into a set of data
objects that are needed to support the business.
The characteristics (called attributes) of each object are identified and the relationships between
these objects are defined.
Process modelling:
The data objects defined in the data modelling phase are transformed to achieve the information flow necessary to implement a business function.
Processing descriptions are created for adding, modifying, deleting, or retrieving a data object.
Application generation:
RAD assumes the use of fourth generation (4GT) techniques.
A quick design focuses on a representation of those aspects of the software that will be visible to the customer/user (e.g., input approaches and output formats). The quick design leads to the construction of a prototype.
The prototype is evaluated by the customer/user and used to refine requirements for the software
to be developed. Iteration occurs as the prototype is tuned to satisfy the needs of the customer,
while at the same time enabling the developer to better understand what needs to be done.
Ideally, the prototype serves as a mechanism for identifying software requirements.
If a working prototype is built, the developer attempts to use existing program fragments or applies
tools (e.g., report generators, window managers) that enable working programs to be generated
quickly.
Advantages:
Requirements can be set earlier and more reliably.
Customer sees results very quickly.
Customer is educated in what is possible helping to refine requirements.
Requirements can be communicated more clearly and completely between developers and clients.
Requirements and design options can be investigated quickly and cheaply.
The spiral development model is a risk-driven process model generator that is used to guide multi-stakeholder concurrent engineering of software-intensive systems.
It has two main distinguishing features.
One is a cyclic approach for incrementally growing a system's degree of definition and implementation while decreasing its degree of risk.
The other is a set of anchor point milestones for ensuring stakeholder commitment to feasible and
mutually satisfactory system solutions.
Using the spiral model, software is developed in a series of incremental releases.
A spiral model is divided into a number of framework activities, also called task regions.
Typically, there are between three and six task regions. Figure 1.10 depicts a spiral model that contains six task regions:
Customer communication—tasks required to establish effective communication between
developer and customer.
Planning—tasks required to define resources, timelines, and other project related information.
Risk analysis—tasks required to assess both technical and management risks.
Engineering—tasks required to build one or more representations of the application.
Construction and release—tasks required to construct, test, install, and provide user support
(e.g., documentation and training).
Each cube placed along the axis can be used to represent the starting point for different types of projects. A "concept development project" starts at the core of the spiral and will continue until concept development is complete.
If the concept is to be developed into an actual product, the process proceeds through the next cube (new product development project entry point) and a "new product development project" is initiated. The new product will evolve through a number of iterations around the spiral, following the path that bounds the core region.
The spiral model is a realistic approach to the development of large-scale systems and software, because customer and developer better understand the problem statement at each evolutionary level, and risks can be identified and rectified at each such level.
Spiral Model Advantages:
Requirement changes can be made at every stage.
Risks can be identified and rectified before they get problematic.
Spiral Model disadvantages:
It relies on customer communication. If the communication is not effective, the software product that gets developed will not be up to the mark.
It demands considerable risk assessment. Only if the risk assessment is done properly can a successful product be obtained.
1.3.4 Concurrent Models
The concurrent development model is also called concurrent engineering.
It allows a software team to represent iterative and concurrent elements of any of the process
models.
In this model, the framework activities or software development tasks are represented as
states.
For example, the modeling or design activity of software development can be in one of several states, such as under development, awaiting changes, under revision, or under review.
All software engineering activities exist concurrently but reside in different states.
These states make transitions. For example, during modeling, when the customer indicates that changes in requirements must be made, the modeling activity moves from the under development state into the awaiting changes state.
This model basically defines the series of events that cause the transition from one state to another; this is called triggering. Such events occur for every software development activity, action, or task.
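As a minimal sketch, the triggering of transitions between states can be modeled as a lookup table. The state names follow the text; the trigger events are illustrative assumptions:

```python
# Transition table for one activity (modeling) in the concurrent model.
# State names follow the text; the trigger events are invented examples.

TRANSITIONS = {
    ("under development", "changes requested"): "awaiting changes",
    ("awaiting changes", "revision started"):   "under revision",
    ("under revision", "review requested"):     "under review",
    ("under review", "approved"):               "done",
}

def trigger(state, event):
    """Return the next state for (state, event); stay put if no rule fires."""
    return TRANSITIONS.get((state, event), state)

state = "under development"
state = trigger(state, "changes requested")   # customer asks for changes
print(state)                                  # awaiting changes
```

Each software engineering activity carries its own table of this kind, and all activities advance through their states concurrently.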
Advantages:
All types of software development can be done using concurrent development model.
This model provides accurate picture of current state of project.
Each activity or task can be carried out concurrently. Hence this model is an efficient process
model.
These slides are designed to accompany Software Engineering: A Practitioner's Approach, 7/e.
i) Test-driven management establishes a series of measurable "destinations" and then defines mechanisms for determining whether or not these destinations have been reached.
Retrospectives:
i) An IXP team conducts a technical review after a software increment is delivered, called a retrospective.
ii) This review examines "issues, events, and lessons learned" across a software increment and/or the entire software release.
iii) The intent is to improve the IXP process.
Continuous learning:
i) Learning is a vital part of continuous process improvement; members of the XP team are encouraged to learn new methods and techniques that can lead to a higher-quality product.
ii) In addition to these six new practices, IXP modifies a number of existing XP practices.
Story-driven development (SDD) insists that stories for acceptance tests be written before a
single line of code is developed.
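The ordering that SDD insists on can be illustrated with a small, invented story; the discount rule and function names are made up for illustration, not taken from the text:

```python
# Story-driven development, sketched: the acceptance test for a story is
# written first, then just enough code is written to make it pass.

def acceptance_test(checkout):
    # Story: "As a customer, I get 10% off orders of 100 or more."
    assert checkout(99.0) == 99.0     # below threshold: no discount
    assert checkout(100.0) == 90.0    # at threshold: 10% off

# Written second, only to satisfy the acceptance test above.
def checkout(total):
    return total * 0.9 if total >= 100 else total

acceptance_test(checkout)
print("story accepted")
```

The point is the ordering: the test exists and fails before `checkout` is written, so the story's acceptance criteria drive the implementation.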
Critics argue that a more formal model or specification is often needed to ensure that omissions, inconsistencies, and errors are uncovered before the system is built.
Lack of formal design.
XP deemphasizes the need for architectural design and, in many instances, suggests that design of all kinds should be relatively informal.
Critics argue that when complex systems are built, design must be emphasized to ensure that the overall structure of the software will exhibit quality and maintainability.
XP proponents suggest that the incremental nature of the XP process limits complexity (simplicity
is a core value) and therefore reduces the need for extensive design.
Features are small, "useful in the eyes of the client" results.
FDD designs the rest of the development process around feature delivery using the following eight
practices:
Domain Object Modelling
Developing by Feature
Component/Class Ownership
Feature Teams
Inspections
Configuration Management
Regular Builds
Visibility of progress and results
FDD recommends specific programmer practices such as "Regular Builds" and "Component/Class Ownership".
Unlike other agile methods, FDD describes specific, very short phases of work, which are to be
accomplished separately per feature.
These include Domain Walkthrough, Design, Design Inspection, Code, Code Inspection, and Promote to
Build.
1.8.3.6 Agile Modelling (AM)
Agile Modeling (AM) is a practice-based methodology for effective modeling and documentation
of software-based systems.
Simply put, Agile Modeling (AM) is a collection of values, principles, and practices for modeling
software that can be applied on a software development project in an effective and light-weight
manner.
Although AM suggests a wide array of "core" and "supplementary" modeling principles, those that make AM unique are:
Use multiple models.
There are many different models and notations that can be used to describe software.
AM suggests that to provide needed insight, each model should present a different aspect of the
system and only those models that provide value to their intended audience should be used.
Travel light.
As software engineering work proceeds, keep only those models that will provide long-term value
and jettison the rest.
Content is more important than representation.
Modeling should impart information to its intended audience.
A syntactically perfect model that imparts little useful content is not as valuable as a model with
flawed notation that nevertheless provides valuable content for its audience.
Know the models and the tools you use to create them.
Understand the strengths and weaknesses of each model and the tools that are used to create it.
Adapt locally.
The modelling approach should be adapted to the needs of the agile team.
Software Design is the process of transforming user requirements into a suitable form, which
helps the programmer in software coding and implementation. During the software design
phase, the design document is produced, based on the customer requirements as documented
in the SRS document. Hence, this phase aims to transform the SRS document into a design
document.
The following items are designed and documented during the design phase:
1. The different modules required.
2. The control relationships among the modules.
3. The interfaces among the different modules.
4. The data structures of the individual modules.
5. The algorithms required to implement the individual modules.
A concept is a principal idea or invention that comes to mind in order to understand something. The software design concept simply means the idea or principle behind the design. It describes how you plan to solve the problem of designing software, and the logic or thinking behind how you will design it. It allows the software engineer to create the model of the system, software, or product that is to be developed or built. The software
design concept provides a supporting and essential structure or model for developing the right
software. There are many concepts of software design and some of them are given below:
One such concept is refactoring, defined as "the process of changing a software system in a way that it won't impact the behavior of the design and improves the internal structure".
Design concepts provide the software designer with a foundation from which more sophisticated design methods can be applied, and help the software engineer to answer the following questions:
• What criteria can be used to partition software into individual components?
• How is function or data structure detail separated from a conceptual representation of the software?
• What uniform criteria define the technical quality of a software design?
Fundamental software design concepts provide the necessary framework for "getting it right."
Abstraction
Architecture
Patterns
Separation of Concerns
Modularity
Information Hiding
Functional Independence
Refinement
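Two of these concepts, information hiding and abstraction, can be sketched together in a short class. The `Inventory` example is invented for illustration:

```python
# Information hiding: callers use a small public interface; the internal
# representation (a dict here) stays hidden and can change freely, e.g.
# to a database, without affecting any caller.

class Inventory:
    def __init__(self):
        self._counts = {}            # hidden detail of the module

    def add(self, item, n=1):        # public, abstract operation
        self._counts[item] = self._counts.get(item, 0) + n

    def count(self, item):           # public query; representation not exposed
        return self._counts.get(item, 0)

inv = Inventory()
inv.add("widget", 3)
print(inv.count("widget"))   # 3
```

Because clients only depend on `add` and `count` (the abstraction), the hidden data structure can be replaced without rippling changes through the system.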
Module Coupling
A good design is one that has low coupling. Coupling is measured by the number of relations between modules; it increases as the number of calls between modules increases or as the amount of shared data grows. Thus, a design with high coupling will tend to have more errors.
1. No Direct Coupling: In this case, the modules are subordinates of different modules; therefore, there is no direct coupling.
2. Data Coupling: When data of one module is passed to another module, this is called data
coupling.
3. Stamp Coupling: Two modules are stamp coupled if they communicate using composite data items such as structures, objects, etc. When a module passes a non-global data structure, or an entire structure, to another module, they are said to be stamp coupled. For example, passing a structure variable in C or an object in C++ to a module.
4. Control Coupling: Control Coupling exists among two modules if data from one module is
used to direct the structure of instruction execution in another.
5. External Coupling: External Coupling arises when two modules share an externally imposed
data format, communication protocols, or device interface. This is related to communication
to external tools and devices.
6. Common Coupling: Two modules are common coupled if they share information through
some global data items.
7. Content Coupling: Content Coupling exists among two modules if they share code, e.g., a
branch from one module into another module.
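The main coupling types above can be illustrated with a short sketch; the functions and the Account structure below are hypothetical examples, not from the original text:

```python
# Hypothetical sketch of three coupling types (names are illustrative).
from dataclasses import dataclass

# Data coupling: only elementary data items are passed between modules.
def compute_interest(principal: float, rate: float) -> float:
    return principal * rate

# Stamp coupling: a composite data structure is passed between modules.
@dataclass
class Account:
    owner: str
    principal: float
    rate: float

def compute_account_interest(account: Account) -> float:
    return account.principal * account.rate

# Control coupling: a flag from the caller directs the callee's control flow.
def report(account: Account, verbose: bool) -> str:
    amount = compute_account_interest(account)
    if verbose:
        return f"{account.owner}: {amount:.2f}"
    return f"{amount:.2f}"
```

Of these, data coupling is the most desirable: the callee depends only on the individual values it actually needs.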
Module Cohesion
In computer programming, cohesion refers to the degree to which the elements of a module belong together. Thus, cohesion measures the strength of the relationships between pieces of functionality within a given module. For example, in highly cohesive systems, functionality is strongly related.
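As an illustrative (hypothetical) sketch, compare a functionally cohesive module with a coincidentally cohesive one:

```python
# Hypothetical sketch contrasting high and low cohesion.

class Stack:
    """Functionally cohesive: every method serves one purpose (stack storage)."""
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

class Utils:
    """Coincidentally cohesive: unrelated responsibilities grouped together."""
    def parse_date(self, text):
        return text.split("-")

    def square(self, x):
        return x * x
```

The Stack class is easy to name, test, and reuse; the Utils class invites unrelated changes for unrelated reasons, which is the hallmark of low cohesion.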
Coupling vs. Cohesion
● Coupling is also called Inter-Module Binding; cohesion is also called Intra-Module Binding.
● Coupling shows the relationships between modules; cohesion shows the relationships within a module.
● While designing, you should aim for low coupling (dependency among modules should be minimal) but for high cohesion (a cohesive component/module focuses on a single function, i.e., single-mindedness, with little interaction with other modules of the system).
● In coupling, modules are linked to other modules; in cohesion, the module focuses on a single thing.
Software design patterns are communicating objects and classes that are customized to
solve a general design problem in a particular context. Software design patterns are general,
reusable solutions to common problems that arise during the design and development of
software. They represent best practices for solving certain types of problems and provide a way
for developers to communicate about effective design solutions. Design patterns capture expert
knowledge and experience, making it easier for developers to create scalable, maintainable,
and flexible software systems.
Design patterns are basically defined as reusable solutions to the common problems that
arise during software design and development. They are general templates or best practices
that guide developers in creating well-structured, maintainable, and efficient code.
Basically, there are several types of design patterns that are commonly used in software
development. These patterns can be categorized into three main groups:
1. Creational Design Patterns
● Singleton Pattern
● The Singleton method or Singleton Design pattern is one of the simplest design
patterns. It ensures a class only has one instance, and provides a global point
of access to it.
● Factory Method Pattern
● The Factory Method pattern is used to create objects without specifying the
exact class of object that will be created. This pattern is useful when you need
to decouple the creation of an object from its implementation.
● Abstract Factory Pattern
● The Abstract Factory pattern is similar to the Factory pattern and is considered another layer of abstraction over it. Abstract Factory patterns work around a super-factory which creates other factories.
● Builder Pattern
● Builder pattern aims to “Separate the construction of a complex object from
its representation so that the same construction process can create different
representations.” It is used to construct a complex object step by step and the
final step will return the object.
● Prototype Pattern
● Prototype allows us to hide the complexity of making new instances from the
client.
● The concept is to copy an existing object rather than creating a new instance
from scratch, something that may include costly operations. The existing object
acts as a prototype and contains the state of the object.
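As a sketch of the simplest of these, a minimal Singleton can be written as follows; the Config class name and its settings attribute are illustrative assumptions:

```python
# Minimal Singleton sketch using __new__ (one common Python idiom).

class Config:
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.settings = {}   # shared state, created exactly once
        return cls._instance

a = Config()
b = Config()
a.settings["mode"] = "debug"
print(a is b)                 # True: both names refer to the same instance
print(b.settings["mode"])     # debug: state is shared via the single instance
```

Every call to Config() returns the same object, giving the single instance and the global point of access the pattern requires.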
2. Structural Design Patterns
● Adapter Pattern
● The adapter pattern converts the interface of a class into another interface clients expect. Adapter lets classes work together that couldn't otherwise because of incompatible interfaces.
● Bridge Pattern
● The bridge pattern allows the Abstraction and the Implementation to be
developed independently and the client code can access only the Abstraction
part without being concerned about the Implementation part
● Composite Pattern
● Composite pattern is a partitioning design pattern and describes a group of
objects that is treated the same way as a single instance of the same type of
object. The intent of a composite is to “compose” objects into tree structures to
represent part-whole hierarchies.
● Decorator Pattern
● It allows us to dynamically add functionality and behavior to an object
without affecting the behavior of other existing objects within the same class.
● We use inheritance to extend the behavior of the class. This takes place at
compile-time, and all the instances of that class get the extended behavior.
● Facade Pattern
● Facade Method Design Pattern provides a unified interface to a set of
interfaces in a subsystem. Facade defines a high-level interface that makes the
subsystem easier to use.
● Proxy Pattern
● "In place of" or "on behalf of" are the literal meanings of proxy, and that directly explains the Proxy Design Pattern.
● Proxies are also called surrogates, handles, and wrappers. They are closely
related in structure, but not purpose, to Adapters and Decorators.
● Flyweight Pattern
● This pattern provides ways to decrease the object count, thus improving the application's object structure. The Flyweight pattern is used when we need to create a large number of similar objects.
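A minimal sketch of one structural pattern, the Adapter, might look like this; the LegacyPrinter and PrinterAdapter names are illustrative:

```python
# Adapter sketch: a class with an incompatible interface is wrapped so that
# clients expecting a request() method can use it (names are illustrative).

class LegacyPrinter:
    def print_upper(self, text: str) -> str:
        return text.upper()

class PrinterAdapter:
    """Converts the interface of LegacyPrinter into the one clients expect."""
    def __init__(self, adaptee: LegacyPrinter):
        self._adaptee = adaptee

    def request(self, text: str) -> str:
        return self._adaptee.print_upper(text)

client_facing = PrinterAdapter(LegacyPrinter())
print(client_facing.request("hello"))   # HELLO
```

The client calls request() without knowing or caring that the work is delegated to the incompatible print_upper() method.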
3. Behavioral Design Patterns
● Observer Pattern
● It defines a one-to-many dependency between objects, so that when one
object (the subject) changes its state, all its dependents (observers) are notified
and updated automatically.
● Strategy Pattern
● The Strategy pattern is a behavioral design pattern that allows the behavior of an object to be selected at runtime. It is one of the Gang of Four (GoF) design patterns, which are widely used in object-oriented programming.
● The Strategy pattern is based on the idea of encapsulating a family of
algorithms into separate classes that implement a common interface.
● Command Pattern
● The Command Pattern is a behavioral design pattern that turns a request into a
stand-alone object, containing all the information about the request. This object
can be passed around, stored, and executed at a later time
● Chain of Responsibility Pattern
● Chain of responsibility pattern is used to achieve loose coupling in software
design where a request from the client is passed to a chain of objects to process
them.
● Later, the object in the chain will decide themselves who will be processing the
request and whether the request is required to be sent to the next object in the
chain or not.
● State Pattern
● A state design pattern is used when an Object changes its behavior based on
its internal state. If we have to change the behavior of an object based on its
state, we can have a state variable in the Object and use the if-else condition
block to perform different actions based on the state.
● Template Method Pattern
● The Template Method design pattern defines an algorithm as a skeleton of operations and leaves the details to be implemented by the child classes. The overall structure and sequence of the algorithm are preserved by the parent class.
● Visitor Pattern
● It is used when we have to perform an operation on a group of similar kind of
Objects. With the help of visitor pattern, we can move the operational logic from
the objects to another class.
● Interpreter Pattern
● The Interpreter pattern is used to define a grammatical representation for a language and provides an interpreter to deal with this grammar.
● Mediator Pattern
● It enables decoupling of objects by introducing a layer in between so that the
interaction between objects happen via the layer.
● Memento Pattern
● It is used to restore the state of an object to a previous state. As your
application is progressing, you may want to save checkpoints in your application
and restore back to those checkpoints later.
● The intent of the Memento design pattern is, without violating encapsulation, to capture and externalize an object's internal state so that the object can be restored to this state later.
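A minimal sketch of one behavioral pattern, the Observer, might look like this; the Subject and LoggingObserver names are illustrative:

```python
# Observer sketch: the subject notifies all registered observers when its
# state changes (class names are illustrative).

class Subject:
    def __init__(self):
        self._observers = []
        self._state = None

    def attach(self, observer):
        self._observers.append(observer)

    def set_state(self, state):
        self._state = state
        for observer in self._observers:   # push the update to all dependents
            observer.update(state)

class LoggingObserver:
    def __init__(self):
        self.seen = []

    def update(self, state):
        self.seen.append(state)

subject = Subject()
log = LoggingObserver()
subject.attach(log)
subject.set_state(42)
print(log.seen)   # [42]
```

The subject never needs to know the concrete type of its observers; anything with an update() method can be attached.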
Design patterns are a valuable tool in software development, and they offer various benefits and uses, some of which are explained below:
● Enhancing Maintainability:
● Design patterns help organize code in a structured and consistent way. This
makes it easier to maintain, update, and extend the codebase. Developers
familiar with the patterns can quickly understand and work on the code.
● Promoting Code Reusability:
● Design patterns encapsulate solutions to recurring design problems. By using
these patterns, we can create reusable templates for solving specific problems in
different parts of your application.
● Simplifying Complex Problems:
● Complex software problems can be broken down into smaller, more manageable
components using design patterns. This simplifies development by addressing
one problem at a time and, in turn, makes the code more maintainable.
● Improving Scalability:
Basically, design patterns should be used when they provide a clear and effective solution to a
recurring problem in our software design. Here are some situations where we can use the
design patterns.
The Model View Controller (MVC) design pattern specifies that an application consists of a
data model, presentation information, and control information. The pattern requires that each of
these be separated into different objects.
● The MVC pattern separates the concerns of an application into three distinct components, each responsible for a specific aspect of the application's functionality.
● This separation of concerns makes the application easier to maintain and extend, as
changes to one component do not require changes to the other components.
1. Model
The Model component in the MVC (Model-View-Controller) design pattern represents the data and business logic of an application. It is responsible for managing the application's data, processing business rules, and responding to requests for information from other components, such as the View and the Controller.
2. View
Displays the data from the Model to the user and sends user inputs to the Controller. It is
passive and does not directly interact with the Model. Instead, it receives data from the Model
and sends user inputs to the Controller for processing.
3. Controller
Controller acts as an intermediary between the Model and the View. It handles user
input and updates the Model accordingly and updates the View to reflect changes in the Model.
It contains application logic, such as input validation and data transformation.
This communication flow ensures that each component is responsible for a specific aspect of the application's functionality, leading to a more maintainable and scalable architecture.
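Since the original code listing is not reproduced here, the following is a minimal, hypothetical MVC sketch (a student-record example; all names are illustrative):

```python
# Minimal MVC sketch (hypothetical student-record example).

class StudentModel:                      # Model: data and business logic
    def __init__(self, name, roll_no):
        self.name = name
        self.roll_no = roll_no

class StudentView:                       # View: presentation only, no logic
    def render(self, name, roll_no):
        return f"Student: {name}, Roll No: {roll_no}"

class StudentController:                 # Controller: mediates Model and View
    def __init__(self, model, view):
        self.model = model
        self.view = view

    def set_name(self, name):            # updates the Model on user input
        self.model.name = name

    def update_view(self):               # asks the View to display Model data
        return self.view.render(self.model.name, self.model.roll_no)

controller = StudentController(StudentModel("Zara", 10), StudentView())
print(controller.update_view())          # Student: Zara, Roll No: 10
controller.set_name("John")
print(controller.update_view())          # Student: John, Roll No: 10
```

Note that the View never touches the Model directly: the Controller passes it the data to display, so each component can change independently.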
● Increased File Count: MVC can result in a larger number of files and classes compared to
simpler architectures, which may make the project structure more complex and harder to
navigate.
ARCHITECTURAL STYLES
The architectural model or style is a pattern for creating the system architecture
for a given problem. The software that is built for computer-based systems also
exhibits many architectural styles.
Each style describes a system category that encompasses:
(1) A set of components (e.g., a database, computational modules) that perform a function required by a system;
(2) A set of connectors that enable “communication, coordination and cooperation” among
components;
(3) Constraints that define how components can be integrated to form the system;
(4) Semantic models that enable a designer to understand the overall properties of a
system by analyzing the known properties of its constituent parts. In the section that follows,
we consider commonly used architectural patterns for software.
a)Data-centered architectures.
A data store (e.g., a file or database) resides at the center of this architecture and is
accessed frequently by other components that update, add, delete, or otherwise modify data
within the store. Figure (Data-centered architecture) illustrates typical data-centered style.
Client software accesses a central repository.
b) Data-flow architectures.
If the data flow degenerates into a single line of transforms, it is termed batch sequential. This
pattern accepts a batch of data and then applies a series of sequential components (filters) to
transform it
This architectural style enables a software designer (system architect) to achieve a program
structure that is relatively easy to modify and scale. A number of sub styles exist within this
category:
d)Object-oriented architectures.
The components of a system encapsulate data and the operations that must be
applied to manipulate the data.
Communication and coordination between components is accomplished via message passing.
e) Layered architectures.
These golden rules actually form the basis for a set of user interface design principles that guide
this important aspect of software design.
Mandel defines a number of design principles that allow the user to maintain control:
● Define interaction modes in a way that does not force a user into unnecessary or undesired actions.
● Provide for flexible interaction.
● Streamline interaction as skill levels advance and allow the interaction to be customized.
● Hide technical internals from the casual user.
● Design for direct interaction with objects that appear on the screen.
Mandel defines design principles that enable an interface to reduce the user's memory load:
● Reduce demand on short-term memory.
● The visual layout of the interface should be based on a real-world metaphor.
● Disclose information in a progressive fashion.
The interface should present and acquire information in a consistent fashion. This implies that
(1) all visual information is organized according to design rules that are maintained throughout
all screen displays,
(2) input mechanisms are constrained to a limited set that is used consistently throughout the
application
(3) mechanisms for navigating from task to task are consistently defined and implemented.
Mandel defines a set of design principles that help make the interface consistent:
● Allow the user to put the current task into a meaningful context.
● Maintain consistency across a family of applications.
● If past interactive models have created user expectations, do not make changes unless there is a compelling reason to do so.
Testing – Unit testing – Black box testing– White box testing – Integration and System testing–
Regression testing – Debugging - Program analysis – Symbolic execution – Model Checking-
Case Study
Testing is the process of executing a program to find errors. To make our software perform well, it should be error-free. If testing is done successfully, it will remove all the errors from the software.
Principles of Testing
Unit testing is a type of software testing that focuses on individual units or components of a
software system. The purpose of unit testing is to validate that each unit of the software works
as intended and meets the requirements. Unit testing is typically performed by developers, and
it is performed early in the development process before the code is integrated and tested as a
whole system.
Unit tests are automated and are run each time the code is changed to ensure that new code
does not break existing functionality. Unit tests are designed to validate the smallest possible
unit of code, such as a function or a method, and test it in isolation from the rest of the system.
This allows developers to quickly identify and fix any issues early in the development process,
improving the overall quality of the software and reducing the time required for later testing.
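A minimal sketch of such an automated unit test, using Python's unittest module (the divide function is a hypothetical unit under test):

```python
# Hypothetical unit under test plus an automated unittest case for it.
import unittest

def divide(a: float, b: float) -> float:
    """The unit under test: a single function, tested in isolation."""
    if b == 0:
        raise ValueError("division by zero")
    return a / b

class TestDivide(unittest.TestCase):
    def test_normal_division(self):
        self.assertEqual(divide(10, 2), 5)

    def test_zero_divisor_raises(self):
        with self.assertRaises(ValueError):
            divide(1, 0)

# Run the suite programmatically (in practice: python -m unittest <module>).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestDivide)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())   # True
```

Because the tests are automated, they can be re-run on every code change to confirm that new code does not break existing behavior.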
Unit Testing is a software testing technique in which individual units of software, i.e., groups of computer program modules, usage procedures, and operating procedures, are tested to determine whether they are suitable for use. Every independent module is tested by the developer to determine if there is an issue; unit testing is correlated with the functional correctness of the independent modules. An individual component may be either an individual function or a procedure, and unit testing of the software product is carried out during the development of an application. In the SDLC or V-Model, unit testing is the first level of testing, done before integration testing. It is usually performed by developers, although quality assurance engineers may also do unit testing.
1. Black Box Testing: This testing technique is used in covering the unit tests for input,
user interface, and output parts.
2. White Box Testing: This technique is used in testing the functional behavior of the
system by giving the input and checking the functionality output including the internal
design structure and code of the modules.
3. Gray Box Testing: This technique is used in executing the relevant test cases, test
methods, and test functions, and analyzing the code performance for the modules.
1. Jtest
2. Junit
3. NUnit
4. EMMA
5. PHPUnit
1. Unit Testing allows developers to learn what functionality is provided by a unit and
how to use it to gain a basic understanding of the unit API.
2. Unit testing allows the programmer to refine code and make sure the module works
properly.
3. Unit testing enables testing parts of the project without waiting for others to be
completed.
4. Early Detection of Issues: Unit testing allows developers to detect and fix issues
early in the development process before they become larger and more difficult to fix.
5. Improved Code Quality: Unit testing helps to ensure that each unit of code works as
intended and meets the requirements, improving the overall quality of the software.
6. Increased Confidence: Unit testing provides developers with confidence in their
code, as they can validate that each unit of the software is functioning as expected.
7. Faster Development: Unit testing enables developers to work faster and more
efficiently, as they can validate changes to the code without having to wait for the full
system to be tested.
8. Better Documentation: Unit testing provides clear and concise documentation of the
code and its behavior, making it easier for other developers to understand and
maintain the software.
9. Facilitation of Refactoring: Unit testing enables developers to safely make changes
to the code, as they can validate that their changes do not break existing
functionality.
10. Reduced Time and Cost: Unit testing can reduce the time and cost required for
later testing, as it helps to identify and fix issues early in the development process.
3. Unit Testing is not efficient for checking the errors in the UI(User Interface) part of
the module.
4. It requires more time for maintenance when the source code is changed frequently.
5. It cannot cover the non-functional testing parameters such as scalability, the
performance of the system, etc.
6. Time and Effort: Unit testing requires a significant investment of time and effort to
create and maintain the test cases, especially for complex systems.
7. Dependence on Developers: The success of unit testing depends on the developers,
who must write clear, concise, and comprehensive test cases to validate the code.
8. Difficulty in Testing Complex Units: Unit testing can be challenging when dealing with
complex units, as it can be difficult to isolate and test individual units in isolation from
the rest of the system.
9. Difficulty in Testing Interactions: Unit testing may not be sufficient for testing
interactions between units, as it only focuses on individual units.
10. Difficulty in Testing User Interfaces: Unit testing may not be suitable for testing
user interfaces, as it typically focuses on the functionality of individual units.
11. Over-reliance on Automation: Over-reliance on automated unit tests can lead to a
false sense of security, as automated tests may not uncover all possible issues or
bugs.
12. Maintenance Overhead: Unit testing requires ongoing maintenance and updates,
as the code and test cases must be kept up-to-date with changes to the software.
Integration Testing
Integration testing is the process of testing the interface between two software units or modules.
It focuses on determining the correctness of the interface. The purpose of integration testing is
to expose faults in the interaction between integrated units. Once all the modules have been
unit-tested, integration testing is performed.
Integration testing is a software testing technique that focuses on verifying the interactions and data exchange
between different components or modules of a software application. The goal of integration
testing is to identify any problems or bugs that arise when different components are combined
and interact with each other. Integration testing is typically performed after unit testing and
before system testing. It helps to identify and resolve integration issues early in the development
cycle, reducing the risk of more severe and costly problems later on.
Integration testing can be done by picking modules one by one, following a proper sequence so that no integration scenario is missed. Exposing the defects that arise at the time of interaction between the integrated units is the major focus of integration testing.
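A minimal sketch of an integration test: two hypothetical units (a parser and an adder) are combined, and the test exercises their interface rather than each unit in isolation:

```python
# Hypothetical integration test: module A (parser) feeds module B (adder);
# the test exercises the interface between them, not each unit in isolation.

def parse_expression(text: str):
    """Module A: parses 'a+b' into two integer operands."""
    left, right = text.split("+")
    return int(left), int(right)

def add(a: int, b: int) -> int:
    """Module B: performs the arithmetic."""
    return a + b

def evaluate(text: str) -> int:
    """Integration point: output of module A is passed to module B."""
    a, b = parse_expression(text)
    return add(a, b)

# Integration checks focus on the data handed across the interface:
assert evaluate("2+3") == 5
assert evaluate(" 4 + 6 ") == 10   # int() tolerates surrounding whitespace
print("integration tests passed")
```

Each unit may pass its own unit tests, yet the combination can still fail (e.g., on unexpected input formats), which is exactly what integration tests are meant to catch.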
Integration test approaches – There are four types of integration testing approaches. Those
approaches are the following:
1. Big-Bang Integration Testing – It is the simplest integration testing approach, where all the
modules are combined and the functionality is verified after the completion of individual module
testing. In simple words, all the modules of the system are simply put together and tested. This
approach is practicable only for very small systems. If an error is found during the integration
testing, it is very difficult to localize the error as the error may potentially belong to any of the
modules being integrated. So, debugging errors reported during Big Bang integration testing is
very expensive to fix.
Big-bang integration testing is a software testing approach in which all components or modules
of a software application are combined and tested at once. This approach is typically used when
the software components have a low degree of interdependence or when there are constraints
in the development environment that prevent testing individual components. The goal of big-
bang integration testing is to verify the overall functionality of the system and to identify any
integration problems that arise when the components are combined. While big-bang integration
testing can be useful in some situations, it can also be a high-risk approach, as the complexity
of the system and the number of interactions between components can make it difficult to
identify and diagnose problems.
Advantages:
Disadvantages:
1. There will be quite a lot of delay because you would have to wait for all the modules
to be integrated.
2. High-risk critical modules are not isolated and tested on priority since all modules are
tested at once.
3. Not Good for long projects.
4. High risk of integration problems that are difficult to identify and diagnose.
5. This can result in long and complex debugging and troubleshooting efforts.
6. This can lead to system downtime and increased development costs.
7. May not provide enough visibility into the interactions and data exchange between
components.
8. This can result in a lack of confidence in the system’s stability and reliability.
2. Bottom-Up Integration Testing – In bottom-up testing, each module at the lower levels is tested with higher modules until all modules are tested. The primary purpose of this integration testing is that each subsystem tests the interfaces among the various modules making up the subsystem. This integration testing uses test drivers to drive and pass appropriate data to the lower-level modules.
Advantages:
Disadvantages:
Advantages:
Disadvantages:
Advantages:
● The mixed approach is useful for very large projects having several sub-projects.
● This sandwich approach overcomes the shortcomings of the top-down and bottom-up approaches.
● Parallel tests can be performed in the top and bottom layer tests.
Disadvantages:
● Mixed integration testing requires a very high cost because one part follows a top-down approach while another part follows a bottom-up approach.
● This integration testing cannot be used for smaller systems with huge
interdependence between different modules.
Steps to perform Integration Testing:
1. Identify the components: Identify the individual components of your application that
need to be integrated. This could include the frontend, backend, database, and any
third-party services.
2. Create a test plan: Develop a test plan that outlines the scenarios and test cases that
need to be executed to validate the integration points between the different
components. This could include testing data flow, communication protocols, and
error handling.
3. Set up test environment: Set up a test environment that mirrors the production
environment as closely as possible. This will help ensure that the results of your
integration tests are accurate and reliable.
4. Execute the tests: Execute the tests outlined in your test plan, starting with the most
critical and complex scenarios. Be sure to log any defects or issues that you
encounter during testing.
5. Analyze the results: Analyze the results of your integration tests to identify any
defects or issues that need to be addressed. This may involve working with
developers to fix bugs or make changes to the application architecture.
6. Repeat testing: Once defects have been fixed, repeat the integration testing process
to ensure that the changes have been successful and that the application still works
as expected.
Integration Testing
Integration testing is the process of testing the interface between two software units or modules.
It focuses on determining the correctness of the interface. The purpose of integration testing is
to expose faults in the interaction between integrated units. Once all the modules have been
unit-tested, integration testing is performed.
https://play.google.com/store/apps/details?id=info.therithal.brainkart.annauniversitynotes&hl=en_IN
www.BrainKart.com
Integration testing is a software testing technique that focuses on verifying the interactions and
data exchange between different components or modules of a software application. The goal of
integration testing is to identify any problems or bugs that arise when different components are
combined and interact with each other. Integration testing is typically performed after unit testing
and before system testing. It helps to identify and resolve integration issues early in the
development cycle, reducing the risk of more severe and costly problems later on.
Integration testing is performed module by module, following a proper sequence so that no integration scenario is missed. Its major focus is exposing the defects that arise at the time of interaction between the integrated units.
Integration test approaches – There are four types of integration testing approaches. Those
approaches are the following:
1. Big-Bang Integration Testing – It is the simplest integration testing approach, where all the
modules are combined and the functionality is verified after the completion of individual module
testing. In simple words, all the modules of the system are simply put together and tested. This
approach is practicable only for very small systems. If an error is found during integration
testing, it is very difficult to localize, as it may potentially belong to any of the modules
being integrated. Debugging errors reported during Big-Bang integration testing is therefore
very expensive.
Big-bang integration testing is a software testing approach in which all components or modules
of a software application are combined and tested at once. This approach is typically used when
the software components have a low degree of interdependence or when there are constraints
in the development environment that prevent testing individual components. The goal of big-
bang integration testing is to verify the overall functionality of the system and to identify any
integration problems that arise when the components are combined. While big-bang integration
testing can be useful in some situations, it can also be a high-risk approach, as the complexity
of the system and the number of interactions between components can make it difficult to
identify and diagnose problems.
Disadvantages:
1. There will be quite a lot of delay because you would have to wait for all the modules
to be integrated.
2. High-risk critical modules are not isolated and tested on priority since all modules are
tested at once.
3. Not Good for long projects.
4. High risk of integration problems that are difficult to identify and diagnose.
5. This can result in long and complex debugging and troubleshooting efforts.
6. This can lead to system downtime and increased development costs.
7. May not provide enough visibility into the interactions and data exchange between
components.
8. This can result in a lack of confidence in the system’s stability and reliability.
9. This can lead to decreased efficiency and productivity.
10. This may result in a lack of confidence in the development team.
11. This can lead to system failure and decreased user satisfaction.
2. Bottom-Up Integration Testing – In bottom-up testing, each module at the lower levels is
tested with the higher-level modules until all modules have been tested. The primary purpose of this
integration testing is that each subsystem tests the interfaces among the various modules making up
the subsystem. This integration testing uses test drivers to drive and pass appropriate data to the
lower-level modules.
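A test driver can be sketched as follows. This is an illustrative sketch only: `apply_discount` and `driver` are hypothetical names, and the driver stands in for a higher-level module that has not yet been integrated.

```python
# Hypothetical lower-level module: a discount calculator that a higher-level
# billing module will eventually call. Names here are illustrative only.
def apply_discount(amount, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(amount * (1 - percent / 100), 2)

# Test driver: stands in for the not-yet-integrated higher-level module and
# feeds representative data to the lower-level unit, as bottom-up testing does.
def driver():
    results = []
    for amount, percent in [(100.0, 10), (250.0, 0), (80.0, 25)]:
        results.append(apply_discount(amount, percent))
    return results

print(driver())  # [90.0, 250.0, 60.0]
```

Once the higher-level module exists, the driver is discarded and the real caller takes its place.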
Advantages:
● The mixed approach is useful for very large projects having several sub-projects.
● This sandwich approach overcomes the shortcomings of the top-down and bottom-up
approaches.
● Parallel tests can be performed in the top and bottom layers.
Disadvantages:
● Mixed integration testing requires a very high cost, because one part follows a top-down
approach while another part follows a bottom-up approach.
● This integration testing cannot be used for smaller systems with huge
interdependence between the different modules.
Applications:
1. Identify the components: Identify the individual components of your application that
need to be integrated. This could include the frontend, backend, database, and any
third-party services.
2. Create a test plan: Develop a test plan that outlines the scenarios and test cases that
need to be executed to validate the integration points between the different
components. This could include testing data flow, communication protocols, and
error handling.
3. Set up test environment: Set up a test environment that mirrors the production
environment as closely as possible. This will help ensure that the results of your
integration tests are accurate and reliable.
4. Execute the tests: Execute the tests outlined in your test plan, starting with the most
critical and complex scenarios. Be sure to log any defects or issues that you
encounter during testing.
5. Analyze the results: Analyze the results of your integration tests to identify any
defects or issues that need to be addressed. This may involve working with
developers to fix bugs or make changes to the application architecture.
6. Repeat testing: Once defects have been fixed, repeat the integration testing process
to ensure that the changes have been successful and that the application still works
as expected.
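As a minimal sketch of steps 4–5, the following wires two hypothetical components (an in-memory repository and a service) together and checks data flow and error handling across their interface. All names are illustrative, not from any particular framework.

```python
# Hypothetical components: a repository and a service, exercised together
# through their interface, as an integration test would do.
class InMemoryRepository:
    def __init__(self):
        self._rows = {}

    def save(self, key, value):
        self._rows[key] = value

    def load(self, key):
        if key not in self._rows:
            raise KeyError(key)
        return self._rows[key]

class UserService:
    def __init__(self, repo):
        self.repo = repo

    def register(self, name):
        self.repo.save(name, {"name": name, "active": True})

    def is_active(self, name):
        try:
            return self.repo.load(name)["active"]
        except KeyError:
            return False  # error handling across the interface

def integration_test():
    service = UserService(InMemoryRepository())  # wire real components together
    service.register("alice")
    assert service.is_active("alice")            # data flowed through the interface
    assert not service.is_active("bob")          # missing-key error is handled
    return "pass"
```

A unit test would replace `InMemoryRepository` with a stub; the integration test deliberately uses the real collaborating component.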
System Testing
INTRODUCTION:
System testing is a type of software testing that evaluates the overall functionality and
performance of a complete and fully integrated software solution. It tests if the system meets the
specified requirements and if it is suitable for delivery to the end-users. This type of testing is
performed after the integration testing and before the acceptance testing.
System Testing is a type of software testing that is performed on a complete integrated system
to evaluate the compliance of the system with the corresponding requirements. In system
testing, integration testing passed components are taken as input. The goal of integration
testing is to detect any irregularity between the units that are integrated together. System testing
detects defects within both the integrated units and the whole system. The result of system
testing is the observed behavior of a component or a system when it is tested. System Testing
is carried out on the whole system in the context of either system requirement specifications or
functional requirement specifications or in the context of both. System testing tests the design
and behavior of the system and also the expectations of the customer. It is performed to test the
system beyond the bounds mentioned in the software requirements specification (SRS). System
Testing is basically performed by a testing team that is independent of the development team,
which helps to test the quality of the system impartially. It covers both functional and
non-functional testing. System Testing is a black-box testing technique. It is performed after
integration testing and before acceptance testing.
● Test Environment Setup: Create a testing environment for better-quality testing.
● Create Test Case: Generate test cases for the testing process.
● Create Test Data: Generate the data that is to be tested.
● Execute Test Case: After the test cases and the test data have been generated, the test cases
are executed.
● Defect Reporting: Defects in the system are detected and reported.
● Regression Testing: It is carried out to test the side effects of the testing process.
● Log Defects: The reported defects are logged and fixed in this step.
● Retest: If a test is not successful, it is performed again after the fix.
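The reporting/retest part of this cycle can be sketched as follows; the test cases, represented here as hypothetical name/check pairs, are re-run after each fix until all of them pass.

```python
# Illustrative sketch of the cycle above: run test cases, log failures as
# defects, and retest until the suite passes. All names are hypothetical.
def run_system_test_cycle(test_cases, fix_defect):
    defect_log = []
    pending = list(test_cases)
    while pending:
        failed = [tc for tc in pending if not tc["check"]()]
        for tc in failed:
            defect_log.append(tc["name"])   # Defect Reporting / Log Defects
            fix_defect(tc)                  # development team fixes the defect
        pending = failed                    # Retest only what failed
    return defect_log

# Hypothetical scenario: TC-2 fails until a one-shot fix is applied.
state = {"fixed": False}
cases = [{"name": "TC-1", "check": lambda: True},
         {"name": "TC-2", "check": lambda: state["fixed"]}]
log = run_system_test_cycle(cases, lambda tc: state.update(fixed=True))
print(log)  # ['TC-2']
```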
1. JMeter
2. Galen Framework
3. Selenium
1. HP Quality Center/ALM
2. IBM Rational Quality Manager
3. Microsoft Test Manager
4. Selenium
5. Appium
6. LoadRunner
7. Gatling
8. JMeter
9. Apache JServ
10. SoapUI
Note: The choice of tool depends on various factors like the technology used, the
size of the project, the budget, and the testing requirements.
● The testers do not require in-depth programming knowledge to carry out this testing.
● It tests the entire product or software, so errors or defects that cannot be identified
during unit testing and integration testing are easily detected.
● The testing environment is similar to that of the real-time production or business
environment.
● It checks the entire functionality of the system with different test scripts, and it also
covers the technical and business requirements of the clients.
● After this testing, the product will have covered almost all possible bugs or errors, so
the development team can confidently go ahead with acceptance testing.
● This testing is a more time-consuming process than other testing techniques, since it
checks the entire product or software.
● The cost of the testing will be high, since it covers the testing of the entire software.
● It needs a good debugging tool; otherwise the hidden errors will not be found.
Regression Testing
Regression Testing is the process of testing the modified parts of the code and the parts that
might get affected due to the modifications to ensure that no new errors have been introduced in
the software after the modifications have been made. Regression means the return of something; in the software field, it refers to the return of a bug.
The test case with the highest priority has the highest rank. For example, a test case with priority code 2 is
less important than a test case with priority code 1.
● Selenium
● WATIR (Web Application Testing In Ruby)
● QTP (Quick Test Professional)
● RFT (Rational Functional Tester)
● Winrunner
● Silktest
● Most of the test cases used in Regression Testing are selected from the existing
test suite, and their expected outputs are already known. Hence, they can be easily
automated with automation tools.
● It helps to maintain the quality of the source code.
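Because the expected outputs are already known, a regression suite can be reduced to a table of inputs and expected outputs that is re-run after every change. The function and data below are purely illustrative.

```python
# Hypothetical function under maintenance: formats a version string.
def version_string(major, minor, patch=None):
    return f"{major}.{minor}" + (f".{patch}" if patch is not None else "")

# Regression suite: each entry pairs known inputs with the known expected
# output, so the whole suite can be re-run automatically after every change.
REGRESSION_SUITE = [
    ((1, 0, None), "1.0"),
    ((1, 1, 2), "1.1.2"),
    ((2, 0, None), "2.0"),
]

def run_regression():
    failures = [(args, expected) for args, expected in REGRESSION_SUITE
                if version_string(*args) != expected]
    return failures  # an empty list means no regression was introduced
```

In practice the suite lives in a test runner (e.g. a unit-testing framework) and is triggered automatically on each code change.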
Debugging
Debugging is the process of identifying and resolving errors, or bugs, in a software system. It is
an important aspect of software engineering because bugs can cause a software system to
malfunction and can lead to poor performance or incorrect results. Debugging can be a
time-consuming and complex task, but it is essential for ensuring that a software system is
functioning correctly.
What is Debugging?
In the context of software engineering, debugging is the process of fixing a bug in the software.
When there’s a problem with software, programmers analyze the code to figure out why things
aren’t working correctly. They use different debugging tools to carefully go through the code,
step by step, find the issue, and make the necessary corrections.
The term “debugging” originated from an incident involving Grace Hopper in the 1940s when a
moth caused a malfunction in the Mark II computer at Harvard University. The term stuck and is
now commonly used to describe the process of finding and fixing errors in computer programs.
In simpler terms, debugging got its name from removing a moth that caused a computer
problem.
1. Code Inspection: This involves manually reviewing the source code of a software
system to identify potential bugs or errors.
2. Debugging Tools: There are various tools available for debugging such as
debuggers, trace tools, and profilers that can be used to identify and resolve bugs.
3. Unit Testing: This involves testing individual units or components of a software
system to identify bugs or errors.
4. Integration Testing: This involves testing the interactions between different
components of a software system to identify bugs or errors.
5. System Testing: This involves testing the entire software system to identify bugs or
errors.
6. Monitoring: This involves monitoring a software system for unusual behavior or
performance issues that can indicate the presence of bugs or errors.
7. Logging: This involves recording events and messages related to the software
system, which can be used to identify bugs or errors.
Process of Debugging
● Validation of corrections.
Debugging Approaches/Strategies
1. Brute Force: Study the system for a longer duration to understand the system. It
helps the debugger to construct different representations of systems to be debugged
depending on the need. A study of the system is also done actively to find recent
changes made to the software.
2. Backtracking: Backward analysis of the problem which involves tracing the program
backward from the location of the failure message to identify the region of faulty
code. A detailed study of the region is conducted to find the cause of defects.
3. Forward Analysis: Forward analysis of the program involves tracing the program forward using
breakpoints or print statements at different points in the program and studying the
results. The region where the wrong outputs are obtained is the region that needs to
be focused on to find the defect.
4. Using Past Experience: The software is debugged using prior experience of debugging
software with problems similar in nature. The success of this approach depends on the
expertise of the debugger.
https://play.google.com/store/apps/details?id=info.therithal.brainkart.annauniversitynotes&hl=en_IN
www.BrainKart.com
5. Cause Elimination: It introduces the concept of binary partitioning. Data related to the
error occurrence are organized to isolate potential causes.
6. Static analysis: Analyzing the code without executing it to identify potential bugs or
errors. This approach involves analyzing code syntax, data flow, and control flow.
7. Dynamic analysis: Executing the code and analyzing its behavior at runtime to
identify errors or bugs. This approach involves techniques like runtime debugging
and profiling.
8. Collaborative debugging: Involves multiple developers working together to debug a
system. This approach is helpful in situations where multiple modules or components
are involved, and the root cause of the error is not clear.
9. Logging and Tracing: Using logging and tracing tools to identify the sequence of events
leading up to the error. This approach involves collecting and analyzing logs and traces
generated by the system during its execution.
10. Automated Debugging: The use of automated tools and techniques to assist in the
debugging process. These tools can include static and dynamic analysis tools, as well
as tools that use machine learning and artificial intelligence to identify errors and suggest
fixes.
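Cause elimination by binary partitioning (approach 5) can be sketched as a binary search over an ordered range of candidate changes, which is also the idea behind tools such as git bisect. The predicate `is_bad` is hypothetical and stands for re-running the failing test at a given change number.

```python
# Cause elimination via binary partitioning: repeatedly halve the range of
# candidate changes until the first "bad" change is isolated.
def bisect_first_bad(first, last, is_bad):
    # Precondition: change `first` is good, change `last` is bad.
    while last - first > 1:
        mid = (first + last) // 2
        if is_bad(mid):
            last = mid      # defect introduced at or before mid
        else:
            first = mid     # defect introduced after mid
    return last

# Hypothetical example: the test starts failing from change 37 onward.
print(bisect_first_bad(0, 100, lambda n: n >= 37))  # 37
```

Each probe halves the suspect range, so isolating the cause among n changes takes only about log2(n) test runs.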
● Syntax error
● Logical error
● Runtime error
● Stack overflow
● Index Out of Bound Errors
● Infinite loops
● Concurrency Issues
● I/O errors
● Environment Dependencies
● Integration Errors
● Reference error
● Type error
Debugging Tools
A debugging tool is a computer program that is used to test and debug other programs. A lot of
public-domain software, like gdb and dbx, is available for debugging. These offer console-based
command-line interfaces. Examples of automated debugging tools include code-based tracers,
profilers, interpreters, etc. Some of the widely used debuggers are:
● Radare2
● WinDbg
● Valgrind
Debugging requires a lot of knowledge, skills, and expertise. It can be supported by some
automated tools available but is more of a manual process as every bug is different and
requires a different technique, unlike a pre-defined testing mechanism.
Advantages of Debugging
1. Improved system quality: By identifying and resolving bugs, a software system can
be made more reliable and efficient, resulting in improved overall quality.
2. Reduced system downtime: By identifying and resolving bugs, a software system
can be made more stable and less likely to experience downtime, which can result in
improved availability for users.
3. Increased user satisfaction: By identifying and resolving bugs, a software system can
be made more user-friendly and better able to meet the needs of users, which can
result in increased satisfaction.
4. Reduced development costs: Identifying and resolving bugs early in the development
process, can save time and resources that would otherwise be spent on fixing bugs
later in the development process or after the system has been deployed.
5. Increased security: By identifying and resolving bugs that could be exploited by
attackers, a software system can be made more secure, reducing the risk of security
breaches.
6. Facilitates change: With debugging, it becomes easy to make changes to the
software as it becomes easy to identify and fix bugs that would have been caused by
the changes.
7. Better understanding of the system: Debugging can help developers gain a better
understanding of how a software system works, and how different components of the
system interact with one another.
8. Facilitates testing: By identifying and resolving bugs, it makes it easier to test the
software and ensure that it meets the requirements and specifications.
Disadvantages of Debugging
While debugging is an important aspect of software engineering, there are also some
disadvantages to consider:
3. Can be difficult to reproduce: Some bugs may be difficult to reproduce, which can
make it challenging to identify and resolve them.
4. Can be difficult to diagnose: Some bugs may be caused by interactions between
different components of a software system, which can make it challenging to identify
the root cause of the problem.
5. Can be difficult to fix: Some bugs may be caused by fundamental design flaws or
architecture issues, which can be difficult or impossible to fix without significant
changes to the software system.
6. Limited insight: In some cases, debugging tools can only provide limited insight into
the problem and may not provide enough information to identify the root cause of the
problem.
7. Can be expensive: Debugging can be an expensive process, especially if it requires
additional resources such as specialized debugging tools or additional development
time
The goal of developing software that is reliable, safe, and effective is crucial in the dynamic
and ever-changing field of software development. Program analysis tools are a developer's
greatest support on this journey, giving them invaluable knowledge about the inner workings of
their code. In this section, we'll learn about their importance and classification.
A Static Program Analysis Tool is a program analysis tool that evaluates and computes
various characteristics of a software product without executing it. Normally, static program
analysis tools analyze some structural representation of a program to reach a certain analytical
conclusion; that is, certain structural properties of the program are analyzed.
Code walkthroughs and code inspections are also considered static analysis methods, but the term
static program analysis tool is used to designate automated analysis tools. Hence, a compiler can
be considered a static program analysis tool.
A Dynamic Program Analysis Tool is a type of program analysis tool that requires the program
to be executed and its actual behavior to be observed. A dynamic program analyzer basically
instruments the code: it adds additional statements to the source code to collect traces of
program execution. When the code is executed, this allows us to observe the behavior of the
software for different test cases. Once the software is tested and its behavior is observed, the
dynamic program analysis tool performs a post-execution analysis and produces reports that
describe the structural coverage achieved by the complete testing process for the program.
For example, the post-execution dynamic analysis report may provide data on the extent of
statement, branch, and path coverage achieved. The results of dynamic program analysis tools are
often presented in the form of a histogram or a pie chart describing the structural coverage
obtained for the different modules of the program. The output of a dynamic program analysis tool
can be stored and printed easily and provides evidence of the extent of testing performed as
white-box testing. If the testing result is not satisfactory, more test cases are designed and
added to the test scenario. Dynamic analysis also helps in the elimination of redundant test
cases.
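A minimal illustration of dynamic analysis: instead of inserting extra statements into the source, the sketch below uses Python's sys.settrace hook to record which lines of a function actually executed, i.e. its statement coverage. The function `classify` is a made-up example; real coverage tools work on the same principle at much larger scale.

```python
import sys

# Function under analysis (hypothetical example with two branches).
def classify(n):
    if n < 0:
        return "negative"
    return "non-negative"

# Record which line offsets of `func` execute during one call.
def trace_lines(func, *args):
    executed = set()
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            # store line offsets relative to the function's first line
            executed.add(frame.f_lineno - func.__code__.co_firstlineno)
        return tracer
    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return executed

# Running only classify(5) leaves the "negative" branch uncovered,
# which is exactly the kind of gap a coverage report exposes.
covered = trace_lines(classify, 5)
```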
Symbolic Execution
Symbolic execution is a software testing technique that is useful to aid the generation of test
data and in proving the program quality.
The execution requires a selection of paths that are exercised by a set of data values. A
program, which is executed using actual data, results in the output of a series of values.
In symbolic execution, the data is replaced by symbolic values, and the result is a set of
expressions, one expression per output variable.
The common approach for symbolic execution is to perform an analysis of the program,
resulting in the creation of a flow graph.
The flowgraph identifies the decision points and the assignments associated with each
flow. By traversing the flow graph from an entry point, a list of assignment statements
and branch predicates is produced.
Symbolic execution cannot proceed if the number of iterations in the loop is not known.
The second issue is the invocation of any out-of-line code or module calls.
Symbolic execution cannot be used with arrays.
Symbolic execution cannot identify infeasible paths.
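To make the idea concrete, here is a toy illustration of the output of symbolic execution for a two-path function; the paths are hard-coded rather than derived by a real engine, which would traverse the flow graph and use a constraint solver to build these pairs.

```python
# Toy symbolic execution of the function below: instead of concrete numbers,
# the input is the symbol x, and each branch adds a predicate to the path
# condition.
#
#   def f(x):
#       if x > 10:
#           return x - 10
#       return x + 1
#
# One (path condition, symbolic result) pair per execution path:
def symbolic_paths():
    return [
        (["x > 10"], "x - 10"),      # true branch of the decision point
        (["not (x > 10)"], "x + 1"), # false branch
    ]

for condition, result in symbolic_paths():
    print(condition, "=>", result)
```

Test data generation then amounts to solving each path condition, e.g. any x > 10 exercises the first path.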
Applications of symbolic execution include:
● Partition analysis
● Symbolic debugging
The procedure is called “symbolic” because it uses symbolic encoding techniques, such as
Binary Decision Diagrams (BDDs) and Satisfiability Modulo Theories (SMT), to efficiently handle
large and complex state spaces. A symbolic model checker works by representing the system’s state
space as a set of logical formulas, and then using automated theorem-proving tools to reason
about these formulas.
● Completeness: Symbolic model checking can give complete verification, meaning that it
can check all potential states of a system and verify that the system satisfies its
specification.
● Scalability: Symbolic model checking scales well with system size, making it
suitable for verifying large and complex systems. It can handle systems with many
interacting parts, complex data structures, and concurrency.
● Support for multiple formalisms: Symbolic model checking supports various formalisms for
system specification, including temporal logics, automata, and other formal languages. This
flexibility makes it possible to specify and verify a wide range of systems.
● Widely used: Symbolic model checking is a well-established technique that has
been widely used in academia and industry to verify safety-critical systems, including
aircraft control systems, railway systems, and medical devices.
Overall, symbolic model checking offers a powerful and effective approach to formal
verification, providing a high degree of automation and scalability while supporting
multiple formalisms for system specification.
Symbolic model checking involves several techniques that are used to symbolically represent
and manipulate the system under analysis. Here are some of the key techniques used in
symbolic model checking:
model is then transformed into a symbolic representation using techniques such as
BDDs or other decision diagrams.
● State-space exploration: Symbolic model checking explores the state space of
the system under analysis to verify whether it satisfies a given specification.
State-space exploration techniques involve algorithms that traverse the state
space of the system and check whether the specification is satisfied.
● Decision procedures: Symbolic model checking frequently employs decision procedures
for satisfiability modulo theories (SMT) or satisfiability (SAT) problems to check
whether a set of logical formulas is satisfiable. This helps to identify
counterexamples and verify the correctness of the system.
● Property specification: The specification that the system has to satisfy is typically
expressed using a temporal logic, such as Linear Temporal Logic (LTL) or
Computation Tree Logic (CTL). These logics allow the expression of temporal
relationships between system behaviors.
● Abstraction: Symbolic model checking may use abstraction techniques to reduce the
complexity of the system under analysis, for example by eliminating irrelevant
details or aggregating similar states. This can help make the verification
process more efficient.
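For contrast with the symbolic approach, the sketch below performs explicit-state model checking: it enumerates every reachable state of a tiny transition system with a breadth-first search and checks a safety property. A symbolic checker would encode the same exploration as BDD/SMT formulas instead of enumerating states one by one. All names and the example system are illustrative.

```python
from collections import deque

# Explicit-state exploration: visit every reachable state and check a
# safety property; return a counterexample state if one is found.
def check_safety(initial, successors, is_safe):
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if not is_safe(state):
            return state                 # counterexample state
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None                          # property holds in all reachable states

# Example system: a counter that increments modulo 4, so it never exceeds 3.
result = check_safety(0, lambda s: [(s + 1) % 4], lambda s: s <= 3)
```

Here `result` is None, meaning the property "the counter never exceeds 3" holds in every reachable state; a violating system would instead yield the first unsafe state reached.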
Software Project Management (SPM) is a proper way of planning and leading software
projects. It is a part of project management in which software projects are planned,
implemented, monitored, and controlled. This section focuses on discussing Software
Project Management (SPM).
Need for Software Project Management
Software is a non-physical product. Software development is a new stream in business,
and there is very little experience in building software products. Most software
products are made to fit clients’ requirements. Most importantly, the underlying
technology changes and advances so frequently and rapidly that the experience of one
product may not apply to the other one. Such business and environmental
constraints increase risk in software development; hence it is essential to manage
software projects efficiently. It is necessary for an organization to deliver quality
products, keep the cost within the client’s budget constraint, and deliver the project as
per schedule. Hence, software project management is necessary to incorporate
user requirements along with budget and time constraints.
Types of Management in SPM
1. Conflict Management
Conflict management is the process to restrict the negative features of conflict while
increasing the positive features of conflict. The goal of conflict management is to
improve learning and group results including efficacy or performance in an
organizational setting. Properly managed conflict can enhance group results.
2. Risk Management
3. Requirement Management
4. Change Management
6. Release Management
Release Management is the task of planning, controlling, and scheduling the built-in
deploying releases. Release management ensures that the organization delivers new and
enhanced services required by the customer while protecting the integrity of existing
services.
Aspects of Software Project Management
The focus areas Software Project Management can tackle, and its broad upsides, are:
1. Planning
The software project manager lays out the complete project’s blueprint. The project plan
will outline the scope, resources, timelines, techniques, strategy, communication,
testing, and maintenance steps. SPM can aid greatly here.
2. Leading
A software project manager brings together and leads a team of engineers, strategists,
programmers, designers, and data scientists. Leading a team necessitates exceptional
communication, interpersonal, and leadership abilities. One can only hope to do this
effectively if one sticks with the core SPM principles.
3. Execution
SPM comes to the rescue here also, as the person in charge of software projects (if well
versed with SPM/Agile methodologies) will ensure that each stage of the project is
completed successfully. Measuring progress, monitoring how teams function,
and generating status reports are all part of this process.
4. Time Management
5. Budget
Software project managers, like conventional project managers, are responsible for
generating a project budget and adhering to it as closely as feasible, regulating
spending, and reassigning funds as needed. SPM teaches us how to effectively manage
the monetary aspect of projects to avoid running into a financial crunch later on in the
project.
6. Maintenance
Software project management emphasizes continuous product testing to find and repair
defects early, tailor the end product to the needs of the client, and keep the project on
track. The software project manager ensures that the product is thoroughly tested,
analyzed, and adjusted as needed. Another point in favor of SPM.
Consider the cost of the various kinds of project management tools, software, and services
required if one engages in Software Project Management strategies. These initiatives can be
expensive and time-consuming to put in place. Because your team will be using them as well, they
may require training. One may need to recruit subject-matter experts or specialists to assist
with a project, depending on the circumstances. Stakeholders will frequently press for the
inclusion of features that were not originally envisioned. All of these factors can quickly drive
up a project’s cost.
3. Overhead in Communication
New recruits enter your organization when software project management personnel are hired. This
creates a steady flow of communication that may or may not match the company’s culture. As a
result, it is advised that you keep your crew as small as feasible; the communication overhead
tends to skyrocket when a team becomes large enough. When a large team is needed for a project,
it is critical to identify software project managers who can communicate effectively with a
variety of people.
4. Lack of Originality
Software project managers can sometimes leave little or no space for creativity. Team leaders either place an excessive amount of emphasis on management processes or impose hard deadlines on their employees, requiring them to develop and operate code within stringent guidelines. This can stifle innovative thinking that could be beneficial to the project. When it comes to software project management, knowing when to encourage creativity and when to stick to the project plan is crucial. Without software project management personnel, an organization can perhaps build and ship code more quickly; however, employing a trained specialist to handle these areas can open up new doors and help the project in the long run.
1. Suppose that after some changes, the version of a configuration object changes from 1.0 to 1.1. Minor corrections and changes result in versions 1.1.1 and 1.1.2, followed by a major update, version 1.2. The development of object 1.0 continues through 1.3 and 1.4, but finally a noteworthy change to the object results in a new evolutionary path, version 2.0. Both versions are currently supported.
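The version numbering above can be sketched in code. The following Python snippet (illustrative only; the version strings follow the example, and the helper names are invented) represents versions as tuples so that minor corrections and major updates order correctly:

```python
# Minimal sketch: representing configuration-object versions as tuples so
# that minor corrections (1.1.1, 1.1.2) and major updates (1.2, 2.0) can
# be ordered and compared. Names here are illustrative, not from any tool.

def parse_version(v: str) -> tuple:
    """Turn a dotted version string like '1.1.2' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

history = ["1.0", "1.1", "1.1.1", "1.1.2", "1.2", "1.3", "1.4", "2.0"]
parsed = [parse_version(v) for v in history]

# Tuple comparison orders versions correctly: 1.1.2 < 1.2 < 2.0
assert parse_version("1.1.2") < parse_version("1.2") < parse_version("2.0")

# The latest version on each evolutionary path (major number) can be
# found by grouping the versions by their first component:
latest_by_major = {}
for v in parsed:
    major = v[0]
    if major not in latest_by_major or v > latest_by_major[major]:
        latest_by_major[major] = v

print(latest_by_major)  # {1: (1, 4), 2: (2, 0)} -- both paths supported
```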
2. Change control – controlling changes to configuration items (CIs). The change control process is explained in the figure below:
A change request (CR) is submitted and evaluated to assess technical merit, potential side
effects, the overall impact on other configuration objects and system functions, and the
projected cost of the change. The results of the evaluation are presented as a change report,
which is used by a change control board (CCB) —a person or group who makes a final decision
on the status and priority of the change. An engineering change request (ECR) is generated for each approved change; if the change is rejected, the CCB notifies the developer with the reason for the rejection. The ECR describes the change to be made, the constraints that must be
respected, and the criteria for review and audit. The object to be changed is “checked out” of the
project database, the change is made, and then the object is tested again. The object is then
“checked in” to the database and appropriate version control mechanisms are used to create
the next version of the software.
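The submit-evaluate-decide flow above can be sketched as a small state machine. All names here (`ChangeRequest`, `ccb_decide`, and the report fields) are hypothetical, not from any real SCM tool:

```python
# Hypothetical sketch of the change-control flow described above: a change
# request is submitted, evaluated into a change report, and then either
# approved by the CCB (generating an ECR) or rejected with a reason.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ChangeRequest:
    description: str
    status: str = "SUBMITTED"
    ecr: Optional[str] = None
    rejection_reason: Optional[str] = None

def evaluate(cr: ChangeRequest) -> dict:
    """Produce a change report: technical merit, side effects, cost."""
    cr.status = "EVALUATED"
    return {"request": cr.description, "side_effects": [], "cost": "low"}

def ccb_decide(cr: ChangeRequest, report: dict, approve: bool, reason: str = "") -> None:
    """The change control board approves (issuing an ECR) or rejects with a reason."""
    if approve:
        cr.status = "APPROVED"
        cr.ecr = f"ECR for: {cr.description}"
    else:
        cr.status = "REJECTED"
        cr.rejection_reason = reason

cr = ChangeRequest("Fix login timeout")
report = evaluate(cr)
ccb_decide(cr, report, approve=True)
print(cr.status, "-", cr.ecr)  # APPROVED - ECR for: Fix login timeout
```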
1. Configuration auditing – A software configuration audit complements the formal
technical review of the process and product. It focuses on the technical correctness
of the configuration object that has been modified. The audit confirms the
completeness, correctness, and consistency of items in the SCM system and tracks
action items from the audit to closure.
2. Reporting – Providing accurate status and current configuration data to developers,
testers, end users, customers, and stakeholders through admin guides, user guides, FAQs, release notes, memos, installation guides, configuration guides, etc.
Whenever software is built, there is always scope for improvement, and those improvements bring changes into the picture. Changes may be required to modify or update an existing solution or to create a new solution for a problem. Requirements keep changing, so we need to keep upgrading our systems based on current requirements and needs to meet the desired outputs. Changes should be analyzed before they are made to the existing system, recorded before they are implemented, reported so that the before and after details are available, and controlled in a manner that improves quality and reduces error. This is where the need for Software Configuration Management comes in.
Software Configuration Management (SCM) is a set of activities that controls change by identifying the items subject to change, establishing relationships between those items, defining mechanisms for managing different versions, controlling the changes being implemented in the current system, and auditing and reporting on the changes made. It is essential to control changes, because changes that are not checked properly may end up undermining well-run software. In this way, SCM is a fundamental part of all project management activities.
Processes involved in SCM – Configuration management provides a disciplined
environment for smooth control of work products.
Short note on Project Scheduling
A project schedule is your project's timetable: it consists of sequenced activities and milestones that need to be delivered within a given period of time.
A project schedule is a mechanism used to communicate which tasks need to be performed, which organizational resources will be allocated to those tasks, and in what time frame the work must be done. Effective project scheduling leads to project success, reduced cost, and increased customer satisfaction. Scheduling in project management means listing the activities, deliverables, and milestones to be delivered within a project. It contains more detail than an average weekly planner. The most common and important form of project schedule is the Gantt chart.
Process:
The manager needs to estimate the time and resources of the project while scheduling it. All activities in the project must be arranged in a coherent sequence, that is, in a logical and well-organized manner that is easy to understand. Initial estimates of the project may be made optimistically, assuming that everything favorable will happen and that no threats or problems will arise.
The total work is divided into small activities or tasks during project scheduling. The project manager then decides the time required to complete each activity or task. Some activities can be performed in parallel for efficiency. The project manager should be aware that no stage of the project is problem-free.
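The earliest-start rule implied above (a task starts once all its prerequisites finish, so independent tasks run in parallel) can be sketched in Python. The task names and durations here are invented for illustration:

```python
# A minimal sketch of turning task estimates and dependencies into a
# schedule: each task starts as soon as all its prerequisites finish,
# so independent tasks run in parallel. Illustrative data only.

tasks = {            # task: (duration in days, list of prerequisite tasks)
    "design":  (5, []),
    "code_ui": (8, ["design"]),
    "code_db": (6, ["design"]),   # runs in parallel with code_ui
    "test":    (4, ["code_ui", "code_db"]),
}

def schedule(tasks):
    """Compute (start, finish) for each task by the earliest-start rule."""
    times = {}
    def finish(name):
        if name not in times:
            duration, deps = tasks[name]
            # A task starts when its latest prerequisite finishes.
            start = max((finish(d) for d in deps), default=0)
            times[name] = (start, start + duration)
        return times[name][1]
    for t in tasks:
        finish(t)
    return times

plan = schedule(tasks)
print(plan["test"])  # (13, 17): starts after the longer parallel task
```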
Problems that arise during the project development stage:
● People may leave or be absent during a particular stage of development.
● Hardware may fail while the project is in progress.
● A required software resource may not be available at present, etc.
The project schedule is represented as a set of charts in which the work-breakdown structure and the dependencies among activities are shown. To complete the project within the given schedule, the required resources must be available when they are needed. Therefore, resource estimation should be done before starting development.
Resources required for Development of Project :
● Human effort
● Sufficient disk space on server
● Specialized hardware
● Software technology
● Travel allowance required by project staff, etc.
Advantages of Project Scheduling :
There are several advantages provided by project scheduling in project management:
● It ensures that everyone remains on the same page regarding task completion, dependencies, and deadlines.
● It helps in identifying issues and concerns early, such as a lack or unavailability of resources.
● It also helps in identifying relationships and monitoring progress.
● It provides effective budget management and risk mitigation.
What is DevOps?
DevOps is a combination of two words: software Development and Operations. It allows a single team to handle the entire application lifecycle, from development to testing, deployment, and operations. DevOps helps to reduce the disconnection between software developers, quality assurance (QA) engineers, and system administrators.
DevOps promotes collaboration between Development and Operations team to deploy code to
production faster in an automated & repeatable way.
DevOps helps to increase an organization's speed in delivering applications and services. It also allows
organizations to serve their customers better and compete more strongly in the market.
DevOps can also be defined as a sequence of development and IT operations with better
communication and collaboration.
DevOps has become one of the most valuable business disciplines for enterprises or
organizations. With the help of DevOps, quality, and speed of the application delivery has
improved to a great extent.
DevOps is nothing but a practice or methodology of making "Developers" and "Operations"
folks work together. DevOps represents a change in the IT culture with a complete focus on
rapid IT service delivery through the adoption of agile practices in the context of a system-
oriented approach.
DevOps is all about the integration of the operations and development processes. Organizations that have adopted DevOps have reported a 22% improvement in software quality, a 17% improvement in application deployment frequency, a 22% increase in customer satisfaction, and a 19% increase in revenue as a result of successful DevOps implementation.
Why DevOps?
Before going further, we need to understand why we need DevOps rather than the earlier ways of working.
○ The operations and development teams worked in complete isolation.
○ After the design-build phase, testing and deployment were performed separately. That is why they consumed more time than the actual build cycles.
○ Without DevOps, team members spend a large amount of time on designing, testing, and deploying instead of building the project.
○ Manual code deployment leads to human errors in production.
○ The coding and operations teams have separate timelines and are not in sync, causing further delays.
DevOps History
○ In 2009, the first conference, named DevOpsdays, was held in Ghent, Belgium. Belgian consultant Patrick Debois founded the conference.
○ In 2012, the state of DevOps report was launched and conceived by Alanna Brown at
Puppet.
○ In 2014, the annual State of DevOps report was published by Nicole Forsgren, Jez Humble, Gene Kim, and others. They found that DevOps adoption was accelerating in 2014 as well.
○ In 2015, Nicole Forsgren, Gene Kim, and Jez Humble founded DORA (DevOps Research and Assessment).
○ In 2017, Nicole Forsgren, Gene Kim, and Jez Humble published "Accelerate: Building
and Scaling High Performing Technology Organizations".
1) Automation
Automation can reduce time consumption, especially during the testing and deployment phases. Productivity increases and releases are made quicker through automation, which also leads to catching bugs quickly so that they can be fixed easily. For continuous delivery, every code change is validated through automated tests, cloud-based services, and builds. This promotes production releases using automated deploys.
2) Collaboration
The Development and Operations teams collaborate as a single DevOps team, which improves the cultural model as the teams become more productive, strengthening accountability and ownership. The teams share their responsibilities and work closely in sync, which in turn makes deployment to production faster.
3) Integration
Applications need to be integrated with other components in the environment. The integration phase is where the existing code is combined with new functionality and then tested. Continuous integration and testing enable continuous development. The frequency of releases and micro-services leads to significant operational challenges. To overcome such problems, continuous integration and delivery are implemented to deliver in a quicker, safer, and more reliable manner.
4) Configuration management
It ensures that the application interacts only with the resources concerned with the environment in which it runs. Configuration files are written so that the configuration external to the application is separated from the source code. The configuration file can be written during deployment, or it can be loaded at run time, depending on the environment in which the application is running.
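A minimal sketch of such externalized configuration, with the environment selected at run time via an environment variable (the variable name `APP_ENV` and the settings shown are assumptions for illustration):

```python
# Sketch of externalized configuration as described above: the
# application reads its settings from the environment at run time
# instead of hard-coding them in the source. Names are illustrative.

import os

def load_config() -> dict:
    """Pick configuration based on the APP_ENV environment variable."""
    env = os.environ.get("APP_ENV", "development")
    defaults = {
        "development": {"db_url": "localhost:5432", "debug": True},
        "production":  {"db_url": "db.internal:5432", "debug": False},
    }
    # Fall back to development settings for unknown environments.
    return {"env": env, **defaults.get(env, defaults["development"])}

config = load_config()
print(config["env"], config["debug"])
```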
Here are some advantages and disadvantages that DevOps can have for business, such as:
Advantages
○ DevOps means collective responsibility, which leads to better team engagement and
productivity.
Disadvantages
DevOps Architecture
Development and operations both play essential roles in delivering applications. Development comprises analyzing the requirements and designing, developing, and testing the software components or frameworks.
The operations side consists of the administrative processes, services, and support for the software. When development and operations are combined collaboratively, the DevOps architecture is the solution that fixes the gap between the development and operations teams, so delivery can be faster.
DevOps architecture is used for applications hosted on cloud platforms and for large distributed applications. Agile development is used in the DevOps architecture so that integration and delivery can be continuous. When the development and operations teams work separately from each other, designing, testing, and deploying become time-consuming, and if the teams are not in sync, delivery may be delayed. DevOps enables the teams to overcome their shortcomings and increases productivity.
Below are the various components that are used in the DevOps architecture:
1) Build
Without DevOps, the cost of resource consumption was evaluated based on pre-defined individual usage with fixed hardware allocation. With DevOps, the use of the cloud and the sharing of resources come into the picture, and the build is driven by the user's need, with a mechanism to control the usage of resources or capacity.
2) Code
Good practices built around tools such as Git ensure that code is written for the business, help track changes, provide notification about the reason for a difference between the actual and the expected output, and, if necessary, allow reverting to the original code. The code can be appropriately arranged in files, folders, etc., and can be reused.
3) Test
The application will be ready for production after testing. Manual testing consumes more time in testing and in moving the code to production. Testing can be automated, which decreases testing time so that the time to deploy the code to production is reduced, since automating the running of the scripts removes many manual steps.
4) Plan
DevOps uses the Agile methodology to plan development. With the operations and development teams in sync, it helps in organizing the work and planning accordingly to increase productivity.
5) Monitor
Continuous monitoring is used to identify any risk of failure. It also helps in tracking the system accurately so that the health of the application can be checked. Monitoring becomes easier when log data is monitored through third-party tools such as Splunk.
6) Deploy
Many systems can support a scheduler for automated deployment. The cloud management platform enables users to capture accurate insights and to view the optimization scenario and analytics on trends through deployment dashboards.
7) Operate
DevOps changes the traditional approach of developing and testing separately. The teams operate in a collaborative way, with both teams actively participating throughout the service lifecycle. The operations team interacts with developers, and together they come up with a monitoring plan that serves the IT and business requirements.
8) Release
Deployment to an environment can be done by automation, but deployment to the production environment is done by manual triggering. Many processes involved in release management deliberately keep deployment to the production environment manual in order to lessen the impact on customers.
DevOps Lifecycle
DevOps defines an agile relationship between operations and development. It is a process practiced by the development team and operational engineers together, from the beginning to the final stage of the product.
Learning DevOps is not complete without understanding the DevOps lifecycle phases. The
DevOps lifecycle includes seven phases as given below:
1) Continuous Development
This phase involves the planning and coding of the software. The vision of the project is decided during the planning phase, and the developers then begin developing the code for the application. No DevOps tools are required for planning, but there are several tools for maintaining the code.
2) Continuous Integration
This stage is the heart of the entire DevOps lifecycle. It is a software development practice in which developers are required to commit changes to the source code more frequently, perhaps on a daily or weekly basis. Every commit is then built, which allows early detection of problems if they are present. Building code involves not only compilation but also unit testing, integration testing, code review, and packaging.
The code supporting new functionality is continuously integrated with the existing code.
Therefore, there is continuous development of software. The updated code needs to be
integrated continuously and smoothly with the systems to reflect changes to the end-users.
Jenkins is a popular tool used in this phase. Whenever there is a change in the Git repository, Jenkins fetches the updated code and prepares a build of that code as an executable file in the form of a WAR or JAR. This build is then forwarded to the test server or the production server.
3) Continuous Testing
In this phase, the developed software is continuously tested for bugs. For constant testing, automation testing tools such as TestNG, JUnit, and Selenium are used. These tools allow QAs to test multiple code-bases thoroughly in parallel to ensure that there are no flaws in the functionality. In this phase, Docker containers can be used for simulating the test environment.
Selenium does the automation testing, and TestNG generates the reports. This entire testing phase can be automated with the help of a continuous integration tool called Jenkins.
Automation testing saves a lot of time and effort in executing the tests compared with doing so manually. Apart from that, report generation is a big plus: the task of evaluating the test cases that failed in a test suite gets simpler. We can also schedule the execution of the test cases at predefined times. After testing, the code is continuously integrated with the existing code.
4) Continuous Monitoring
Monitoring is a phase that involves all the operational factors of the entire DevOps process,
where important information about the use of the software is recorded and carefully processed
to find out trends and identify problem areas. Usually, the monitoring is integrated within the
operational capabilities of the software application.
Monitoring may take the form of documentation files, or it may produce large-scale data about the application parameters while the application is in continuous use. System errors such as an unreachable server or low memory are resolved in this phase. Monitoring maintains the security and availability of the service.
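The kind of log processing described here can be sketched as follows. The log format and messages are invented for illustration, and a real deployment would use a monitoring tool such as Splunk rather than a hand-written script:

```python
# Minimal sketch of continuous monitoring: scan application log lines
# for error conditions (e.g. low memory, unreachable server) and count
# them by type -- the kind of processing a tool like Splunk automates.

from collections import Counter

log_lines = [
    "2024-01-01 10:00:01 INFO request served",
    "2024-01-01 10:00:02 ERROR server not reachable",
    "2024-01-01 10:00:03 ERROR low memory",
    "2024-01-01 10:00:04 INFO request served",
    "2024-01-01 10:00:05 ERROR low memory",
]

def summarize_errors(lines):
    """Count occurrences of each distinct ERROR message."""
    errors = Counter()
    for line in lines:
        if " ERROR " in line:
            errors[line.split(" ERROR ", 1)[1]] += 1
    return errors

print(summarize_errors(log_lines))
# Counter({'low memory': 2, 'server not reachable': 1})
```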
5) Continuous Feedback
Application development is consistently improved by analyzing the results from the operation of the software. This is carried out by placing the critical phase of constant feedback between the operations and the development of the next version of the current software application.
Continuity is the essential factor in DevOps, as it removes the unnecessary steps otherwise required to take a software application from development, through using it to find out its issues, to producing a better version. Those extra steps kill the efficiency that may be possible with the app and reduce the number of interested customers.
6) Continuous Deployment
In this phase, the code is deployed to the production servers. It is also essential to ensure that the code is used correctly on all the servers.
The new code is deployed continuously, and configuration management tools play an essential role in executing tasks frequently and quickly. Some popular tools used in this phase are Chef, Puppet, Ansible, and SaltStack.
Containerization tools also play an essential role in the deployment phase. Vagrant and Docker are popular tools used for this purpose. These tools help to produce consistency across the development, staging, testing, and production environments. They also help in scaling instances up and down smoothly.
Containerization tools help to maintain consistency across the environments where the application is developed, tested, and deployed. Because they package and replicate the same dependencies and packages used in the testing, development, and staging environments, there is little chance of errors or failures in the production environment. This makes the application easy to run on different computers.
7) Continuous Operations
All DevOps operations are based on continuity, with complete automation of the release process, allowing the organization to continually accelerate its overall time to market.
It is clear from this discussion that continuity is the critical factor in DevOps: it removes steps that distract the development, make it take longer to detect issues, and delay producing a better version of the product by several months. With DevOps, we can make any software product more efficient and increase the overall count of customers interested in the product.
DevOps Pipeline
A pipeline, in a software engineering team, is a set of automated processes that allows DevOps professionals and developers to reliably and efficiently compile, build, and deploy their code to their production compute platforms.
The most common components of a pipeline in DevOps are build automation or continuous
integration, test automation, and deployment automation.
A pipeline consists of a set of tools which are classified into the following categories such as:
○ Source control
○ Build tools
○ Containerization
○ Configuration management
○ Monitoring
Continuous integration (CI) is a practice in which developers check their code into a version-controlled repository several times per day. Automated build pipelines are triggered by these check-ins, which makes error detection fast and easy.
Some significant benefits of CI are:
○ Small changes are easy to integrate into large codebases.
○ It is easier for other team members to see what you have been working on.
○ Fewer integration issues allowing rapid code delivery.
○ Bugs are identified early, making them easier to fix, resulting in less debugging work.
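A commit-triggered pipeline of this kind can be sketched as a fixed sequence of build steps that stops at the first failure, so problems surface early. The step names and results here are illustrative only:

```python
# Sketch of a commit-triggered CI pipeline: each commit runs a fixed
# sequence of build steps (compile, unit tests, packaging), and the
# first failing step marks the build broken. Illustrative names only.

def run_pipeline(commit_id, steps):
    """Run each (name, func) step in order; stop at the first failure."""
    for name, step in steps:
        if not step():
            return {"commit": commit_id, "status": "FAILED", "step": name}
    return {"commit": commit_id, "status": "PASSED", "step": None}

steps = [
    ("compile",    lambda: True),
    ("unit_tests", lambda: True),
    ("package",    lambda: True),
]

result = run_pipeline("abc123", steps)
print(result)  # {'commit': 'abc123', 'status': 'PASSED', 'step': None}
```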
Continuous delivery (CD) is the process that allows operations engineers and developers to deliver bug fixes, features, and configuration changes into production reliably, quickly, and sustainably. Continuous delivery offers the benefit of code delivery pipelines that can be carried out on demand.
Some significant benefits of the CD are:
○ Faster bug fixes and features delivery.
○ CD allows the team to work on features and bug fixes in small batches, which means user feedback is received much quicker. It reduces the overall time and cost of the project.
DevOps Tools
Here are some of the most popular DevOps tools, with brief explanations:
1) Puppet
Puppet is one of the most widely used DevOps tools. It allows the delivery and release of technology changes quickly and frequently. It has features for versioning, automated testing, and continuous delivery. It enables managing the entire infrastructure as code without expanding the size of the team.
Features
○ Real-time context-aware reporting.
○ Model and manage the entire environment.
○ Define and continually enforce the desired state of the infrastructure.
○ Desired state conflict detection and remediation.
2) Ansible
3) Docker
Docker is a high-end DevOps tool that allows building, shipping, and running distributed applications on multiple systems. It also helps to assemble apps quickly from their components, and it is typically suitable for container management.
Features
○ It makes configuring systems easier and faster.
○ It increases productivity.
○ It provides containers that are used to run the application in an isolated environment.
○ It routes the incoming request for published ports on available nodes to an active
container. This feature enables the connection even if there is no task running on the
node.
○ It allows saving secrets into the swarm itself.
4) Nagios
Nagios is one of the more useful tools for DevOps. It can determine errors and help rectify them with the aid of network, infrastructure, server, and log monitoring systems.
Features
○ It provides complete monitoring of desktop and server operating systems.
○ The network analyzer helps to identify bottlenecks and optimize bandwidth utilization.
○ It helps to monitor components such as services, applications, the OS, and network protocols.
○ It also provides complete monitoring of Java Management Extensions.
5) CHEF
Chef is a useful tool for achieving scale, speed, and consistency. Chef is a cloud-based, open-source technology. It uses Ruby encoding to develop essential building blocks such as recipes and cookbooks. Chef is used in infrastructure automation and helps in reducing manual and repetitive tasks of infrastructure management.
Chef has its own conventions for the different building blocks that are required to manage and automate infrastructure.
Features
○ It maintains high availability.
○ It can manage multiple cloud environments.
○ It uses the popular Ruby language to create a domain-specific language.
○ Chef does not make any assumptions about the current status of a node. It uses its own mechanism to get the current state of the machine.
6) Jenkins
Jenkins is a DevOps tool for monitoring the execution of repeated tasks. Jenkins is software that enables continuous integration. Jenkins is installed on a server where the central build takes place. It helps to integrate project changes more efficiently by finding issues quickly.
Features
○ Jenkins increases the scale of automation.
○ It can be easily set up and configured via a web interface.
○ It can distribute the tasks across multiple machines, thereby increasing concurrency.
○ It supports continuous integration and continuous delivery.
○ It offers 400 plugins to support building and testing virtually any project.
○ It requires little maintenance and has a built-in GUI tool for easy updates.
7) Git
Git is an open-source distributed version control system that is freely available to everyone. It is designed to handle projects from minor to major with speed and efficiency. It was developed to coordinate work among programmers. Version control allows you to track your work and collaborate with your team members in the same workspace. Git is a critical distributed version-control tool for DevOps.
Features
○ It is a free open source tool.
○ It allows distributed development.
○ It supports the pull request.
○ It enables a faster release cycle.
○ Git is very scalable.
○ It is very secure and completes the tasks very fast.
8) SALTSTACK
SaltStack is a lightweight DevOps tool. It shows real-time error queries, logs, and more directly in the workstation. SaltStack is an ideal solution for intelligent orchestration of the software-defined data center.
Features
○ It eliminates messy configuration or data changes.
○ It can trace detail of all the types of the web request.
○ It allows us to find and fix the bugs before production.
○ It provides secure access and configures image caches.
○ It secures multi-tenancy with granular role-based access control.
○ Flexible image management with a private registry to store and manage images.
9) Splunk
Splunk is a tool that makes machine data usable, accessible, and valuable to everyone. It delivers operational intelligence to DevOps teams and helps companies to be more secure, productive, and competitive.
Features
○ It offers a next-generation monitoring and analytics solution.
○ It delivers a single, unified view of different IT services.
○ It extends the Splunk platform with purpose-built solutions for security.
○ It provides data-driven analytics with actionable insights.
10) Selenium
Selenium is a portable software testing framework for web applications. It provides an easy
interface for developing automated tests.
Features
○ It is a free open source tool.
○ It supports multiple platforms for testing, such as Android and iOS.
○ It is easy to build a keyword-driven framework for WebDriver.
○ It creates robust browser-based regression automation suites and tests.