CCS356 Object Oriented Software Engineering
NOTES
BY: PROF. ANAND J. BHAGWAT
UNIT I
Large Programming: It is easier to build a wall than a house or a building; similarly, as the size of a program becomes extensive, engineering has to step in to give it a scientific process. Software engineering is needed for the following reasons:
o Reduces complexity
o To minimize software cost
o To decrease time
o Handling big projects
o Reliable software
o Effectiveness
Software Processes
A software process is the set of activities and associated outcomes that produce a software product. Software engineers mostly carry out these activities. There are four key process activities, which are common to all software processes. These activities are:
o Software specification
o Software development
o Software validation
o Software evolution
Software Crisis
The software crisis refers to the difficulty of building correct, reliable software within budget and on schedule as the size and complexity of software systems grow.
WATERFALL MODEL
The developer must complete every phase before the next phase begins.
This model is named "Waterfall Model", because its diagrammatic
representation resembles a cascade of waterfalls.
1. Requirements Analysis and Specification Phase: The aim of this phase is to understand the exact requirements of the customer and document them in a Software Requirement Specification (SRS) document.
2. Design Phase: This phase aims to transform the requirements gathered
in the SRS into a suitable form which permits further coding in a
programming language. It defines the overall software architecture together
with high level and detailed design. All this work is documented as a
Software Design Document (SDD).
Disadvantages of Waterfall model
o In this model, the risk factor is higher, so this model is not suitable for
more significant and complex projects.
o This model cannot accept the changes in requirements during
development.
INCREMENTAL MODEL
To develop software under the incremental model, the requirement analysis phase plays a crucial role.
RAD MODEL (RAPID APPLICATION DEVELOPMENT)
The various phases of RAD are as follows:
o Business modeling
o Data modeling
o Process modeling
o Application generation
o Testing and turnover
This model has a number of advantages, such as customer involvement, feedback from the customer during development, and building exactly the product that the user wants. Because of the multiple iterations, the chances of errors are reduced, and reliability and efficiency increase.
Evolutionary Model
Disadvantages
o The complexity of the spiral model can be greater than that of the other sequential models.
o The cost of developing a product through the spiral model is high.
SPIRAL MODEL
Objective setting: Each cycle in the spiral starts with the identification of the purpose for that cycle, the various alternatives that are possible for achieving the targets, and the constraints that exist.
Risk assessment and reduction: The focus of evaluation in this stage is on the perceived risks to the project.
Planning: Finally, the next step is planned. The project is reviewed, and a decision is made whether to continue with a further cycle of the spiral. If it is decided to continue, plans are drawn up for the next phase of the project.
Advantages
Disadvantages
PROTOTYPE MODEL
The prototype model requires that, before carrying out the development of the actual software, a working prototype of the system be built. A prototype is a toy implementation of the system. A prototype usually turns out to be a very crude version of the actual system, possibly exhibiting limited functional capabilities, low reliability, and inefficient performance compared to the actual software.
In many instances, the client only has a general view of what is expected
from the software product. In such a scenario where there is an absence of
detailed information regarding the input to the system, the processing
needs, and the output requirement, the prototyping model may be
employed.
Advantages of Prototype Model
Disadvantages
• It needs better communication between the team members. This may not
be achieved all the time.
Component Based Model (CBM)
The component-based assembly model uses object-oriented technologies. In object-oriented technologies, the emphasis is on the creation of classes. Classes are the entities that encapsulate data and algorithms. In component-based architecture, classes (i.e., the components required to build the application) can be used as reusable components.
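To make the idea concrete, here is a minimal sketch (not from the original notes) of a class acting as a reusable component; the CurrencyConverter name, its rates, and its methods are illustrative assumptions:

class CurrencyConverter:
    """A self-contained component: encapsulated data (rates) plus algorithms (convert)."""

    def __init__(self, rates):
        self.rates = rates  # encapsulated data, e.g. {"USD": 1.0, "EUR": 0.9}

    def convert(self, amount, src, dst):
        # Encapsulated algorithm, reusable by any application that needs it.
        return amount / self.rates[src] * self.rates[dst]

# The same component is reused, unchanged, by two different applications:
converter = CurrencyConverter({"USD": 1.0, "EUR": 0.9})
print(converter.convert(100, "USD", "EUR"))  # e.g. a billing application
print(converter.convert(50, "EUR", "USD"))   # e.g. a reporting application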
Characteristics of Component Assembly Model:
FORMAL METHODS
Why do we use formal methods?
Analysis
Formal methods can help in analyzing our software application's requirements, as they verify and validate them using formal verification and validation techniques. These techniques help us better understand the requirements.
Feasibility study
Design
Development
Testing
Deployment
Formal methods can help in validating that our software application meets the client's requirements and needs, and can ensure that the deployed software matches the formally verified models.
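As a lightweight illustration of this idea (an assumption-laden sketch, not a full formal method), pre- and postconditions for a hypothetical isqrt_spec function can be written as runtime-checked assertions in Python:

import math

def isqrt_spec(n):
    # Precondition (part of the specification): n must be non-negative.
    assert n >= 0, "precondition violated: n >= 0"
    r = math.isqrt(n)
    # Postcondition: r is the integer square root of n.
    assert r * r <= n < (r + 1) * (r + 1), "postcondition violated"
    return r

print(isqrt_spec(17))  # 4; both conditions are checked at run time

True formal methods go further and prove such properties for all inputs, rather than checking them at run time.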
Maintenance
Aspect-Oriented Software Development
AGILE MODEL/AGILITY
Agile methods break tasks into smaller iterations, or parts, and do not directly involve long-term planning. The project scope and requirements are laid down at the beginning of the development process. Plans regarding the number of iterations, and the duration and scope of each iteration, are clearly defined in advance.
1. Requirements gathering
2. Design the requirements
3. Construction/iteration
4. Testing/quality assurance
5. Deployment
6. Feedback
2. Design the requirements: When you have identified the project, work
with stakeholders to define requirements. You can use the user flow
diagram or the high-level UML diagram to show the work of new features
and show how it will apply to your existing system.
5. Deployment: In this phase, the team issues a product for the user's
work environment.
6. Feedback: After releasing the product, the last step is feedback. In this,
the team receives feedback about the product and works through the
feedback.
The following 12 Principles are based on the Agile Manifesto.
Our highest priority is to satisfy the customer through the early and
continuous delivery of valuable software.
Business people and developers must work together daily throughout the
project.
Build projects around motivated individuals. Give them the environment and
support they need, and trust them to get the job done.
At regular intervals, the team reflects on how to become more effective, then
tunes and adjusts its behavior accordingly.
o Scrum
o Crystal
o Dynamic Software Development Method (DSDM)
o Feature Driven Development (FDD)
o Lean Software Development
o eXtreme Programming (XP)
Scrum
o Scrum Master: The Scrum Master sets up the team, arranges the meetings and removes obstacles in the process.
o Product owner: The product owner creates the product backlog, prioritizes the backlog items, and is responsible for the delivery of functionality at each iteration.
o Scrum Team: The team manages and organizes its own work to complete the sprint or cycle.
Dynamic Software Development Method (DSDM)
DSDM uses the following techniques:
1. Time Boxing
2. MoSCoW Rules
3. Prototyping
The DSDM life cycle consists of seven phases:
1. Pre-project
2. Feasibility Study
3. Business Study
4. Functional Model Iteration
5. Design and Build Iteration
6. Implementation
7. Post-project
Advantages (Pros) of Agile Model:
1. Frequent delivery.
2. Face-to-face communication with clients.
3. Efficient design that fulfils the business requirements.
4. Changes are acceptable at any time.
5. It reduces total development time.
Disadvantages(Cons) of Agile Model:
Extreme Programming (XP) is built on five values:
Communication
Simplicity
Feedback
Courage
Respect
Extreme Programming is one of the Agile software development methodologies. It provides values and principles to guide team behavior. The team is expected to self-organize. Extreme Programming provides specific core practices where:
Code reviews are effective, as the code is reviewed all the time.
Testing is effective, as there is continuous regression testing.
Design is effective, as everybody needs to do refactoring daily.
Integration testing is important, as the team integrates and tests several times a day.
Short iterations are effective, as the planning game covers release planning and iteration planning.
History of Extreme Programming
Production and post-delivery defects: Emphasis is on unit tests to detect and fix defects early.
Misunderstanding the business and/or domain: Making the customer a part of the team ensures constant communication and clarifications.
Business changes: Changes are considered inevitable and are accommodated at any point of time.
Staff turnover: Intensive team collaboration ensures enthusiasm and goodwill. Cohesion of multiple disciplines fosters team spirit.
Some of the good practices that have been recognized in the extreme
programming model and suggested to maximize their use are given below:
Simplicity: Simplicity makes it easier to develop good-quality code as
well as to test and debug it.
UNIT II
1. REQUIREMENTS ANALYSIS
1. Identify the key stakeholders
In any project, the key stakeholders, including end users, generally have the final say on the project scope. Project teams should identify them early and involve them in the requirements gathering process from the beginning.
2. Understand the project goal
To capture all necessary requirements, project teams must first understand the project's
objective. What business need drives the project? What problem is the product meant to
solve? By understanding the desired end, the project team can define the problem
statement and have more productive discussions when gathering requirements.
3. Capture requirements
At this stage, all relevant stakeholders provide requirements. This can be done through
one-on-one interviews, focus groups or consideration of use cases. Project teams gather
stakeholder feedback and incorporate it into requirements.
4. Categorize requirements
1. Functional requirements.
2. Technical requirements.
3. Transitional requirements.
4. Operational requirements.
5. Interpret and document requirements
Post-categorization, the project team should analyze its set of requirements to determine
which ones are feasible. Interpretation and analysis are easier when requirements are well
defined and clearly worded. Each requirement should have a clear and understood impact
on the end product and the project plan. After all the requirements have been identified,
prioritized and analyzed, project teams should document them in the software
requirements specification (SRS).
6. Finalize SRS and get sign-off on requirements
The SRS should be shared with key stakeholders for sign-off to ensure that they agree
with the requirements. This helps prevent conflicts or disagreements later. Feedback, if
any, should be incorporated. The SRS can then be finalized and made available to the
entire development team. This document provides the foundation for the project's scope
and guides other steps during the software development lifecycle (SDLC), including
development and testing.
2. REQUIREMENT GATHERING
Step 2: Meet with stakeholders
Once you’ve identified your project stakeholders, meet with them to get an idea of what
they’re hoping to get out of the project. Understanding what stakeholders want matters
because they’re ultimately the ones you’re creating your deliverables for.
Some questions you can ask include:
What is your goal for this project?
What do you think would make this project successful?
What are your concerns about this project?
What do you wish this product or service would do that it doesn’t already?
What changes would you recommend about this project?
The stakeholders are the people you’re ultimately developing the project for, so you
should ask them questions that can help you create your list of requirements.
Step 4: List assumptions and requirements
Now that you’ve completed the intake process, create your requirements management
plan based on the information you’ve gathered.
Consider the questions you initially set out to answer during the requirements gathering
process. Then, use them to create your requirements goals, including:
Length of project schedule: You can map out your project timeline using a Gantt chart and
use it to visualize any project requirements that depend on project milestones. Some
requirements will apply for the full duration of the project, whereas others may only apply
during distinct project phases. For example, you’ll need a specific budget for team
member salaries throughout the entire project, but you may only need specific material
during the last stage of your project timeline.
People involved in the project: Identify exactly which team members will be involved in
your project, including how many designers, developers, or managers you’ll need to
execute every step. People are part of your project requirements because if you don’t
have the team members you need, you won’t be able to complete the project on time.
Project risks: Understanding your project risks is an important part of identifying project
requirements. Use a risk register to determine which risks are of highest priority, such as
stakeholder feedback, timeline delays, and lack of budget. Then, schedule a brainstorming
session with your team to figure out how to prevent these risks.
Like SMART goals, your project requirements should be actionable, measurable, and
quantifiable. Try to go into as much detail as possible when listing out your project budget,
timeline, required resources, and team.
Extended Requirements
These are basically "nice to have" requirements that might be out of the scope of the system.
Example:
Our system should record metrics and analytics.
Service health and performance monitoring.
Difference between Functional Requirements and Non-Functional Requirements
o A functional requirement defines a system or its component, whereas a non-functional requirement defines a quality attribute of a software system.
o Functional requirements are specified by the user, whereas non-functional requirements are specified by technical people, e.g., architects, technical leaders and software developers.
4. FEASIBILITY STUDY
Feasibility Study in Software Engineering is a study to evaluate the feasibility of a proposed project or system. The feasibility study is one of the four important stages of the Software Project Management Process. As the name suggests, a feasibility study is a feasibility analysis: a measure of how beneficial the development of the software product will be for the organization from a practical point of view. A feasibility study is carried out for many purposes: to analyze whether the software product will be right in terms of development, implementation, contribution of the project to the organization, etc.
TYPES OF FEASIBILITY STUDY
The feasibility study mainly concentrates on the areas mentioned below. Among these, the Economic Feasibility Study is the most important part of the feasibility analysis, and the Legal Feasibility Study is the least considered.
1. Technical Feasibility: The technical feasibility study reports whether the correct required resources and technologies exist to be used for project development. The feasibility study also analyzes the technical skills and capabilities of the technical team, and whether existing technology can be used or not.
2. Operational Feasibility: In Operational Feasibility, the degree to which the product meets service requirements is analyzed, along with how easy the product will be to operate and maintain after deployment.
3. Economic Feasibility: In the Economic Feasibility study, the cost and benefit of the project are analyzed.
4. Legal Feasibility: In the Legal Feasibility study, the project is analyzed from a legal point of view. Overall, Legal Feasibility checks whether the proposed project conforms to legal and ethical requirements.
5. Schedule Feasibility: In the Schedule Feasibility study, timelines/deadlines for the proposed project are analyzed, including how much time the team will take to complete the final project. This has a great impact on the organization, as the purpose of the project may fail if it cannot be completed on time.
7. Cultural and Political Feasibility: This section assesses how the software project
will affect the political environment and organizational culture.
8. Market Feasibility: This refers to evaluating the market’s willingness and ability to
accept the suggested software system. Analyzing the target market, understanding
consumer wants and assessing possible rivals are all part of this study.
9. Resource Feasibility: This method evaluates whether the resources needed to complete the software project are available. It guarantees that sufficient hardware, software, skilled labor and funding are available to complete the project successfully.
Aim of Feasibility Study
Whether the overall objectives of the organization are covered and contributed to by the system or not.
Whether the implementation of the system can be done using current technology or not.
Whether the system can be integrated with other systems that already exist.
STATE MACHINE DIAGRAM
The State Machine Diagram above shows the different states in which the verification sub-system or class exists for a particular system.
2. Basic components and notations of a State Machine diagram
2.1. Initial state
We use a black filled circle to represent the initial state of a system or a class.
2.2. Transition
We use a solid arrow to represent the transition or change of control from one state to
another. The arrow is labelled with the event which causes the change in state.
2.3. State
We use a rounded rectangle to represent a state. A state represents the conditions or
circumstances of an object of a class at an instant of time.
2.4. Fork
We use a rounded solid rectangular bar to represent a Fork notation with incoming
arrow from the parent state and outgoing arrows towards the newly created states. We
use the fork notation to represent a state splitting into two or more concurrent states.
2.5. Join
We use a rounded solid rectangular bar to represent a Join notation with incoming
arrows from the joining states and outgoing arrow towards the common goal state. We
use the join notation when two or more states concurrently converge into one on the
occurrence of an event or events.
Below are the steps for how to draw a State Machine Diagram in UML:
Step 1. Identify the System:
Understand what your diagram is representing.
Whether it's a machine, a process, or any object, know what different situations or conditions it might go through.
Step 2. Identify Initial and Final States:
Figure out where your system starts (initial state) and where it ends (final state).
These are like the beginning and end points of your system's journey.
Step 3. Identify Possible States:
Think about all the different situations your system can be in.
These are like the various phases or conditions it experiences.
Use boundary values to guide you in defining these states.
Step 4. Label Triggering Events:
Understand what causes your system to move from one state to another.
These causes or conditions are the events.
Label each transition with what makes it happen.
Step 5. Draw the Diagram with appropriate notations:
Now, take all this information and draw it out.
Use rounded rectangles for states, arrows for transitions, and filled circles for the initial and final states.
Be sure to connect everything in a way that makes sense.
Let's understand the State Machine diagram with the help of an example, i.e., an online order:
The UML diagrams we draw depend on the system we aim to represent. Here is just an example of how an online ordering system might look:
1. On the event of an order being received, we transition from our initial state to the Unprocessed Order state.
2. The unprocessed order is then checked.
3. If the order is rejected, we transition to the Rejected Order state.
4. If the order is accepted and we have the items available, we transition to the Fulfilled Order state.
5. However, if the items are not available, we transition to the Pending Order state.
6. After the order is fulfilled, we transition to the final state. In this example, we merge the two states, i.e., Fulfilled Order and Rejected Order, into one final state.
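The same order life cycle can be sketched in code. The following minimal Python transition table mirrors the states described above; the event names are illustrative assumptions:

TRANSITIONS = {
    ("Initial", "order received"): "Unprocessed Order",
    ("Unprocessed Order", "order rejected"): "Rejected Order",
    ("Unprocessed Order", "order accepted, items available"): "Fulfilled Order",
    ("Unprocessed Order", "order accepted, items unavailable"): "Pending Order",
    ("Pending Order", "items become available"): "Fulfilled Order",
    ("Fulfilled Order", "order closed"): "Final",
    ("Rejected Order", "order closed"): "Final",  # both states merge into one final state
}

def step(state, event):
    # Fire a transition; an event with no matching transition leaves the state unchanged.
    return TRANSITIONS.get((state, event), state)

state = "Initial"
for event in ["order received", "order accepted, items unavailable",
              "items become available", "order closed"]:
    state = step(state, event)
    print(event, "->", state)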
UNIT III
SOFTWARE DESIGN
Interface Design:
Interface design is the specification of the interaction between a system and its environment. This phase proceeds at a high level of abstraction with respect to the inner workings of the system, i.e., during interface design the internals of the system are completely ignored and the system is treated as a black box. Attention is focused on the dialogue between the target system and the users, devices, and other systems with which it interacts. The design problem statement produced during the problem analysis step should identify the people, other systems, and devices, which are collectively called agents.
Detailed Design:
Detailed design is the specification of the internal elements of all major system components, their properties, relationships, processing, and often their algorithms and data structures.
The detailed design may include:
Decomposition of major system components into program units.
Allocation of functional responsibilities to units.
User interfaces
Unit states and state changes
Data and control interaction between units
Data packaging and implementation, including issues of scope and
visibility of program
Algorithms and data structures
Quality attributes
Functionality:
Usability:
Reliability:
Supportability:
Testability, compatibility and configurability are the attributes that allow a system to be installed easily and its problems to be found easily. Supportability also consists of additional attributes such as compatibility, extensibility, fault tolerance, modularity, reusability, robustness, security, portability, and scalability.
Design concepts
1. Abstraction
4. Modularity
6. Functional independence
7. Refinement
9. Design classes
The figure represents a pipe-and-filter architecture, since it uses both pipes and filters: it has a set of components called filters connected by pipes.
Pipes are used to transmit data from one component to the next. Each filter works independently and is designed to take data input of a certain form and produce data output of a specified form for the next filter. The filters don't require any knowledge of the workings of neighboring filters.
If the data flow degenerates into a single line of transforms, it is termed batch sequential. This structure accepts a batch of data and then applies a series of sequential components to transform it.
User Interface Design Process
The analysis and design process of a user interface is iterative and can be represented by a spiral model. The analysis and design process of a user interface consists of four framework activities.
1. User, task, environmental analysis, and modeling: Initially, the focus is on the profile of the users who will interact with the system, i.e., their understanding, skill and knowledge, type of user, etc. Based on the user's profile, users are grouped into categories, and requirements are gathered from each category. Based on the requirements, the developer understands how to develop the interface. Once all the requirements are gathered, a detailed analysis is conducted. In the analysis part, the tasks that the user performs to establish the goals of the system are identified, described and elaborated. The analysis of the user environment focuses on the physical work environment. Among the questions to be asked are:
Where will the interface be located physically?
Will the user be sitting, standing, or performing other tasks unrelated to the
interface?
Does the interface hardware accommodate space, light, or noise constraints?
Are there special human factors considerations driven by environmental
factors?
2. Interface Design: The goal of this phase is to define the set of interface objects and actions, i.e., the control mechanisms that enable the user to perform desired tasks. Indicate how these control mechanisms affect the system. Specify the action sequence of tasks and subtasks, also called a user scenario. Indicate the state of the system when the user performs a particular task. Always follow the three golden rules stated by Theo Mandel. Design issues such as response time, command and action structure, error handling, and help facilities are considered as the design model is refined. This phase serves as the foundation for the implementation phase.
3. Interface construction and implementation: The implementation activity begins with the creation of a prototype (model) that enables usage scenarios to be evaluated. As the iterative design process continues, a user interface toolkit that allows the creation of windows, menus, device interaction, error messages, commands, and many other elements of an interactive environment can be used to complete the construction of the interface.
4. Interface Validation: This phase focuses on testing the interface. The interface should be able to perform tasks correctly, handle a variety of tasks, and achieve all the user's requirements. It should be easy to use and easy to learn, and users should accept the interface as a useful one in their work.
Function Oriented Design
The design process for software systems often has two levels. At the first
level the focus is on deciding which modules are needed for the system on
the basis of SRS (Software Requirement Specification) and how the
modules should be interconnected.
Function Oriented Design is an approach to software design where the
design is decomposed into a set of interacting units where each unit has a
clearly defined function.
Generic Procedure:
Start with a high-level description of what the software/program does. Refine each part of the description one by one by specifying in greater detail the functionality of each part. These steps lead to a top-down structure.
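As an illustrative sketch of this refinement (the report-generation example and its function names are assumptions, not from the notes), a high-level function is refined into smaller units:

def generate_report(raw_records):
    # High-level description: parse, summarize, and format the data.
    data = parse_records(raw_records)   # each part refined into its own unit below
    totals = summarize(data)
    return format_output(totals)

def parse_records(raw_records):
    return [int(r) for r in raw_records]

def summarize(data):
    return {"count": len(data), "sum": sum(data)}

def format_output(totals):
    return f"{totals['count']} records, total {totals['sum']}"

print(generate_report(["10", "20", "30"]))  # 3 records, total 60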
Problem in the Top-Down design method:
Mostly, each module is used by at most one other module, and that module is called its parent module.
Solution to the problem:
Design reusable modules, so that a single module can be used by several other modules to perform their required functions.
Pseudo Code:
Pseudo code is a system description in short, English-like phrases describing the function. It uses keywords and indentation. Pseudo code is used as a replacement for flow charts, and it decreases the amount of documentation required.
Consider a scenario of synchronous message passing. You have two components in your system that communicate with each other; let's call them the sender and the receiver. The receiver asks for a service from the sender, and the sender serves the request and waits for an acknowledgment from the receiver.
There is another receiver that requests a service from the sender. The sender is blocked since it hasn't yet received any acknowledgment from the first receiver.
The sender isn't able to serve the second receiver, which can create problems. To solve this drawback, the Pub-Sub model was introduced.
What is Pub/Sub Architecture?
The Pub/Sub (Publisher/Subscriber) model is a messaging pattern used in
software architecture to facilitate asynchronous communication between
different components or systems. In this model, publishers produce
messages that are then consumed by subscribers.
Key points of the Pub/Sub model include:
Publishers: Entities that generate and send messages.
Subscribers: Entities that receive and consume messages.
Topics: Channels or categories to which messages are published.
Message Brokers: Intermediaries that manage the routing of messages
between publishers and subscribers.
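A minimal in-process sketch of the model, assuming a simple Broker class with illustrative topic names (real systems typically use dedicated message brokers):

from collections import defaultdict

class Broker:
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # The publisher hands the message to the broker and moves on;
        # it never waits for acknowledgments from individual subscribers.
        for callback in self.subscribers[topic]:
            callback(message)

broker = Broker()
broker.subscribe("orders", lambda m: print("billing received:", m))
broker.subscribe("orders", lambda m: print("shipping received:", m))
broker.publish("orders", "order #42 created")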
The user interface is the front-end application view to which the user
interacts to use the software. The software becomes more popular if its
user interface is:
1. Attractive
2. Simple to use
3. Responsive in a short time
4. Clear to understand
5. Consistent on all interface screens
Types of User Interface
1. Command Line Interface: The Command Line Interface provides a
command prompt, where the user types the command and feeds it to
the system. The user needs to remember the syntax of the command
and its use.
2. Graphical User Interface: The Graphical User Interface provides a simple interactive interface to interact with the system. A GUI can be a combination of both hardware and software. Using a GUI, the user interacts with the software.
UNIT IV
SOFTWARE TESTING
Software testing is a widely used technology because it is compulsory to test each and every piece of software before deployment. Software testing methods include Black Box Testing, White Box Testing, Visual Box Testing and Grey Box Testing.
Types of Testing
o Manual Testing
o Automation Testing
Types of Manual Testing
o White Box Testing
o Black Box Testing
o Grey Box Testing
UNIT TESTING
Unit testing tests individual modules in isolation; these modules are later combined and tested together in integration testing. The software is developed from a number of software modules that are coded by different coders or programmers. The goal of integration testing is to check the correctness of the communication among all the modules.
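A minimal sketch of a unit test, assuming Python's built-in unittest module and a hypothetical add() function as the unit under test:

import unittest

def add(a, b):          # the unit (module/function) under test
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-1, -1), -2)

if __name__ == "__main__":
    unittest.main()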
INTEGRATION TESTING
Types of Integration Testing
Incremental Approach
o Top-Down approach
o Bottom-Up approach
Top-Down Approach
The top-down testing strategy deals with the process in which higher-level modules are tested with lower-level modules until the testing of all the modules is successfully completed. Major design flaws can be detected and fixed early because critical modules are tested first. In this type of method, we add the modules incrementally, one by one, and check the data flow in the same order.
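In top-down testing, lower-level modules that are not yet ready are commonly replaced by stubs. A minimal sketch (the checkout/payment example is an illustrative assumption):

def payment_stub(order_total):
    # Stub standing in for the real, not-yet-written lower-level payment module.
    return {"status": "approved", "charged": order_total}

def checkout(order_total, payment_module):
    # Higher-level module under test; it depends on the lower-level module.
    result = payment_module(order_total)
    return "confirmed" if result["status"] == "approved" else "failed"

# The top-level flow is tested before the real payment module exists:
assert checkout(99.0, payment_stub) == "confirmed"
print("top-down test with stub passed")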
Advantages:
Disadvantages:
Bottom-Up Method
The bottom-up testing strategy deals with the process in which lower-level modules are tested with higher-level modules until the testing of all the modules is successfully completed. Top-level critical modules are tested last, so defects in them may be detected late. In other words, we add the modules from bottom to top and check the data flow in the same order.
Advantages
Disadvantages
o Critical modules are tested last due to which the defects can occur.
o There is no possibility of an early prototype.
Big Bang Method (Non-Incremental Approach)
We go for this method when the data flow is very complex and when it is difficult to identify which module is a parent and which is a child. In such a case, we create the data in one module, bang it against all the other existing modules, and check if the data is present. Hence, it is also known as the Big Bang method.
Since this testing can be done only after the completion of all the modules, the testing team has less time for the execution of this process, so internally linked interfaces and high-risk critical modules can be missed easily.
Advantages:
Disadvantages:
REGRESSION TESTING
This method is used to test the product after modifications or updates have been made. It ensures that any change in the product does not affect the existing modules of the product. It verifies that the bugs that were fixed and the newly added features have not created any problems in the previous working version of the software.
Regression tests are also known as the Verification Method. Test cases are often automated, because test cases are required to execute many times, and running the same test cases again and again manually is time-consuming and tedious.
These modifications may affect system functionality. Regression testing becomes necessary in this case. Regression testing can be performed using the following techniques:
1. Re-test All
Regression testing is performed in the following cases:
1. When new functionality is added to the application.
2. When there is a change in requirements.
3. When a defect is fixed.
4. When a performance issue is fixed.
5. When there is an environment change.
Unit Regression Testing: In this, we test only the changed unit, not the impact area, because the change may affect the components of the same module.
Regional Regression Testing: In this, we test the modification along with the impact area or regions. We test the impact area because, if there are dependent modules, the change will affect the other modules as well.
Full Regression Testing: For example, during the second and third releases of the product, the client asks for the addition of 3-4 new features, and some defects from the previous release also need to be fixed. The testing team then performs an impact analysis and identifies that the above modifications will lead us to test the entire product.
SYSTEM TESTING
System testing is a type of software testing that evaluates the overall functionality and performance of a complete and fully integrated software solution. It tests whether the system meets the specified requirements and whether it is suitable for delivery to the end users. This type of testing is performed after integration testing and before acceptance testing.
System testing is performed on a complete, integrated system to evaluate the compliance of the system with the corresponding requirements. In system testing, the components that passed integration testing are taken as input. While the goal of integration testing is to detect any irregularity between the units that are integrated together, system testing detects defects both within the integrated units and in the whole system. The result of system testing is the observed behavior of a component or a system when it is tested.
System testing is carried out on the whole system in the context of the system requirement specifications, the functional requirement specifications, or both. It tests the design and behavior of the system as well as the expectations of the customer. It is performed to test the system beyond the bounds mentioned in the software requirements specification (SRS).
System testing is basically performed by a testing team that is independent of the development team, which helps to test the quality of the system impartially. It covers both functional and non-functional testing, and it is a black-box testing technique.
Defect Reporting: Defects in the system are detected and reported.
Regression Testing: It is carried out to test for side effects of the fixes made during testing.
Log Defects: Detected defects are logged and fixed in this step.
Retest: If a test is not successful, the test is performed again.
The testing environment is similar to that of the real-time production or business environment.
It checks the entire functionality of the system with different test scripts, and it covers the technical and business requirements of clients.
After this testing, the product will have covered almost all possible bugs or errors, and hence the development team can confidently go ahead with acceptance testing.
SMOKE TESTING
Smoke testing comes into the picture at the time of receiving build software from the development team. The purpose of smoke testing is to determine whether the build software is testable or not. It is done at the time of "building software." This process is also known as "Day 0" testing.
Testing the basic and critical features of an application before doing one round of deep, rigorous testing (before checking all possible positive and negative values) is known as smoke testing.
In smoke testing, we only focus on the positive flow of the application and enter only valid data, not invalid data. In smoke testing, we verify whether every build is testable or not; hence, it is also known as Build Verification Testing.
Validation Testing ensures that the product actually meets the client's needs. It can also be defined as demonstrating that the product fulfills its intended use when deployed in an appropriate environment.
Activities:
Unit Testing
Integration Testing
System Testing
User Acceptance Testing
Standard Definition of Acceptance Testing
It is formal testing according to user needs, requirements, and business
processes conducted to determine whether a system satisfies the
acceptance criteria or not and to enable the users, customers, or other
authorized entities to determine whether to accept the system or not.
Acceptance Testing is the last phase of software testing performed after
System Testing and before making the system available for actual use.
Types of Acceptance Testing
1. User Acceptance Testing (UAT)
User acceptance testing is used to determine whether the product is
working for the user correctly. Specific requirements which are quite often
used by the customers are primarily picked for testing purposes. This is
also termed as End-User Testing.
2. Business Acceptance Testing (BAT)
BAT is used to determine whether the product meets the business goals and purposes or not. BAT mainly focuses on business profits, which are quite challenging due to changing market conditions and new technologies, so the current implementation may have to be changed, which results in extra budget.
3. Contract Acceptance Testing (CAT)
CAT is a contract that specifies that once the product goes live, the acceptance test must be performed within a predetermined period, and it should pass all the acceptance use cases.
4. Regulations Acceptance Testing (RAT)
RAT is used to determine whether the product violates the rules and regulations that are defined by the government of the country where it is being released. This may be unintentional, but it will impact the business negatively.
5. Operational Acceptance Testing (OAT)
OAT is used to determine the operational readiness of the product and is
non-functional testing. It mainly includes testing of recovery, compatibility,
maintainability, reliability, etc.
6. Alpha Testing
Alpha testing is used to assess the product in the development/testing environment by a specialized team of testers, usually called alpha testers.
7. Beta Testing
Beta testing is used to assess the product by exposing it to real end users, typically called beta testers, in their own environment. Feedback is collected from the users, and the defects are fixed. This also helps in enhancing the product to give a rich user experience.
WHITE BOX TESTING
The box testing approach of software testing consists of black box testing and white box testing. We are discussing here white box testing, which is also known as glass box testing, structural testing, clear box testing, open box testing and transparent box testing. It tests the internal coding and infrastructure of a piece of software, focusing on checking predefined inputs against expected and desired outputs. It is based on the inner workings of an application and revolves around internal structure testing. In this type of testing, programming skills are required to design test cases. The primary goal of white box testing is to focus on the flow of inputs and outputs through the software and on strengthening the security of the software.
The term 'white box' is used because of the internal perspective of the system. The names clear box, white box, and transparent box denote the ability to see through the software's outer shell into its inner workings.
Developers do white box testing. In this, the developer will test every line of the code of the program. The developers perform white-box testing and then send the application or the software to the testing team, which performs black box testing, verifies the application against the requirements, identifies the bugs, and sends the application back to the developer.
The developer fixes the bugs, does one more round of white box testing, and sends the application back to the testing team. Here, fixing the bugs implies that the bug is removed and the particular feature is working correctly in the application.
Here, the test engineers are not involved in fixing the defects, for the following reasons:
o Fixing the bug might interrupt the other features. Therefore, the test
engineer should always find the bugs, and developers should still be
doing the bug fixes.
o If the test engineers spend most of the time fixing the defects, then
they may be unable to find the other bugs in the application.
The white box testing contains various tests, which are as follows:
o Path testing
o Loop testing
o Condition testing
o Testing based on the memory perspective
o Test performance of the program
Path testing
In path testing, we write flow graphs and test all independent paths. Writing the flow graph implies that flow graphs represent the flow of the program and show how the parts of the program are connected with one another, as we can see in the below image.
Testing all the independent paths implies that, for example, for a path from main() to function G, we first set the parameters and test whether the program is correct in that particular path, and in the same way we test all the other paths and fix the bugs.
Loop testing
In loop testing, we test loops such as while, for, and do-while, and we also check whether the ending condition works correctly and whether the size of the condition is adequate.
For example: we have one program where the developers have given about 50,000 loops.

while (count < 50000)   // the loop body runs 50,000 times
{
    ......
    ......
}

We cannot test this program manually for all 50,000 loop cycles. So we write a small program that runs all 50,000 cycles; as we can see in the below program, test P is written in the same language as the source code program. This is known as a unit test, and it is written by the developers only.
Test P
{
    ......
    ......
}
As we can see in the below image, we have various requirements such as 1, 2, 3, 4, and the developer writes the programs 1, 2, 3, 4 for these parallel conditions. Here the application contains hundreds of lines of code.
The developer does the white box testing and tests all the programs line by line to find the bugs. If they find any bug in any of the programs, they correct it, and then they have to test the system again. This process takes a lot of time and effort and slows down the product release time.
Now, suppose we have another case, where the client wants to modify the requirements. Then the developer makes the required changes and tests all the programs again, which takes a lot of time and effort.
To avoid this, we write tests for the program, where the developer writes the test code in the same language as the source code. Then they execute this test code, which is known as a unit test program. These test programs are linked to the main program and implemented as programs.
Therefore, if there is any requirement for modification, or a bug in the code, the developer makes the adjustment both in the main program and in the test program and then executes the test program.
Condition testing
In this, we test all logical conditions for both true and false values; that is, we verify both the if and the else branches.
For example:

if (condition)   // true branch
{
    ......
}
else             // false branch
{
    ......
}
The above program should work fine for both conditions: when the condition is true, the if branch executes, and otherwise the else branch executes.
Black-box testing is a type of software testing in which the tester is not
concerned with the internal knowledge or implementation details of the
software but rather focuses on validating the functionality based on the
provided specifications or requirements.
Black box testing can be done in the following ways:
1. Syntax-Driven Testing – This type of testing is applied to systems
that can be syntactically represented by some language. For example,
language can be represented by context-free grammar. In this, the test
cases are generated so that each grammar rule is used at least once.
2. Equivalence partitioning – It is often seen that many types of inputs
work similarly so instead of giving all of them separately we can group
them and test only one input of each group. The idea is to partition the
input domain of the system into several equivalence classes such that
each member of the class works similarly, i.e., if a test case in one class
results in some error, other members of the class would also result in the
same error.
The technique involves two steps:
1. Identification of equivalence class – Partition any input domain into
a minimum of two sets: valid values and invalid values. For
example, if the valid range is 0 to 100 then select one valid input like
49 and one invalid like 104.
2. Generating test cases – (i) Assign a unique identification number to each valid and invalid class of input. (ii) Write test cases covering all valid and invalid classes, making sure that no two invalid inputs mask each other. For example, to calculate the square root of a number, the equivalence classes will be:
(a) Valid inputs:
A whole number which is a perfect square; the output will be an integer.
A whole number which is not a perfect square; the output will be a decimal number.
Positive decimals.
(b) Invalid inputs:
Negative numbers (integer or decimal).
Characters other than numbers, like "a", "!", ";", etc.
3. Boundary value analysis – Boundaries are very good places for errors to occur. Hence, if test cases are designed for boundary values of the input domain, then the efficiency of testing improves and the probability of finding errors also increases. For example, if the valid range is 10 to 100, then test for 10 and 100 as well, apart from other valid and invalid inputs. (A small test sketch combining this with equivalence partitioning appears at the end of this section.)
4. Cause effect graphing – This technique establishes a relationship between logical inputs, called causes, and the corresponding actions, called effects. The causes and effects are represented using Boolean graphs. The following steps are followed:
The following steps are followed:
1. Identify inputs (causes) and outputs (effect).
2. Develop a cause-effect graph.
3. Transform the graph into a decision table.
4. Convert decision table rules to test cases.
For example, in the following cause-effect graph:
Each column corresponds to a rule which will become a test case for
testing. So there will be 4 test cases.
5. Requirement-based testing – It includes validating the requirements
given in the SRS of a software system.
6. Compatibility testing – The test case results depend not only on the product but also on the infrastructure delivering the functionality. When the infrastructure parameters are changed, the product is still expected to work properly. Some parameters that generally affect the compatibility of software are:
1. Processor (Pentium 3, Pentium 4) and number of processors.
2. Architecture and characteristics of machine (32-bit or 64-bit).
3. Back-end components such as database servers.
4. Operating System (Windows, Linux, etc).
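As referenced under boundary value analysis above, here is an illustrative test design combining equivalence partitioning and boundary value analysis, assuming a hypothetical accept_age() function whose valid range is 10 to 100:

def accept_age(age):
    return 10 <= age <= 100

# Equivalence partitioning: one representative per class.
assert accept_age(49) is True     # valid class
assert accept_age(104) is False   # invalid class (above range)
assert accept_age(5) is False     # invalid class (below range)

# Boundary value analysis: boundaries are where errors tend to occur.
assert accept_age(10) is True     # lower boundary
assert accept_age(100) is True    # upper boundary
assert accept_age(9) is False     # just below the lower boundary
assert accept_age(101) is False   # just above the upper boundary

print("all equivalence and boundary tests passed")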
Black Box Testing Type
The following are the several categories of black box testing:
1. Functional Testing
2. Regression Testing
3. Nonfunctional Testing (NFT)
Functional Testing: It checks the system against the software's functional requirements.
Regression Testing: It ensures that the newly added code is compatible
with the existing code. In other words, a new software update has no
impact on the functionality of the software. This is carried out after a
system maintenance operation and upgrades.
Nonfunctional Testing: Nonfunctional testing is also known as NFT.
This testing is not functional testing of software. It focuses on the
software’s performance, usability, and scalability.
Tools Used for Black Box Testing:
1. Appium
2. Selenium
3. Microsoft Coded UI
4. Applitools
5. HP QTP.
What can be identified by Black Box Testing
1. Discovers missing functions, incorrect function & interface errors
2. Discover the errors faced in accessing the database
3. Discovers the errors that occur while initiating & terminating any
functions.
4. Discovers the errors in performance or behaviour of software.
Features of black box testing:
1. Independent testing: Black box testing is performed by testers who
are not involved in the development of the application, which helps to
ensure that testing is unbiased and impartial.
2. Testing from a user’s perspective: Black box testing is conducted
from the perspective of an end user, which helps to ensure that the
application meets user requirements and is easy to use.
3. No knowledge of internal code: Testers performing black box testing
do not have access to the application’s internal code, which allows
them to focus on testing the application’s external behaviour and
functionality.
4. Requirements-based testing: Black box testing is typically based on
the application’s requirements, which helps to ensure that the
application meets the required specifications.
5. Different testing techniques: Black box testing can be performed
using various testing techniques, such as functional testing, usability
testing, acceptance testing, and regression testing.
6. Easy to automate: Black box testing is easy to automate using
various automation tools, which helps to reduce the overall testing
time and effort.
7. Scalability: Black box testing can be scaled up or down depending on
the size and complexity of the application being tested.
8. Limited knowledge of application: Testers performing black box testing have limited knowledge of the application being tested, which helps to ensure that testing is more representative of how the end users will interact with the application.
Advantages of Black Box Testing:
The tester does not need functional knowledge or programming skills to implement black box testing.
It is efficient for implementing the tests in the larger system.
Tests are executed from the user’s or client’s point of view.
Test cases are easily reproducible.
It is used in finding the ambiguity and contradictions in the functional
specifications.
Disadvantages of Black Box Testing:
There is a possibility of repeating the same tests while implementing
the testing process.
Without clear functional specifications, test cases are difficult to
implement.
It is difficult to execute the test cases because of complex inputs at
different stages of testing.
Sometimes, the reason for the test failure cannot be detected.
Some programs in the application are not tested.
It does not reveal the errors in the control structure.
Working with a large sample space of inputs can be exhaustive and
consumes a lot of time.
UNIT V
There are three needs for software project management. These are:
1. Time
2. Cost
3. Quality
2. Risk Management
Risk management is the analysis and identification of risks that is
followed by synchronized and economical implementation of resources to
minimize, operate and control the possibility or effect of unfortunate
events or to maximize the realization of opportunities.
3. Requirement Management
It is the process of analyzing, prioritizing, tracking, and documenting
requirements and then supervising change and communicating to
pertinent stakeholders. It is a continuous process during a project.
4. Change Management
Change management is a systematic approach to dealing with the
transition or transformation of an organization’s goals, processes, or
technologies. The purpose of change management is to execute
strategies for effecting change, controlling change, and helping people to
adapt to change.
5. Software Configuration Management
Software configuration management is the process of controlling and tracking changes in the software, part of the larger cross-disciplinary field of configuration management. Software configuration management includes revision control and the establishment of baselines.
6. Release Management
Release management is the task of planning, controlling, and scheduling builds and deploying releases. Release management ensures that the organization delivers new and enhanced services required by the customer while protecting the integrity of existing services.
Aspects of Software Project Management
The list of focus areas it can tackle and the broad upsides of Software Project Management are:
1. Planning
The software project manager lays out the complete project’s blueprint.
The project plan will outline the scope, resources, timelines, techniques,
strategy, communication, testing, and maintenance steps. SPM can aid
greatly here.
2. Leading
A software project manager brings together and leads a team of
engineers, strategists, programmers, designers, and data scientists.
Leading a team necessitates exceptional communication, interpersonal,
and leadership abilities. One can only hope to do this effectively if one
sticks with the core SPM principles.
3. Execution
SPM comes to the rescue here also, as the person in charge of software projects (if well versed with SPM/Agile methodologies) will ensure that each stage of the project is completed successfully. Measuring progress, monitoring how teams function, and generating status reports are all part of this process.
4. Time Management
Abiding by a timeline is crucial to completing deliverables successfully.
This is especially difficult when managing software projects because
changes to the original project charter are unavoidable over time. To
assure progress in the face of blockages or changes, software project
managers ought to be specialists in managing risk and emergency
preparedness. Risk mitigation and management is one of the core tenets of the philosophy of SPM.
5. Budget
Software project managers, like conventional project managers, are
responsible for generating a project budget and adhering to it as closely
as feasible, regulating spending, and reassigning funds as needed. SPM
teaches us how to effectively manage the monetary aspect of projects to
avoid running into a financial crunch later on in the project.
6. Maintenance
Software project management emphasizes continuous product testing to find and repair defects early, tailor the end product to the needs of the client, and keep the project on track. The software project manager ensures that the product is thoroughly tested, analyzed, and adjusted as needed. This is another point in favor of SPM.
SOFTWARE CONFIGURATION MANAGEMENT (SCM)
After modification, the object is "checked in" to the database, and appropriate version control mechanisms are used to create the next version of the software. Version control ensures that changes are tracked and that the system is always in a known and stable state.
SCM involves a set of processes and tools that help to manage the
different components of a software system, including source code,
documentation, and other assets. It enables teams to track changes
made to the software system, identify when and why changes were
made, and manage the integration of these changes into the final
product.
Importance of Software Configuration Management
1. Effective Bug Tracking: Linking code modifications to issues that have
been reported, makes bug tracking more effective.
2. Continuous Deployment and Integration: SCM combines with
continuous processes to automate deployment and testing, resulting in
more dependable and timely software delivery.
3. Risk management: SCM lowers the chance of introducing critical flaws
by assisting in the early detection and correction of problems.
4. Support for Big Projects: SCM offers an orderly method to handle code modifications for big projects, fostering a well-organized development process.
5. Reproducibility: By recording the precise versions of code, libraries, and dependencies, SCM makes builds reproducible.
6. Parallel Development: SCM facilitates parallel development by
enabling several developers to collaborate on various branches at
once.
The main advantages of SCM
1. Improved productivity and efficiency by reducing the time and effort
required to manage software changes.
2. Reduced risk of errors and defects by ensuring that all changes are
properly tested and validated.
3. Increased collaboration and communication among team members by
providing a central repository for software artifacts.
4. Improved quality and stability of software systems by ensuring that all
changes are properly controlled and managed.
The main disadvantages of SCM
1. Increased complexity and overhead, particularly in large software
systems.
2. Difficulty in managing dependencies and ensuring that all changes are
properly integrated.
3. Potential for conflicts and delays, particularly in large development
teams with multiple contributors.
PROJECT SCHEDULING
A project schedule is a mechanism used to communicate which tasks need
to be performed, which organizational resources will be allocated to
those tasks, and in what time frame the work must be done. Effective
project scheduling leads to project success, reduced cost, and
increased customer satisfaction. Scheduling in project management means
listing the activities, deliverables, and milestones to be delivered
within a project, in more detail than an average weekly planner. The
most common and important form of project schedule is the Gantt chart.
Project size estimation is a crucial aspect of software engineering, as it
helps in planning and allocating resources for the project. Here are some of
the popular project size estimation techniques used in software
engineering:
Expert Judgment: In this technique, a group of experts in the relevant
field estimates the project size based on their experience and expertise.
This technique is often used when there is limited information available
about the project.
Analogous Estimation: This technique involves estimating the project
size based on the similarities between the current project and previously
completed projects. This technique is useful when historical data is
available for similar projects.
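For instance, a minimal Python sketch of the idea, with all figures
assumed for illustration:

past_project_kloc = 40.0  # size of a similar completed project (assumed)
relative_scope = 1.25     # expert judgment: new project ~25% larger

estimated_kloc = past_project_kloc * relative_scope
print(f"Analogous estimate: {estimated_kloc:.1f} KLOC")  # 50.0 KLOC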
Bottom-up Estimation: In this technique, the project is divided into
smaller modules or tasks, and each task is estimated separately. The
estimates are then aggregated to arrive at the overall project estimate.
Three-point Estimation: This technique involves estimating the project
size using three values: optimistic, pessimistic, and most likely. These
values are then used to calculate the expected project size using a
formula such as the PERT formula.
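As a worked illustration, a minimal Python sketch of the PERT formula
E = (O + 4M + P) / 6, with assumed KLOC figures:

optimistic, most_likely, pessimistic = 30.0, 45.0, 72.0  # KLOC, assumed

expected = (optimistic + 4 * most_likely + pessimistic) / 6
std_dev = (pessimistic - optimistic) / 6                 # PERT spread

print(f"Expected size: {expected:.1f} KLOC (sigma = {std_dev:.1f})")
# prints: Expected size: 47.0 KLOC (sigma = 7.0)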
Function Points: This technique involves estimating the project size
based on the functionality provided by the software. Function points
consider factors such as inputs, outputs, inquiries, and files to arrive at
the project size estimate.
Use Case Points: This technique involves estimating the project size
based on the number of use cases that the software must support. Use
case points consider factors such as the complexity of each use case,
the number of actors involved, and the number of use cases.
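One common variant is Karner's method, sketched minimally below in
Python; the weights and factor formulas are the standard Karner
constants, and all counts and factor totals are assumed for illustration:

# Karner's weights: actors simple=1, average=2, complex=3;
# use cases simple=5, average=10, complex=15. Counts are assumed.
actors = {"simple": 2, "average": 2, "complex": 1}
use_cases = {"simple": 4, "average": 6, "complex": 3}

uaw = 1 * actors["simple"] + 2 * actors["average"] + 3 * actors["complex"]
uucw = (5 * use_cases["simple"] + 10 * use_cases["average"]
        + 15 * use_cases["complex"])

tcf = 0.6 + 0.01 * 40   # technical complexity factor (TF total assumed 40)
ecf = 1.4 - 0.03 * 15   # environmental factor (EF total assumed 15)

ucp = (uaw + uucw) * tcf * ecf
print(f"UAW={uaw}, UUCW={uucw}, UCP={ucp:.1f}")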
Parametric Estimation: Mathematical models based on project parameters
and historical data are used to produce more precise size estimates.
COCOMO (Constructive Cost Model): An algorithmic model that estimates
the effort, time, and cost of a software development project from its
estimated size and a number of project attributes.
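As a hedged sketch, the basic COCOMO formulas for an organic-mode
project, using the standard constants a = 2.4, b = 1.05, c = 2.5,
d = 0.38; the 32 KLOC input is an assumed estimate:

kloc = 32.0                      # assumed size estimate

effort = 2.4 * kloc ** 1.05      # effort in person-months
tdev = 2.5 * effort ** 0.38      # development time in months
staff = effort / tdev            # average team size

print(f"Effort: {effort:.1f} PM, Time: {tdev:.1f} months, Staff: {staff:.1f}")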
Wideband Delphi: A consensus-based estimation method in which anonymous
estimates from several experts are combined with structured group
discussion to arrive at a balanced size estimate.
Monte Carlo simulation: This technique estimates project size and
analyses risks using statistical methods and random sampling; it works
especially well for complicated and unpredictable projects.
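A minimal Python sketch of the idea, sampling each module's size from a
triangular distribution; the module figures are assumed for illustration:

import random

# (optimistic, most likely, pessimistic) KLOC per module, all assumed.
modules = [
    (2.0, 3.0, 6.0),
    (4.0, 6.0, 11.0),
    (1.0, 2.0, 4.0),
]

totals = sorted(
    sum(random.triangular(lo, hi, mode) for lo, mode, hi in modules)
    for _ in range(10_000)
)

print(f"Median total size:    {totals[len(totals) // 2]:.1f} KLOC")
print(f"90th percentile size: {totals[int(len(totals) * 0.9)]:.1f} KLOC")

Looking at percentiles rather than a single number is what makes the
technique useful for risk analysis on unpredictable projects.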
Each of these techniques has its strengths and weaknesses, and the
choice of technique depends on various factors such as the project’s
complexity, available data, and the expertise of the team.
Importance of Project Size Estimation Techniques
Resource Allocation: Appropriate distribution of financial and human
resources is ensured by accurate estimation.
Risk management: Early risk assessment helps with mitigation
techniques by taking into account the complexity of the project.
Time management: Facilitates the creation of realistic schedules and
milestones for efficient time management.
Cost control and budgeting: Accurate size estimates support realistic
budgets and ongoing cost control, which lowers the possibility of cost
overruns.
Work Allocation: Enables efficient task delegation and optimization of
how work is distributed.
Scope Definition: Defines the scope of a project, keeps project
boundaries intact and guards against scope creep.
Estimating the size of the Software
Estimation of the size of the software is an essential part of Software
Project Management. It helps the project manager to further predict the
effort and time that will be needed to build the project. Various measures
are used in project size estimation. Some of these are:
Lines of Code
Number of entities in the ER diagram
Total number of processes in detailed data flow diagram
Function points
KLOC- Thousand lines of code
NLOC- Non-comment lines of code
KDSI- Thousands of delivered source instruction
1. Lines of Code (LOC): The size is estimated by comparing the project
with existing systems of the same kind. Experts use this comparison to
predict the required size of the various components of the software and
then add them up to get the total size.
It’s tough to estimate LOC by analyzing the problem definition. Only after
the whole code has been developed can accurate LOC be estimated.
This statistic is of little utility to project managers because project
planning must be completed before development activity can begin.
Two separate source files having a similar number of lines may not
require the same effort. A file with complicated logic would take longer to
create than one with simple logic. Proper estimation may not be
attainable based on LOC.
LOC also depends heavily on the individual programmer: the statistic
will differ greatly from one programmer to the next, since a seasoned
programmer can write the same logic in fewer lines than a novice coder.
Advantages:
Universally accepted and used in many models such as COCOMO.
Estimation is closer to the developer's perspective.
At project completion, LOC is easily quantified.
It has a direct connection to the final result.
Simple to use.
Disadvantages:
Different programming languages contain a different number of lines.
No proper industry standard exists for this technique.
It is difficult to estimate the size using this technique in the early
stages of the project.
LOC cannot be used to normalize comparisons across different
platforms and languages.
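As an illustration of the LOC and NLOC measures above, here is a minimal
Python sketch that counts them for a source file once the code exists;
real counting rules vary by organization, and this version only skips
blank lines and '#' comment lines:

def count_loc(path):
    # Returns (loc, nloc) for one file: loc counts every non-blank line,
    # nloc additionally excludes '#' comment lines.
    loc = nloc = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            stripped = line.strip()
            if not stripped:
                continue               # blank lines are not counted
            loc += 1
            if not stripped.startswith("#"):
                nloc += 1
    return loc, nloc

loc, nloc = count_loc(__file__)        # measure this very script
print(f"LOC: {loc}, NLOC: {nloc}")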
2. Number of entities in ER diagram: ER model provides a static view of
the project. It describes the entities and their relationships. The number of
entities in ER model can be used to measure the estimation of the size of
the project. The number of entities grows with the size of the project,
because more entities require more classes/structures and therefore
more coding.
Advantages:
Size estimation can be done during the initial stages of planning.
The number of entities is independent of the programming
technologies used.
Disadvantages:
No fixed standards exist. Some entities contribute more to project size
than others.
Just like FPA, it is rarely used directly in cost estimation models;
hence, it must be converted to LOC.
3. Total number of processes in detailed data flow diagram: Data Flow
Diagram(DFD) represents the functional view of software. The model
depicts the main processes/functions involved in software and the flow of
data between them. The number of processes in the DFD can be used to
predict software size: already existing processes of a similar type are
studied and used to estimate the size of each process, and the sum of
the estimated sizes of the individual processes gives the final
estimated size.
Advantages:
It is independent of the programming language.
Each major process can be decomposed into smaller processes. This
will increase the accuracy of the estimation.
Disadvantages:
Studying similar kinds of processes to estimate size takes additional
time and effort.
Constructing a DFD is not a required activity for all software
projects.
4. Function Point Analysis: In this method, the number and type of
functions supported by the software are utilized to find FPC(function point
count). The steps in function point analysis are:
Count the number of functions of each proposed type.
Compute the Unadjusted Function Points(UFP).
Find the Total Degree of Influence(TDI).
Compute Value Adjustment Factor(VAF).
Find the Function Point Count(FPC).
The explanation of the above points is given below:
Count the number of functions of each proposed type: Find the
number of functions belonging to the following types:
Function Type              Simple  Average  Complex
External Inputs              3       4        6
External Outputs             4       5        7
External Inquiries           3       4        6
Internal Logical Files       7      10       15
External Interface Files     5       7       10
Compute the Unadjusted Function Points (UFP): Multiply the count of
each function type by the weight chosen for it (simple, average, or
complex) and sum the products to obtain the UFP.
Find the Total Degree of Influence (TDI): Rate each of the 14 general
system characteristics: Data Communications, Distributed Data
Processing, Performance, Heavily Used Configuration, Transaction Rate,
On-Line Data Entry, End-User Efficiency, On-Line Update, Complex
Processing, Reusability, Installation Ease, Operational Ease, Multiple
Sites, and Facilitate Change.
Each of the above characteristics is evaluated on a scale of 0-5, and
the TDI is the sum of the 14 ratings.
Compute Value Adjustment Factor(VAF): Use the following formula to
calculate VAF
VAF = (TDI * 0.01) + 0.65
Find the Function Point Count: Use the following formula to calculate
FPC
FPC = UFP * VAF
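Putting the steps above together, a minimal worked sketch in Python;
the function counts and the 14 characteristic ratings are assumed for
illustration:

# Weights per function type: (simple, average, complex), from the table.
weights = {
    "EI": (3, 4, 6), "EO": (4, 5, 7), "EQ": (3, 4, 6),
    "ILF": (7, 10, 15), "EIF": (5, 7, 10),
}
# Assumed counts of functions at each complexity level.
counts = {
    "EI": (6, 2, 1), "EO": (4, 3, 0), "EQ": (3, 1, 0),
    "ILF": (2, 1, 0), "EIF": (1, 0, 0),
}

ufp = sum(c * w for t in weights
          for c, w in zip(counts[t], weights[t]))

ratings = [3, 2, 4, 3, 3, 5, 4, 3, 2, 1, 2, 3, 1, 2]  # 14 GSCs, 0-5 each
tdi = sum(ratings)
vaf = tdi * 0.01 + 0.65          # VAF = (TDI * 0.01) + 0.65
fpc = ufp * vaf                  # FPC = UFP * VAF
print(f"UFP={ufp}, TDI={tdi}, VAF={vaf:.2f}, FPC={fpc:.1f}")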
Advantages:
It can be easily used in the early stages of project planning.
It is independent of the programming language.
It can be used to compare different projects even if they use different
technologies(database, language, etc).
Disadvantages:
It is not good for real-time systems and embedded systems.
Many cost estimation models like COCOMO use LOC and hence FPC
must be converted to LOC.
DEVOPS TUTORIAL
Why DevOps?
o Before DevOps, the development and operations teams worked in
complete isolation, which led to long release cycles.
o Without the use of DevOps, the team members are spending a large
amount of time on designing, testing, and deploying instead of
building the project.
o Manual code deployment leads to human errors in production.
o Coding and operation teams have separate timelines and are not in
sync, causing further delays.
DevOps History
o In 2009, the first conference named DevOpsDays was held in Ghent,
Belgium. Belgian consultant Patrick Debois founded the conference.
o In 2012, the state of DevOps report was launched and conceived by
Alanna Brown at Puppet.
o In 2014, the annual State of DevOps report was published by Nicole
Forsgren, Jez Humble, Gene Kim, and others, who found that DevOps
adoption was accelerating.
o In 2015, Nicole Forsgren, Gene Kim, and Jez Humble founded DORA
(DevOps Research and Assessment).
o In 2017, Nicole Forsgren, Gene Kim, and Jez Humble published
"Accelerate: Building and Scaling High Performing Technology
Organizations".
1) Automation
Automation can reduce time consumption, especially during the testing
and deployment phases. Productivity increases and releases are made
quicker through automation, which also leads to catching bugs early so
that they can be fixed easily. For continuous delivery, each code
change is verified through automated tests, cloud-based services, and
automated builds, which promotes deployment to production using
automated deploys.
2) Collaboration
3) Integration
4) Configuration management
It ensures that the application interacts only with the resources that
are relevant to the environment in which it runs. Configuration that is
external to the application is kept separate from the source code:
configuration files can be written during deployment, or they can be
loaded at run time, depending on the environment in which the
application is running.
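A minimal Python sketch of externalized configuration, assuming
hypothetical names (APP_ENV, DATABASE_URL, config.<env>.json); real
applications often use dedicated configuration libraries:

import json
import os

def load_config(env):
    # Prefer a value injected at run time via the environment.
    if "DATABASE_URL" in os.environ:
        return {"database_url": os.environ["DATABASE_URL"]}
    # Otherwise read a per-environment file written during deployment.
    try:
        with open(f"config.{env}.json") as f:
            return json.load(f)
    except FileNotFoundError:
        return {"database_url": "sqlite:///local.db"}  # safe local default

config = load_config(os.environ.get("APP_ENV", "development"))
print("Connecting to:", config["database_url"])

The same code can then be promoted unchanged from development to
production; only the external configuration differs per environment.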
Advantages
o DevOps is an excellent approach for quick development and
deployment of applications.
o It responds faster to the market changes to improve business growth.
o DevOps increases business profit by decreasing software delivery time
and operational costs.
o DevOps streamlines the process, which gives clarity on product
development and delivery.
o It improves customer experience and satisfaction.
o DevOps simplifies collaboration and places all tools in the cloud for
customers to access.
o DevOps means collective responsibility, which leads to better team
engagement and productivity.
Disadvantages
o DevOps professionals and expert developers are in short supply.
o Developing with DevOps is expensive.
o Adopting new DevOps technology is hard to manage in a short time.
DevOps Tools
Here are some of the most popular DevOps tools, with brief
explanations:
1) Puppet
Puppet is one of the most widely used DevOps tools. It allows the
delivery and release of technology changes quickly and frequently. It
has features of versioning, automated testing, and continuous delivery,
and it enables managing the entire infrastructure as code without
expanding the size of the team.
Features
2) Ansible
Features
3) Docker
Docker is a high-end DevOps tool that allows building, shipping, and
running distributed applications on multiple systems. It also helps to
assemble apps quickly from their components, and it is typically
suitable for container management.
Features
4) Nagios
Nagios is one of the more useful tools for DevOps. It can detect errors
and help rectify them with the help of network, infrastructure, server,
and log monitoring systems.
Features
5) CHEF
Chef is a useful tool for achieving scale, speed, and consistency. It
is a cloud-based, open-source technology that uses Ruby code to develop
essential building blocks such as recipes and cookbooks. Chef is used
in infrastructure automation and helps in reducing manual and
repetitive infrastructure-management tasks.
Chef has its own conventions for the different building blocks that are
required to manage and automate infrastructure.
Features
6) Jenkins
Features
7) Git
Features
CLOUD PLATFORM
The term cloud platform can be defined in many ways, but in the
simplest terms it refers to the operating system and hardware of a
server in an Internet-based data centre. It enables the remote and
large-scale coexistence of software and hardware products.
Cloud systems come in a range of shapes and sizes, and none of them
suits everyone. To meet the varying needs of consumers, a range of
models, types, and services is available. The main benefits of cloud
computing are as follows:
Cost
Global scale
The ability to scale elastically is one of the advantages of cloud
computing services. In simple terms, we can choose the processing
power, the location of the data centre where data is stored, the
storage capacity, and even the bandwidth for our processes and data.
Performance
The most popular cloud computing services are hosted on a global network
of protected datacenters that are updated on a regular basis with the latest
generation of fast and powerful computing hardware.
Security
Speed
Huge amounts of computation and data retrieval, both download and
upload, can happen within the blink of an eye, depending of course on
the configuration.
Reliability
Understanding what these services are and knowing their right
objectives will make our processes smoother and a lot easier, and will
help our organisation grow.
o Type 1- Infrastructure as a Service (IaaS)
o Type 2- Platform as a Service (PaaS)
o Type 3- Software as a Service (SaaS)
DEPLOYMENT PIPELINE
When a change is pushed to the repository, the deployment pipeline
starts: first building the application, then running code analysis,
unit tests, and integration/API tests (all in parallel). If all the
tasks in this stage of the pipeline pass, a smoke test suite is
triggered; if the smoke test also passes, it triggers the regression
test suite and the visual regression test suite (executed in parallel);
and if this final stage passes as well, we have a release candidate
that can be promoted to production, so that users can enjoy it.
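The gating logic described above can be sketched as follows; the stage
names come from the text, run() is a hypothetical stand-in for real CI
tasks, and an actual pipeline would be defined in a CI tool such as
Jenkins:

from concurrent.futures import ThreadPoolExecutor

def run(task):
    # Stand-in for a real CI task; returning True means the task passed.
    print(f"running {task}...")
    return True

def run_stage(tasks):
    # Run a stage's tasks in parallel; the stage passes only if all do.
    with ThreadPoolExecutor() as pool:
        return all(pool.map(run, tasks))

stages = [
    ["build"],
    ["code analysis", "unit tests", "integration/API tests"],  # parallel
    ["smoke tests"],
    ["regression tests", "visual regression tests"],           # parallel
]

# Each stage gates the next; only a fully green run yields a candidate.
if all(run_stage(stage) for stage in stages):
    print("release candidate ready: promote to production")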
software, and then having their help in providing the necessary
resources to make it happen.