PBRS
Introduction
1|Page
The bus reservation system manages a collection of buses and the agents who book tickets for customers' journeys, recording the bus number and departure time for each booking. As its name suggests, it manages the details of all agents, tickets, rentals, timings, and so on, and keeps these records up to date.
The tour detail holds information about the bus that carries customers to their destination. It also contains detailed information about each customer: which bus he or she travelled on and how many members accompanied him or her on the journey.
This section also contains the booking time of the seat(s) or the collection time of the tickets, along with the booking date and, optionally, the name of the agent through whom the customer reserved seats for the journey.
The bus category contains the details of old and new buses. A new bus is added with its bus number, route (city to city), bus type, and the rent of a single seat; if the bus has sleeper berths, the cost of a sleeper; and if the cabin has seating, the cost of cabin seats. The tour timings of the new bus are also stored, together with how many buses are currently out and how many are available at the office.
The seat specification lists the seats already issued and those currently available, and holds information about the seats, such as sleeper, cabin, etc.
The main objective of this project is to provide better work efficiency, security, accuracy, reliability, and feasibility. Errors can be reduced almost to nil and working conditions can be improved.
Chapter 2
Development model
Development model
Our project life cycle uses the waterfall model, also known as the classic life cycle model or the linear sequential model.
[Figure: the waterfall model — System/Information Engineering followed by Analysis, Design, Code, and Test.]
System engineering and analysis encompass requirements gathering at the system level with a small amount of top-level design and analysis. Information engineering encompasses requirements gathering at the strategic business level and at the business area level.
Software requirements analysis involves requirements for both the system and the software being documented and reviewed with the customer.
3. Design
Software design is actually a multi-step process that focuses on four distinct attributes of a program: data structure, software architecture, interface representation, and procedural detail. The design process translates requirements into a representation of the software that can be assessed for quality before coding begins.
4. Code Generation
5. Testing
Once code has been generated, program testing begins. Testing focuses on the logical internals of the software, ensuring that all statements have been tested, and on the functional externals; that is, conducting tests to uncover errors and ensure that defined input will produce actual results that agree with required results.
6. Support
Software will undoubtedly undergo change after it is delivered to the customer. Change will occur
because errors have been encountered, because the software must be adapted to accommodate changes
in its external environment or because the customer requires functional or performance enhancements.
Chapter 3
System Study
Before the project can begin, it is necessary to estimate the work to be done, the resources that will be required, and the time that will elapse from start to finish. While preparing this plan we visited the site several times.
The objective of software project planning is to provide a framework that enables the management to
make reasonable estimates of resources, cost, and schedule. These estimates are made within a limited time frame at the beginning of a software project and should be updated regularly as the project progresses. In addition, estimates should attempt to define best-case and worst-case scenarios so that project outcomes can be bounded.
The first activity in software project planning is the determination of software scope. Software scope
describes the data and control to be processed, function, performance, constraints, interfaces, and
reliability.
The most commonly used technique to bridge the communication gap between the customer and the software developer, and to get the communication process started, is to conduct a preliminary meeting or interview.
When we visited the site we were introduced to the manager of the center; two other persons were present, one of whom was the technical adviser and the other the cost accountant. Neither side knew what to ask or say; we were worried that what we said would be misinterpreted.
We started by asking context-free questions; that is, a set of questions that leads to a basic understanding of the problem. The first set of context-free questions was:
What do you want to be done?
Who will use this solution?
What is wrong with your existing working systems?
Is there another source for the solution?
Can you show us (or describe) the environment in which the solution will be used?
After the first round of questions, we revisited the site and asked many more questions, arriving at a final set:
Are our questions relevant to the problem that you need to be solved?
Are we asking too many questions?
Should we be asking you anything else?
2.2.2 Feasibility
Not everything imaginable is feasible, not even in software. Software feasibility has four dimensions:
Technology—is the project technically feasible? Is it within the state of the art?
After taking the above dimensions into consideration, we found it feasible for us to develop this project.
Software cost and effort estimation will never be an exact science. Too many variables—human, technical, environmental, political—can affect the ultimate cost of software and the effort applied to develop it. However, software project estimation can be transformed from a black art into a series of systematic steps that provide estimates with acceptable risk.
1. Delay estimation until late in the project (since we can achieve 100% accurate estimates only after the project is complete!)
2. Base estimates on similar projects that have already been completed.
3. Use relatively simple decomposition techniques to generate project cost and effort estimates.
4. Use one or more empirical models for software cost and effort estimation.
Unfortunately, the first option, however attractive, is not practical. Cost estimates must be provided
“Up front”. However, we should recognize that the longer we wait, the more we know, and the more
we know, the less likely we are to make serious errors in our estimates.
The second option can work reasonably well, if the current project is quite similar to past efforts and
other project influences (e.g., the customer, business conditions, the SEE, deadlines) are equivalent.
Unfortunately past experience has not always been a good indicator of future results.
The remaining options are viable approaches to software project estimation. Ideally, the techniques noted for each option should be applied in tandem, each used as a cross-check for the other. Decomposition techniques take a "divide and conquer" approach to software project estimation. By decomposing a project into major functions and related software engineering activities, cost and effort estimation can be performed in a stepwise fashion.
Empirical estimation models can be used to complement decomposition techniques and offer a potentially valuable estimation approach in their own right. An empirical model is based on experience (historical data) and takes the form

d = f(vi)

where d is one of a number of estimated values (e.g., effort, cost, project duration) and the vi are selected independent parameters (e.g., estimated LOC, lines of code).
Each of the viable software cost estimation options is only as good as the historical data used to seed
the estimate. If no historical data exist, costing rests on a very shaky foundation.
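As an illustration of such a model, the basic COCOMO equations (a published empirical model, used here only as an example — this project's own estimates were not derived from it) compute effort and duration from estimated size. The coefficients are COCOMO's "organic mode" constants, and the 8.0 KLOC input is an invented figure:

```python
# Sketch of an empirical model d = f(vi): effort and duration estimated
# from size. Constants are basic COCOMO "organic mode" values; the
# 8.0 KLOC input is illustrative, not a measurement from this project.

def estimate_effort_person_months(kloc, a=2.4, b=1.05):
    """Effort E = a * KLOC**b, in person-months."""
    return a * kloc ** b

def estimate_duration_months(effort, c=2.5, d=0.38):
    """Calendar duration D = c * E**d, in months."""
    return c * effort ** d

effort = estimate_effort_person_months(8.0)   # ~21.3 person-months
duration = estimate_duration_months(effort)   # ~8.0 months
print(f"{effort:.1f} person-months over {duration:.1f} months")
```

Recalibrating the coefficients from an organization's own completed projects is exactly the "seeding with historical data" the text describes.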
Chapter 4
4.1 PERT Chart
Program evaluation and review technique (PERT) and critical path method (CPM) are two
project scheduling methods that can be applied to software development. These techniques are driven by the following information:
Estimates of Effort
A decomposition of the product function
The selection of the appropriate process model and task set
Decomposition of tasks
The PERT chart for this application software is illustrated in figure 4.1. The critical path for this project is Design, Code Generation, and Integration and Testing.
[Figure 4.1: PERT network. Milestones: Coding — Aug 15, 2008; Documentation and Report — Oct 30, 2008; Finish — Jan 3, 2008.]
Figure 4.1: PERT chart for the Bus Reservation System.
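The critical-path computation behind a PERT chart can be sketched as a forward pass over a task graph; the task names and durations below are invented for illustration and are not the project's actual estimates:

```python
# Sketch of a PERT-style forward pass: a task's earliest finish is its
# duration plus the latest earliest finish among its predecessors.
# Task names and durations are illustrative, not the real plan.

durations = {"design": 4, "code": 8, "test": 3, "docs": 5}
predecessors = {"design": [], "code": ["design"],
                "test": ["code"], "docs": ["design"]}

memo = {}

def earliest_finish(task):
    if task not in memo:
        start = max((earliest_finish(p) for p in predecessors[task]), default=0)
        memo[task] = start + durations[task]
    return memo[task]

# The project length is the largest earliest finish; tasks whose slip would
# delay this total lie on the critical path (here design -> code -> test).
project_length = max(earliest_finish(t) for t in durations)
print(project_length)  # 4 + 8 + 3 = 15
```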
The Gantt chart, also known as a timeline chart, contains information such as effort, duration, start date, and completion date for each task. A timeline chart can be developed for the entire project. Figure 4.2 below shows the Gantt chart for the project; all project tasks are listed in the left-hand column.
Start: Jan 1, 2008.
[Figure 4.2: Gantt chart for the project. Columns: Work tasks, Planned start, Actual start, Planned complete, Actual complete, Notes; the first task is 1.1 Identify needs and benefits. Note: Wk1 = week 1, d1 = day 1.]
Chapter 5
System Analysis
Software requirements analysis is a process of discovery, refinement, modeling, and specification. Requirements analysis provides the software designer with a representation of information, function, and behavior that can be translated to data, architectural, interface, and component-level designs. To perform the job properly we need to follow a set of underlying concepts and principles of analysis.
Over the past two decades, a large number of analysis modeling methods have been developed.
Investigators have identified analysis problems and their causes and have developed a variety of
modeling notations and corresponding sets of heuristics to overcome them. Each analysis method has a
unique point of view. However, all analysis methods are related by a set of operational principles:
By applying these principles, we approach the problem systematically. The information domain is
examined so that function may be understood more completely. Models are used so that the
characteristics of function and behavior can be communicated in a compact fashion. Partitioning is
applied to reduce complexity. Essential and implementation views of the software are necessary to accommodate the logical constraints imposed by processing requirements and the physical constraints imposed by other system elements.
In addition to these operational analysis principles, Davis suggests a set of guiding principles for requirements analysis:
Understand the problem before you begin to create the analysis model. There is a tendency to rush to a solution, even before the problem is understood. This often leads to elegant software that solves the wrong problem! We always tried to avoid such situations while making this project a success.
Develop prototypes that enable a user to understand how human/machine interaction will occur. Since the perception of the quality of software is often based on the perception of the "friendliness" of the interface, prototyping (and the iteration that results) is highly recommended.
Record the origin of and the reason for every requirement. This is the first step in
establishing traceability back to the customer.
Use multiple views of requirements. Building data, functional, and behavioral models provides the software developer with three views. This reduces the likelihood that something will be missed and increases the likelihood that inconsistency will be recognized.
Rank requirements. Tight deadlines may preclude the implementation of every software
requirement.
Work to eliminate ambiguity. Because most requirements are described in a natural
language, the opportunity for ambiguity abounds. The use of formal technical reviews is one
way to uncover and eliminate ambiguity.
We have tried to take the above principles to heart so that we could provide an excellent foundation for design.
All software applications can be collectively called data processing. Software is built to process data, to
transform data from one form to another; that is, to accept input, manipulate it in some way, and
produce output. This fundamental statement of objective is true whether we build batch software for a
payroll system or real-time embedded software to control fuel flow to an automobile engine.
The first operational analysis principle requires an examination of the information domain and the
creation of a data model. The information domain contains three different views of the data and control
as each is processed by a computer program:
To fully understand the information domain, each of these views should be considered.
Information content represents the individual data and control objects that constitute some larger collection of information transformed by the software. For example, the data object Status declare is a composite of a number of important pieces of data: the aircraft's name, the aircraft's model, ground run, number of flying hours, and so forth. Therefore, the content of Status declare is defined by the attributes that are needed to create it. Similarly, the content of a control object called System status might be defined by a string of bits. Each bit represents a separate item of information that indicates whether or not a particular device is on- or off-line.
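That bit-string idea can be sketched directly; the device names below are hypothetical, chosen only to show how each bit maps to one device's on-/off-line state:

```python
# Sketch of a control object whose content is a string of bits: each bit
# records whether one device is on-line (1) or off-line (0). The device
# names are hypothetical, used only to illustrate the mapping.

DEVICES = ["printer", "ticket_reader", "display", "modem"]  # bits 0..3

def set_online(status, device):
    return status | (1 << DEVICES.index(device))

def is_online(status, device):
    return bool(status & (1 << DEVICES.index(device)))

status = 0                              # all devices start off-line
status = set_online(status, "printer")  # bit 0
status = set_online(status, "modem")    # bit 3
print(format(status, "04b"))            # "1001": modem and printer on-line
```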
Data and control objects can be related to other data and control objects. For example, the data object Status declare has one or more relationships with objects like total number of flying hours, period left before maintenance of the aircraft, and others.
Information flow represents the manner in which data and control change as each moves through a system. Referring to figure 6.1, input objects are transformed to intermediate information (data and/or control), which is further transformed to output. Along this transformation path, additional information may be introduced from an existing data store (e.g., a disk file or memory buffer). The transformations applied to the data are functions or sub-functions that a program must perform. Data and control that move between two transformations define the interface for each function.
[Figure 6.1: input transformed to intermediate data/control and then to output, with a data/control store along the path.]
5.1.2 Modeling
The second and third operational analysis principles require that we build models of function and
behavior.
Functional models. Software transforms information, and in order to accomplish this it must perform at least three generic functions: input, processing, and output. The functional model begins with a single context-level model (i.e., the name of the software to be built). Over a series of iterations, more and more functional detail is gathered, until a thorough delineation of all system functionality is represented.
Behavioral models. Most software responds to events from the outside world. This stimulus/response characteristic forms the basis of the behavioral model. A computer program always exists in some state—an externally observable mode of behavior (e.g., waiting, computing, printing, polling)—that is changed only when some event occurs. For example, in our case the project will remain in the wait state until an event occurs.
A behavioral model creates a representation of the states of the software and the events that cause the software to change state.
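Such a model can be sketched as a small transition table; the states and events below are assumptions about the reservation workflow, not taken from the project's actual design:

```python
# Sketch of a behavioral model as a transition table: the program sits in
# an externally observable state until an event moves it to another one.
# States and events are assumed for illustration.

TRANSITIONS = {
    ("waiting", "booking_request"): "booking",
    ("booking", "payment_received"): "confirmed",
    ("booking", "cancel"): "waiting",
    ("confirmed", "ticket_printed"): "waiting",
}

def next_state(state, event):
    # An event with no defined transition leaves the state unchanged.
    return TRANSITIONS.get((state, event), state)

state = "waiting"
for event in ["booking_request", "payment_received", "ticket_printed"]:
    state = next_state(state, event)
print(state)  # "waiting": the cycle returns to the wait state
```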
Problems are often too large and complex to be understood as a whole; for this reason, we tend to partition (divide) such problems into parts that can be easily understood, and establish interfaces between the parts so that the overall function can be accomplished. The fourth operational analysis principle suggests that the information, functional, and behavioral domains of software can be partitioned.
Horizontal partitioning:
During installation, the software (Bus Reservation System) is used to program and configure the system. A master password is programmed for getting into the software system. Only after this step can the user work in the environments (named at the right corner: operation, administration, and maintenance).
[Figure: horizontal partitioning of the Bus Reservation System.]
Some circumstances require the construction of a prototype at the beginning of analysis, since
the model is the only means through which requirements can be effectively derived. The model then
evolves into production software.
The above six questions are made as per Andriole's suggestions for the prototyping approach.
E-R DIAGRAM:
[Figure: E-R diagram. Entities: BUS RESERVATION SYSTEM, BUSES, DEPARTMENT, DIFFERENT TYPES OF BUSES (SLEEPER OR WITHOUT SLEEPER), SEATS. Relationship labels: gives services, divided, work area, care of, examines, works, full of.]
The following DFD shows how the working of a reservation system could be smoothly managed:
[Figure: data flow diagram. Elements: WORK AREAS, BUSES RESERVED RECORDS, DAILY ENTRY REC, VISITING AGENT, AGENT, AGENT DETAILS, REPORT TABLE.]
We have STARBUS as our database, and some of our tables (relations) are:
AGENT_BASIC_INFO
FEEDBACK
PASSANGER_INFO
STATIS
TIME_LIST
In our table AGENT_BASIC_INFO we have fields such as agent_id, agent_name, agent_phon_number, etc.
[Figure: AGENT_BASIC_INFO attributes — AGENT_ID, AGENT_NAME, AGENT_FNAME, AGENT_SHOP_NAME, AGENT_SHOP_ADDRESS, AGENT_SHOP_CITY, AGENT_PHON_NUMBER, AGENT_MOBIL_NUMBER, AGENT_CURRENT_BAL.]
In our FEEDBACK table we have fields like Name, Email, Phone, Subject, Comment, and User_type.
[Figure: FEEDBACK attribute diagram.]
In our table PASSANGER_INFO we have fields like Bill_no, C_name, C_phon, C_to, C_from, C_time, Status, Agent_id, Amount, Seat_no, and Total_seat.
[Figure: PASSANGER_INFO attribute diagram.]
In the table TIME_LIST we have fields such as Sno, Station_name, Rate_per_seat, Time, Bus_number, and Reach_time.
[Figure: TIME_LIST attribute diagram.]
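The passenger table above can be sketched as a record type; the field names follow the report's tables, but the Python types, the sample values, and the amount = total seats × rate rule are assumptions:

```python
# Sketch of a PASSANGER_INFO record as a data class. Field names follow
# the report's table; types, sample values, and the amount rule
# (seats booked * rate per seat) are assumed for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PassengerInfo:
    bill_no: int
    c_name: str
    c_phon: str
    c_from: str
    c_to: str
    c_time: str
    seat_no: str
    total_seat: int
    rate_per_seat: float            # assumed to come from the route's TIME_LIST entry
    agent_id: Optional[int] = None  # optional: a direct booking has no agent

    @property
    def amount(self) -> float:
        # Assumed billing rule: amount charged = seats booked * rate per seat.
        return self.total_seat * self.rate_per_seat

booking = PassengerInfo(bill_no=101, c_name="A. Kumar", c_phon="9800000000",
                        c_from="Delhi", c_to="Jaipur", c_time="08:30",
                        seat_no="S1-S3", total_seat=3, rate_per_seat=450.0)
print(booking.amount)  # 1350.0
```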
PROCESS LOGIC:
As the privatization of buses increases, the need for their smooth management also increases: the more we facilitate the customers, the more comfortable they are with us, and the more customers visit our reservation unit. The above tables and modules facilitate this.
Chapter 6
Technology used
6.1 Tools and Platform used for the Development
The Internet revolution of the late 1990s represented a dramatic shift in the way
individuals and organizations communicate with each other. Traditional applications, such as word
processors and accounting packages, are modeled as stand-alone applications: they offer users the
capability to perform tasks using data stored on the system the application resides and executes on.
Most new software, in contrast, is modeled based on a distributed computing model where applications
collaborate to provide services and expose functionality to each other. As a result, the primary role of
most new software is changing into supporting information exchange (through Web servers and
browsers), collaboration (through e-mail and instant messaging), and individual expression (through
Web logs, also known as Blogs, and e-zines — Web based magazines). Essentially, the basic role of
software is changing from providing discrete functionality to providing services.
The .NET Framework represents a unified, object-oriented set of services and libraries that embrace the
changing role of new network-centric and network-aware software. In fact, the .NET Framework is the
first platform designed from the ground up with the Internet in mind.
Microsoft .NET Framework is a software component that is a part of several Microsoft Windows
operating systems. It has a large library of pre-coded solutions to common programming problems and
manages the execution of programs written specifically for the framework. The .NET Framework is a
key Microsoft offering and is intended to be used by most new applications created for the Windows
platform.
The .NET Class Library is a key component of the .NET Framework — it is sometimes referred to as
the Base Class Library (BCL). The .NET Class Library contains hundreds of classes you can use for
tasks such as the following:
Processing XML
Working with data from multiple data sources
Debugging your code and working with event logs
Working with data streams and files
Managing the run-time environment
Developing Web services, components, and standard Windows applications
Working with application security
Working with directory services
The functionality that the .NET Class Library provides is available to all .NET languages, resulting in a consistent object model regardless of the programming language developers use.
The .NET Framework consists of three key elements, as shown in the diagram below.
[Figure: ASP.NET (Web Server, Web Forms), Windows Forms, and Visual Studio .NET, layered above the operating system.]
Components of the .NET Framework
The CLR is also responsible for compiling code just before it executes. Instead of
producing a binary representation of your code, as traditional compilers do, .NET
compilers produce a representation of your code in a language common to the .NET Framework:
Microsoft Intermediate Language, often referred to as IL. When your code executes for the first time, the CLR invokes a special compiler called a Just-In-Time (JIT) compiler, which converts the IL into native code. Because all .NET languages have the same compiled representation, they all have similar performance characteristics. This means
that a program written in Visual Basic .NET can perform as well as the same program written in Visual
C++ .NET.
The benefits of using the .NET Class Library include a consistent set of services available to all .NET
languages and simplified deployment, because the .NET Class Library is available on all
implementations of the .NET Framework.
3. Unifying components
Until this point, this chapter has covered the low-level components of the .NET Framework. The
unifying components, listed next, are the means by which you can access the services the .NET
Framework provides:
ASP.NET
Windows Forms
Visual Studio .NET
ASP.NET
After the release of Internet Information Services 4.0 in 1997, Microsoft began researching possibilities
for a new web application model that would solve common complaints about ASP.
ASP.NET introduces two major features: Web Forms and Web Services.
1. Web Forms
Developers not familiar with Web development can spend a great deal of time, for
example, figuring out how to validate the e-mail address on a form. You can validate the information
on a form by using a client-side script or a server-side script. Deciding which kind of script to use is
complicated by the fact that each approach has its benefits and drawbacks, some of which aren't
apparent unless you've done substantial design work. If you validate the form on the client by using client-side JScript code, you need to take into consideration the browser that your users may use to access the
form. Not all browsers expose exactly the same representation of the document to programmatic
interfaces. If you validate the form on the server, you need to be aware of the load that users might place
on the server. The server has to validate the data and send the result back to the client. Web Forms
simplify Web development to the point that it becomes as easy as dragging and dropping controls onto a
designer (the surface that you use to edit a page) to design interactive Web applications that span from
client to server.
2. Web Services
A Web service is an application that exposes a programmatic interface through standard access
methods. Web Services are designed to be used by other applications and components and are not
intended to be useful directly to human end users. Web Services make it easy to build applications that
integrate features from remote sources. For example, you can write a Web Service that provides
weather information for subscribers of your service instead of having subscribers link to a page or parse
through a file they download from your site. Clients can simply call a method on your Web Service as
if they are calling a method on a component installed on their system — and have the weather
information available in an easy-to-use format that they can integrate into their own applications or
Web sites with no trouble.
Introducing ASP.NET
ASP.NET, the next version of ASP, is a programming framework that is used to create enterprise-class
Web applications. The enterprise-class Web applications are accessible on a global basis, leading to
efficient information management. However, the advantages that ASP.NET offers make it more than
just the next version of ASP. ASP.NET is integrated with Visual Studio .NET, which provides a GUI
designer, a rich toolbox, and a fully integrated debugger. This allows the development of applications
in a What You See is What You Get (WYSIWYG) manner. Therefore, creating ASP.NET applications
is much simpler.
Unlike the ASP runtime, ASP.NET uses the Common Language Runtime (CLR) provided by the .NET
Framework. The CLR is the .NET runtime, which manages the execution of code. The CLR allows the
objects, which are created in different languages, to interact with each other and hence removes the
language barrier. CLR thus makes Web application development more efficient.
In addition to simplifying the designing of Web applications, the .NET CLR offers many advantages.
The ASP.NET code is compiled CLR code instead of interpreted code. The CLR provides just-in-time compilation, native optimization, and caching. Here, it is important to note that compilation is a two-stage process in the .NET Framework. First, the code is compiled into the Microsoft Intermediate Language (MSIL). Then, at execution time, the MSIL is compiled into native code. Only the portions of the code that are actually needed will be compiled into native code. This is called Just-In-Time compilation. These features lead to an overall improved performance of ASP.NET applications.
Flexibility:
The entire .NET class library can be accessed by ASP.NET applications. You can use the language that
best applies to the type of functionality you want to implement, because ASP.NET is language
independent.
Configuration settings:
The application-level configuration settings are stored in an Extensible Markup Language (XML)
format. The XML format is a hierarchical text format, which is easy to read and write. This format
makes it easy to apply new settings to applications without the aid of any local administration tools.
Security:
ASP.NET applications are secure and use a set of default authorization and authentication schemes.
However, you can modify these schemes according to the security needs of an application. In addition
to this list of advantages, the ASP.NET framework makes it easy to migrate from ASP applications.
After you've set up the development environment for ASP.NET, you can create your first ASP.NET
Web application. You can create an ASP.NET Web application in one of the following ways:
Use a text editor:
In this method, you can write the code in a text editor, such as Notepad, and save the code as an ASPX
file. You can save the ASPX file in the directory C:\inetpub\wwwroot. Then, to display the output of
the Web page in Internet Explorer, you simply need to type http://localhost/<filename>.aspx in the
Address box. If the IIS server is installed on some other machine on the network, replace "localhost"
with the name of the server. If you save the file in some other directory, you need to add the file to a
virtual directory in the Default WebSite directory on the IIS server. You can also create your own
virtual directory and add the file to it.
Characteristics
Pages
ASP.NET pages, known officially as "web forms", are the main building block for application
development. Web forms are contained in files with an ASPX extension; in programming jargon, these
files typically contain static (X)HTML markup, as well as markup defining server-side Web Controls
and User Controls where the developers place all the required static and dynamic content for the web
page. Additionally, dynamic code which runs on the server can be placed in a page within a <% -- dynamic code -- %> block, which is similar to other web development technologies such as PHP, JSP, and ASP; but this practice is generally discouraged except for the purposes of data binding, since it requires more calls when rendering the page.
Note that this sample uses code "inline", as opposed to code behind.
<%@ Page Language="C#" %>
<script runat="server">
    // Sets the label text to the current time when the page loads.
    protected void Page_Load(object sender, EventArgs e)
    {
        Label1.Text = DateTime.Now.ToString();
    }
</script>
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
<title>Sample page</title>
</head>
<body>
<form id="form1" runat="server">
<div>
The current time is: <asp:Label runat="server" id="Label1" />
</div>
</form>
</body>
</html>
Code-behind model
Microsoft recommends handling dynamic program code with the code-behind model, which places this code in a separate file or in a specially designated script tag. Code-behind files
typically have names like MyPage.aspx.cs or MyPage.aspx.vb based on the ASPX file name (this
practice is automatic in Microsoft Visual Studio and other IDEs). When using this style of
programming, the developer writes code to respond to different events, like the page being loaded, or a
control being clicked, rather than a procedural walk through the document.
ASP.NET's code-behind model marks a departure from Classic ASP in that it encourages developers to
build applications with separation of presentation and content in mind. In theory, this would allow a
web designer, for example, to focus on the design markup with less potential for disturbing the
programming code that drives it. This is similar to the separation of the controller from the view in model-view-controller frameworks.
Example
<%@ Page Language="C#" CodeFile="SampleCodeBehind.aspx.cs" Inherits="Website.SampleCodeBehind" %>
This tag is placed at the beginning of the ASPX file. The CodeFile property of the @ Page directive specifies the file (.cs or .vb) acting as the code-behind, while the Inherits property specifies the class the page derives from. In this example, the @ Page directive is included in SamplePage.aspx, and SampleCodeBehind.aspx.cs acts as the code-behind for this page:
using System;

namespace Website
{
    public partial class SampleCodeBehind : System.Web.UI.Page
    {
        // Auto-wired handler, invoked on each page request.
        protected void Page_Load(object sender, EventArgs e)
        {
        }
    }
}
In this case, the Page_Load() method is called every time the ASPX page is requested. The programmer can implement event handlers at several stages of the page execution process to perform processing.
User controls
ASP.NET supports creating reusable components through the creation of User Controls. A User
Control follows the same structure as a Web Form, except that such controls are derived from the
System.Web.UI.UserControl class, and are stored in ASCX files. Like ASPX files, an ASCX file contains static HTML or XHTML markup, as well as markup defining web controls and other User Controls. The code-behind model can be used.
Programmers can add their own properties, methods, and event handlers. An event bubbling
mechanism provides the ability to pass an event fired by a user control up to its containing page.
State management
ASP.NET applications are hosted in a web server and are accessed over the stateless HTTP protocol.
As such, if the application uses stateful interaction, it has to implement state management on its own.
ASP.NET provides various functionality for state management in ASP.NET applications.
Application state
Application state is a collection of user-defined variables that are shared by an ASP.NET application.
These are set and initialized when the Application_OnStart event fires on the loading of the first
instance of the application, and they are available until the last instance exits. Application state
variables are accessed using the Application collection, which provides a wrapper for the application
state variables, and are identified by name.
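Conceptually, application state is a single name-keyed store shared by every request, initialized once before the first request is served. A minimal, language-neutral sketch in Python (the `AppState` class and its method names are illustrative, not part of the ASP.NET API):

```python
import threading

class AppState:
    """A name-keyed store shared by all requests, initialized exactly once."""
    def __init__(self):
        self._lock = threading.Lock()
        self._vars = {}
        self._started = False

    def ensure_started(self):
        # Mimics Application_OnStart: runs once, before the first request.
        with self._lock:
            if not self._started:
                self._vars["visit_count"] = 0
                self._started = True

    def increment_visits(self):
        # Every request shares the same named variable.
        with self._lock:
            self._vars["visit_count"] += 1
            return self._vars["visit_count"]

app = AppState()
app.ensure_started()
for _ in range(3):
    count = app.increment_visits()
```

Calling `ensure_started` again would not reset the counter, which mirrors the initialize-once behavior described above.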
Session state
37 | P a g e
Session state is a collection of user-defined session variables, which are persisted during a user session.
These variables are unique to different instances of a user session, and are accessed using the Session
collection. Session variables can be set to be automatically destroyed after a defined time of inactivity,
even if the session does not end. At the client end, a user session is identified either by a cookie or by
encoding the session ID in the URL itself.
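A minimal sketch of such a session store, written in Python rather than ASP.NET (the class, the 20-minute timeout, and the field names are illustrative): sessions are keyed by an ID such as the one carried in a cookie or URL, each access resets the inactivity clock, and the variables are destroyed once the timeout elapses.

```python
import time
import uuid

SESSION_TIMEOUT = 20 * 60  # seconds of inactivity before a session is destroyed

class SessionStore:
    """Per-user variables keyed by a session ID."""
    def __init__(self):
        self._sessions = {}  # session_id -> (last_access_time, variables)

    def new_session(self):
        sid = str(uuid.uuid4())
        self._sessions[sid] = (time.time(), {})
        return sid

    def set(self, sid, name, value):
        _, vars_ = self._sessions[sid]
        vars_[name] = value
        self._sessions[sid] = (time.time(), vars_)

    def get(self, sid, name, now=None):
        now = time.time() if now is None else now
        last, vars_ = self._sessions.get(sid, (None, None))
        if last is None or now - last > SESSION_TIMEOUT:
            self._sessions.pop(sid, None)   # expired: destroy the variables
            return None
        self._sessions[sid] = (now, vars_)  # touch: reset the inactivity clock
        return vars_.get(name)

store = SessionStore()
sid = store.new_session()
store.set(sid, "user", "agent42")
fresh = store.get(sid, "user")                                    # within the timeout
stale = store.get(sid, "user", now=time.time() + 2 * SESSION_TIMEOUT)  # after it
```

`fresh` returns the stored value, while the simulated late access finds the session destroyed and returns nothing.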
ASP.NET supports three modes of persistence for session variables:
In Process Mode
The session variables are maintained within the ASP.NET process. This is the fastest way;
however, in this mode the variables are destroyed when the ASP.NET process is recycled or
shut down. Since the application is recycled from time to time, this mode is not recommended
for critical applications.
ASPState Mode
In this mode, ASP.NET runs a separate Windows service that maintains the state variables.
Because the state management happens outside the ASP.NET process, this has a negative
impact on performance, but it allows multiple ASP.NET instances to share the same state
server, thus allowing an ASP.NET application to be load-balanced and scaled out on multiple
servers. Also, since the state management service runs independently of ASP.NET, variables can
persist across ASP.NET process shutdowns.
SqlServer Mode
In this mode, the state variables are stored in a database server, accessible using SQL. Session
variables can be persisted across ASP.NET process shutdowns in this mode as well. The main
advantage of this mode is that it allows the application to be load-balanced across a server cluster
while sharing sessions between servers.
View state
View state refers to the page-level state management mechanism, which is utilized by the
HTML pages emitted by ASP.NET applications to maintain the state of the web form controls
and widgets. The state of the controls is encoded and sent to the server at every form
submission in a hidden field known as __VIEWSTATE. The server sends the variable back so
that, when the page is re-rendered, the controls render in their last state. At the server side, the
application may change the view state if the processing results in updating the state of any
control. The states of individual controls are decoded at the server, and are available for use in
ASP.NET pages through the ViewState collection.
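The round trip can be sketched in a few lines of Python. Note that real ASP.NET serializes view state with its own binary formatter rather than JSON, so the encoding below only illustrates the idea of an opaque hidden-field value that survives the postback:

```python
import base64
import json

def encode_viewstate(control_state: dict) -> str:
    """Serialize control state into the opaque string carried in __VIEWSTATE."""
    raw = json.dumps(control_state, sort_keys=True).encode("utf-8")
    return base64.b64encode(raw).decode("ascii")

def decode_viewstate(field_value: str) -> dict:
    """Recover the control state from the hidden field on postback."""
    return json.loads(base64.b64decode(field_value))

# Hypothetical control values on a booking form.
state = {"TextBox1": "Delhi", "DropDown1": "Sleeper"}
hidden_field = encode_viewstate(state)   # would be rendered in the page HTML
restored = decode_viewstate(hidden_field)
```

After the postback, `restored` equals the original state, so the controls can be re-rendered exactly as the user last saw them.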
Template engine
When first released, ASP.NET lacked a template engine. Because the .NET framework is object-
oriented and allows for inheritance, many developers would define a new base class that inherits from
"System.Web.UI.Page", write methods here that render HTML, and then make the pages in their
application inherit from this new class. While this allows for common elements to be reused across a
site, it adds complexity and mixes source code with markup. Furthermore, this method can only be
visually tested by running the application - not while designing it. Other developers have used include
files and other tricks to avoid having to implement the same navigation and other elements in every
page.
ASP.NET 2.0 introduced the concept of "master pages", which allow for template-based page
development. A web application can have one or more master pages, which can be nested. Master
templates have place-holder controls, called ContentPlaceHolders to denote where the dynamic content
goes, as well as HTML and JavaScript shared across child pages.
Child pages use those ContentPlaceHolder controls, which must be mapped to the place-holder of the
master page that the content page is populating. The rest of the page is defined by the shared parts of
the master page, much like a mail merge in a word processor. All markup and server controls in the
content page must be placed within the ContentPlaceHolder control.
When a request is made for a content page, ASP.NET merges the output of the content page with the
output of the master page, and sends the output to the user.
The master page remains fully accessible to the content page. This means that the content page may
still manipulate headers, change the title, configure caching, etc. If the master page exposes public
properties or methods (e.g. for setting copyright notices) the content page can use these as well.
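The merge step can be sketched as simple placeholder substitution. The `{{name}}` syntax below is illustrative only; ASP.NET actually matches Content controls to ContentPlaceHolder IDs, but the effect is the same: shared markup comes from the master, dynamic content from the child page.

```python
def merge(master: str, content_by_placeholder: dict) -> str:
    """Replace each {{name}} placeholder in the master with the matching content."""
    page = master
    for name, content in content_by_placeholder.items():
        page = page.replace("{{" + name + "}}", content)
    return page

master_page = ("<html><body><h1>PBRS</h1>"
               "{{MainContent}}"
               "<footer>shared footer</footer></body></html>")
content_page = {"MainContent": "<p>Ticket booking form</p>"}
output = merge(master_page, content_page)
```

The output contains both the content page's markup and the footer shared by every child page, much like the mail-merge analogy above.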
Performance
ASP.NET aims for performance benefits over other script-based technologies (including Classic ASP)
by compiling the server-side code to one or more DLL files on the web server. This compilation
happens automatically the first time a page is requested, which means the developer need not perform a
separate compilation step for pages. This feature provides the ease of development offered by scripting
languages with the performance benefits of a compiled binary. However, the compilation may cause a
noticeable but short delay for the web user when a newly edited page is first requested from the web
server, and it will not occur again unless the page is edited further.
The ASPX and other resource files are placed in a virtual host on an Internet Information Services
server (or other compatible ASP.NET servers; see Other Implementations, below). The first time a
client requests a page, the .NET framework parses and compiles the file(s) into a .NET assembly and
sends the response; subsequent requests are served from the DLL files. By default ASP.NET will
compile the entire site in batches of 1000 files upon first request. If the compilation delay is causing
problems, the batch size or the compilation strategy may be tweaked.
Developers can also choose to pre-compile their code before deployment, eliminating the need for
just-in-time compilation in a production environment.
Development tools
Delphi 2006
Macromedia Dreamweaver MX, Macromedia Dreamweaver MX 2004, or Macromedia
Dreamweaver 8 (does not support ASP.NET 2.0 features, and produces very inefficient code for
ASP.NET 1.x; code generation and ASP.NET feature support changed little, if at all, from MX
through version 8.0.1, though version 8.0.2 does add changes to improve security
against SQL injection attacks)
Macromedia HomeSite 5.5 (For ASP Tags)
Microsoft Expression Web, part of the Microsoft Expression Studio application suite.
Microsoft SharePoint Designer
MonoDevelop (Free/Open Source)
SharpDevelop (Free/Open Source)
Visual Studio .NET (for ASP.NET 1.x)
Visual Web Developer 2005 Express Edition (free) or Visual Studio 2005 (for ASP.NET 2.0)
Visual Web Developer 2008 Express Edition (free) or Visual Studio 2008 (for ASP.NET
2.0/3.5)
Eiffel for ASP.NET
What is SQL?
SQL stands for Structured Query Language. SQL is used to communicate with a database. According
to ANSI (American National Standards Institute), it is the standard language for relational database
management systems. SQL statements are used to perform tasks such as update data on a database, or
retrieve data from a database. Some common relational database management systems that use SQL
are: Oracle, Sybase, Microsoft SQL Server, Access, Ingres, etc. Although most database systems use
SQL, most of them also have their own additional proprietary extensions that are usually only used on
their system. However, the standard SQL commands such as "Select", "Insert", "Update", "Delete",
"Create", and "Drop" can be used to accomplish almost everything that one needs to do with a database.
This tutorial provides instruction on the basics of each of these commands and allows you to put
them into practice using the SQL Interpreter.
SQL (Structured Query Language) is a database computer language designed for the retrieval and
management of data in relational database management systems (RDBMS), database schema creation
and modification, and database object access control management.
SQL is a standard interactive and programming language for querying and modifying data and
managing databases. Although SQL is both an ANSI and an ISO standard, many database products
support SQL with proprietary extensions to the standard language. The core of SQL is formed by a
command language that allows the retrieval, insertion, updating, and deletion of data, and performing
management and administrative functions. SQL also includes a Call Level Interface (SQL/CLI) for
accessing and managing data and databases remotely.
The first version of SQL was developed at IBM by Donald D. Chamberlin and Raymond F. Boyce in
the early 1970s. This version, initially called SEQUEL, was designed to manipulate and retrieve data
stored in IBM's original relational database product, System R. The SQL language was later formally
standardized by the American National Standards Institute (ANSI) in 1986. Subsequent versions of the
SQL standard have been released as International Organization for Standardization (ISO) standards.
Originally designed as a declarative query and data manipulation language, variations of SQL have
been created by SQL database management system (DBMS) vendors that add procedural constructs,
control-of-flow statements, user-defined data types, and various other language extensions. With the
release of the SQL:1999 standard, many such extensions were formally adopted as part of the SQL
language via the SQL Persistent Stored Modules (SQL/PSM) portion of the standard.
Common criticisms of SQL include a perceived lack of cross-platform portability between vendors,
inappropriate handling of missing data, and unnecessarily complex and occasionally ambiguous
language grammar and semantics.
During the 1970s, a group at IBM's San Jose research center developed the System R relational
database management system, based on the model introduced by Edgar F. Codd in his influential paper,
A Relational Model of Data for Large Shared Data Banks. Donald D. Chamberlin and Raymond F.
Boyce of IBM subsequently created the Structured English Query Language (SEQUEL) to manipulate
and manage data stored in System R. The acronym SEQUEL was later changed to SQL because
"SEQUEL" was a trademark of the UK-based Hawker Siddeley aircraft company.
The first non-commercial non-SQL RDBMS, Ingres, was developed in 1974 at U.C. Berkeley.
Ingres implemented a query language known as QUEL, which was later supplanted in the marketplace
by SQL.
In the late 1970s, Relational Software, Inc. (now Oracle Corporation) saw the potential of the concepts
described by Codd, Chamberlin, and Boyce and developed their own SQL-based RDBMS with
aspirations of selling it to the U.S. Navy, CIA, and other government agencies. In the summer of 1979,
Relational Software, Inc. introduced the first commercially available implementation of SQL, Oracle
V2 (Version 2), for VAX computers. Oracle V2 beat IBM's release of the System/38 RDBMS to market
by a few weeks.
After testing SQL at customer test sites to determine the usefulness and practicality of the system, IBM
began developing commercial products based on their System R prototype including System/38,
SQL/DS, and DB2, which were commercially available in 1979, 1981, and 1983, respectively.
Standardization
SQL was adopted as a standard by ANSI in 1986 and by ISO in 1987. Until 1996, the National
Institute of Standards and Technology (NIST) data management standards program was tasked with
certifying SQL DBMS compliance with the SQL standard. In 1996, however, the NIST data
management standards program was dissolved, and vendors are now relied upon to self-certify
their products for compliance.
The SQL standard has gone through a number of revisions, as shown below:
1992, SQL-92 (alias SQL2; FIPS 127-2): Major revision (ISO 9075); Entry Level SQL-92 was adopted as FIPS 127-2.
1999, SQL:1999 (alias SQL3): Added regular expression matching, recursive queries, triggers, support for procedural and control-of-flow statements, non-scalar types, and some object-oriented features.
2006, SQL:2006: ISO/IEC 9075-14:2006 defines ways in which SQL can be used in conjunction with XML. It defines ways of importing and storing XML data in an SQL database, manipulating it within the database, and publishing both XML and conventional SQL data in XML form. In addition, it provides facilities that permit applications to integrate into their SQL code the use of XQuery, the XML Query Language published by the World Wide Web Consortium (W3C), to concurrently access ordinary SQL data and XML documents.
The SQL standard is not freely available. SQL:2003 and SQL:2006 may be purchased from ISO or
ANSI. A late draft of SQL:2003 is freely available as a zip archive, however, from Whitemarsh
Information Systems Corporation. The zip archive contains a number of PDF files that define the parts
of the SQL:2003 specification.
Scope and extensions
Procedural extensions
SQL is designed for a specific purpose: to query data contained in a relational database. SQL is a
set-based, declarative query language, not an imperative language such as C or BASIC. However, there
are extensions to Standard SQL which add procedural programming language functionality, such as
control-of-flow constructs. These are:
Source              Common Name   Full Name
ANSI/ISO Standard   SQL/PSM       SQL/Persistent Stored Modules
Microsoft/Sybase    T-SQL         Transact-SQL
In addition to the standard SQL/PSM extensions and proprietary SQL extensions, procedural and
object-oriented programmability is available on many SQL platforms via DBMS integration with other
languages. The SQL standard defines SQL/JRT extensions (SQL Routines and Types for the Java
Programming Language) to support Java code in SQL databases. SQL Server 2005 uses the SQLCLR
(SQL Server Common Language Runtime) to host managed .NET assemblies in the database, while
prior versions of SQL Server were restricted to using unmanaged extended stored procedures which
were primarily written in C. Other database platforms, like MySQL and Postgres, allow functions to be
written in a wide variety of languages including Perl, Python, Tcl, and C.
Additional extensions
SQL:2003 also defines several additional extensions to the standard to increase SQL functionality
overall. These extensions include:
The SQL/CLI, or Call-Level Interface, extension is defined in ISO/IEC 9075-3:2003. This extension
defines common interfacing components (structures and procedures) that can be used to execute SQL
statements from applications written in other programming languages. The SQL/CLI extension is
defined in such a way that SQL statements and SQL/CLI procedure calls are treated as separate from
the calling application's source code.
The SQL/JRT, or SQL Routines and Types for the Java Programming Language, extension is defined
by ISO/IEC 9075-13:2003. SQL/JRT specifies the ability to invoke static Java methods as routines
from within SQL applications. It also calls for the ability to use Java classes as SQL structured user-
defined types.
The SQL/PSM, or Persistent Stored Modules, extension is defined by ISO/IEC 9075-4:2003. SQL/PSM
standardizes procedural extensions for SQL, including flow of control, condition handling, statement
condition signals and resignals, cursors and local variables, and assignment of expressions to variables
and parameters. In addition, SQL/PSM formalizes declaration and maintenance of persistent database
language routines (e.g., "stored procedures").
Expressions, which can produce either scalar values or tables consisting of columns and rows of data.
Predicates, which specify conditions that can be evaluated to SQL three-valued-logic Boolean truth
values, and which are used to limit the effects of statements and queries or to change program flow.
Clauses, which are (in some cases optional) constituent components of statements and queries.
Whitespace is generally ignored in SQL statements and queries, making it easier to format SQL code
for readability.
SQL statements also include the semicolon (";") statement terminator. Though not required on every
platform, it is defined as a standard part of the SQL grammar.
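The three-valued logic mentioned above for predicates has a practical consequence worth seeing: comparing a column to NULL with `=` yields Unknown for every row, so the predicate never selects anything, and the dedicated IS NULL predicate must be used instead. A small sketch using Python's built-in sqlite3 module (the table and its rows are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Book (title TEXT, price REAL)")
conn.executemany("INSERT INTO Book VALUES (?, ?)",
                 [("Pitfalls of SQL", 120.0), ("Unpriced Draft", None)])

# 'price = NULL' evaluates to Unknown for every row, so the WHERE clause
# filters everything out and no rows come back...
eq_null = conn.execute("SELECT title FROM Book WHERE price = NULL").fetchall()

# ...whereas IS NULL is the predicate that actually tests for missing data.
is_null = conn.execute("SELECT title FROM Book WHERE price IS NULL").fetchall()
```

This behavior is a common source of the "inappropriate handling of missing data" criticism noted earlier.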
Queries
The most common operation in SQL databases is the query, which is performed with the declarative
SELECT keyword. SELECT retrieves data from a specified table, or multiple related tables, in a
database. While often grouped with Data Manipulation Language (DML) statements, the standard
SELECT query is considered separate from SQL DML, as it has no persistent effects on the data stored
in a database. Note that there are some platform-specific variations of SELECT that can persist their
effects in a database, such as the SELECT INTO syntax that exists in some databases.
SQL queries allow the user to specify a description of the desired result set, but it is left to the devices
of the database management system (DBMS) to plan, optimize, and perform the physical operations
necessary to produce that result set in as efficient a manner as possible. An SQL query includes a list of
columns to be included in the final result immediately following the SELECT keyword. An asterisk
("*") can also be used as a "wildcard" indicator to specify that all available columns of a table (or
multiple tables) are to be returned. SELECT is the most complex statement in SQL, with several
optional keywords and clauses, including:
The FROM clause which indicates the source table or tables from which the data is to be retrieved. The
FROM clause can include optional JOIN clauses to join related tables to one another based on user-
specified criteria.
The WHERE clause includes a comparison predicate, which is used to restrict the number of rows
returned by the query. The WHERE clause is applied before the GROUP BY clause. The WHERE
clause eliminates all rows from the result set where the comparison predicate does not evaluate to True.
The GROUP BY clause is used to combine, or group, rows with related values into elements of a
smaller set of rows. GROUP BY is often used in conjunction with SQL aggregate functions or to
eliminate duplicate rows from a result set.
The HAVING clause includes a comparison predicate used to eliminate rows after the GROUP BY
clause is applied to the result set. Because it acts on the results of the GROUP BY clause, aggregate
functions can be used in the HAVING clause predicate.
The ORDER BY clause is used to identify which columns are used to sort the resulting data, and in
which order they should be sorted (options are ascending or descending). The order of rows returned by
an SQL query is never guaranteed unless an ORDER BY clause is specified.
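How these clauses combine can be sketched with Python's built-in sqlite3 module. The Book and Book_author tables echo the examples below, but the sample rows and counts are made up for the demonstration: WHERE restricts rows before grouping, GROUP BY forms the groups, HAVING filters whole groups, and ORDER BY sorts what survives.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Book (isbn TEXT, title TEXT, price REAL);
    CREATE TABLE Book_author (isbn TEXT, author TEXT);
    INSERT INTO Book VALUES ('1', 'Pitfalls of SQL', 120.0),
                            ('2', 'The Joy of SQL', 95.0),
                            ('3', 'SQL Examples', 150.0);
    INSERT INTO Book_author VALUES ('1', 'A'), ('3', 'B'), ('3', 'C');
""")

# WHERE removes the 95.00 book before grouping; GROUP BY forms one group
# per title; HAVING filters groups; ORDER BY sorts the final result.
rows = conn.execute("""
    SELECT Book.title, count(*) AS Authors
    FROM Book
    JOIN Book_author ON Book.isbn = Book_author.isbn
    WHERE Book.price > 100.00
    GROUP BY Book.title
    HAVING count(*) >= 1
    ORDER BY Book.title
""").fetchall()
```

The result pairs each surviving title with its author count, sorted by title.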
The following is an example of a SELECT query that returns a list of expensive books. The query
retrieves all rows from the Book table in which the price column contains a value greater than 100.00.
The result is sorted in ascending order by title. The asterisk (*) in the select list indicates that all
columns of the Book table should be included in the result set.
SELECT *
FROM Book
WHERE price > 100.00
ORDER BY title;
The example below demonstrates the use of multiple tables in a join, grouping, and aggregation in an
SQL query, by returning a list of books and the number of authors associated with each book.
SELECT Book.title, count(*) AS Authors
FROM Book
JOIN Book_author
ON Book.isbn = Book_author.isbn
GROUP BY Book.title;
Title            Authors
Pitfalls of SQL  1
(The underscore character "_" is often used as part of table and column names to separate descriptive
words because other punctuation tends to conflict with SQL syntax. For example, a dash "-" would be
interpreted as a minus sign.)
Under the precondition that isbn is the only common column name of the two tables and that a column
named title only exists in the Books table, the above query could be rewritten in the following form:
SELECT title, count(*) AS Authors
FROM Book NATURAL JOIN Book_author
GROUP BY title;
However, many vendors either do not support this approach, or it requires certain column naming
conventions. Thus, it is less common in practice.
Data retrieval is very often combined with data projection when the user is looking for calculated
values and not just the verbatim data stored in primitive data types, or when the data needs to be
expressed in a form that is different from how it's stored. SQL allows the use of expressions in the
select list to project data, as in the following example which returns a list of books that cost more than
100.00 with an additional sales_tax column containing a sales tax figure calculated at 6% of the price.
SELECT isbn, title, price, price * 0.06 AS sales_tax
FROM Book
WHERE price > 100.00
ORDER BY title;
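The projected sales_tax expression described above can be checked with Python's sqlite3 module; the sample rows below are illustrative, and the computed column exists only in the result set, never in the stored table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Book (isbn TEXT, title TEXT, price REAL)")
conn.executemany("INSERT INTO Book VALUES (?, ?, ?)",
                 [("1", "Pitfalls of SQL", 120.0), ("2", "Cheap Reads", 40.0)])

# The select list projects three stored columns plus one expression
# computed per row: 6% of the price, labeled sales_tax.
rows = conn.execute("""
    SELECT isbn, title, price, price * 0.06 AS sales_tax
    FROM Book
    WHERE price > 100.00
    ORDER BY title
""").fetchall()
```

Only the book priced above 100.00 survives, carrying its computed tax alongside the stored columns.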
Some modern day SQL queries may include extra WHERE statements that are conditional to each
other. They may look like this example:
SELECT isbn, title, price
FROM Book
WHERE price > 100.00 AND title LIKE '%SQL%'
ORDER BY title;
Server Side
Core 2 Duo 2.4 GHz and above
2 GB of Random Access Memory and above
160 GB Hard Disk
Client Side
Pentium IV 1.5 GHz and above
512 MB of Random Access Memory and above
80 GB Hard Disk
Chapter 7
System Design
1. Index page
This webpage is the starting page of the website. It gives the following:
2. Status.
As in the above image, the Status webpage displays:
Accessed by anyone.
Information about the booking: which seat is booked and which is empty.
3. Agent name.
Accessed by anyone.
Contains information about the name, address and phone number of the agent.
4. Feedback
5. FAQ
6. Privacy Policy.
7. Terms and Conditions.
Accessed by anyone.
Useful for the customer.
Contains information on when to reach the starting point and what to do in case the ticket is lost.
8. Login page
9. Forgot Password Page
It asks the user who forgot their password for their username, after which the user clicks the Next
button. It also provides links for the administrator and other users.
10. Identity Confirmation.
11. Ticket Booking page.
11. Select Seat page
12. Customer Information page
13. Ticket Print page
14. Search Ticket.
15. Ticket Cancellation
16. Ticket Cancellation
17. Change Password
Administrator Area
Username
Password
Email
Security Question.
Security Answer.
19. Create agent continue page
As in the above image, the Create Agent (continue) webpage displays:
20. Agent Basic Information page
As in the above image, the agent's Basic Information webpage displays:
Name
Father’s Name
Shop Name
Shop City
Shop phone number
Mobile Number
Deposit amount
21. Agent List page
Agent ID
Name
Shop Name
Shop City
Current Balance
Mobile Number
22. Agent Deposit Amount Page
As in the above image, the agent's Deposit Amount webpage displays:
23. Agent Balance Page
24. Search Agent Page
25. Search Agent Page
Chapter 8
System Testing
System Testing
Once source code has been generated, software must be tested to uncover (and correct) as many errors
as possible before delivery to the customer. Our goal is to design a series of test cases that have a high
likelihood of finding errors. Software testing techniques provide systematic guidance for designing
tests in which:
(1) the internal program logic is exercised, using "white-box" test case design techniques;
(2) the software requirements are exercised, using "black-box" test case design techniques.
In both cases, the intent is to find the maximum number of errors with the minimum amount of effort
and time.
8.2 Strategies
A strategy for software testing must accommodate low-level tests that are necessary to verify that a
small source code segment has been correctly implemented as well as high-level tests that validate
major system functions against customer requirements. A strategy must provide guidance for the
practitioner and a set of milestones for the manager. Because the steps of the test strategy occur at a
time when deadline pressure begins to rise, progress must be measurable and problems must surface as
early as possible.
The following testing techniques are well known, and the same strategy was adopted while testing this
project.
8.2.1 Unit testing: Unit testing focuses verification effort on the smallest unit of software design: the
software component or module. The unit test is white-box oriented. The module interface is tested to
ensure that information properly flows into and out of the program unit under test. The local data
structure is examined to ensure that data stored temporarily maintains its integrity during all steps in an
algorithm's execution. Boundary conditions are tested to ensure that the module operates properly at
the boundaries established to limit or restrict processing. All independent paths through the control
structure are exercised to ensure that all statements in a module have been executed at least once.
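The boundary-condition testing described above can be sketched in a language-neutral way. The `seats_available` function below is a hypothetical module from the reservation domain, not code from this project; the tests exercise it at the exact limits that restrict processing (a full bus, an empty bus, and an overbooked bus).

```python
import unittest

def seats_available(capacity: int, booked: int) -> int:
    """Return the number of free seats; inputs outside the valid range are errors."""
    if capacity < 0 or booked < 0 or booked > capacity:
        raise ValueError("invalid seat counts")
    return capacity - booked

class SeatBoundaryTests(unittest.TestCase):
    def test_normal_path(self):
        self.assertEqual(seats_available(40, 10), 30)

    def test_boundaries(self):
        # Exercise the module exactly at the established limits.
        self.assertEqual(seats_available(40, 40), 0)   # full bus
        self.assertEqual(seats_available(0, 0), 0)     # empty bus
        with self.assertRaises(ValueError):
            seats_available(40, 41)                    # overbooked

suite = unittest.TestLoader().loadTestsFromTestCase(SeatBoundaryTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Running the suite reports success only if every path, including the error path, behaves as specified.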
8.2.2 Integration testing: Integration testing is a systematic technique for constructing the program
structure while at the same time conducting tests to uncover errors associated with interfacing. The
objective of this test is to take unit tested components and build a program structure that has been
dictated by design.
8.2.3 Validation testing: At the culmination of integration testing, software is completely assembled as a
package, interfacing errors have been uncovered and corrected, and a final series of software tests,
called validation testing, may begin. Validation can be defined in many ways, but a simple definition is that
validation succeeds when software functions in a manner that can be reasonably expected by the
customer.
8.2.4 System testing: System testing is actually a series of different tests whose primary purpose is to
fully exercise the computer-based system. Below we have described the two types of testing which
have been taken for this project.
Security testing: Any computer-based system that manages sensitive information, or causes actions
that can improperly harm (or benefit) individuals, is a target for improper or illegal penetration.
Penetration spans a broad range of activities: hackers who attempt to penetrate systems for sport,
disgruntled employees who attempt to penetrate for revenge, and dishonest individuals who attempt to
penetrate for illicit personal gain.
For security purposes, anyone who is not an authorized user cannot penetrate this system. When the
program first loads, it checks for a correct username and password; anyone who fails to supply them is
simply refused by the system.
Performance testing is designed to test the run-time performance of software within the context of an
integrated system. Performance testing occurs throughout all steps in the testing process. Even at the
unit level, the performance of an individual module may be assessed as white-box tests are conducted.
8.3. Criteria for Completion of Testing
Every time the customer/user executes a computer program, the program is being tested. This sobering
fact underlines the importance of other software quality assurance activities.
In a sense, every run of the project is still a form of testing, as Musa and Ackerman observed. They
have suggested a response that is based on statistical criteria: "No, we cannot be absolutely certain that the
software will never fail, but relative to a theoretically sound and experimentally validated statistical
model, we have done sufficient testing to say with 95 percent confidence that the probability of 1000
CPU hours of failure-free operation in a probabilistically defined environment is at least 0.995."
Validation checks are useful when we specify the nature of the data input. To elaborate: in this project,
while entering data into many of the text boxes, you will find validation checks in use. When you try to
input wrong data, your entry is automatically abandoned.
At the very beginning of the project, when the user wishes to enter the system, he has to supply the
password. The password is validated against a certain string; until the user supplies the correct string,
he cannot proceed. When you try to edit the record for a trainee in the Operation division, you will find
the validation checks: if you supply digits for the name text box, the entry is rejected; similarly, if you
supply the trainee code in text (string) format, it is simply abandoned.
A validation check helps us work in a more reliable way; it becomes necessary for applications
like this.
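The validation checks described above amount to simple predicates applied to each field before the entry is accepted. A hypothetical sketch in Python (the field rules are illustrative, not the project's actual checks): a name field rejects digits, while a code field accepts digits only.

```python
def valid_name(value: str) -> bool:
    """A name field rejects digits: letters and spaces only, and non-empty."""
    return bool(value) and all(ch.isalpha() or ch.isspace() for ch in value)

def valid_code(value: str) -> bool:
    """A trainee/agent code field accepts digits only."""
    return value.isdigit()

ok_name = valid_name("Ravi Kumar")   # accepted
bad_name = valid_name("Ravi123")     # digits in a name: entry abandoned
ok_code = valid_code("1024")         # accepted
bad_code = valid_code("ten24")       # text in a code field: entry abandoned
```

Wiring such predicates to each text box is what makes a wrong entry "automatically abandoned" before it ever reaches the database.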
Chapter 9
System Implementation
Specification, regardless of the mode through which we accomplish it, may be viewed as a
representation process. Requirements are represented in a manner that ultimately leads to successful
software implementation.
A number of specification principles, adapted from the work of Balzer and Goodman, can be proposed:
1. Separate functionality from implementation.
2. Develop a model of the desired behavior of a system that encompasses the data and the functional
responses of the system to various stimuli from the environment.
3. Establish the context in which software operates by specifying the manner in which other system
components interact with software.
4. Define the environment in which the system operates.
5. Create a cognitive model rather than a design or implementation model. The cognitive model
describes a system as perceived by its user community.
6. Recognize that “the specifications must be tolerant of incompleteness and augmentable.”
7. Establish the content and structure of a specification in a way that will enable it to be amenable to
change.
This list of basic specification principles provides a basis for representing software requirements.
However, principles must be translated into realization.
9.1.2 Representation
As we know, software requirements may be specified in a variety of ways. However, if requirements
are committed to paper, a simple set of guidelines is well worth following:
Chapter 10
Conclusion
To conclude, the Project Grid works like a component which can access all the databases and pick up
different functions. It overcomes many of the limitations found in the .NET Framework. Among the
many features provided by the project, the main ones are:
Simple editing
Insertion of individual images on each cell
Insertion of individual colors on each cell
Flicker free scrolling
Drop-down grid effect
Placing of any type of control anywhere in the grid
Chapter 11
Future scope of the project:
The project has a very vast scope. It can be implemented on the internet in the future, and it can be
updated as and when the requirement arises, as it is very flexible in terms of expansion. With the
proposed Web Space Manager software ready and fully functional, the client is now able to manage,
and hence run, the entire work in a much better, more accurate and error-free manner. The future scope
for the project includes:
The number of levels that the software handles can be made unlimited, from the current
status of handling up to N levels as laid down by the software. Efficiency can be further
enhanced to a great extent by normalizing and de-normalizing the database tables used in
the project, as well as by adopting alternative data structures and advanced calculation
algorithms where available.
We can in future generalize the application from its current customized status wherein other
vendors developing and working on similar applications can utilize this software and make
changes to it according to their business needs.
Faster processing of information as compared to the current system with high accuracy and
reliability.
Automatic and error free report generation as per the specified format with ease.
Automatic calculation and generation of correct and precise Bills thus reducing much of the
workload on the accounting staff and the errors arising due to manual calculations.
With a fully automated solution, a smaller staff, better space utilization and a peaceful work
environment, the company is bound to experience a higher business turnover.
A further strength of this system is that it would remain relevant in the future: any addition or deletion
of services, any addition or deletion of a reseller, or any other kind of modification can be implemented
easily. The data collected by the system will also be useful for other purposes.
All this results in high client satisfaction and, hence, more and more business for the company,
scaling the company's business to new heights in the forthcoming future.
References
Complete Reference of C#
Programming in C# - Deitel & Deitel
www.w3schools.com
http://en.wikipedia.org
The Principles of Software Engineering – Roger S. Pressman
Software Engineering – Hudson
MSDN help provided by Microsoft .NET
Object Oriented Programming – Deitel & Deitel