E-COMMERCE Unit - 3
Strategic plans, competitive marketing, and a skilled workforce aside, ecommerce organizations
are expected to remain up to date with the latest technologies. These technological advancements
have enabled people to meet their purchasing needs with ease, and as a result, the ecommerce
sector continues to go from strength to strength.
Ecommerce is faster than ever and customers can get anything at the click of a button, all thanks
to the latest available technology. Now customers can track their orders, find the best deals and
much more besides. With all this progress, new business opportunities are inevitably emerging.
1. Omni-channel presence/support
Making use of the right technology means providing customers with not only what they want
when they want it, but where they want it too.
Video Chat: Allows your business to interact face-to-face with customers, creating a
personalized, cross-channel, visually demonstrative and consultative experience.
Cobrowsing: A visual engagement system bringing your agents and customers together
on the same page, at the same time, allowing agents to seamlessly guide your customers
through complex procedures.
Screen Sharing: A method of interacting where your customers share their screen with
your agents to effortlessly resolve difficulties filling forms, completing transactions, etc.
Document Interaction: Provides a platform for your agents to interact with your
customers’ documents safely and includes e-signature technologies for enhanced security.
These methods all help to ensure an interconnected customer journey all the way through to
purchase.
2. Extensive personalization
“The customer journey is changing. Consumers want everything, and they want it immediately.
Experience matters more than anything else, and the technology at our fingertips enables the kind
of amazing experiences people demand in today’s fast-paced world.” - Sam Hurley
Personalization is the biggest trend in ecommerce right now. Consumers have come to expect a
relevant shopping experience based on their personal preferences.
Statistics show that more than 78% of customers ignore offers that aren’t personalized or based
on their previous engagement with the brand. This shows just how important personalization in
marketing and customer support has become.
Artificial intelligence (AI) and machine learning analytics detect customer behavior patterns
and simultaneously interpret that data, giving businesses a continuously updated picture of
customer desires and expectations and creating endless possibilities.
Big data, machine learning, and AI have made personalization the norm, with businesses catering
their support and services to reflect this.
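As a minimal sketch of this idea (the product names, categories and customer history below are entirely made up), a store could rank unpurchased catalog items by how often the customer has bought from each category:

```python
from collections import Counter

def recommend(purchase_history, catalog, top_n=2):
    # Count how often the customer bought from each category.
    favourite = Counter(item["category"] for item in purchase_history)
    owned = {item["name"] for item in purchase_history}
    # Rank unpurchased catalog items by the customer's affinity for their category.
    ranked = sorted((p for p in catalog if p["name"] not in owned),
                    key=lambda p: favourite[p["category"]], reverse=True)
    return [p["name"] for p in ranked[:top_n]]

history = [{"name": "running shoes", "category": "sport"},
           {"name": "yoga mat", "category": "sport"},
           {"name": "novel", "category": "books"}]
catalog = [{"name": "tennis racket", "category": "sport"},
           {"name": "cookbook", "category": "books"},
           {"name": "desk lamp", "category": "home"}]
print(recommend(history, catalog))  # ['tennis racket', 'cookbook']
```

Real personalization engines use far richer signals (browsing behavior, collaborative filtering, machine-learned embeddings), but the principle of ranking by inferred preference is the same.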
3. Mobile-friendly shopping
Mobile platforms have increased in importance, so much so that m-commerce has emerged as a
concept in its own right.
Failing to provide a mobile-oriented shopping experience will lose your brand potential
customers. Equipping yourself with mobile-friendly technology is therefore crucial in
maximizing your chances of future success.
Ecommerce mobile apps: Apps offer customers continuous engagement with your brand
and the chance for your customers to familiarize themselves with new and relevant
purchasing opportunities.
Location-based marketing: Use your customers’ geographical whereabouts to market
products relevant to their specific location.
VR/AR guidance: Integrating VR and AR technologies provides an immersive mobile
shopping experience for your customers, connecting them with your brand in a deeper
and more meaningful way.
Internet of Things (IoT): The IoT stems from the desire to better understand consumer
trends across a range of connected devices. The scope it provides for delivering
personalized mobile shopping experiences to your customers is almost limitless.
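The location-based marketing idea above can be sketched with the standard haversine great-circle formula; the store coordinates and offers below are invented for illustration:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points on Earth (radius ~6371 km).
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(a))

def nearby_offers(customer, stores, radius_km=10):
    # Return offers only from stores near the customer's current position.
    return [s["offer"] for s in stores
            if haversine_km(customer[0], customer[1], s["lat"], s["lon"]) <= radius_km]

stores = [{"lat": 51.5074, "lon": -0.1278, "offer": "10% off in London"},
          {"lat": 48.8566, "lon": 2.3522, "offer": "free delivery in Paris"}]
print(nearby_offers((51.50, -0.12), stores))  # ['10% off in London']
```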
4. Conversational Marketing
Traditional marketing channels flow in only one direction. The new concept of conversational
marketing has opened up two-way communication, creating numerous opportunities for
ecommerce success.
Getting information directly from customers makes more sense than attempting to predict it. You
can establish a personalized, real-time, one-on-one conversation on the back of this, safe in the
knowledge you truly understand your customers’ needs.
Below are some of the most efficient technologies for nurturing customer conversations:
Artificial Intelligence plays an important role in everyday life, having a major impact on how we
live and work. There are several examples of AI and automation tools with customer service
applications for your business, including voice-powered assistants such as Apple’s Siri, Google
Home and Amazon Echo. Research shows that 45% of millennials are already using this type of
voice-activated search for online shopping.
5. Chatbots and virtual assistants
Chatbots and virtual assistants represent the future for businesses. Some are already integrating
chatbots into their systems to improve the customer experience and boost brand image.
With the help of chatbots, you can order food, check in luggage at the airport, book a hotel room,
schedule a flight, and get recommendations for almost anything you can think of. The Starbucks
chatbot, for example, gives customers details regarding their order status, payment details, etc.
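A toy, rule-based sketch of an order-status bot (the order numbers and replies below are made up, and real chatbots use much richer natural-language processing) might look like this:

```python
def order_bot(message, orders):
    # Look for a known order id anywhere in the customer's message.
    for order_id, status in orders.items():
        if order_id in message:
            return f"Order {order_id} is {status}."
    if "hello" in message.lower():
        return "Hi! Ask me about your order status."
    return "Sorry, I could not find that order."

orders = {"1001": "out for delivery", "1002": "processing"}
print(order_bot("Where is order 1001?", orders))  # Order 1001 is out for delivery.
```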
6. Image search
Ecommerce businesses are integrating image search technology on their websites so customers
can easily photograph products they are interested in and find similar examples on other sites
that may be offering better deals.
Imagine someone sees a beautiful couch, but it costs too much for them. If your business offers
similar products at a more reasonable price, integrating image search into your website will
allow you to potentially pick up on this sale, creating an extra revenue stream.
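Image search engines typically reduce each image to a compact "perceptual hash" and compare hashes rather than raw pixels: near-identical products yield hashes that differ in only a few bits. A minimal sketch, with invented bit strings standing in for real image hashes:

```python
def hamming(h1, h2):
    # Number of differing bits between two equal-length bit strings.
    return sum(a != b for a, b in zip(h1, h2))

def similar(h1, h2, threshold=3):
    # Treat images as matching when their hashes differ in at most `threshold` bits.
    return hamming(h1, h2) <= threshold

couch_photo   = "1011001110001111"
catalog_couch = "1011001010001111"  # differs in 1 bit -> near-duplicate product
catalog_lamp  = "0100110001110000"  # very different image
print(similar(couch_photo, catalog_couch))  # True
print(similar(couch_photo, catalog_lamp))   # False
```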
7. Quick and efficient checkout
Cart abandonment is the most frustrating reason for losing a sale, because it means a user was
considering buying your product, only to change their mind at the last minute. The latest data
shows a 79.17% global rate of cart abandonment, highlighting how big a problem it is.
One of the main reasons customers abandon their carts is the checkout procedure itself. No
matter how well the lead has been nurtured, inefficient checkout processes raise the chances your
users will abandon their cart.
Therefore, if you want your ecommerce company to be successful, embrace technology that
provides quick and efficient checkout solutions, such as:
Speedy mobile payment solutions, including Apple Pay and Android Pay.
Enabling your customers to save card details, streamlining repeat purchases.
Providing one-page, hassle-free checkouts.
Offering a range of payment options.
Equipped with this technology, your business can alleviate any potential difficulties customers
may encounter at checkout.
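The abandonment figure quoted above is simply abandoned carts divided by carts created. A quick sketch of the calculation (the cart counts are invented to reproduce the 79.17% rate):

```python
def abandonment_rate(carts_created, purchases_completed):
    # Cart abandonment rate = carts that never reached checkout / all carts created.
    abandoned = carts_created - purchases_completed
    return round(100 * abandoned / carts_created, 2)

print(abandonment_rate(2400, 500))  # 79.17
```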
Personal Area Network (PAN)
The smallest and most basic type of network, a PAN is made up of a wireless modem, a
computer or two, phones, printers, tablets, etc., and revolves around one person in one building.
These types of networks are typically found in small offices or residences, and are managed by
one person or organization from a single device.
There are two types of Personal Area Network:-
Wireless Personal Area Network: A wireless personal area network is created simply by using
wireless technologies such as WiFi or Bluetooth. It is a low-range network.
Wired Personal Area Network: A wired personal area network is created by using a USB connection.
Local Area Network (LAN)
We’re confident that you’ve heard of these types of networks before. LANs are the most
frequently discussed networks, and among the most common, most original and simplest types of
network. LANs connect groups of computers and low-voltage devices together across short
distances (within a building, or between a group of two or three buildings in close proximity to
each other) to share information and resources. Enterprises typically manage and maintain LANs.
Using routers, LANs can connect to wide area networks (WANs, explained below) to rapidly and
safely transfer data.
Wireless Local Area Network (WLAN)
Functioning like a LAN, WLANs make use of wireless network technology, such as Wi-Fi.
Typically seen in the same types of applications as LANs, these types of networks don’t require
that devices rely on physical cables to connect to the network.
Campus Area Network (CAN)
Larger than LANs, but smaller than metropolitan area networks (MANs, explained below), these
types of networks are typically seen in universities, large K-12 school districts or small
businesses. They can be spread across several buildings that are fairly close to each other so
users can share resources.
Metropolitan Area Network (MAN)
These types of networks are larger than LANs but smaller than WANs and incorporate elements
from both types of networks. MANs span an entire geographic area (typically a town or city, but
sometimes a campus). Ownership and maintenance is handled by either a single person or
company (a local council, a large company, etc.).
Wide Area Network (WAN)
The Internet is the most basic example of a WAN, connecting all computers together around the
world. Because of a WAN’s vast reach, it is typically owned and maintained by multiple
administrators or the public.
Advantages of WAN
Geographical area: A wide area network covers a large geographical area. If a branch of our
office is in a different city, we can connect with it through a WAN; the internet provides a
leased line through which we can connect with the other branch.
Centralized data: In the case of a WAN, data is centralized. Therefore, we do not need to buy
separate email, file or backup servers.
Get updated files: Software companies work on the live server. Therefore, the programmers get
the updated files within seconds.
Exchange messages: In a WAN, messages are transmitted quickly. Web applications like
Facebook, WhatsApp and Skype allow you to communicate with friends.
Sharing of software and resources: In a WAN, we can share software and other resources, such
as a hard drive or RAM.
Global business: We can do business globally over the internet.
High bandwidth: If we use leased lines for our company, we get high bandwidth. High bandwidth
increases the data transfer rate, which in turn increases the productivity of our company.
Disadvantages of WAN
Security issues: A WAN has more security issues than a LAN or MAN, because so many
technologies are combined together, which creates security problems.
Needs firewall and antivirus software: Data transferred over the internet can be altered or
hacked, so a firewall needs to be used. Since someone could also inject a virus into our system,
antivirus software is needed for protection.
High setup cost: The installation cost of a WAN is high, as it involves purchasing routers and
switches.
Troubleshooting problems: Because a WAN covers a large area, fixing problems is difficult.
Storage Area Network (SAN)
As a dedicated high-speed network that connects shared pools of storage devices to several
servers, these types of networks don’t rely on a LAN or WAN. Instead, they move storage
resources away from the network and place them into their own high-performance network.
SANs can be accessed in the same fashion as a drive attached to a server. Types of storage-area
networks include converged, virtual and unified SANs.
System-Area Network
This term is fairly new within the past two decades. It is used to explain a relatively local
network that is designed to provide high-speed connection in server-to-server applications
(cluster environments), storage area networks (called “SANs” as well) and processor-to-
processor applications. The computers connected on a SAN operate as a single system at very
high speeds.
Enterprise Private Network (EPN)
These types of networks are built and owned by businesses that want to securely connect their
various locations to share computer resources.
Virtual Private Network (VPN)
By extending a private network across the Internet, a VPN lets its users send and receive data as
if their devices were connected to the private network even if they’re not. Through a virtual
point-to-point connection, users can access a private network remotely.
Internet Architecture
The internet is a complex and decentralized network of interconnected computers, servers, and
devices that allows for the exchange of information and communication between users and
machines all around the world. The architecture of the internet is the underlying design and
organization of this network, including the protocols, standards, and technologies that enable its
functionality. In this article, we will discuss the architecture of the internet, its history, and its
current state.
The internet architecture can be traced back to the 1960s when the US Department of Defense
developed the Advanced Research Projects Agency Network (ARPANET) as a means of
communication for researchers and scientists across the country. ARPANET used packet
switching, a method of transmitting digital data in small units or packets, to enable more efficient
and reliable communication between computers.
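Packet switching can be illustrated by splitting a message into small numbered packets and reassembling them at the destination, even when they arrive out of order:

```python
import random

def packetize(message, size=8):
    # Split a message into small, numbered packets.
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return [{"seq": n, "data": c} for n, c in enumerate(chunks)]

def reassemble(packets):
    # Packets may arrive out of order; sequence numbers restore the message.
    return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = packetize("packet switching moves data in small units")
random.shuffle(packets)  # simulate packets taking different routes
print(reassemble(packets))  # packet switching moves data in small units
```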
Over time, ARPANET evolved into the internet, a global network of interconnected computers
and devices that allowed for the exchange of information and communication on a much larger
scale. The development of the World Wide Web in the early 1990s further expanded the
capabilities of the internet, allowing users to access and share information through web browsers
and hypertext links.
The architecture of the internet has continued to evolve and adapt to changing technology and
user needs. Today, the internet is a vast and complex network of interconnected devices and
systems, with many different protocols and standards that enable its functionality.
The architecture of the internet is composed of several key components that work together to
enable communication and information exchange between users and devices. These components
include:
1. Endpoints: These are the devices that are connected to the internet, such as computers,
smartphones, servers, and other devices. Endpoints communicate with each other through
the network.
2. Transmission Media: These are the physical channels through which data is transmitted
over the network, including copper wires, fiber optic cables, and wireless communication
channels.
3. Protocols: These are the rules and standards that govern how data is transmitted and
received over the network. Protocols include the Transmission Control Protocol/Internet
Protocol (TCP/IP), which is used to transfer data over the internet.
4. Network Infrastructure: This includes the routers, switches, and other networking
devices that are used to connect endpoints and transmit data over the network.
5. Domain Name System (DNS): This is the system that translates domain names into IP
addresses, which are used to identify and locate devices on the internet.
6. Web Servers: These are the servers that host websites and web applications, allowing
users to access and interact with content on the web.
7. Clients: These are the software applications that users use to interact with web servers
and access content on the web, including web browsers, email clients, and other
applications.
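The DNS translation in component 5 can be sketched as a simple name-to-address lookup. This toy cache stands in for the real, distributed DNS hierarchy; the domain names and addresses below are documentation-range examples, not real services:

```python
# A DNS resolver maps human-readable names to IP addresses.
# Addresses here are from the RFC 5737 documentation range (192.0.2.0/24).
dns_cache = {
    "shop.example.com": "192.0.2.10",
    "api.example.com": "192.0.2.20",
}

def resolve(name):
    # Return the cached address, or fail the way a real resolver reports NXDOMAIN.
    ip = dns_cache.get(name)
    if ip is None:
        raise LookupError(f"NXDOMAIN: {name}")
    return ip

print(resolve("shop.example.com"))  # 192.0.2.10
```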
The architecture of the internet has evolved significantly since its early days, and today it is a
complex and decentralized network that spans the globe. The internet has enabled unprecedented
levels of communication and information exchange, and has become an essential part of modern
society.
One of the key challenges facing the internet architecture today is the increasing demand for
bandwidth and network capacity. As more and more devices and services are connected to the
internet, the network must be able to handle the increased traffic and data transfer demands. This
has led to the development of new technologies such as 5G wireless networks and high-speed
fiber optic connections.
Another challenge facing the internet architecture is the need to balance security and privacy
with openness and accessibility. The internet has enabled the free flow of information and
communication, but it has also created new opportunities for cyberattacks, data breaches, and
other security threats. As a result, there is a growing need for robust security measures and
privacy protections to ensure that the internet remains safe and secure for all users.
Computer Hardware
CENTRAL PROCESSING UNIT (CPU)
The central processing unit (CPU) is the central component of a computer system. It is sometimes
called the microprocessor or simply the processor. It is the brain that runs the show inside the
computer: all functions and processes performed on a computer are carried out, directly or
indirectly, by the processor. The processor is therefore one of the most important elements of the
computer system. A CPU consists of transistors that receive inputs and produce outputs; the
logical operations the transistors perform are called processing. It is not only one of the most
remarkable parts of the PC, but one of the most remarkable devices in the world of technology.
Motherboard
Alternatively referred to as the mb, mainboard, mboard, mobo, mobd, backplane board, base
board, main circuit board, planar board, system board, or a logic board on Apple computers. The
motherboard is a printed circuit board and foundation of a computer that is the biggest board in a
computer chassis. It allocates power and allows communication to and between the CPU, RAM,
and all other computer hardware components.
A motherboard provides connectivity between the hardware components of a computer, like the
processor (CPU), memory (RAM), hard drive, and video card. There are multiple types of
motherboards, designed to fit different types and sizes of computers.
Each type of motherboard is designed to work with specific types of processors and memory, so
they are not capable of working with every processor and type of memory. However, hard drives
are mostly universal and work with the majority of motherboards, regardless of the type or
brand.
Microprocessor
A microprocessor is the controlling unit of a micro-computer, fabricated on a small chip, capable
of performing ALU (arithmetic logic unit) operations and communicating with the other devices
connected to it.
Microprocessor consists of an ALU, register array, and a control unit. ALU performs
arithmetical and logical operations on the data received from the memory or an input device.
Register array consists of registers identified by letters like B, C, D, E, H, L and accumulator.
The control unit controls the flow of data and instructions within the computer.
Initially, the instructions are stored in memory in sequential order. The microprocessor fetches
those instructions from memory, then decodes and executes them until a STOP instruction is
reached. It then sends the result, in binary, to the output port. During this process, the
registers store temporary data and the ALU performs the computing functions.
Characteristics of a Microprocessor
Instruction Set − It is the set of instructions that the microprocessor can understand.
Bandwidth − It is the number of bits processed in a single instruction.
Clock Speed − It determines the number of operations per second the processor can perform. It
is expressed in megahertz (MHz) or gigahertz (GHz). It is also known as the clock rate.
Word Length − It depends upon the width of internal data bus, registers, ALU, etc. An 8-bit
microprocessor can process 8-bit data at a time. The word length ranges from 4 bits to 64 bits
depending upon the type of the microcomputer.
Data Types − The microprocessor has multiple data type formats like binary, BCD, ASCII, signed
and unsigned numbers.
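The word-length characteristic above can be demonstrated directly: an 8-bit processor keeps only the low 8 bits of any result, so sums that exceed 255 wrap around.

```python
def add_8bit(a, b):
    # An 8-bit ALU keeps only the low 8 bits of a result (wrap-around).
    return (a + b) & 0xFF

print(add_8bit(100, 100))  # 200 - fits in 8 bits
print(add_8bit(200, 100))  # 44  - 300 wraps around (300 - 256)
```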
Features of a Microprocessor
Cost-effective: Microprocessor chips are available at low prices, resulting in low-cost systems.
Size: The microprocessor is a small chip, and hence portable.
Low Power Consumption: Microprocessors are manufactured using metal-oxide semiconductor
(MOS) technology, which has low power consumption.
Versatility: The microprocessors are versatile as we can use the same chip in a number of
applications by configuring the software program.
Reliability: The failure rate of an IC in microprocessors is very low, hence it is reliable.
Pentium III
The Pentium III model, introduced in 1999, represents Intel’s 32-bit x86 desktop and mobile
microprocessors based on the sixth-generation P6 micro-architecture.
The Pentium III processor supported SDRAM, enabling very fast data transfer between the
memory and the microprocessor. The Pentium III was also faster than its predecessor, the
Pentium II, featuring clock speeds of up to 1.4 GHz. The Pentium III included 70 new computer
instructions which allowed 3-D rendering, imaging, video streaming, speech recognition and
audio applications to run more quickly.
The Pentium III processor was produced from 1999 to 2003, with variants codenamed Katmai,
Coppermine, Coppermine T and Tualatin. The variants’ clock speeds varied from 450 MHz to
1.4 GHz. The Pentium III processor’s new instructions, which extended the earlier multimedia
instruction set called MMX, were optimized for multimedia applications. It supported
floating-point units and integer calculations, which are often required for still or video images
to be modified for computer displays. The new instructions also supported single instruction,
multiple data (SIMD) operations, which allowed a type of parallel processing.
Other Intel brands associated with the Pentium III were Celeron (for low-end versions) and Xeon
(for high-end versions).
Cyrix
Cyrix Corporation was a microprocessor developer that was founded in 1988 in Richardson,
Texas, as a specialist supplier of math coprocessors for 286 and 386 microprocessors. The
company was founded by Tom Brightman and Jerry Rogers. Cyrix founder, President and CEO
Jerry Rogers, aggressively recruited engineers and pushed them, eventually assembling a small
but efficient design team of 30 people.
MMX Technology
MMX is a Pentium microprocessor from Intel that is designed to run faster when playing
multimedia applications. According to Intel, a PC with an MMX microprocessor runs a
multimedia application up to 60% faster than one with a microprocessor having the same clock
speed but without MMX. In addition, an MMX microprocessor runs other applications about
10% faster, probably because of increased cache. All of these enhancements are made while
preserving compatibility with software and operating systems developed for the Intel
Architecture.
MMX is a single instruction, multiple data (SIMD) instruction set designed by Intel, introduced
in January 1997 with its P5-based Pentium line of microprocessors, designated as “Pentium with
MMX Technology”. It developed out of a similar unit introduced on the Intel i860, and earlier
the Intel i750 video pixel processor. MMX is a processor supplementary capability that is
supported on recent IA-32 processors by Intel and other vendors.
The New York Times described the initial push, including Super Bowl ads, as focused on “a new
generation of glitzy multimedia products, including videophones and 3-D video games.”
MMX has subsequently been extended by several programs by Intel and others: 3DNow!,
Streaming SIMD Extensions (SSE), and ongoing revisions of Advanced Vector Extensions
(AVX).
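The packed-SIMD idea behind MMX can be sketched in plain Python: four 16-bit values share one 64-bit word and are added lane by lane, with masking so no lane overflows into its neighbour. This is a simplified model of the concept, not Intel's actual hardware implementation:

```python
def pack(vals):
    # Pack four 16-bit integers into one 64-bit word, lane 0 in the low bits.
    return sum(v << (i * 16) for i, v in enumerate(vals))

def unpack(word):
    # Extract the four 16-bit lanes back out of a 64-bit word.
    return [(word >> (i * 16)) & 0xFFFF for i in range(4)]

def simd_add4x16(x, y):
    # Add the four packed 16-bit lanes; masking keeps carries inside each lane.
    result = 0
    for lane in range(4):
        shift = lane * 16
        a = (x >> shift) & 0xFFFF
        b = (y >> shift) & 0xFFFF
        result |= ((a + b) & 0xFFFF) << shift
    return result

print(unpack(simd_add4x16(pack([1, 2, 3, 4]), pack([10, 20, 30, 40]))))
# [11, 22, 33, 44]
```

Real MMX hardware performs all four lane additions in a single instruction, which is where the speedup for pixel and audio processing comes from.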
Computer software, or simply software, is a collection of data or computer instructions that tell
the computer how to work. This is in contrast to physical hardware, from which the system is
built and actually performs the work. In computer science and software engineering, computer
software is all information processed by computer systems, programs and data. Computer
software includes computer programs, libraries and related non-executable data, such as online
documentation or digital media. Computer hardware and software require each other and neither
can be realistically used on its own.
The majority of software is written in high-level programming languages. They are easier and
more efficient for programmers because they are closer to natural languages than machine
languages. High-level languages are translated into machine language using a compiler or an
interpreter or a combination of the two. Software may also be written in a low-level assembly
language, which has strong correspondence to the computer’s machine language instructions and
is translated into machine language using an assembler.
Most programming languages consist of instructions for computers. There are programmable
machines that use a set of specific instructions, rather than general programming languages.
Early ones preceded the invention of the digital computer, the first probably being the automatic
flute player described in the 9th century by the brothers Musa in Baghdad, during the Islamic
Golden Age. Since the early 1800s, programs have been used to direct the behavior of machines
such as Jacquard looms, music boxes and player pianos. The programs for these machines (such
as a player piano’s scrolls) did not produce different behavior in response to different inputs or
conditions.
Thousands of different programming languages have been created, and more are being created
every year. Many programming languages are written in an imperative form (i.e., as a sequence
of operations to perform) while other languages use the declarative form (i.e. the desired result is
specified, not how to achieve it).
The description of a programming language is usually split into the two components of syntax
(form) and semantics (meaning). Some languages are defined by a specification document (for
example, the C programming language is specified by an ISO Standard) while other languages
(such as Perl) have a dominant implementation that is treated as a reference. Some languages
have both, with the basic language defined by a standard and extensions taken from the dominant
implementation being common.
System Software
System software is software designed to provide a platform for other software. Examples of
system software include operating systems like macOS, GNU/Linux and Microsoft Windows,
computational science software, game engines, industrial automation, and software as a service
applications.
In contrast to system software, software that allows users to do user-oriented tasks such as
creating text documents, playing games, listening to music, or browsing the web is collectively
referred to as application software.
In the early days of computing most application software was custom-written by computer users
to fit their specific hardware and requirements. In contrast, system software was usually supplied
by the manufacturer of the computer hardware and was intended to be used by most or all users
of that system.
The line where the distinction should be drawn is not always clear. Many operating systems
bundle application software; such software is not considered system software when it can be
uninstalled without affecting the functioning of other software. Exceptions include web browsers
such as Internet Explorer, which Microsoft argued in court was system software that could not be
uninstalled. Later examples are Chrome OS and Firefox OS, where the browser functions as the
only user interface and the only way to run programs (and other web browsers cannot be
installed in their place); in these cases the browser can well be argued to be the operating
system, and hence system software.
We have seen that the fundamental purpose of the operating system is to manage the various
system resources. We have also examined the human computer interface which allows us to
interact with the operating system. There is, however, a significant body of software that, while
not strictly part of the operating system itself, cannot be described as application software. This
software is often bundled with the operating system software, and comes under the general
heading of utility software.
Utility software can include file re-organization utilities, backup programs, and a whole range of
communication services. Many of the utilities that are bundled with a particular operating system
are installed by default, although a significant number are optional and must be explicitly
selected for installation.
The number and type of utility programs provided vary from one operating system to another,
but common examples include facilities to partition and format hard drives and floppy disks, file
encryption and compression utilities, and task scheduling programs. These utilities are often
implemented as stand-alone programs and can be run by the user in much the same way as an
application program. In many cases, there are a number of proprietary utility programs on the
market that carry out the same tasks, but with additional value added features.
Utility Software
Although a basic set of utility programs is usually distributed with an operating system (OS), and
this first party utility software is often considered part of the operating system, users often install
replacements or additional utilities. Those utilities may provide additional facilities to carry out
tasks that are beyond the capabilities of the operating system.
Many utilities that might affect the entire computer system require the user to have elevated
privileges, while others that operate only on the user’s data do not.
Intranet
An intranet is defined as a private network of computers within an organization, with its own
server and firewall. Moreover, we can define an intranet as follows:
An intranet is a system in which multiple PCs are networked together. PCs on an intranet are not
available to the world outside the intranet.
Usually each company or organization has its own intranet, and the members/employees of that
company can access the computers on it.
Each computer on an intranet is also identified by an IP address, which is unique among the
computers on that intranet.
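A small check of whether an address belongs to the private ranges typically used inside an intranet can be written with Python's standard ipaddress module (the example addresses are illustrative):

```python
import ipaddress

def is_intranet_address(ip):
    # True for private addresses (e.g. the RFC 1918 ranges 10.0.0.0/8,
    # 172.16.0.0/12, 192.168.0.0/16), which are not routable on the public internet.
    return ipaddress.ip_address(ip).is_private

print(is_intranet_address("192.168.1.25"))  # True
print(is_intranet_address("8.8.8.8"))       # False
```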
Advantages of Intranet
An intranet is a very efficient and reliable network system for any organization. It is beneficial
in every aspect, such as collaboration, cost-effectiveness, security, productivity and much more.
(i) Communication
Intranet offers easy and cheap communication within an organization. Employees can
communicate using chat, e-mail or blogs.
(ii) Platform independence
An intranet can connect computers and other devices with different architectures.
(iii) Productivity
Data is available at all times and can be accessed from any company workstation. This helps the
employees work faster.
(iv) Security
Since information shared on an intranet can only be accessed within the organization, there is
almost no chance of it being stolen.
(v) Specific users
An intranet targets only specific users within an organization; therefore, one can know exactly
with whom one is interacting.
(vi) Immediate updates
Any changes made to information are reflected immediately to all users.
Issues in Intranet
Apart from the several benefits of an intranet, some issues also exist.
Applications
Intranet applications are the same as internet applications, and are likewise accessed through a
web browser. The only difference is that intranet applications reside on a local server, while
internet applications reside on a remote server. Some of these applications are discussed here:
(i) Document publication applications
Document publication applications allow publishing documents such as manuals, software
guides, employee profiles, etc. without the use of paper.
(ii) Electronic resource sharing
An intranet allows electronic resources, such as software applications, templates and tools, to be
shared across the network.
(iii) Interactive communication
As on the internet, we have e-mail and chat-like applications for the intranet, offering interactive
communication among employees.
(iv) Application deployment
An intranet offers an environment in which to deploy and test applications before placing them
on the internet.
Extranet
An extranet is a controlled private network that allows access to partners, vendors and suppliers,
or an authorized set of customers, normally to a subset of the information accessible from an
organization’s intranet. An extranet is similar to a DMZ in that it provides access to needed
services for authorized parties without granting access to an organization’s entire network. In
effect, an extranet extends part of an organization’s private network to selected outside parties.
Historically the term was occasionally also used in the sense of two organizations sharing their
internal networks over a VPN.
During the late 1990s and early 2000s, several industries started to use the term ‘extranet’ to
describe centralized repositories of shared data (and supporting applications) made accessible via
the web only to authorized members of particular work groups – for example, geographically
dispersed, multi-company project teams. Some applications are offered on a software as a service
(SaaS) basis.
Disadvantages of Extranet
Extranets can be expensive to implement and maintain within an organization (e.g., hardware,
software, employee training costs), if hosted internally rather than by an application service
provider.
Security of extranets can be a concern when hosting valuable or proprietary information.
Issues in Extranet
Apart from its advantages, there are also some issues associated with an extranet. These issues
are discussed below:
1. Hosting
Where will the extranet pages be hosted, i.e. who will host them? In this context there are two
choices:
Host them with an Internet Service Provider (ISP), in the same way as ordinary web pages.
Host them on your own server. However, this requires a high-bandwidth Internet connection,
which is very costly.
2. Security
Additional firewall security is required if you host extranet pages on your own server, which
results in a complex security mechanism and an increased workload.
3. Accessing Issues
4. Decreased Interaction
An extranet decreases face-to-face interaction in the business, which results in a lack of
communication among customers, business partners, and suppliers.
Application layer (highest): concerned with the data itself (URL, type, etc.); this is where HTTP,
HTTPS, etc. come in.
Transport layer: responsible for end-to-end communication over a network.
Network layer: provides the data route.
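The layering above can be sketched in a few lines of Python. This is a minimal illustration, not production code: a TCP socket stands in for the transport layer, and the bytes written over it form an application-layer (HTTP-style) message. The server runs in a background thread on a port chosen by the OS.

```python
import socket
import threading

# A TCP socket is the transport layer's interface; the bytes we write over it
# form an application-layer (HTTP-style) message.
def serve_once(server):
    conn, _ = server.accept()
    conn.recv(1024)                          # read the client's request bytes
    body = "hello"
    conn.sendall((
        "HTTP/1.1 200 OK\r\n"
        f"Content-Length: {len(body)}\r\n"
        "\r\n" + body
    ).encode())
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))                # the OS picks a free port
server.listen(1)
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", server.getsockname()[1]))
client.sendall(b"GET / HTTP/1.1\r\nHost: localhost\r\n\r\n")   # application-layer request
response = client.recv(1024).decode()
client.close()
print(response.splitlines()[0])              # HTTP/1.1 200 OK
```

Note that neither endpoint deals with routing or delivery; the network and transport layers handle that beneath the socket API.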
The web is a subset of the internet. It’s a system of Internet servers that support specially
formatted documents. The documents are formatted in a markup language called HTML (that
supports links, multimedia, etc). These documents are interlinked using hypertext links and are
accessible via the Internet.
URI
URI stands for ‘Uniform Resource Identifier’. It is like an address, providing a unique global
identifier for a resource on the Web. The Uniform Resource Locator (URL) is the most commonly
used form of URI.
The Internet is not governed; it has no single authority figure. The ultimate authority for where
the Internet is going rests with the Internet Society, or ISOC.
ISOC appoints the IAB: Internet Architecture Board. They meet regularly to review standards
and allocate resources, like addresses.
IETF: Internet Engineering Task Force. Another volunteer organisation that meets regularly to
discuss operational and technical problems.
1. Client-Side Components: These are the components that run on the client-side, which is
typically the user’s computer or device. Client-side components include web browsers, scripting
languages, and user interface components such as buttons and menus.
2. Server-Side Components: These are the components that run on the server-side, which is
typically a remote server or cloud-based system. Server-side components include web servers,
application servers, and databases.
3. Communication Protocols: These are the protocols that govern how data is transmitted
between the client-side and server-side components. The most common communication
protocols used in web system architecture include HTTP, HTTPS, and WebSockets.
4. Data Formats: These are the formats used to represent and transmit data between the client-
side and server-side components. Common data formats used in web system architecture
include JSON, XML, and CSV.
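Two of these data formats can be demonstrated with Python's standard library alone; the `order` dictionary below is an invented sample record, not from the text.

```python
import csv
import io
import json

order = {"id": 42, "items": ["pen", "book"], "total": 9.5}

# JSON: the usual wire format between browser and server.
payload = json.dumps(order)
restored = json.loads(payload)
print(restored == order)        # True

# CSV: a common format for tabular exports.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["id", "total"])
writer.writerow([order["id"], order["total"]])
print(buf.getvalue().strip().splitlines())   # ['id,total', '42,9.5']
```

JSON preserves the nested structure exactly; CSV flattens it to rows and columns, which is why it suits exports rather than API payloads.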
5. APIs: APIs, or Application Programming Interfaces, are the interfaces that enable
communication and data exchange between different components of the web system. APIs
provide a standardized way for applications and services to interact with each other.
6. Security: Web system architecture must also include security mechanisms to protect against
threats such as hacking, data breaches, and other cyber attacks. Security mechanisms can
include encryption, authentication, and access control.
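One such mechanism, password authentication, can be sketched with the standard library. This is a simplified illustration of salted key derivation; real systems add per-user parameter tuning and secure storage on top of it.

```python
import hashlib
import hmac
import os

# Never store passwords in plain text: derive a salted key and store that.
password = b"s3cret"
salt = os.urandom(16)
stored = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)

# At login, re-derive from the attempt and compare in constant time.
attempt = hashlib.pbkdf2_hmac("sha256", b"s3cret", salt, 100_000)
print(hmac.compare_digest(stored, attempt))   # True
```

The random salt means two users with the same password get different stored values, and the constant-time comparison avoids leaking information through timing.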
1. Client-Server Architecture: This is the most common type of web system architecture, where
the client-side and server-side components are separate entities. The client-side component
typically consists of a web browser, while the server-side component includes a web server,
application server, and database.
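A minimal client-server round trip can be shown with Python's built-in `http.server`; here `urllib` stands in for the browser, and the JSON body is an invented example. This is a sketch for illustration, not a production server.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class ApiHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The server-side component answers every GET with a small JSON body.
        body = json.dumps({"status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):    # keep the example's output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), ApiHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client-side component: here urllib stands in for the browser.
with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/") as resp:
    data = json.load(resp)
server.shutdown()
print(data)   # {'status': 'ok'}
```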
2. Single-Page Applications (SPA): This type of web system architecture is designed to provide a
more responsive user interface, where the user interface is loaded once and then updated
dynamically without requiring a full page refresh. SPA is typically implemented using JavaScript
frameworks such as React and Angular.
3. Microservices Architecture: This architecture is designed to break down a large, monolithic
application into smaller, independent services that can be developed and deployed separately.
Each microservice is responsible for a specific function or feature, and communication between
services is typically done using APIs.
4. Progressive Web Apps (PWA): PWAs are web applications that are designed to provide a native
app-like experience on mobile devices. PWAs use a combination of web technologies such as
HTML, CSS, and JavaScript, along with features such as offline caching and push notifications.
URL is the short form of Uniform Resource Locator. A website URL is the location of a specific
website, page, or file on the Internet. Every URL is made up of multiple parts, and the way yours
are built will have a variety of effects on your site’s security and Search Engine Optimization
(SEO).
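The parts of a URL can be pulled apart with `urllib.parse` from the standard library; the address below is an invented example.

```python
from urllib.parse import urlparse, parse_qs

url = "https://shop.example.com:8080/products/shoes?color=red&size=9#reviews"
parts = urlparse(url)

print(parts.scheme)              # https
print(parts.hostname)            # shop.example.com
print(parts.port)                # 8080
print(parts.path)                # /products/shoes
print(parse_qs(parts.query))     # {'color': ['red'], 'size': ['9']}
print(parts.fragment)            # reviews
```

The scheme, host, path, query string, and fragment are exactly the "multiple parts" referred to above.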
In the field of databases in computer science, a transaction log (also transaction journal, database
log, binary log or audit trail) is a history of actions executed by a database management system
used to guarantee ACID properties over crashes or hardware failures. Physically, a log is a file
listing changes to the database, stored in a stable storage format.
If, after a start, the database is found in an inconsistent state or was not shut down properly, the
database management system reviews the database logs for uncommitted transactions and rolls
back the changes made by these transactions. Additionally, all transactions that are already
committed but whose changes were not yet materialized in the database are re-applied. Both are
done to ensure atomicity and durability of transactions.
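The redo/undo idea can be shown with a toy log. This is a deliberate simplification: real recovery algorithms such as ARIES replay redo in log order and undo in reverse order, and all record values here are invented.

```python
# Toy write-ahead log: each record is (txn, kind, key, before, after).
log = [
    ("T1", "update", "x", 1, 2),
    ("T1", "commit", None, None, None),
    ("T2", "update", "y", 10, 20),    # T2 had not committed at crash time
]

# State found on disk at restart: T1's change was never materialized,
# while T2's uncommitted change was.
db = {"x": 1, "y": 20}

committed = {txn for txn, kind, *_ in log if kind == "commit"}

# Redo committed transactions (durability); undo uncommitted ones (atomicity).
for txn, kind, key, before, after in log:
    if kind != "update":
        continue
    db[key] = after if txn in committed else before

print(db)   # {'x': 2, 'y': 10}
```

After recovery, T1's committed change is re-applied and T2's uncommitted change is rolled back, which is exactly the atomicity and durability guarantee described above.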
This term is not to be confused with other, human-readable logs that a database management
system usually provides.
In database management systems, a journal is the record of data altered by a given process.
All log records include the general log attributes above, and also other attributes depending on
their type (which is recorded in the Type attribute, as above).
Update Log Record notes an update (change) to the database. It includes this extra information:
PageID: A reference to the Page ID of the modified page.
Length and Offset: Length in bytes and offset of the page are usually included.
Before and After Images: Include the values of the bytes of the page before and after the change.
Some databases may have logs which include one or both images.
Compensation Log Record notes the rollback of a particular change to the database. Each
corresponds with exactly one other Update Log Record (although the corresponding update log
record is not typically stored in the Compensation Log Record). It includes this extra information:
undoNextLSN: This field contains the LSN of the next log record that is to be undone for the
transaction that wrote the last Update Log Record.
Commit Record notes a decision to commit a transaction.
Abort Record notes a decision to abort and hence roll back a transaction.
Checkpoint Record notes that a checkpoint has been made. These are used to speed up
recovery. They record information that eliminates the need to read a long way into the log’s
past. This varies according to checkpoint algorithm. If all dirty pages are flushed while creating
the checkpoint (as in PostgreSQL), it might contain:
redoLSN: This is a reference to the first log record that corresponds to a dirty page. i.e. the first
update that wasn’t flushed at checkpoint time. This is where redo must begin on recovery.
undoLSN: This is a reference to the oldest log record of the oldest in-progress transaction. This is
the oldest log record needed to undo all in-progress transactions.
Completion Record notes that all work has been done for this particular transaction. (It has been
fully committed or aborted)
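The record attributes listed above can be collected into simple data structures. These are hypothetical shapes for illustration only; the field names follow the text, but real systems store records in a compact binary layout.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical record shapes; the field names follow the attributes in the text.
@dataclass
class UpdateLogRecord:
    lsn: int
    txn_id: str
    page_id: int          # PageID of the modified page
    offset: int           # offset within the page, in bytes
    length: int           # length of the change, in bytes
    before_image: bytes
    after_image: bytes

@dataclass
class CompensationLogRecord:
    lsn: int
    txn_id: str
    undo_next_lsn: Optional[int]   # LSN of the next record to undo, None if done

rec = UpdateLogRecord(lsn=7, txn_id="T1", page_id=3, offset=128,
                      length=2, before_image=b"\x00\x01", after_image=b"\x00\x02")
clr = CompensationLogRecord(lsn=8, txn_id="T1", undo_next_lsn=None)
print(rec.page_id, clr.undo_next_lsn)   # 3 None
```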
Cookies
Cookies are small files which are stored on a user’s computer. They are designed to hold a
modest amount of data specific to a particular client and website, and can be accessed either by
the web server or the client computer. This allows the server to deliver a page tailored to a
particular user, or the page itself can contain some script which is aware of the data in the cookie
and so is able to carry information from one visit to the website (or related site) to the next.
Writing data to a cookie is usually done when a new webpage is loaded – for example after a
‘submit’ button is pressed the data handling page would be responsible for storing the values in a
cookie. If the user has elected to disable cookies then the write operation will fail, and
subsequent sites which rely on the cookie will either have to take a default action, or prompt the
user to re-enter the information that would have been stored in the cookie.
Cookies are a convenient way to carry information from one session on a website to another, or
between sessions on related websites, without having to burden a server machine with massive
amounts of data storage. Storing the data on the server without using cookies would also be
problematic because it would be difficult to retrieve a particular user’s information without
requiring a login on each visit to the website.
If there is a large amount of information to store, then a cookie can simply be used as a means to
identify a given user so that further related information can be looked up on a server-side
database. For example the first time a user visits a site they may choose a username which is
stored in the cookie, and then provide data such as password, name, address, preferred font size,
page layout, etc. – this information would all be stored on the database using the username as a
key. Subsequently when the site is revisited the server will read the cookie to find the username,
and then retrieve all the user’s information from the database without it having to be re-entered.
The time of expiry of a cookie can be set when the cookie is created. By default the cookie is
destroyed when the current browser window is closed, but it can be made to persist for an
arbitrary length of time after that.
By default cookies are visible to all paths in their domains, but at the time of creation they can be
restricted to a given subpath.
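Both the expiry time and the subpath restriction can be seen with Python's `http.cookies` module; the cookie name, value, and path below are invented examples.

```python
from http.cookies import SimpleCookie

# Server side: build a Set-Cookie header value.
cookie = SimpleCookie()
cookie["username"] = "alice"
cookie["username"]["path"] = "/shop"    # restrict the cookie to a subpath
cookie["username"]["max-age"] = 3600    # persist for one hour (default: session only)
header = cookie["username"].OutputString()
print(header)

# Server side, next visit: parse the Cookie header the browser sends back.
incoming = SimpleCookie("username=alice; theme=dark")
print(incoming["username"].value)   # alice
print(incoming["theme"].value)      # dark
```

Without the max-age attribute the browser treats the cookie as a session cookie and destroys it when the window is closed, matching the default behaviour described above.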
Cookies Security
There is a lot of concern about privacy and security on the internet. Cookies do not in themselves
present a threat to privacy, since they can only be used to store information that the user has
volunteered or that the web server already has. Whilst it is possible that this information could be
made available to specific third party websites, this is no worse than storing it in a central
database. If you are concerned that the information you provide to a webserver will not be
treated as confidential then you should question whether you actually need to provide that
information at all.
Cookies Tracking
Some commercial websites include embedded advertising material which is served from a third-
party site, and it is possible for such adverts to store a cookie for that third-party site, containing
information fed to it from the containing site – such information might include the name of the
site, particular products being viewed, pages visited, etc. When the user later visits another site
containing a similar embedded advert from the same third-party site, the advertiser will be able
to read the cookie and use it to determine some information about the user’s browsing history.
This enables publishers to serve adverts targeted at a user’s interests, in theory giving them a
greater chance of being relevant to the user. However, many people see such ‘tracking cookies’
as an invasion of privacy, since they allow an advertiser to build up profiles of users without their
consent or knowledge.