IDENTITY AND ACCESS MANAGEMENT
(IAM)
TRAINING MODULE
This material is meant for IBM Academic Initiative use only. NOT FOR RESALE.
Enterprise IAM Training Module
July 2019
NOTICES
This information was developed for products and services offered in the USA.
IBM may not offer the products, services, or features discussed in this document
in other countries. Consult your local IBM representative for information on the
products and services currently available in your area. Any reference to an IBM
product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product,
program, or service that does not infringe any IBM intellectual property right may
be used instead. However, it is the user's responsibility to evaluate and verify the
operation of any non-IBM product, program, or service. IBM may have patents or
pending patent applications covering subject matter described in this document.
The furnishing of this document does not grant you any license to these patents.
You can send license inquiries, in writing, to:
IBM Director of Licensing IBM Corporation North Castle Drive, MD-NC119
Armonk, NY 10504-1785 United States of America
The following paragraph does not apply to the United Kingdom or any other
country where such provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES
THIS PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND,
EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE.
Some states do not allow disclaimer of express or implied warranties in certain
transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors.
Changes are periodically made to the information herein; these changes will be
incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described
in this publication at any time without notice.
IBM, the IBM logo and ibm.com are trademarks or registered trademarks of the
International Business Machines Corp., registered in many jurisdictions
worldwide. Other product and service names might be trademarks of IBM or
other companies. A current list of IBM trademarks is available on the web at
“Copyright and trademark information” at www.ibm.com/legal/copytrade.html.
Adobe and the Adobe logo are either registered trademarks or trademarks of
Adobe Systems Incorporated in the United States, and/or other countries.
Microsoft, Windows, and the Windows logo are trademarks of Microsoft
Corporation in the United States, other countries, or both.
CHAPTER 1: Introduction
Identity Management (IdM)
The term refers to the entire set of processes and technologies for maintaining
and updating digital identities. Identity lifecycle management includes identity
synchronization, provisioning, de-provisioning, and the ongoing management of
user attributes, credentials and entitlements. It manages an identity's lifecycle
through a combination of processes, organizational structure, and enabling
technologies.
Authentication
Any combination of the following 3 factors will be considered as strong
authentication:
What you know: Password, Passphrase
What you are: Iris, Fingerprint
What you have: Token, Smartcard

Authorization
The 2 primary forms of authorization:
Coarse-grain: High-level and overarching entitlements (Create, Read,
Update, Modify)
Fine-grain: Detailed and explicit entitlements, based on factors such as
time, dept, role and location
The capability of the subject and system to maintain the secrecy of the
authentication factors for identities directly reflects the level of security of that
system. If the process of illegitimately obtaining and using the authentication
factor of a target user is relatively easy, then the authentication system is insecure.
If that process is relatively difficult, then the authentication system is reasonably
secure.
Authorization
Keep in mind that just because a subject has been identified and authenticated
does not mean they have been authorized to perform any function or access all
resources within the controlled environment. It is possible for a subject to be
logged onto a network (that is, identified and authenticated) but to be blocked
from accessing a file or printing to a printer (that is, by not being authorized to
perform that activity). Most network users are authorized to perform only a
limited number of activities on a specific collection of resources. Identification
and authentication are all-or-nothing aspects of access control.
Authorization has a wide range of variations between all or nothing for each
object within the environment. A user may be able to read a file but not delete
it, print a document but not alter the print queue, or log on to a system but not
access any resources. Authorization is usually defined using one of the concepts
of access control, such as discretionary access control (DAC), mandatory access
control (MAC), or role-based access control (RBAC).
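To make the idea concrete, the sketch below (illustrative Java with invented role
and permission names, not any specific product's API) shows the essence of a
role-based check: roles map to permissions, and an access decision asks whether
any role held by the subject grants the requested permission.

import java.util.Map;
import java.util.Set;

public class RbacCheck {
    // Hypothetical role-to-permission assignments for illustration only.
    static final Map<String, Set<String>> ROLE_PERMS = Map.of(
            "clerk",   Set.of("file:read", "printer:print"),
            "manager", Set.of("file:read", "file:delete", "queue:alter"));

    // A subject is authorized if any of its roles grants the permission.
    static boolean isAuthorized(Set<String> roles, String permission) {
        return roles.stream()
                .anyMatch(r -> ROLE_PERMS.getOrDefault(r, Set.of()).contains(permission));
    }

    public static void main(String[] args) {
        Set<String> roles = Set.of("clerk");
        // A clerk may read a file but not alter the print queue.
        System.out.println(isAuthorized(roles, "file:read"));   // true
        System.out.println(isAuthorized(roles, "queue:alter")); // false
    }
}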
AAA Services
You may have heard of the concept of AAA services. The three As in this
acronym refer to authentication, authorization, and accounting (or sometimes
auditing). However, what is not as clear is that although there are three letters in
the acronym, it refers to five elements: identification, authentication,
authorization, auditing, and accounting. Thus, the first and the third/last A
actually represent two concepts instead of just one.
Auditing: Recording a log of the events and activities related to the system
and subjects.
Auditing
The audit trails created by recording system events to logs can be used to evaluate
the health and performance of a system. System crashes may indicate faulty
programs, corrupt drivers, or intrusion attempts. The event logs leading up to a
crash can often be used to discover the reason a system failed. Log files provide
an audit trail for re-creating the history of an event, intrusion, or system failure.
Auditing is needed to detect malicious actions by subjects, attempted intrusions,
and system failures and to reconstruct events, provide evidence for prosecution,
and produce problem reports and analysis.
Accountability
To have viable accountability, you must be able to support your security in a court
of law. If you are unable to legally support your security efforts, then you will be
unlikely to be able to hold a human accountable for actions linked to a user
account. With only a password as authentication, there is significant room for
doubt. Passwords are the least secure form of authentication, with dozens of
different methods available to compromise them.
Directory services: Hosts the user profiles and associated credentials that
are used to access applications.
Digital identity: The ID itself, including the description of the user and
his/her/its access privileges. (“Its” because an endpoint, such as a laptop or
smartphone, can have its own digital identity.)
Entitlement: The set of attributes that specify the access rights and
privileges of an authenticated security principal.
Security principal: A digital identity with one or more credentials that can
be authenticated and authorized to interact with the network.
Single sign-on (SSO): A type of access control for multiple related but
separate systems. With a single username and password, a user can access a
system or systems without using different credentials.
Need for IAM
Secure user access plays a key role in the exchange of data and information. In
addition, electronic data is becoming ever more valuable for most companies.
Access protection must therefore meet increasingly strict requirements – an
issue that is often solved by introducing strong authentication. Modern IAM
solutions allow administering users and their access rights flexibly and
effectively, enabling multiple ways of cooperation.
Also, IAM is a prerequisite for the use of cloud services, as such services may
involve outsourcing of data, which in turn means that data handling and access
has to be clearly defined and monitored. At the same time, companies are facing
the challenge of having to work with various forms of IAM data provided by
historically grown systems. To be able to meet current security requirements and
react quickly if required, they need to identify and consolidate such data sources
and define a data lifecycle. The governance level defines the regulatory
framework and the compliance and review procedures. The management level
allows administration of identities, rights and authorization tokens. The execution
level carries out the resulting authentication and access decisions at run time.
On the other hand, overly broad access rights might make things much easier for
an attacker who manages to compromise an over-privileged employee identity.
Poor identity and access management also often leads to individuals retaining
privileges after they are no longer employees.
Business Challenges
Today’s enterprise IT departments face the increasingly complex challenge of
providing granular access to information resources, using contextual information
about users and requests, while successfully restricting unauthorized access to
sensitive corporate data.
One way organizations can recruit and retain the best talent is to remove the
constraints of geographic location and offer a flexible work environment. A
remote workforce allows businesses to boost productivity while keeping expenses
in check, as well as untethering employees from a traditional office setting.
However, with employees scattered all over a country or even the world,
enterprise IT teams face a much more daunting challenge: maintaining a
consistent experience for employees connecting to corporate resources without
sacrificing security. The growth of mobile computing means that IT teams have
less visibility into and control over employees' work practices. The solution: a
comprehensive, centrally managed IAM solution returns the visibility and control
needed for a distributed workforce to an enterprise IT team.
2. Distributed applications
3. Productive provisioning
Without a centralized IAM system, IT staff must provision access manually. The
longer it takes for a user to gain access to crucial business applications, the less
productive that user will be. On the flip side, failing to revoke the access rights
of employees who have left the organization or transferred to different
departments can have serious security consequences. To close this window of
exposure and risk, IT staff must de-provision access to corporate data as quickly
as possible.
5. Password problems
The solution: enterprises can readily make password issues a thing of the past by
federating user identity and extending secure single sign-on (SSO) capabilities
to SaaS, cloud-based, web-based, and virtual applications. SSO can integrate
password management across multiple domains and various authentication and
attribute-sharing standards and protocols.
6. Regulatory compliance
• The risks associated with IAM and how they are addressed.
• The needs of the organization.
• How to start looking at IAM within the organization and what an effective
IAM process looks like.
• The process for identifying users and the number of users present within the
organization.
• The process for authenticating users.
• The access permissions that are granted to users.
• Whether users are inappropriately accessing IT resources.
• The process for tracking and recording user activity.
Deploy: begin assigning access to systems and data using new processes
and workflows.
Optimize: deploy automated and delegated processes only after steady
state has been achieved.
Report: leverage investment to satisfy reporting requirements for
legislation and internal controls.
IAM vendors
The identity and access management vendor landscape is a crowded one,
consisting of both pureplay providers such as Okta and OneLogin and large
vendors such as IBM, Microsoft and Oracle. Below is a list of leading players
based on Gartner’s Magic Quadrant for Access Management, Worldwide, which
was published in June 2017.
Atos (Evidian)
CA Technologies
Centrify
Covisint
ForgeRock
IBM Security Identity and Access Assurance
i-Sprint Innovations
Micro Focus
Microsoft Azure Active Directory
Okta
OneLogin
Optimal IdM
Information describing the various users, applications, files, printers, and other
resources accessible from a network is often collected into a special database that
is sometimes called a directory. As the number of different networks and
applications has grown, the number of specialized directories of information has
also grown, resulting in islands of information that are difficult to share and
manage. If all of this information could be maintained and accessed in a consistent
and controlled manner, it would provide a focal point for integrating a distributed
environment into a consistent and seamless system.
Directories
A directory is a listing of information about objects arranged in some order that
gives details about each object. Common examples are a city telephone
directory and a library card catalog. For a telephone directory, the objects listed
are people; the names are arranged alphabetically, and the details given about
each person are address and telephone number. Books in a library card catalog
are ordered by author or by title, and information such as the ISBN of
the book and other publication information is given.
The terms white pages and yellow pages are sometimes used to describe how a
directory is used. If the name of an object (person, printer) is known, its
characteristics (phone number, pages per minute) can be retrieved. This is like
looking up a name in the white pages of a telephone directory. If the name of an
individual object is not known, the directory can be searched for a list of objects
that meet a certain requirement. This is like looking up a listing of hairdressers
in the yellow pages of a telephone directory. However, directories stored on a
computer are much more flexible than the yellow pages of a telephone directory
because they can usually be searched by specific criteria, not just by a
predefined set of categories.
Because directories are meant to store relatively static information and are
optimized for that purpose, they are not appropriate for storing information that
changes rapidly. For example, the number of jobs currently in a print queue
probably should not be stored in the directory entry for a printer because that
information would have to be updated frequently to be accurate. Instead, the
directory entry for the printer can contain the network address of a print server.
The print server can be queried to get the current queue length if desired. The
information in the directory (the print server address) is static, whereas the
number of jobs in the print queue is dynamic.
A request is typically performed by the directory client, and the process that
looks up information in the directory is called the directory server. In general,
servers provide a specific service to clients. Sometimes a server might become
the client of other servers in order to gather the information necessary to process
a request.
There are also associated LDAP APIs for the C language and ways to access
LDAP from within a Java™ application. Additionally, within the Microsoft
development environment, you can access LDAP directories through its Active
Directory Service Interface (ADSI). In general, with LDAP, the client is not
dependent upon a particular implementation of the server, and the server can
implement the directory however it chooses.
LDAP is an open industry standard that defines a standard method for accessing
and updating information in a directory. LDAP has gained wide acceptance as
the directory access method of the Internet and is therefore also becoming
strategic within corporate intranets. It is being supported by a growing number
of software vendors and is being incorporated into a growing number of
applications.
LDAP defines a communication protocol. That is, it defines the transport and
format of messages used by a client to access data in an X.500-like directory.
LDAP does not define the directory service itself. When people talk about the
LDAP directory, that is the information that is stored and can be retrieved by the
LDAP protocol.
All modern LDAP directory servers are based on LDAP Version 3. You can use
a Version 2 client with a Version 3 server. However, you cannot use a Version 3
client with a Version 2 server unless you bind as a Version 2 client and use only
Version 2 APIs.
All LDAP servers share many basic characteristics since they are based on the
industry standard Request for Comments (RFCs). However, due to
implementation differences, they are not all completely compatible with each
other when there is not a standard defined.
The client and server processes may or may not be on the same machine. A server
is capable of serving many clients. Some servers can process client requests in
parallel. Other servers queue incoming client requests for serial processing if they
are currently busy processing another client’s request.
Distributed Directories
The terms local, global, centralized, and distributed are often used to describe a
directory. These terms mean different things in different contexts. In this section,
we explain how these terms apply to directories.
In general, local means nearby, and global means that something is spread across
the universe of interest. The universe of interest might be a company, a country,
or the Earth. Local and global are two ends of a continuum. That is, something
may be more or less global or local than something else. Centralized means that
something is in one place, and distributed means that something is in more than
one place. As with local and global, something can be distributed to a greater or
lesser extent.
The clients that access information in the directory can be local or remote. Local
clients may all be located in the same building or on the same LAN. Remote
clients might be distributed across the continent or planet. The directory itself can
be centralized or distributed. If a directory is centralized, there may be one
directory server at one location or a directory server that hosts data from
distributed systems. If the directory is distributed, there are multiple servers,
usually geographically dispersed, that provide access to the directory. When a
directory is distributed, the information stored in the directory can be partitioned
or replicated. When information is partitioned, each directory server stores a
unique and non-overlapping subset of the information. That is, each directory
entry is stored by one and only one server. One of the techniques to partition the
directory is to use LDAP referrals. LDAP referrals enable users to refer LDAP
requests to a different server. When information is replicated, the same directory
entry is stored by more than one server. In a distributed directory, some
information may be partitioned while some may be replicated.
By sharing a single directory, the amount of duplicated information is reduced,
costly administration can be eliminated, and security risks are more controllable.
For example, the telephone directory, mail, and Web application can all access
the same directory to retrieve an e-mail address or other information stored in a
single directory entry. The advantage is that the data is kept and maintained in
one place. Various applications can use individual attributes of an entry for
different purposes, provided that they have the correct authority. New uses for
directory information will be realized, and a synergy will develop as more
applications take advantage of the common directory.
One standards drive was led by the CCITT (International Consultative Committee
on Telephony and Telegraphy) and the ISO (International Organization for
Standardization). The CCITT has since become the ITU-T (International
Telecommunications Union - Telecommunication Standardization Sector). This
effort resulted in the OSI (Open Systems Interconnect) Reference Model (ISO
7498), which defined a seven-layer model of data communication with physical
transport at the lower layer and application protocols at the upper layers.
The other standards drive grew up around the Internet and developed from
research sponsored by DARPA (the Defense Advanced Research Projects
Agency) in the United States. The Internet Architecture Board (IAB) and its
subsidiary, the Internet Engineering Task Force (IETF), develop standards for the
Internet in the form of documents called Request for Comments (RFCs), which
after being approved, implemented, and used for a period of time, eventually
become standards (STDs). Before a proposal becomes an RFC, it is called an
Internet Draft.
The OSI protocols developed slowly, and because running the full protocol stack
is resource intensive, they have not been widely deployed, especially in the
desktop and small computer market. In the meantime, TCP/IP and the Internet
were developing rapidly and being put into use. Also, some network vendors
developed proprietary network protocols and products.
X.500 The Directory Server Standard
However, the OSI protocols did address issues important in large distributed
systems that were developing in an ad hoc manner in the desktop and Internet
marketplace. One such important area was directory services. The CCITT created
the X.500 standard in 1988, which became ISO 9594, Data Communications
Network Directory, Recommendations X.500-X.521 in 1990, though it is still
commonly referred to as X.500.
X.500 organizes directory entries in a hierarchical name space capable of
supporting large amounts of information. It also defines powerful search
capabilities to make retrieving information easier. Because of its functionality and
scalability, X.500 is often used together with add-on modules for interoperation
between incompatible directory services.
X.500 specifies that communication between the directory client and the directory
server uses the directory access protocol (DAP). However, as an application layer
protocol, the DAP requires the entire OSI protocol stack to operate. Supporting
the OSI protocol stack requires more resources than are available in many small
environments. Therefore, an interface to an X.500 directory server using a less
resource-intensive or lightweight protocol was desired.
The first version of LDAP was defined in X.500 Lightweight Access Protocol
(RFC 1487), which was replaced by Lightweight Directory Access Protocol
(RFC 1777). LDAP further refines the ideas and protocols of DAS and DIXIE.
It is more implementation neutral and reduces the complexity of clients to
encourage the deployment of directory-enabled applications. Much of the work
on DIXIE and LDAP was carried out at the University of Michigan, which
provides reference implementations of LDAP and maintains LDAP-related Web
pages and mailing lists.
RFC 1777 defines the LDAP protocol itself. Together with the following RFCs,
it defines LDAP Version 2:
The String Representation of Standard Attribute Syntaxes (RFC 1778)
A String Representation of Distinguished Names (RFC 1779)
An LDAP URL Format (RFC 1959)
A String Representation of LDAP Search Filters (RFC 1960)
RFC 2251, which defines the LDAP Version 3 protocol, is a proposed standard,
one step below a draft standard. LDAP V3 extended LDAP V2 in the following
areas:
Referrals: A server that does not store the requested data can refer the
client to another server.
Security: Extensible authentication using Simple Authentication and
Security Layer (SASL) mechanism.
Internationalization: UTF-8 support for international characters.
Extensibility: New object types and operations can be dynamically
defined, and schema published in a standard manner.
Beyond LDAPv3
Recently, the push for encapsulating LDAP operations within XML for use
within Web Services has spawned a new language called the Directory Services
Markup Language (DSML). The most recent version of the specification is
DSMLv2. DSML is an XML schema for representing directory information; it is
a generic import/export format for directory information. Directory information
in DSML can be shared between DSML-aware applications without exposing
the LDAP protocol.
XML provides an effective way to present and transfer data; directory services
allow you to share and manage data, and are thus a necessary prerequisite for
conducting online business; DSML is designed to make directory services more
dynamic by employing XML. DSML is an XML schema for working with
directories, defined using a Document Content Description (DCD). Thus,
DSML allows XML programmers to access LDAP-enabled directories without
having to write to the LDAP interface or use proprietary directory-access
APIs, and provides one consistent way to work with multiple dissimilar
directories.
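As a rough illustration of the DSMLv1 document style (treat this as a sketch
rather than a normative sample; the entry and its DN are taken from the LDIF
example later in this chapter), a single person entry might be rendered as:

<dsml:dsml xmlns:dsml="http://www.dsml.org/DSML">
  <dsml:directory-entries>
    <dsml:entry dn="cn=John Smith, ou=people, o=ibm.com">
      <dsml:objectclass>
        <dsml:oc-value>top</dsml:oc-value>
        <dsml:oc-value>organizationalPerson</dsml:oc-value>
      </dsml:objectclass>
      <dsml:attr name="cn"><dsml:value>John Smith</dsml:value></dsml:attr>
      <dsml:attr name="sn"><dsml:value>Smith</dsml:value></dsml:attr>
    </dsml:entry>
  </dsml:directory-entries>
</dsml:dsml>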
Directory Components
A directory contains a collection of objects organized in a tree structure. The
LDAP naming model defines how entries are identified and organized. Entries
are organized in a tree-like structure called the Directory Information Tree (DIT).
Entries are arranged within the DIT based on their distinguished name (DN). A
DN is a unique name that unambiguously identifies a single entry. DNs are made
up of a sequence of relative distinguished names (RDNs). Each RDN in a DN
corresponds to a branch in the DIT leading from the root of the DIT to the
directory entry. A DN is composed of a sequence of RDNs separated by commas,
such as cn=thomas,ou=IT,o=company.
You can organize entries, for example, by organization, and within a single
organization you can further split the tree into organizational units, and so forth.
You can define your DIT based on your organizational needs.
For example, for a company with different divisions, you may want to start
with your company name under the root as the organization (o) and then
branch into organizational units (ou) for the individual divisions. In case
you store data for multiple organizations within a country, you may want
to start with a country (c) and then branch into organizations.
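For instance, a DIT for a single company with two divisions might be laid out
as follows (an illustrative layout with invented entry names, not a prescribed
structure):

o=company
|-- ou=IT
|     |-- cn=thomas
|-- ou=HR
      |-- cn=sandy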
LDAP Standards
Several standards in the form of IETF RFCs exist for LDAP, covering both
LDAP Version 2 and Version 3.
LDAP defines the content of messages exchanged between an LDAP client and
an LDAP server. The messages specify the operations requested by the client
(that is, search, modify, and delete), the responses from the server, and the
format of data carried in the messages. LDAP messages are carried over
TCP/IP, a connection-oriented protocol, so there are also operations to establish
and disconnect a session between the client and server.
However, for the designer of an LDAP directory, it is not so much the structure
of the messages being sent and received over the wire that is of interest. What is
important is the logical model that is defined by these messages and data types,
how the directory is organized, what operations are possible, how information is
protected, and so forth.
The philosophy of the LDAP API is to keep simple things simple. This means
that adding directory support to existing applications can be done with low
overhead. Because LDAP was originally intended as a lightweight alternative to
DAP for accessing X.500 directories, it follows an X.500 model. The directory
stores and organizes data structures known as entries. A directory entry usually
describes an object such as a person, device, a location, and so on. Each entry
has a name called a distinguished name (DN) that uniquely identifies it. The DN
consists of a sequence of parts called relative distinguished names (RDNs),
much like a file name consists of a path of directory names in many operating
systems such as UNIX® and Windows. The entries can be arranged into a
hierarchical tree-like structure based on their distinguished names. This tree of
directory entries is called the Directory Information Tree (DIT).
Each entry contains one or more attributes that describe the entry. Each attribute
has a type and a value. For example, the directory entry for a person might have
an attribute called telephoneNumber. The syntax of the telephoneNumber
attribute would specify that a telephone number must be a string of numbers that
can contain spaces and hyphens. The value of the attribute would be the
person’s telephone number, such as 512-555-1212.
LDAP defines operations for accessing and modifying directory entries such as:
Binding and unbinding
Searching for entries meeting user-specified criteria
Adding an entry
Deleting an entry
Modifying an entry
Modifying the distinguished name or relative distinguished name of an
entry (move)
Comparing an entry
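As an illustration of the operations just listed, the following sketch uses the
standard Java JNDI LDAP provider (Java being one of the LDAP access routes
mentioned earlier) to bind, search, and unbind. The host name is a placeholder,
and an anonymous bind is assumed because no credentials are supplied.

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.NamingException;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

public class LdapSearchExample {
    public static void main(String[] args) throws NamingException {
        // Connection settings; ldap.example.com is a placeholder host.
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://ldap.example.com:389");

        // Creating the context performs the bind (anonymous here).
        InitialDirContext ctx = new InitialDirContext(env);

        // Search the whole subtree under ou=People for the surname Smith.
        SearchControls controls = new SearchControls();
        controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
        NamingEnumeration<SearchResult> results =
                ctx.search("ou=People, o=ibm.com", "(sn=Smith)", controls);
        while (results.hasMore()) {
            System.out.println(results.next().getNameInNamespace());
        }

        ctx.close(); // unbind
    }
}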
The basic unit of information stored in the directory is called an entry. Entries
represent objects of interest in the real world such as people, servers,
organizations, and so on. Entries are composed of a collection of attributes that
contain information about the object. Every attribute has a type and one or more
values. The type of the attribute is associated with a syntax. The syntax specifies
what kind of values can be stored. For example, an entry might have a
facsimileTelephoneNumber attribute. The syntax associated with this type of
attribute would specify that the values are telephone numbers represented as
printable strings, optionally followed by keywords describing paper size and
resolution characteristics. It is possible that the directory entry for an
organization would contain multiple values in this attribute; that is, an
organization or person represented by the entry would have multiple fax
numbers. The relationship between a directory entry and its attributes and their
values is shown in Figure 2-1.
For example, using the correct definitions, the telephone numbers 512-838-6008,
512838-6008, and 5128386008 are considered the same. A few of the
syntaxes that have been defined for LDAP are listed in Table 2-1.
Table 2-2 lists some common attributes. Some attributes have alias names that
can be used wherever the full attribute name is used. For example, cn can be
used when referring to the attribute commonName.
Constraints can be associated with attribute types to limit the number of values
that can be stored in the attribute or to limit the total size of a value. For
example, an attribute that contains a photo could be limited to a size of 10 KB to
prevent the use of unreasonable amounts of storage space. Or an attribute used
to store a social security number could be limited to holding a single value.
Schemas define the type of objects that can be stored in the directory. Schemas
also list the attributes of each object type and whether these attributes are
required or optional. For example, in the person schema, the attribute surname
(sn) is required, but the attribute description is optional. Schema-checking
ensures that all required attributes for an entry are present before an entry is
stored. Schema-checking also ensures that attributes not in the schema are not
stored in the entry. Optional attributes can be filled in at any time. Schema also
define the inheritance and subclassing of objects and where in the DIT structure
(hierarchy) objects may appear.
Table 2-3 lists a few of the common schema (object classes and their required
attributes). In many cases, an entry can consist of more than one object class.
Though each server can define its own schema, for interoperability it is expected
that many common schema will be standardized (refer to RFC 2252, Lightweight
Directory Access Protocol (v3): Attribute Syntax Definitions, and RFC 2256, A
Summary of the X.500(96) User Schema for use with LDAPv3).
There are times when new schema will be needed at a particular server or within
an organization. In LDAP Version 3, a server is required to return information
about itself, including the schema that it uses. A program can therefore query a
server to determine the contents of the schema. This server information is stored
at the special zero-length DN (the root DSE). Objects can be derived from other
objects. This is known as subclassing. For example, suppose an object class
called person was defined that included an attribute surname, and so on. An
object class organizationalPerson could be defined as a subclass of the person
object class. The organizationalPerson object class would have the same
attributes as the person object class and could add other attributes such as title
and officenumber. The person object class would be called the superior of the
organizationalPerson object class. One special object class, called top, has no
superiors. The top object class includes the mandatory objectClass attribute.
Attributes in top appear in all directory entries as specified (required or optional).
Each directory entry has a special attribute called objectClass. The value of the
objectClass attribute is a list of two or more schema names. These schema define
what type of object(s) the entry represents. One of the values must be either top
or alias. Alias is used if the entry is an alias for another entry, otherwise top is
used. The objectClass attribute determines what attributes the entry must and may
have.
The special object class extensibleObject allows any attribute to be stored in the
entry. This can be more convenient than defining a new object class to add a
special attribute to a few entries, but also opens up that object to be able to contain
anything (which might not be a good thing in a structured system).
LDIF
When an LDAP directory is loaded for the first time or when many entries have
to be changed at once, it is not very convenient to change every single entry on
a one-by-one basis. For this purpose, LDAP supports the LDAP Data
Interchange Format (LDIF) that can be seen as a convenient, yet necessary, data
management mechanism. It enables easy manipulation of mass amounts of data.
See Example 2-1 for the basic form of an LDIF entry.
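Example 2-1 Basic form of an LDIF entry

dn: <distinguished name>
objectclass: <object class>
objectclass: <object class>
...
<attribute type>: <attribute value>
<attribute type>: <attribute value>
...

This general form is the one defined by the LDIF specification (RFC 2849).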
A line can be continued by starting the next line with a single space or tab
character, for example:
dn: cn=John E Doe, o=University of Higher
 Learning, c=US
Multiple entries within the same LDIF file are separated by a blank line.
Multiple blank lines are considered a logical end-of-file.
Example 2-2 shows a simple LDIF file which contains an organizational unit,
People, located beneath the organization ibm.com in the DIT. The entry of John
Smith is the only data entry for People. Further on, there is an organizational unit
called marketing. Note that John Smith is a member of the marketing department
due to the attribute value pair ou: marketing.
Example 2-2 Example LDIF file with organizational and person entries

dn: o=ibm.com
objectclass: top
objectclass: organization
o: ibm.com

dn: ou=People, o=ibm.com
objectclass: organizationalUnit
ou: people

dn: ou=marketing, o=ibm.com
objectclass: organizationalUnit
ou: marketing

dn: cn=John Smith, ou=people, o=ibm.com
objectclass: top
objectclass: organizationalPerson
cn: John Smith
sn: Smith
givenname: John
uid: jsmith
ou: marketing
ou: people
telephonenumber: 838-6004
LDAP Schema
The order in which object classes are listed may suggest a hierarchical
relationship between them, but it does not necessarily reflect one. The top
objectclass is of course at the top of the hierarchy. Most other objectclasses that
are not intended to be subordinate to another class should have top as their
superior. Some LDAP directories do not expect a user record to have the top
object class assigned to it, while others require it for using Access Control Lists
(ACLs) on the object. The person class is subordinate to the top class and
requires that the cn (Common Name) and sn (Surname) attributes be populated,
and it allows several other optional attributes. The organizationalPerson class
inherits from the person class. The inetOrgPerson class inherits from the
organizationalPerson class. Now here is the tricky part: The eDominoAccount
object class is subordinate to the top class and requires that the sn and userid
attributes be populated. Notice that this overlaps with the person object class
requirement for the sn attribute. Does this mean that we need to store the sn
attribute twice? No, because it is a standard attribute. We will talk more about
attributes a little later in this section. This illustrates that you cannot necessarily
tell the hierarchical relationship of object classes by the order in which they
appear in a list. So then, how do we tell? We tell (or in reality, your LDAP
directory interface shows you) by looking at the object class definitions
themselves. The methods for defining object classes for LDAP V3 are described
in RFC 2251 and RFC 2252. Example 2-4 shows object class definitions taken
from ITDS.
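A representative definition in the same style is the standard person class as
published in RFC 2256 (the optional DESC field does not appear in that RFC's
definition):

objectclasses=( 2.5.6.6 NAME 'person' SUP top STRUCTURAL
    MUST ( sn $ cn )
    MAY ( userPassword $ telephoneNumber $ seeAlso $ description ) )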
Note that each object class definition begins with a string of numbers delimited
by dots. This number is referred to as the OID (object identifier). After the
OID is the object class name (NAME) followed by a description (DESC). If it is
subordinate to another object class, the superior (SUP) object class is listed.
Finally, the object class definition specifies what attributes are mandatory
(MUST) and which are optional (MAY).
The OID is a numeric string that is used to uniquely identify an object. OIDs are
a managed hierarchy administered by the International Organization for
Standardization (ISO - Web site http://www.iso.ch/) and the International
Telecommunication Union (ITU - Web site http://www.itu.ch/). ISO and ITU
delegate OID management to organizations by assigning them OID numbers.
Organizations can then assign OIDs to objects or further delegate to other
organizations. OIDs are associated with objects in protocols and data structures
defined using Abstract Syntax Notation (ASN.1).
OIDs are intended to be globally unique. They are formed by taking a unique
numeric string (for example, 1.3.4.7.4.17) and adding additional digits in a unique
fashion (such as 1.3.4.7.4.17.1, 1.3.4.7.4.17.2, 1.3.4.7.4.17.3, etc.) An
organization may acquire a "branch" from some root or vertex in the OID tree.
Such a branch is more commonly referred to as an arc (in the previous example
it was 1.3.4.7.4.17). The organization may then extend the arc (called subarcs) as
shown above to create additional OIDs and arcs. We have no idea why the
terminology for the OID tree uses the words "vertex" and "arc" instead of "root"
and "branch" as is more commonly used in LDAP and its X.500 heritage. If you
have an LDAP directory that is a derivative of the original University of Michigan
LDAP code (many open source and commercial LDAP directory servers are),
your object class definitions are contained in files ending with ".oc".
As you may have guessed, the "dot notation" as first used by the IETF for IP
addresses was adopted for OIDs to keep things simple. However, unlike IP
addresses, there is no limit to the length of an OID.
If your organization must define its own attributes for use within your internal
directories, you should consider obtaining your own private enterprise number
arc to identify these attributes. We do not recommend that you "make up" your
own numbers, as you will probably not be able to interoperate with other
organizations (or some vendors' LDAP products). This is not to say that
obtaining your own OID arc from ISO, IANA, or some other authority to define
your own object classes and attributes will guarantee interoperability. But it will
prevent you from using OIDs that have already been assigned to or by someone
else. OIDs are only used for "equality matching". That is, two objects (for
example, directory attributes or certificate policies) are considered to be the same
if they have exactly the same OID. There are no implied navigational or
hierarchical capabilities with OIDs (unlike IP addresses, for example); given an
OID, one cannot readily find out who owns the OID, related OIDs, and so on.
OIDs exist to provide a unique identifier. There is nothing to stop two
organizations from picking the same names for objects that they manage;
however, the OIDs will be unique, assuming they were assigned from legitimate
arc numbers.
Let us look at the following example: Top is an abstract class that contains the
objectClass attribute. Person is a structural class that instantiates the directory
entry for a given person where the objectClass attribute is also part of that
Person entry. So far, this example has used only attributes and object classes
defined in a standard. So, now, you may want to tailor the people entries to
include information that your company requires and that is not defined in the
standard Person object definition. There are two ways to do this:
Subclass the Person object to create a new structural class that includes
those additional attributes defined by your company and instantiate the
Person directory entry based on that new class.
Define that collection of company attributes needed for your company’s
Person definition as an auxiliary class and attach it to the directory entry
instantiated from the Person class.
Either method is workable. The downside to auxiliary classes is that
if the auxiliary class includes an attribute that is also included in the
structural class definition, and that attribute is included in the
instantiated directory entry with multiple values, then when you want to
delete the attribute you cannot tell whether it was added when the
structural class was instantiated or when the auxiliary class was
instantiated. This may be important to the implementor or administrator.
Special entries exist in the namespace, called aliases. Aliases represent links to
other entries or partitions of the namespace. When the distinguished name of an
alias is used, the entry accessed is the entry to which the alias refers (unless
specified otherwise through the programming interface). The collection of
directory entries forms the Directory Information Tree (DIT). The method of
storage for the DIT of the LDAP directory is implementation-dependent and
hidden from the user of that LDAP directory. For example, the ITDS uses IBM
DB2 as its data store, but no DB2 constructs are externalized to LDAP.
Attributes
All the object class does is define the attributes, or types of data items contained
in that type of object. Some examples of typical attributes are cn (common
name), sn (surname), givenName, mail, uid, and userPassword. Just as the
object classes are defined with unique OIDs, each attribute also has a unique
OID number assigned to it. LDAP V3 attributes follow a notation (ASN.1)
similar to that of object classes. Example 2-6 shows some attribute definitions.
attribute: sn
attributetypes=(2.5.4.4 NAME ('sn' 'surName') DESC 'This is the X.500
surname attribute, which contains the family name of a person.' SUP 2.5.4.41
EQUALITY 2.5.13.2 ORDERING 2.5.13.3 SUBSTR 2.5.13.4 USAGE
userApplications)
attribute: mail
attributetypes=(0.9.2342.19200300.100.1.3 NAME ('mail' 'rfc822mailbox')
DESC 'Identifies a users primary email address (the email address retrieved and
displayed by white-pages lookup applications).' EQUALITY 2.5.13.2 SYNTAX
1.3.6.1.4.1.1466.115.121.1.15 USAGE userApplications)
Notice in Example 2-6 that the superior (SUP) of sn is the attribute 2.5.4.41,
which happens to be the name attribute. But then the name attribute description
says that it is "unlikely that values of this type itself will occur".
This illustrates just one of the many peculiarities of the way the attributes have
been defined. It merely provides a shorthand way of defining name-like
attributes such as surname. We did not need to define the syntax for sn because
it inherits this from name.
The attribute mail also has an alias of rfc822mailbox. As you may have guessed,
the "EQUALITY" and "SYNTAX" are yet more ASN.1 definitions.
Each RDN in a DN corresponds to a branch in the DIT leading from the root of
the DIT to the directory entry. Each RDN is derived from the attributes of the
directory entry. In the simple and common case, an RDN has the form
<attribute name> = <value>. A DN is composed of a sequence of RDNs
separated by commas.
A simple example illustrates the basic concepts. In a diagram of a DIT, each box
represents a directory entry. The root directory entry is conceptual and does not
actually exist.
The type is case-insensitive and the value is defined to have a particular syntax.
The order of RDNs in an LDAP name is the most specific RDN first followed by
the less specific RDNs moving up the DIT hierarchy. A concatenated series of
RDNs equates to a distinguished name. The DN is used to represent an object and
the path to the object in the hierarchical namespace. A URL format for LDAP has
been defined that includes a DN as a component of the URL. These forms are
explained in the sections that follow.
Every entry in the directory has a DN. The DN is the name that uniquely
identifies an entry in the directory. A DN is made up of attribute=value pairs,
separated by commas, for example:
cn=Roger Smith,ou=sales,o=ibm,c=US
cn=Sandy Brown,ou=marketing,o=ibm,c=US
cn=Leslie Jones,ou=development,o=ibm,c=US
Any of the attributes defined in the directory schema may be used to make up a
DN. The order of the component attribute value pairs is important. The DN
contains one component for each level of the directory hierarchy from the root
down to the level where the entry resides. LDAP DNs begin with the most
specific attribute (usually some sort of name), and continue with progressively
broader attributes, often ending with a country attribute. The first component of
the DN is referred to as the Relative Distinguished Name (RDN). It identifies an
entry distinctly from any other entries that have the same parent. In the examples
above, the RDN cn=Roger Smith separates the first entry from the second entry,
(with RDN cn=Sandy Brown). These two example DNs are otherwise equivalent.
The attribute:value pair making up the RDN for an entry must also be present in
the entry. (This is not true of the other components of the DN.)
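For programmatic handling of DNs, the standard Java class
javax.naming.ldap.LdapName can split a DN string into its RDNs; a minimal
sketch (the DN is one of the examples above):

import javax.naming.InvalidNameException;
import javax.naming.ldap.LdapName;
import javax.naming.ldap.Rdn;

public class DnParser {
    public static void main(String[] args) throws InvalidNameException {
        // Parse a DN into its relative distinguished names (RDNs).
        LdapName dn = new LdapName("cn=Roger Smith,ou=sales,o=ibm,c=US");
        // LdapName indexes RDNs root-first (c=US at index 0), whereas
        // the string form lists the most specific RDN first.
        for (Rdn rdn : dn.getRdns()) {
            System.out.println(rdn.getType() + " = " + rdn.getValue());
        }
    }
}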
The Distinguished Name (DN) syntax supported by this server is based on RFC
2253. The Backus-Naur Form (BNF) syntax is shown in Example.
Example DN syntax
<name> ::= <name-component> ( <spaced-separator> )
| <name-component> <spaced-separator> <name>
<spaced-separator> ::= <optional-space>
<separator>
<optional-space>
<separator> ::= "," | ";"
<optional-space> ::= ( <CR> ) *( " " )
<name-component> ::= <attribute>
| <attribute> <optional-space> "+"
<optional-space> <name-component>
<attribute> ::= <string>
| <key> <optional-space> "=" <optional-space> <string>
<key> ::= 1*( <keychar> ) | "OID." <oid> | "oid." <oid>
<keychar> ::= letters, numbers, and space
<oid> ::= <digitstring> | <digitstring> "." <oid>
<digitstring> ::= 1*<digit>
<digit> ::= digits 0-9
<string> ::= *( <stringchar> | <pair> )
| '"' *( <stringchar> | <special> | <pair> ) '"'
| "#" <hex>
<special> ::= "," | "=" | <CR> | "+" | "<" | ">"
| "#" | ";"
<pair> ::= "\" ( <special> | "\" | '"')
<stringchar> ::= any character except <special> or "\" or '"'
<hex> ::= 2*<hexchar>
In addition, space (' ' ASCII 32) characters may be present either before or after
a '+' or '='. These space characters are ignored when parsing.
String Form
The exact syntax for names is defined in RFC 2253. Rather than duplicating the
RFC text, the following are examples of valid distinguished names written in
string form:
ou=Sales+cn=J. Smith, o=Widget Inc., c=US
This is a name containing three RDNs, in which the first RDN is multi-valued.
cn=L. Eagle, o=Sue\, Grabbit and Runn, c=GB
This example shows the method of quoting a comma (using a backslash as the
escape character) in an organization name.
cn=Before\0DAfter,o=Test,c=GB
This is an example name in which a value contains a carriage return character
(0DH).
sn=Lu\C4\8Di\C4\87
This last example represents an RDN surname value consisting of five letters
(including non-standard ASCII characters) that is written in printable ASCII
characters. The below table explains the quoted character codes.
URL Form
The LDAP URL format has the general form ldap://<host>:<port>/<path>, where
<path> has the form <dn>[?<attributes>[?<scope>?<filter>]]. The <dn> is an
LDAP distinguished name using a string representation. The <attributes> indicate
which attributes should be returned from the entry or entries. If omitted, all
attributes are returned. The <scope> specifies the scope of the search to be
performed. Scopes may be current entry, one-level (current entry’s children), or
the whole subtree. The <filter> specifies the search filter to apply to entries within
the specified scope during the search. The URL format allows Internet clients, for
example, Web browsers, to have direct access to the LDAP protocol and thus
LDAP directories.
ldap://austin.ibm.com/ou=Austin,o=IBM
This is an LDAP URL referring to the entry with the distinguished name
ou=Austin,o=IBM on the LDAP server austin.ibm.com (default port); with no
attributes, scope, or filter specified, all attributes of that single entry are returned.
ldap:///ou=Austin,o=IBM??sub?(cn=Joe Q. Public)
This is an LDAP URL referring to the set of entries found by querying any
capable LDAP server (no hostname was given) and doing a subtree search of the
IBM Austin subtree for any entry with a common name of Joe Q. Public,
retrieving all attributes. The LDAP URL format is defined in RFC 2255.
Functional Model
The LDAP functional model comprises three categories of operations that
can be performed against an LDAPv3 directory service:
Authentication: Bind, Unbind, and Abandon operations used to connect
and disconnect to and from an LDAP server, establish access rights and
protect information.
Query: Search for and compare entries meeting user-specified criteria.
Update: Add an entry, Delete an entry, Modify an entry, and modify the
distinguished name (ModifyRDN) or relative distinguished name of an
entry.
Query
The most common operation is search. The search operation is very flexible and
has some of the most complex options.
The search operation allows a client to request that an LDAP server search
through some portion of the DIT for information meeting user-specified criteria
in order to read and list the result(s). There are no separate operations for read
and list; they are incorporated in the search function. The search can be very
general or very specific. The search operation allows one to specify the starting
point within the DIT, how deep within the DIT to search, what attributes an entry
must have to be considered a match, and what attributes to return for matched
entries.
Base: A DN that defines the starting point, called the base object, of the
search. The base object is a node within the DIT.
Scope: Specifies how deep within the DIT to search from the base object.
There are three choices: baseObject, singleLevel, and wholeSubtree. If
baseObject is specified, only the base object is examined. If singleLevel
is specified, only the immediate children of the base object are examined;
the base object itself is not examined. If wholeSubtree is specified, the
base object and all its descendants are examined.
Alias dereferencing: Specifies whether aliases are dereferenced for entries
under the base object. If aliases are dereferenced, they are treated as
alternate names for objects of interest in the directory. Not dereferencing
aliases allows the alias entries themselves to be examined.
Limits: Searches can be very general, examining large subtrees and causing
many entries to be returned. The user can specify time and size limits to
prevent wayward searching from consuming too many resources. The size
limit restricts the number of entries returned from the search. The time limit
restricts the total time of the search. Servers are free to impose stricter limits
than those requested by the client.
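In the JNDI API used in the earlier sketch, these search parameters map onto a
SearchControls object (the base DN and filter are passed separately to
DirContext.search()); roughly as follows:

import javax.naming.directory.SearchControls;

public class SearchOptions {
    static SearchControls controls() {
        SearchControls controls = new SearchControls();
        // Scope: OBJECT_SCOPE = baseObject, ONELEVEL_SCOPE = singleLevel,
        // SUBTREE_SCOPE = wholeSubtree.
        controls.setSearchScope(SearchControls.ONELEVEL_SCOPE);
        controls.setCountLimit(100);   // size limit: at most 100 entries
        controls.setTimeLimit(10000);  // time limit in milliseconds
        // Attributes to return for matched entries.
        controls.setReturningAttributes(new String[] {"cn", "mail"});
        return controls;
    }
}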
If the server does not contain the base object, it will return a referral to a server
that does, if possible. Once the base object is found, singleLevel and wholeSubtree
searches may encounter other referrals. These referrals are returned in the search
result along with other matching entries. These referrals are called continuation
references because they indicate where a search could be continued. For example,
when searching a subtree for anybody named Smith, a continuation reference to
another server might be returned, possibly along with several other matching
entries. It is not guaranteed that an entry for somebody named Smith actually
exists at that server, only that the continuation reference points to a subtree that
could contain such an entry. It is up to the client to follow continuation references
if desired. Since only LDAP Version 3 specifies referrals, continuation references
are not supported in earlier versions.
The search filter defines criteria that an entry must match to be returned from a
search. The basic component of a search filter is an attribute value assertion of
the form: attribute operator value. For example, to search for a person named
John Smith, the search filter would be cn=John Smith. In this case, cn is the
attribute, = is the operator, and John Smith is the value. This search filter
matches entries with the common name John Smith. The table below shows the
search filter options.
The * character matches any substring and can be used with the = operator. For
example, cn=J*Smi* would match John Smith and Jan Smitty.
Search filters can be combined with Boolean operators to form more complex
search filters. The syntax for combining search filters is:
("&" or "|" (filter1) (filter2) (filter3) ...)
("!" (filter))
The Boolean operators are listed in the table below.
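For example, the filter (&(sn=Smith)(|(ou=marketing)(ou=sales))) matches
entries whose surname is Smith and that belong to either the marketing or the
sales organizational unit, while (!(sn=Smith)) matches any entry whose surname
is not Smith.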
Compare
The compare operation compares an entry for an attribute value. If the entry has
that value, compare returns TRUE. Otherwise, compare returns FALSE.
Although compare is simpler than a search, it is almost the same as a base scope
search with a search filter of attribute=value. The difference is that if the entry
does not have the attribute at all (the attribute is not present), the search will return
not found. This is indistinguishable from the case where the entry itself does not
exist. On the other hand, compare will return FALSE. This indicates that the entry
does exist, but does not have an attribute matching the value specified.
Update Operations
Update operations modify the contents of the directory. The table below
summarizes the update operations.

Operation   Description
add         Inserts new entries into the directory.
delete      Deletes existing entries from the directory. Only leaf nodes can
            be deleted. Aliases are not resolved when deleting.
modify      Changes the attributes and values contained within an existing
            entry. Allows new attributes to be added and existing attributes
            to be deleted or modified.
modifyDN    Changes the least significant (leftmost) component of a DN or
            moves a subtree of entries to a new location in the DIT. Entries
            cannot be moved across server boundaries.
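As a sketch of an update operation in the same JNDI style as the earlier
examples (ctx is assumed to be an already-bound DirContext; the DN and
attribute are taken from Example 2-2):

import javax.naming.NamingException;
import javax.naming.directory.BasicAttribute;
import javax.naming.directory.BasicAttributes;
import javax.naming.directory.DirContext;

public class LdapModifyExample {
    // Replace the telephone number of an existing entry.
    static void updatePhone(DirContext ctx, String dn, String number)
            throws NamingException {
        BasicAttributes mods = new BasicAttributes();
        mods.put(new BasicAttribute("telephonenumber", number));
        ctx.modifyAttributes(dn, DirContext.REPLACE_ATTRIBUTE, mods);
    }
}

A call such as updatePhone(ctx, "cn=John Smith, ou=people, o=ibm.com",
"838-6005") would carry out the modify operation described in the table.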
Controls
Controls are a mechanism defined in LDAP Version 3 for extending the behavior
of existing operations. Controls are added to the end of the operation's protocol
message. They are supplied as parameters to functions in the API.
A control has a dotted decimal string object ID used to identify the control, an
arbitrary control value that holds parameters for the control, and a criticality level.
If the criticality level is TRUE, the server must honor the control or, if it does
not support the control, reject the entire operation. If the criticality level is
FALSE, a server that does not support the control must perform the operation as
if there was no control specified. For example, a control might extend the delete
operation by causing an audit record of the deletion to be logged to a file specified
by the control value information.
Security model
The security model is based on the bind operation. There are several different
bind operations possible, and thus the security mechanism applied is different as
well. One possibility is when a client requesting access supplies a DN identifying
itself along with a simple clear-text password. If no DN and password are
supplied, an anonymous session is assumed by the LDAP server. The use of clear text
passwords is strongly discouraged when the underlying transport service cannot
guarantee confidentiality and may therefore result in disclosure of the password
to unauthorized parties.
Directory security
Security is of great importance in the networked world of computers, and this is
true for LDAP as well. When sending data over insecure networks, internally or
externally, sensitive information may need to be protected during transportation.
There is also a need to know who is requesting the information and who is sending
it. This is especially important when it comes to the update operations on a
directory. The term security, as used in the context of this module, generally
covers the following four aspects:
Authentication
Integrity
Confidentiality
Authorization
The following sections focus on the first three aspects (since authorization is not
yet contained in the LDAP Version 3 standard): authentication, integrity, and
confidentiality. There are several methods that can be used for this purpose; the
most important ones are discussed here. These are:
No authentication.
Basic authentication.
Simple Authentication and Security Layer (SASL). This includes
DIGEST-MD5. When a client uses Digest-MD5, the password is not
transmitted in clear text and the protocol prevents replay attacks.
No authentication
This is the simplest authentication method, one that obviously does not need to
be explained in much detail. This method should only be used when data security
is not an issue and when no special access control permissions are involved. This
could be the case, for example, when your directory is an address book browsable
by anybody. No authentication is assumed when you leave the password and DN
fields empty in an LDAP operation. The LDAP server then automatically assumes
an anonymous user session and grants access with the appropriate access controls
defined for this kind of access (not to be confused with the SASL anonymous
user).
Basic authentication
The security mechanism in LDAP is negotiated when the connection between the
client and the server is established. This is the approach specified in the LDAP
application program interface (API). Besides the option of using no
authentication at all, the simplest security mechanism in LDAP is called basic
authentication, which is also used in several other Web-related protocols, such as
HTTP. When using basic authentication with LDAP, the client identifies itself
to the server by means of a DN and a password, which are sent in the clear over
the network (some implementations may use Base64 encoding, which is trivially
reversible and offers no real protection). The server considers the client
authenticated if the DN and password sent by the client match the password for
that DN stored in the directory. Base64 encoding is defined in the Multipurpose
Internet Mail Extensions (MIME) standard.
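A minimal sketch of a simple (basic) bind with ldap3, assuming a hypothetical
host and credentials; note the ldaps:// URL, since the password would otherwise
travel in the clear:

    # Sketch only: host and credentials are hypothetical.
    from ldap3 import Server, Connection, SIMPLE

    server = Server('ldaps://ldap.example.com')
    conn = Connection(server, user='cn=John Smith,o=example',
                      password='secret', authentication=SIMPLE)
    print(conn.bind())  # True only if the DN/password pair matches the directory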
SASL
In SASL, connection protocols, like LDAP, IMAP, and so on, are represented by
profiles; each profile is considered a protocol extension that allows the protocol
and SASL to work together. A complete list of SASL profiles can be obtained
from the Information Sciences Institute (ISI). Each protocol that intends to use
SASL needs to be extended with a command to identify an authentication
mechanism and to carry out an authentication exchange. Optionally, a security
layer can be negotiated to encrypt the data after authentication and so ensure
confidentiality. LDAP Version 3 includes a bind command (ldap_sasl_bind())
that identifies the authentication mechanism and carries out this exchange.
SSL and TLS
The Secure Socket Layer (SSL) protocol was devised to provide both
authentication and data security. It encapsulates the TCP/IP socket so that
basically every TCP/IP application can use it to secure its communication.
SSL/TLS supports server authentication (client authenticates server), client
authentication (server authenticates client), or mutual authentication. In addition,
it provides for privacy by encrypting data sent over the network.
1. As a first step, the client asks the server for an SSL/TLS session. The client
also includes the SSL/TLS options it supports in the request.
2. The server sends back its SSL/TLS options and a certificate which includes,
among other things, the server’s public key, the identity for whom the certificate
was issued (as a distinguished name), the certifier’s name, and the validity time.
A certificate can be thought of as the electronic equivalent of a passport. It has to
be issued by a general, trusted Certificate Authority (CA) which vouches that the
public key really belongs to the entity mentioned in the certificate. The certificate
is signed by the certifier, which can be verified with the certifier’s freely available
public key.
3. The client then requests the server to prove its identity. This is to make sure
that the certificate was not sent by someone else who intercepted it on a former
occasion.
4. The server sends back a message including a message digest (similar to a
checksum) which is encrypted with its private key. A message digest that is
computed from the message content using a hash function has two features: it is
extremely difficult to reverse, and it is nearly impossible to find a message that
would produce the same digest. The client can decrypt the digest with the
server’s public key and then compare it with the digest it computes from the
message. If both are equal, the server’s identity is proved, and the authentication
process is finished.
5. Next, server and client have to agree upon a secret (symmetric) key used for
data encryption. Data encryption is done with a symmetric key algorithm because
it is more efficient than the computing-intensive public key method. The client
therefore generates a symmetric key, encrypts it with the server’s public key, and
sends it to the server. Only the server with its private key can decrypt the secret
key.
6. The server decrypts the secret key and sends back a test message encrypted
with the secret key to prove that the key has safely arrived. The two parties can
now start communicating using the symmetric key to encrypt the data.
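The handshake described above is what Python’s standard ssl module performs
before any application data flows; a minimal sketch, with a placeholder host
name:

    # Sketch only: the host name is a placeholder.
    import socket
    import ssl

    ctx = ssl.create_default_context()  # loads the trusted CA certificates
    with socket.create_connection(('ldap.example.com', 636)) as sock:
        # wrap_socket runs the full handshake: certificate check, key exchange
        with ctx.wrap_socket(sock, server_hostname='ldap.example.com') as tls:
            print(tls.version())                  # e.g. 'TLSv1.3'
            print(tls.getpeercert()['subject'])   # identity from the certificate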
The following sections are examples of setting up replication using either the
Web Administration Tool or the command line utilities, and an LDIF file. The
scenarios are of increasing complexity:
Overview of Replication
This section presents a high-level description of the various types of replication
topologies.
Simple Replication
The basic relationship in replication is that of a master server and its replica
server. The master server can contain a directory or a subtree of a directory. The
information and example provided here describe this relationship in more detail.
The master is writable, which means it can receive updates from clients for a
given subtree. The replica server contains a copy of the directory or a copy of part
of the directory of the master server. The replica is read only; it cannot be directly
updated by clients. Instead it refers client requests to the master server, which
performs the updates and then replicates them to the replica server.
A master server can have several replicas. Each replica can contain a copy of the
master's entire directory, or a subtree of the directory. In the following example
Replica 2 contains a copy of the complete directory of the Master Server, Replica
1 and Replica 3 each contain a copy of a subtree of the Master Server's directory.
Master-replica replication
The relationship between two servers can also be described in terms of roles,
either supplier or consumer. In the previous example the Master Server is a
supplier to each of the replicas. Each replica in turn is a consumer of the Master
Server.
Cascading Replication
Cascading replication is a topology that has multiple tiers of servers. The
information and example provided here describe this topology in more detail.
Cascading replication
Peer-to-Peer Replication
There can be several servers acting as masters for directory information, with
each master responsible for updating other master servers and replica servers.
This is referred to as peer replication. The information and example provided
here describe this topology in more detail.
Peer replication can improve performance, availability, and reliability.
Performance is improved by providing a local server to handle updates in a widely
distributed network. Availability and reliability are improved by providing a
backup master server ready to take over immediately if the primary master fails.
Peer master servers replicate all client updates to the replicas and to the other peer
masters, but do not replicate updates received from other master servers.
In a Peer-to-peer replication setup with one replica server for each peer-master,
if the primary master fails, the proxy server directs the requests to the backup
master server. However, the proxy server will not fall back to the primary
master until the backup master server fails.
The following figure shows an example of peer-to-peer replication:
Peer-to-peer Replication
Gateway Replication
The most notable difference from peer-to-peer replication is that a gateway
server replicates changes received from other peer servers through the gateway
network.
A gateway server must be a master server, that is, writable. It acts as a peer server
within its own replication site. That is, it can receive and replicate client updates
and receive updates from the other peer-master servers within the replication site.
It does not replicate the updates received from the other peer-masters to any
servers within its own site.
Within the gateway network, the gateway server acts as a two-way forwarding
server. In one instance, the peers in its replication site act as the suppliers to the
gateway server and the other gateway servers are its consumers. In the other
instance the situation is reversed. The other gateway servers act as suppliers to
the gateway server and the other servers within its own replication site are the
consumers.
Gateway replication
Introduction
In the present digital world, users have to access multiple systems to carry out
their day-to-day business activities. As the number of systems increases, the
number of credentials for each user increases, and thereby the possibility of
losing or forgetting them also increases.
For Enterprise SSO, an executable is installed on the user’s desktop and profiles
are created to recognize the login/password change screens so that the agent can
respond to them. An example of such an SSO solution is a password manager that
automatically logs a user in when a certain website is visited. Since no changes
have to be made to the applications, setting up this kind of SSO is in most cases
a relatively easy way to provide SSO.
Complex SSO uses multiple authentication authorities with single or multiple sets
of credentials for each user.
Complex SSO can further be classified as two basic schemes, Complex SSO with
a single set of credentials and Complex SSO with multiple sets of credentials.
A.) Complex SSO with a single set of credentials:
Complex SSO using a single set of credentials can be accomplished in two ways,
token-based and public key-based, as follows:
Credential Synchronization:
The multiple sets of credentials needed to access multiple systems are masked by
a single set of credentials, giving the illusion that users need to remember only
the single set. The synchronization software relieves the user from changing the
credentials in all systems whenever policy requires it, by automatically
forwarding the change request to all concerned authentication servers (e.g.,
PassGo).
SAML uses XML-based messages called Assertions that detail whether users are
authenticated (Authentication Assertion), what kind of rights, roles, and access
they have (Attribute Assertion), and how they can use data and resources based
on those rights and roles (Authorization Assertion). It uses HTTP, SMTP, FTP,
and SOAP, among other protocols and technologies, to transmit these assertions.
OpenID
OpenID is a decentralized authentication protocol. It consists of three main
entities:
1) The OpenID Identifier: A string of text or an e-mail address that uniquely
identifies the user;
2) The OpenID Relying Party (RP): A Web application or service provider that
wants proof that the end user owns the said Identifier;
3) The OpenID Provider (OP): A central server that issues, stores, and manages
the OpenID identifiers of users. Relying Parties rely on this provider for an
assertion that the end user owns the said Identifier.
BrowserID
BrowserID is a decentralized identity system through which users can prove
ownership of their email addresses, allowing them to log in to any website on the
Internet using a single password. It avoids site-specific usernames and
passwords, serving as an alternative to ad-hoc application-level authentication. It
implements the Verified Email Protocol built by Mozilla, which offers a
streamlined experience.
OAuth2
Some sources say that OAuth2 is not an SSO protocol, as it does not provide
authentication. One can disagree with this statement, however, as OAuth2
handles authorization and authentication is required for authorization. Since the
access token can be used by all services, authentication to these services can be
automated because an access token implies that the user is authenticated. The
main difference between an ID token and an access token is that an access token
should only contain access information and does not contain any identifying
information.
In OAuth2, authorization is delegated over several entities. A client asks the
authorization server (AS) for a message saying, in effect, "this client has access
to this resource." The client presents this message to the resource server S, which
trusts the AS and will grant access. In this way the AS can deliver temporary
access tokens to the client without giving the client too much power.
This system can be thought of as a hotel. The hotel has a key master (the OAuth2
server) named AS. He gives all cleaners (clients) keys that open the doors of the
rooms they have to clean. They can only open the doors assigned to them and no
others. All of these keys self-destruct after a few hours, after which time the
cleaners cannot open any door.
LDAP
The Lightweight Directory Access Protocol (LDAP) is a protocol for accessing
distributed directory services, which can be used for authentication and
authorization. LDAP is often used in combination with Active Directory.
LDAP is based on DAP, a protocol defined by the X.500 computer network
standards. The two lightweight protocols for desktop computers that preceded
LDAP were called Directory Assistance Service (DAS) and Directory Interface
to X.500 Implemented Efficiently (DIXIE).
The LDAP protocol consists of LDAP clients and an LDAP server. A client
creates an LDAP message that contains a certain request and sends it to the
server. The server processes this request and sends the results back to the client
as one or more LDAP messages.
For example, when an LDAP client searches for a specific entry, it sends an
LDAP search request message to the server. Each message contains a unique
message ID generated by the client. The server retrieves the entry from its
directory and sends it to the client in a message, followed by a separate message
that contains the result code. All communication between the server and the
client is correlated by the message ID provided in the client’s request.
The unbind operation allows a client to terminate a session, and the abandon
operation allows a client to indicate that it is no longer interested in the results of
an operation it had previously submitted.
CAS
Central Authentication Server (CAS) 1.0 was developed by Yale University as
an easy-to-use Single Sign-On solution for the web. It consisted of servlets and
web pages. The goal was to make CAS a flexible and extensible protocol able
to meet the varying requirements of other institutions.
CAS is an authentication SSO solution, as its name, "Central Authentication
Server", already suggests. CAS is an open protocol that consists of a Java server
component that communicates with clients written in:
• Java
• .Net
• PHP
• Perl
• Apache
• uPortal
The CAS protocol consists of two parts: the CAS server and the CAS client.
CAS server: A CAS server is a single machine used for authentication.
CAS clients: A CAS client is any service provider that is CAS-enabled and can
communicate with the server.
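As a hedged sketch, a CAS client typically validates the service ticket it receives
on redirect by calling the server’s serviceValidate endpoint (per the CAS 2.0
protocol); the server and service URLs below are hypothetical:

    # Sketch only: CAS server URL and service URL are hypothetical.
    import requests

    CAS_SERVER = 'https://cas.example.edu/cas'
    SERVICE = 'https://app.example.edu/'

    def validate(ticket):
        # CAS 2.0 serviceValidate returns an XML success or failure response
        resp = requests.get(CAS_SERVER + '/serviceValidate',
                            params={'service': SERVICE, 'ticket': ticket})
        return '<cas:authenticationSuccess>' in resp.text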
5. Auditing and logging functions should be used to facilitate the detection and
tracing of suspicious unsuccessful login attempts.
Subject   Permissions
Public    R-L
Group     R-X
Owner     R-W-X-D
Admins    FC
System    FC
Mandatory access control (MAC) was developed for the US Army. Creating a
complete deployment using MAC has turned out to be tricky in practice.
However, MAC is a significant theoretical model for access control.
Role-based access control (RBAC) is based on the roles or functions the user is
assigned within the organization. Access control decisions are based on work
roles and organizational roles. Each role has its own access capabilities. There are
no restrictions on how many roles can be assigned to a user, or which permissions
can be assigned to a role. This is described in the figure below.
Objects associated with a role inherit privileges assigned to that role. As seen in
the figure below, there are many approaches to RBAC.
IAM systems support the RBAC model. To develop the maturity level of the
organization, role mining to target systems is needed to find out the system roles
and their suitability to the organizational and work role structure.
With ABAC, auditing becomes a laborious task. Instead of only reviewing users
and their roles, the auditor has to enumerate each user’s attributes and then the
corresponding attributes of the available protected objects. Also, because the
attributes can change dynamically, auditing requires evaluating the rules against
all possible attribute values that may apply while the user is active.
a.) Benefits of RBAC
There are several benefits to using RBAC to restrict unnecessary network access
based on people's roles within an organization, including:
1. Core RBAC
a. Introduces the concept of role activation as part of a user’s
session within a computer system.
b. Core RBAC is required in any RBAC system, but the other
components are independent of each other and may be
implemented separately.
2. Hierarchical RBAC
Role hierarchies
a. General role hierarchies
i. Include the concept of multiple inheritance of permissions
and user membership among roles
b. Limited role hierarchies
i. Impose restrictions
ii. A role may have one or more immediate ascendants but is
restricted to a single immediate descendant.
3. Constrained RBAC
a. Static separation of duty (SSoD)
i. A user cannot be authorized for both roles, e.g., teller and
auditor. SSoD policies deter fraud by placing constraints on
administrative actions and thereby restricting combinations
of privileges that are made available to users.
b. SSD with Hierarchical RBAC
c. Dynamic separation of duty (DSD)
i. A user cannot act simultaneously in both roles, e.g., teller
and account holder.
There are several best practices organizations should follow for implementing
RBAC, including:
• Determine the resources for which they need to control access, if they're
not already listed -- for instance, customer databases, email systems and
contact management systems.
• Analyze the workforce and establish roles that have the same access needs.
However, don't create too many roles, because that would defeat the
purpose of role-based access control and create user-based access control
rather than role-based access control. For instance, there could be a basic
user role that includes the access every employee needs, such as to email
and the corporate intranet. Another role could be that of a customer service
representative who would have read/write access to the customer database,
and yet another role could be that of a customer database admin with full
control of the customer database.
• After creating a list of roles and their access rights, align the employees to
those roles, and set their access.
• Evaluate how roles can be changed, as well as how accounts for employees
who are leaving the company can be terminated and how new employees
can be registered.
• Ensure RBAC is integrated across all systems throughout the company.
• Conduct training so that the employees understand the principles of RBAC.
• Periodically conduct audits of the roles, the employees who are assigned
to them, and the access that's permitted for each role. If a role is found to
have unnecessary access to a certain system, change the role, and modify
the access level for those individuals who are in that role.
While RBAC and ABAC can be very complex subjects, here are four
simple concepts you can refer to not only as you start your IAM
implementation, but on an ongoing basis as your organization and needs
change:
• When you can make access control decisions with broad strokes, use
RBAC. For example, giving all teachers access to Google or all contractors
access to email. When you need more granularity than this or need to make
a decision under certain conditions, use ABAC. For example, giving
teachers access to Google if they are at School X and teach Grade Y.
• A general rule of thumb is that you should try to use RBAC before ABAC,
because at their core, the controls are just searches or filters. The bigger
and more complex the search, the more processing power and time it takes.
And the more users and applications an organization has, the greater the
processing impact the searches/filters will have, because of the increase in
search space.
• Less is More.
If you are creating a lot of very complex RBAC and/or ABAC filters, you
are probably doing something wrong. A little bit of planning in advance
can help you structure your directory data in a way that mitigates the need
to develop complex filters/queries. However, every now and then, you will
definitely have to get creative to establish the right level of access control,
but this should be the exception and not the rule.
• You can always use RBAC and ABAC together in a hierarchical approach.
For example, using RBAC to control who can see which modules and then
using ABAC to control access to what they see (or can do) inside of a
module. This is similar to WAN- and LAN-based firewalls, where the
WAN firewall does the coarse-grain filtering and the LAN-based firewall
does the finer-grain inspections.
Let’s understand the difference in more detail by taking an example where the
permissions for a page and a service are defined for both scenarios.
Coarse grained
Assume the following permission sets, which define access based on the role
assigned to the user:
Rule 1: The users having the role “A” can access the page
“/users/login.html”.
Rule 2: The users with role “B” can access the service “loginservice”.
As per the above rule set, access is governed on the basis of the role associated
with the user, and not based on any other user-specific details or environmental
conditions. Users with the appropriate roles can access the resource irrespective
of any other conditions.
Since a user may normally be assigned to multiple roles simultaneously, there
are different flavors of RBAC (such as flat or hierarchical RBAC systems), but
these still rely on the roles and their combinations. The rule set above is based on
the prominent authorization model called Role-Based Access Control (RBAC).
In this case the rules are quite explicit, and hence we can call this coarse-grained
authorization.
Now imagine that we want to restrict access based on other additional conditions
as well for the same scenario. In such rule sets, the level of authorization is more
detailed, or finer-scaled, than in the earlier rule set. Here we have more
granularity for taking decisions and governing access as per business needs, and
hence we can call it fine-grained authorization (a sketch follows below).
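A minimal Python sketch of the contrast, with hypothetical rule contents
mirroring the roles above:

    # Sketch only: the rules and attribute names are hypothetical.
    def rbac_allows(user_roles, required_role):
        # Coarse-grained: the assigned role alone decides access
        return required_role in user_roles

    def abac_allows(user_attrs, env):
        # Fine-grained: user attributes and environmental conditions
        # refine the same decision
        return (user_attrs.get('role') == 'A'
                and user_attrs.get('department') == 'support'
                and 8 <= env.get('hour', 0) < 18)

    print(rbac_allows({'A'}, 'A'))                                   # True
    print(abac_allows({'role': 'A', 'department': 'support'},
                      {'hour': 10}))                                 # True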
Context-based access provides the following capabilities:
• Silent device registration, where the system does not require any user
interaction.
• Ready-to-use, predefined policy attributes that are specific to context-
based access.
• Scenario-based, predefined risk profiles.
• A risk-scoring engine that calculates a risk score for the current transaction
based on the active risk profile. The risk score is based on configurable
weights that are assigned to context attributes and behavior attributes. If
the risk score is high, further challenges are presented to the user or access
is denied. If the risk score is low, the user is permitted access.
Context-based access policy decisions can be based on the risk score. The risk
score is calculated based on the active risk profile attributes that are retrieved
from the user.
The system allows for multiple risk profiles to be defined, but only one is active
at run time.
Each attribute included on a risk profile has an assigned weight to be used while
calculating the risk score of a given request. The active risk profile attributes are
evaluated to determine whether a user should be granted access to a protected
resource. A policy author can rely on the risk score to enforce stronger
authentication mechanisms or to perform device registration.
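A hedged sketch of such a weighted risk calculation; the attribute names,
weights, and threshold are hypothetical illustrations, not product defaults:

    # Sketch only: attributes, weights, and threshold are hypothetical.
    weights = {'ip_reputation': 40, 'geolocation': 30, 'device_fingerprint': 30}

    def risk_score(mismatched):
        # sum the weights of attributes that differ from the registered
        # profile, normalized to a 0-100 score
        return 100 * sum(weights[a] for a in mismatched) / sum(weights.values())

    score = risk_score({'geolocation'})
    print(score)                                       # 30.0
    print('step-up or deny' if score > 25 else 'permit')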
To get started setting up context-based access control for your installation, work
with:
Business Scenarios
Business transactions that have an increased security risk factor can benefit by
implementing context-based access.
The following examples are some scenarios where you can use context-based
access to provide a higher level of confidence for the transaction:
1. Over the shoulder attack: when a person types in his or her password, someone
might be able to observe what is typed and hence steal the password by looking
over the person’s shoulder, or by indirect monitoring using a camera.
4. Login spoofing attack: this is where an attacker sets up a fake login screen that
is similar in look and feel to the real login screen. When a user logs in to the fake
screen, his or her password is recorded or transmitted to the attacker.
All these attacks, if successful, can help unauthorised users harvest the passwords
of legitimate users. Systems using passwords as the only authentication method
will be unable to differentiate whether the holder of the password is a valid user
or not.
For attackers, the single-password approach means that all systems will
automatically be compromised once passwords in a weakly protected system are
successfully hacked. Therefore, when an organisation decides to use the single-
password approach, all systems must be protected at the same level of security.
General Systems
For external applications, it is often hard to implement tight access control when
compared to internal applications, because an organisation might not have
complete control over the external environment. For instance, users may access a
company’s web applications from a public machine, home PC or other sites where
there is no control over security. There is therefore a greater risk of exposing
passwords to outsiders. If the same password is used for both internal and external
applications, there will be less security protection for internal systems.
The following are desirable security features available in some operating and
application systems that can assist in enforcing some of the recommended
password selection criteria. It is recommended that such features should be
enabled whenever possible.
The idea is very simple. If you want a service, you need to have a ticket for that
service. To obtain a ticket, you must contact the Ticket Granting Service (TGS),
which issues the service ticket. Once the ticket is obtained, you can use it to gain
access to the intended service offered by a Service Server (SS).
SPNEGO
The following diagram shows how a client application obtains a service from a
web application through the standard HTTP protocol.
Certificate Based
The figure below shows how certificates and the SSL protocol are used together for
authentication. To authenticate a user to a server, a client digitally signs a
randomly generated piece of data and sends both the certificate and the signed
data across the network. For the purposes of this discussion, the digital signature
associated with some data can be thought of as evidence provided by the client to
the server. The server authenticates the user’s identity on the strength of this
evidence.
As with the password-based authentication illustrated earlier, this process
assumes that the user has already decided to trust the server and has requested a
resource. The server has requested client authentication in the process of
evaluating whether to grant access to the requested resource.
Note –
1. The client software maintains a database of the private keys that correspond
to the public keys published in any certificates issued for that client. The
client asks for the password to this database the first time the client needs to
access it during a given session, for example, the first time the user attempts
to access an SSL-enabled server that requires certificate-based client
authentication. After entering this password once, the user doesn’t need to
enter it again for the rest of the session, even when accessing other SSL-
enabled servers.
2. The client unlocks the private-key database, retrieves the private key for the
user’s certificate, and uses that private key to digitally sign some data that
has been randomly generated for this purpose on the basis of input from both
the client and the server. This data and the digital signature constitute
“evidence” of the private key’s validity. The digital signature can be created
only with that private key and can be validated with the corresponding public
key against the signed data, which is unique to the SSL session.
3. The client sends both the user’s certificate and the evidence, the randomly
generated piece of data that has been digitally signed, across the network.
4. The server uses the certificate and the evidence to authenticate the user’s
identity.
5. At this point the server may optionally perform other authentication tasks,
such as checking that the certificate presented by the client is stored in the
user’s entry in an LDAP directory. The server then continues to evaluate
whether the identified user is permitted to access the requested resource. This
evaluation process can employ a variety of standard authorization
mechanisms, potentially using additional information in an LDAP directory,
company databases, and so on. If the result of the evaluation is positive, the
server allows the client to access the requested resource.
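Steps 2 through 4 can be sketched with the Python cryptography package; the
key pair here is freshly generated for illustration rather than taken from a real
client certificate:

    # Sketch only: a generated key stands in for the user's certificate key.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    challenge = b'randomly generated data, unique to this SSL session'

    # client side: sign the challenge with the private key (the "evidence")
    signature = private_key.sign(challenge, padding.PKCS1v15(), hashes.SHA256())

    # server side: verify the evidence with the public key from the certificate;
    # verify() raises InvalidSignature if the evidence does not check out
    private_key.public_key().verify(signature, challenge,
                                    padding.PKCS1v15(), hashes.SHA256())
    print('signature verified')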
Certificate authorities (CAs) validate identities and issue certificates. CAs can
be independent third parties or organizations that run their own certificate-issuing
server software. The methods used to validate an identity vary depending on the
policies of a given CA. In general, before issuing a certificate, the CA must use
its published verification procedures for that type of certificate to ensure that an
entity requesting a certificate is in fact who it claims to be.
By Definition
A federation is a relationship in which the participating entities agree to use the
same technical standard, enabling access to data and resources of one another. It
consists of one or more service providers (SP) and an identity provider (IdP). An
IdP is a partner in a federation that can authenticate the identity of a user. A
service provider is a company or program that provides a business function as a
service.
The Federation Module provides the following functions:
Federated single sign-on (SSO) for users across multiple applications.
Support for SAML 2.0 and OpenID Connect protocols for federated
access.
Pre-integrated federation connectors to popular cloud applications.
Federation Example
There are many examples of federated identity management systems in operation
today. Several of these examples can be seen on the Internet, where large service
providers, such as Facebook and Google act as identity providers, and allow users
to access third party services by means of their Facebook or Google account.
These large service providers offer software development kits (SDK) and
Application Programming Interfaces (API) so that application developers (for
web and mobile apps) can take advantage of existing authentication mechanisms
and attract new users to their services without new registration processes. The
figure below illustrates a potential benefit of FIM from a user perspective,
through existing federation solutions.
To design a solution, the following areas need to be understood, and are covered
in this section:
The roles of identity and service provider: The definition of who is the
authoritative source of the user identity information.
Identity syntax and attributes: An Identity profile should contain several user
attributes and credentials. Such attributes might be personal data, habits, and
biometrics. Different standards and implementations use different structures to
describe attributes and credentials. Ideally, those should be interoperable.
Account linking: The procedures for managing account linking, to agree on
some common unique identifier for the user, which can be bound to the internal,
local user identity at the service provider.
Roles
Within a federation, business partners play one of two roles: Identity provider
or service provider or both. The identity provider (IdP) is the authoritative site
responsible for authenticating an end user and asserting an identity for that
user in a trusted fashion to trusted business partners. Those business partners
who offer services but do not act as identity providers are known as service
providers. The identity provider takes on the bulk of the user's life cycle
management issues. The service provider (SP) relies on the IdP to assert
information about a user, leaving the SP to manage only those user attributes
that are relevant to the SP.
To achieve the overall user life cycle management required for a full federated
identity management solution, the identity provider assumes the management
of user account creation, account provisioning, password management, and
identity assertion. The identity provider and service provider cooperate to
provide a rich user experience by leveraging distinct federated identity
management profiles that together provide a seamless federation functionality
for a user.
Service provider – SP
A service provider may still manage local information for a user, even within
the context of a federation. For example, entering into a federated identity
management relationship may allow a service provider to delegate account
management (including password management) to an IdP while the SP
focuses on the management of its user-specific data (for example, SP-side
service-specific attributes and personalization related information). In general,
a service provider will off-load identity management to an identity provider to
minimize its identity management requirements while still enabling full
service provider functionality.
As a protocol, SAML has three versions: SAML 1.0, 1.1, and SAML 2.0. SAML
1.0 and SAML 1.1 (collectively, SAML 1.x) focus on single sign-on
functionality. SAML 2.0 represents a major functional improvement over SAML
1.x. SAML 2.0 (approved in March 2005) is based on SAML 1.x with significant
input from the Liberty Alliance ID-FF and Shibboleth specifications.
SAML 2.0 is a protocol that you can use to perform federated single sign-on from
identity providers to service providers. In federated single sign-on, users
authenticate at an identity provider. Service providers consume the identity
information asserted by identity providers.
SAML 2.0 relies on the use of SOAP, among other technologies, to exchange
XML messages over computer networks. The XML messages are exchanged
through a series of requests and responses.
In this process, one of the federation partners sends a request message to the other
federation partner. Then, that receiving partner immediately sends a response
message to the partner who sent the request.
The SAML 2.0 specification defines the structure and content of the messages,
and the way the messages are communicated between partners and users.
Assertions
XML-formatted tokens that are used to transfer user identity information, such as
the authentication, attribute, and entitlement information, in the messages.
Protocols
The types of request messages and response messages that are used for obtaining
authentication data and for managing identities.
Bindings
The communication method that is used to transport the messages.
Profiles
Combinations of protocols, assertions, and bindings that are used together to
create a federation and enable federated single sign-on. You and your partner
must use the same SAML specification (2.0) and agree on which protocols,
bindings, and profiles to use.
Assertions
The assertions contain authentication statements. These authentication statements
assert that the principal (that is, the entity that requests access) was authenticated.
Assertions can also carry attributes about the user that the identity provider wants
to make available to the service provider.
Assertions are typically passed from the identity provider to the service provider.
The content of the assertions that are created is controlled by the SAML 2.0
specification. Select these assertions when you establish a federation. You can
also select these assertions by the definitions that are used in the identity mapping
method that you configure.
The identity mapping method can either be a custom mapping module or an XSL
transform file. The identity mapping also specifies how identities are mapped
between federation partners.
Protocols
SAML 2.0 defines several request-response protocols that correspond to the
action that is being communicated in the message. The SAML 2.0 protocols that
are supported are:
Authentication request
Single logout
Artifact resolution
Name identifier management
Flow initiation
The message flow can be initiated from the identity provider or the service
provider.
Bindings
The following bindings can be used in the Web browser SSO profile:
HTTP redirect
HTTP POST
HTTP artifact
The choice of binding depends on the type of messages being sent. For example,
an authentication request message can be sent from a service provider to an
identity provider using HTTP redirect, HTTP POST, or HTTP artifact. The
response message can be sent from an identity provider to a service provider by
using either HTTP POST or HTTP artifact. A pair of partners in a federation does
not need to use the same binding.
Single Logout
The Single Logout profile is used to terminate all the login sessions currently
active for a specified user within the federation. A user who achieves single sign-
on to a federation establishes sessions with more than one participant in the
federation.
The sessions are managed by a session authority, which in many cases is an
identity provider. When the user wants to end sessions with all session
participants, the session authority can use the single logout profile to globally
terminate all active sessions.
This profile provides options regarding the initiation of the message flow and the
transport of the messages:
Flow initiation
The message flow can be initiated from the identity provider or the service
provider.
Bindings
The following bindings can be used in the Single Logout profile:
HTTP redirect
HTTP POST
HTTP artifact
SOAP
Name Identifier Management
The Name Identifier Management profile manages user identities that are
exchanged between identity providers and service providers.
This profile can be used by identity providers or service providers to inform their
partners when there is a change in user aliases.
This profile can also be used by identity providers or service providers to
terminate user linkages at the partners.
To manage the aliases, the Federation module uses a function that is called the
alias service. The alias service stores and retrieves aliases that are related to a
federated identity. User aliases are stored in and retrieved from a high-volume
database.
This profile provides options regarding the initiation of the message flow and the
transport of the messages:
Flow initiation
The message flow can be initiated from the identity provider or the service
provider.
Bindings
The following bindings can be used in the Name Identifier Management profile:
HTTP redirect
HTTP POST
HTTP artifact
SOAP
HTTP redirect
HTTP redirect enables SAML protocol messages to be transmitted within URL
parameters. It enables SAML requestors and responders to communicate by using
an HTTP user agent as an intermediary.
The intermediary might be necessary if the communicating entities do not have a
direct path of communication. The intermediary might also be necessary if the
responder requires interaction with a user agent, such as an authentication agent.
HTTP redirect is sometimes called browser redirect in single sign-on operations.
This profile is selected by default.
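As a hedged sketch, the redirect binding is commonly produced by DEFLATE-
compressing the SAML message, Base64-encoding it, and URL-encoding it into
a SAMLRequest query parameter; the IdP URL and request XML below are
placeholders:

    # Sketch only: IdP URL and AuthnRequest content are hypothetical.
    import base64
    import urllib.parse
    import zlib

    def redirect_binding_url(idp_sso_url, authn_request_xml):
        deflater = zlib.compressobj(wbits=-15)   # raw DEFLATE, no zlib header
        raw = deflater.compress(authn_request_xml.encode()) + deflater.flush()
        b64 = base64.b64encode(raw).decode()
        return idp_sso_url + '?SAMLRequest=' + urllib.parse.quote(b64)

    print(redirect_binding_url('https://idp.example.com/sso',
                               '<samlp:AuthnRequest ID="_1" .../>'))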
HTTP POST
HTTP POST enables SAML protocol messages to be transmitted within an
HTML form by using base64-encoded content. It enables SAML requestors and
responders to communicate by using an HTTP user agent as an intermediary. The
agent might be necessary if the communicating entities do not have a direct path
of communication. The intermediary might also be necessary if the responder
requires interaction with a user agent such as an authentication agent.
HTTP POST is sometimes called Browser POST, particularly when used in single
sign-on operations. It uses a self-posting form during the establishment and use
of a trusted session between an identity provider, a service provider, and a client
(browser).
HTTP artifact
HTTP artifact is a binding in which a SAML request or response (or both) is
transmitted by reference by using a unique identifier that is called an artifact.
A separate binding, such as a SOAP binding, is used to exchange the artifact for
the actual protocol message. It enables SAML requestors and responders to
communicate by using an HTTP user agent as an intermediary.
This setting is used when it is not preferable to expose the message content to the
intermediary.
SOAP
SOAP is a binding that uses Simple Object Access Protocol (SOAP) for
communication.
To use SOAP binding, SAML requestors must have a direct communication path
with SAML responders.
SAML 2.0 name identifier formats control how the users at identity providers are
mapped to users at service providers during single sign-on.
Email address
Use the email address name identifier format if you want a user to log in at the
service provider as the same user that they use to log in at the identity provider.
For example, if a user is logged in at the identity provider as user1, then they will
also be logged in as user1 at the service provider after single sign-on.
Persistent aliases
Use the persistent name identifier format if you want a user to log in at the identity
provider as one user but log in at the service provider as a different user.
Before you can use this name identifier format, you must link the user at the
identity provider with the user at the service provider. You can choose to have
the user linking done during single sign-on or by using the alias service.
For example, suppose user1 in the identity provider is linked with user2 in the
service provider. If user1 is logged in at the identity provider, then they will be
logged in as user2 in service provider after single sign-on.
Transient aliases
Use the transient name identifier format if you want a user to log in as a shared
anonymous user, regardless of which user that they use to log in at the identity
provider.
For example, suppose user1 is a shared anonymous user in the service provider.
If the user is logged in as user2 in the identity provider, then they will be logged
in as user1 in the service provider after single sign-on.
Similarly, if the user is logged in as user3 in the identity provider, then they will
be logged in also as user1 in the service provider.
Alias service
To manage the aliases, the Federation module uses an alias service. The alias
service stores and retrieves aliases that are related to a federated identity.
Persistent name identifier format allows you to link a user at the identity provider
with a user at the service provider.
Liberty
The Liberty Alliance Project was formed to deliver and support a federated
network identity solution for the Internet that enables single sign-on for
consumers and business users in an open, federated way.
Liberty ID-FF describes profiles for B2C-based single sign-on and additional
functionality. Liberty ID-FF profiles include: Single sign-on (SSO), single log-
out (SLO), Account Linking (Register Name Identifier, or RNI in ID-FF 1.1),
Account De-Linking (Federation Termination Notification, or FTN in ID-FF 1.1),
and identity provider introduction (IPI). The Liberty-specified common user
identifier (CUID) is referred to as a NameIdentifier. It is an opaque reference to
a user that acts as an alias, meaning that it cannot be used to infer information
about the user, such as her identity. A Liberty NameIdentifier is used to establish
(and maintain) the account linking between an IdP and an SP. The RNI profile is
used to allow a reset of a user's NameIdentifier, replacing a current value with a
new NameIdentifier value. The FTN process is used to remove all references to a
NameIdentifier, thus achieving account de-linking. Taken together, these profiles
are intended to provide richer user management functionality within a federation
than simple single-sign-on.
In ID-FF 1.2, the RNI and FTN profiles have been collapsed into a single profile,
the Manage Name Identifier (MNI) profile. This profile moves all of the account
linking life cycle into a single profile.
The Liberty approach is based on business affiliates forming circles of trust. A
Liberty circle of trust is defined as “a group of service providers that share linked
identities and have pertinent business agreements in place regarding how to do
business and interact with identities.”
WS-Federation
WS-Federation is a specification defined by IBM, Microsoft, VeriSign, and RSA
within the scope of the IBM-Microsoft Web services security roadmap. WS-
Federation was published on July 8, 2003. WS-Federation interoperability
between IBM and Microsoft has been demonstrated several times, including by
Bill Gates and Steve Mills in New York City in September of 2003. Subsequent
to that, a public interoperability exercise was held on March 29–30, 2004 between
IBM, Microsoft, and other third-party vendors.
WS-Federation describes how to use the existing Web services security building
blocks to provide federation functionality, including trust, single sign-on (and
single sign-off), and attribute management across a federation. WS-Federation is
really a family of three specifications: WS-Federation, WS-Federation Passive
Client, and WS-Federation Active Client.
Resource owner
An entity capable of authorizing access to a protected resource. When the
resource owner is a person, it is called a user.
OAuth client
A third-party application that wants access to the private resources of the resource
owner. The OAuth client can make protected resource requests on behalf of the
resource owner after the resource owner grants it authorization. OAuth 2.0
introduces two types of clients: confidential and public. Confidential clients are
registered with a client secret, while public clients are not.
OAuth server
Known as the Authorization server in OAuth 2.0. The server that gives OAuth
clients scoped access to a protected resource on behalf of the resource owner. The
server issues an access token to the OAuth client after it successfully does the
following actions:
Authenticates the resource owner.
Validates a request or an authorization grant.
Obtains resource owner authorization.
An authorization server can also be the resource server.
Access token
A string that represents authorization granted to the OAuth client by the resource
owner. This string represents specific scopes and durations of access. It is granted
by the resource owner and enforced by the OAuth server.
Protected resource
A restricted resource that can be accessed from the OAuth server using
authenticated requests.
Resource server
The server that hosts the protected resources. It can use access tokens to accept
and respond to protected resource requests. The resource server might be the same
server as the authorization server.
Authorization grant
A grant that represents the resource owner authorization to access its protected
resources. OAuth clients use an authorization grant to obtain an access token.
There are four authorization grant types: authorization code, implicit, resource
owner password credentials, and client credentials.
Authorization code
A code that the Authorization server generates when the resource owner
authorizes a request.
Refresh token
A string that is used to obtain a new access token.
A refresh token is optionally issued by the authorization server to the OAuth
client together with an access token. The OAuth client can use the refresh token
to request another access token that is based on the same authorization, without
involving the resource owner again.
Endpoints provide OAuth clients the ability to communicate with the OAuth
server or authorization server within a definition.
All endpoints can be accessed through URLs. The syntax of the URLs is specific
to the purpose of the access.
If you are responsible for installing and configuring the appliance, you might find
it helpful to be familiar with these endpoints and URLs.
https://<hostname:port>/<junction>/sps/oauth/oauth20
For example:
https://server.oauth.com/mga/sps/oauth/oauth20
The following table describes the endpoints that are used in an API protection
definition.
Notes:
There is only a single set of endpoints.
Not all authorization grant types use all three endpoints in a single OAuth
2.0 flow.
The OAuth 2.0 support in Access Manager provides four different ways for an
OAuth client to obtain access to the protected resource.
1. The OAuth client initiates the flow when it directs the user agent of the resource
owner to the authorization endpoint. The OAuth client includes its client
identifier, requested scope, local state, and a redirection URI. The authorization
server sends the user agent back to the redirection URI after access is granted or
denied.
2. The authorization server authenticates the resource owner through the user
agent and establishes whether the resource owner grants or denies the access
request.
3. If the resource owner grants access, the OAuth client uses the redirection URI
provided earlier to redirect the user agent back to the OAuth client. The
redirection URI includes an authorization code and any local state previously
provided by the OAuth client.
4. The OAuth client requests an access token from the authorization server
through the token endpoint. The OAuth client authenticates with its client
credentials and includes the authorization code received in the previous step.
The OAuth client also includes the redirection URI used to obtain the
authorization code for verification.
5. The authorization server validates the client credentials and the authorization
code. The server also ensures that the redirection URI received matches the URI
used to redirect the client in Step 3. If valid, the authorization server responds
back with an access token.
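Steps 4 and 5 amount to a single HTTP POST against the token endpoint; a
hedged Python sketch, where the endpoint path, client credentials, and code
value are placeholder assumptions:

    # Sketch only: URL, credentials, and authorization code are hypothetical.
    import requests

    resp = requests.post(
        'https://server.oauth.com/mga/sps/oauth/oauth20/token',
        data={
            'grant_type': 'authorization_code',
            'code': 'AUTH_CODE_FROM_STEP_3',
            'redirect_uri': 'https://client.example.com/callback',
            'client_id': 'my_client',
            'client_secret': 'my_secret',
        })
    tokens = resp.json()  # typically access_token and, optionally, refresh_token
    print(tokens.get('access_token'))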
The authorization server can be the same server as the resource server or a
separate entity. A single authorization server can issue access tokens accepted by
multiple resource servers.
The authorization code workflow with refresh token diagram involves the
following steps:
2. The authorization server validates the client credentials and the authorization
grant. If valid, the authorization server issues an access token and a refresh
token.
3. The OAuth client makes a protected resource request to the resource server
by presenting the access token.
4. The resource server validates the access token. If the access token is valid, the
resource server serves the request.
5. Repeat steps 3 and 4 until the access token expires. If the OAuth client knows
that the access token has expired, skip to Step 7. Otherwise, the OAuth client
makes another protected resource request.
7. The OAuth client requests a new access token by authenticating with the
authorization server with its client credentials and presenting the refresh token.
8. The authorization server validates the client credentials and the refresh token,
and if valid, issues a new access token and a new refresh token.
As a redirection-based flow, the OAuth client must be able to interact with the
user agent of the resource owner, typically a web browser. The OAuth client
must also be able to receive incoming requests through redirection from the
authorization server.
1. The OAuth client initiates the flow by directing the user agent of the resource
owner to the authorization endpoint. The OAuth client includes its client
identifier, requested scope, local state, and a redirection URI. The authorization
server sends the user agent back to the redirection URI after access is granted or
denied.
2. The authorization server authenticates the resource owner through the user
agent and establishes whether the resource owner grants or denies the access
request.
3. If the resource owner grants access, the authorization server redirects the user
agent back to the client using the redirection URI provided earlier. The
redirection URI includes the access token in the URI fragment.
4. The user agent follows the redirection instructions by making a request to the
web server without the fragment. The user agent retains the fragment
information locally.
5. The web server returns a web page, which is typically an HTML document
with an embedded script. The web page accesses the full redirection URI
including the fragment retained by the user agent. It can also extract the access
token and other parameters contained in the fragment.
6. The user agent runs the script provided by the web server locally, which
extracts the access token and passes it to the client.
The resource owner password credentials grant type is suitable in cases where the
resource owner has a trust relationship with the client. For example, the resource
owner can be a computer operating system of the OAuth client or a highly
privileged application.
You can only use this grant type when the OAuth client has obtained the
credentials of the resource owner. It is also used to migrate existing clients using
direct authentication schemes by converting the stored credentials to an access
token.
1. The resource owner provides the client with its user name and password.
2. The OAuth client requests an access token from the authorization server
through the token endpoint. The OAuth client authenticates with its client
credentials and includes the credentials received from the resource owner.
3. After the authorization server validates the resource owner credentials and the
client credentials, it issues an access token and optionally a refresh token.
1. The OAuth client requests an access token from the token endpoint by
authenticating with its client credentials.
OAuth 2.0 workflows for confidential clients that require client authentication at
the token endpoint can be configured in one of the following ways:
OpenID Connect Provider (OP)
An OAuth 2.0 Authorization Server that is capable of authenticating the end-user
and providing claims to a Relying Party about the authentication event and the
end-user.
Entity
Something that has a separate and distinct existence and that can be identified in
a context. An end-user is one example of an entity.
Claim
Piece of information asserted about an entity that is included in the ID token. An
OpenID Connect Provider should document which claims it includes in its ID
tokens.
The following claims are required claims about the authentication event:
aud (Audience): Must contain the client identifier of the RP registered at
the issuer.
iss (Issuer): The issuer identifier of the OP.
exp (Expiration time): The RP must validate the ID token before this time.
iat (Issued at): The time at which the ID token was issued.
The following claims are required claims about the user:
sub (Subject): A locally unique and permanent (never reassigned) identifier
of the end-user at the issuer.
Optional claims about the user can include first_name, last_name, picture,
gender, etc.
Bearer token
Token issued from the token endpoint. This includes an access token, an ID
token, and potentially a refresh token.
ID token
JSON Web Token (JWT) that contains claims about the authentication event and
the user.
JWTs are Base64url-encoded JSON objects comprising three sections: Header,
Claims Set, and JSON Web Signature (JWS). These are separated in the JWT by
a period ('.'). The Header must at least contain the algorithm used to sign the JWT
(the alg claim).
The Claims Set includes claims about the authentication event and the user.
The JSON Web Signature (JWS) is used to verify the signing of the JWT.
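A minimal sketch of splitting and decoding the three sections (without verifying
the JWS signature, which a real Relying Party must always do):

    # Sketch only: inspects a JWT; it does NOT verify the signature.
    import base64
    import json

    def b64url_decode(part):
        return base64.urlsafe_b64decode(part + '=' * (-len(part) % 4))

    def inspect_jwt(token):
        header_b64, claims_b64, signature_b64 = token.split('.')
        header = json.loads(b64url_decode(header_b64))   # carries "alg"
        claims = json.loads(b64url_decode(claims_b64))   # iss, aud, exp, iat, sub
        return header, claims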
Refresh token
A string that is used to obtain a new access token.
A refresh token is optionally issued by the OpenID Connect Provider to the
OpenID Connect Partner together with an access token. The OpenID Connect
Partner can use the refresh token to request another access token that is based on
the same authorization, without involving the resource owner again.
Issuer identifier
Verifiable identifier for an issuer. An issuer identifier is a case sensitive URL
using the HTTPS scheme that contains scheme, host, and optionally, port number
and path components and no query or fragment components.
Authorization endpoint
The endpoint used to initiate an OpenID Connect flow. This endpoint is requested
through the end-user's browser, typically via an HTTP redirect.
Token endpoint
The endpoint used to exchange an authorization code for a bearer token.
This is also used to exchange a refresh token for a new access token. Access to
the token endpoint is secured and requires client credentials to be provided on
requests.
JWK endpoint
The endpoint used to advertise an OpenID Connect provider's public keys (as a
JSON Web Key Set) for use in asymmetric signing algorithms.
For example:
url_base_path = "/oidc/endpoint/amapp-runtime-myfederation/"
Requests to this endpoint should use the HTTP method GET or POST and
include the appropriate query string parameters.
The big difference between OAuth2 and OpenID Connect is that OAuth2 offers
authorization only, whereas OpenID Connect also offers authentication. For this
reason, OpenID Connect also has an additional authentication token, called the
"ID token".
However, OpenID Connect has no flow to share data between two applications.
With OAuth2, any information a user holds on one website can be shared with
another website. A user could, for example, copy his Gmail address book into
Facebook by allowing Facebook to read his Gmail account. Without OAuth2, the
only way to achieve this is to give Facebook the Gmail user name and password.
This is clearly not a smart thing to do, and it is exactly what OAuth2 is set up to
prevent.
On the other hand, OpenID Connect has an additional flow, called the Hybrid
Flow, which is another user-interactive flow. As in the Implicit Flow, some
tokens are returned directly to the client from the authorization endpoint, but the
client also communicates with the token endpoint, as in the Authorization Code
Flow.
OpenID Connect is a recently published protocol that uses newer techniques,
such as JSON and REST (versus XML and SOAP for SAML). It was designed to
support web apps, native apps, and mobile applications, whereas SAML was
designed only for web-based apps. Because XML requires a heavy library to
parse, JSON and REST are easier to implement in the languages commonly used
on the web.
According to SURF, there is also a difference in the focus of the protocols:
OpenID Connect is user-centric, whereas SAML 2.0 is organization-centric. User
consent is an essential part of the OpenID Connect protocol, but it is also possible
with SAML. Some analysts point out that the pre-defined set of user attributes
offered by OpenID Connect is more geared toward consumer-to-web service
provider scenarios than toward enterprise scenarios (where one would expect
roles, entitlements, etc.).
To conclude, both protocols solve the same problem, but in different ways.
Because OpenID Connect uses modern techniques, we expect that its market
share will grow and take over the leading position of SAML.
OpenID Connect uses the open standards REST and JSON, whereas LDAP
predates both and uses its own protocol and data formats. This is not unusual,
because LDAP was released in 1993, when JavaScript (1995), the basis of JSON,
and REST (2000) did not even exist.
In short, LDAP may be used when an enterprise SSO solution is required. If you
are looking for a web SSO solution, use OpenID Connect. It is built with the web
in mind, it is smaller, and, in our opinion, it is easier to understand. OpenID
Connect also uses the newer standards in web programming and is therefore
easier and faster to implement for web solutions.
CAS should be used for SSO authentication of local users. It does not
support federated authentication, which is possible with SAML and OpenID
Connect.
Based on this comparison, we know that OAuth2 and CAS do not meet the
requirement of offering authentication and authorization.
Thus, three potential solutions remain: OpenID Connect, SAML 2, and LDAP.
Because OAuth2 is the basis of OpenID Connect, we decided to continue looking
into this protocol.
Introduction
Multi-factor authentication is defined as ‘a method of authentication that uses
two or more authentication factors to authenticate a single claimant to a single
authentication verifier’.
The risk associated with this scenario is that an adversary may be able to
compromise the computer’s IPsec certificate at one point in time, compromise the
passphrase the user uses to authenticate to the VPN concentrator at another point
in time and, finally, compromise the user’s AD credentials at yet another point in
time. In this way the adversary is able to increase their access over time, which
increases the level of risk associated with this approach.
Biometrics
Smartcards
Mobile apps
use of devices for web browsing or reading emails may mean that the
device running the mobile app may no longer be secure
many devices are not secure, and a device can be compromised by
motivated and competent adversaries, particularly when travelling
overseas.
use of devices for web browsing or reading emails may mean that an SMS
message, email or voice call containing the one-time PIN or password may
no longer be secure, particularly when SMS messages are delivered via
VoIP or internet messaging platforms
many devices are not secure, and a device can be compromised by
motivated and competent adversaries, particularly when travelling
overseas
telecommunication networks do not provide end-to-end security and an
SMS message, email or voice call may be intercepted by motivated and
competent adversaries, particularly when travelling overseas.
Software certificates
The algorithm that generates each password uses the current time of day as one
of its factors, ensuring that each password is unique. Time-based one-time
passwords are commonly used for two-factor authentication and have seen
growing adoption by cloud application providers. In two-factor authentication
scenarios, a user must enter a traditional, static password and a TOTP to gain
access.
There are various methods available for the user to receive a time-based one-
time password, including hardware and software tokens. RSA, for example, uses
a time-based algorithm to calculate the OTP and as such requires the time to be
synchronized between the client-side and server-side OTP components; the token
also has a time indicator on the user side, so the user can time their password
entry.
The major upside of time-based OTP over event-based OTP concerns exposure:
if someone obtains an event-based OTP, the only time limit on using it is until a
newer OTP is generated and used. With time-based OTP, a captured password
can only be used within a pre-defined time limit (e.g., 25 seconds), which is more
than enough in most instances for the legitimate user. Event-based OTPs, on the
other hand, do not require time synchronization to take place and do not require
the user to wait for a password to expire before entering a new one.
Synchronization techniques
Challenge-Response type OTP
The OTP system generator passes the user’s secret pass-phrase, along with a seed
received from the server as part of the challenge, through multiple iterations of a
secure hash function to produce a one-time password. After each successful
authentication, the number of secure hash function iterations is reduced by one.
Thus, a unique sequence of passwords is generated. The server verifies the one-
time password received from the generator by computing the secure hash function
once and comparing the result with the previously accepted one-time password.
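A minimal sketch of this hash-chain scheme in Python (SHA-256 and the
passphrase/seed concatenation order are illustrative choices; the original
S/KEY-style systems use different hash functions and encodings):

import hashlib

def generate_otp(passphrase, seed, iterations):
    # Pass the secret pass-phrase plus the server's seed through N iterations
    # of a secure hash function.
    value = (seed + passphrase).encode()
    for _ in range(iterations):
        value = hashlib.sha256(value).digest()
    return value

def server_verify(submitted_otp, previously_accepted_otp):
    # The server hashes the submitted OTP once and compares the result with
    # the previously accepted one-time password.
    return hashlib.sha256(submitted_otp).digest() == previously_accepted_otp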
Time-synchronized type OTP
The time-synchronized one-time passwords are usually related to physical
hardware tokens (e.g., each user is given a personal token that generates a one-
time password). Inside the token is an accurate clock that has been synchronized
with the clock on the authentication server. In these OTP systems, time is an
important part of the password algorithm, since the generation of new passwords
is based on the current time rather than on the previous password or a secret key.
Event-synchronized type OTP
Each time you ask an event-based token for a new password, it increments the
internal counter value by one. On the server, each time you successfully
authenticate, the server also increments its counter value by one. In this way the
token's and the server's counter values stay synchronized in lock step and will
always generate the same one-time password. Event-based tokens can get out of
sync if the token is asked to generate a series of one-time passwords that are
never actually used in authentication attempts. The token's counter value is then
increased while the server, oblivious, never increments its own. Eventually, a
token-generated one-time password is used for an authentication attempt, but it
fails because the server does not recognize it.
To calculate an OTP the token feeds the counter into the HMAC algorithm
using the token seed as the key. HOTP uses the SHA-1 hash function in the
HMAC. This produces a 160-bit value which is then reduced down to the 6 (or
8) decimal digits displayed by the token.
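This computation can be written compactly; the sketch below follows the HOTP
construction just described (HMAC-SHA-1 over an 8-byte counter, with the
dynamic truncation defined in RFC 4226):

import hashlib
import hmac
import struct

def hotp(secret, counter, digits=6):
    # HMAC-SHA-1 over the 8-byte big-endian counter, keyed with the token seed.
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: the low 4 bits of the last byte select an offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    # Reduce the 31-bit value to 6 (or 8) decimal digits.
    return str(code % (10 ** digits)).zfill(digits)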
LinOTP: an acronym for Linux One Time Password; it uses OTP to
increase the security of all types of logon processes.
MOTP: an acronym for Mobile One Time Password, which deals with
synchronization between client and server within a period of time, usually
3 minutes; several software applications for mobile devices support this
technology.
SMSOTP: SMS OTPs are used as an additional factor in a multi-factor
authentication system. Users are required to enter an OTP after logging in
with a user name and password.
Time-based OTP (TOTP for short) is based on HOTP, but the moving
factor is time instead of the counter. TOTP uses time in increments called the
timestep, which is usually 30 or 60 seconds. This means that each OTP is valid
for the duration of the timestep.
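Reusing the hotp function from the previous sketch, TOTP then amounts to
deriving the counter from the current time:

import time

def totp(secret, timestep=30, digits=6):
    # The moving factor is the number of timesteps elapsed since the UNIX
    # epoch, so the OTP stays valid for the duration of the timestep.
    counter = int(time.time()) // timestep
    return hotp(secret, counter, digits)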
Comparison
Both OTP schemes offer single-use codes, but the key difference is that in HOTP
a given OTP is valid until it is used, or until a subsequent OTP is used. In HOTP
there are a number of valid "next OTP" codes. This is because the button on the
token can be pressed, thus incrementing the counter on the token, without the
resulting OTP being submitted to the validating server. For this reason, HOTP
validating servers accept a range of OTPs. Specifically, they will accept an OTP
that is generated by a counter that is within a set number of increments from the
previous counter value stored on the server. This range is referred to as the
validation window. If the token counter is outside of the range allowed by the
server, the validation fails, and the token must be re-synchronized.
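A sketch of such server-side validation, again reusing the hotp function above
(the window size of 10 is an arbitrary illustrative value):

def validate_hotp(submitted, secret, server_counter, window=10, digits=6):
    # Accept an OTP generated by any counter within the look-ahead window.
    for offset in range(window + 1):
        if hotp(secret, server_counter + offset, digits) == submitted:
            # Success: move the server counter past the counter that was used.
            return server_counter + offset + 1
    # Outside the window: validation fails; the token must be re-synchronized.
    return None

In production code the string comparison should be constant-time (for example,
hmac.compare_digest) to avoid timing side channels.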
So clearly in HOTP there is a trade-off to make. The larger the validation
window, the lower the chance of needing to re-synchronize the token with the
server, which is inconvenient for the user. Importantly, though, the larger the
window, the greater the chance of an adversary guessing one of the accepted
OTPs through a brute-force attack.
In contrast, in TOTP there is only one valid OTP at any given time - the one
generated from the current UNIX time.
Choice
Choosing between HOTP and TOTP purely from a security perspective clearly
favours TOTP. Importantly, the validating server must be able to cope with
potential for time-drift with TOTP tokens in order to minimize any impact on
users.
Auditing
The term auditing has two distinct meanings within the context of IT security, so
it’s important to recognize the difference.
First, auditing refers to the use of audit logs and monitoring tools to track activity.
For example, audit logs can record when any user accesses a file and document
exactly what the user did with the file and when.
A complete audit trail of system activities is a necessity to assure that the system
is functioning properly, even if there are no apparent signs of system failure or
unauthorized access. The system should provide a complete record of all access
control activity, like authentication requests, data access attempts and changes to
privilege levels. The record should contain both successful and failed activities.
The IAM system should also provide other versatile reports. Lists of users
matching different criteria should be available, such as users with a specific work
role or users of a specific target system. There should be listings of added,
removed, and inactive users, listings of unprocessed user right requests, listings
of rejected user rights, and separate, selected listings for supervisors and data
protection officers. For the IAM administrator, the system should also provide
provisioning reports of failed and succeeded cases.
Log files are typically created and maintained in each target system separately. It
is necessary to review the logs periodically. System logs are high in volume,
which makes it difficult to isolate and identify a given event for identification and
investigation. Some IAM systems have an option to provide a centralized log
storage with intelligent log analyzing tools.
Apart from centralized logging, every critical target system’s user rights
management actions have to generate a complete user-specific log and change
history. Use of the log files themselves also has to be logged. All the logged
information has to be easily available in different kinds of auditing and analysing
situations. Log files have to be available for indisputable user rights auditing. No
one can have the right to change or remove log files. Typically, a log can include
user IDs, dates and times of log-on and log-off, system identities (IP addresses,
host names, etc.), and both successful and rejected authentication and access
attempts.
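As an illustration, one such log record could be represented as follows (the field
names and values are hypothetical):

audit_record = {
    "user_id": "jdoe",
    "event": "logon",                      # or "logoff", "access_attempt", ...
    "timestamp": "2019-07-15T08:42:10Z",
    "source_ip": "192.0.2.17",             # system identity of the client
    "host": "hr-app01.example.com",        # system identity of the target
    "outcome": "success",                  # rejected attempts are logged too
}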
The changes made to the IAM system have to generate a log file that is available
for administrators. User actions also have to be logged in real time. The archiving
obligation for log files is the time the person is working for the organization plus
12 years; log files need to be retained for 12 years after their creation. After the
time limit, the log files have to be erased appropriately.
According to ISO 27799, patient information systems should create a secure audit
record each time a user accesses, creates, updates or archives patient data. The
audit log should uniquely identify the user, the data subject, the function
performed by the user and note the date and time when the function was
performed.
Inspection Audits
It’s important to clearly define and adhere to the frequency of audit reviews.
Organizations typically determine the frequency of a security audit or security
review based on risk. Personnel evaluate vulnerabilities and threats against the
organization’s valuable assets to determine the overall level of risk. This helps
the organization justify the expense of an audit and determine how frequently
they want to have an audit.
Audits cost time and money, and the frequency of an audit is based on the
associated risk. For example, potential misuse or compromise of privileged
accounts represents a much greater risk than misuse or compromise of regular
user accounts. With this in mind, security personnel would perform user
entitlement audits for privileged accounts much more often than user entitlement
audits of regular user accounts.
As with many other aspects of deploying and maintaining security, security audits
are often viewed as key elements of due care. If senior management fails to
enforce compliance with regular security reviews, then stakeholders will hold
them accountable and liable for any asset losses that occur because of security
breaches or policy violations. When audits aren’t performed, it creates the
perception that management is not exercising due care.
Many organizations perform periodic access reviews and audits to ensure that
object access and account management practices support the security policy.
These audits verify that users do not have excessive privileges and that accounts
are managed appropriately. They ensure that secure processes and procedures are
in place, that personnel are following them, and that these processes and
procedures are working as expected. For example, access to highly valuable data
should be restricted to only the users who need it. An access review audit will
verify that data has been classified and that data classifications are clear to the
users. Additionally, it will ensure that anyone who has the authority to grant
access to data understands what makes a user eligible for the access. For example,
if a help desk professional can grant access to highly classified data, the help desk
professional needs to know what makes a user eligible for that level of access.
The access review verifies that a policy exists and verifies personnel are following
it. When terminated employees have continued access to the network after an exit
interview, they can easily cause damage. For example, an administrator can create
a separate administrator account and use it to access the network even if the
administrator’s original account is disabled.
User entitlement refers to the privileges granted to users. Users need rights and
permissions (privileges) to perform their job, but they only need a limited number
of privileges. In the context of user entitlement, the principle of least privilege
ensures that users have only the privileges they need to perform their job and no
more.
The purpose of this GTAG is to provide insight into what IAM means to an
organization and to suggest internal audit areas for investigation. In addition to
involvement in strategy development, the CAE has a responsibility to ask
business and IT management what IAM processes are currently in place and how
they are being administered. While this document is not to be used as the
definitive resource for IAM, it can assist CAEs and other internal auditors in
understanding, analysing, and monitoring their organization’s IAM processes.
The actual formats used by an organization to produce reports from audit trails
will vary greatly. However, reports should address a few basic or central
concepts:
• The purpose of the audit
• The scope of the audit
• The results discovered or revealed by the audit
In addition to these basic concepts, audit reports often include many details
specific to the environment, such as time, date, and a list of the audited systems.
They can also include a wide range of content that focuses on
• Problems, events, and conditions
• Standards, criteria, and baselines
• Causes, reasons, impact, and effect
• Recommended solutions and safeguards
Audit reports should have a structure or design that is clear, concise, and
objective. Although auditors will often include opinions or recommendations,
they should clearly identify them. The actual findings should be based on fact and
evidence gathered from audit trails and other sources during the audit.
Access to audit reports should be restricted to personnel who are involved in the
creation of the reports or responsible for the correction of items mentioned in the
reports.
Auditors sometimes create a separate audit report with limited data for other
personnel. This modified report provides only the details relevant to the target
audience. For example, senior management does not need to know all the minute
details of an audit report. Therefore, the audit report for senior management is
much more concise and offers more of an overview or summary of findings. An
audit report for a security administrator responsible for correction of the problems
should be very detailed and include all available information on the events it
covers.
On the other hand, the fact that an auditor is performing an audit is often very
public. This lets personnel know that senior management is actively taking steps
to maintain security.
Business Challenges
In order to effectively compete in today’s business environment, companies are
increasing the number of users (customers, employees, partners, and suppliers)
who are allowed to access information. As IT is challenged to do more with fewer
resources, managing user identities and their access to resources throughout the
identity lifecycle is even more difficult. Typical IT environments have many local
administrators using manual processes to implement user changes across multiple
systems and applications.
Solution
An integrated identity management solution can help get users, systems, and
applications online and productive fast, and maintain dynamic compliance to
increase the resiliency and security of the IT environment. Identity Manager is
primarily concerned with user lifecycle management.
Identity Manager
Identity management is a foundational security component to help ensure users
have the access they need, and that systems, data, and applications are
inaccessible to unauthorized users.
These capabilities form a hierarchy, the base of which is the most essential
required capability of the provisioning solution. After the capabilities at the
lowest level are addressed, you can move up to the next level.
Communication between the provisioning solution and the managed system must
be bidirectional, secure, and bandwidth-efficient. Bidirectionality is critical to
capturing changes made directly to the managed system and reporting the change
to the provisioning solution for evaluation and response. The link must be
encrypted so that no one can listen in and steal authentication information such
as passwords. The link must also allow authentication of the source so that a new
command cannot be injected into the system by an imposter to create an
inappropriate account.
Last, because the managed resources are physically distributed across the
corporate wide area network (WAN) or the Internet, bandwidth efficiency must
be considered, because these networks often have limited available capacity.
User repositories
Endpoint repositories
Endpoint repositories contain data about privileges and accounts, and most
companies have a great variety of these repositories implemented throughout
their environment.
Password management
Orphan accounts are those active accounts found on many systems that cannot be
associated with a valid user. Improperly configured accounts are those associated
with valid users but granted improper authorities. These accounts may appear at
any time due to local administrators retaining rights to use local administrative
consoles. In enterprise-wide environments, these local consoles cannot be
disabled because of their multiple operational use. The key to the control of
improper and orphan accounts is, on a continuous basis, to associate every
account with a valid user and maintain a system-of-record detailing the approved
authorities of the account. When the user’s status with the organization changes,
their access rights must change too. If the account configuration changes, it must
be compared with an approved configuration and policy.
The ability to control orphan accounts requires that the provisioning system link
gathered account information with authoritative information about the users
themselves. Authoritative user identity information is typically found in Human
Resources systems and in various databases and directories containing
information about users in other business areas.
These approaches can be very slow. Requests can sit idle in an inbox or be
rejected because they are missing key information; consequently, the process
must begin again. A complete provisioning workflow solution automatically
routes requests to the proper approvers and escalates to alternates if action is not
taken on the request in a specified time. This workflow automation can turn a
process that typically takes a week into one that takes only minutes.
Traditionally, many organizations have treated audit logs as places to look for the
cause of a security breach after the fact. Increasingly, this is seen as an inadequate
use of the information available to an organization, which would be exhibiting
better due diligence by monitoring and reacting to logged breaches in as near to
real time as possible.
Centralized audit trails of access requests are an important aspect of supporting
independent audits of security practices and procedures in an organization.
These audit trails capture all aspects of the administration of access rights, from
initial access requests to changes in account details. Security audits are part of
every organization, whether they are conducted by internal security audit teams
or are external audits supporting formal bookkeeping. If recordkeeping is
incomplete, inaccurate, or stored in multiple locations, then these audits can
consume extensive time and human effort to conduct. Audits are frequently
disruptive to daily work efforts but are mandatory for the safe and secure
operation of the organization. Among other things, audit teams look for orphan
accounts or inappropriate access privileges that exist on important systems.
Audits may occur from once a quarter to as frequently as once a week, depending
on the organization.
Distributed administration
User administration policy automation is the way to evaluate and enforce business
processes and rules for granting access. Role Based Access Control (RBAC) is a
method of granting access rights to users based on their assignment to a defined
role in the organization. Provisioning solutions that embody RBAC or other types
of rules that assign access rights to users based on certain conditions and user
characteristics are examples of user administration policy automation.
This ultimate level of the hierarchy is the ability to provision across multiple
organizations that each contain user groups and shared services.
The benefits of centralizing the control over user management, while still
allowing for decentralized administration, impact four business areas.
Let us take a closer look at the capabilities of centralized user management that
help realize these benefits.
i. Single Interface
Most large IT systems today are very complex. They consist of many
heterogeneous resources (operating systems, databases, Web application servers,
and so on). Individual user accounts exist in every database or user identity
repository. This means that an administrator must master a different interface on
each platform or resource type in order to manage the user identity repository.
This can be compounded by having specialized administrators focusing on
particular platforms or resource types.
Without centralized identity management and the use of life cycle rules, it is
almost impossible to enforce the corporate policy in a complex environment
dealing with a variety of target platforms, different system specifications, and
different administrators.
iii. Central Password Management
A user typically has multiple accounts and passwords. The ability to synchronize
passwords across platforms and applications provides ease of use for the user. It
can also improve the security of the environment because each user does not have
to remember multiple passwords and is therefore less likely to write them down.
Password strength policy can also be applied consistently across the enterprise.
As the number and type of users within the scope of an organization’s identity
management system changes, there will be increasing burdens on the system. Any
centralized system run by an IT department could face the burden of having to
manage users who are within other business units or even within other partner
organizations.
A key feature of any centralized system is therefore the ability to delegate the
day-to-day management of users to nominated leaders in other business units or
partner organizations.
v. User Self-Care
The most frequent reason users call the Help Desk is that they have forgotten
their password or have locked their account by entering incorrect passwords.
vii. Workflow
Managing identity and account-related data involves a great number of approvals
and dependencies. It takes a lot of time and effort to collect the necessary
approvals and check for all the dependencies between related components. A
workflow capability can help to:
Gather approvals.
Reduce administrative workload.
Reduce turn-on time for new managed identities (account
generation, provisioning, and so on).
Enforce completeness (do not do this task before everything else
is gathered).
This requirement can only be met using centralized threat management tools, but
an important step towards meeting this goal should be part of an identity
management solution. Centralized auditing and logging of all additions, changes,
and deletions made on target repositories should be part of any centralized
identity management solution.
All user accounts have a life cycle: They are created, modified, and deleted. It
can take a long time to get a new user online, as administrators are often forced
to manually obtain approvals, provision resources, and issue passwords.
Generally, with manual work, there is the opportunity for human error and
management by mood. Self-service interfaces enable users to perform some of
these operations on their own information, such as password resets and personal
information updates.
Automating some of the business processes related to the user account life cycle
reduces the chance for error and simplifies operations. Any centralized identity
management solution must provide the means to emulate the manual processes
involved in provisioning requests, an approvals workflow, and an audit trail, in
addition to the normal provisioning tools.
When creating user account information, some characteristics are common to all
users, or to a subset of users based on the context. Default policies, which
automatically fill in data entry fields with preset values if none are specified,
reduce the effort of filling out those values for every account.
A validation policy ensures that information about an object complies with the
rules defined for that object in the enterprise. An example would be that the field
user name must be eight characters and start with a letter. Another validation
policy may be that every user must have at least one active group membership.
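A minimal sketch of enforcing these two example rules in Python (the record
layout and field names are hypothetical):

import re

def validate_user(user):
    """Return a list of validation-policy violations for a user record."""
    violations = []
    # Rule 1: the user name must be eight characters and start with a letter.
    if not re.fullmatch(r"[A-Za-z]\w{7}", user.get("user_name", "")):
        violations.append("user_name must be 8 characters and start with a letter")
    # Rule 2: every user must have at least one active group membership.
    if not any(group.get("active") for group in user.get("groups", [])):
        violations.append("user must have at least one active group membership")
    return violations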
Defining an access control model for each type of resource (e-business,
enterprise and legacy platforms, and applications) in an organization can be
complex and
costly. A single access control model provides a consistent way to grant users
access to resources and control what access the user has for that resource or across
a set of resources.
For some organizations, a role-based access control model is a good goal for
which to aim, as this reduces cost and improves the security of identity
management.
Work styles are changing and not everyone is office bound. Some people may
work in a different business location every day, while others may work from a
home office. Identity management interfaces should be ubiquitous to adjust to our
work styles. It may be necessary for users in partner organizations or clients to
self-manage some of their account data. This means that the software on the
access device may not be under the control of the parent organization. Since a
Web browser interface is a pervasive interface available on most devices, it makes
sense that any identity management solution interface should be Web based.
In order for administrators to perform their work tasks anytime from anywhere
with a network connection, the identity management solution must be Web
enabled and capable of being integrated with Internet-facing access control
systems.
Within the field of identity management, the use of automated provisioning may
trigger workflows. Distributing software or updating the configuration of the
user’s workstation by using the software distribution functionality found in the
system’s management architecture is one example of the type of functionality
required from an identity management solution.
Lifecycle Management
The person exists as a person entity in the identity management solution. From
the time of its creation to its deletion, it will change over time due to external
events such as transfers, promotions, leaves of absence, temporary assignments,
or any other identity-related business process.
Life cycle management introduces the concept that a person’s use of an IT asset
from the time that the account is created until the time that the account is deleted
will change over time due to external events such as transfers, promotions, leaves
of absence, temporary assignments, or management assignments. There may also
be a need to routinely verify that the account is compliant with security policies
or external regulations. This effort is an on-going process and not a one-time
event. Control activities must therefore be implemented into the business
processes. Automation increases the effectiveness of these controls and business
processes.
A life cycle is a term used to describe how persons, or accounts for a person, are
created, managed, and terminated based on certain events or a time-based
paradigm.
Over time modifications occur where access to some resources is granted while
access to other resources may be revoked. The cycle ends when the person
separates from the business and the termination process removes access to
resources, suspends all accounts, and eventually deletes the accounts and the
person from the systems.
Provisioning solutions are the link between the classical central management
solution and the target resources. The capability to quickly negotiate provisioning
requirements that map to the identity models and processes of a business is crucial
when architecting a solution. The provisioning aspect garners much of the focus
and attention, and user provisioning is where the process begins.
Person creation
The person entity is created with the identity management solution. In most cases
person attributes, such as user name, e-mail address, phone number, and other
identity-related data, are imported from a person-authoritative system such as a
Human Resources (HR) system for employees, a contract system for business
partner persons, and other data sources for customer persons.
Account creation
The account entity is created on the managed platforms using attributes from the
person entity.
Identifying the sponsor (for example, sales or HR), determining the nature
of the relationship (customer, internal employee), verifying the user’s
identity, and assigning a role or roles.
Fulfillment, which entails gaining approval for the appropriate systems,
creating the user’s identity in the appropriate directories and repositories,
and granting access to those accounts.
Person: The person’s attributes, such as name, e-mail address, and phone number.
Identity: The user’s credentials, such as user name and password, as well as
information about the user that may be based on the person entity, including
name, e-mail address, and phone number.
Access rights: The systems, accounts, and applications the user has access to,
and the level of access.
During the termination phase, organizations should verify that the relationship
between the user and the organization is, in fact, dissolving and disable access
accordingly. Often, accounts are disabled for a term and then deleted.
Unfortunately, although this sounds simple, it demands process rigor.
Reconciliation
Reconciliation is the process of synchronizing the accounts and supporting data
in Identity Manager with the accounts and supporting data on a managed resource
or endpoint. It is the Identity Manager discovery process that queries the state of
the accounts on the managed endpoint. To determine an owner relationship,
reconciliation compares the account information with existing user data stored on
the Identity Manager server by first looking for the existing ownership within the
Identity Manager server and, secondly, applying adoption rules configured for the
reconciliation.
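The two-step owner determination can be sketched as follows (the data
structures and the example adoption rule are hypothetical):

def determine_owner(account, known_ownership, adoption_rules):
    # Step 1: look for an existing ownership record on the Identity Manager
    # server.
    owner = known_ownership.get((account["service"], account["id"]))
    if owner:
        return owner
    # Step 2: apply the adoption rules configured for the reconciliation,
    # e.g., a rule matching the account ID against employee IDs.
    for rule in adoption_rules:
        owner = rule(account)
        if owner:
            return owner
    return None  # no valid user could be associated: an orphan account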
Access control models have three flavours: Mandatory Access Control (MAC),
Role Based Access Control (RBAC), and Discretionary Access Control (DAC).
This section looks at some of the access control models that are commonly
found or are planned for use with a centralized identity management solution.
RBAC examples
A senior employee Charles would already have the basic user role from
the time when he joined the organization. His work now requires that
access be granted to applications that are not included within the basic user
role. If he now needs access to the accounts and invoicing systems, Charles
could be awarded the accounting role in addition to the basic user role.
A manager Dolly would already have the basic user role from the time
when she joined the organization and may also have other roles. As she has
been promoted to a management post, so her needs to access other systems
have increased. It may also be, however, that her needs to access some
systems, as a result of her previous post, are no longer appropriate in her
management role. Thus, if Dolly had basic user and accounting as her roles
before promotion, it may be that she is granted the manager role, but has
her accounting role rescinded. This would leave her with the basic user and
manager roles suitable for her post.
Discretionary Access Control [DAC] Model
The DAC model is when the owner of a resource decides whether to allow a
specific person access to that resource. This system is common in distributed
environments that have evolved from smaller operations into larger ones. When
it is well managed, it can provide adequate access control, but it is very dependent
upon the resource owner understanding how to implement the security policies
of the organization, and of all the models it is the most likely to be subject to
management by mood. Ensuring that authorized people have access to the correct
resource requires a good system for tracking leavers, joiners, and job changes.
Tracking requests for change is often paper driven, error prone, and can be costly
to maintain and audit.
Mandatory Access Control [MAC] Model
The MAC model is where the resources are grouped or marked according to a
sensitivity model. This model is most commonly found in military or government
environments. One example would be the markings of unclassified, restricted,
confidential, secret, and top secret. User privileges to view certain resources are
dependent upon that individual’s clearance level.
Reports
Identity Manager provides support for provisioning, for user accounts and for
access to various resources. Implemented within a suite of security products,
Identity Manager plays a key role to ensure that resources are provisioned only
to authorized persons. Identity Manager safeguards the accuracy and
completeness of information processing methods and granting authorized users
access to information and associated assets. Identity Manager provides an
integrated software solution for managing the provisioning of services,
applications, and controls to employees, business partners, suppliers, and others
associated with your organization across platforms, organizations, and
geographies. You can use its provisioning features to control the setup and
maintenance of user access to system and account creation on a managed
resource.
Approval workflows
Account and access request workflows are started during account and
access provisioning. They are typically used to define the approval steps
for account and access provisioning.
Identity Manager provides audit trail information about how and why a
user has access. On a request basis, Identity Manager provides a process
to grant, modify, and remove access to resources throughout a business.
The process provides an effective audit trail with automated reports.
Dormant accounts. You can view a list of dormant accounts with the
Reports feature. Identity Manager includes a dormant account attribute
to service types that you can use to find and manage unused accounts
on services.
The Identity Manager administrator can also create new rules to be used
in password policies.
Identity Manager combines policies for all accounts that are owned by the
user to determine the password to be used. If conflicts between password
policies occur, the password might not be set.
Audits that are specific to recertification are created for use by several
reports that are related to recertification.
Reports
Identity Manager’s role is to manage users and their accounts. Passwords, group
memberships, and other attributes are associated with the users and accounts.
These all relate to managed systems and applications. To enable management of
users, accounts, and associated information, Identity Manager uses roles and
policies. Identity Manager also contains workflow, audit logs, and reports.
The figure above shows the relationship between a user and an account. In
the figure, a user (an employee, Jane Doe) is defined in the HR system of
the company. When Jane is defined to Identity Manager, user accounts on
managed resources such as UNIX, Microsoft Active Directory, and
internal applications can be provisioned according to the provisioning
policy.
Identity feed
Identity Manager users are created either by importing identity records with the
use of an identity feed or by manually creating each user. An identity feed is the
process of synchronizing the data between an authoritative data source, such as
an HR system, and Identity Manager. The initial reconciliation populates Identity
Manager with new users, including their profile data. A subsequent reconciliation
creates new users and also updates the user profile of any duplicate users that are
found.
Identity feeds use data from an external authoritative data source to create,
modify, and delete user records in Identity Manager. Identity feeds are usually
implemented using connectors. Connectors can read file-based data in several
standard formats, such as comma-separated value, Extensible Markup Language
(XML), and LDAP Data Interchange Format (LDIF). Connectors can also extract
data from structured data sources such as directories and databases. There are
several common reasons for using an identity feed.
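For instance, a connector reading a comma-separated identity feed could be
sketched as follows (the file layout and column names are hypothetical):

import csv

def load_identity_feed(path):
    # Read a comma-separated feed exported from an authoritative source
    # such as an HR system and map it to user attributes.
    with open(path, newline="") as feed:
        for row in csv.DictReader(feed):
            yield {
                "uid": row["employee_id"],
                "cn": row["full_name"],
                "mail": row["email"],
            }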
Passwords
Role
Identity Manager provides user entitlement management for user accounts.
Entitlement management is technology that grants, resolves, enforces,
revokes, and administers fine-grained access entitlements (also referred to
as “authorizations,” “privileges,” “access rights,” “permissions,” and/or
“rules”). Its purpose is to execute IT access control policies to ensure that
users do not have access to more privileges than they need to perform
their jobs.
Identity Manager uses roles to manage user entitlements. Roles (or
organizational roles) are a method of providing users with entitlements to
managed resources. Roles determine which resources are provisioned for
a user or set of users who share similar responsibilities. A role is a job
function that identifies the tasks that a person can do and the resources to
which the person has access.
You can assign a user to one or more roles. Additionally, roles can
themselves be members of other roles; these are termed child roles and
contribute to the role hierarchy. A role might have more than one parent,
and the inheritance comes from all parents and ancestors.
Activities are often assigned to roles rather than to individuals. This role-
based model lowers the risk that individuals might gain more system access
than required by their job function. You can also define policies (separation
of duty policy) to prevent users from having multiple roles that result in a
conflict of interest.
This diagram shows an example role hierarchy. The roles are shown in
blue. The role relationships are shown with blue arrows. People are shown
in green. A role hierarchy can include business roles and application roles.
Roles can have multiple children, like the All Hospital Employees role.
Roles can have multiple parents, like the Doctors role. People can be
members of multiple roles. Anna is a member of the Cardiologist and
Clinical Educators roles and receives the entitlements associated with those
roles. And, since the roles are part of a role hierarchy, she inherits the
entitlements from the Doctors, eMR User, and All Hospital Employees
roles.
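The inheritance described here can be sketched as a walk up the role
hierarchy (the parent relationships follow the example above; the
entitlement names are hypothetical):

def effective_entitlements(user_roles, parents, entitlements):
    # Collect entitlements from the user's roles and all ancestor roles.
    seen, stack, result = set(), list(user_roles), set()
    while stack:
        role = stack.pop()
        if role in seen:
            continue
        seen.add(role)
        result |= set(entitlements.get(role, ()))
        stack.extend(parents.get(role, ()))  # a role may have several parents
    return result

parents = {
    "Cardiologist": ["Doctors"],
    "Clinical Educators": ["All Hospital Employees"],
    "Doctors": ["eMR User"],
    "eMR User": ["All Hospital Employees"],
}
entitlements = {
    "All Hospital Employees": {"badge-access"},
    "eMR User": {"emr-login"},
    "Doctors": {"prescribe"},
}
# Anna is a member of two roles and inherits from all of their ancestors.
effective_entitlements(["Cardiologist", "Clinical Educators"],
                       parents, entitlements)
# -> {"badge-access", "emr-login", "prescribe"}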
Policy
Identity Policy
You can define basic rules for an identity policy. Basic rules can specify
which attributes to use, how many characters are used from each
attribute, and what case to use when creating a user ID. To set a
character limit, an identity policy rule defines the number of characters
to use from a first and a second attribute to form the user ID. Forming
the user ID from the attributes is subject to several conditions.
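As a hypothetical example of such a rule, a user ID might be formed from
one character of the first attribute and up to seven characters of the
second, in lowercase:

def build_user_id(given_name, surname, first_chars=1, second_chars=7):
    # Take a set number of characters from each attribute and apply the
    # case rule defined by the identity policy.
    return (given_name[:first_chars] + surname[:second_chars]).lower()

build_user_id("Jane", "Doe")  # -> "jdoe"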
Password Policy
A password policy defines the password strength rules that are used to
determine whether a new password is valid. A password strength rule
is a rule to which a password must conform. For example, password
strength rules might specify that the minimum number of characters of
a password must be 5. The rule might also specify that the maximum
number of characters must be 10.
A password policy sets the rules that passwords for a service must meet,
such as length and type of characters allowed and disallowed.
Additionally, the password policy might specify that an entry is
disallowed if the term is in a dictionary of unwanted terms.
You can specify the following standards and other rules for passwords (a
sketch of a password check appears after the list):
Minimum and maximum length
Character restrictions
Frequency of password reuse
Disallowed user names or user IDs
Minimum password age
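A minimal sketch of checking a password against rules like these (the
limits of 5 and 10 come from the example above; the remaining rules are
illustrative):

import re

def check_password(password, disallowed_terms, min_len=5, max_len=10):
    # Minimum and maximum length.
    if not (min_len <= len(password) <= max_len):
        return False
    # Character restriction (illustrative): require at least one digit.
    if re.search(r"\d", password) is None:
        return False
    # Disallowed terms, such as user names, user IDs, or dictionary words.
    if any(term in password.lower() for term in disallowed_terms):
        return False
    return True

check_password("s3cret7", {"jdoe", "password"})  # -> True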
Adoption Policy
Provisioning Policy
Recertification Policy
Data tier
The data tier is typically an LDAP directory or a database in which Identity
Manager stores all information about users, roles, and policies, together
with transactional log records and audit data. Identity Manager can use
LDAP, a database, or both to store this information.
Identity Manager can store all of this information in LDAP alone.
Identity Manager can store all of this information in a database alone.
Identity Manager can also use both LDAP and a database. In that scenario,
users, roles, and policies are normally stored in LDAP, while transactional
log records and audit data are stored in the database.
Introduction
Identity and Access Management Governance is a set of processes and
policies for organizations to manage risks and maintain compliance with
regulations and policies by administering, securing, and monitoring
identities and their access to applications, information, and systems.
Although potentially complex in implementation, the concept of Identity
and Access Management (IAM) Governance is fairly straightforward:
determine who should have access to what resources and who should not,
according to government regulations, industry-specific regulations (SOX,
HIPAA, GLBA, etc.), and business regulations and guidelines.
Figure: Deliver the right accesses for the right people to the right
resources at the right time in the right context
People, their identity attributes, and their associated roles provide critical
links to the business and business processes that deliver organizational
visibility, accountability, and improved efficiency. Applications and their
associated roles link entitlements to the users so that they can perform their
job through appropriate access to systems and information. The
management of application entitlements should leverage the business
context of identities, services, and the surrounding environment. An
ongoing review of user activity monitoring aids conformance to policies
and regulations, but it also can leverage the monitoring and analysis of user
activities to predict, detect, and correct abnormal user behavior. It is also a
key feedback loop into an organization’s overall governance infrastructure.
Achieving IAM governance does not require that an organization start with
planning and follow the phases in any specific order. Organizational
priorities, controls, operating procedures, and the current state of the
identity repositories dictate where a business starts and where to build an
authoritative System of Record (SOR) for IAM Governance.
Access certification
Role (re)certification
The recertification policy also defines the operation that occurs if the
recipient declines or does not respond to the recertification request.
Recertification policies use a set of notifications to initiate workflow
activities in the recertification process. For example, a system
administrator of a specific service can create a recertification policy for the
service that sets a 90-day interval for account recertification. If the
recipient of the recertification declines recertification, the account can be
automatically suspended.
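A sketch of this example policy (the 90-day interval comes from the text;
the data structures are hypothetical):

from datetime import date, timedelta

RECERT_INTERVAL = timedelta(days=90)  # interval set by the example policy

def process_recertification(account, last_certified, recipient_response):
    """recipient_response is "approve", "decline", or None (no response)."""
    if date.today() < last_certified + RECERT_INTERVAL:
        return "no action needed"
    if recipient_response in (None, "decline"):
        # The policy defines what happens on decline or no response;
        # here the account is automatically suspended.
        account["status"] = "suspended"
        return "suspended"
    account["last_certified"] = date.today()
    return "recertified"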
Automatic remediation
To deliver validation and remediation of user access and to ensure that the
changes are immediately reflected on the end system, organizations can
integrate access certification with user provisioning. If the certifier
removes the access or role for a user, the system automatically removes the
user access.
Certifiers can
Preview the impact of the certification before submitting it to see if any roles,
accounts, or groups are affected by the access decision.
Save incremental progress as a draft and complete the certification at a later time.
Access Enforcement
Identify the process, such as the processes for bringing users into
and out of the company, division, and department.
Discover data across people and the application and data
infrastructure.
While roles help simplify access certification and access management, it
is important not to implement so many roles that they become
unmanageable. Role planning and role management require in-depth
thinking about the purpose and structure of the roles (flat versus
hierarchical).
Role and User Lifecycle describes how roles, users, and accesses are
created, managed, and terminated based on certain events or a time-
based paradigm.
Delegated Administration
Unlike a personal identity such as jdoe, privileged accounts can be
accessed only with a privileged password, and account access is hard to
disable. In an enterprise environment, multiple administrators might share
access to a single user ID for easier administration. When multiple
administrators share accounts, you can no longer definitively prove which
administrator used an account. You lose personal accountability and audit
compliance.
Administrative overhead
Security administrators and resource owners spend an inordinate amount
of time tweaking provisioning policies, reconciling accounts, and
performing audits to minimize account proliferation on managed
resources.
The figure below summarizes the two extreme situations related to how
organizations might deal with privileged IDs.
This solution for sharing IDs is best applied to privileged roles that fit the
previously described scenarios. However, not all privileged IDs must be
shared. It is also not appropriate to force-fit all privileged IDs as shared
IDs. After all, the process of check-out/check-in (COCI) introduces
varying degrees of inconvenience to privileged users.
Introduction
Increasingly, traditional on-premises IAM solutions are now coupled with cloud-
based IAM to more effectively support the rapidly evolving nature of digital
transformation and computing requirements.
Private clouds: Used exclusively for a single, private organization but may
support multiple internal consumers; widely considered the most secure
deployment model
Public clouds: Used by multiple, unrelated organizations on a shared
basis; very common deployment model, particularly for SaaS applications
Community clouds: Used by organizations with a common purpose,
mission, or audience
Hybrid clouds: A combination of two or more cloud models (private,
public, and community)
The industry frequently uses the term hybrid to define a combination of
traditional on-premises and off-premises hosting options; essentially,
on-premises combined with any cloud model.
Introducing IDaaS
IDaaS provides relief to those managing data centers and frustrated with many
challenges of on-premises applications. Many of the challenges encountered by
those managing IAM solutions in their own environments are negated with
IDaaS.
IDaaS solutions provide customers relief from the overhead of infrastructure
support, specialized staffing, providing consistent deployments, and maintenance
and upgrades. As you evaluate IDaaS, be sure to factor the benefits in this section
into your decision matrix with their associated savings.
1. Infrastructure
IDaaS solutions don’t require servers, storage, or other infrastructure
installed and maintained at the consumer’s location; everything
is hosted from the cloud. For IDaaS, the only client-side equipment
required is smart card readers or biometric devices on workstations
if MFA is utilized, but those devices are necessary regardless
of IDaaS or on-premises IAM. The benefit to the consumer is that there
are no capital expenditures (CAPEX) on hardware or infrastructure.
2. Staffing
IDaaS transfers administrative support from the consumer to the
cloud service provider. The infrastructure administrative duties
such as installation and configuration are already performed by the
cloud service provider at the multitenant level for all consumers;
the cloud provider staff performs these tasks for everyone. Application
level configuration specific to the customer’s application
environment may still be performed by the consumer, but wizards
and templates ensure that the “heavy lifting” is no longer required.
The consumer reaps the benefit of highly skilled, on-premises staff
being freed up to support other business-centric initiatives.
3. Deployment
IDaaS solutions are automatically deployed via the cloud by using
a standardized multitenant architecture. When a new consumer
starts its service, a new IDaaS environment is provisioned in the
cloud by using virtualization and cloning technologies. A standardized
baseline IDaaS image at the latest version and security
patch level is then customized by the consumer using self-service
portal access, wizards, and templates.
The benefit of a standardized deployment process is that the
consumer is provided a secured, standardized, and baselined
environment, so application-specific customizations can start
sooner and at less risk.
Many organizations have a cloud footprint, but for those without, a SaaS
solution such as IDaaS is a good way to start.
Hybrid environments
Organizations with a combination of on-premises and cloud-based applications
are hybrid implementations. A majority of companies haven’t moved, or can’t
move, all their on-premises applications to the cloud (for a variety of reasons),
but they’re leveraging the cloud for commodity-based or new-technology
applications.
Hybrid environments are also the most complex environments to support, but
they benefit greatly from IDaaS:
Simplified end-to-end cloud-based IAM for the enterprise across on-
premises and cloud-hosted applications
Elimination of siloed piecemeal IAM solutions that are costly, complex,
and have gaps in coverage
Standardized and consistent IAM services for all types of customers
regardless of applications utilized
Self-service portals and employee launch pads to consolidate and
simplify application access
Accelerated access to new capabilities via SaaS and mobile applications
while maintaining access to legacy applications
Providing an end-to-end enterprise-level solution for IAM is the strength of
IDaaS solutions. Hybrid environments are particularly well suited for IDaaS to
reduce complexity and costs for on-premises applications and extend
functionality into the cloud for modern SaaS applications.
Appendix
When auditing identity and access management (IAM), the following checklist
is a high-level overview and is not intended to be a comprehensive audit
program or address all IAM-related risks.
Audit Question/Topic Status
1. Is there an IAM strategy in place?
A critical element for an effective IAM process is
the presence of a consistent approach to manage
the supporting information technology (IT)
infrastructure. Having a cohesive
strategy across the organization will enable all
departments to manage people, their identities,
and the access they need using similar processes,
if not necessarily with the same technology.
• Inquire about current IAM strategies in the
organization.
• If they exist, determine how and by whom they are
managed.
2. Are the risks associated with the IAM
process well understood by management and
other relevant individuals? Are the risks
addressed by the strategy?
Simply having a strategy does not ensure it covers
all the risks that IAM may present. It is important
that the strategy contains elements that identify
all relevant risks.
• Determine whether a risk assessment of
established IAM processes was conducted.
• Determine how risks are identified and addressed.
3. Is the organization creating or changing an
IAM process only to satisfy regulatory
concerns?
It is critical that IAM processes are integrated
with broader business issues and strategies. There
are numerous benefits to having a robust IAM
environment, such as having a better internal
control environment.
• Determine the needs of the organization with
respect to IAM.