
Chapter 1

1. Fundamentals
Legacy System Features:

Many computer systems retain outdated trust assumptions from early Internet designs.

These assumptions are from a time when the Internet was used mainly by
researchers and military labs.

Today, they allow Internet-based crime to flourish, as attackers exploit these outdated trust models.

Attack Analysis and Defense:

Analyzing attacks helps determine the severity of damage they could cause.

It also helps estimate whether the attack could be replicated or spread to other systems.

Defense against attacks includes:

Identifying compromised machines.

Removing malicious code from systems.

Patching vulnerabilities to prevent further exploitation.

Sound Security Models:

Effective security relies on sound models that outline specific security properties.

Models should anticipate potential attack types.

They should also define specific defenses for each identified threat.

Rigorous Testing and Monitoring:

Both hardware and software require rigorous testing to identify
vulnerabilities.

After deployment, systems need monitoring procedures to detect any security breaches.

Quick response plans should be in place for breaches.

Timely Patching:

Applying security patches as soon as they’re available is essential.

Patching helps maintain system security and protect against newly discovered threats.

1.1 Confidentiality, Integrity, and Availability


Misuse of Computers and Networks:

Misuse is increasing rapidly, with issues like spam, phishing, and computer viruses becoming multibillion-dollar problems.

Identity theft is a major concern, posing threats to personal finances and credit ratings, and creating corporate liabilities.

Need for Security Knowledge:

Society requires a broader knowledge of computer security.

There is a need for security-educated IT professionals who can defend against and prevent attacks.

Security-aware computer users are also essential for managing their information and systems safely.

Introduction to Key Security Concepts:

The foundational model of information security is often represented by the acronym C.I.A.:

Confidentiality: Ensures that information is accessible only to those authorized to view it.

Integrity: Ensures that information remains accurate and unaltered, except by authorized users.

Availability: Ensures that information and resources are accessible to authorized users when needed.

The provided image (Figure 1) visually represents the C.I.A. concepts:

It shows three overlapping circles, each labeled with one of the core concepts:
Confidentiality, Integrity, and Availability.

The intersection of these circles symbolizes the balance needed between these concepts to achieve comprehensive information security.

Confidentiality

Definition of Confidentiality:

Confidentiality is the prevention of unauthorized information disclosure.

It involves allowing only authorized access to data while blocking others
from learning its content.

Historical Context:

The concept of keeping information secret predates computers, as seen in early cryptographic techniques.

Caesar Cipher: Julius Caesar used a basic cipher that replaced each letter
with another. Although simple by today's standards, it was effective due to
limited literacy among his enemies.

Tools for Confidentiality:

Encryption:

Information is transformed using an encryption key.

Only those with the decryption key can access the original data.

Access Control:

Sets rules to restrict data access based on identity or role.

Access is granted only to those with a "need to know."

Authentication:

Determines a person's or system's identity or role.

Commonly uses:

Something you have (e.g., smart card or radio key fob).

Something you know (e.g., password).

Something you are (e.g., fingerprint).

Authorization:

Decides if a person or system is allowed access based on policies.

Prevents attackers from tricking the system for unauthorized access.

Physical Security:

Establishes physical barriers, such as:

Locks on doors and cabinets.

Faraday cages to block electromagnetic signals.

Soundproofing to prevent eavesdropping.

Example of Confidentiality in Action:

When a browser shows a lock icon while entering a credit card number,
multiple confidentiality tools are at work:

The browser authenticates the website.

The website checks the browser’s authorization to access the page.

Encryption protects the card data during transmission.

Physical security at the data center protects the server holding the
information.

Physical Eavesdropping Risks:

Physical eavesdropping can reveal sensitive information, such as:

Keystrokes captured through sound.

Screen images recreated via electromagnetic emissions or reflections on nearby surfaces.

Image Description
The image (Figure 2) illustrates the three foundations of authentication:

Something you are: Represented by a fingerprint, indicating biometric traits.

Something you know: Shown with a person and sample passwords, indicating
knowledge-based authentication (e.g., passwords).

Something you have: Depicted with a radio token with secret keys,
representing possession-based authentication (e.g., smart cards).

Integrity

Definition of Integrity:

Integrity ensures that information has not been altered in an unauthorized way.

Telephone Game Analogy:

The concept of integrity can be illustrated by the Telephone game.

In this game, a message is whispered around a circle, often getting distorted by the time it returns to the starting point.

This distortion shows how easily data integrity can be lost when
information is repeatedly passed.

Ways Data Integrity Can Be Compromised:

Benign (unintentional) Compromise:

Examples include a cosmic ray flipping a bit on a storage device or a disk crash destroying files.

Malicious Compromise:

Examples include a virus modifying operating system files, causing the system to spread the virus to other computers.

Tools for Supporting Data Integrity:

Backups:

Data is periodically archived so that files can be restored if altered in an unauthorized or unintended way.

Checksums:

A function computes a numerical value based on the contents of a file. Even a small change in the file (like flipping a single bit) produces a different checksum, helping to detect breaches in data integrity; a short sketch after this list illustrates the idea.

Data Correcting Codes:

Methods that allow small changes in data to be detected and corrected automatically.

These codes work at small storage units, such as bytes or memory words, and can sometimes apply to entire files.
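The following minimal Python sketch (an illustration, not from the chapter) shows how a checksum detects a single-bit flip; it uses the standard hashlib module and made-up data.

```python
import hashlib

def checksum(data: bytes) -> str:
    """Return a SHA-256 checksum of the given data."""
    return hashlib.sha256(data).hexdigest()

original = b"Pay Alice $100"
stored_checksum = checksum(original)

# Simulate a single-bit flip (e.g., caused by a cosmic ray or tampering).
corrupted = bytearray(original)
corrupted[4] ^= 0x01  # flip the lowest bit of one byte
corrupted = bytes(corrupted)

# Even this tiny change yields a completely different checksum.
print(stored_checksum == checksum(original))   # True  -> integrity intact
print(stored_checksum == checksum(corrupted))  # False -> integrity violation detected
```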

Common Trait in Integrity Tools:

All these tools use redundancy.

They replicate data or data functions to detect and sometimes correct breaches in integrity.

Importance of Metadata Integrity:

Integrity isn’t just about the data content; it also applies to metadata.

Metadata includes:

File ownership information, last modified times, and access permissions.

Attributes such as the file's name and location in the system.

Unauthorized changes to metadata, like access timestamps, are considered integrity violations.

Example of Metadata Integrity Violation:

An intruder might not alter file content but may change metadata (e.g.,
access timestamps).

Such access could compromise confidentiality if the files aren't encrypted, and the altered timestamps may help detect the unauthorized access if integrity checks are in place.

Availability

Definition of Availability:

Availability ensures that information is accessible and modifiable in a
timely manner by authorized users.

Example of Practical Availability:

Information that is secure but difficult to access, such as data kept in a remote, highly protected location, is of little practical use from an information security perspective.

The usefulness of information often depends on how readily available it is.

Examples of Availability in Action:

Stock quotes are most valuable when they are up-to-date.

Credit card security: If a list of stolen credit card numbers isn’t available
to merchants in time, it can lead to financial loss.

Tools for Supporting Availability:

Physical Protections:

Infrastructure designed to keep information accessible even under adverse physical conditions.

Examples include buildings that withstand storms, earthquakes, and bomb blasts and are equipped with generators to handle power issues.

Computational Redundancies:

Redundant systems, such as RAID (Redundant Array of Inexpensive Disks), ensure data availability.

Web server farms use multiple servers so that if one server fails, the
website remains available.

Availability as an Attack Target:

Attackers may target availability even if they aren’t concerned with data
confidentiality or integrity.

For instance, a thief with stolen credit cards may try to disrupt the availability of the stolen-card list, preventing it from being broadcast to merchants.

1.2 Assurance, Authenticity, and Anonymity


Introduction to AAA Concepts:

Beyond the CIA Triad (Confidentiality, Integrity, Availability), AAA concepts are also essential in modern computer security.

AAA stands for Assurance, Authenticity, and Anonymity.

AAA vs. CIA:

Unlike CIA concepts, which are interconnected, the AAA concepts are
independent of one another, focusing on different aspects of security and
privacy.

The image (Figure 3) illustrates the AAA concepts:

Assurance: Shown with an image symbolizing confidence and reliability in security controls.

Authenticity: Depicted with an image of a document or token, representing the genuineness of data and identities.

Anonymity: Represented by an image that suggests concealment or privacy, emphasizing the protection of user identity.

Assurance

Definition of Assurance:

Assurance in computer security refers to how trust is provided and managed in systems.

It involves confidence that people or systems behave as expected.

Components of Trust:

Policies: Define expected behaviors for people or systems.

Example: An online music system’s policy may specify user access and
copying rules.

Permissions: Describe allowed behaviors for users interacting with the system.

Example: An online music store may allow limited copying for users
who bought songs.

Protections: Mechanisms that enforce policies and permissions.

Example: A music system may have protections to prevent unauthorized access and copying.

Assurance in Two Directions:

Trust is managed from systems to users and from users to systems.

Example: A user provides credit card details expecting the system to follow its policies, while the system manages trust through permissions and protections.

Beyond CIA – Resource Management:

Assurance also involves protecting and managing system resources, not just information.

System designers need assurance that users follow policies regarding CPU, memory, and network usage.

Trust Management:

Trust management includes designing enforceable policies, granting permissions to trusted users, and implementing enforcement components.

System Assurance and Software Engineering:

Assurance requires that the system's software conforms to its design.

Incorrect implementation can compromise security, even if the design is correct.

Example: A pseudo-random number generator (PRNG) used with a fixed seed will produce predictable sequences, compromising security.
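As a brief illustration of that last point (a sketch, not from the chapter): seeding Python's general-purpose PRNG with a fixed value makes its output fully predictable, whereas the standard secrets module draws from an unpredictable source.

```python
import random
import secrets

# A general-purpose PRNG seeded with a fixed value always produces
# the same "random" sequence, so an attacker who knows the seed can
# reproduce every value it will ever generate.
rng1 = random.Random(1234)
rng2 = random.Random(1234)
print([rng1.randint(0, 9) for _ in range(5)])
print([rng2.randint(0, 9) for _ in range(5)])  # prints the identical list

# For security-sensitive values (keys, tokens), use a cryptographic
# randomness source instead.
print(secrets.token_hex(16))  # unpredictable 128-bit value, hex-encoded
```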

User Trust in Systems:

Users often rely on limited computing capabilities and on legal or reputational recourse when placing trust in systems.

Assurance in systems can be strengthened by mechanisms like encrypted browser sessions and checks on website authenticity.

Authenticity

Need for Authenticity:

Online services require a way to enforce policies and commitments.

Enforcing contracts electronically ensures that commitments, like buying a song or renting a movie, can be trusted.

Definition of Authenticity:

Authenticity is the ability to verify that statements, policies, and permissions issued by people or systems are genuine.

Nonrepudiation:

Nonrepudiation is the property that authentic statements cannot be denied.

This prevents people or systems from denying their commitments.

Digital Signatures:

Digital signatures provide a computational way to authenticate documents.

They achieve nonrepudiation, serving as a digital counterpart to handwritten signatures.

Digital signatures also help verify document integrity by becoming invalid if the document is modified.

Requirement for Electronic Identification:

Authenticity depends on reliable electronic identification methods to verify identities.

Anonymity

Need for Anonymity:

Personal identity is often tied to digital records (e.g., medical, purchase, legal records), leading to privacy concerns.

Anonymity is the property that allows certain records or transactions to remain unattributable to any individual.

Tools for Achieving Anonymity:

Aggregation:

Combines data from multiple individuals so that disclosed data (e.g., sums, averages) cannot be linked to any single person.

Example: The U.S. Census publishes regional data only if it doesn't reveal individual details.

Mixing:

Intertwines transactions or information in a way that prevents tracing back to individuals.

Example: Mixing systems process data in a quasi-random way to protect identities.

Proxies:

Trusted agents perform actions on behalf of users, concealing their
identities.

Example: Internet proxies allow users to access sites anonymously.

Pseudonyms:

Fictional identities used in place of real ones for communications or transactions.

Example: Social networking sites allow interactions with pseudonyms to protect real identities.

Goal of Anonymity:

Anonymity should be pursued with safeguards to ensure privacy whenever appropriate.

1.3 Threats and Attacks

The main threats and attacks, with a definition, an example, and the security property each one targets:

Eavesdropping: Intercepting information meant for someone else during transmission. Example: packet sniffers monitoring nearby Internet traffic. Target: Confidentiality.

Alteration: Unauthorized modification of information. Examples: man-in-the-middle attacks, computer viruses modifying system files. Target: Data Integrity.

Denial-of-Service: Interrupting or slowing down a service or access to information. Example: email spam filling up mail queues to slow down email servers. Target: Availability.

Masquerading: Creating fake information as if from a genuine source. Examples: phishing (fake websites that gather passwords), spoofing (network packets with false addresses). Target: Authenticity, plus Confidentiality/Anonymity in the case of phishing.

Repudiation: Denial of a commitment or of data receipt. Example: backing out of a contract that requires acknowledgment of data receipt. Target: Assurance.

Correlation and Traceback: Combining data sources to trace the origin of a data stream. Example: determining the source of information by integrating different data flows. Target: Anonymity.

Other Attacks:

Military-Level Attacks: Targeting cryptographic secrets.

Composite Attacks: Combining multiple attack types.

1.4 Security Principles


Economy of Mechanism:

Simplicity in security design and implementation is crucial.

A simple security framework is easier for developers and users to understand.

Simplicity supports efficient development and verification of security methods.

This principle is closely linked to implementation and usability.

Fail-Safe Defaults:

The default configuration of a system should have conservative protection.

New users should be given minimal access rights by default.

Often, applications favor usability over security, leading to more open default settings (e.g., web browsers running downloaded code).

Many access control models assume that if access rights aren't specified, access is denied.

Complete Mediation:

The system should check every access to a resource for compliance with
the security policy.

Avoid performance optimizations that cache and reuse the results of authorization checks, as permissions may change over time.

Example: Online banking sites should prompt users to sign in again after a set time (e.g., 15 minutes).

File systems should check permissions each time a program accesses a file to avoid risks if permissions change during runtime.

Open Design:

The system should make its security architecture and design publicly
available.

Security should rely only on keeping cryptographic keys secret, not on hiding system design.

Open design encourages scrutiny by multiple parties, leading to early discovery and correction of vulnerabilities.

Making the system's implementation available, such as through open source software, allows for detailed security reviews and easier bug fixes.

This principle opposes security by obscurity, where organizations try to achieve security by hiding cryptographic algorithms, a historically unsuccessful approach.

Changing a compromised cryptographic key is straightforward, but modifying a system after its design has been leaked is usually infeasible.

Separation of Privilege:

Require multiple conditions to access restricted resources or perform certain actions.

Apply separation of system components to limit potential damage if one component is breached.

Least Privilege:

Ensure each program and user operates with the minimum privileges
needed to function.

By enforcing this, you restrict privilege abuse and minimize damage from
compromised applications or accounts.

Example: A web server application should have only the permissions essential for its operation, not full administrator privileges, to reduce potential harm.

Least Common Mechanism:

Minimize shared mechanisms in systems with multiple users to prevent security risks.

Provide separate access channels for users needing access to the same
resources to avoid unintended security issues.

Psychological Acceptability:

Design user interfaces to be intuitive and aligned with user expectations.

Ensure security settings are straightforward, minimizing differences between program behavior and user expectations to avoid misconfigurations.

Example: Complex interfaces in email applications discourage users from utilizing cryptographic features like encryption and digital signatures.

Work Factor:

Design security mechanisms where the cost of bypassing them matches the potential resources of an attacker.

Example: A university database protecting student grades may need less advanced security than a system safeguarding military secrets.
Compromise Recording:

In some cases, it is more effective to record intrusion details than to prevent them completely.

Example: Use surveillance cameras or log systems to monitor access to files, emails, and web browsing in office networks.

2 Access Control Models


Access control models provide a rigorous way to manage who can access
information, helping to prevent attacks on confidentiality, integrity, and
anonymity.

These models rely on data managers, data owners, or system administrators to define access control specifications.

The goal is to restrict access only to those with a legitimate need, following
the principle of least privilege.

2.1 Access Control Matrices:


An access control matrix is a table that defines permissions for each
subject-object pair.

Rows represent subjects (users, groups, or systems), while columns represent objects (files, directories, devices).

Cells contain access rights (e.g., read, write, execute) for each subject-object
combination, with empty cells meaning no access.

Advantages:

Allows quick lookup of access rights by checking a specific cell.

Provides a visual overview of access control relationships, showing permissions for all subject-object pairs at once.

Disadvantages:

Scalability is a major issue; for large systems with many subjects and
objects, the matrix becomes extremely large and unmanageable.

Example: A system with 1,000 users and 1,000,000 files would need a
matrix with 1 billion cells, which is impractical to manage.

Alternatives:

To address the scalability issue, Access Control Lists (ACLs), Capabilities, and Role-Based Access Control (RBAC) provide similar functionality with reduced complexity.

The image (Table 1) shows an example access control matrix with read, write,
and execute permissions for four users across one file (/etc/passwd) and three
directories. This matrix provides a clear example of how different users have
distinct access rights to each resource.
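A minimal sketch (illustrative only; the subjects, objects, and rights are hypothetical, loosely modeled on Table 1) of an access control matrix as a nested dictionary, with a lookup that treats missing cells as no access:

```python
# Access control matrix: rows are subjects, columns are objects,
# cells hold sets of rights; a missing cell means no access.
matrix = {
    "mike":    {"/etc/passwd": {"r"}, "/usr/mike": {"r", "w", "x"}},
    "roberto": {"/etc/passwd": {"r"}, "/usr/roberto": {"r", "w", "x"}},
    "backup":  {"/etc/passwd": {"r"}, "/usr/mike": {"r"}, "/usr/roberto": {"r"}},
}

def is_allowed(subject: str, obj: str, right: str) -> bool:
    """Look up a single cell of the matrix."""
    return right in matrix.get(subject, {}).get(obj, set())

print(is_allowed("mike", "/usr/mike", "w"))     # True
print(is_allowed("backup", "/usr/mike", "w"))   # False (read-only)
print(is_allowed("mike", "/usr/roberto", "r"))  # False (empty cell)
```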

2.2 Access Control Lists (ACLs):


ACLs use an object-centered approach by defining a list for each object,
showing which subjects have access and their specific rights.

The ACL model compresses each column of the access control matrix by
ignoring empty cells, reducing size.

Advantages:

Reduces size by only including nonempty cells from the access control
matrix.

Stores ACLs with objects as metadata, making it easy for systems to
check permissions directly from the object (useful in file systems).

Disadvantages:

ACLs lack an efficient way to list all access rights of a given subject.

To find all access rights for a subject, the system must search every
object’s ACL individually, which can be time-consuming, especially for
tasks like removing a user from the system.

The image (Figure 5) shows access control lists (ACLs) for the directories and file
in Table 1, with each object listing the users who have read (r), write (w), and
execute (x) permissions.
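A short sketch (hypothetical data) showing how ACLs store each column of the matrix with its object, and why listing all rights of one subject requires scanning every object's ACL:

```python
# ACLs: each object keeps its own list of (subject, rights) entries,
# i.e., one compressed column of the access control matrix.
acls = {
    "/etc/passwd":  {"mike": {"r"}, "roberto": {"r"}, "backup": {"r"}},
    "/usr/mike":    {"mike": {"r", "w", "x"}, "backup": {"r"}},
    "/usr/roberto": {"roberto": {"r", "w", "x"}, "backup": {"r"}},
}

def object_check(obj: str, subject: str, right: str) -> bool:
    """Fast: permissions for an object are stored with the object itself."""
    return right in acls.get(obj, {}).get(subject, set())

def rights_of_subject(subject: str) -> dict:
    """Slow: must scan every object's ACL to collect one subject's rights."""
    return {obj: entry[subject] for obj, entry in acls.items() if subject in entry}

print(object_check("/usr/mike", "backup", "r"))  # True
print(rights_of_subject("backup"))               # requires a full scan of all ACLs
```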

2.3 Capabilities:
The capabilities model uses a subject-centered approach, listing objects
each subject has access to and specifying access rights.

This model compresses each row of the access control matrix by removing
empty cells.

Advantages:

Reduces size similarly to ACLs, as it only includes nonempty subject-object pairs.

Allows administrators to quickly determine all access rights for any subject by viewing the subject's capabilities list.

When a subject requests access, the system only checks the capabilities
list for that subject, which can be efficient if the list is small.

Disadvantages:

Capabilities aren't directly associated with objects, so determining all access rights for an object requires searching through every subject's capabilities list.

The image (Figure 6) shows capabilities lists for four users, each listing the read
(r), write (w), and execute (x) permissions they have for different objects.

2.4 Role-Based Access Control (RBAC):


In RBAC, administrators define roles and assign access rights to these roles,
instead of directly to users.

Each role is associated with specific access rights suitable for that role’s
responsibilities.

Subjects (users) are assigned to roles, and their access rights become the
union of the rights of all assigned roles.

Example: A student working as a backup agent would have both the student and backup agent roles, combining the access rights of both.

Role Hierarchies:

RBAC allows hierarchical role structures, where higher roles inherit
access rights from lower roles.

Example: In a computer science department, system administrator may be above backup agent, inheriting its rights.

Advantages:

Reduces the number of access rules to manage, as the system only needs to track role-based rights.

Simplifies checking access rights by verifying whether a subject's role has the required permission.

Disadvantages:

Most current operating systems do not support role-based access control natively.

The image (Figure 7) shows a role hierarchy for a computer science department,
illustrating how roles like Department Chair, System Administrator, and Faculty
are organized, with access rights inherited by roles lower in the hierarchy.
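A brief sketch of the RBAC idea (roles, rights, and subjects here are hypothetical examples in the spirit of Figure 7): a subject's effective rights are the union of the rights of all of its roles, and a role can inherit from a parent role.

```python
# Rights granted directly to each role.
role_rights = {
    "student": {("/home/student", "r")},
    "backup agent": {("/backups", "r"), ("/backups", "w")},
    "system administrator": {("/etc/passwd", "w")},
}

# Role hierarchy: child role -> parent role it inherits from.
parent = {"system administrator": "backup agent"}

# Each subject may hold several roles.
subject_roles = {
    "alice": {"student", "backup agent"},
    "bob": {"system administrator"},
}

def rights_of_role(role: str) -> set:
    """Rights of a role, including rights inherited from its parents."""
    rights = set(role_rights.get(role, set()))
    if role in parent:
        rights |= rights_of_role(parent[role])
    return rights

def rights_of_subject(subject: str) -> set:
    """Union of the rights of all roles assigned to the subject."""
    return set().union(*(rights_of_role(r) for r in subject_roles[subject]))

print(rights_of_subject("alice"))  # student + backup agent rights
print(rights_of_subject("bob"))    # admin rights plus inherited backup rights
```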

3. Cryptographic Concepts

Cryptography provides techniques to achieve various security goals
effectively.

3.1 Encryption:
Traditionally, encryption enables confidential communication between two
parties, often named Alice and Bob, over an insecure channel.

Plaintext (M) is the original message that needs to be kept confidential.

Alice converts plaintext M into ciphertext C using an encryption algorithm E, represented by C = E(M).

Ciphertext C is sent to Bob, who then uses a decryption algorithm D to retrieve the original plaintext M from ciphertext C, represented by M = D(C).
The encryption and decryption algorithms ensure that only Alice and Bob can
understand the message, keeping it secure even if others intercept ciphertext
C.

Cryptosystems:
The decryption algorithm requires a secret key known to Bob (and possibly
Alice) to retrieve the original message.

The encryption algorithm uses an encryption key associated with the decryption key. If deriving the decryption key from the encryption key is feasible, both keys should be kept secret.

A cryptosystem consists of seven components:

1. Set of possible plaintexts

2. Set of possible ciphertexts

3. Set of encryption keys

4. Set of decryption keys

5. Correspondence between encryption and decryption keys

6. Encryption algorithm

7. Decryption algorithm

Caesar Cipher Example:

The Caesar cipher uses the Latin alphabet (23 characters) with a shift
operation.

Encryption key: e = 3 (shifts each character forward by 3).

Decryption key: d = -3 (shifts each character backward by 3).

Example shifts:

s(D, 3) = G

s(R, -2) = P

Encryption replaces each character x in plaintext with s(x, e).

Decryption replaces each character x with s(x, d).

The encryption and decryption keys are opposites, and both algorithms
perform a circular shift on each character.
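A small Python sketch of the shift idea (illustrative; it uses the modern 26-letter alphabet rather than the 23-letter Latin one described above):

```python
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def shift(ch: str, key: int) -> str:
    """Circularly shift a single letter by key positions."""
    return ALPHABET[(ALPHABET.index(ch) + key) % len(ALPHABET)]

def caesar(text: str, key: int) -> str:
    """Encrypt with key e; decrypt by calling with the opposite key d = -e."""
    return "".join(shift(ch, key) for ch in text)

ciphertext = caesar("ATTACKATDAWN", 3)
print(ciphertext)               # DWWDFNDWGDZQ
print(caesar(ciphertext, -3))   # ATTACKATDAWN
```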

Modern Cryptosystems:
Modern cryptosystems are significantly more complex and secure than basic
ciphers like the Caesar cipher.

Advanced Encryption Standard (AES) is a popular modern cryptosystem that uses 128-, 192-, or 256-bit keys.

The length of AES keys makes brute-force attacks (trying all possible keys)
practically infeasible for an eavesdropper.

Symmetric Encryption:
In symmetric cryptosystems (or shared-key cryptosystems), the same key
(K) is used for both encryption and decryption.

Alice and Bob must share the key K to communicate securely.

AES is an example of a symmetric cryptosystem, requiring both parties to have access to the same key for secure communication.

The image (Figure 8) illustrates a symmetric cryptosystem, where both the
sender and recipient use a shared secret key for encryption and decryption. An
attacker who eavesdrops cannot decrypt the ciphertext without knowing the key.
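As an illustration of shared-key encryption (a sketch, not the chapter's own example; it assumes the third-party Python cryptography package, whose Fernet recipe uses AES under the hood):

```python
from cryptography.fernet import Fernet

# Alice and Bob must somehow share this key in advance, and keep it
# secret from Eve.
key = Fernet.generate_key()
shared = Fernet(key)

# Alice encrypts the plaintext M into ciphertext C.
ciphertext = shared.encrypt(b"Meet me at noon")

# Bob, holding the same key, recovers M = D(C); without the key,
# an eavesdropper cannot.
print(shared.decrypt(ciphertext))  # b'Meet me at noon'
```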

Symmetric Key Distribution:


Symmetric cryptosystems, like AES, are fast but require a secure way to
distribute the key (K) so that only the intended parties (e.g., Alice and Bob)
know it, not an eavesdropper like Eve.

If n parties wish to communicate privately, each pair of parties needs a unique key, resulting in a total of n(n − 1)/2 distinct keys.

The image (Figure 9) shows pairwise confidential communication among
multiple users, requiring n(n − 1)/2 distinct keys. Each key is shared only between
two users, ensuring privacy from other users.

Public-Key Encryption:
In a public-key cryptosystem, each user has a public key (shared openly) and
a private key (kept secret).

To send an encrypted message to Bob, Alice uses Bob’s public key to encrypt
her message. Bob then uses his private key to decrypt it.

This method avoids the need for a shared secret key and only requires each
user to keep their private key secure. (Advantage)

Efficient Communication: For n users, a public-key system needs n key pairs (public and private), unlike symmetric systems, which require n(n − 1)/2 keys. (Advantage)

Disadvantages of Public-Key Cryptography:

Slower Encryption/Decryption: Public-key algorithms (e.g., RSA, ElGamal) are slower than symmetric encryption, making them less suitable for sessions with frequent communication.

Larger Key Sizes: Public-key cryptosystems need longer keys (e.g., RSA
with 2048-bit keys) compared to symmetric systems (e.g., AES with 256-
bit keys).

Hybrid Approach: To overcome these issues, public-key cryptography is often used to exchange a shared secret key, which is then used for secure communication with a symmetric encryption scheme.

Figure 10: Shows a public-key cryptosystem where the sender uses the
recipient’s public key to encrypt, and the recipient uses their private key to
decrypt. An attacker cannot decrypt without the private key.

Figure 11: Illustrates pairwise communication in a public-key system, requiring only n key pairs for n users.

Figure 12: Depicts the use of a public-key system to exchange a shared secret
key, which is then used for symmetric encryption.
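A sketch of the hybrid approach in Figure 12 (illustrative; it assumes the third-party Python cryptography package): public-key encryption protects a freshly generated symmetric key, which then encrypts the actual message.

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Bob's key pair; Alice knows only the public key.
bob_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
bob_public = bob_private.public_key()

# Alice: generate a symmetric session key and encrypt it with Bob's public key.
session_key = Fernet.generate_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = bob_public.encrypt(session_key, oaep)

# Alice: encrypt the (possibly long) message with the fast symmetric key.
ciphertext = Fernet(session_key).encrypt(b"A long confidential document ...")

# Bob: unwrap the session key with his private key, then decrypt the message.
recovered_key = bob_private.decrypt(wrapped_key, oaep)
print(Fernet(recovered_key).decrypt(ciphertext))
```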

3.2 Digital Signatures:


Public-key cryptosystems allow the creation of digital signatures.

In typical public-key encryption schemes, the order of encryption and decryption can be reversed.

Bob can apply the decryption algorithm to a message M using his private key SB, resulting in D_SB(M).

When anyone applies the encryption algorithm with Bob's public key PB to this output, they get back the original message: E_PB(D_SB(M)) = M.

Using a Private Key for a Digital Signature:

When Bob wants to prove he is the author of message M, he creates a digital signature by applying his private key: S = D_SB(M).

This result, S, serves as Bob's digital signature for the message.

Bob sends both signature S and message M to Alice.

Alice can verify the signature by encrypting S with Bob's public key: M = E_PB(S).

This verification confirms that only Bob could have produced S, because creating it required his private key SB.

Limitation: This method produces a signature as long as the message, making it impractical for direct use in real-world applications.

Cryptographic Hash Functions:


Cryptographic hash functions provide checksums on messages with special
properties.

They are typically one-way functions, meaning it’s easy to compute the hash
of a message h(M), but difficult to reverse it.

Example: SHA-256 produces a 256-bit hash value and is believed to be one-way.
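A sketch combining the two ideas above (hash the message, then sign the short digest rather than the whole message); it assumes the third-party Python cryptography package and RSA-PSS, which the chapter does not prescribe:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

bob_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
bob_public = bob_private.public_key()
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

message = b"I, Bob, agree to buy this song for $1."

# Bob signs: the library hashes the message (SHA-256) and signs the digest,
# so the signature stays short regardless of message length.
signature = bob_private.sign(message, pss, hashes.SHA256())

# Alice verifies with Bob's public key; any modification invalidates it.
try:
    bob_public.verify(signature, message, pss, hashes.SHA256())
    print("signature valid")
except InvalidSignature:
    print("signature invalid")
```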

Message Authentication Codes (MAC):

A cryptographic hash function combined with a shared secret key provides message integrity.

If Alice and Bob share a secret key K, Alice can send a message M with integrity protection by computing a MAC: A = h(K || M).

Alice sends the pair (M, A) to Bob over an insecure channel.

Bob, upon receiving (M', A'), computes his own MAC: A'' = h(K || M').

If A'' = A', Bob can be confident that M' is the original message M.

An attacker cannot alter the message and compute a correct MAC without knowing the secret key K.
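A minimal sketch of the h(K || M) construction described above (illustrative key and message; real systems typically use the standardized HMAC construction, shown via Python's hmac module for comparison):

```python
import hashlib
import hmac

key = b"shared-secret-key"
message = b"Transfer $100 to account 42"

# The simple construction from the text: A = h(K || M).
def simple_mac(k: bytes, m: bytes) -> str:
    return hashlib.sha256(k + m).hexdigest()

tag = simple_mac(key, message)

# Bob recomputes the MAC over what he received and compares.
print(simple_mac(key, message) == tag)                         # True: intact
print(simple_mac(key, b"Transfer $900 to account 66") == tag)  # False: altered

# In practice, use HMAC, which avoids weaknesses of plain h(K || M).
print(hmac.new(key, message, hashlib.sha256).hexdigest())
```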

The image (Figure 15) demonstrates using a message authentication code (MAC)
to verify message integrity. The sender computes a MAC using a shared key, and
the recipient verifies it, detecting any unauthorized changes made during
transmission.

Digital Certificates
Public-key cryptography enables Alice to send a shared secret key K to Bob by encrypting it with Bob's public key PB.

Problem: Alice needs assurance that PB truly belongs to the right Bob.

Solution: A trusted authority, known as a certificate authority (CA), can issue a digital certificate verifying Bob's identity and his public key.

A digital certificate links a person’s identity with their public key and is
digitally signed by the CA.

To trust Bob’s public key, Alice only needs to trust the CA and know its public
key, which is often pre-installed in operating systems.

Example Digital Certificate Information:

Certification authority name (e.g., Thawte)

Date of issuance (e.g., 1/1/2009)

Expiration date (e.g., 12/31/2011)

Website address (e.g., mail.google.com)

Organization name (e.g., “Google, Inc.”)

Public key (e.g., an RSA 1024-bit key)

Cryptographic hash function (e.g., SHA-256)

Digital signature

Browser Usage: When a browser indicates a secure site (e.g., “locks the
lock”), it relies on a digital certificate to authenticate the web server’s identity,
helping prevent phishing attacks by confirming the organization’s name on the
certificate.
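A sketch of how one might inspect the fields of a server's certificate (assumptions: Python's standard ssl and socket modules, a live network connection, and the host name from the example above):

```python
import socket
import ssl

host = "mail.google.com"  # example host, as in the certificate above

# Open a TLS connection; the default context verifies the certificate
# chain against the CAs pre-installed on the system.
context = ssl.create_default_context()
with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()

print(cert["issuer"])                       # the certificate authority that signed it
print(cert["subject"])                      # the organization / site the key belongs to
print(cert["notBefore"], cert["notAfter"])  # validity period
```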

4 Implementation and Usability Issues

4.2 Passwords
Usernames and passwords are a common method for authenticating users in
computer systems.

Even systems using cryptographic keys, tokens, or biometrics often add password protection for additional security.

Example: A symmetric cryptosystem's secret key might be encrypted on a hard drive, with its decryption key derived from a password.

Password Security is critical; passwords should ideally be easy to remember and hard to guess.

Easy-to-remember passwords often include words, pet names, or personal dates, but these are easier to guess.

Hard-to-guess passwords should be random sequences from a large set of characters (lowercase, uppercase, numbers, symbols).

Frequent password changes are sometimes required by administrators, increasing security but also making passwords harder to remember.

Dictionary Attack:
Easy-to-remember passwords are vulnerable because they come from a
small set of common possibilities.

Attackers use dictionaries of common passwords, including:

~50,000 English words

~1,000 common first names

~1,000 pet names

~10,000 last names

~36,525 birthdays for people up to 100 years old

A dictionary attack involves systematically trying each word from this dictionary to guess the password.

With modern computers, an attacker can attempt one password per millisecond, completing a dictionary attack on 100,000 entries in about 100 seconds (under 2 minutes).

To counter this, systems may introduce delays after failed attempts or lock
out users after repeated failures.

Secure Passwords:
Secure passwords use a large character set (alphabet) to make dictionary
attacks slower.

Example: An 8-character password using all printable characters on a typical American keyboard has 94^8 = 6,095,689,385,410,816 possibilities (about 6 quadrillion).

Testing one password per nanosecond, an attacker would need around a month to find such a password; testing one password per microsecond would take about 95 years.
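A quick back-of-the-envelope check of those figures (a sketch; the "month" and "95 years" quoted above correspond to the average case of searching roughly half the space):

```python
keyspace = 94 ** 8               # 6,095,689,385,410,816 passwords
expected_guesses = keyspace / 2  # on average, half the space is searched

seconds_per_day = 86_400
seconds_per_year = 365 * seconds_per_day

# One guess per nanosecond (10**9 guesses per second).
days_at_ns = expected_guesses / 1e9 / seconds_per_day
print(f"{days_at_ns:.0f} days")   # ~35 days, i.e., about a month

# One guess per microsecond (10**6 guesses per second).
years_at_us = expected_guesses / 1e6 / seconds_per_year
print(f"{years_at_us:.0f} years") # ~97 years, in the same ballpark as 95
```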

Memorization Tips:

Avoid writing passwords down on post-it notes.

Use a memorable sentence and take the first letter of each word,
capitalizing some and adding special characters.

Example: "Mark took Lisa to Disneyland on March 15" becomes MtLtDoM15.

For extra security, replace characters (e.g., t with +) to create MtL+DoM15, making it stronger and longer-lasting.
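A tiny sketch of that mnemonic recipe (illustrative; the sentence, the digit handling, and the character substitution are just examples, and the substitution here replaces the first t rather than a specific one):

```python
def mnemonic_password(sentence: str, substitutions: dict[str, str]) -> str:
    """Take the first character of each word, keeping a trailing number whole."""
    words = sentence.split()
    first_chars = [w[0] for w in words]
    if words[-1].isdigit():
        first_chars[-1] = words[-1]  # keep "15" intact rather than just "1"
    pwd = "".join(first_chars)
    for old, new in substitutions.items():
        pwd = pwd.replace(old, new, 1)  # substitute one occurrence
    return pwd

print(mnemonic_password("Mark took Lisa to Disneyland on March 15", {}))
# MtLtDoM15
print(mnemonic_password("Mark took Lisa to Disneyland on March 15", {"t": "+"}))
# M+LtDoM15 -- pick substitutions you can remember
```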

4.3 Social Engineering


Social engineering refers to using human insiders to bypass computer
security solutions.

Techniques include burglary, bribery, blackmail, and trickery.

Pretexting:
An attacker pretends to be someone else (e.g., Eve calls a helpdesk claiming
to be Alice).

By providing personal information (like a birthday or pet's name), the attacker convinces the agent to reset Alice's password.

This technique relies on an invented story or pretext.

Baiting:
An attacker uses a “gift” to trick someone into installing malicious software.

Example: Leaving infected USB drives in a company parking lot, hoping an employee will use one and introduce malware into the secure system.

Quid Pro Quo:


The attacker offers “something for something” to gain information.

Example: Bob pretends to be a helpdesk agent and helps Alice with her
computer, then casually asks for her password.

Bob may use caller-ID spoofing to appear as a legitimate helpdesk contact, enhancing his credibility. This technique is also known as vishing (VoIP phishing).

Effectiveness:
Social engineering can bypass strong security measures.

System designers should consider human interaction factors when implementing secure systems.
