Unit 5 Modified
System Protection: Goals of protection, Principles and domain of protection, Access control.
System Security: Introduction, Program threats, System and network threats, Cryptography for security, User authentication,
Implementing security defenses, Firewalling to protect systems and networks, Computer security classification
Access Control
Role-Based Access Control, RBAC, assigns privileges to users, programs, or roles as appropriate, where "privileges" refer to
the right to call certain system calls, or to use certain parameters with those calls.
RBAC supports the principle of least privilege, and reduces the susceptibility to abuse compared to SUID or SGID
programs.
SECURITY
INTRODUCTION
Protection dealt with protecting files and other resources from accidental misuse by cooperating users sharing a system,
generally using the computer for normal purposes.
This chapter ( Security ) deals with protecting systems from deliberate attacks, either internal or external, from individuals
intentionally attempting to steal information, damage information, or otherwise deliberately wreak havoc in some manner.
Some of the most common types of violations include:
Breach of Confidentiality - Theft of private or confidential information, such as credit-card numbers, trade secrets,
patents, secret formulas, manufacturing procedures, medical information, financial information, etc.
Breach of Integrity - Unauthorized modification of data, which may have serious indirect consequences. For example a
popular game or other program's source code could be modified to open up security holes on users' systems before being
released to the public.
Breach of Availability - Unauthorized destruction of data, often just for the "fun" of causing havoc and for
bragging rights. Vandalism of web sites is a common form of this violation.
Theft of Service - Unauthorized use of resources, such as theft of CPU cycles, installation of daemons running an
unauthorized file server, or tapping into the target's telephone or networking services.
Denial of Service, DOS - Preventing legitimate users from using the system, often by overloading and overwhelming
the system with an excess of requests for service.
One common attack is masquerading, in which the attacker pretends to be a trusted third party. A variation of this is the
man-in-the-middle attack, in which the attacker masquerades as both ends of the conversation to two targets.
A replay attack involves repeating a valid transmission. Sometimes this can be the entire attack ( such as repeating a request
for a money transfer ), or other times the content of the original message is replaced with malicious content.
Figure 15.2 - C program with buffer-overflow condition.
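A minimal sketch of the kind of vulnerable program this discussion assumes ( an unchecked strcpy() into a fixed-size stack
buffer; the variable names are illustrative and not necessarily the textbook's exact listing ):

#include <string.h>

#define BUFFER_SIZE 256

int main( int argc, char *argv[] )
{
    char buffer[BUFFER_SIZE];

    if ( argc < 2 )
        return -1;

    /* No length check: an argument longer than BUFFER_SIZE bytes
       overruns the buffer and overwrites the saved return address. */
    strcpy( buffer, argv[1] );
    return 0;
}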
Now that we know how to change where the program returns to by overflowing the buffer, the second step
is to insert some nefarious code, and then get the program to jump to our inserted code.
Our only opportunity to enter code is via the input into the buffer, which means there isn't room for very much.
One of the simplest and most obvious approaches is to insert the code for "exec( /bin/sh )".
The bad code is then padded with as many extra bytes as are needed to overflow the buffer to the correct extent,
and the address of the buffer is inserted into the return-address location.
The resulting block of information is provided as "input", copied into the buffer by the original program, and
then the return statement causes control to jump to the location of the buffer and start executing the code to
launch a shell.
Figure 15.4 - Hypothetical stack frame for Figure 15.2, (a) before and (b) after.
Unfortunately famous hacks such as the buffer-overflow attack are well published and well known, and it
doesn't take a lot of skill to follow the instructions and start attacking lots of systems until the law of averages
eventually works out.
Fortunately modern hardware now includes a bit in the page tables to mark certain pages as non-executable. In
this case the buffer-overflow attack would work up to a point, but as soon as it "returns" to an address in the data
space and tries executing statements there, an exception is thrown, crashing the program.
Viruses
A virus is a fragment of code embedded in an otherwise legitimate program, designed to replicate itself ( by
infecting other programs ) and ( eventually ) wreak havoc.
Viruses are more likely to infect PCs than UNIX or other multi-user systems, because programs in the latter
systems have limited authority to modify other programs or to access critical system structures.
Viruses are delivered to systems in a virus dropper, usually some form of a Trojan Horse, and usually via e-mail
or unsafe downloads.
Viruses take many forms. Figure 15.5 shows the typical operation of a boot-sector virus.
Note that asymmetric encryption is much more computationally expensive than symmetric
encryption, and as such it is not normally used for large transmissions. Asymmetric encryption is
suitable for small messages, authentication, and key distribution, as covered in the following
sections.
Authentication
Authentication involves verifying the identity of the entity that transmitted a message. For
example, if D(Kd)(c) produces a valid message, then we know the sender was in
possession of E(Ke).
This form of authentication can also be used to verify that a message has not been modified.
Authentication revolves around two functions, used for signatures ( or signing ) and
verification:
A signing function, S(Ks), that produces an authenticator, A, from any given message m.
A verification function, V(Kv, m, A), that produces a value of "true" if A was created from m,
and "false" otherwise.
Obviously S and V must both be computationally efficient.
More importantly, it must not be possible to generate a valid authenticator, A, without having
possession of S(Ks).
Furthermore, it must not be possible to divine S(Ks) from the combination of ( m and A ),
since both are sent visibly across networks.
Understanding authenticators begins with an understanding of hash functions, which is the first
step:
Hash functions, H(m), generate a small fixed-size block of data, known as a message digest or
hash value, from any given input data.
For authentication purposes, the hash function must be collision resistant on m. That is, it
should not be reasonably possible to find an alternate message m' such that H(m') = H(m).
Popular hash functions are MD5, which generates a 128-bit message digest, and SHA-1,
which generates a 160-bit digest.
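As a concrete illustration, here is a small sketch of computing a message digest in C, assuming the OpenSSL library is
available and linked with -lcrypto ( SHA-256 is used here instead of the MD5 and SHA-1 functions named above, since
those are no longer considered collision resistant ):

#include <stdio.h>
#include <openssl/evp.h>

int main( void )
{
    const unsigned char msg[] = "pay alice 100 dollars";
    unsigned char digest[EVP_MAX_MD_SIZE];
    unsigned int digest_len = 0;

    /* One-shot hash of the message: digest = H(m) */
    if ( !EVP_Digest( msg, sizeof( msg ) - 1, digest, &digest_len,
                      EVP_sha256(), NULL ) )
        return 1;

    for ( unsigned int i = 0; i < digest_len; i++ )
        printf( "%02x", digest[i] );
    printf( "\n" );
    return 0;
}

Changing even a single byte of the message produces a completely different digest, which is what makes hash values
useful for detecting modified messages.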
Message digests are useful for detecting ( accidentally ) changed messages, but are not useful as
authenticators, because if the hash function is known, then someone could easily change the
message and then generate a new hash value for the modified message.
A message-authentication code, MAC, uses symmetric encryption and decryption of the message
digest, which means that anyone capable of verifying an incoming message could also generate a
new message.
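A MAC can be built in several ways; one common keyed construction is HMAC. A rough sketch in C, again assuming
OpenSSL ( the key and message are placeholders ):

#include <stdio.h>
#include <openssl/hmac.h>

int main( void )
{
    const unsigned char key[] = "shared-secret-key";
    const unsigned char msg[] = "pay alice 100 dollars";
    unsigned char mac[EVP_MAX_MD_SIZE];
    unsigned int mac_len = 0;

    /* mac = HMAC( key, m ); only holders of the key can create it */
    HMAC( EVP_sha256(), key, (int) sizeof( key ) - 1,
          msg, sizeof( msg ) - 1, mac, &mac_len );

    for ( unsigned int i = 0; i < mac_len; i++ )
        printf( "%02x", mac[i] );
    printf( "\n" );
    return 0;
}

The receiver recomputes the MAC with the same shared key and compares, which is exactly why anyone able to verify a
MAC could also forge one, as noted above.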
An asymmetric approach is the digital-signature algorithm, which produces authenticators called
digital signatures. In this case Ks and Kv are separate, Kv is the public key, and it is not practical
to determine S(Ks) from public information. In practice the sender of a message signs it ( produces
a digital signature using S(Ks) ), and the receiver uses V(Kv) to verify that it did indeed come from
a trusted source and that it has not been modified.
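A hedged sketch of signing and verifying with separate keys, assuming OpenSSL 1.1.1 or later and using Ed25519 purely
as an example algorithm ( the private key plays the role of S(Ks), the public key that of V(Kv) ):

#include <stdio.h>
#include <openssl/evp.h>

int main( void )
{
    /* Generate an example key pair ( normally the private key stays secret
       and only the public key is distributed ). */
    EVP_PKEY *key = NULL;
    EVP_PKEY_CTX *kctx = EVP_PKEY_CTX_new_id( EVP_PKEY_ED25519, NULL );
    EVP_PKEY_keygen_init( kctx );
    EVP_PKEY_keygen( kctx, &key );

    const unsigned char msg[] = "software patch 1.2.3";
    unsigned char sig[64];
    size_t sig_len = sizeof( sig );

    /* Sign: A = S(Ks)(m) */
    EVP_MD_CTX *sctx = EVP_MD_CTX_new();
    EVP_DigestSignInit( sctx, NULL, NULL, NULL, key );
    EVP_DigestSign( sctx, sig, &sig_len, msg, sizeof( msg ) - 1 );

    /* Verify: V(Kv, m, A) */
    EVP_MD_CTX *vctx = EVP_MD_CTX_new();
    EVP_DigestVerifyInit( vctx, NULL, NULL, NULL, key );
    int ok = EVP_DigestVerify( vctx, sig, sig_len, msg, sizeof( msg ) - 1 );
    printf( "signature %s\n", ok == 1 ? "verified" : "rejected" );

    EVP_MD_CTX_free( sctx );
    EVP_MD_CTX_free( vctx );
    EVP_PKEY_free( key );
    EVP_PKEY_CTX_free( kctx );
    return 0;
}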
There are three good reasons for having separate algorithms for encryption of messages and authentication of
messages:
1. Authentication algorithms typically require fewer calculations, making verification a
faster operation than encryption.
2. Authenticators are almost always smaller than the messages, improving space
efficiency.
3. Sometimes we want authentication only, and not confidentiality, such as when a vendor
issues a new software patch.
Another use of authentication is non-repudiation, in which a person filling out an electronic form
cannot deny that they were the ones who did so.
Key Distribution
Key distribution with symmetric cryptography is a major problem, because all keys must be kept
secret, and they obviously can't be transmitted over insecure channels.
Another problem with symmetric keys is that a separate key must be maintained and used for each
correspondent with whom one wishes to exchange confidential information.
Asymmetric encryption solves some of these problems, because the public key can be freely
transmitted through any channel, and the private key doesn't need to be transmitted anywhere.
Unfortunately there are still some security concerns regarding the public keys used in
asymmetric encryption. Consider for example a man-in-the-middle attack involving phony
public keys, in which the attacker intercepts the key exchange and substitutes keys of his own.
Passwords
Passwords are the most common form of user authentication. If the user is in possession of the correct password,
then they are considered to have identified themselves.
In theory separate passwords could be implemented for separate activities, such as reading this file, writing that
file, etc. In practice most systems use one password to confirm user identity, and then authorization is based
upon that identification. This is a result of the classic trade-off between security and convenience.
Password Vulnerabilities
Passwords can be guessed.
Intelligent guessing requires knowing something about the intended target specifically, or about people and
commonly used passwords in general.
Brute-force guessing involves trying every word in the dictionary, or every valid combination of
characters. For this reason good passwords should not be in any dictionary ( in any language ),
should be reasonably lengthy, and should use the full range of allowable characters by including
upper and lower case characters, numbers, and special symbols.
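To get a sense of the numbers involved: an 8-character password drawn only from the 26 lowercase letters has
26^8 ≈ 2.1 × 10^11 possible combinations, while one drawn from roughly 74 characters ( upper and lower case letters,
digits, and a dozen or so symbols ) has 74^8 ≈ 9 × 10^14, several thousand times more work for a brute-force guesser.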
"Shoulder surfing" involves looking over people's shoulders while they are typing in their password.
Even if the lurker does not get the entire password, they may get enough clues to narrow it down, especially if
they watch on repeated occasions.
Common courtesy dictates that you look away from the keyboard while someone is typing their password.
Passwords echoed as stars or dots still give clues, because an observer can determine how many characters are
in the password. :-(
"Packet sniffing" involves putting a monitor on a network connection and reading data contained in thosepackets.
SSH encrypts all packets, reducing the effectiveness of packet sniffing.
However you should still never e-mail a password, particularly not with the word "password" in the same
message, or worse yet in the subject header.
Beware of any system that transmits passwords in clear text. ( "Thank you for signing up for XYZ.
Your new account and password information are shown below." ) You probably want to have a spare
throw-away password to give these entities, instead of using the same high-security password that you
use for banking or other confidential uses.
Long, hard-to-remember passwords are often written down, particularly if they are used seldom or must be
changed frequently. Hence there is a security trade-off between passwords that are easily divined and passwords
that get written down. :-(
Passwords can be given away to friends or co-workers, destroying the integrity of the entire user-identification
system.
Most systems have configurable parameters controlling password generation and what constitutes acceptable
passwords.
They may be user chosen or machine generated.
They may have minimum and/or maximum length requirements.
They may need to be changed with a given frequency. ( In extreme cases, for every session. )
A variable-length history can prevent repeating passwords.
More or less stringent checks can be made against password dictionaries.
Encrypted Passwords
Modern systems do not store passwords in clear-text form, and hence there is no mechanism to look up an existing
password.
Rather they are encrypted and stored in that form. When a user enters their password, that too is encrypted, and
if the encrypted versions match, then user authentication passes.
The encryption scheme was once considered safe enough that the encrypted versions were stored in the publicly
readable file "/etc/passwd".
They always encrypted to a 13-character string, so an account could be disabled by putting a string of any
other length into the password field.
Modern computers can try every possible password combination in a reasonably short time, so now the
encrypted passwords are stored in files that are only readable by the super user ( /etc/shadow ). Any
password-related programs run as setuid root to get access to these files.
A random seed is included as part of the password generation process, and stored as part of the encrypted
password. This ensures that if two accounts have the same plain-text password, they will not have the
same encrypted password. However cutting and pasting encrypted passwords from one account to another
will give them the same plain-text passwords.
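A small sketch of salted password hashing using the POSIX crypt() routine, assuming a Linux/glibc system where crypt()
is declared in <crypt.h> and linked with -lcrypt ( the password and salt are placeholders; glibc treats a "$6$" prefix as
SHA-512-based crypt ):

#include <stdio.h>
#include <string.h>
#include <crypt.h>      /* link with -lcrypt */

int main( void )
{
    /* The salt is stored alongside the hash, so the same plain-text
       password yields different hashes on different accounts. */
    const char *first = crypt( "correct horse", "$6$randomsalt$" );
    if ( !first )
        return 1;

    char stored[128];
    snprintf( stored, sizeof( stored ), "%s", first );
    printf( "%s\n", stored );

    /* To check a login attempt, hash the candidate with the stored salt
       ( the full stored hash may be passed as the salt argument )
       and compare the two strings. */
    const char *attempt = crypt( "correct horse", stored );
    printf( "match: %s\n", attempt && strcmp( attempt, stored ) == 0 ? "yes" : "no" );
    return 0;
}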
One-Time Passwords
One-time passwords resist shoulder surfing and other attacks where an observer is able to capture a password
typed in by a user.
These are often based on a challenge and a response. Because the challenge is different each time, the old
response will not be valid for future challenges.
For example, the user may be in possession of a secret function f( x ). The system challenges with
some given value for x, and the user responds with f( x ), which the system can then verify. Since
the challenger gives a different ( random ) x each time, the answer is constantly changing.
A variation uses a map ( e.g. a road map ) as the key. Today's question might be "On what corner
is SEO located?", and tomorrow's question might be "How far is it from Navy Pier to Wrigley
Field?" Obviously "Taylor and Morgan" would not be accepted as a valid answer for the second
question!
Another option is to have some sort of electronic card with a series of constantly changing numbers,
based on the current time. The user enters the current number on the card, which will only be valid for a
few seconds. A two-factor authorization also requires a traditional password in addition to the number on
the card, so others may not use it if it were ever lost or stolen.
A third variation is a code book, or one-time pad. In this scheme a long list of passwords is generated, and
each one is crossed off and cancelled as it is used. Obviously it is important to keep the pad secure.
Biometrics
Biometrics involve a physical characteristic of the user that is not easily forged or duplicated and not likely
to be identical between multiple users.
Fingerprint scanners are getting faster, more accurate, and more economical.
Palm readers can check thermal properties, finger length, etc.
Retinal scanners examine the back of the users' eyes.
Voiceprint analyzers distinguish particular voices.
Difficulties may arise in the event of colds, injuries, or other physiological changes.
IMPLEMENTING SECURITY DEFENSES
Security Policy
A security policy should be well thought-out, agreed upon, and contained in a living document that everyone
adheres to and that is updated as needed.
Examples of contents include how often port scans are run, password requirements, virus detectors, etc.
Vulnerability Assessment
Periodically examine the system to detect vulnerabilities.
Port scanning.
Check for bad passwords.
Look for suid programs.
Unauthorized programs in system directories.
Incorrect permission bits set.
Program checksums / digital signatures which have changed.
Unexpected or hidden network daemons.
New entries in startup scripts, shutdown scripts, cron tables, or other system scripts or configuration files.
New unauthorized accounts.
The government considers a system to be only as secure as its most far-reaching component. Any system
connected to the Internet is inherently less secure than one that is in a sealed room with no external
communications.
Some administrators advocate "security through obscurity", aiming to keep as much information about their
systems hidden as possible, and not announcing any security concerns they come across. Others announce
security concerns from the rooftops, under the theory that the hackers are going to find out anyway, and the
only ones kept in the dark by obscurity are honest administrators who need to get the word out.
Intrusion Detection
Intrusion detection attempts to detect attacks, both successful and unsuccessful. Different
techniques vary along several axes:
The time that detection occurs, either during the attack or after the fact.
The types of information examined to detect the attack(s). Some attacks can only be detected by
analyzing multiple sources of information.
The response to the attack, which may range from alerting an administrator to automatically
stopping the attack ( e.g. killing an offending process ), to tracing back the attack in order to
identify the attacker.
Another approach is to divert the attacker to a honeypot, on a honeynet. The idea behind a
honeypot is a computer running normal services, but which no one uses to do any real
work. Such a system should not see any network traffic under normal conditions, so any traffic
going to or from such a system is by definition suspicious.
Intrusion Detection Systems, IDSs, raise the alarm when they detect an intrusion. Intrusion Detection andPrevention
Systems, IDPs, act as filtering routers, shutting down suspicious traffic when it is detected.
There are two major approaches to detecting problems:
Signature-Based Detection scans network packets, system files, etc. looking for recognizable characteristics of
known attacks, such as text strings for messages or the binary code for
"exec /bin/sh". The problem with this is that it can only detect previously encountered problems for which the
signature is known, requiring the frequent update of signature lists. ( A small sketch of such a scan appears below. )
Anomaly Detection looks for "unusual" patterns of traffic or operation, such as unusually heavy load
or an unusual number of logins late at night.
The benefit of this approach is that it can detect previously unknown attacks, so-called zero-day
attacks.
One problem with this method is characterizing what is "normal" for a given system. One approach
is to benchmark the system, but if the attacker is already present when the benchmarks are made,
then the "unusual" activity is recorded as "the norm."
Another problem is that not all changes in system performance are the result of security attacks.
If the system is bogged down and really slow late on a Thursday night, does that mean that a
hacker has gotten in and is using the system to send out SPAM, or does it simply mean that a CS
385 assignment is due on Friday? :-)
To be effective, anomaly detectors must have a very low false alarm ( false positive ) rate, lest the
warnings get ignored, as well as a low false negative rate in which attacks are missed.
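As a toy illustration of the signature-based approach described above, here is a sketch that searches a data buffer for one
hard-coded signature string ( real products use large, frequently updated signature databases and much faster matching ):

#include <stdio.h>
#include <string.h>

/* Return 1 if the signature occurs anywhere in the data, else 0. */
static int contains_signature( const unsigned char *data, size_t len,
                               const char *sig )
{
    size_t sig_len = strlen( sig );
    if ( sig_len == 0 || sig_len > len )
        return 0;
    for ( size_t i = 0; i + sig_len <= len; i++ )
        if ( memcmp( data + i, sig, sig_len ) == 0 )
            return 1;
    return 0;
}

int main( void )
{
    const unsigned char packet[] = "GET /index.html ... exec /bin/sh ...";
    if ( contains_signature( packet, sizeof( packet ) - 1, "exec /bin/sh" ) )
        printf( "ALERT: known attack signature found\n" );
    return 0;
}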
Virus Protection
Modern anti-virus programs are basically signature-based detection systems, which also have the ability ( in
some cases ) to disinfect the affected files and return them to their original condition.
Both viruses and anti-virus programs are rapidly evolving. For example viruses now commonly mutate
every time they propagate, and so anti-virus programs look for families of related signatures rather than
specific ones.
Some antivirus programs look for anomalies, such as an executable program being opened for writing
( other than by a compiler ).
Avoiding bootleg, free, and shared software can help reduce the chance of catching a virus, but even
shrink-wrapped official software has on occasion been infected by disgruntled factory workers.
Some virus detectors will run suspicious programs in a sandbox, an isolated and secure area of the system which
mimics the real system.
Rich Text Format, RTF, files cannot carry macros, and hence cannot carry Word macro viruses.
Known safe programs ( e.g. right after a fresh install or after a thorough examination ) can be digitally signed,
and periodically the files can be re-verified against the stored digital signatures. ( Which should be kept secure,
such as on an off-line, write-only medium. )
Auditing, Accounting, and Logging
Auditing, accounting, and logging records can also be used to detect anomalous behavior.
Some of the kinds of things that can be logged include authentication failures and successes, logins, running of
suid or sgid programs, network accesses, system calls, etc. In extreme cases almost every keystroke and electron
that moves can be logged for future analysis. ( Note that on the flip side, all this detailed logging can also be
used to analyze system performance. The down side is that the logging also affects system performance
( negatively! ), and so a Heisenberg effect applies. )
"The Cuckoo's Egg" tells the story of how Cliff Stoll detected one of the early UNIX break ins when henoticed
anomalies in the accounting records on a computer system being used by physics researchers.
Tripwire Filesystem ( New Sidebar )
The tripwire filesystem monitors files and directories for changes, on the assumption that most intrusions eventually
result in some sort of undesired or unexpected file changes.
The tw.config file indicates what directories are to be monitored, as well as what properties of each file are to
be recorded. ( E.g. one may choose to monitor permission and content changes, but not worry about read access
times. )
When first run, the selected properties for all monitored files are recorded in a database. Hash codes are used to
monitor file contents for changes.
Subsequent runs report any changes to the recorded data, including hash code changes, and any newly created or
missing files in the monitored directories.
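A minimal sketch of the hashing step, assuming OpenSSL: compute a file's SHA-256 digest so it can be stored in a baseline
database and compared on later runs ( the database handling itself is omitted ):

#include <stdio.h>
#include <openssl/evp.h>

int main( int argc, char *argv[] )
{
    if ( argc < 2 ) {
        fprintf( stderr, "usage: %s file\n", argv[0] );
        return 1;
    }

    FILE *f = fopen( argv[1], "rb" );
    if ( !f ) { perror( "fopen" ); return 1; }

    /* Hash the file contents incrementally. */
    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    EVP_DigestInit_ex( ctx, EVP_sha256(), NULL );

    unsigned char buf[4096];
    size_t n;
    while ( ( n = fread( buf, 1, sizeof( buf ), f ) ) > 0 )
        EVP_DigestUpdate( ctx, buf, n );
    fclose( f );

    unsigned char digest[EVP_MAX_MD_SIZE];
    unsigned int digest_len = 0;
    EVP_DigestFinal_ex( ctx, digest, &digest_len );
    EVP_MD_CTX_free( ctx );

    /* Print "digest  filename", ready to be saved as the baseline
       or diffed against a previous run. */
    for ( unsigned int i = 0; i < digest_len; i++ )
        printf( "%02x", digest[i] );
    printf( "  %s\n", argv[1] );
    return 0;
}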
For full security it is necessary to also protect the tripwire system itself, most importantly the database of
recorded file properties. This could be saved on some external or write-only location, but that makes it harder to
change the database when legitimate changes are made.
It is difficult to monitor files that are supposed to change, such as log files. The best tripwire can do in this case
is to watch for anomalies, such as a log file that shrinks in size.
Free and commercial versions are available at http://tripwire.org and http://tripwire.com.
FIREWALLING TO PROTECT SYSTEMS AND NETWORKS
Firewalls are devices ( or sometimes software ) that sit on the border between two security domains and monitor/log activity
between them, sometimes restricting the traffic that can pass between them based on certain criteria.
For example a firewall router may allow HTTP requests to pass through to a web server inside a company domain while not
allowing telnet, ssh, or other traffic to pass through.
A common architecture is to establish a de-militarized zone, DMZ, which sort of sits "between" the company domain and
the outside world, as shown below. Company computers can reach either the DMZ or the outside world, but outside
computers can only reach the DMZ. Perhaps most importantly, the DMZ cannot reach any of the other company computers,
so even if the DMZ is breached, the attacker cannot get to the rest of the company network. ( In some cases the DMZ may
have limited access to company computers, such as a web server on the DMZ that needs to query a database on one of the
other company computers. )
Figure 15.10 - Domain separation via firewall.
Firewalls themselves need to be resistant to attacks, and unfortunately have several vulnerabilities:
Tunneling, which involves encapsulating forbidden traffic inside of packets that are allowed.
Denial of service attacks addressed at the firewall itself.
Spoofing, in which an unauthorized host sends packets to the firewall with the return address of an authorized host.
In addition to the common firewalls protecting a company's internal network from the outside world, there are also some
specialized forms of firewalls that have been recently developed:
A personal firewall is a software layer that protects an individual computer. It may be a part of the operating
system or a separate software package.
An application proxy firewall understands the protocols of a particular service and acts as a stand-in ( and relay ) for
the particular service. For example, an SMTP proxy firewall would accept SMTP requests from the
outside world, examine them for security concerns, and forward only the "safe" ones on to the real SMTP server behind
the firewall.
XML firewalls examine XML packets only, and reject ill-formed packets. Similar firewalls exist for other
specific protocols.
System call firewalls guard the boundary between user mode and system mode, and reject any system calls that violate
security policies.
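One concrete ( Linux-specific ) mechanism in this spirit is seccomp; the sketch below, assuming a Linux system, uses
strict mode to confine a process to a tiny fixed set of system calls ( real system-call firewalls enforce far richer,
per-policy filters ):

#include <stdio.h>
#include <unistd.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <linux/seccomp.h>

int main( void )
{
    /* After this call the process may only use read(), write(),
       exit() and sigreturn(); any other system call kills it. */
    if ( prctl( PR_SET_SECCOMP, SECCOMP_MODE_STRICT ) != 0 ) {
        perror( "prctl" );
        return 1;
    }

    const char msg[] = "still allowed to write\n";
    write( STDOUT_FILENO, msg, sizeof( msg ) - 1 );

    /* An open() here, for example, would violate the policy and the
       kernel would kill the process.  Exit with the raw exit syscall,
       since the usual exit path may use calls outside the whitelist. */
    syscall( SYS_exit, 0 );
    return 0;
}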
COMPUTER-SECURITY CLASSIFICATIONS
No computer system can be 100% secure, and attempts to make it so can quickly make it unusable.
However one can establish a level of trust to which one feels "safe" using a given computer system for particular security
needs.
The U.S. Department of Defense's "Trusted Computer System Evaluation Criteria" defines four broad levels of trust, and
sub-levels in some cases:
Level D is the least trustworthy, and encompasses all systems that do not meet any of the more stringent criteria. DOS
and Windows 3.1 fall into level D, which has no user identification or authorization, and anyone who sits down has full
access and control over the machine.
Level C1 includes user identification and authorization, and some means of controlling what users are allowed to
access what files. It is designed for use by a group of mostly cooperating users, and describes most common UNIX
systems.
Level C2 adds individual-level control and monitoring. For example file access control can be allowed or denied on
a per-individual basis, and the system administrator can monitor and log the activities of specific individuals.
Another restriction is that when one user uses a system resource and then returns it back to the system, another user
who uses the same resource later cannot read any of the information that the first user stored there. ( I.e. buffers, etc.
are wiped out between users, and are not left full of old contents. ) Some special secure versions of UNIX have been
certified for C2 security levels, such as SCO.
Level B adds sensitivity labels on each object in the system, such as "secret", "top secret", and "confidential".
Individual users have different clearance levels, which controls which objects they are able to access. All
human-readable documents are labeled at both the top and bottom with the sensitivity level of the file.
Level B2 extends sensitivity labels to all system resources, including devices. B2 also supports covert channels and the
auditing of events that could exploit covert channels.
B3 allows creation of access-control lists that denote users NOT given access to specific objects.
Class A is the highest level of security. Architecturally it is the same as B3, but it is developed using formal methods
which can be used to prove that the system meets all requirements and cannot have any possible bugs or other
vulnerabilities. Systems in class A are developed by trusted personnel in secure facilities.
These classifications determine what a system can implement, but it is up to security policy to determine how they
are implemented in practice. These systems and policies can be reviewed and certified by trusted organizations, such
as the National Computer Security Center. Other standards may dictate physical protections and other issues.