# MCSE-103 Advanced Computer Architecture

The document provides a comprehensive overview of parallel computing structures, including Flynn's and Handler's classifications, pipelined and vector processors, data and control hazards, SIMD multiprocessor structures, interconnection networks, and parallel algorithms. It discusses various types of processors, their characteristics, applications, and challenges, as well as techniques for scheduling and load balancing in multiprocessor systems. Additionally, it references key literature on advanced computer architecture and parallel processing.


### Unit 1: Flynn's and Handler's Classification of Parallel Computing Structures

#### Flynn's Classification of Parallel Processing

- **SISD (Single Instruction stream, Single Data stream)**:
  - Traditional uniprocessor systems with a single control unit and a single processing element.
  - **Characteristics**: Sequential execution of instructions; common in basic personal computers and simple microprocessors.
  - **Examples**: Basic personal computers, simple microprocessors such as the Intel 8086.
- **SIMD (Single Instruction stream, Multiple Data streams)**:
  - Multiple processing elements perform the same operation on different data points simultaneously.
  - **Characteristics**: Efficient for tasks with high data parallelism; common in vector processors and modern GPUs.
  - **Examples**: Vector processors such as the Cray-1, GPUs used in graphics and AI applications.
- **MISD (Multiple Instruction streams, Single Data stream)**:
  - Multiple instructions operate on a single data stream.
  - **Characteristics**: Rare and not widely used in practical systems; proposed mainly for fault-tolerant designs.
  - **Examples**: Hypothetical systems for redundant data processing.
- **MIMD (Multiple Instruction streams, Multiple Data streams)**:
  - Multiple autonomous processors execute different instructions on different data.
  - **Characteristics**: Supports a wide range of applications and flexible task execution; common in multicore processors and distributed systems.
  - **Examples**: Multicore processors, clusters, and distributed systems such as Google's cloud infrastructure. (The sketch after this list contrasts the SISD and SIMD styles in code.)
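
The SISD/SIMD distinction is easiest to see in code. Below is a minimal sketch, not from the source: the explicit Python loop mimics SISD-style element-at-a-time execution, while the NumPy expression stands in for a single SIMD operation applied across a whole array (NumPy may itself use SIMD instructions internally).

```python
import numpy as np

data = np.arange(1000, dtype=np.float64)

def scale_sisd(a, factor):
    # SISD style: one instruction stream touches one data element at a time.
    out = np.empty_like(a)
    for i in range(len(a)):
        out[i] = a[i] * factor
    return out

def scale_simd(a, factor):
    # SIMD style: one (vectorized) operation is applied to many elements at once.
    return a * factor

assert np.allclose(scale_sisd(data, 2.0), scale_simd(data, 2.0))
```
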
#### Handler's Classification of Parallel Computing Structures

- **Handler's Taxonomy (Erlangen Classification System)**:
  - Describes a machine by its degree of parallelism at three hardware levels, written as a triple T = <K, D, W>:
    - **K**: number of program control units (independent instruction streams).
    - **D**: number of ALUs (processing elements) driven by each control unit.
    - **W**: word length, i.e. the number of bits each ALU handles in parallel.
  - **Example**: a conventional 32-bit uniprocessor is <1, 1, 32>, while ILLIAC IV, with one control unit driving 64 processing elements of 64 bits each, is <1, 64, 64>.
  - The notation can be extended with multipliers at each level to capture pipelining.

### Pipelined and Vector Processors

#### Pipelined Processors

- **Definition**: A technique in which the phases of multiple instructions are overlapped to increase throughput.
- **Stages of Pipelining**:
  - **Fetch**: Retrieve the instruction from memory.
  - **Decode**: Interpret the instruction and fetch operands.
  - **Execute**: Perform the operation.
  - **Memory Access**: Read/write data from/to memory.
  - **Write-Back**: Store the result back in a register.
- **Advantages** (quantified in the timing sketch after this list):
  - **Increased Throughput**: Multiple instructions are processed simultaneously.
  - **Efficiency**: Each stage operates in parallel, improving resource utilization.
  - **Scalability**: Additional stages can be added to improve performance.
- **Challenges**:
  - **Data Hazards**: Occur when instructions depend on the results of previous instructions.
  - **Control Hazards**: Arise from branch instructions altering the flow of execution.
  - **Structural Hazards**: Happen when hardware resources are insufficient to support all stages.
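
The throughput gain is captured by the classic timing model: with k stages and n instructions, a non-pipelined machine needs n·k cycles while an ideal pipeline needs k + (n − 1), giving a speedup of nk / (k + n − 1) that approaches k for large n. A minimal sketch, assuming one cycle per stage and no stalls:

```python
def pipeline_cycles(n_instructions, k_stages):
    non_pipelined = n_instructions * k_stages      # every instruction runs all stages serially
    pipelined = k_stages + (n_instructions - 1)    # fill the pipe once, then one result per cycle
    return non_pipelined, pipelined, non_pipelined / pipelined

# With 5 stages and 1000 instructions the speedup approaches the stage count:
print(pipeline_cycles(1000, 5))   # (5000, 1004, ~4.98)
```
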
#### Vector Processors

- **Definition**: Processors that perform operations on entire vectors (arrays) of data simultaneously, rather than one scalar operation at a time (see the SAXPY sketch after this list).
- **Characteristics**:
  - **Single Instruction, Multiple Data (SIMD)**: Executes the same instruction on multiple data points.
  - **High Throughput**: Efficiently handles data-parallel tasks.
  - **Applications**: Scientific computing, engineering simulations, graphics processing.
- **Examples**:
  - **Cray-1**: One of the first vector processors, used for scientific calculations.
  - **Modern GPUs**: Utilize vector processing for graphics rendering and AI computations.
- **Advantages**:
  - **Performance**: Significant speedup for tasks with high data parallelism.
  - **Resource Utilization**: Maximizes use of processor resources.
  - **Scalability**: Can handle large datasets efficiently.
- **Challenges**:
  - **Programming Complexity**: Requires specialized knowledge and techniques.
  - **Data Alignment**: Data must be properly aligned in memory for efficient access.
  - **Hardware Cost**: Vector processing units are expensive.
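
A minimal sketch, not from the source, of SAXPY (y = a·x + y), the canonical vector operation: one whole-array multiply-add replaces an entire scalar loop. NumPy arrays stand in for vector registers here.

```python
import numpy as np

def saxpy_scalar(a, x, y):
    # Scalar processor: one multiply-add per loop iteration.
    for i in range(len(x)):
        y[i] = a * x[i] + y[i]
    return y

def saxpy_vector(a, x, y):
    # Vector processor: the whole array is handled by one vector operation.
    return a * x + y

x = np.ones(8)
y = np.arange(8.0)
assert np.allclose(saxpy_scalar(2.0, x, y.copy()), saxpy_vector(2.0, x, y))
```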

### Unit 2: Data and Control Hazards

#### Data Hazards

- **Definition**: Occur when instructions that exhibit data dependence modify data in different stages of a pipeline.
- **Types of Data Hazards** (see the classifier sketch after this list):
  - **Read After Write (RAW)**: A subsequent instruction reads a register that a previous instruction writes to.
  - **Write After Read (WAR)**: A subsequent instruction writes to a register that a previous instruction reads from.
  - **Write After Write (WAW)**: Two instructions write to the same register.
- **Resolution Methods**:
  - **Pipeline Interlocking**: Stalls subsequent instructions until data dependencies are resolved.
  - **Operand Forwarding**: Bypasses data from one pipeline stage to another to resolve RAW hazards.
  - **Register Renaming**: Dynamically allocates physical registers to avoid WAW and WAR hazards.
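
A minimal, hypothetical sketch of how the three hazard types can be detected from the register sets that two in-order instructions read and write:

```python
def classify_hazards(first, second):
    """first/second: dicts like {"reads": {...}, "writes": {...}} for two
    instructions in program order."""
    hazards = []
    if second["reads"] & first["writes"]:
        hazards.append("RAW")   # true dependence: second reads what first writes
    if second["writes"] & first["reads"]:
        hazards.append("WAR")   # anti-dependence
    if second["writes"] & first["writes"]:
        hazards.append("WAW")   # output dependence
    return hazards

i1 = {"reads": {"r2", "r3"}, "writes": {"r1"}}   # r1 = r2 + r3
i2 = {"reads": {"r1", "r4"}, "writes": {"r2"}}   # r2 = r1 * r4
print(classify_hazards(i1, i2))                  # ['RAW', 'WAR']
```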

#### Control Hazards

- **Definition**: Occur when branch instructions change the program counter, disrupting the sequential flow of instructions.
- **Resolution Methods** (a branch-predictor sketch follows this list):
  - **Branch Prediction**: Predicts the outcome of a branch so instructions can be prefetched along the likely path.
  - **Delayed Branching**: Fills the slots after a branch with useful instructions so the pipeline keeps working until the branch decision is made.
  - **Speculative Execution**: Executes instructions before the branch decision is finalized, rolling back if the prediction is wrong.
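
A minimal sketch, not from the source, of a 2-bit saturating-counter branch predictor, the textbook dynamic branch-prediction scheme: two wrong guesses in a row are needed to flip the prediction.

```python
class TwoBitPredictor:
    def __init__(self):
        self.state = 2  # 0-1 predict not-taken, 2-3 predict taken; start "weakly taken"

    def predict(self):
        return self.state >= 2          # True means "predict taken"

    def update(self, taken):
        # Move toward the observed outcome, saturating at 0 and 3.
        self.state = min(3, self.state + 1) if taken else max(0, self.state - 1)

p = TwoBitPredictor()
for taken in [True, True, False, True, True]:   # a mostly-taken loop branch
    print("predicted", p.predict(), "actual", taken)
    p.update(taken)
```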

### SIMD Multiprocessor Structures

- **Definition**: Single Instruction stream, Multiple Data streams (SIMD) architectures execute the same instruction across multiple processing elements.
- **Characteristics**:
  - **Parallel Data Processing**: Suitable for tasks with high data parallelism.
  - **Simplified Control**: A single control unit broadcasts instructions to all processing elements (see the sketch after this list).
- **Applications**:
  - **Graphics Processing**: GPUs use SIMD for rendering images.
  - **Scientific Computing**: Matrix operations, simulations, and other data-parallel tasks.
- **Examples**:
  - **GPUs**: Modern graphics processing units are a common example of SIMD architectures.
  - **Vector Processors**: Early examples such as the Cray-1.
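
A minimal, hypothetical sketch of the SIMD control model: one control unit broadcasts each instruction, and every processing element (PE) applies it in lockstep to its own local data.

```python
instructions = [("add", 10), ("mul", 2)]       # the single instruction stream
local_data = [1, 2, 3, 4]                      # one datum per processing element

for op, operand in instructions:               # the control unit broadcasts...
    for pe in range(len(local_data)):          # ...and all PEs execute in lockstep
        if op == "add":
            local_data[pe] += operand
        elif op == "mul":
            local_data[pe] *= operand

print(local_data)   # [22, 24, 26, 28]
```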

### Unit 3: Interconnection Networks

#### Definition and Importance

- **Interconnection Networks**: Networks that connect processors to memory modules and I/O devices, enabling communication in multiprocessor systems.
- **Types of Interconnection Networks**:
  - **Bus-Based Networks**:
    - **Single Bus**: Simple and cost-effective, but prone to bottlenecks.
    - **Multiple Bus**: Increases bandwidth but adds complexity.
  - **Crossbar Switches**:
    - **Characteristics**: Direct, non-blocking connections between inputs and outputs.
    - **Advantages**: High performance; every input can reach a distinct output simultaneously.
    - **Disadvantages**: Expensive and complex to implement; the crosspoint count grows as N², which limits scalability.
  - **Multistage Networks** (a routing sketch follows this list):
    - **Characteristics**: Multiple stages of switches, allowing many-to-many connections.
    - **Examples**: Omega network, Banyan network.
    - **Advantages**: High scalability, efficient communication.
    - **Disadvantages**: Latency increases with the number of stages.
  - **Direct Networks**:
    - **Topology**: Mesh, torus, hypercube.
    - **Advantages**: Lower latency, scalable.
    - **Disadvantages**: Complex routing that requires sophisticated algorithms.
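
A minimal sketch, assuming N is a power of two, of destination-tag routing in an Omega network: log2(N) shuffle-exchange stages, where stage i uses bit i (most significant first) of the destination address to pick the upper or lower output of the 2x2 switch.

```python
def omega_route(src, dst, n):
    """Return the port position after each stage when routing src -> dst."""
    stages = n.bit_length() - 1                                # log2(n) stages
    pos, path = src, [src]
    for i in range(stages):
        pos = ((pos << 1) | (pos >> (stages - 1))) & (n - 1)   # perfect shuffle (rotate left)
        bit = (dst >> (stages - 1 - i)) & 1                    # destination-tag bit for this stage
        pos = (pos & ~1) | bit                                 # exchange inside the 2x2 switch
        path.append(pos)
    return path

print(omega_route(0b010, 0b110, 8))   # [2, 5, 3, 6] -- arrives at destination 6
```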

### Parallel Algorithms for Array Processors

#### Definition and Characteristics

- **Array Processors**: Processors arranged in a grid, performing the same operation on different data points.
- **Parallel Algorithms**: Designed to execute efficiently on parallel computing architectures.
- **Examples** (see the matrix-multiplication sketch after this list):
  - **Matrix Multiplication**: Handled efficiently by array processors.
  - **Fourier Transforms**: Parallel algorithms improve speed and efficiency.
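
A minimal sketch, not from the source, of why matrix multiplication maps well onto array processors: each block of rows of C = A·B can be computed independently. Python threads only illustrate the decomposition here; real speedups come from independent hardware elements or processes.

```python
from concurrent.futures import ThreadPoolExecutor

def matmul_rows(A, B, rows):
    """Compute the given rows of A @ B."""
    n, k = len(B[0]), len(B)
    return [[sum(A[r][j] * B[j][c] for j in range(k)) for c in range(n)]
            for r in rows]

def parallel_matmul(A, B, n_workers=2):
    chunks = [range(i, len(A), n_workers) for i in range(n_workers)]  # strided row blocks
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        results = list(pool.map(lambda rs: (rs, matmul_rows(A, B, rs)), chunks))
    C = [None] * len(A)
    for rows, block in results:                 # reassemble rows in order
        for r, row in zip(rows, block):
            C[r] = row
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(parallel_matmul(A, B))   # [[19, 22], [43, 50]]
```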

### Search Algorithms in Parallel Processing

#### Definition and Importance

- **Search Algorithms**: Techniques for locating data within a dataset.
- **Parallel Search**:
  - **Characteristics**: Divide-and-conquer approach, distributing the search task among multiple processors (see the sketch after this list).
  - **Examples**: Parallel versions of linear search and binary search.
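
A minimal, hypothetical sketch of parallel linear search: the dataset is split into chunks and each worker scans its own chunk independently.

```python
from concurrent.futures import ThreadPoolExecutor

def search_chunk(chunk, target, offset):
    for i, v in enumerate(chunk):
        if v == target:
            return offset + i        # global index of the match
    return -1

def parallel_search(data, target, n_workers=4):
    size = (len(data) + n_workers - 1) // n_workers
    tasks = [(data[i:i + size], target, i) for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        for result in pool.map(lambda t: search_chunk(*t), tasks):
            if result != -1:
                return result
    return -1

print(parallel_search(list(range(100)), 73))   # 73
```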

### MIMD Multiprocessor Systems

#### Definition and Characteristics

- **MIMD (Multiple Instruction streams, Multiple Data streams)**: Multiple processors execute different instructions on different data streams simultaneously (see the sketch after this list).
- **Applications**: Suitable for a wide range of applications, from scientific computing to databases.
- **Examples**:
  - **Multicore Processors**: Each core can execute different instructions.
  - **Clusters**: Networked computers working together on different tasks.
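
A minimal sketch, not from the source, of the MIMD idea: independent workers run *different* code on *different* data at the same time.

```python
from concurrent.futures import ThreadPoolExecutor

def word_count(text):        # one instruction stream...
    return len(text.split())

def checksum(numbers):       # ...and a completely different one
    return sum(numbers) % 255

with ThreadPoolExecutor() as pool:
    f1 = pool.submit(word_count, "multiple instruction multiple data")
    f2 = pool.submit(checksum, range(1000))
    print(f1.result(), f2.result())   # 4 210
```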


### Unit 4: Scheduling and Load Balancing in Multiprocessor Systems

#### Scheduling in Multiprocessor Systems

- **Definition**: Allocating tasks to processors in a multiprocessor system to optimize performance.
- **Techniques**:
  - **Static Scheduling**: Tasks are allocated before execution begins.
  - **Dynamic Scheduling**: Tasks are allocated during execution based on the current system state.
- **Algorithms** (compared in the sketch after this list):
  - **Round Robin**: Simple and fair, but may not be optimal.
  - **Priority Scheduling**: Tasks are prioritized by criteria such as urgency or importance.
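
A minimal, hypothetical sketch comparing two static policies for assigning tasks (with run-time estimates) to processors: round robin deals tasks out in turn, while a greedy longest-task-first policy sends each task to the least-loaded processor.

```python
import heapq

tasks = [("t1", 4), ("t2", 2), ("t3", 8), ("t4", 1), ("t5", 5)]

def round_robin(tasks, n_procs):
    procs = [[] for _ in range(n_procs)]
    for i, task in enumerate(tasks):
        procs[i % n_procs].append(task)        # deal tasks out in turn
    return procs

def longest_first(tasks, n_procs):
    """Greedy priority scheduling: longest task to the least-loaded processor."""
    heap = [(0, i, []) for i in range(n_procs)]            # (load, proc id, assigned)
    heapq.heapify(heap)
    for task in sorted(tasks, key=lambda t: -t[1]):
        load, i, assigned = heapq.heappop(heap)
        assigned.append(task)
        heapq.heappush(heap, (load + task[1], i, assigned))
    return [entry[2] for entry in sorted(heap, key=lambda e: e[1])]

print(round_robin(tasks, 2))    # loads 17 vs 3: fair in count, uneven in work
print(longest_first(tasks, 2))  # loads 10 vs 10: balanced
```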

#### Load Balancing in Multiprocessor Systems

- **Definition**: Distributing workload evenly across processors to avoid bottlenecks and optimize performance.
- **Techniques**:
  - **Static Load Balancing**: Pre-determined distribution of tasks.
  - **Dynamic Load Balancing**: Real-time distribution based on current load.
- **Challenges**:
  - **Heterogeneity**: Different processors may have different capabilities.
  - **Communication Overhead**: Balancing tasks between processors often involves communication, which can introduce delays and overhead.
  - **Task Granularity**: Task size affects balancing efficiency. Fine-grained tasks allow more flexible distribution but may increase communication overhead, whereas coarse-grained tasks reduce communication needs but may lead to imbalance.
- **Techniques for Dynamic Load Balancing**:
  - **Task Migration**: Moving tasks from overloaded processors to underloaded ones.
  - **Work Stealing**: Idle processors "steal" tasks from busy processors (see the sketch after this list).
  - **Load Estimation**: Continuously monitoring processor loads to inform load-balancing decisions.
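
A minimal sketch, not from the source, of work stealing with per-worker deques: each worker pops its own newest task, and an idle worker steals the oldest task from a busy worker's queue. Real schedulers run this concurrently; the sequential loop below only illustrates the policy.

```python
import collections
import random

def work_stealing(task_lists):
    deques = [collections.deque(ts) for ts in task_lists]
    done = [[] for _ in deques]
    while any(deques):
        for w, dq in enumerate(deques):
            if dq:
                done[w].append(dq.pop())              # run own newest task
            else:
                victims = [d for d in deques if len(d) > 1]
                if victims:                           # idle: steal a victim's oldest task
                    done[w].append(random.choice(victims).popleft())
    return done

print(work_stealing([["a1", "a2", "a3", "a4"], []]))
# e.g. [['a4', 'a3', 'a2'], ['a1']] -- the idle worker stole a task
```
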
#### Multiprocessing Control and Algorithms

- **Multiprocessing Control**:
  - **Centralized Control**: A single control unit manages all processors. Simplifies design but can become a bottleneck.
  - **Decentralized Control**: Each processor has its own control unit. Increases complexity but improves scalability.
  - **Synchronization Mechanisms**: Ensure that processors operate in a coordinated manner. Includes barriers, locks, semaphores, and monitors (a barrier sketch follows below).
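
A minimal, hypothetical sketch of barrier synchronization with Python threads: no thread proceeds to phase 2 until every thread has finished phase 1.

```python
import threading

n_workers = 4
barrier = threading.Barrier(n_workers)

def worker(wid):
    print(f"worker {wid}: phase 1 done")
    barrier.wait()                      # block until all workers arrive
    print(f"worker {wid}: phase 2 starts")

threads = [threading.Thread(target=worker, args=(w,)) for w in range(n_workers)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```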

- **Algorithms for Multiprocessing**:
  - **Parallel Sorting Algorithms**:
    - **Bitonic Sort**: Well suited to parallel execution; works by repeatedly merging subsequences into a bitonic sequence and then sorting (see the sketch at the end of this unit).
    - **Parallel QuickSort**: Divides the dataset and sorts the partitions in parallel.
  - **Parallel Search Algorithms**:
    - **Parallel Depth-First Search (DFS)**: Suitable for applications such as parallel tree traversal.
    - **Parallel Breadth-First Search (BFS)**: Often used in graph processing.
  - **Matrix Multiplication**:
    - **Strassen's Algorithm**: Breaks matrix multiplication into smaller multiplications that can be done in parallel.
    - **Cannon's Algorithm**: Designed for distributed-memory systems, optimizing communication between processors.
  - **Load Balancing Algorithms**:
    - **Round Robin**: Simple but may not be optimal.
    - **Dynamic Scheduling**: Allocates tasks based on the current system state.
    - **Work Stealing**: Idle processors dynamically take work from busy processors.
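
A minimal sketch of bitonic sort, assuming the input length is a power of two. Every compare-exchange within a merge stage is independent of the others, which is what makes the sorting network attractive for parallel hardware; the recursion below runs them sequentially for clarity.

```python
def bitonic_sort(a, ascending=True):
    if len(a) <= 1:
        return a
    half = len(a) // 2
    first = bitonic_sort(a[:half], True)      # build an ascending half...
    second = bitonic_sort(a[half:], False)    # ...and a descending half
    return bitonic_merge(first + second, ascending)

def bitonic_merge(a, ascending):
    if len(a) <= 1:
        return a
    half = len(a) // 2
    for i in range(half):                     # these compare-exchanges can all run in parallel
        if (a[i] > a[i + half]) == ascending:
            a[i], a[i + half] = a[i + half], a[i]
    return (bitonic_merge(a[:half], ascending) +
            bitonic_merge(a[half:], ascending))

print(bitonic_sort([7, 3, 6, 1, 8, 2, 5, 4]))   # [1, 2, 3, 4, 5, 6, 7, 8]
```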

## Reference Books

### Advanced Computer Architecture - Parthsarthy, Cengage (Thomson)

- **Flynn's and Handler's Classification**: Discusses the major classifications of parallel computing structures.
- **Pipelined and Vector Processors**: Detailed explanation of pipelining stages and vector processing units.
- **Data and Control Hazards**: Methods to identify and resolve hazards in pipelines.
- **SIMD Multiprocessor Structures**: Characteristics and applications of SIMD architectures.
- **Interconnection Networks**: Different types of networks and their performance implications.
- **Parallel Algorithms**: Algorithms designed for parallel execution, including sorting and matrix multiplication.
- **Load Balancing**: Techniques to distribute workload evenly across processors.

### Computer Architecture and Organization - John P. Hayes, McGraw-Hill

- **Fundamentals of Parallel Processing**: Basic concepts and the importance of parallel processing.
- **Pipelining and Performance**: How pipelining improves performance, and the associated challenges.
- **Hazards in Pipelines**: The types of hazards and methods to mitigate them.
- **Multiprocessor Systems**: Differences between multiprocessor and multicomputer systems.
- **Scheduling Algorithms**: Techniques to schedule tasks efficiently in multiprocessor systems.
- **Memory Architectures**: UMA, NUMA, and other memory architectures.

### Computer Architecture and Parallel Processing - Hwang and Briggs, TMH

- **Classification of Parallel Computers**: Detailed coverage of Flynn's and Handler's classifications.
- **Pipelined and Vector Processors**: Comprehensive study of pipelining and vector processing.
- **Data Dependence and Hazards**: In-depth analysis of data and control hazards in pipelines.
- **Interconnection Networks**: Network topologies and their applications in multiprocessor systems.
- **Parallel Algorithms**: Algorithms for array processors, parallel search, and matrix operations.
- **Load Balancing and Scheduling**: Strategies and algorithms for balancing load and scheduling tasks in multiprocessor systems.
- **Control Mechanisms**: Centralized vs. decentralized control, and synchronization mechanisms in multiprocessor environments.