OOP Basics Discussion

Object-oriented programming in C++: the basic concepts of OOP, their use and importance.

Uploaded by

Zara Noor
© All Rights Reserved

Procedural vs. Non-Procedural Languages: A Closer Look

In the programming world, there are two main ways to tell a computer what to do:
procedurally and non-procedurally. Each approach has its own strengths, weaknesses,
and ideal uses. Let's delve deeper into these paradigms:

Procedural Languages:

Imagine you're giving instructions to a friend on how to bake a cake. You'd likely give
them step-by-step instructions: preheat the oven, mix the ingredients, pour the batter,
bake for a specific time, etc. This is essentially how procedural languages work.

 Focus on "how": These languages explicitly tell the computer the exact sequence of
steps to take to achieve a desired outcome.
 Imperative style: They rely heavily on commands that change the state of the program
(e.g., assigning values to variables, performing calculations).
 Sequential execution: Code is typically written in lines that are executed one after the
other, in the defined order.
 Control flow structures: Loops and conditional statements are used to control the flow of
execution.
 Examples: C, Java, Python (in many cases), C++, Pascal, FORTRAN

Think of procedural languages like a detailed recipe, outlining each step in precise
order.

Non-Procedural Languages:

Now imagine you're describing your dream cake to a skilled pastry chef. Instead of
specific instructions, you'd focus on desired characteristics: rich chocolate flavor, fluffy
texture, a hint of raspberry. The chef, based on their expertise, would then figure out the
recipe and baking process themselves. This is closer to how non-procedural languages
operate.

 Focus on "what": These languages describe the desired outcome or relationships
between data, without specifying the exact steps to achieve it.
 Declarative style: They focus on stating facts and constraints rather than giving explicit
commands.
 Functional style: They often rely on functions as the primary building blocks and avoid
state changes within the function itself.
 Examples: SQL, Prolog, Haskell, Lisp, some aspects of Python
Think of non-procedural languages like a wish list or set of rules, letting the computer
determine the best way to achieve the desired outcome.

Key Differences:

Feature       | Procedural Languages                  | Non-Procedural Languages
--------------|---------------------------------------|-----------------------------------
Focus         | How to achieve a result               | What needs to be done
Style         | Imperative                            | Declarative/Functional
Execution     | Sequential                            | Declarative
Control flow  | Explicit (loops, conditionals)        | Implicit (based on relationships)
State changes | Frequent                              | Minimal within functions
Examples      | C, Java, Python, C++, Pascal, FORTRAN | SQL, Prolog, Haskell, Lisp

Choosing the Right Tool:

The choice between procedural and non-procedural languages depends on the specific
problem you're trying to solve:

 Use procedural languages for:
o Tasks requiring fine-grained control over execution flow (e.g., system
programming).
o Performance-critical applications where efficiency is paramount.
o Situations where imperative logic is straightforward and easy to understand.
 Use non-procedural languages for:
o Problems with well-defined relationships and constraints (e.g., querying
databases).
o Tasks where conciseness and abstraction are valued (e.g., mathematical
computations).
o Applications where logic can be naturally expressed in terms of functions.

Remember, both paradigms have their place, and often, the best approach combines
elements of both.

Here's a comprehensive explanation of function overloading in C++, along with
illustrative code examples:

Function Overloading in C++:

Definition: The ability to define multiple functions with the same name but different
parameter lists (types and/or number of arguments).

Purpose:

Code readability: Use the same name for related operations, improving code clarity.

Flexibility: Handle various data types with the same function name.

Rules for Function Overloading:

Parameter Lists Must Differ: The overloaded functions must have different parameter
lists in terms of:

 Number of parameters
 Data types of parameters
 Sequence of parameters

Return Type Alone Isn’t Enough: Overloading based solely on return type is not
allowed.
Examples:

1. Overloading with a Different Number of Arguments:

#include <iostream>
using namespace std;

int add(int a, int b) {
    return a + b;
}

int add(int a, int b, int c) {
    return a + b + c;
}

int main() {
    int sum1 = add(10, 20);     // calls the two-argument add()
    int sum2 = add(10, 20, 30); // calls the three-argument add()
    cout << "Sum of two numbers: " << sum1 << endl;
    cout << "Sum of three numbers: " << sum2 << endl;
    return 0;
}

2. Overloading with Different Data Types of Arguments:

#include <iostream>
#include <string>
using namespace std;

void print(int x) {
    cout << "Integer: " << x << endl;
}

void print(double x) {
    cout << "Double: " << x << endl;
}

void print(string str) {
    cout << "String: " << str << endl;
}

int main() {
    print(10);       // calls print(int)
    print(3.14);     // calls print(double)
    print("Hello");  // calls print(string)
    return 0;
}

3. Overloading with a Different Sequence of Parameters:

C++

#include <iostream>
using namespace std;

void show(int a, double b) {
    cout << "int first: " << a << ", " << b << endl;
}

void show(double a, int b) {
    cout << "double first: " << a << ", " << b << endl;
}

int main() {
    show(10, 2.5);  // calls show(int, double)
    show(2.5, 10);  // calls show(double, int)
    return 0;
}

Key Points:
Overloading is resolved at compile time, not runtime.

It’s a form of compile-time polymorphism.

It can improve code readability and maintainability.

Sources

github.com/DeepakProgrammer1/Deepak-Singh-Kushwaha

Function overloading is a feature of object-oriented programming where two or more
functions can have the same name but different parameters. When a function name is
overloaded with different jobs, it is called function overloading. In function overloading,
the function name should be the same and the arguments should be different. Function
overloading can be considered an example of polymorphism in C++.

If multiple functions have the same name but different parameters, this is known as
function overloading.

When several related operations share the same function name, the readability of the
program increases.

Suppose you have to perform addition of given numbers, but there can be any number
of arguments. If you write functions such as a(int, int) for two parameters and
b(int, int, int) for three parameters, it may be difficult to understand the behavior
of the functions, because their names differ.

The parameters should satisfy one or more of the following conditions for function
overloading:

Parameters should have a different type:

add(int a, int b)
add(double a, double b)
Below is the implementation of the above discussion:
C++

#include <iostream>
using namespace std;

void add(int a, int b)
{
    cout << "sum = " << (a + b);
}

void add(double a, double b)
{
    cout << endl << "sum = " << (a + b);
}

// Driver code
int main()
{
    add(10, 2);
    add(5.3, 6.2);
    return 0;
}

Output
sum = 12
sum = 11.5
Parameters should have a different number
add(int a, int b)
add(int a, int b, int c)
Below is the implementation of the above discussion:
C++

#include <iostream>
using namespace std;

void add(int a, int b)
{
    cout << "sum = " << (a + b);
}

void add(int a, int b, int c)
{
    cout << endl << "sum = " << (a + b + c);
}

// Driver code
int main()
{
    add(10, 2);
    add(5, 6, 4);
    return 0;
}

Output
sum = 12
sum = 15
Parameters should have a different sequence of parameters.
add(int a, double b)
add(double a, int b)
Below is the implementation of the above discussion:
C++

#include <iostream>
using namespace std;

void add(int a, double b)
{
    cout << "sum = " << (a + b);
}

void add(double a, int b)
{
    cout << endl << "sum = " << (a + b);
}

// Driver code
int main()
{
    add(10, 2.5);
    add(5.5, 6);
    return 0;
}

Output
sum = 12.5
sum = 11.5
Following is a simple C++ example to demonstrate function
overloading.
CPP

#include <iostream>
using namespace std;

void print(int i) {
    cout << " Here is int " << i << endl;
}

void print(double f) {
    cout << " Here is float " << f << endl;
}

void print(char const *c) {
    cout << " Here is char* " << c << endl;
}

int main() {
    print(10);
    print(10.10);
    print("ten");
    return 0;
}

Output
Here is int 10
Here is float 10.1
Here is char* ten


Function overloading is possible in C++ and Java, but only if the functions differ from
each other in the types or the number of arguments in the argument list. Functions
cannot be overloaded if they differ only in the return type.

Function Overloading with return type:


Function overloading comes under compile-time polymorphism. During compilation,
the function signature is checked, so functions can be overloaded only if their
signatures differ. The return type of a function has no effect on function overloading;
two functions with the same signature but different return types cannot be overloaded.
Example: if there are two functions, int sum() and float sum(), these will generate
a compile-time error, as function overloading is not possible here.
Let's understand this further through the following C++ program:
C++ Program:
CPP

// CPP program to demonstrate that function overloading
// fails if only return types are different
#include <iostream>
#include <cstdio>

int fun() { return 10; }

char fun() { return 'a'; }
// compiler error, as it is a new declaration of fun()

// Driver Code
int main()
{
    char x = fun();
    getchar();
    return 0;
}
Output
prog.cpp: In function ‘char fun()’:
prog.cpp:6:10: error: ambiguating new declaration of ‘char
fun()’
char fun() { return 'a'; }
^
prog.cpp:4:5: note: old declaration ‘int fun()’
int fun() { return 10; }
^
Functions that cannot be overloaded in C++

In C++, the following function declarations cannot be overloaded.

1) Function declarations that differ only in the return type. For example, the following
program fails in compilation.

#include <iostream>

int foo() {
    return 10;
}

char foo() {  // compiler error: differs only in return type
    return 'a';
}

int main() {
    char x = foo();
    return 0;
}

2) Member function declarations with the same name and the same parameter-type-list
cannot be overloaded if any of them is a static member function declaration. For
example, the following program fails in compilation.

#include <iostream>

class Test {
    static void fun(int i) {}
    void fun(int i) {}  // compiler error: cannot overload with the static version
};

int main() {
    Test t;
    return 0;
}

3) Parameter declarations that differ only in a pointer * versus an array [] are
equivalent. That is, the array declaration is adjusted to become a pointer declaration.
Only the second and subsequent array dimensions are significant in parameter types.
For example, the following two function declarations are equivalent.

int fun(int *ptr);
int fun(int ptr[]); // redeclaration of fun(int *ptr)

4) Parameter declarations that differ only in that one is a function type and the other is
a pointer to the same function type are equivalent.

void h(int ());
void h(int (*)()); // redeclaration of h(int ())

5) Parameter declarations that differ only in the presence or absence of const and/or
volatile are equivalent. That is, the const and volatile type-specifiers for each parameter
type are ignored when determining which function is being declared, defined, or called.
For example, the following program fails in compilation with the error
"redefinition of 'int f(int)'":

#include <iostream>
using namespace std;

int f(int x) {
    return x + 10;
}

int f(const int x) {  // compiler error: redefinition of 'int f(int)'
    return x + 10;
}

int main() {
    return 0;
}

Only the const and volatile type-specifiers at the outermost level of the parameter type
specification are ignored in this fashion; const and volatile type-specifiers buried within a
parameter type specification are significant and can be used to distinguish overloaded
function declarations. In particular, for any type T, "pointer to T," "pointer to const T,"
and "pointer to volatile T" are considered distinct parameter types, as are "reference to
T," "reference to const T," and "reference to volatile T."

6) Two parameter declarations that differ only in their default arguments are equivalent.
For example, the following program fails in compilation with the error
"redefinition of 'int f(int, int)'":

#include <iostream>
using namespace std;

int f(int x, int y) {
    return x + 10;
}

int f(int x, int y = 10) {  // compiler error: redefinition of 'int f(int, int)'
    return x + y;
}

int main() {
    return 0;
}


Why do executing processes store data in primary memory?

Execution processes store data in primary memory for two main reasons: speed and
accessibility.

Speed: Primary memory, also known as RAM (Random Access Memory), is
significantly faster than secondary storage devices like hard drives and SSDs. Access
times are on the order of nanoseconds for RAM, microseconds for SSDs, and
milliseconds for hard drives. This faster access allows the CPU to quickly retrieve and
manipulate data needed for immediate execution, leading to smoother and more
responsive performance.

Accessibility: The CPU can directly access data stored in primary memory. In contrast,
the CPU cannot directly access data in secondary storage. It requires additional steps
and hardware components to transfer data between secondary storage and primary
memory before the CPU can use it. This process adds significant overhead and slows
down execution.

Here’s an analogy: Imagine your primary memory as your desk workspace, where you
have the papers and tools you’re actively working on. Secondary storage would be like
filing cabinets in another room. It holds everything you might need eventually, but
accessing something specific takes time and effort. For tasks requiring immediate
attention, you naturally keep things on your desk for quick retrieval and manipulation.
However, it’s important to note that primary memory is volatile, meaning it loses its
contents when the computer is powered off. This is why secondary storage, despite its
slower access times, is essential for long-term data storage. The operating system
manages the process of loading necessary data from secondary storage to primary
memory for execution and then writing back any changes when the work is done.

In summary, execution processes store data in primary memory for faster access and
direct manipulation by the CPU, leading to efficient and responsive performance. While
secondary storage is crucial for long-term data retention, its slower access makes it
unsuitable for active execution tasks.

How are variables created, and where?

The creation and location of variables depend on the specific programming language
and environment you’re using. Here’s a general overview:

Creation:

Declaration: In most languages, you must first declare a variable by specifying its name
and sometimes its data type, even if you don’t assign a value immediately. This
reserves memory and defines the variable’s purpose.

Assignment: To give the variable a value, you assign it using an equal sign (=) or similar
operator.

Location:

Scope: Different parts of your code have different levels of access to variables, called
scopes. Variables can be:

Local: Created within a specific function or block of code, accessible only within that
scope (fastest access).
Global: Declared outside any function, accessible throughout the program (use with
caution due to potential naming conflicts and debugging difficulties).

Instance: Created within an object, accessible only to that object and its methods.

Static: Global variable within a class, shared by all instances of that class.

Memory Allocation: Based on the scope and language, variables may be stored in
different memory locations:

Stack: For local variables, offering fast access and automatic deallocation when the
function ends.

Heap: For dynamically allocated variables (e.g., created with new in C++ or Java),
managed manually or by a garbage collector, depending on the language.

Data Segment: For global and static variables, persistent throughout the program
execution.

Examples:

Python: name = “Alice” declares and assigns a string variable.

C++: int age; age = 30; declares an integer variable and assigns a value.

Java: String message = “Hello”; creates a local string variable in a method.

Points to Remember:

Different languages have different syntax and rules for variable creation and storage.

Understanding scope and memory allocation is crucial for efficient programming and
avoiding errors.

Global variables should be used sparingly due to potential drawbacks.


If you have a specific programming language in mind, I can provide more detailed
information about its variable creation and location mechanisms.

Type of segments in computer

The term “segment” can have different meanings depending on the context in computer
science. Here are some of the most common types of segments:

Memory Segments:

Operating Systems: In older operating systems like early versions of MS-DOS, memory
was divided into segments for code, data, and stack. This provided some memory
protection but has largely been replaced by paging for better memory management.

Virtual Memory: Virtual memory utilizes segments in combination with pages to create a
flexible memory management system. Each process can have its own address space
divided into segments, further divided into smaller fixed-size pages.

Data Structures:

Arrays: An array is a basic data structure storing elements of the same data type in
contiguous memory locations. It can be considered as a single segment of data.

Linked Lists: These structures consist of nodes containing data and references to other
nodes, forming a linear sequence. Each node can be seen as a separate segment.

Trees and Graphs: Nodes in these structures can also be viewed as segments holding
data and connections to other nodes.

Computer Graphics:

Line Segments: In 2D and 3D graphics, line segments are fundamental building blocks
used to represent lines, edges, and other geometric shapes. Each segment has start
and end points defining its length and direction.

Network Segmentation:

VLANs (Virtual LANs): In computer networks, VLANs logically segment a single physical
network into multiple broadcast domains, increasing security and reducing traffic
congestion.
Other:

Market Segmentation: This marketing strategy divides a customer base into segments
based on shared characteristics for targeted marketing campaigns.

Code Segmentation: Some development tools break down code into segments for
better organization and analysis.

Remember, the specific meaning of “segment” depends on the context. If you can
provide more context about what you’re interested in, I can offer a more targeted
explanation.


What is a code segment?


“Code segment” can have two main meanings in computer science, depending on the
context:

1. Memory Segment: This refers to a historical approach to memory management
used in older operating systems like early versions of MS-DOS. Memory was
divided into fixed-size blocks called segments, typically for:

Code segment: Containing executable instructions for the program.

Data segment: Storing program data like variables and constants.

Stack segment: Used for function calls and temporary data.

This method offered basic memory protection, but it wasn’t very flexible and has been
largely replaced by paging in modern systems. Paging divides memory into smaller,
fixed-size pages, allowing for more efficient memory utilization and dynamic allocation.

2. Executable Instructions:

More commonly today, “code segment” can simply refer to a portion of an object file or a
program’s memory space that contains executable instructions. This segment holds the
actual machine code or bytecode that the CPU can directly interpret and execute.
Essentially, it’s the part of the program that “does the work.”
Here are some key points about code segments in this context:

They are stored in object files after compiling the source code.

When loaded into memory, they are typically placed in a read-only section protected
from accidental modification.

Depending on the programming language and architecture, a code segment might
include other things like constants and tables used by the instructions.

Understanding code segments is essential for concepts like:

Memory management: Knowing how code interacts with memory helps optimize
program performance and prevent memory-related errors.

Debugging: Analyzing code segments can help pinpoint the source of program crashes
or unexpected behavior.

Disassembly: Tools can decode machine code in a code segment to reveal its
underlying instructions for analysis or reverse engineering.

Remember, the specific meaning of “code segment” depends on the context. If you
have a specific situation in mind, feel free to share more details and I can provide a
more tailored explanation.

Stack segments:

The term “stack segment” can have two different meanings in computer architecture,
depending on the context:

1. Stack segment as a memory region:

In older memory management systems like segmentation, the stack was treated as a
dedicated segment within the physical memory. This segment held temporary data used
by function calls and local variables. It had its own base address and size, defined in
segment registers alongside other segments like code and data. This approach offered
modularity and security benefits, but it’s less common in modern systems.

2. Stack segment as a logical concept:

Modern systems often use simpler memory management techniques like paging, where
memory is divided into fixed-size pages instead of segments. However, the concept of a
stack remains crucial for function calls and temporary data storage. Here, the “stack
segment” refers to the logical structure of this data, following the LIFO (Last In, First
Out) principle. Imagine a pile of plates; adding a new plate pushes it on top, and
removing one takes the topmost plate. This “stack” grows and shrinks dynamically
within the allocated memory region, not needing a dedicated segment.

Role in memory management:

Regardless of the implementation, the stack plays a vital role in memory management:

Function calls: When a function is called, the arguments, local variables, and return
address are pushed onto the stack. This creates a separate execution environment for
each function, allowing nested function calls and recursion. When the function returns,
the data is popped off the stack, restoring the previous environment.

Temporary data: Variables declared within a function or block typically reside on the
stack. This allows dynamic memory allocation for temporary values without worrying
about deallocation, as the stack automatically cleans up when the function ends.

Parameter passing: In some architectures, function arguments are passed on the stack,
making them accessible within the called function.

Understanding the concept of the stack, whether implemented as a dedicated segment
or a logical structure, is essential for comprehending program execution, memory
usage, and debugging issues.

I hope this clarifies the concept of the stack segment in both historical and modern
contexts. If you have any further questions or specific aspects you’d like to explore, feel
free to ask!

Heap segments:

The heap segment is a crucial part of memory management in computer systems.
Here's a breakdown of its key characteristics:

Purpose:
 The heap is a dynamic memory region used for allocating memory at runtime. It serves
as a flexible pool for storing data whose size or lifetime isn't known beforehand.
 This contrasts with the stack, which has a fixed size and primarily stores function call
data with a well-defined scope.
Allocation and Deallocation:
 Memory on the heap is allocated using functions like malloc, calloc, and realloc in
C and C++, or similar mechanisms in other languages.
 Unlike the stack, where deallocation is automatic, the programmer is responsible for
explicitly deallocating heap memory using free or equivalent functions. This is critical to
prevent memory leaks, where unused memory remains allocated and unavailable for
other uses.
Applications:
 The heap is essential for diverse data structures whose size cannot be determined at
compile time, such as:
o Dynamic arrays whose size grows or shrinks based on user input
o Linked lists and trees, which have variable sizes depending on their content
o Objects created at runtime based on user actions
Management and Challenges:
 Efficient heap management is crucial for program performance and stability.
 Improper allocation and deallocation can lead to:
o Memory leaks: Unused memory remains allocated, reducing available space and
potentially causing program crashes.
o Fragmentation: Used and unused memory become scattered, making it harder to
allocate larger blocks later.
o Buffer overflows: Improperly managing buffer sizes can lead to writing beyond
allocated memory, potentially causing security vulnerabilities or program crashes.
Beyond Programming Languages:
 The concept of a heap can extend beyond traditional programming languages.
Operating systems might use heaps for managing memory allocated to processes or
shared among them.
I hope this clarifies the heap segment in memory management. If you have any further
questions or specific aspects you'd like to explore, feel free to ask!

Data segments:

Data segments in memory management can refer to two distinct but related concepts:

1. Segmentation as a memory management technique:

In this context, data segments are logical divisions of a program’s memory space. Each
segment represents a specific type of data, such as code, initialized data, uninitialized
data, stack, or heap. This logical grouping allows for better memory protection and
organization compared to a single contiguous memory block.

Here are some key points about memory segmentation:

Advantages:

Flexibility: Segments can be of different sizes and grow/shrink dynamically, improving
memory utilization.

Protection: Access control can be applied at the segment level, enhancing security.

Sharing: Segments can be shared between processes, facilitating communication and
resource reuse.

Disadvantages:

Complexity: Requires additional bookkeeping by the operating system.

External fragmentation: Free memory can become unusable due to gaps between
segments.

2. Data segment within a program:

This refers to a specific segment in a program’s memory layout that stores initialized
global and static variables. While not all operating systems or programming languages
explicitly use terminology like “data segment,” the concept of dedicated memory regions
for variables still exists.

Here are some characteristics of a data segment:

Content: Holds initialized global and static variables, essentially data with known
values at compile time.

Access: Typically read-write, as variable values can change during program
execution.

Size: Static, determined by the size of the variables it contains.

Understanding both meanings of “data segments” is crucial for effectively grasping
memory management concepts. If you have a specific question or context in mind, feel
free to provide more details, and I’ll be happy to delve deeper!
