OOP - Basics Discussion
In the programming world, there are two main ways to tell a computer what to do:
procedurally and non-procedurally. Each approach has its own strengths, weaknesses,
and ideal uses. Let's delve deeper into these paradigms:
Procedural Languages:
Imagine you're giving instructions to a friend on how to bake a cake. You'd likely give
them step-by-step instructions: preheat the oven, mix the ingredients, pour the batter,
bake for a specific time, etc. This is essentially how procedural languages work.
Focus on "how": These languages explicitly tell the computer the exact sequence of
steps to take to achieve a desired outcome.
Imperative style: They rely heavily on commands that change the state of the program
(e.g., assigning values to variables, performing calculations).
Sequential execution: Code is typically written in lines that are executed one after the
other, in the defined order.
Control flow structures: Loops and conditional statements are used to control the flow of
execution.
Examples: C, Java, Python (in many cases), C++, Pascal, FORTRAN
Think of procedural languages like a detailed recipe, outlining each step in precise
order.
Non-Procedural Languages:
Now imagine you're describing your dream cake to a skilled pastry chef. Instead of
specific instructions, you'd focus on desired characteristics: rich chocolate flavor, fluffy
texture, a hint of raspberry. The chef, based on their expertise, would then figure out the
recipe and baking process themselves. This is closer to how non-procedural languages
operate.
Key Differences:
Feature      | Procedural Languages                  | Non-Procedural Languages
-------------|---------------------------------------|-----------------------------------
Focus        | How to achieve a result               | What needs to be done
Style        | Imperative                            | Declarative/Functional
Control flow | Explicit (loops, conditionals)        | Implicit (based on relationships)
Examples     | C, Java, Python, C++, Pascal, FORTRAN | SQL, Prolog, Haskell, Lisp
The choice between procedural and non-procedural languages depends on the specific
problem you're trying to solve.
Remember, both paradigms have their place, and often, the best approach combines
elements of both.
Function Overloading:
Definition: The ability to define multiple functions with the same name but different
parameter lists (types and/or number of arguments).
Purpose:
Code readability: Use the same name for related operations, improving code clarity.
Flexibility: Handle various data types with the same function name.
Lists Must Differ: The overloaded functions must have different parameter lists in
terms of:
Number of parameters
Data types of parameters
Sequence of parameters
Return Type Alone Isn’t Enough: Overloading based solely on return type is not
allowed.
Examples:

#include <iostream>
using namespace std;

int sum(int a, int b) { return a + b; }
int sum(int a, int b, int c) { return a + b + c; }

int main() {
    cout << "Sum of three numbers: " << sum(10, 20, 30) << endl;
    return 0;
}
#include <iostream>
using namespace std;

void print(int x) { cout << "Integer: " << x << endl; }
void print(double x) { cout << "Double: " << x << endl; }

int main() {
    print(5);
    print(2.5);
    return 0;
}
C++

#include <iostream>
using namespace std;

void swap(int& x, int& y) {
    int temp = x;
    x = y;
    y = temp;
}

void swap(double& x, double& y) {
    double temp = x;
    x = y;
    y = temp;
}

int main() {
    // …
}
Key Points:
Overloading is resolved at compile time, not runtime.
Sources
Github.com/DeepakProgrammer1/Deepak-Singh-Kushwaha
When multiple functions have the same name but different parameter lists, it is known
as Function Overloading.
If we only have to perform one kind of operation, giving the related functions the same
name increases the readability of the program.
Suppose you have to add numbers, but there can be any number of arguments. If you
write functions such as a(int, int) for two parameters and b(int, int, int) for three
parameters, it becomes difficult to understand the behavior of the functions because
their names differ.
The parameters should satisfy one or more of the following conditions for function
overloading:
Parameters should have a different type
add(int a, int b)
add(double a, double b)
Below is the implementation of the above discussion:
C++
#include <iostream>
using namespace std;

void add(int a, int b) { cout << "sum = " << (a + b) << endl; }
void add(double a, double b) { cout << "sum = " << (a + b) << endl; }

// Driver code
int main() {
    add(10, 2);
    add(5.3, 6.2);
    return 0;
}
Output
sum = 12
sum = 11.5
Parameters should have a different number
add(int a, int b)
add(int a, int b, int c)
Below is the implementation of the above discussion:
C++
#include <iostream>
using namespace std;

void add(int a, int b) { cout << "sum = " << (a + b) << endl; }
void add(int a, int b, int c) { cout << "sum = " << (a + b + c) << endl; }

// Driver code
int main() {
    add(10, 2);
    add(5, 6, 4);
    return 0;
}
Output
sum = 12
sum = 15
Parameters should have a different sequence of parameters.
add(int a, double b)
add(double a, int b)
Below is the implementation of the above discussion:
C++
#include <iostream>
using namespace std;

void add(int a, double b) { cout << "sum = " << (a + b); }
void add(double a, int b) { cout << endl << "sum = " << (a + b); }

// Driver code
int main() {
    add(10, 2.5);
    add(5.5, 6);
    return 0;
}
Output
sum = 12.5
sum = 11.5
Following is a simple C++ example to demonstrate function
overloading.
CPP
#include <iostream>
using namespace std;

void print(int i) { cout << "Here is int " << i << endl; }
void print(double f) { cout << "Here is float " << f << endl; }
void print(const char* c) { cout << "Here is char* " << c << endl; }

int main() {
    print(10);
    print(10.10);
    print("ten");
    return 0;
}
Output
Here is int 10
Here is float 10.1
Here is char* ten
Function overloading is possible in C++ and Java, but only if the functions differ from
each other in the types or the number of arguments in the argument list. Functions
cannot be overloaded if they differ only in the return type.
#include <iostream>

int fun() { return 10; }

// compiler error as it is a
// new declaration of fun()
char fun() { return 'a'; }

// Driver Code
int main() {
    char x = fun();
    getchar();
    return 0;
}
Output
prog.cpp: In function ‘char fun()’:
prog.cpp:6:10: error: ambiguating new declaration of ‘char
fun()’
char fun() { return 'a'; }
^
prog.cpp:4:5: note: old declaration ‘int fun()’
int fun() { return 10; }
^
Functions that cannot be overloaded in C++
1) Function declarations that differ only in the return type cannot be overloaded. For
example, the following program fails in compilation.

#include <iostream>

int foo() {
    return 10;
}

// compiler error: differs only in the return type
char foo() {
    return 'a';
}

int main() {
    char x = foo();
    return 0;
}
2) Member function declarations with the same name and the same parameter-type-list
cannot be overloaded if any of them is a static member function declaration. For
example, the following program fails in compilation.
#include <iostream>
#include <stdio.h>

class Test {
    static void fun(int i) {}
    void fun(int i) {} // compiler error: cannot overload with the static member above
};

int main() {
    Test t;
    getchar();
    return 0;
}
4) Parameter declarations that differ only in that one is a function type and the other is
a pointer to the same function type are equivalent.
5) Parameter declarations that differ only in the presence or absence of const and/or
volatile are equivalent. That is, the const and volatile type-specifiers for each parameter
type are ignored when determining which function is being declared, defined, or called.
For example, the following program fails in compilation with the error "redefinition of
'int f(int)'".
Example:

#include <iostream>
#include <stdio.h>

int f(int x) {
    return x + 10;
}

// redefinition of 'int f(int)': top-level const is ignored
int f(const int x) {
    return x + 10;
}

int main() {
    getchar();
    return 0;
}
Only the const and volatile type-specifiers at the outermost level of the parameter type
specification are ignored in this fashion; const and volatile type-specifiers buried within a
parameter type specification are significant and can be used to distinguish overloaded
function declarations. In particular, for any type T, "pointer to T," "pointer to const T,"
and "pointer to volatile T" are considered distinct parameter types, as are "reference to
T," "reference to const T," and "reference to volatile T."
6) Two parameter declarations that differ only in their default arguments are equivalent.
For example, the following program fails in compilation with the error "redefinition of
'int f(int, int)'".
#include <iostream>
#include <stdio.h>

int f(int x, int y = 10) {
    return x + 10;
}

// redefinition of 'int f(int, int)': default arguments are ignored
int f(int x, int y = 20) {
    return x + y;
}

int main() {
    getchar();
    return 0;
}
Execution processes store data in primary memory for two main reasons: speed and
accessibility.
Speed: Primary memory (RAM) is orders of magnitude faster than secondary storage
such as hard drives or SSDs, so the CPU can read and write the data it is actively
working on without long waits.
Accessibility: The CPU can directly access data stored in primary memory. In contrast,
the CPU cannot directly access data in secondary storage. It requires additional steps
and hardware components to transfer data between secondary storage and primary
memory before the CPU can use it. This process adds significant overhead and slows
down execution.
Here’s an analogy: Imagine your primary memory as your desk workspace, where you
have the papers and tools you’re actively working on. Secondary storage would be like
filing cabinets in another room. It holds everything you might need eventually, but
accessing something specific takes time and effort. For tasks requiring immediate
attention, you naturally keep things on your desk for quick retrieval and manipulation.
However, it’s important to note that primary memory is volatile, meaning it loses its
contents when the computer is powered off. This is why secondary storage, despite its
slower access times, is essential for long-term data storage. The operating system
manages the process of loading necessary data from secondary storage to primary
memory for execution and then writing back any changes when the work is done.
In summary, execution processes store data in primary memory for faster access and
direct manipulation by the CPU, leading to efficient and responsive performance. While
secondary storage is crucial for long-term data retention, its slower access makes it
unsuitable for active execution tasks.
The creation and location of variables depend on the specific programming language
and environment you’re using. Here’s a general overview:
Creation:
Declaration: In most languages, you must first declare a variable by specifying its name
and sometimes its data type, even if you don’t assign a value immediately. This
reserves memory and defines the variable’s purpose.
Assignment: To give the variable a value, you assign it using an equal sign (=) or similar
operator.
Location:
Scope: Different parts of your code have different levels of access to variables, called
scopes. Variables can be:
Local: Created within a specific function or block of code, accessible only within that
scope (fastest access).
Global: Declared outside any function, accessible throughout the program (use with
caution due to potential naming conflicts and debugging difficulties).
Instance: Created within an object, accessible only to that object and its methods.
Static: Global variable within a class, shared by all instances of that class.
Memory Allocation: Based on the scope and language, variables may be stored in
different memory locations:
Stack: For local variables, offering fast access and automatic deallocation when the
function ends.
Heap: For dynamically allocated variables (e.g., using new in JavaScript), managed by
the garbage collector.
Data Segment: For global and static variables, persistent throughout the program
execution.
Examples:
C++: int age; age = 30; declares an integer variable and assigns a value.
Points to Remember:
Different languages have different syntax and rules for variable creation and storage.
Understanding scope and memory allocation is crucial for efficient programming and
avoiding errors.
The term “segment” can have different meanings depending on the context in computer
science. Here are some of the most common types of segments:
Memory Segments:
Operating Systems: In older operating systems like early versions of MS-DOS, memory
was divided into segments for code, data, and stack. This provided some memory
protection but has largely been replaced by paging for better memory management.
Virtual Memory: Virtual memory utilizes segments in combination with pages to create a
flexible memory management system. Each process can have its own address space
divided into segments, further divided into smaller fixed-size pages.
Data Structures:
Arrays: An array is a basic data structure storing elements of the same data type in
contiguous memory locations. It can be considered as a single segment of data.
Linked Lists: These structures consist of nodes containing data and references to other
nodes, forming a linear sequence. Each node can be seen as a separate segment.
Trees and Graphs: Nodes in these structures can also be viewed as segments holding
data and connections to other nodes.
Computer Graphics:
Line Segments: In 2D and 3D graphics, line segments are fundamental building blocks
used to represent lines, edges, and other geometric shapes. Each segment has start
and end points defining its length and direction.
Network Segmentation:
VLANs (Virtual LANs): In computer networks, VLANs logically segment a single physical
network into multiple broadcast domains, increasing security and reducing traffic
congestion.
Other:
Market Segmentation: This marketing strategy divides a customer base into segments
based on shared characteristics for targeted marketing campaigns.
Code Segmentation: Some development tools break down code into segments for
better organization and analysis.
Remember, the specific meaning of "segment" depends on the context.
“Code segment” can have two main meanings in computer science, depending on the
context:
1. Memory Segmentation (Historical): In segmented memory architectures, the code
segment was a dedicated region of physical memory holding a program's executable
instructions, addressed through its own segment register. This method offered basic
memory protection, but it wasn't very flexible and has been largely replaced by paging
in modern systems. Paging divides memory into smaller, fixed-size pages, allowing for
more efficient memory utilization and dynamic allocation.
2. Executable Instructions:
More commonly today, “code segment” can simply refer to a portion of an object file or a
program’s memory space that contains executable instructions. This segment holds the
actual machine code or bytecode that the CPU can directly interpret and execute.
Essentially, it’s the part of the program that “does the work.”
Here are some key points about code segments in this context:
They are stored in object files after compiling the source code.
When loaded into memory, they are typically placed in a read-only section protected
from accidental modification.
Memory management: Knowing how code interacts with memory helps optimize
program performance and prevent memory-related errors.
Debugging: Analyzing code segments can help pinpoint the source of program crashes
or unexpected behavior.
Disassembly: Tools can decode machine code in a code segment to reveal its
underlying instructions for analysis or reverse engineering.
Remember, the specific meaning of "code segment" depends on the context.
Stack segments:
The term “stack segment” can have two different meanings in computer architecture,
depending on the context:
In older memory management systems like segmentation, the stack was treated as a
dedicated segment within the physical memory. This segment held temporary data used
by function calls and local variables. It had its own base address and size, defined in
segment registers alongside other segments like code and data. This approach offered
modularity and security benefits, but it’s less common in modern systems.
Modern systems often use simpler memory management techniques like paging, where
memory is divided into fixed-size pages instead of segments. However, the concept of a
stack remains crucial for function calls and temporary data storage. Here, the “stack
segment” refers to the logical structure of this data, following the LIFO (Last In, First
Out) principle. Imagine a pile of plates; adding a new plate pushes it on top, and
removing one takes the topmost plate. This “stack” grows and shrinks dynamically
within the allocated memory region, not needing a dedicated segment.
Regardless of the implementation, the stack plays a vital role in memory management:
Function calls: When a function is called, the arguments, local variables, and return
address are pushed onto the stack. This creates a separate execution environment for
each function, allowing nested function calls and recursion. When the function returns,
the data is popped off the stack, restoring the previous environment.
Temporary data: Variables declared within a function or block typically reside on the
stack. This allows dynamic memory allocation for temporary values without worrying
about deallocation, as the stack automatically cleans up when the function ends.
Parameter passing: In some architectures, function arguments are passed on the stack,
making them accessible within the called function.
Heap segments:
The heap segment holds dynamically allocated memory, requested at runtime (for
example with new in C++ or malloc in C). Unlike stack data, heap allocations persist
until they are explicitly freed, or reclaimed by a garbage collector in managed
languages, and the heap typically grows toward the stack from the opposite end of the
process's address space.
Data segments:
Data segments in memory management can refer to two distinct but related concepts
In this context, data segments are logical divisions of a program’s memory space. Each
segment represents a specific type of data, such as code, initialized data, uninitialized
data, stack, or heap. This logical grouping allows for better memory protection and
organization compared to a single contiguous memory block.
Advantages:
Memory protection: each segment can be given its own access permissions.
Logical organization: code, data, and stack are kept in clearly separated regions.
Disadvantages:
External fragmentation: Free memory can become unusable due to gaps between
segments.
This refers to a specific segment in a program’s memory layout that stores initialized
global and static variables. While not all operating systems or programming languages
explicitly use terminology like “data segment,” the concept of dedicated memory regions
for variables still exists.
Content: Holds initialized global and static variables, essentially data with known
values at compile time.