UNIT III-Notes - Mobile Application Development

Mobile Application Development (Anna University)


UNIT III
MEMORY MANAGEMENT-DESIGN PATTERNS FOR LIMITED MEMORY –
WORKFLOW FOR APPLICATION DEVELOPMENT – JAVA API – DYNAMIC
LINKING – PLUGINS AND RULE OF THUMB FOR USING DLL –
CONCURRENCY AND RESOURCE MANAGEMENT

OVERVIEW OF MEMORY MANAGEMENT


DEFINITION
Memory management is the functionality of an operating system that handles or
manages primary memory and moves processes back and forth between main memory and disk
during execution. Memory management keeps track of each and every memory location,
regardless of whether it is allocated to some process or free.
When mobile operating systems are considered, memory management plays a key role.
Because mobile devices are constrained in their hardware, special care has to be taken over
memory management.
The Android operating system is based on the Linux kernel, which provides a paging system.
Memory is categorized as internal memory, swap memory, external memory, and so on.
 The Android Runtime (ART) and the Dalvik virtual machine use paging and
memory-mapping (mmapping) to manage memory.
 This means that any memory an app modifies, whether by allocating new
objects or touching mmapped pages, remains resident in RAM and
cannot be paged out.
 The only way to release memory from an app is to release object references
that the app holds, making the memory available to the garbage collector.
 There is one exception: any files mapped in without modification, such as code, can
be paged out of RAM if the system wants to use that memory elsewhere.
LINUX KERNEL VS ANDROID OS
 Android OS is built on the Linux kernel and enhances its features
by adding more custom libraries to the existing ones in order to support better
system functionality: for example, the last-seen-first-killed design, which kills the
least recently used process first.
 Memory management is the hardest part of mobile development. Mobile devices, from
cheaper ones to the most expensive ones, have a limited amount of dynamic memory
compared to our personal computers.
 The basic facilities run by the kernel are process management, memory
management, device management and system calls. Android also supports all these
features.

Poor memory management shows itself in various ways:


1. Allocating more memory space than the application actually needs,
2. Not releasing the memory area retained by the application,
3. Releasing a memory area more than once (usually as a result of multi-threaded operations),
4. While using automatic memory solutions (ARC or GC), losing track of your objects.
GARBAGE COLLECTION

 A managed memory environment, like the ART or Dalvik virtual machine, keeps track
of each memory allocation.
 Once it determines that a piece of memory is no longer being used by the program, it
frees it back to the heap, without any intervention from the programmer. The
mechanism for reclaiming unused memory within a managed memory
environment is known as garbage collection.

Garbage collection has two goals:
1. Find data objects in a program that cannot be accessed in the future;
2. Reclaim the resources used by those objects.
Android’s memory heap is a generational one, meaning that there are
different buckets of allocations that it tracks, based on the expected life and size
of an object being allocated. For example, recently allocated objects belong in the Young
generation. When an object stays active long enough, it can be promoted to an older
generation, followed by a permanent generation.
Each heap generation has its own dedicated upper limit on the amount of
memory that objects there can occupy. Any time a generation starts to fill up, the
system executes a garbage collection event in an attempt to free up memory. The
duration of the garbage collection depends on which generation of objects it is
collecting and how many active objects are in each generation.

 The system has a running set of criteria for determining when to perform
garbage collection. When the criteria are satisfied, the system stops executing
the process and begins garbage collection.

 If garbage collection occurs in the middle of an intensive processing loop like an
animation or during music playback, it can increase processing time. This increase can
potentially push code execution in your app past the recommended 16ms threshold for
efficient and smooth frame rendering.
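The point above about releasing object references can be illustrated with a small Java sketch. The class name and buffer size are illustrative, and note that `System.gc()` is only a request to the runtime, never a guarantee:

```java
import java.lang.ref.WeakReference;

public class GcSketch {
    public static void main(String[] args) {
        byte[] buffer = new byte[1024 * 1024];            // 1 MB allocation
        WeakReference<byte[]> watcher = new WeakReference<>(buffer);

        buffer = null;   // drop the only strong reference the app holds
        System.gc();     // request (not force) a garbage collection event

        // Once no strong reference remains, the collector is free to
        // reclaim the buffer; the weak reference is then cleared.
        System.out.println(watcher.get() == null ? "reclaimed" : "resident");
    }
}
```

Holding the buffer in a long-lived field instead would keep it resident, which is why leaked references are a main cause of out-of-memory conditions in a managed environment.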

UNDERSTANDING APPLICATION PRIORITY AND PROCESS STATES


 The order in which processes are killed to reclaim resources is determined
by the priority of the hosted applications. An application's priority is equal
to its highest-priority component.
 Where two applications have the same priority, the process that has been at a lower
priority longest will be killed first. Process priority is also affected by
interprocess dependencies; if an application has a dependency on a
Service or Content Provider supplied by a second application, the
second application will have at least as high a priority as the
application it supports.
 All Android applications will remain running and in memory until the system
needs their resources for other applications.


Active Processes
 Active (foreground) processes are those hosting applications with
components currently interacting with the user.
 These are the processes Android is trying to keep responsive by reclaiming
resources.
There are generally very few of these processes, and they will be killed only as a last resort.
 Activities in an "active" state; that is, they are in the foreground
and responding to user events. You will explore Activity states in greater detail later in
this chapter.
 Activities, Services, or Broadcast Receivers that are currently executing an
onReceive event handler.
 Services that are executing an onStart, onCreate, or onDestroy event handler.
Visible Processes
 Visible, but inactive processes are those hosting "visible" Activities. As
the name suggests, visible Activities are visible, but they aren't in the
foreground or responding to user events.

Started Service Processes
 Processes hosting Services that have been started. Services support ongoing
processing that should continue without a visible interface.
 Because Services don't interact directly with the user, they receive a slightly
lower priority than visible Activities.
 They are still considered to be foreground processes and won't be
killed unless resources are needed for active or visible processes.
Background Processes
 Processes hosting Activities that aren’t visible and that don’t
have any Services that have been started are considered
background processes.
 There will generally be a large number of background processes that Android will kill
using a last-seen-first-killed pattern to obtain resources for foreground processes.
Empty Processes
 To improve overall system performance, Android often retains applications in
memory after they have reached the end of their lifetimes.
 Android maintains this cache to improve the start-up time of applications when
they're relaunched. These processes are routinely killed as required.

DDMS
 Android Studio includes a debugging tool called the Dalvik Debug Monitor Service
(DDMS). DDMS provides services like screen capture on the device, threading, heap
information on the device, logcat, processes, incoming calls, SMS checking, location,
data spoofing, and many other things related to testing your Android application.

 DDMS connects the IDE to the applications running on the device. On Android, every
application runs in its own process, each of which hosts its own virtual machine (VM).
And each process listens for a debugger on a different port.
 When it starts, DDMS connects to ADB (Android Debug Bridge, a command-line
utility included with Google's Android SDK).
 A device monitoring service is started between the two, which will notify DDMS
when a device is connected or disconnected.
 When a device is connected, a VM monitoring service is created between ADB and
DDMS, which will notify DDMS when a VM on the device is started or terminated.


DESIGN PATTERNS FOR LIMITED MEMORY

When composing designs for devices with a limited amount of memory,
the most important principle is not to waste memory, as pointed out by
Noble and Weir (2001). This means that the design should be based on the most adequate
data structure, which offers the right operations.
LINEAR DATA STRUCTURES

 In contrast to data structures where a separate memory area is reserved for
each item, linear data structures are those where different elements are
located next to each other in the memory.
 Examples of non-linear data structures include common (linked) implementations of
lists and tree-like data structures, whereas arrays and tables, for instance, are
linear data structures.
 The difference in the allocation in the memory also plays a part in the quality properties
of data structures.
Linear data structures are generally better for memory management than non-linear ones for
several reasons, as listed in the following:
• Less fragmentation. Linear data structures occupy memory in one contiguous
location, whereas non-linear ones can be scattered across different places. Obviously, the
former results in less possibility of fragmentation.
• Less searching overhead. Reserving a linear block of memory for several items only
takes one search for a suitable memory element in the run-time environment,
whereas non-linear structures require one request for memory per allocated element. Combined
with a design where one object allocates a number of child objects, this may also lead to a
serious performance problem.
• Design-time management. Linear blocks are easier to manage at design time,
as fewer reservations are made. This usually leads to cleaner designs.
• Monitoring. Addressing can be performed in a monitored fashion, because it is possible to
check that the used index refers to a legal object.
• Cache improvement. When using linear data structures, it is more likely that the next data
element is already in cache, as cache works internally with blocks of memory. A related issue
is that most caches expect that data structures are used in increasing order of used memory
locations. Therefore, it is beneficial to reflect this in designs where applicable.
• Index uses less memory. An absolute reference to an object usually consumes 32 bits,
whereas by allocating objects to a vector of 256 objects, assuming that this is the upper limit of
objects, an index of only 8 bits can be used. Furthermore, it is possible to check that there will
be no invalid indexing.
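The index argument above can be sketched as follows. This is a sketch, not library code; `NodePool` and its field names are made up for illustration. A pool of at most 256 nodes can link its elements with 1-byte indices instead of full object references, and every access can be bounds-checked:

```java
public class NodePool {
    static final int MAX = 256;             // upper limit on objects

    final int[] value = new int[MAX];       // payload, stored linearly
    final byte[] next = new byte[MAX];      // 8-bit "pointer" to next node

    int get(int index) {
        // Monitored addressing: reject indices that do not refer
        // to a legal object instead of following a stray pointer.
        if (index < 0 || index >= MAX)
            throw new IndexOutOfBoundsException("illegal node: " + index);
        return value[index];
    }
}
```

Linking nodes through the `next` array costs one byte per reference instead of four or more, and the whole pool occupies a single linear block of memory.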

BASIC DESIGN DECISIONS


1. Allocate all memory at the beginning of a program. This ensures that the
application always has all the memory it needs, and memory allocation can only fail at
the beginning of the program.
2. Allocate memory for several items, even if you only need one. Then,
one can build a policy where a number of objects are reserved with one allocation
request. These objects can then be used later when needed.
3. Use standard allocation sizes
4. Reuse objects
5. Release early, allocate late
6. Use permanent storage or ROM when applicable. In many situations, it is
not even desirable to keep all the data structures in the program memory due to physical
restrictions.
7. Avoid recursion. Invoking methods obviously causes stack frames to be generated.
While the size of an individual stack frame can be small – for instance, in the Kilo Virtual
Machine (KVM), a mobile Java virtual machine commonly used in early
Java-enabled mobile phones, the size of a single stack frame is at least 28 bytes (7 × 4
bytes) – functions calling themselves recursively can end up using a lot of stack if the
depth of the recursion is not considered beforehand.
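Decisions 1, 2, 4 and 5 above can be combined into a simple object pool. The sketch below uses a hypothetical `BufferPool` class: all allocation happens once in the constructor, and buffers are reused rather than discarded:

```java
import java.util.ArrayDeque;

public class BufferPool {
    private final ArrayDeque<byte[]> free = new ArrayDeque<>();

    public BufferPool(int count, int size) {
        // Decisions 1 and 2: allocate everything up front, several at once,
        // so allocation can only fail here, at program start-up.
        for (int i = 0; i < count; i++)
            free.push(new byte[size]);
    }

    public byte[] acquire() {
        byte[] b = free.poll();
        if (b == null)                       // pool exhausted: fail loudly
            throw new IllegalStateException("pool exhausted");
        return b;
    }

    public void release(byte[] b) {
        free.push(b);                        // decision 4: reuse objects
    }
}
```

Because every buffer has the same standard size (decision 3), any released buffer can satisfy any later request without fragmenting the heap.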

Data Packing
Data packing is probably the most obvious way to reduce memory
consumption. There are several sides to data packing.

Use compression with care. In addition to considering the data layout in memory, there are several
compression techniques for decreasing the size of a file.
 Table compression, also referred to as nibble coding or Huffman
coding, is about encoding each element of data in a variable
number of bits so that the more common elements require fewer
bits.

 Difference coding is based on representing sequences of data
according to the differences between them. This typically results in
better memory reduction than table compression, but also sometimes leads to more
complexity, as not only absolute values but also differences are to be managed.
 Adaptive compression is based on algorithms that analyze the data to
be compressed and then adapt their behavior accordingly. Again,
further complexity is introduced, as it is the compression algorithm that is evolving,
not only data.
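Difference coding, for instance, can be sketched in a few lines of Java. The `DeltaCoder` class is illustrative; a real implementation would also pack the small deltas into fewer bits:

```java
public class DeltaCoder {
    // Store each sample as the difference from its predecessor;
    // slowly varying data then consists mostly of small numbers.
    static int[] encode(int[] data) {
        int[] out = new int[data.length];
        int prev = 0;
        for (int i = 0; i < data.length; i++) {
            out[i] = data[i] - prev;
            prev = data[i];
        }
        return out;
    }

    // Decoding manages differences, not absolute values: each output
    // is the running sum of the deltas seen so far.
    static int[] decode(int[] deltas) {
        int[] out = new int[deltas.length];
        int prev = 0;
        for (int i = 0; i < deltas.length; i++) {
            prev += deltas[i];
            out[i] = prev;
        }
        return out;
    }
}
```

For example, `encode(new int[]{100, 101, 103, 102})` yields `{100, 1, 2, -1}`: three of the four values now fit in a handful of bits each.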

WORKFLOW FOR APPLICATION DEVELOPMENT

The following are the complete phases of the mobile app development process; they will
guide you throughout your development journey.

1. Researching
2. Wire framing
3. Evaluating Technical Feasibility
4. Prototyping
5. Designing
6. Developing
7. Testing
8. Deploying the app


1. Researching

 You might already have plenty of ideas for your mobile app; it is still good to dig deeper
into the demographics, behavior patterns, and demands of your targeted audience. The other
important thing covered in this phase of the mobile app development process is
not to overlook your competitors. By researching thoroughly, get yourself answers to the
following questions:
o Who is your targeted audience?
o What will be a suitable platform to launch your app?
o What are your competitors offering to customers?
 These are just a few questions from the long list that you have to keep in mind. While
researching, think from your customers' perspective to find out what additional
features your app should have to make it stand out from the crowd. Sparing enough
time for researching and brainstorming will build a strong foundation for your mobile app
development.

2. Wire framing
 Wire framing gives a clear understanding of your app's features and
functionalities, and hence is a crucial phase. Draw detailed sketches
of the product you want to build to reveal usability problems beforehand. Wire
framing helps narrow down the ideas and organize all the app design components correctly.
 Try to identify how your planned features will blend into a fully functional mobile
application. Also, make a storyboard or roadmap to demonstrate how a user will use and
explore your app. Your prime focus should be on delivering an excellent customer
experience by simplifying the roadmap.

3. Evaluating Technical Feasibility

 In this phase of the mobile app development process, you have to check whether the
backend systems will be capable of supporting the app's functionality.
To figure out the technical feasibility of your app's idea, access public data
by sourcing public application programming interfaces (APIs).
 Understand that an app with different formats (wearables, smartphones, tablets,
etc.) and platforms (Android, iOS, or any other) will not have the same needs. By the end of
this stage, you will have various ideas for your app's functionality.
4. Prototyping

 Prototyping helps to determine whether you are moving in the right
direction. It is understandable that you cannot fully convey to your users
how your app works and functions without developing it completely.
 Create a prototype that conveys the app's concept to your targeted audience to verify how
it works. You can give your stakeholders a first look at your app and let them touch the
prototype to deliver their feedback about it.

5. Designing

 UI (User Interface) and UX (User Experience) are the two vital
components of your mobile app design. The former is responsible for the look and
appeal of your application, whereas the latter facilitates the interaction between the design
elements.
 The time required for designing cannot be specified exactly, as it may take anywhere from
a couple of hours to a few days. Another factor that impacts your app design time is the
experience of the developers at your mobile app development services provider.
 It is a multistep process that needs to be done carefully to ensure that the outcome provides
a clear vision of your business idea.

6. Developing

 Generally, this phase of the Android and iOS app development process starts at the very
initial stage.
 Right after you finalize an app idea, the developers need to develop a prototype to
authenticate the features and functionalities.
 The development phase is further divided into various sections, where the
team or a developer writes pieces of code, which are then tested by another team. After
marking the first section as bug-free, the development team moves further.
 In the case of complex projects with frequently changing user requirements, it is good to opt
for an agile methodology. This type of methodology leads to progressive development and
brings flexibility in the planning process.

7. Testing
 Testing early and frequently gives developers the advantage of fixing a bug right
when it occurs.
 It also controls the final cost of development, as fixing a bug that occurred in the
first stage requires much more money and effort once you have reached the fifth
stage or beyond.
 While testing your app, keep compatibility, security, usability,
performance, UI checks, and other factors in mind. Check whether your
application serves its purpose or not.

8. Deploying the app

 In this stage, your app is ready to launch. To do so, select a day and release your
mobile application on the desired platforms.
 Deploying the app is not technically the final step, as you will receive feedback from your
audience and have to make changes accordingly. You also have to answer the queries of
users using your mobile application. It would not be wrong to say that mobile app
development is a long-term commitment rather than just a short-term project.

JAVA API
What is Java?
 Java is an object-oriented programming language that runs on almost
all electronic devices. Java is platform-independent because of Java
virtual machines (JVMs).
 It follows the principle of "write once, run anywhere." When a JVM is
installed on the host operating system, it automatically adapts to the environment and
executes the program's functionalities.
 To install Java on a computer, the developer must download the JDK and set up the
Java Runtime Environment (JRE).
As previously noted, a Java download consists of two files:
 JDK
 JRE
The JDK file is key to developing APIs in Java and consists of:
 The compiler
 The JVM
 The Java API
WHAT IS JAVA API AND THE NEED FOR JAVA APIS?
Java application programming interfaces (APIs) are predefined software tools that enable
interactivity between multiple applications.
Compiler


A Java compiler is a predefined program that converts the high-level, user-written code
language to low-level, computer-understandable, byte-code language during the
compile time.

JVM

A JVM processes the byte-code from the compiler and provides an output in a
user-readable format.

Java APIS

Java APIs are integrated pieces of software that come with JDKs. APIs in Java provide
the interface between two different applications and establish communication.

 APIs are important software components bundled with the JDK.


 APIs in Java include classes, interfaces, and user interfaces.
 They enable developers to integrate various applications and websites and offer
real-time information.
 The following image depicts the fundamental components of the Java API.
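As a minimal example of an API that ships with the JDK, the `java.util` collections API defines the `List` interface and the `ArrayList` implementation used below:

```java
import java.util.ArrayList;
import java.util.List;

public class ApiDemo {
    public static void main(String[] args) {
        // List (the interface) and ArrayList (an implementation)
        // are both part of the Java API bundled with the JDK.
        List<String> names = new ArrayList<>();
        names.add("Ada");
        names.add("Alan");
        System.out.println(names.size()); // prints 2
    }
}
```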

Three types of developers use Java APIs based on their job or project:
1. Internal developers

2. Partner developers
3. Open developers

Internal Developers

Internal developers use internal APIs for a specific organization. Internal APIs are accessible only
by developers within one organization.
Applications that use internal APIs include:
 B2B
 B2C
 A2A
 B2E
Examples include Gmail, Google Cloud VM, and Instagram.

Partner Developers

Organizations that establish communications develop and use partner APIs. These types of APIs
are available to partner developers via API keys.
Applications that use partner APIs include:
 B2B
 B2C
Examples include Finextra and Microsoft (MS Open API Initiative).


Open Developers

Some leading companies provide access to their APIs to developers in the open-source
format. These businesses provide access to APIs via a key so that the company can ensure
that the API is not used illegally.
The application type that uses open APIs is:
 B2C
Examples include Twitter and Telnyx.
THE NEED FOR JAVA APIS
Java developers use APIs to:

Streamline Operating Procedures

Social media applications like Twitter, Facebook, LinkedIn, and Instagram
provide users with multiple options on one screen. Java APIs make this functionality possible.

Improve Business Techniques

Introducing APIs to the public leads many companies to release private data so as to generate
new ideas, fix existing bugs, and discover new ways to improve operations.

Create Powerful Applications

Online banking has changed the industry forever, and APIs offer customers the ability to manage
their finances digitally with complete simplicity.
TYPES OF JAVA APIS
There are four types of APIs in Java:
 Public
 Private
 Partner
 Composite

Public
Public (or open) APIs are Java APIs that come with the JDK. They do not have strict restrictions
about how developers use them.

Private

Private (or internal) APIs are developed by a specific organization and are accessible to only
employees who work for that organization.

Partner

Partner APIs are considered to be third-party APIs and are developed by organizations for strategic
business operations.

Composite

Composite APIs are microservices; developers build them by combining several service APIs.
THE ADVANTAGES OF APIS
Some of the main advantages of using Java APIs include:

Extensive SQL Support

APIs in Java enable a wide range of SQL support services in user applications through a
component-based interface.
Scope

Java APIs easily make websites, applications, and information available to a wide range of users
and audiences.

Customization

Java APIs enable developers and businesses to build applications that personalize the user interface
and data.

Adaptability

Java APIs are highly flexible and adaptable because they can easily accept feature updates and
changes to frameworks and operating environments.

Application

APIs in Java provide effortless access to all of an application‘s major software components and
easily deliver services.

Efficiency

Java APIs are highly efficient because they enable rapid application deployment. Also, the data that
the application generates is always available online.

Automation

APIs allow computers to upload, download, update, and delete data automatically
without human interaction.

Integration

Java APIs can integrate into any application and website and provide a fluid user experience
with dynamic data delivery.
DYNAMIC LINKING


Dynamic linking, often implemented with dynamically linked libraries (DLLs), is a
common way to partition applications and subsystems into smaller portions,
which can be compiled, tested, reused, managed, deployed, and installed
separately.

 Several applications can use the library in such a fashion that only one copy
of the library is needed, thus saving memory.
 Application-specific tailoring can be handled in a convenient fashion, provided that
supporting facilities exist.
 Smaller compilations and deliveries are enabled.
 Composition of systems becomes more flexible, because only a
subset of all possible software can be included in a device when
creating a device for a certain segment.
 It is easier to focus testing to some interface and features that can be
accessed using that interface.
 Library structure eases scoping of system components and enables
the creation of an explicit unit for management.
 Work allocation can be done in terms of dynamic libraries, if the implementation is
carried out using a technique that does not support convenient mechanisms for
modularity.
STATIC VERSUS DYNAMIC DLLS
 While dynamically linked libraries are all dynamic in their nature, there are two different
implementation schemes.
 One is static linking, which most commonly means that the library is
instantiated at the starting time of a program, and the loaded library
resides in the memory as long as the program that loaded the library into its memory
space is being executed.
 In contrast to static DLLs, dynamic DLLs, often also referred to
as plug-ins, especially if they introduce some special extension, can
be loaded and unloaded whenever needed, and the facilities can thus be
altered during execution.
 The benefit of the approach is that one can introduce new features using such libraries.
For instance, in the mobile setting one can consider that sending a message is an
operation that is similar to different message types (e.g. SMS, MMS, email), but
different implementations are needed for communicating with the network in the correct
fashion.

CHALLENGES WITH USING DLLS


1. A common risk associated with dynamic libraries is the fragmentation of the
total system into small entities that refer to each other seemingly uncontrollably.
2. Another problem is that if a dynamic library is used by only one application,
memory consumption increases due to the infrastructure needed for the
management of the library.
IMPLEMENTATION TECHNIQUES
 Fundamentally, dynamically linked libraries can be considered as
components that offer a well-defined interface for other pieces of
software to access it.
 Additional information may be provided to ease their use. This is not a necessity,
however, but they can be self-contained as well, in which case the parts of the
program that use libraries must find the corresponding information from libraries.
 Usually, this is implemented with standard facilities and an application programmer
has few opportunities to optimize the system.


Dynamically linked libraries can be implemented in two different fashions.


1. Offset based linking
2. Signature based linking
OFFSET BASED LINKING
 Linking based on offsets is probably the most common way to load
dynamic libraries.
 The core of the approach is to add a table of function pointers to the library file, which
identifies where the different methods or procedures exported from the dynamically
linked library are located, thus resembling the virtual function table used in
inheritance.
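The idea can be sketched in Java with a table of function objects standing in for the exported function pointers; the `OffsetTable` class and its slot assignments are purely illustrative:

```java
import java.util.function.IntUnaryOperator;

public class OffsetTable {
    // The "library" exports its operations as a table; each entry's
    // position in the table is its offset.
    static final IntUnaryOperator[] EXPORTS = {
        x -> x + 1,    // slot 0: increment
        x -> x * 2,    // slot 1: double
    };

    // A client links against a slot number, not a symbol name.
    static int call(int slot, int arg) {
        return EXPORTS[slot].applyAsInt(arg);
    }
}
```

Because clients depend only on slot numbers, the library's implementation can change freely as long as the table layout stays fixed, which is exactly the binary-compatibility property offset-based linking relies on.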
SIGNATURE BASED LINKING
 In contrast to offset-based linking of dynamically linked libraries,
language-level constructs, such as class names and method signatures, can
also be used as the basis for linking.
 Then, the linking is based on loading the whole library to the memory and then
performing the linking against the actual signatures of the functions, which must then
be present in one form or another.
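Java's reflection API gives a close analogue of signature-based linking: a class is loaded by name, and a method is resolved against its actual signature. The `SignatureLinker` helper below is a sketch; its name and its restriction to static methods taking one `String` argument are assumptions made for the example:

```java
import java.lang.reflect.Method;

public class SignatureLinker {
    // Resolve a class by name, then link against a method whose
    // signature matches (here: one String parameter), and invoke it.
    static Object call(String className, String methodName, String arg)
            throws Exception {
        Class<?> cls = Class.forName(className);            // load by name
        Method m = cls.getMethod(methodName, String.class); // match signature
        return m.invoke(null, arg);                         // static call
    }
}
```

For example, `call("java.lang.Integer", "parseInt", "42")` resolves and invokes `Integer.parseInt(String)` at run time.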

PLUGINS AND RULE OF THUMB FOR USING DLL

 Dynamically loaded dynamically linked libraries are often referred to as plugins,
especially if they play the role of an extension or specialization. Plugins are a special type
of DLL that enables differentiation of operations for different purposes at runtime.
 Plugins usually implement a common interface used by an application, but their operations
can still differ at the level of implementation.
 For example, one could implement different plugins for different types of messages that
can be sent from a mobile device.
PLUGIN PRINCIPLES
 Plugins take advantage of the binary compatibility of the interface
provided by a dynamically linked library.
 The important concepts of a plugin are the interfaces they implement, and the
implementations they provide for interfaces. The interface part is used by applications
using the plugin for finding the right plugins, and the implementation defines the actual
operations.
 Commonly some special information regarding the interface is provided, based on
which the right plugin library can be selected.
 When a plugin is selected for use, its implementation part is
instantiated in the memory similarly to normal dynamically linked
libraries.
 It is possible to load and unload several plugins for the same
interface during the execution of an application, depending on the required operations.

One applicable solution for the implementation of plugins is the abstract factory design
pattern introduced by Gamma et al. (1995).

In the pattern, the prefixes Abs and Conc refer to the abstract and concrete elements
of the design. In some cases, the programmer is responsible for all the
operations, in which case all the plugins are just plain dynamically linked libraries
from which the programmer selects one.
 In other cases, sophisticated support for plugins is implemented, where the
infrastructure handles plugin selection based on some heuristics, for instance.
 The idea of plugins can be applied in a recursive fashion. This means
that a plugin used for specializing some functions of the system can use other
plugins in its implementation to allow improved flexibility.
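The messaging example can be sketched with a common interface and a small factory. All class names here are illustrative, and a real plugin system would load the implementations from separately deployed DLLs rather than compile them in:

```java
interface MessageSender {                      // the interface part
    String send(String text);
}

class SmsSender implements MessageSender {     // one implementation part
    public String send(String text) { return "SMS: " + text; }
}

class EmailSender implements MessageSender {   // another implementation
    public String send(String text) { return "EMAIL: " + text; }
}

class SenderFactory {
    // Resolution: select the right implementation for a requested type.
    static MessageSender create(String type) {
        switch (type) {
            case "sms":   return new SmsSender();
            case "email": return new EmailSender();
            default:      throw new IllegalArgumentException(type);
        }
    }
}
```

The application programs only against `MessageSender`, so new message types can be added by registering new implementations without touching client code.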

IMPLEMENTATION-LEVEL CONCERNS
To begin with, a framework is commonly used for loading and unloading plugins. This
implies a mechanism for extending (or degenerating) an application on the fly when some
new services are needed.
Secondly, in order to enable loading, facilities must be offered for searching all the available
implementations of a certain interface. This selection process is commonly referred to as
resolution. In many implementations, a default implementation is provided if no other libraries
are present, especially when a system plugin that can be overridden is used.
Finally, a policy for registering components and removing them is needed in order to enable
dynamic introduction of features. This can be based on the use of registry files, or simply on
copying certain files to certain locations, for instance.
MANAGING MEMORY CONSUMPTION RELATED TO DYNAMICALLY
LINKED LIBRARIES
Memory consumption forms a major concern in the design of software for mobile devices. At
the same time, a dynamically linked library is often the smallest unit of software that can be
realistically managed when developing software for mobile devices.
MEMORY LIMIT
Setting explicit limits regarding memory usage for all parts of the system is one way to
manifest the importance of controlling memory usage. Therefore, make all dynamically
linked libraries (and other development-time entities), as well as their developers,
responsible for the memory they allocate.
INTERFACE DESIGN PRINCIPLES
1. Select the right operation granularity
 In many cases, it is possible to reveal very primitive operations, out of which the
clients of a dynamically linked library can then compose more complex operations. In
contrast, one can also provide relatively few operations, each of which is responsible
for a more complex set of executions.
 A common rule of thumb is to select the granularity of the visible interface operations
so that they are logical operations that a client can ask the library to perform, and not
to allow setting and getting of individual values (overly simplistic operations) or
simply commanding the library to doIt() (overly abstract operations), for instance.
2. Allow the client to control transmissions.
 This allows implementations where clients optimize their memory and general
resource situation in their behaviors, whereas if the service provider is in control, all
the clients get the same treatment, leaving no room for special situations on the client
side.
3. Minimize the amount of data to be transmitted.
 For individual functions that call each other this means that the size of the stack
remains smaller. Furthermore, there will be less need for copying the objects that are
passed from one procedure to another.
4. Select the best way to transmit the data. There are three fundamentally different ways
to transmit data, referred to as lending, borrowing, and stealing. They are described in the
following in detail.
1. Lending. When a client needs a service, it provides some resources (e.g.
memory) for the service provider to use. The responsibility for the resource remains
in the client's hands.
2. Borrowing. When a client needs a service, it borrows some of the service
provider's own resources for its purposes. The client assumes the responsibility
for the deallocation, but uses the service provider's operation for this.
3. Stealing. The service provider allocates some resources whose control is transferred
to the client. The owner of the resource is changed, and the client assumes full
ownership, including the responsibility for the deallocation.
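The lending and stealing styles can be sketched in Java using byte buffers as the transmitted resource. This is an illustrative analogy only: Java's garbage collector handles deallocation, so borrowing (where the client must call the provider's release operation) has no direct equivalent here, and the method names are assumptions.

```java
// Illustrative sketch of lending vs stealing data transmission styles,
// using byte buffers as the resource; names are hypothetical.
public class TransferDemo {

    // Lending: the client owns the buffer and lends it to the service,
    // which only fills it. Responsibility stays with the client.
    static int readInto(byte[] clientBuffer) {
        for (int i = 0; i < clientBuffer.length; i++) clientBuffer[i] = (byte) i;
        return clientBuffer.length;           // number of bytes produced
    }

    // Stealing: the service allocates the buffer and transfers full
    // ownership to the client, which keeps it for as long as needed.
    static byte[] readAll(int n) {
        byte[] serviceBuffer = new byte[n];
        for (int i = 0; i < n; i++) serviceBuffer[i] = (byte) i;
        return serviceBuffer;                 // ownership passes to caller
    }

    public static void main(String[] args) {
        byte[] mine = new byte[4];            // client-owned resource
        int produced = readInto(mine);        // lending
        byte[] stolen = readAll(4);           // stealing
        System.out.println(produced + " " + stolen[3]);
    }
}
```

Lending lets the client size and reuse its buffer to control its own memory situation, matching principle 2 above, whereas stealing leaves allocation decisions to the service provider.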

RULES OF THUMB FOR USING DYNAMICALLY LOADED LIBRARIES


In principle, any unit of software can be implemented as a separate library. In practice,
however, creating a complex collection of libraries that depend on each other in an ad-hoc
fashion is an error that is to be avoided at all cost.
In the following, we list some candidate justifications for creating a separate dynamically
linked library out of a program component.
• Reusable or shareable components should be implemented using dynamically loaded
libraries, as otherwise all the applications that use the components must contain them
separately. This in turn consumes memory. In addition, sharing can take place in a form where
the same model, implemented as a dynamically loaded library, is reused in several different
devices that require a specialized user interface for each device.
• A variation or management point can be preferable to implement in terms of dynamic
libraries. This makes variation or management more controlled, as it can be directly associated
with a software component. Moreover, the library can be easily changed, if updates or
modifications are needed. For instance, interfaces can be treated in this fashion even if the
underlying infrastructure (e.g. C++) would not offer such an option. Management can also be
associated with scoping in general, allowing that a certain module can be evolved separately.
• Software development processes such as automated testing may require that all the
input is in a form that can be directly processed. Then, performing the tests may require
that binary components are delivered.
• An organizational unit can be authorized to compose a single library that is responsible
for a certain set of functions. Requesting the library then results in a separate deliverable
that can be directly integrated into the final system.

CONCURRENCY AND RESOURCE MANAGEMENT


 The software run inside a mobile device can in general be taken as an
event handler, which simply responds to the events received from the
outside world.
 The implementation mechanisms for concurrent programming are more
or less standardized.
 Threads and processes give boundaries for managing executions and resources.
 In addition, some mechanisms are available to ensure that several threads do
not modify the same variables at the same time.

INFRASTRUCTURE FOR CONCURRENT PROGRAMMING


When programming a system where some input is generated by the environment and requires
immediate reaction whereas other input leads to extensive executions, parallel processing is
usually needed.
Three different cases can be considered:
1. Executions are unknown to each other. However, they can still affect
each other by competing for the same resource, like processor
execution time.
2. Executions are aware of each other indirectly. For instance, they may
have some common shared resource via which they cooperate.
3. Executions communicate with each other directly.
THREADING
 Threads that wait for stimuli (response or activity) and react to
them are a commonly used mechanism for creating highly
responsive applications.
 This allows incoming stimuli to initiate operations, which in turn can generate new
stimuli for other threads, or perhaps more generally lead to the execution of some
procedures by the threads themselves.
 The cost of using threads is that each thread essentially requires
memory for one execution stack, and causes small overhead during
scheduling.
 Threads can be executed in a pre-emptive or non-pre-emptive fashion.
 The former means that the thread that is being executed can be interrupted, and another
thread can be selected for execution. The latter implies that once a thread is being
executed, its execution will only be interrupted for another thread when the executing
thread is ready.
INTER-THREAD COMMUNICATION
While threads are a mechanism for creating executions, in order to accomplish
operations at a higher level of abstraction, several threads are often needed that
cooperate for completing a task.
 For example, establishing and maintaining a phone call requires the cooperation of a
user interface, communication protocols, radio interface, a microphone, a speaker, and a
unit that coordinates the collaboration.
 This cooperation requires inter-thread communication. There are several
mechanisms for making threads communicate.
1. Shared Memory
 Probably the simplest form of communication is the case where
threads use a shared variable for their communication.
 In most cases, the access to the variable must be implemented such that threads can
temporarily block each other, so that only one thread at a time performs some
operations on the variable.
 In general, such operations commonly address memory that is shared by a number of
threads, and blocking is needed to ensure that only one thread at a
time enters the critical region.
 For more complex cases, semaphores are a common technique for ensuring that only
one thread at a time enters the critical region
2. Message Passing
 Message passing is another commonly used mechanism for implementing
cooperation between threads.
 In this type of approach, the idea is that threads can send messages to each other, and that
the kernel will deliver the messages.
 The architecture of such a system resembles message-dispatching architecture,
where the kernel acts as the bus, and individual processes are the stations attached
to it.
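The two mechanisms above can be sketched in Java. The example below uses a `BlockingQueue` as the message "bus" between two threads; note that this is a Java SE class (`java.util.concurrent`), used here for illustration rather than an API of MIDP-era mobile Java.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Minimal sketch of message passing between two threads: the queue plays
// the role of the kernel bus that delivers messages to a waiting station.
public class MessageDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> mailbox = new ArrayBlockingQueue<>(8);

        Thread receiver = new Thread(() -> {
            try {
                // Block until a message arrives, then handle it.
                String msg = mailbox.take();
                System.out.println("received: " + msg);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        receiver.start();

        mailbox.put("CALL_SETUP");            // sender posts a message
        receiver.join();                      // wait for the handler to finish
    }
}
```

Because `take()` blocks and `put()` wakes the receiver, the queue itself enforces the mutual exclusion that a shared-variable solution would otherwise have to implement with explicit locking or semaphores.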

COMMON PROBLEMS
Concurrent programs can fail in three different ways.
1. One category of errors in mutual exclusion can lead to deadlock, where all
threads wait for each other to release the resources they hold.
2. Secondly, starvation occurs when threads reserve resources such that some
particular thread can never reserve all the resources it needs at the same time.
3. Thirdly, errors with regard to atomic operations, i.e., operations that are
to be executed in full without any intervention from other threads, can lead to a
failure.
MIDP JAVA AND CONCURRENCY
While Java again in principle hides the implementation of the threading model from the user, its
details can become visible to the programmer in some cases.
1. Threading in Virtual Machine
 In the implementation of the Java virtual machine, one important topic is how to
implement threads. There are two alternative implementations, one where threads of the
underlying operating system are used for implementing Java threads, and the other
where the used virtual machine is run in one operating system thread, and it is
responsible for scheduling Java threads for execution.
 The latter types of threads are often referred to as green threads; they are
invisible to the underlying operating system, which sees only a single thread.
 The scheme can be considered as a sophisticated form of event-based
programming, because in essence, the virtual machine simply acts as an event handler
and schedules the different threads into execution in accordance to incoming events.
2. Using Threads in Mobile Java
 Using threads in Java is simple. The class Thread can be instantiated
with a Runnable, which creates a new thread.
 The new thread starts its execution from method run after the creator calls the start
method of the thread.
 Mutual exclusion is implemented either in the fashion the methods are called or
using the synchronized keyword, which enables monitor-like implementations.
3. As an example, consider the following case. There is a shared variable that is shared by two
classes. The variable is hosted by IncrementerThread, but the other thread (MyThread) is
allowed to access the value directly. The hosting thread (instance of IncrementerThread)
will only increment the shared integer value.
The definition of the thread is the following:
public class IncrementerThread extends Thread
{
    public int i;

    public IncrementerThread()
    {
        i = 0;
        // Wrap this object in a new Thread and start it; run() below
        // is then executed in that thread.
        Thread t = new Thread(this);
        t.start();
    }

    public void run()
    {
        for (;;) i++;  // increment the shared value indefinitely
    }
}
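The definition of the accessing thread (MyThread) is not shown above, and the unbounded, unsynchronized counter is awkward to demonstrate directly. The following self-contained variant of the example, with an illustrative bounded loop and a volatile field so the reader reliably sees the writes, sketches the same idea:

```java
// Self-contained variant of the example above: the incrementing thread is
// bounded and the shared field is volatile so another thread observes its
// updates. The bound and field names are illustrative assumptions.
public class SharedCounterDemo {
    static class Incrementer extends Thread {
        volatile int i = 0;                // shared value, visible across threads
        public void run() {
            for (int n = 0; n < 1000; n++) i++;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Incrementer inc = new Incrementer();
        inc.start();
        inc.join();                        // wait until incrementing is done
        System.out.println(inc.i);         // prints 1000
    }
}
```

Without `volatile` (or synchronization), a second thread reading `i` directly, as MyThread does in the text, may observe a stale value, which is exactly the kind of shared-memory hazard discussed above.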

Problems with Java Threading


1. Perhaps the main problem of the scheme presented above is that
switching between threads is a very costly operation.
2. The threads are rather independent entities, which is in line with the
principle that objects are entities of their own.
RESOURCE MANAGEMENT

RESOURCE-RELATED CONCERNS IN MOBILE DEVICES


A mobile device is a specialized piece of hardware. It has several different
types of resources that may require special monitoring and management
activities that require scalable yet uniform designs.

OVERVIEW
 In many ways, each resource included in a mobile device can be considered as a
potential variation and management point.
 The parts that are associated with the file system and disks in general should form a
subsystem.
 We are essentially creating a manager for all aspects of the file system inside a
specialized entity. This gives a clear strategy for connecting software to the underlying
hardware.
 First, a device driver addresses the hardware, and on top of the driver, a
separate resource manager takes care of higher-level concerns.
 Subsystems can communicate with each other by for instance sending messages to each
other.
 Process boundaries can be used for separating the different resources at the level of an
implementation. Unfortunately, this essentially makes handling errors that occur in the
manager of a certain resource more difficult.
 Another problem in isolating resource managers is memory protection: in
many cases resource managers can use the same data but memory
protection may require the use of copies. A practical guideline for designing
such isolation is that it should be possible to reconstruct all events for debugging
purposes.
There are two fundamentally different solutions for embedding resources in the system.
1. The first solution is to put all resources under one control. This can be
implemented using a monolithic kernel or a virtual machine through which the access
to the resources of the device is provided.
2. The alternative is to use an approach where all resource managers run in
different processes, and the kernel only has minimal scheduling and
interrupt-handling responsibility.

GROUPING RESOURCE MANAGERS


A monolithic design, where several, if not all, resource-related operations are embedded in the
OS kernel, requires a design where the kernel includes a large amount of code and auxiliary
data.

 In the accompanying figure, ellipses denote resources and application processes,
and the monolithic kernel is shown as a rectangle.
 The interface to the kernel can be understood as an API to all the
resources that are accessed via the kernel, although in a practical implementation
an interrupt-like routine is used.
 A practical example of such a system is Linux, where the kernel is in principle
monolithic, but dedicated modules are used for several rather independent tasks, like
processor and cache control, memory management, networking stacks, and device and
I/O interfaces, to name some examples.
 Such a system is commonly implemented in terms of (procedural) interfaces between
resources.

 This approach has both positive and negative aspects. Addressing
different parts of the kernel through procedural interfaces can be
implemented in a performance-effective fashion, as no context switching
is required and all the resources can be accessed directly.
 The operating system can serve the requests of programs faster, in
particular when an operation that requires coordination in several resources is needed. On
the downside, without careful management, it is possible to create a tangled code base in the
kernel, where the different parts of the system are very tightly coupled.
SEPARATING RESOURCE MANAGERS
 Parallel partitioning of resource-management-related facilities of a
mobile device leads to a design where individual resources are
managed by separate software modules.
 These modules can then communicate with each other using messages,
leading to an architecture commonly referred to as message
passing architecture


One common implementation for this type of architecture is that all the modules run in a
process of their own. Inside a module, a thread is used to listen to messages.
Whenever a message is received, the thread executes a message-handling procedure, which
fundamentally is about event processing. This requires several context switches; to avoid
them, shared memory can be introduced.
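One resource-manager module of this kind can be sketched as follows: a dedicated thread listens on a message queue and runs a handler per message. The module name, message strings, and shutdown convention are hypothetical, and `BlockingQueue` again stands in for the platform's real message-delivery mechanism.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of one resource-manager module: a thread of its own listens for
// messages and dispatches each to a handling procedure (event processing).
public class FileManagerModule {
    private final BlockingQueue<String> inbox = new ArrayBlockingQueue<>(16);
    private final Thread listener;
    public volatile int handled = 0;               // messages processed so far

    public FileManagerModule() {
        listener = new Thread(() -> {
            try {
                for (;;) {
                    String msg = inbox.take();     // block for the next message
                    if (msg.equals("SHUTDOWN")) return;
                    handle(msg);                   // message-handling procedure
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        listener.start();
    }

    private void handle(String msg) {
        handled++;
        System.out.println("file manager handling: " + msg);
    }

    public void send(String msg) throws InterruptedException {
        inbox.put(msg);                            // other modules post here
    }

    public void shutdown() throws InterruptedException {
        inbox.put("SHUTDOWN");
        listener.join();                           // wait for the module to stop
    }

    public static void main(String[] args) throws InterruptedException {
        FileManagerModule fm = new FileManagerModule();
        fm.send("OPEN /tmp/x");
        fm.shutdown();
    }
}
```

Running each such module in its own process (rather than a thread, as here) gives the isolation discussed above, at the cost of context switches for every message.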

RESOURCE-HOSTING VIRTUAL MACHINE


 One more approach is to introduce a separate software entity, a
virtual machine to host resources of the underlying platform.
 Virtual machines can be of several levels of abstraction, ranging from low-level virtual
machines that can be considered a feature of the underlying hardware to complete
interpreters that can offer sophisticated services, and their features may vary
accordingly.
 The benefits of using a virtual machine in this context are the expected ones.
 Firstly, porting can be eased, and in fact it is possible to define a standard execution
environment, with well-defined, standardized interfaces, as defined by mobile Java.
Moreover, techniques such as dynamic compilation can also be used.

COMMON CONCERNS
There are several common concerns when considering resource use of a mobile device.
Many of them are derivatives of scarce hardware resources, but some of them can be traced to
the requirements of organizations developing mobile systems. For instance, one can consider
the following concerns:

• Extension and adaptation is needed for being able to reuse the same code base
in different devices and contexts whose hardware and software characteristics and
available resources may differ. For instance, some devices can use hardware-supported
graphics acceleration, whereas others use the main processor for this task.

• Performance requires special attention, because in many cases mobile device
hardware is slower than workstation hardware on the one hand, and, due to the generic
nature of mobile devices as application platforms, optimization for a single purpose is
harder than in a purely embedded setting on the other hand.

• Energy management is an issue that arises when mobile devices are used more like
computers and less like single-purpose devices; being constantly active consumes more power.

• Internal resource management is needed for ensuring that the right resources are
available at the right time. Furthermore, it may be a necessity to introduce features for
handling hardware-specific properties. The issue is emphasized by the fact that in many cases,
resource management is always running in the background
