
Chapter 2: Data Science

Chapter contents:

 An Overview of Data Science

 What are data and information

 Data types and their representation

 Data value Chain

 Basic concepts of big data


Overview of Data Science
• What are data science, data, information, and big data?

 Data is raw facts which, on their own, cannot be used for decisions or judgments.

 Data science is:

 defined as a multi-disciplinary field that uses scientific methods, processes, algorithms,


and systems to extract knowledge and insights from structured, semi-structured,
and unstructured data.

 It is much more than simply analyzing data.

 It offers a range of roles and requires a range of skills.

 As an academic discipline and profession, data science continues to evolve as one of
the most promising and in-demand career paths for skilled professionals.
cont. ..
 Data professionals understand that they must advance beyond the traditional skills of
analyzing large amounts of data, data mining, and programming.

 Data scientists need to be curious and results-oriented, with exceptional


industry-specific knowledge and communication skills that allow them to
explain highly technical results to their non-technical counterparts.
What are data and information

• What are data and information?

 Data can be defined as a representation of facts, figures, concepts, or


instructions in a formalized manner,

• which should be suitable for communication, interpretation, or


processing by humans or electronic machines.

 It can be described as unprocessed facts and figures.

 It is represented with the help of characters such as letters (A-Z, a-z), digits (0-9),
or special characters (+, -, /, *, <, >, =, etc.).
cont. ..
 Information is defined as:

Processed or interpreted data on which decisions and actions are


based.
It is data that has been processed into a form that is meaningful to
the recipient.
It is created from organized, structured, and processed data in a
particular context.
Data Processing Cycle
 The data processing cycle is the sequence of steps or operations used to
transform raw data into useful information.

 Data processing is the re-structuring or re-ordering of data by people


or machines to increase its usefulness and add value for a particular
purpose.

 Data processing consists of three basic steps:

I. Input

II. Processing

III. Output

(Input → Processing → Output)
Cont..
►Input - in this step, the input/raw data is prepared in some convenient
form for processing.

►Processing - in this step, the input data is changed to produce data in a
more useful form.
 Transforming raw data into a more usable form.
 For example, interest can be calculated on a deposit in a bank, or a summary of
sales for the month can be calculated from the sales orders.

►Output - at this stage, the result or outcome of the preceding processing


step is collected.
 Decoding or interpreting the processed output and presenting it to the user.
 For example, output data may be payroll for employees.
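The slide's interest example can be sketched in Python to make the three steps concrete (the deposit amount and interest rate below are made-up illustration values):

```python
# A minimal sketch of the input -> processing -> output cycle, using
# the slide's example of calculating interest on a bank deposit.
# The deposit and rate values are hypothetical.

deposit = 1000.00      # Input: raw data prepared in a convenient form
annual_rate = 0.05

interest = deposit * annual_rate   # Processing: transform input into a more useful form

print(f"Interest earned: {interest:.2f}")   # Output: present the result to the user
```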
Data types and their representation

 Data types can be described from diverse perspectives. Here are some of them:
i. Data types from a computer programming perspective
Common data types from a programmer's perspective include:
 Integers (int) - used to store whole numbers.
 For instance, Integers = { ..., −4, −3, −2, −1, 0, 1, 2, 3, 4, ... }
 Booleans (bool) - used to represent values restricted to one of two options: true or
false.
 Characters (char) - used to store a single character (letter, digit, symbol, etc.).
 Floating-point numbers (float) - used to store real numbers.
 Alphanumeric strings (string) - used to store a combination of characters and
numbers.
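The data types listed above can be illustrated in Python (note that Python has no separate char type; a single character is simply a string of length 1):

```python
# The common programming data types, sketched in Python.
age = 27            # int: a whole number
is_student = True   # bool: one of two values, True or False
grade = "A"         # char: a single character (a length-1 string in Python)
gpa = 3.75          # float: a real number
user_id = "abc123"  # string: a combination of characters and numbers

print(type(age).__name__, type(is_student).__name__,
      type(grade).__name__, type(gpa).__name__, type(user_id).__name__)
```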
cont. ..
ii. Data types from a data analytics perspective
From a data analytics point of view, there are three common data types.
A. Structured data:- adheres to a pre-defined data model and is
therefore highly organized and straightforward to analyze.
 It conforms to a tabular format, organized in rows and columns.
 e.g. Excel files or SQL databases

B. Semi-structured data:- is a form of structured data that does not


conform to the formal structure of data models associated with
relational databases or other forms of data tables.
 It is also known as a self-describing structure. (why?)
 e.g. JSON, XML, sensor data

A semi-structured data: XML Example.


<employees>
<employee>
A semi-structured data: JSON Example. <firstName>John</firstName> <lastName>Doe</lastName>
</employee>
{"employees":[ <employee>
{ "firstName":"John", "lastName":"Doe" }, <firstName>Anna</firstName> <lastName>Smith</lastName>
</employee>
{ "firstName":"Anna", "lastName":"Smith" }, <employee>
{ "firstName":"Peter", "lastName":"Jones" } <firstName>Peter</firstName> <lastName>Jones</lastName>
]} </employee>
</employees> 10
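The employees JSON above can be parsed with Python's standard json module. Notice that the field names travel inside the data itself, which is why semi-structured data is called self-describing:

```python
import json

# Parse the employees JSON from the slide. The keys ("firstName",
# "lastName") are embedded in the data itself -- self-describing.
text = '''{"employees":[
  { "firstName":"John",  "lastName":"Doe"   },
  { "firstName":"Anna",  "lastName":"Smith" },
  { "firstName":"Peter", "lastName":"Jones" }
]}'''

data = json.loads(text)
first_names = [e["firstName"] for e in data["employees"]]
print(first_names)   # ['John', 'Anna', 'Peter']
```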
cont. ..
C. Unstructured data:- is information that either does not have a
predefined data model or is not organized in a pre-defined manner.

 Unstructured data is not organized in rows and columns.

 Unstructured data is qualitative data such as audio files, video files,


image files, text files, NoSQL databases, free-text descriptions, etc.

Figure 2.2 Data types from a data analytics perspective


iii. Metadata:- is simply defined as data about data.
 It provides additional information about a specific set of data.

 For example, in a set of photographs, metadata describes when and where the


photos were taken; it provides fields for dates and locations
which, by themselves, can be considered structured data.
 For this reason, metadata is frequently used by big data
solutions for initial analysis.
 Metadata is not a separate data structure, but it is one of the most
important elements of big data analysis and big data solutions.
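The photograph example can be sketched as follows (all field names and values are hypothetical illustrations):

```python
# A hypothetical sketch of metadata as "data about data" for a photo file.
# The photo's pixel bytes are the data; the fields below describe that data.
photo_bytes = b"\xff\xd8"   # the raw (unstructured) image data, truncated here

metadata = {                 # structured fields describing the photo
    "date_taken": "2020-01-15",
    "location": "Addis Ababa",
    "width": 1920,
    "height": 1080,
}

# Because metadata fields such as dates are structured, they can be
# queried directly -- e.g. select photos taken in 2020 -- which is why
# big data solutions often use metadata for initial analysis.
taken_in_2020 = metadata["date_taken"].startswith("2020")
print(taken_in_2020)   # True
```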
Data Value Chain
• The data value chain describes the information flow within a big data system as a
series of steps needed to generate value and useful insights from data.
• The Big Data Value Chain identifies the following key high-level activities:
 Data Acquisition:- is the process of gathering, filtering, and cleaning data before it is put into a data
warehouse or any other storage solution on which data analysis can be carried out.

 Data acquisition is one of the major big data challenges in terms of infrastructure requirements. Why?

Because the infrastructure is required to:


 support the acquisition of big data with low latency in capturing data and executing
queries;
 handle very high transaction volumes in a distributed environment;

 support flexible and dynamic data structures.

 Data Analysis:- making the acquired raw data amenable to use in decision-making as well as domain-
specific usage.

 It involves exploring, transforming, and modeling data with the goal of highlighting relevant data, and
synthesizing and extracting useful hidden information with high potential from a business point of view.
cont. ..
 Data Curation:- is the active management of data over its life cycle to ensure it
meets the necessary data quality requirements for effective usage.

 It includes activities such as content creation, selection,


classification, transformation, validation, and preservation.
 Data Storage:- is the persistence and management of data in a scalable way that
satisfies the needs of applications requiring fast access to the data.

 Data Usage:- covers the data-driven business activities that need access to data, its
analysis, and the tools needed to integrate the data analysis within the business activity.
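The five activities above can be sketched as a toy pipeline of functions (all names and data are hypothetical illustrations, not a real big data system):

```python
# A minimal sketch of the data value chain as a pipeline of functions.

def acquire():                 # Data Acquisition: gather and filter raw records
    raw = ["12", "7", "oops", "30"]
    return [r for r in raw if r.isdigit()]

def analyze(records):          # Data Analysis: transform into a usable form
    return [int(r) for r in records]

def curate(values):            # Data Curation: validate for quality (range check)
    return [v for v in values if 0 <= v <= 100]

storage = []                   # Data Storage: persist for fast access

def use(values):               # Data Usage: drive a business decision (an average)
    return sum(values) / len(values)

storage.extend(curate(analyze(acquire())))
print(use(storage))
```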
Basic concepts of big data

• Big data is the term for a collection of data sets so large and complex that

• it becomes difficult to process them using on-hand database management tools or traditional


data processing applications.

• It is characterized by the 3Vs (and more):

 Volume: large amounts of data (up to zettabytes)

 Velocity: data is live-streaming or in motion

 Variety: data comes in many different forms from diverse sources

 Veracity: can we trust the data? How accurate is it?

Figure 2.4 Characteristics of big data
Clustered Computing and Hadoop Ecosystem

Cluster computing:

• Big data clustering software combines the resources of many smaller machines,
seeking to provide a number of benefits:

 Resource pooling

 High availability

 Easy scalability
Hadoop and its Ecosystem
• Hadoop is an open-source framework intended to make interaction with big data easier.

It is a framework that allows the distributed processing of large datasets
across clusters of computers using simple programming models.
Characteristics of Hadoop
i. Economical:- its systems are highly economical.
ii. Reliable:- it stores copies of the data on different machines and is resistant to
hardware failure.
iii. Scalable/Accessible:- it is easily scalable, both horizontally and vertically.
iv. Flexible:- you can store as much structured and unstructured data as you need.
cont. ..
• Hadoop has an ecosystem that has evolved from its four core components:
 data management,
 access,
 processing, and
 storage.

• It comprises the following components, among many others:
 HDFS: Hadoop Distributed File System
 YARN: Yet Another Resource Negotiator
 MapReduce: programming-model-based data processing
 Spark: in-memory data processing
 Pig, Hive: query-based processing of data services
 HBase: NoSQL database
 Mahout, Spark MLlib: machine learning algorithm libraries
 Solr, Lucene: searching and indexing
 Zookeeper: cluster management
 Oozie: job scheduling
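MapReduce itself runs distributed across a cluster, but the map-then-reduce idea can be sketched on a single machine. Below is a word count, the canonical MapReduce example (a toy illustration, not Hadoop code):

```python
from collections import defaultdict

# A single-machine sketch of the MapReduce programming model: word count.

def map_phase(lines):
    # Map: emit a (word, 1) pair for every word in every line
    for line in lines:
        for word in line.lower().split():
            yield (word, 1)

def reduce_phase(pairs):
    # Reduce: sum the counts for each distinct word
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["big data needs big tools", "hadoop processes big data"]
print(reduce_phase(map_phase(lines)))
```

On a real cluster, the map tasks run in parallel on the machines holding each block of input, and the framework shuffles all pairs with the same key to the same reduce task.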
Big Data Life Cycle with Hadoop
There are different stages of big data processing; some of them are:

I. Ingesting/feeding data into the system:- data is ingested or transferred to Hadoop from

various sources such as relational databases, other systems, or local files.

II. Processing the data in storage:- the data is stored and processed.

 The data is stored in the distributed file system, HDFS, and in the NoSQL distributed database, HBase.

Spark and MapReduce perform the data processing.
cont. ..

III. Computing and analyzing data:- data is analyzed by processing frameworks such as Pig,
Hive, and Impala.

 Pig converts the data using map and reduce and then analyzes it.
Hive is also based on map and reduce programming and is most suitable
for structured data.
IV. Visualizing the results:- performed by tools such as Hue and Cloudera Search.

 In this stage, the analyzed data can be accessed by users.
