Data Manipulation With Pandas
Transforming DataFrames
1. Introducing DataFrames
Hi, I'm Richie. I'll be your tour guide through the world of pandas.
pandas is a Python package for data manipulation. It can also be used for data
visualization; we'll get to that in Chapter 4.
3. Course outline
We'll start by talking about DataFrames, which form the core of pandas. In chapter 2,
we'll discuss aggregating data to gather insights. In chapter 3, you'll learn all about
slicing and indexing to subset DataFrames. Finally, you'll visualize your data, deal with
missing data, and read data into a DataFrame. Let's dive in.
pandas is built on top of two essential Python packages, NumPy and Matplotlib. NumPy
provides multidimensional array objects for easy data manipulation that pandas uses to
store data, and Matplotlib has powerful data visualization capabilities that pandas takes
advantage of.
5. pandas is popular
pandas has millions of users, with PyPI recording about 14 million downloads in
December 2019. This represents almost the entire Python data science community!
https://pypistats.org/packages/pandas
6. Rectangular data
There are several ways to store data for analysis, but rectangular data, sometimes
called "tabular data," is the most common form. In this example, with dogs, each
observation, or each dog, is a row, and each variable, or each dog property, is a
column. pandas is designed to work with rectangular data like this.
7. pandas DataFrames
The info method displays the names of columns, the data types they contain, and
whether they have any missing values.
A DataFrame's shape attribute contains a tuple that holds the number of rows followed
by the number of columns. Since this is an attribute instead of a method, you write it
without parentheses.
The describe method computes some summary statistics for numerical columns, like
mean and median. "count" is the number of non-missing values in each column.
describe is good for a quick overview of numeric variables, but if you want more control,
you'll see how to perform more specific calculations later in the course.
The other two components of a DataFrame are labels for columns and rows. The
columns attribute contains column names, and the index attribute contains row numbers
or row names. Be careful, since row labels are stored in dot-index, not in dot-rows.
Notice that these are Index objects, which we'll cover in Chapter 3. This allows for
flexibility in labels. For example, the dogs data uses row numbers, but row names are
also possible.
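Here's a minimal sketch of these components in action; the dogs DataFrame and its
values are illustrative stand-ins for the course dataset.
import pandas as pd

dogs = pd.DataFrame({
    "name": ["Bella", "Charlie", "Lucy"],
    "breed": ["Labrador", "Poodle", "Chow Chow"],
    "height_cm": [56, 43, 46],
    "weight_kg": [25, 23, 22],
})

dogs.info()             # column names, dtypes, and non-missing counts
print(dogs.shape)       # (rows, columns); an attribute, so no parentheses
print(dogs.describe())  # summary statistics for numeric columns
print(dogs.columns)     # Index object of column names
print(dogs.index)       # Index object of row labels (row numbers here)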
Python has a semi-official philosophy on how to write good code called The Zen of
Python. One suggestion is that, for any given programming problem, there should be
one obvious way to do it. As you go through this course, bear in mind that pandas deliberately
doesn't follow this philosophy. Instead, there are often multiple ways to solve a problem,
leaving you to choose the best. In this respect, pandas is like a Swiss Army Knife, giving
you a variety of tools, making it incredibly powerful, but more difficult to learn. In this
course, we aim for a more streamlined approach to pandas, only covering the most
important ways of doing things.
https://www.python.org/dev/peps/pep-0020/
Inspecting a DataFrame
When you get a new DataFrame to work with, the first thing you need to do is explore it
and see what it contains. There are several useful methods and attributes for this.
.head() returns the first few rows (the “head” of the DataFrame).
.info() shows information on each of the columns, such as the data type and number
of missing values.
.shape returns the number of rows and columns of the DataFrame.
.describe() calculates a few summary statistics for each column.
You can usually think of indexes as a list of strings or numbers, though the
pandas Index data type allows for more sophisticated options. (These will be covered
later in the course.)
homelessness is available.
Sorting and subsetting
1. Sorting and subsetting
In this video, we'll cover the two simplest and possibly most important ways to find
interesting parts of your DataFrame: sorting and subsetting.
2. Sorting
The first thing you can do is change the order of the rows by sorting them so that the
most interesting data is at the top of the DataFrame. You can sort rows using the
sort_values method, passing in a column name that you want to sort by. For example,
when we apply sort_values on the weight_kg column of the dogs DataFrame, we get
the lightest dog at the top, Stella the Chihuahua, and the heaviest dog at the bottom,
Bernie the Saint Bernard.
Setting the ascending argument to False will sort the data the other way around, from
heaviest dog to lightest dog.
You can also sort by multiple variables by passing a list of column names to
sort_values. To change the direction each variable is sorted in, pass a list to the
ascending argument specifying a direction for each column. Now, Charlie, Lucy,
and Bella are ordered from tallest to shortest.
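As a sketch, continuing with the illustrative dogs DataFrame (the exact column
pairing here is assumed):
# Sort by one column, lightest dog first
dogs.sort_values("weight_kg")

# Sort the other way around
dogs.sort_values("weight_kg", ascending=False)

# Sort by multiple columns, with a direction for each
dogs.sort_values(["weight_kg", "height_cm"], ascending=[True, False])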
6. Subsetting columns
We may want to zoom in on just one column. We can do this using the name of the
DataFrame, followed by square brackets with a column name inside. Here, we can look
at just the name column.
To select multiple columns, you need two pairs of square brackets. In this code, the
inner and outer square brackets are performing different tasks. The outer square
brackets are responsible for subsetting the DataFrame, and the inner square brackets
are creating a list of column names to subset. This means you could provide a separate
list of column names as a variable and then use that list to perform the same subsetting.
Usually, it's easier to do in one line.
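A sketch of both forms of column subsetting:
# Select a single column (a Series)
dogs["name"]

# Select multiple columns: outer brackets subset, inner brackets build a list
dogs[["breed", "height_cm"]]

# The list of names can also be a separate variable
cols_to_subset = ["breed", "height_cm"]
dogs[cols_to_subset]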
8. Subsetting rows
There are lots of different ways to subset rows. The most common way to do this is by
creating a logical condition to filter against. For example, let's find all the dogs whose
height is greater than 50 centimeters. Now we have a True or False value for every row.
9. Subsetting rows
We can use the logical condition inside of square brackets to subset the rows we're
interested in to get all of the dogs taller than 50 centimeters.
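In code, the two steps look like this:
# A logical condition: True or False for every row
dogs["height_cm"] > 50

# Use the condition inside square brackets to keep the matching rows
dogs[dogs["height_cm"] > 50]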
10. Subsetting based on text data
We can also subset rows based on text data. Here, we use the double equal sign in the
logical condition to filter the dogs that are Labradors.
We can also subset based on dates. Here, we filter all the dogs born before 2015.
Notice that the dates are in quotes and are written as year then month, then day. This is
the international standard date format.
To subset the rows that meet multiple conditions, you can combine conditions using
logical operators, such as the "and" operator seen here. This means that only rows that
meet both of these conditions will be subsetted. You could also do this in one line of
code, but you'll also need to add parentheses around each condition.
If you want to filter on multiple values of a categorical variable, the easiest way is to use
the isin method. This takes in a list of values to filter for. Here, we check if the color of a
dog is black or brown, and use this condition to subset the data.
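A sketch of these filters; the date_of_birth and color column names are assumed for
illustration:
# Filter on text data
dogs[dogs["breed"] == "Labrador"]

# Filter on dates, written as year, then month, then day
dogs[dogs["date_of_birth"] < "2015-01-01"]

# Combine conditions with &; each condition needs its own parentheses
dogs[(dogs["breed"] == "Labrador") & (dogs["color"] == "Brown")]

# Filter on multiple values of a categorical variable with isin
dogs[dogs["color"].isin(["Black", "Brown"])]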
Sorting rows
Finding interesting bits of data in a DataFrame is often easier if you change the order of
the rows. You can sort the rows by passing a column name to .sort_values().
In cases where rows have the same value (this is common if you sort on a categorical
variable), you may wish to break the ties by sorting on another column. You can sort on
multiple columns in this way by passing a list of column names.
Sort on …           Syntax
one column          df.sort_values("breed")
multiple columns    df.sort_values(["breed", "weight_kg"])
1
Sort homelessness by the number of homeless individuals, from smallest to largest,
and save this as homelessness_ind.
Print the head of the sorted DataFrame.
Subsetting columns
Hint
Use double square-brackets with column names in quotes to select multiple columns.
# Select the state and family_members columns
state_fam = homelessness[["state", "family_members"]]
Subsetting rows
A large part of data science is about finding which bits of your dataset are interesting.
One of the simplest techniques for this is to find a subset of rows that match some
criteria. This is sometimes known as filtering rows or selecting rows.
There are many ways to subset a DataFrame; perhaps the most common is to use
relational operators to return True or False for each row, then pass that inside square
brackets.
dogs[dogs["height_cm"] > 60]
dogs[dogs["color"] == "tan"]
You can filter for multiple conditions at once by using the "bitwise and" operator, &.
dogs[(dogs["height_cm"] > 60) & (dogs["color"] == "tan")]
homelessness is available and pandas is loaded as pd.
Instructions 1/3
1
Filter homelessness for cases where the number of individuals is greater than ten
thousand, assigning to ind_gt_10k. View the printed result.
# Filter for rows where individuals is greater than 10000
ind_gt_10k = homelessness[homelessness["individuals"] > 10000]
# See the result
print(ind_gt_10k)
2
Filter homelessness for cases where the USA Census region is "Mountain",
assigning to mountain_reg. View the printed result.
3
Filter homelessness for cases where the number of family_members is less than
one thousand and the region is "Pacific", assigning to fam_lt_1k_pac. View the
printed result.
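A sketch of solutions for steps 2 and 3, assuming the census region lives in a column
named region:
# Filter for rows where region is Mountain
mountain_reg = homelessness[homelessness["region"] == "Mountain"]
print(mountain_reg)

# Filter for rows where family_members is less than 1000 and region is Pacific
fam_lt_1k_pac = homelessness[(homelessness["family_members"] < 1000)
                             & (homelessness["region"] == "Pacific")]
print(fam_lt_1k_pac)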
Subsetting rows by categorical variables
Hint
The solution takes the form df[(df["col"] == "value_1") | (df["col"] == "value_2")].
2
Filter homelessness for cases where the USA census state is in the list of Mojave
states, canu, assigning to mojave_homelessness. View the printed result.
Hint
The solution takes the form df[df["col"].isin(["value_1", "value_2"])].
New Columns
1. New columns
In the last lesson, you saw how to subset and sort a DataFrame to extract interesting
bits. However, often when you first receive a DataFrame, the contents aren't exactly
what you want. You may have to add new columns derived from existing columns.
Creating and adding new columns can go by many names, including mutating a
DataFrame, transforming a DataFrame, and feature engineering. Let's say we want to
add a new column to our DataFrame that has each dog's height in meters instead of
centimeters. On the left-hand side of the equals, we use square brackets with the name
of the new column we want to create. On the right-hand side, we have the calculation.
Notice that both the existing column and the new column we just created are in the
DataFrame.
Let's see what the results are if we calculate the body mass index, or BMI, of these
dogs. BMI is usually calculated by taking a person's weight in kilograms and dividing it
by their height in meters, squared. Instead of doing this with people, we'll try it out with
dogs. Again, the new column is on the left-hand side of the equals, but this time, our
calculation involves two columns.
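Both additions as code:
# New column: height in meters, derived from the centimeters column
dogs["height_m"] = dogs["height_cm"] / 100

# New column from two existing columns: BMI = weight (kg) / height (m) squared
dogs["bmi"] = dogs["weight_kg"] / dogs["height_m"] ** 2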
4. Multiple manipulations
The real power of pandas comes in when you combine all the skills you've learned so
far. Let's figure out the names of skinny, tall dogs. First, to define the skinny dogs, we
take the subset of the dogs who have a BMI of under 100. Next, we sort the result in
descending order of height to get the tallest skinny dogs at the top. Finally, we keep
only the columns we're interested in. Here, you can see that Max is the tallest dog with
a BMI of under 100.
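A sketch of the three-step pipeline:
# Keep only the skinny dogs (BMI under 100)
bmi_lt_100 = dogs[dogs["bmi"] < 100]

# Sort from tallest to shortest
bmi_lt_100_height = bmi_lt_100.sort_values("height_cm", ascending=False)

# Keep just the columns of interest
bmi_lt_100_height[["name", "height_cm", "bmi"]]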
You can create new columns from scratch, but it is also common to derive them from
other columns, for example, by adding columns together or by changing their units.
In this exercise, you'll answer the question, "Which state has the highest number of
homeless individuals per 10,000 people in the state?" Combine your new pandas skills
to find out.
Aggregating DataFrames
Summary statistics
1. Summary statistics
Hi, I'm Maggie, and I'll be the other instructor for this course. In the first chapter, you
learned about DataFrames, how to sort and subset them, and how to add new columns
to them. In this chapter, we'll talk about aggregating data, starting with summary
statistics. Summary statistics, as follows from their name, are numbers that summarize
and tell you about your dataset.
One of the most common summary statistics for numeric data is the mean, which is one
way of telling you where the "center" of your data is. You can calculate the mean of a
column by selecting the column with square brackets and calling dot-mean. There are
lots of other summary statistics that you can compute on columns, like median and
mode, minimum and maximum, and variance and standard deviation. You can also take
sums and calculate quantiles.
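A quick sketch of these summaries on a numeric column:
dogs["height_cm"].mean()
dogs["height_cm"].median()
dogs["height_cm"].mode()
dogs["height_cm"].min()
dogs["height_cm"].max()
dogs["height_cm"].var()
dogs["height_cm"].std()
dogs["height_cm"].sum()
dogs["height_cm"].quantile(0.5)  # quantiles, e.g. the 50th percentile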
3. Summarizing dates
You can also get summary statistics for date columns. For example, we can find the
oldest dog's date of birth by taking the minimum of the date of birth column. Similarly,
we can take the maximum to see that the youngest dog was born in 2018.
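For example, assuming a date_of_birth column:
# Oldest dog's date of birth
dogs["date_of_birth"].min()

# Youngest dog's date of birth
dogs["date_of_birth"].max()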
The aggregate, or agg, method allows you to compute custom summary statistics. Here,
we create a function called pct30 that computes the thirtieth percentile of a DataFrame
column. Don't worry if this code doesn't make sense to you -- just know that the function
takes in a column and spits out the column's thirtieth percentile. Now we can subset the
weight column and call dot-agg, passing in the name of our function, pct30. It gives us
the thirtieth percentile of the dogs' weights.
agg can also be used on more than one column. By selecting the weight and height
columns before calling agg, we get the thirtieth percentile for both columns.
6. Multiple summaries
We can also use agg to get multiple summary statistics at once. Here's another function
that computes the fortieth percentile called pct40. We can pass a list of functions into
agg, in this case, pct30 and pct40, which will return the thirtieth and fortieth percentiles
of the dogs' weights.
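The custom-percentile examples as code:
# Custom summary statistics: the 30th and 40th percentiles of a column
def pct30(column):
    return column.quantile(0.3)

def pct40(column):
    return column.quantile(0.4)

# agg on one column
dogs["weight_kg"].agg(pct30)

# agg on more than one column
dogs[["weight_kg", "height_cm"]].agg(pct30)

# Pass a list of functions for multiple summaries at once
dogs["weight_kg"].agg([pct30, pct40])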
7. Cumulative sum
pandas also has methods for computing cumulative statistics, for example, the
cumulative sum. Calling cumsum on a column returns not just one number, but a
number for each row of the DataFrame. The first number returned, or the number in the
zeroth index, is the first dog's weight. The next number is the sum of the first and
second dogs' weights. The third number is the sum of the first, second, and third dogs'
weights, and so on. The last number is the sum of all the dogs' weights.
8. Cumulative statistics
pandas also has methods for other cumulative statistics, such as the cumulative
maximum, cumulative minimum, and the cumulative product. These all return an entire
column of a DataFrame, rather than a single number.
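The cumulative methods as a sketch:
# One value per row; the last value is the total
dogs["weight_kg"].cumsum()

# Other cumulative statistics follow the same pattern
dogs["weight_kg"].cummax()
dogs["weight_kg"].cummin()
dogs["weight_kg"].cumprod()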
9. Walmart
In this chapter, you'll be working with data on Walmart stores, which is a chain of
department stores in the US. The dataset contains weekly sales in US dollars in various
stores. Each store has an ID number and a specific store type. The sales are also
separated by department ID. Along with weekly sales, there is information about
whether it was a holiday week or not, the average temperature during the week in that
location, the average fuel price in dollars per liter that week, and the national
unemployment rate that week.
Explore your new DataFrame first by printing the first few rows of the sales DataFrame.
Print information about the columns in sales.
Print the mean of the weekly_sales column.
Print the median of the weekly_sales column.
# Print the head of the sales DataFrame
print(sales.head())
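The remaining steps follow the same pattern; a sketch:
# Print information about the columns in sales
sales.info()

# Print the mean of weekly_sales
print(sales["weekly_sales"].mean())

# Print the median of weekly_sales
print(sales["weekly_sales"].median())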
Summarizing dates
Summary statistics can also be calculated on date columns that have values with the
data type datetime64. Some summary statistics — like mean — don't make a ton of
sense on dates, but others are super helpful, for example, minimum and maximum,
which allow you to see what time range your data covers.
sales is available and pandas is loaded as pd.
Efficient summaries
While pandas and NumPy have tons of functions, sometimes, you may need a different
function to summarize your data.
The .agg() method allows you to apply your own custom functions to a DataFrame, as
well as apply functions to more than one column of a DataFrame at once, making your
aggregations super-efficient. For example,
df['column'].agg(function)
In the custom function for this exercise, "IQR" is short for inter-quartile range, which is
the 75th percentile minus the 25th percentile. It's an alternative to standard deviation
that is helpful if your data contains outliers.
1
Use the custom iqr function defined for you along with .agg() to print the IQR of
the temperature_c column of sales.
# A custom IQR function
def iqr(column):
    return column.quantile(0.75) - column.quantile(0.25)

# Print IQR of the temperature_c column
print(sales["temperature_c"].agg(iqr))
2
Update the column selection to use the custom iqr function with .agg() to print the
IQR of temperature_c, fuel_price_usd_per_l, and unemployment, in that order.
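A sketch of that update:
# agg the same function over three columns at once
print(sales[["temperature_c", "fuel_price_usd_per_l", "unemployment"]].agg(iqr))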
Cumulative statistics
Cumulative statistics can also be helpful in tracking summary statistics over time. In this
exercise, you'll calculate the cumulative sum and cumulative max of a department's
weekly sales, which will allow you to identify what the total sales were so far as well as
what the highest weekly sales were so far.
A DataFrame called sales_1_1 has been created for you, which contains the sales data
for department 1 of store 1. pandas is loaded as pd.
Counting
1. Counting
So far, in this chapter, you've learned how to summarize numeric variables. In this
video, you'll learn how to summarize categorical data using counting.
Counting dogs is no easy task when they're running around the park. It's hard to keep
track of who you have and haven't counted!
3. Vet visits
Here's a DataFrame that contains vet visits. The vet's office wants to know how many
dogs of each breed have visited their office. However, some dogs have been to the vet
more than once, like Max and Stella, so we can't just count the number of each breed in
the breed column.
Let's try to fix this by removing rows that contain a dog name already listed earlier in the
dataset; in other words, we'll extract each dog name from the dataset only once.
We can do this using the drop_duplicates method. It takes an argument, subset, which
is the column we want to find our duplicates based on - in this case, we want all the
unique names. Now we have a list of dogs where each one appears once. We have
Max the Chow Chow, but where did Max the Labrador go? Because we have two
different dogs with the same name, we'll need to consider more than just name when
dropping duplicates.
Since Max and Max are different breeds, we can drop the rows with pairs of name and
breed listed earlier in the dataset. To base our duplicate dropping on multiple columns,
we can pass a list of column names to the subset argument, in this case, name and
breed. Now both Maxes have been included, and we can start counting.
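A sketch, assuming the visits live in a DataFrame named vet_visits:
# Drop rows that repeat a dog's name
unique_dogs = vet_visits.drop_duplicates(subset="name")

# Drop rows only when the name and breed pair repeats
unique_dogs = vet_visits.drop_duplicates(subset=["name", "breed"])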
6. Easy as 1, 2, 3
To count the dogs of each breed, we'll subset the breed column and use the
value_counts method. We can also use the sort argument to get the breeds with the
biggest counts on top.
7. Proportions
The normalize argument can be used to turn the counts into proportions of the total.
25% of the dogs that go to this vet are Labradors.
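Both counts as code:
# Count dogs of each breed, biggest counts on top
unique_dogs["breed"].value_counts(sort=True)

# Turn the counts into proportions of the total
unique_dogs["breed"].value_counts(normalize=True)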
Dropping duplicates
Removing duplicates is an essential skill to get accurate counts because often, you
don't want to count the same thing multiple times. In this exercise, you'll create some
new DataFrames using unique values from sales.
sales is available and pandas is imported as pd.
Instructions
Remove rows of sales with duplicate pairs of store and type and save
as store_types and print the head.
Remove rows of sales with duplicate pairs of store and department and save
as store_depts and print the head.
Subset the rows that are holiday weeks using the is_holiday column, and drop the
duplicate dates, saving as holiday_dates.
Select the date column of holiday_dates, and print.
# Subset the rows where is_holiday is True and drop duplicate dates
holiday_dates = sales[sales["is_holiday"] == True].drop_duplicates("date")
Grouped summary statistics
1. Grouped summary statistics
So far, you've been calculating summary statistics for all rows of a dataset, but
summary statistics can be useful to compare different groups.
2. Summaries by group
While computing summary statistics of entire columns may be useful, you can gain
many insights from summaries of individual groups. For example, does one color of dog
weigh more than another on average? Are female dogs taller than males? You can
already answer these questions with what you've learned so far! We can subset the
dogs into groups based on their color, and take the mean of each. But that's a lot of
work, and the duplicated code means you can easily introduce copy and paste bugs.
3. Grouped summaries
That's where the groupby method comes in. We can group by the color variable, select
the weight column, and take the mean. This will give us the mean weight for each dog
color. This was just one line of code compared to the five we had to write before to get
the same results.
Just like with ungrouped summary statistics, we can use the agg method to get multiple
statistics. Here, we pass a list of functions into agg after grouping by color. This gives us
the minimum, maximum, and sum of the different colored dogs' weights.
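Both grouped summaries as code:
# Mean weight for each color, in one line
dogs.groupby("color")["weight_kg"].mean()

# Multiple grouped statistics with agg
dogs.groupby("color")["weight_kg"].agg([min, max, sum])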
You can also group by multiple columns and calculate summary statistics. Here, we
group by color and breed, select the weight column and take the mean. This gives us
the mean weight of each breed of each color.
You can also group by multiple columns and aggregate by multiple columns.
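For example:
# Group by two columns and summarize one
dogs.groupby(["color", "breed"])["weight_kg"].mean()

# Group by two columns and summarize two
dogs.groupby(["color", "breed"])[["weight_kg", "height_cm"]].mean()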
2
Group sales by "type" and "is_holiday", take the sum of weekly_sales, and
store as sales_by_type_is_holiday.
# For each store type, aggregate weekly_sales: get min, max, mean, and median
sales_stats = sales.groupby("type")["weekly_sales"].agg([min, max, np.mean, np.median])

# Print sales_stats
print(sales_stats)
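The definition of unemp_fuel_stats is elided above; a plausible sketch, assuming the
same four statistics over the unemployment and fuel price columns:
# For each store type, aggregate unemployment and fuel_price_usd_per_l
unemp_fuel_stats = sales.groupby("type")[["unemployment", "fuel_price_usd_per_l"]].agg(
    [min, max, np.mean, np.median])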
# Print unemp_fuel_stats
print(unemp_fuel_stats)
Pivot Tables
1. Pivot tables
Pivot tables are another way of calculating grouped summary statistics. If you've ever
used a spreadsheet, chances are you've used a pivot table. Let's see how to create
pivot tables in pandas.
In the last lesson, we grouped the dogs by color and calculated their mean weights. We
can do the same thing using the pivot_table method. The "values" argument is the
column that you want to summarize, and the index column is the column that you want
to group by. By default, pivot_table takes the mean value for each group.
3. Different statistics
If we want a different summary statistic, we can use the aggfunc argument and pass it a
function. Here, we take the median for each dog color using NumPy's median function.
4. Multiple statistics
To get multiple summary statistics at a time, we can pass a list of functions to the
aggfunc argument. Here, we get the mean and median for each dog color.
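These pivot tables as code:
import numpy as np

# Mean weight for each color (mean is the default statistic)
dogs.pivot_table(values="weight_kg", index="color")

# A different statistic via aggfunc
dogs.pivot_table(values="weight_kg", index="color", aggfunc=np.median)

# Multiple statistics at once
dogs.pivot_table(values="weight_kg", index="color", aggfunc=[np.mean, np.median])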
5. Pivot on two variables
You also previously computed the mean weight grouped by two variables: color and
breed. We can also do this using the pivot_table method. To group by two variables, we
can pass a second variable name into the columns argument. While the result looks a
little different than what we had before, it contains the same numbers. There are NaNs,
or missing values, because there are no black Chihuahuas or gray Labradors in our
dataset, for example.
Instead of having lots of missing values in our pivot table, we can have them filled in
using the fill_value argument. Here, all of the NaNs get filled in with zeros.
If we set the margins argument to True, the last row and last column of the pivot table
contain the mean of all the values in the column or row, not including the missing values
that were filled in with 0s. For example, in the last row of the Labrador column, we can
see that the mean weight of the Labradors is 26 kilograms. In the last column of the
Brown row, the mean weight of the Brown dogs is 24 kilograms. The value in the bottom
right, in the last row and last column, is the mean weight of all the dogs in the dataset.
Using margins equals True allows us to see a summary statistic for multiple levels of the
dataset: the entire dataset, grouped by one variable, by another variable, and by two
variables.
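A sketch of pivoting on two variables, with the fill_value and margins refinements:
# Group by color (rows) and breed (columns)
dogs.pivot_table(values="weight_kg", index="color", columns="breed")

# Fill the NaNs with zeros
dogs.pivot_table(values="weight_kg", index="color", columns="breed", fill_value=0)

# Add overall means in the last row and column
dogs.pivot_table(values="weight_kg", index="color", columns="breed",
                 fill_value=0, margins=True)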
In pandas, pivot tables are essentially another way of performing grouped calculations.
That is, the .pivot_table() method is an alternative to .groupby().
In this exercise, you'll perform calculations using .pivot_table() to replicate the
calculations you performed in the last lesson using .groupby().
sales is available and pandas is imported as pd.
Instructions 1/3
1
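The definition of mean_sales_by_type is elided above; a sketch of step 1:
# Pivot for mean weekly_sales for each store type
mean_sales_by_type = sales.pivot_table(values="weekly_sales", index="type")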
# Print mean_sales_by_type
print(mean_sales_by_type)
# Import NumPy as np
import numpy as np
# Pivot for mean and median weekly_sales for each store type
mean_med_sales_by_type = sales.pivot_table(values="weekly_sales", index="type",
                                           aggfunc=[np.mean, np.median])

# Print mean_med_sales_by_type
print(mean_med_sales_by_type)
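The definition of mean_sales_by_type_holiday is also elided; a plausible sketch,
assuming is_holiday goes in the columns:
# Pivot for mean weekly_sales by store type and holiday status
mean_sales_by_type_holiday = sales.pivot_table(values="weekly_sales",
                                               index="type", columns="is_holiday")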
# Print mean_sales_by_type_holiday
print(mean_sales_by_type_holiday)
In this exercise, you'll practice using the fill_value and margins arguments to up your
pivot table skills, which will help you crunch numbers more efficiently!
1
Print the mean weekly_sales by department and type, filling in any missing
values with 0 and summing all rows and columns.
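A sketch of the solution:
# Mean weekly_sales by department and type, filling NaNs with 0
# and adding row/column totals with margins
print(sales.pivot_table(values="weekly_sales", index="department",
                        columns="type", fill_value=0, margins=True))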
Slicing and Indexing DataFrames
Explicit indexes
1. Explicit indexes
In chapter 1, you saw that DataFrames are composed of three parts: a NumPy array
for the data, and two indexes to store the row and column details.
2. The dog dataset, revisited
Recall that dot-columns contains an Index object of column names, and dot-index
contains an Index object of row numbers.
You can move a column from the body of the DataFrame to the index. This is called
"setting an index," and it uses the set_index method. Notice that the output has
changed slightly; in particular, a quick visual clue that name is now in the index is that
the index values are left-aligned rather than right-aligned.
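Setting and inspecting the index:
# Move the name column from the body of the DataFrame into the index
dogs_ind = dogs.set_index("name")
print(dogs_ind)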
5. Removing an index
To undo what you just did, you can reset the index - that is, you remove it. This is done
via .reset_index().
6. Dropping an index
reset_index has a drop argument that allows you to discard an index. Here, setting drop
to True entirely removes the dog names.
dogs.reset_index(drop=True)
You may be wondering why you should bother with indexes. The answer is that it
makes subsetting code cleaner. Consider this example of subsetting for the rows where
the dog is called Bella or Stella. It's a fairly tricky line of code for such a simple task.
Now, look at the equivalent when the names are in the index. DataFrames have a
subsetting method called "loc," which filters on index values. Here you simply pass the
dog names to loc as a list. Much easier!
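The two equivalent subsets side by side:
# Without an index: a fiddly logical condition
dogs[dogs["name"].isin(["Bella", "Stella"])]

# With name in the index: pass the labels straight to loc
dogs_ind.loc[["Bella", "Stella"]]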
The values in the index don't need to be unique. Here, there are two Labradors in the
index.
Now, if you subset on "Labrador" using loc, all the Labrador data is returned.
You can include multiple columns in the index by passing a list of column names to
set_index. Here, breed and color are included. These are called multi-level indexes, or
hierarchical indexes: the terms are synonymous. There is an implication here that the
inner level of index, in this case, color, is nested inside the outer level, breed.
11. Subset the outer level with a list
To take a subset of rows at the outer level index, you pass a list of index values to loc.
Here, the list contains Labrador and Chihuahua, and the resulting subset contains all
dogs from both breeds.
To subset on inner levels, you need to pass a list of tuples. Here, the first tuple specifies
Labrador at the outer level and Brown at the inner level. The resulting rows have to
match all conditions from a tuple. For example, the black Labrador wasn't returned
because the brown condition wasn't matched.
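A sketch of multi-level indexing and subsetting; the second tuple is illustrative:
# Multi-level index: breed on the outer level, color on the inner level
dogs_ind = dogs.set_index(["breed", "color"])

# Subset the outer level with a list of values
dogs_ind.loc[["Labrador", "Chihuahua"]]

# Subset inner levels with a list of (outer, inner) tuples
dogs_ind.loc[[("Labrador", "Brown"), ("Chihuahua", "Tan")]]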
In chapter 1, you saw how to sort the rows of a DataFrame using sort_values. You can
also sort by index values using sort_index. By default, it sorts all index levels from outer
to inner, in ascending order.
You can control the sorting by passing lists to the level and ascending arguments.
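For example:
# Sort all index levels, outer to inner, ascending (the default)
dogs_ind.sort_index()

# Control which levels are sorted, and in which direction
dogs_ind.sort_index(level=["color", "breed"], ascending=[True, False])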
Indexes are controversial. Although they simplify subsetting code, there are some
downsides. Index values are just data. Storing data in multiple forms makes it harder to
think about. There is a concept called "tidy data," where data is stored in tabular form -
like a DataFrame. Each row contains a single observation, and each variable is stored
in its own column. Indexes violate the last rule since index values don't get their own
column. In pandas, the syntax for working with indexes is different from the syntax for
working with columns. By using two syntaxes, your code is more complicated, which
can result in more bugs. If you decide you don't want to use indexes, that's perfectly
reasonable. However, it's useful to know how they work for cases when you need to
read other people's code.
In this chapter, you'll work with a monthly time series of air temperatures in cities around
the world.
Look at temperatures.
Set the index of temperatures to "city", assigning to temperatures_ind.
Look at temperatures_ind. How is it different from temperatures?
Reset the index of temperatures_ind, keeping its contents.
Reset the index of temperatures_ind, dropping its contents.
# Look at temperatures
print(temperatures)
# Look at temperatures_ind
print(temperatures_ind)
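A sketch of the remaining steps:
# Set the index of temperatures to city
temperatures_ind = temperatures.set_index("city")

# Reset the index, keeping its contents
print(temperatures_ind.reset_index())

# Reset the index, dropping its contents
print(temperatures_ind.reset_index(drop=True))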
Create a list called cities that contains "Moscow" and "Saint Petersburg".
Use [] subsetting to filter temperatures for rows where the city column takes a value
in the cities list.
Use .loc[] subsetting to filter temperatures_ind for rows where the city is in
the cities list.
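A sketch:
# A list of cities to subset on
cities = ["Moscow", "Saint Petersburg"]

# Subset temperatures using square brackets
print(temperatures[temperatures["city"].isin(cities)])

# Subset temperatures_ind using .loc[]
print(temperatures_ind.loc[cities])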
The benefit is that multi-level indexes make it more natural to reason about nested
categorical variables. For example, in a clinical trial, you might have control and
treatment groups. Then each test subject belongs to one or another group, and we can
say that a test subject is nested inside the treatment group. Similarly, in the temperature
dataset, the city is located in the country, so we can say a city is nested inside the
country.
The main downside is that the code for manipulating indexes is different from the code
for manipulating columns, so you have to learn two syntaxes and keep track of how
your data is represented.
Set the index of temperatures to the "country" and "city" columns, and assign this
to temperatures_ind.
Specify two country/city pairs to keep: "Brazil"/"Rio De
Janeiro" and "Pakistan"/"Lahore", assigning to rows_to_keep.
Print and subset temperatures_ind for rows_to_keep using .loc[].
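As code:
# Index temperatures by country and city
temperatures_ind = temperatures.set_index(["country", "city"])

# The (country, city) pairs to keep
rows_to_keep = [("Brazil", "Rio De Janeiro"), ("Pakistan", "Lahore")]

# Subset for rows to keep
print(temperatures_ind.loc[rows_to_keep])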
Slicing and subsetting with .loc and .iloc
2. Slicing lists
Here are the dog breeds, this time as a list. To slice the list, you pass first and last
positions separated by a colon into square brackets. Remember that Python positions
start from zero, so 2 refers to the third element, Chow Chow. Also remember that the
last position, 5, is not included in the slice, so we finish at Labrador, not Chihuahua. If
you want the slice to start from the beginning of the list, you can omit the zero. Here,
using colon-3 returns the first three elements. Slicing with colon on its own returns the
whole list.
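A sketch with an illustrative breeds list:
breeds = ["Labrador", "Poodle", "Chow Chow", "Schnauzer",
          "Labrador", "Chihuahua", "St. Bernard"]

breeds[2:5]  # positions 2, 3, 4: Chow Chow through Labrador; 5 is excluded
breeds[:3]   # the first three elements
breeds[:]    # the whole list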
You can also slice DataFrames, but first, you need to sort the index. Here, the dogs
dataset has been given a multi-level index of breed and color; then, the index is sorted
with sort_index.
To slice rows at the outer level of an index, you call loc, passing the first and last values
separated by a colon. The full dataset is shown on the right for comparison. There are
two differences compared to slicing lists. Rather than specifying row numbers, you
specify index values. Secondly, notice that the final value is included. Here, Poodle is
included in the results.
The same technique doesn't work on inner index levels. Here, trying to slice from Tan to
Grey returns an empty DataFrame instead of the six dogs we wanted. It's important to
understand the danger here. pandas doesn't throw an error to let you know that there is
a problem, so be careful when coding.
The correct approach to slicing at inner index levels is to pass the first and last positions
as tuples. Here, the first element to include is a tuple of Labrador and Brown.
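A sketch of both slices; the endpoint values are illustrative:
# Sort the multi-level index before slicing
dogs_srt = dogs.set_index(["breed", "color"]).sort_index()

# Outer-level slice: the final value is included
dogs_srt.loc["Chow Chow":"Poodle"]

# Inner-level slice: pass the first and last positions as tuples
dogs_srt.loc[("Labrador", "Brown"):("Schnauzer", "Grey")]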
7. Slicing columns
Since DataFrames are two-dimensional objects, you can also slice columns. You do this
by passing two arguments to loc.
The simplest case involves subsetting columns but keeping all rows. To do this, pass a
colon as the first argument to loc. As with slicing lists, a colon by itself means "keep
everything." The second argument takes column names as the first and last positions to
slice on.
8. Slice twice
You can slice on rows and columns at the same time: simply pass the appropriate slice
to each argument. Here, you see the previous two slices being performed in the same
line of code.
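For example, with illustrative column names:
# Keep all rows, slice the columns
dogs_srt.loc[:, "name":"height_cm"]

# Slice rows and columns at the same time
dogs_srt.loc[("Labrador", "Brown"):("Schnauzer", "Grey"), "name":"height_cm"]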
9. Dog days
You slice dates with the same syntax as other types. The first and last dates are passed
as strings.
One helpful feature is that you can slice by partial dates. Here, the first and last
positions are only specified as 2014 and 2016, with no month or day parts. pandas
interprets this as slicing from the start of 2014 to the end of 2016; that is, all dates in
2014, 2015, and 2016.
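A sketch, with illustrative dates and assuming date_of_birth has been parsed as a
datetime:
# Put the dates in the index and sort it
dogs_by_date = dogs.set_index("date_of_birth").sort_index()

# Slice with full dates passed as strings
dogs_by_date.loc["2014-08-25":"2016-09-16"]

# Slice with partial dates: all of 2014 through all of 2016
dogs_by_date.loc["2014":"2016"]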
You can also slice DataFrames by row or column number using the iloc method. This
uses a similar syntax to slicing lists, except that there are two arguments: one for rows
and one for columns. Notice that, like list slicing but unlike loc, the final values aren't
included in the slice. In this case, the fifth row and fourth column aren't included.
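For example:
# Rows 2 to 4 and columns 1 to 3; the final positions (5 and 4) are excluded
dogs.iloc[2:5, 1:4]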
You can only slice an index if the index is sorted (using .sort_index()).
To slice at the outer level, first and last can be strings.
To slice at inner levels, first and last should be tuples.
If you pass a single slice to .loc[], it will slice the rows.
pandas is loaded as pd. temperatures_ind has country and city in the index, and is
available.
Instructions
Use .loc[] slicing to subset rows from India, Hyderabad to Iraq, Baghdad.
Use .loc[] slicing to subset columns from date to avg_temp_c.
Slice in both directions at once from Hyderabad to Baghdad, and date to avg_temp_c.
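A sketch of all three slices, sorting the index first as noted above:
# Sort the index of temperatures_ind
temperatures_srt = temperatures_ind.sort_index()

# Subset rows from India, Hyderabad to Iraq, Baghdad
print(temperatures_srt.loc[("India", "Hyderabad"):("Iraq", "Baghdad")])

# Subset columns from date to avg_temp_c
print(temperatures_srt.loc[:, "date":"avg_temp_c"])

# Slice in both directions at once
print(temperatures_srt.loc[("India", "Hyderabad"):("Iraq", "Baghdad"),
                           "date":"avg_temp_c"])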
Use Boolean conditions, not .isin() or .loc[], and the full date "yyyy-mm-dd", to
subset temperatures for rows in 2010 and 2011 and print the results.
Set the index of temperatures to the date column and sort it.
Use .loc[] to subset temperatures_ind for rows in 2010 and 2011.
Use .loc[] to subset temperatures_ind for rows from Aug 2010 to Feb 2011.
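A sketch, assuming the date column has been parsed as datetimes:
# Boolean conditions with full dates
temperatures_bool = temperatures[(temperatures["date"] >= "2010-01-01")
                                 & (temperatures["date"] <= "2011-12-31")]
print(temperatures_bool)

# Set date as the index and sort it
temperatures_ind = temperatures.set_index("date").sort_index()

# Rows in 2010 and 2011, via partial date strings
print(temperatures_ind.loc["2010":"2011"])

# Rows from Aug 2010 to Feb 2011
print(temperatures_ind.loc["2010-08":"2011-02"])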