Spark DataFrames Project Exercise - Jupyter Notebook
For now, just answer the questions and complete the tasks below.
Use the walmart_stock.csv file to answer and complete the tasks below!
In [1]:
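The code for this cell is not shown in the export; a minimal sketch of the session setup the rest of the notebook assumes (the app name is arbitrary):

from pyspark.sql import SparkSession

# Start (or reuse) a local Spark session
spark = SparkSession.builder.appName('walmart_stock').getOrCreate()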
Load the Walmart Stock CSV File, have Spark infer the data types.
In [2]:
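A sketch of the load step, assuming walmart_stock.csv sits in the notebook's working directory:

# Read the CSV and let Spark infer the column types
df = spark.read.csv('walmart_stock.csv', header=True, inferSchema=True)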
In [3]:
df.columns
Out[3]:
In [5]:
df.printSchema()
root
In [8]:
In [10]:
df.describe().show()
Bonus Question!
There are too many decimal places for mean and stddev in the describe() dataframe. Format the numbers to show just two decimal places. Pay careful attention to the datatypes that .describe() returns; we didn't cover this exact formatting, but we covered something very similar. Check this link for a hint
(http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.Column.cast)
If you get stuck on this, don't worry, just view the solutions.
In [18]:
In [19]:
df.printSchema()
root
In [22]:
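One way to produce the formatted table below is to cast the string columns that describe() returns and apply format_number; a sketch of that approach (the exact column choice and aliases are assumptions):

from pyspark.sql.functions import format_number

result = df.describe()
result.select(result['summary'],
              format_number(result['Open'].cast('float'), 2).alias('Open'),
              format_number(result['High'].cast('float'), 2).alias('High'),
              format_number(result['Low'].cast('float'), 2).alias('Low'),
              format_number(result['Close'].cast('float'), 2).alias('Close'),
              format_number(result['Volume'].cast('int'), 0).alias('Volume')
              ).show()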
+-------+--------+--------+--------+--------+----------+
|summary|    Open|    High|     Low|   Close|    Volume|
+-------+--------+--------+--------+--------+----------+
|  count|1,258.00|1,258.00|1,258.00|1,258.00|     1,258|
+-------+--------+--------+--------+--------+----------+
Create a new dataframe with a column called HV Ratio that is the ratio of the High price to the Volume of stock traded for a day.
In [23]:
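A sketch of building the new dataframe with withColumn (the variable name is arbitrary):

# HV Ratio: the day's High price divided by the day's trading Volume
df_hv = df.withColumn('HV Ratio', df['High'] / df['Volume'])
df_hv.select('HV Ratio').show()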
+--------------------+
| HV Ratio|
+--------------------+
|4.819714653321546E-6|
|6.290848613094555E-6|
|4.669412994783916E-6|
|7.367338463826307E-6|
|8.915604778943901E-6|
|8.644477436914568E-6|
|9.351828421515645E-6|
| 8.29141562102703E-6|
|7.712212102001476E-6|
|7.071764823529412E-6|
|1.015495466386981E-5|
|6.576354146362592...|
| 5.90145296180676E-6|
|8.547679455011844E-6|
|8.420709512685392E-6|
|1.041448341728929...|
|8.316075414862431E-6|
|9.721183814992126E-6|
|8.029436027707578E-6|
|6.307432259386365E-6|
+--------------------+
What day had the Peak High in Price?
In [25]:
df.orderBy(df['High'].desc()).select(['Date']).head(1)[0]['Date']
Out[25]:
'2015-01-13'
What is the mean of the Close column?
In [26]:
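A sketch of the aggregation behind the output below, using pyspark.sql.functions.mean:

from pyspark.sql.functions import mean

# Average closing price over the whole dataset
df.select(mean('Close')).show()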
+-----------------+
| avg(Close)|
+-----------------+
|72.38844998012726|
+-----------------+
What is the max and min of the Volume column?
In [27]:
In [28]:
from pyspark.sql.functions import max, min
df.select(max('Volume'), min('Volume')).show()
+-----------+-----------+
|max(Volume)|min(Volume)|
+-----------+-----------+
| 80898100| 2094900|
+-----------+-----------+
How many days was the Close lower than 60 dollars?
In [29]:
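A sketch of a filter-and-count that yields the value shown below, assuming the task is to count days with a Close under 60 dollars:

# Number of trading days with a closing price below $60
df.filter(df['Close'] < 60).count()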
Out[29]:
81
What percentage of the time was the High greater than 80 dollars?
In [107]:
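A sketch of one way to compute that percentage (days with a High above 80, divided by total days, times 100):

# Percentage of trading days where the High exceeded $80
(df.filter(df['High'] > 80).count() / df.count()) * 100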
Out[107]:
9.141494435612083
What is the Pearson correlation between High and Volume?
Hint
(http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.DataFrameStatFunctions.corr)
In [31]:
from pyspark.sql.functions import corr
df.select(corr('High', 'Volume')).show()
+-------------------+
| corr(High, Volume)|
+-------------------+
|-0.3384326061737161|
+-------------------+
What is the max High per year?
In [32]:
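A sketch of the per-year aggregation behind the table below, assuming the year is pulled out of the Date column with pyspark.sql.functions.year:

from pyspark.sql.functions import year

# Add a Year column, then take the yearly maximum of the High price
year_df = df.withColumn('Year', year(df['Date']))
year_df.groupBy('Year').max('High').show()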
+----+---------+
|Year|max(High)|
+----+---------+
|2015|90.970001|
|2013|81.370003|
|2014|88.089996|
|2012|77.599998|
|2016|75.190002|
+----+---------+
What is the average Close for each Calendar Month?
In other words, across all the years, what is the average Close price for Jan, Feb, Mar, etc.? Your result will have a value for each of these months.
In [33]:
from pyspark.sql.functions import month
# Build month_df: df with a Month column extracted from Date (assumed setup)
month_df = df.withColumn('Month', month(df['Date']))
month_df = month_df.groupBy('Month').mean()
month_df = month_df.orderBy('Month')
month_df['Month', 'avg(Close)'].show()
+-----+-----------------+
|Month| avg(Close)|
+-----+-----------------+
| 1|71.44801958415842|
| 2| 71.306804443299|
| 3|71.77794377570092|
| 4|72.97361900952382|
| 5|72.30971688679247|
| 6| 72.4953774245283|
| 7|74.43971943925233|
| 8|73.02981855454546|
| 9|72.18411785294116|
| 10|71.57854545454543|
| 11| 72.1110893069307|
| 12|72.84792478301885|
+-----+-----------------+
Great Job!