{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"[pandas](http://pandas.pydata.org) provides high-level data structures and functions designed to make working with structured or tabular data fast, easy and expressive. The primary objects in pandas that we will be using are the `DataFrame`, a tabular, column-oriented data structure with both row and column labels, and the `Series`, a one-dimensional labeled array object.\n",
"\n",
"pandas blends the high-performance, array-computing ideas of NumPy with the flexible data manipulation capabilities of spreadsheets and relational databases. It provides sophisticated indexing functionality to make it easy to reshape, slice and perform aggregations.\n",
"While pandas adopts many coding idioms from NumPy, the most significant difference is that pandas is designed for working with tabular or heterogeneous data. NumPy, by contrast, is best suited for working with homogeneous numerical array data.\n",
"- [Data Structures](#structures)\n",
" - [Series](#series)\n",
" - [DataFrame](#dataframe)\n",
"- [Essential Functionality](#ess_func)\n",
" - [Reindexing](#reindexing)\n",
" - [Dropping Entries](#removing)\n",
" - [Indexing, Slicing and Filtering](#indexing)\n",
" - [Arithmetic Operations](#arithmetic)\n",
"- [Summarizing and Computing Descriptive Statistics](#sums)\n",
"- [Loading and storing data](#loading)\n",
" - [Text Format](#text) \n",
" - [Web Scraping](#web)\n",
"- [Data Cleaning and preperation](#cleaning)\n",
" - [Handling missing data](#missing)\n",
" - [Data transformation](#transformation)\n",
"\n",
"The common pandas import statment is shown below:"
"metadata": {},
"outputs": [],
"source": [
"# Common pandas import statement\n",
"import pandas as pd"
]
},
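{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you want to double-check which version of pandas you are running (behaviour occasionally differs between versions), a quick way is to print its version string:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Print the installed pandas version\n",
"pd.__version__"
]
},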
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Data Structures <a name=\"structures\"></a>\n",
"## Series <a name=\"series\"></a>\n",
"A Series is a one-dimensional array-like object containing a sequence of values and an associated array of data labels called its index.\n",
"\n",
"The easiest way to make a Series is from an array of data:"
]
},
{
"cell_type": "code",
"metadata": {},
"outputs": [],
"source": [
"data = pd.Series([4, 7, -5, 3])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now try printing out data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The string representation of a Series displayed interactively shows the index on the left and the values on the right. Because we didn't specify an index, the default one is simply integers 0 through N-1.\n",
"\n",
"You can output only the values of a Series using \n",
"```python\n",
"data.values\n",
"```\n",
"```python\n",
"data.index\n",
"```\n",
"Try it out below!"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can specify custom indices when intialising the Series"
"metadata": {},
"outputs": [],
"source": [
"data2 = pd.Series([4, 7, -5, 3], index=[\"a\", \"b\", \"c\", \"d\"])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data2"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now you can use these labels to access the data similar to a normal array"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data2[\"a\"]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Another way to think about Series is as a fixed-length ordered dictionary. Furthermore, you can actually define a Series in a similar manner to a dictionary"
"cities = {\"Glasgow\" : 599650, \"Edinburgh\" : 464990, \"Aberdeen\" : 196670, \"Dundee\" : 147710}\n",
"data3 = pd.Series(cities)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data3"
]
},
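{
"cell_type": "markdown",
"metadata": {},
"source": [
"A related detail, shown as a minimal sketch below (\"Stirling\" is just an illustrative key that is *not* in the dictionary): if you pass an explicit index together with a dict, pandas keeps only the matching keys, and labels with no matching key get a missing value (`NaN`)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Only \"Glasgow\" and \"Dundee\" exist in the dict; \"Stirling\" becomes NaN\n",
"pd.Series(cities, index=[\"Glasgow\", \"Dundee\", \"Stirling\"])"
]
},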
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can do arithmetic operations between Series similar to NumPy arrays. Even if you have 2 datasets with different data, arithmetic operations will be aligned according to their indices.\n",
"\n",
"Let's look at an example"
]
},
{
"cell_type": "code",
"metadata": {},
"outputs": [],
"source": [
"cities_uk = {\"Birmingham\" : 1092330, \"Leeds\": 751485, \"Glasgow\" : 599650,\n",
" \"Manchester\" : 503127, \"Edinburgh\" : 464990}\n",
"data4 = pd.Series(cities_uk)"
]
},
{
"cell_type": "code",
"metadata": {
"scrolled": true
},
"source": [
"data3 + data4"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Notice how some of the results are NaN? Well, that is because there were no instances of those cities within both of the datasets. You can usually extract NaNs from a Series with\n",
"data.isnull()\n",
"```"
]
},
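{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a small illustration (a minimal sketch using the `data3` and `data4` Series defined above), the cell below applies `isnull()` to the result of the addition to flag the missing entries:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Boolean mask: True wherever the aligned sum produced a NaN\n",
"(data3 + data4).isnull()"
]
},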
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A DataFrame represents a rectangular table of data and contains an ordered collection of columns, each of which can be a different value type. The DataFrame has both row and column index and can be thought of as a dict of Series all sharing the same index.\n",
"\n",
"The most common way to create a DataFrame is with dicts"
]
},
{
"cell_type": "code",
"data = {\"cities\" : [\"Glasgow\", \"Edinburgh\", \"Aberdeen\", \"Dundee\"],\n",
" \"population\" : [599650, 464990, 196670, 147710],\n",
" \"year\" : [2011, 2013, 2013, 2013]}\n",
"frame = pd.DataFrame(data)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Try printing it out"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"frame"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Jupyter Notebooks prints it out in a nice table but the basic version of this is also just as readable!\n",
"\n",
"Additionally you can also specify the order of columns during initialisation"
]
},
{
"cell_type": "code",
"frame2 = pd.DataFrame(data, columns=[\"year\", \"cities\", \"population\"])\n",
"frame2"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can retrieve a particular column from a DataFrame with\n",
"```python\n",
"frame[\"cities\"]\n",
"```\n",
"The result is going to be a Series.\n",
"Additionally, you can retrieve a row from the dataset using\n",
"frame.iloc[1]\n",
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"It is also possible to add and modify the columns of a DataFrame"
]
},
{
"cell_type": "code",
"metadata": {},
"outputs": [],
"source": [
"frame2[\"size\"] = 100"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"frame2"
]
},
{
"cell_type": "code",
"frame2[\"size\"] = [175, 264, 65.1, 60] # in km^2\n",
"frame2"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Similar to dicts, columns can be deleted using\n",
"```python\n",
"del frame2[\"size\"]\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"del frame2[\"size\"]\n",
"frame2"
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Another common way of creating DataFrames is from a nested dict of dicts:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data2 = {\"Glasgow\": {2011: 599650},\n",
" \"Edinburgh\": {2013:464990},\n",
"frame3 = pd.DataFrame(data2)\n",
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here is a table of different ways of initialising a DataFrame for your reference\n",
"\n",
"| Type | Notes |\n",
"| --- | --- |\n",
"| 2D ndarray | A matrix of data; passing optional row and column labels |\n",
"| dict of arrays, lists, or tuples | Each sequence becomes a column in the DataFrame; all sequences must be the same length |\n",
"| dict of Series | Each value becomes a column; indexes from each Series are unioned together to<br>form the result's row index if not explicit index is passed |\n",
"| dict of dicts | Each inner dict becomes a column; keys are unioned to form the row<br>index as in the \"dict of Series\" case |\n",
"| List of dicts or Series | Each item becomes a row in the DataFrame; union of dict keys or<br>Series indices becomes the DataFrame's column labels |\n",
"| List of lists or tuples | Treated as the \"2D ndarray\" case |"
]
},
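{
"cell_type": "markdown",
"metadata": {},
"source": [
"To illustrate one more row of the table, here is a minimal sketch of building a DataFrame from a list of dicts (the `records` name and the subset of city data below are just for illustration):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Each dict becomes a row; the union of the keys becomes the column labels\n",
"records = [{\"city\": \"Glasgow\", \"population\": 599650},\n",
"           {\"city\": \"Edinburgh\", \"population\": 464990, \"year\": 2013}]\n",
"pd.DataFrame(records)"
]
},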
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Essential Functionality <a name=\"ess_func\"></a>\n",
"In this section, we will go through the fundamental mechanics of interacting with the data contained in a Series or DaraFrame."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"With pandas it is easy to restructure the order of your columns and rows using the `reindex` function. Let's have a look at an example:"
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# first define a new Series\n",
"s = pd.Series([1, 2, 3, 4, 5], index=['a', 'b', 'c', 'd', 'e'])\n",
"s"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Now you can reshuffle the indices\n",
"s = s.reindex(['d', 'b', 'a', 'c', 'e'])\n",
"s"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Easy as that! This can also be extended for DataFrames, where you can reorder both the columns and indices at the same time!"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# first define a new Dataframe\n",
"data = np.reshape(np.arange(9), (3,3))\n",
"df = pd.DataFrame(data, index=[\"a\", \"b\", \"c\"],\n",
" columns=[\"Edinburgh\", \"Glasgow\", \"Aberdeen\"])\n",
"df"
]
},
{
"cell_type": "code",
"source": [
"# Now we can restructure it with reindex\n",
"df = df.reindex(index=[\"a\", \"d\", \"c\", \"b\"],\n",
" columns=[\"Aberdeen\", \"Glasgow\", \"Edinburgh\", \"Dundee\"])\n",
"df"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Notice something interesting? We can actually add new indices and columns using the `reindex` method. This results in the new slots in our table to be filled in with `NaN` values."
]
},
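{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you would rather not end up with `NaN` in the newly added slots, `reindex` also accepts a `fill_value` argument. Below is a minimal sketch that rebuilds the original DataFrame and repeats the same reindex with `fill_value=0`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Rebuild the original 3x3 frame, then reindex, filling new slots with 0 instead of NaN\n",
"df = pd.DataFrame(np.reshape(np.arange(9), (3,3)), index=[\"a\", \"b\", \"c\"],\n",
"                  columns=[\"Edinburgh\", \"Glasgow\", \"Aberdeen\"])\n",
"df.reindex(index=[\"a\", \"d\", \"c\", \"b\"],\n",
"           columns=[\"Aberdeen\", \"Glasgow\", \"Edinburgh\", \"Dundee\"],\n",
"           fill_value=0)"
]
},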
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Removing columns/indices <a name=\"removing\"></a>\n",
"Similar to above, it is easy to remove entries. This is done with the `drop()` method and can be applied to both columns and indices:"
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# define new DataFrame\n",
"data = np.reshape(np.arange(9), (3,3))\n",
"df = pd.DataFrame(data, index=[\"a\", \"b\", \"c\"],\n",
" columns=[\"Edinburgh\", \"Glasgow\", \"Aberdeen\"])\n",
"\n",
"df"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df.drop(\"b\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# You can also drop from a column\n",
"df.drop([\"Aberdeen\", \"Edinburgh\"], axis=\"columns\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that the original data frame is unchanged: `df.drop()` gives us a new data frame with the desired data dropped, and leaves the original data intact. We can ask `.drop()` to operate directly on the original data frame by setting the argument `inplace=True`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df"
]
},
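{
"cell_type": "markdown",
"metadata": {},
"source": [
"To see `inplace=True` in action without disturbing `df` for the rest of the notebook, here is a minimal sketch that drops a row from a copy of `df` in place (the name `df_copy` is just for illustration):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Work on a copy so the original df stays intact\n",
"df_copy = df.copy()\n",
"df_copy.drop(\"b\", inplace=True)  # modifies df_copy directly and returns None\n",
"df_copy"
]
},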
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Indexing, slicing and filtering <a name=\"indexing\"></a>\n",
"\n",
"### Indexing\n",
"\n",
"Series indexing works analogously to NumPy array indexing (i.e. `data[...]`). You can also use the Series' index values instead of only integers:"
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"s = pd.Series(np.arange(4), index=['a', 'b', 'c', 'd'])\n",
"s"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"s[1]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"s[3]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"s[\"c\"]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"s[[1,3]]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"s[s<2]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A subtle difference when indexing in pandas is that unlike in normal Python, slicing here is inclusive at the end-point."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Label-based slicing includes the end-point \"c\"\n",
"s[\"b\":\"c\"]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"All of the above also apply to DataFrames:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data = np.reshape(np.arange(9), (3,3))\n",
"df = pd.DataFrame(data, index=[\"a\", \"b\", \"c\"],\n",
" columns=[\"Edinburgh\", \"Glasgow\", \"Aberdeen\"])\n",
"\n",
"df"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df[:2]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df[\"Glasgow\"]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For DataFrame label-indexing on the rows, you can use `loc` for labels and `iloc` for integer-indexing."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df.loc[\"b\"]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df.loc[\"b\", [\"Glasgow\", \"Aberdeen\"]]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's try `iloc`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df.iloc[1]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"df.iloc[1, [1,2]]\n",
"df"
"execution_count": null,
"metadata": {},
"outputs": [],
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Summary of indexing:\n",
"\n",
"| Type | Notes |\n",
"| -- | -- |\n",
"| df\\[val\\] | Select single column or sequence of columns from a DataFrame |\n",
"| df.loc\\[val\\] | Select single row or subset of rows from a DataFrame by label |\n",
"| df.loc\\[:, val\\] | Select single column or subset of columns by label |\n",
"| df.loc\\[val1, val2\\] | Select both rows and columns by label |\n",
"| df.iloc\\[idx\\] | Select single row or subset of rows from DataFrame by integer position |\n",
"| df.iloc\\[:, idx\\] | Select single column or subset of columns by integer position |\n",
"| df.iloc\\[idx1, idx2\\] | Select both rows and columns by integer position |\n",
"| reindex method | Select either rows or columns by labels |\n",
"\n",
"As with NumPy arrays, slices in pandas are _views_ of the original structure. Changing a slice will change its source structure and vice versa."
]
},
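{
"cell_type": "markdown",
"metadata": {},
"source": [
"To connect a few rows of the table with the `df` defined above, here is a quick sketch of selecting a column by label, the same column by position, and a single value by row/column position:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Column by label, column by integer position, and one value by row/column position\n",
"print(df.loc[:, \"Glasgow\"])\n",
"print(df.iloc[:, 1])\n",
"print(df.iloc[1, 2])"
]
},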
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Exercise 1\n",
"A dataset of random numbers is created below. Index the 47th row and the 22nd column. You should get the number **4621**.\n",
"\n",
"*Note: Remember that Python uses 0-based indexing*"
]
},
"metadata": {},
"outputs": [],
"source": [
"df = pd.DataFrame(np.reshape(np.arange(10000), (100,100)))\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Exercise 2\n",
"Using the same DataFrame from the previous exercise, obtain all rows starting from row 85 to 97."
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"When you are performing arithmetic operations between two objects, if any index pairs are not the same, the respective index in the result will be the union of the index pair. Let's have a look"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"s1 = pd.Series(np.arange(5), index=[\"a\", \"b\", \"c\", \"d\", \"e\"])\n",
"s2 = pd.Series(np.arange(4), index=[\"b\", \"c\", \"d\", \"k\"])\n",
"s1 + s2"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The internal data alignment introduces missing values in the label locations that don't overlap. It is similar for DataFrames:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df1 = pd.DataFrame(np.arange(12).reshape((3,4)),\n",
" columns=list(\"abcd\"))\n",
"df1"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df2 = pd.DataFrame(np.arange(16).reshape((4,4)),\n",
" columns=list(\"cdef\"))\n",
"df2"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# adding the two\n",
"df1+df2"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Notice how where we don't have matching values from `df1` and `df2` the output of the addition operation is `NaN` since there are no two numbers to add.\n",
"\n",
"Well, we can \"fix\" that by filling in the `NaN` values. This effectively tells pandas where there are no two values to add, assume that the missing value is just zero."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df1.add(df2, fill_value=0)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Another important point here is although the normal arithmetic operations work here, there also exist dedicated methods like `DataFrame.add()` which achieve the same functionality + a bit extra.\n",
"\n",
"Here's a list of all arithmetic operations within pandas:\n",
"\n",
"| Operator | Method | Description |\n",
"| -- | -- | -- |\n",
"| + | add, radd | Addition |\n",
"| - | sub, rsub | Subtraction |\n",
"| / | div, rdiv | Division |\n",
"| // | floordiv, rfloordiv | Floor division |\n",
"| * | mul, rmul | Multiplication |\n",
"| ** | pow, rpow | Exponentiation |\n",
"\n",
"Notice how some of the methods have `r` in front of them? That stands for reversed and effectively reverses the operands. For example\n",
"\n",
"```python\n",
"df1.rdiv(df2)\n",
"```\n",
"would be the same as\n",
"```python\n",
"df1/df2\n",
"```\n",
"\n",
"but....\n",
"```python\n",
"df1.rdiv(df2)\n",
"```\n",
"would be the same as\n",
"```python\n",
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Exercise 3\n",
"Create a (3,3) DataFrame and square all elements in it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Broadcasting\n",
"Similar to numpy, in pandas you can also broadcast data structures. Let's consider a simple example:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df1 = pd.DataFrame(np.arange(16).reshape((4,4)))\n",
"df1"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df2 = pd.Series(np.arange(4))\n",
"df2"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df1 - df2"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Notice how the Series of [0, 1, 2, 3] got removed from each row? That is called broadcasting.\n",
"\n",
"It can also be used for columns, but for that, you have to use the method arithmetic operations."
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df1.sub(df2, axis=\"index\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Sorting\n",
"Sorting is an important built-in operation of pandas. Let's have a look at how you can do it:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df1 = pd.DataFrame(np.arange(16).reshape((4,4)), index=[\"b\", \"a\", \"d\", \"c\"])\n",
"df1"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df2 = df1.sort_index()\n",
"df2"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Easy as that. Furthermore, you can also sort along the column axis with\n",
"```python\n",
"df1.sort_index(axis=1)\n",
"```\n",
"\n",
"You can also sort by the actual values inside, but you have to give the column by which you want to sort."
"execution_count": null,
"metadata": {},
"outputs": [],
"df1 = pd.DataFrame([[4, 5, 3], [3, 3, 9], [6, 2, 4], [1, 8, 3], [3, 2, 7], [5, 5, 5]], columns=[\"a\", \"b\", \"c\"])\n",
"df1"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],