{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"[pandas](http://pandas.pydata.org) provides high-level data structures and functions designed to make working with structured or tabular data fast, easy and expressive. The primary objects in pandas that we will be using are the `DataFrame`, a tabular, column-oriented data structure with both row and column labels, and the `Series`, a one-dimensional labeled array object.\n",
"\n",
"pandas blends the high-performance, array-computing ideas of NumPy with the flexible data manipulation capabilities of spreadsheets and relational databases. It provides sophisticated indexing functionality to make it easy to reshape, slice and perform aggregations.\n",
"While pandas adopts many coding idioms from NumPy, the most significant difference is that pandas is designed for working with tabular or heterogeneous data. NumPy, by contrast, is best suited for working with homogeneous numerical array data.\n",
"- [Data Structures](#structures)\n",
" - [Series](#series)\n",
" - [DataFrame](#dataframe)\n",
"- [Essential Functionality](#ess_func)\n",
" - [Reindexing](#reindexing)\n",
" - [Dropping Entries](#removing)\n",
" - [Indexing, Slicing and Filtering](#indexing)\n",
" - [Arithmetic Operations](#arithmetic)\n",
"- [Summarizing and Computing Descriptive Statistics](#sums)\n",
"- [Loading and storing data](#loading)\n",
" - [Text Format](#text) \n",
" - [Web Scraping](#web)\n",
"- [Data Cleaning and preperation](#cleaning)\n",
" - [Handling missing data](#missing)\n",
" - [Data transformation](#transformation)\n",
"- [String manipulation](#strings)\n",
"\n",
"The common pandas import statment is shown below:"
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Common pandas import statement\n",
"import pandas as pd"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Data Structures <a name=\"structures\"></a>\n",
"## Series <a name=\"series\"></a>\n",
"A Series is a one-dimensional array-like object containing a sequence of values and an associated array of data labels called its index.\n",
"\n",
"The easiest way to make a Series is from an array of data:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data = pd.Series([4, 7, -5, 3])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now try printing out data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The string representation of a Series displayed interactively shows the index on the left and the values on the right. Because we didn't specify an index, the default one is simply integers 0 through N-1.\n",
"\n",
"You can output only the values of a Series using \n",
"```python\n",
"data.values\n",
"```\n",
"```python\n",
"data.index\n",
"```\n",
"Try it out below!"
]
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can specify custom indices when intialising the Series"
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data2 = pd.Series([4, 7, -5, 3], index=[\"a\", \"b\", \"c\", \"d\"])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data2"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now you can use these labels to access the data similar to a normal array"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data2[\"a\"]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Another way to think about Series is as a fixed-length ordered dictionary. Furthermore, you can actually define a Series in a similar manner to a dictionary"
"execution_count": null,
"cities = {\"Glasgow\" : 599650, \"Edinburgh\" : 464990, \"Aberdeen\" : 196670, \"Dundee\" : 147710}\n",
"data3 = pd.Series(cities)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data3"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can do arithmetic operations between Series similar to NumPy arrays. Even if you have 2 datasets with different data, arithmetic operations will be aligned according to their indices.\n",
"\n",
"Let's look at an example"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"cities_uk = {\"Birmingham\" : 1092330, \"Leeds\": 751485, \"Glasgow\" : 599650,\n",
" \"Manchester\" : 503127, \"Edinburgh\" : 464990}\n",
"data4 = pd.Series(cities_uk)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"data3 + data4"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Notice how some of the results are NaN? Well, that is because there were no instances of those cities within both of the datasets. You can usually extract NaNs from a Series with\n",
"data.isnull()\n",
"```"
]
},
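{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, applied to the combined Series from above (a small illustration):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Boolean mask marking the missing entries in the combined Series\n",
"(data3 + data4).isnull()"
]
},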
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A DataFrame represents a rectangular table of data and contains an ordered collection of columns, each of which can be a different value type. The DataFrame has both row and column index and can be thought of as a dict of Series all sharing the same index.\n",
"\n",
"The most common way to create a DataFrame is with dicts"
]
},
{
"cell_type": "code",
"execution_count": null,
"data = {\"cities\" : [\"Glasgow\", \"Edinburgh\", \"Aberdeen\", \"Dundee\"],\n",
" \"population\" : [599650, 464990, 196670, 147710],\n",
" \"year\" : [2011, 2013, 2013, 2013]}\n",
"frame = pd.DataFrame(data)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Try printing it out"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"frame"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Jupyter Notebooks prints it out in a nice table but the basic version of this is also just as readable!\n",
"\n",
"Additionally you can also specify the order of columns during initialisation"
]
},
{
"cell_type": "code",
"execution_count": null,
"frame2 = pd.DataFrame(data, columns=[\"year\", \"cities\", \"population\"])\n",
"frame2"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can retrieve a particular column from a DataFrame with\n",
"```python\n",
"frame[\"cities\"]\n",
"```\n",
"The result is going to be a Series.\n",
"Additionally, you can retrieve a row from the dataset using\n",
"frame.iloc[1]\n",
"execution_count": null,
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"It is also possible to add and modify the columns of a DataFrame"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"frame2[\"size\"] = 100"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"frame2"
]
},
{
"cell_type": "code",
"execution_count": null,
"frame2[\"size\"] = [175, 264, 65.1, 60] # in km^2\n",
"frame2"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Similar to dicts, columns can be deleted using\n",
"```python\n",
"del frame2[\"size\"]\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"del frame2[\"size\"]\n",
"frame2"
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Another common way of creating DataFrames is from a nested dict of dicts:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data2 = {\"Glasgow\": {2011: 599650},\n",
" \"Edinburgh\": {2013:464990},\n",
"frame3 = pd.DataFrame(data2)\n",
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here is a table of different ways of initialising a DataFrame for your reference\n",
"\n",
"| Type | Notes |\n",
"| --- | --- |\n",
"| 2D ndarray | A matrix of data; passing optional row and column labels |\n",
"| dict of arrays, lists, or tuples | Each sequence becomes a column in the DataFrame; all sequences must be the same length |\n",
"| NumPy structured/recorded array | Treated as with the \"dict of arrays, lists or tuples\" case |\n",
"| dict of Series | Each value becomes a column; indexes from each Series are unioned together to<br>form the result's row index if not explicit index is passed |\n",
"| dict of dicts | Each inner dict becomes a column; keys are unioned to form the row<br>index as in the \"dict of Series\" case |\n",
"| List of dicts or Series | Each item becomes a row in the DataFrame; union of dict keys or<br>Series indices becomes the DataFrame's column labels |\n",
"| List of lists or tuples | Treated as the \"2D ndarray\" case |\n",
"| Another DataFrame | The DataFrame's indexes are used unless different ones are passed |\n",
"| NumPy MaskedArray | Like the \"2D ndarray\" case except masked values become NA/missing in the DataFrame |"
]
},
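{
"cell_type": "markdown",
"metadata": {},
"source": [
"For illustration, here are two of those construction routes in action - a 2D ndarray with explicit labels, and a list of dicts (the labels below are made up for the example):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# 2D ndarray with explicit row and column labels\n",
"pd.DataFrame(np.arange(6).reshape((2, 3)),\n",
"             index=[\"r1\", \"r2\"], columns=[\"x\", \"y\", \"z\"])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# list of dicts: keys are unioned into the column labels, gaps become NaN\n",
"pd.DataFrame([{\"a\": 1, \"b\": 2}, {\"b\": 3, \"c\": 4}])"
]
},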
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Essential Functionality <a name=\"ess_func\"></a>\n",
"In this section, we will go through the fundamental mechanics of interacting with the data contained in a Series or DaraFrame."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"With pandas it is easy to restructure the order of your columns and rows using the `reindex` function. Let's have a look at an example:"
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# first define a new Series\n",
"s = pd.Series([1, 2, 3, 4, 5], index=['a', 'b', 'c', 'd', 'e'])\n",
"s"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Now you can reshuffle the indices\n",
"s = s.reindex(['d', 'b', 'a', 'c', 'e'])\n",
"s"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Easy as that! This can also be extended for DataFrames, where you can reorder both the columns and indices at the same time!"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# first define a new Dataframe\n",
"data = np.reshape(np.arange(9), (3,3))\n",
"df = pd.DataFrame(data, index=[\"a\", \"b\", \"c\"],\n",
" columns=[\"Edinburgh\", \"Glasgow\", \"Aberdeen\"])\n",
"df"
]
},
{
"cell_type": "code",
"execution_count": null,
"source": [
"# Now we can restructure it with reindex\n",
"df = df.reindex(index=[\"a\", \"d\", \"c\", \"b\"],\n",
" columns=[\"Aberdeen\", \"Glasgow\", \"Edinburgh\", \"Dundee\"])\n",
"df"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Notice something interesting? We can actually add new indices and columns using the `reindex` method. This results in the new slots in our table to be filled in with `NaN` values."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Removing columns/indices <a name=\"removing\"></a>\n",
"Similarly to above, it is easy to remove entries. This is done with the `drop` method and can be applied to both columns and indices:"
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# define new DataFrame\n",
"data = np.reshape(np.arange(9), (3,3))\n",
"df = pd.DataFrame(data, index=[\"a\", \"b\", \"c\"],\n",
" columns=[\"Edinburgh\", \"Glasgow\", \"Aberdeen\"])\n",
"\n",
"df"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df.drop(\"b\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# You can also drop from a column\n",
"df.drop([\"Aberdeen\", \"Edinburgh\"], axis=\"columns\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that the original data frame is unchanged: `df.drop()` gives us a view of the data frame with the desired data dropped, and leaves the original data intact."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Indexing, slicing and filtering <a name=\"indexing\"></a>\n",
"\n",
"### Indexing\n",
"\n",
"Series indexing works analogously to NumPy array indexing (i.e. data[...]). You can also use the Series' index values instead of only integers:"
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"s = pd.Series(np.arange(4), index=['a', 'b', 'c', 'd'])\n",
"s"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"s[1]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"s[3]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"s[\"c\"]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"s[[1,3]]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"s[s<2]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A subtle difference when indexing in pandas is that unlike in normal Python, slicing here is inclusive at the end-point."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"s[\"b\":\"c\"]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"All of the above also apply to DataFrames:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data = np.reshape(np.arange(9), (3,3))\n",
"df = pd.DataFrame(data, index=[\"a\", \"b\", \"c\"],\n",
" columns=[\"Edinburgh\", \"Glasgow\", \"Aberdeen\"])\n",
"\n",
"df"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df[:2]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df[\"Glasgow\"]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For DataFrame label-indexing on the rows, you can use `loc` for labels and `iloc` for integer-indexing."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df.loc[\"b\"]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df.loc[\"b\", [\"Glasgow\", \"Aberdeen\"]]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's try `iloc`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df.iloc[1]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df.iloc[1, [1,2]]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Summary of indexing:\n",
"\n",
"| Type | Notes |\n",
"| -- | -- |\n",
"| df\\[val\\] | Select single column or sequence of columns from a DataFrame |\n",
"| df.loc\\[val\\] | Select single row or subset of rows from a DataFrame by label |\n",
"| df.loc\\[:, val\\] | Select single column or subset of columns by label |\n",
"| df.loc\\[val1, val2\\] | Select both rows and columns by label |\n",
"| df.iloc\\[idx\\] | Select single row or subset of rows from DataFrame by integer position |\n",
"| df.iloc\\[:, idx\\] | Select single column or subset of columns by integer position |\n",
"| df.iloc\\[idx1, idx2\\] | Select both rows and columns by integer position |\n",
"| reindex method | Select either rows or columns by labels |"
]
},
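{
"cell_type": "markdown",
"metadata": {},
"source": [
"Two forms from the table we haven't tried yet, shown on the same `df` (a quick illustration):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# a single column by label, then a subset of columns by integer position\n",
"print(df.loc[:, \"Glasgow\"])\n",
"df.iloc[:, [0, 2]]"
]
},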
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Exercise 1\n",
"A dataset of random numbers is created below. Index the 47th row and the 22nd column. You should get the number **4621**.\n",
"\n",
"*Note: Remember that Python uses 0-based indexing*"
]
},
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df = pd.DataFrame(np.reshape(np.arange(10000), (100,100)))\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Exercise 2\n",
"Using the same DataFrame from the previous exercise, obtain all rows starting from row 85 to 97."
"execution_count": null,
"metadata": {},
"outputs": [],
{
"cell_type": "markdown",
"metadata": {},
"source": [
"When you are performing arithmetic operations between two objects, if any index pairs are not the same, the respective index in the result will be the union of the index pair. Let's have a look"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"s1 = pd.Series(np.arange(5), index=[\"a\", \"b\", \"c\", \"d\", \"e\"])\n",
"s2 = pd.Series(np.arange(4), index=[\"b\", \"c\", \"d\", \"k\"])\n",
"s1 + s2"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The internal data alignment introduces missing values in the label locations that don't overlap. It is similar for DataFrames:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df1 = pd.DataFrame(np.arange(12).reshape((3,4)),\n",
" columns=list(\"abcd\"))\n",
"df1"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df2 = pd.DataFrame(np.arange(16).reshape((4,4)),\n",
" columns=list(\"cdef\"))\n",
"df2"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# adding the two\n",
"df1+df2"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Notice how where we don't have matching values from `df1` and `df2` the output of the addition operation is `NaN` since there are no two numbers to add.\n",
"\n",
"Well, we can \"fix\" that by filling in the `NaN` values. This effectively tells pandas where there are no two values to add, assume that the missing value is just zero."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df1.add(df2, fill_value=0)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Another important point here is although the normal arithmetic operations work here, there also exist dedicated methods like `DataFrame.add()` which achieve the same functionality + a bit extra.\n",
"\n",
"Here's a list of all arithmetic operations within pandas:\n",
"\n",
"| Operator | Method | Description |\n",
"| -- | -- | -- |\n",
"| + | add, radd | Addition |\n",
"| - | sub, rsub | Subtraction |\n",
"| / | div, rdiv | Division |\n",
"| // | floordiv, rfloordiv | Floor division |\n",
"| * | mul, rmul | Multiplication |\n",
"| ** | pow, rpow | Exponentiation |\n",
"\n",
"Notice how some of the methods have `r` in front of them? That stands for reversed and effectively reverses the operands. For example\n",
"\n",
"```python\n",
"df1.rdiv(df2)\n",
"```\n",
"would be the same as\n",
"```python\n",
"df2/df1\n",
"```"
]
},
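{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can check that claim on `df1` and `df2` from above (`equals` treats NaNs in matching positions as equal):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# rdiv swaps the operands, so df1.rdiv(df2) computes df2 / df1\n",
"df1.rdiv(df2).equals(df2 / df1)"
]
},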
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Exercise 3\n",
"Create a (3,3) DataFrame and square all elements in it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Broadcasting\n",
"Similar to numpy, in pandas you can also broadcast data structures. Let's consider a simple example:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df1 = pd.DataFrame(np.arange(16).reshape((4,4)))\n",
"df1"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df2 = pd.Series(np.arange(4))\n",
"df2"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df1 - df2"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Notice how the Series of [0, 1, 2, 3] got removed from each row? That is called broadcasting.\n",
"\n",
"It can also be used for columns, but for that, you have to use the method arithmetic operations."
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df1.sub(df2, axis=\"index\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Sorting\n",
"Sorting is an important built-in operation of pandas. Let's have a look at how you can do it:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df1 = pd.DataFrame(np.arange(16).reshape((4,4)), index=[\"b\", \"a\", \"d\", \"c\"])\n",
"df1"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df2 = df1.sort_index()\n",
"df2"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Easy as that. Furthermore, you can also sort along the column axis with\n",
"```python\n",
"df1.sort_index(axis=1)\n",
"```\n",
"\n",
"You can also sort by the actual values inside, but you have to give the column by which you want to sort."
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df1 = pd.DataFrame([[4, 5, 3], [3, 3, 9], [6, 2, 4], [1, 8, 3], [3, 2, 7], [5, 5, 5]], columns=[\"a\", \"b\", \"c\"])\n",
"df1"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df1.sort_values(by=\"a\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Summarizing and computing descriptive stats <a name=\"sums\"></a>\n",
"`pandas` is equipped with common mathematical and statistical methods. Most of which fall into the category of reductions or summary statistics. These are methods that extract a single value from a list of values. For example, you can extract the sum of a `Series` object like this:"
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df = pd.DataFrame(np.arange(20).reshape(5,4),\n",
" columns=[\"a\", \"b\", \"c\", \"d\"])\n",
"df"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df.sum()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Notice how that created the sum of each column?\n",
"\n",
"Well you can actually make that the other way around by adding an extra option to `sum()`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df.sum(axis=\"columns\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A similar method also exists for obtaining the mean of data:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df.mean()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, the mother of the methods we discussed here is `describe()` "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df.describe()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here are all of the summary methods:\n",
"\n",
"| Method | Description |\n",
"| -- | -- |\n",
"| count | Number of non-NA values |\n",
"| describe | Set of summary statistics |\n",
"| min, max | Minimum, maximum values |\n",
"| argmin, argmax | Index locations at which the minimum or maximum value is obtained | \n",
"| quantile | Compute sample quantile ranging from 0 to 1 |\n",
"| sum | Sum of values |\n",
"| mean | Mean of values |\n",
"| median | Arithmetic median of values |\n",
"| mad | Mean absolute deviation from mean value |\n",
"| prod | Product of all values |\n",
"| var | Sample variance of values\n",
"| std | Sample standard deviation of values\n",
"| cumsum | Cumulative sum of values |\n",
"| cummin, cummax | Cumulative minimum or maximum of values, respectively |\n",
"| cumprod | Cumulative product of values |\n",
"| value_counts() | Counts the number of occurrences of each unique element in a column |"
]
},
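{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick taste of two of these on the same `df` (`idxmax` is the DataFrame spelling of argmax):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# cumulative sum down each column, then the row label of each column's maximum\n",
"print(df.cumsum())\n",
"df.idxmax()"
]
},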
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Exercise 4\n",
"\n",
"A random DataFrame is created below. Find it's mean and standard deviation, then normalise it column-wise according to the formula:\n",
"\n",
"$$ Y = \\frac{X - \\mu}{\\sigma} $$\n",
"\n",
"Where X is your dataset, $\\mu$ is the mean and $\\sigma$ is the standard deviation.\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df = pd.DataFrame(np.random.uniform(0, 10, (100, 100)))\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Data Loading and Storing <a name=\"loading\"></a>\n",
"Accessing data is a necessary first step for data science. In this section, the focus will be on data input and output in various formats using `pandas`\n",
"\n",
"Data usually fall into these categories:\n",
"- text files\n",
"- binary files (more efficient space-wise)\n",
"- web data\n",
"\n",
"## Text formats <a name=\"text\"></a>\n",
"The most common format in this category is by far `.csv`. This is an easy to read file format which is usually visualised like a spreadsheet. The data itself is usually separated with a `,` which is called the **delimiter**.\n",
"\n",
"Here is an example of a `.csv` file:\n",
"\n",
"```\n",
"\"Sell\", \"List\", \"Living\", \"Rooms\", \"Beds\", \"Baths\", \"Age\", \"Acres\", \"Taxes\"\n",
"142, 160, 28, 10, 5, 3, 60, 0.28, 3167\n",
"175, 180, 18, 8, 4, 1, 12, 0.43, 4033\n",
"129, 132, 13, 6, 3, 1, 41, 0.33, 1471\n",
"138, 140, 17, 7, 3, 1, 22, 0.46, 3204\n",
"232, 240, 25, 8, 4, 3, 5, 2.05, 3613\n",
"135, 140, 18, 7, 4, 3, 9, 0.57, 3028\n",
"150, 160, 20, 8, 4, 3, 18, 4.00, 3131\n",
"207, 225, 22, 8, 4, 2, 16, 2.22, 5158\n",
"271, 285, 30, 10, 5, 2, 30, 0.53, 5702\n",
" 89, 90, 10, 5, 3, 1, 43, 0.30, 2054\n",
" ```\n",
"\n",
"It detailed home sale statistics. The first line is called the header, and you can imagine that it is the name of the columns of a spreadsheet.\n",
"\n",
"Let's now see how we can load this data and analyse it. The file is located in the folder `data` and is called `homes.csv`. We can read it like this:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"homes = pd.read_csv(\"data/homes.csv\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"homes"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Easy right?"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Find the mean selling price of the homes in `data/homes.csv`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `read_csv` function has a lot of optional arguments (more than 50). It's impossible to memorise all of them - it's usually best just to look up the particular functionality when you need it. \n",
"\n",
"You can search `pandas read_csv` online and find all of the documentation.\n",
"\n",
"There are also many other functions that can read textual data. Here are some of them:\n",
"\n",
"| Function | Description\n",
"| -- | -- |\n",
"| read_csv | Load delimited data from a file, URL, or file-like object. The default delimiter is a comma `,` |\n",
"| read_table | Load delimited data from a file, URL, or file-like object. The default delimiter is tab `\\t` |\n",
"| read_fwf | Read data in fixed0width column format (i.e. no delimiters |\n",
"| read_clipboard | Reads the last object you have copied (Ctrl-C) |\n",
"| read_excel | Read tabular data from Excel XLS or XLSX file |\n",
"| read_hdf | Read HDF5 file written by pandas |\n",
"| read_html | Read all tables found in the given HTML document |\n",
"| read_json | Read data from a JSON string representation |\n",
"| read_sql | Read the results of a SQL query |\n",
"\n",
"*Note: there are also other loading functions which are not touched upon here*"
]
},
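{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick illustration of one of these, `read_json` can parse JSON from any file-like object - here an in-memory `StringIO` buffer stands in for a file (the data below is made up):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from io import StringIO\n",
"\n",
"json_data = StringIO('[{\"city\": \"Glasgow\", \"population\": 599650}, {\"city\": \"Edinburgh\", \"population\": 464990}]')\n",
"pd.read_json(json_data)"
]
},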
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Exercise 6\n",
"There is another file in the data folder called `homes.xlsx`. Can you read it? Can you spot anything different?"
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Writing CSV files\n",
"Easy!"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"homes.to_csv(\"test.csv\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Create a DataFrame which consists of all numbers 1 to 1000. Reshape it into 50 rows and save it to a `.csv` file. How many columns did you end up with?"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"There is a dataset `data/yob2012.txt` which lists the number of newborns registered in 2012 with their names and sex. Open the dataset in pandas, explore it and derive the ratio between male and female newborns."
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Web scraping <a name=\"web\"></a>\n",
"It is also very easy to scrape webpages and extract tables from them.\n",
"\n",
"For example, let's consider extracting the table of failed American banks."
"execution_count": null,
"metadata": {},
"outputs": [],
"## This can't find the lxml package\n",
"url = \"https://www.fdic.gov/bank/individual/failed/banklist.html\"\n",
"banks = pd.read_html(url2)\n",
"banks = banks[0]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"banks"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Powerful no? Now let's turn that into an exercise.\n",
"\n",
"Given the data you just extracted above, can you analyse how many banks have failed per state?\n",
"\n",
"Georgia (GA) should be the state with the most failed banks!\n",
"\n",
"*Hint: try searching the web for pandas counting occurrences* "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Data Cleaning <a name=\"cleaning\"></a>\n",
"While doing data analysis and modeling, a significant amount of time is spent on data preparation: loading, cleaning, transforming and rearranging. Such tasks are often reported to take **up to 80%** or more of a data analyst's time. Often the way the data is stored in files isn't in the correct format and needs to be modified. Researchers usually do this on an ad-hoc basis using programming languages like Python.\n",
"In this chapter, we will discuss tools for handling missing data, duplicate data, string manipulation, and some other analytical data transformations.\n",
"Missing data occurs commonly in many data analysis applications. One of the goals of pandas is to make working with missing data as painless as possible.\n",
"In pandas, missing numeric data is represented by `NaN` (Not a Number) and can easily be handled:"
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"string_data = pd.Series(['orange', 'tomato', np.nan, 'avocado'])\n",
"string_data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"string_data.isnull()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Furthermore, the pandas `NaN` is functionally equlevant to the standard Python type `NoneType` which can be defined with `x = None`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"string_data[0] = None\n",
"string_data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"string_data.isnull()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here are some other methods which you can find useful:\n",
" \n",
"| dropna | Filter axis labels based on whether the values of each label have missing data|\n",
"| fillna | Fill in missing data with some value |\n",
"| isnull | Return boolean values indicating which values are missing |\n",
"| notnull | Negation of isnull |\n",
"\n",
"Remove the missing data below using the appropriate method"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"data = pd.Series([1, None, 3, 4, None, 6])\n",
"data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"`dropna()` by default removes any row/column that has a missing value. What if we want to remove only rows in which all of the data is missing though?"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data = pd.DataFrame([[1., 6.5, 3.], [1., None, None],\n",
" [None, None, None], [None, 6.5, 3.]])\n",
"data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data.dropna()\n",
"data"
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data.dropna(how=\"all\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"That's fine if we want to remove missing data, what if we want to fill in missing data? Do you know of a way? Try to fill in all of the missing values from the data below with **0s**"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data = pd.DataFrame([[1., 6.5, 3.], [2., None, None],\n",
" [None, None, None], [None, 1.5, 9.]])\n",
"data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"pandas also allows us to interpolate the data instead of just filling it with a constant. The easiest way to do that is shown below, but there are more complex ones that are not covered in this course."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data.fillna(method=\"ffill\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you want you can explore the other capabilities of [`fillna`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.fillna.html), as well as the method [`interpolate`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.interpolate.html), for more ways to fill empty data values."
]
},
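{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a small taste of `interpolate`, the default linear method fills a gap with the value halfway between its neighbours (a minimal sketch):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# the missing middle value is linearly interpolated to 2.0\n",
"pd.Series([1.0, np.nan, 3.0]).interpolate()"
]
},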
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Data Transformation <a name=\"transformation\"></a>\n",
"### Removing duplicates\n",
"Duplicate data can be a serious issue, luckily pandas offers a simple way to remove duplicates"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data = pd.DataFrame([1, 2, 3, 4, 3, 2, 1])\n",
"data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data.drop_duplicates()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also select which rows to keep"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data.drop_duplicates(keep=\"last\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You've already seen how you can fill in missing data with `fillna`. That is actually a special case of more general value replacement. That is done via the `replace` method.\n",
"\n",
"Let's consider an example where the dataset given to us had `-999` as sentinel values for missing data instead of `NaN`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data = pd.DataFrame([1., -999., 2., -999., 3., 4., -999, -999, 7.])\n",
"data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data.replace(-999, np.nan)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Renaming axis indexes\n",
"Similar to `replace` you can also rename the labels of your axis"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data = pd.DataFrame(np.arange(12).reshape((3, 4)),\n",
" index=['Edinburgh', 'Glasgow', 'Aberdeen'])\n",
"data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# create a map using a standard Python dictionary\n",
"mapping = { 0 : \"one\",\n",
" 1 : \"two\",\n",
" 2 : \"three\",\n",
" 3 : \"four\"}\n",
"\n",
"# now rename the columns\n",
"data.rename(columns=mapping)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Rows can be renamed in a similar fashion"
]
},
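{
"cell_type": "markdown",
"metadata": {},
"source": [
"For instance, using the `data` frame from above (the new label is arbitrary):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# rename a row label via an index mapping\n",
"data.rename(index={\"Edinburgh\": \"EDI\"})"
]
},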
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Detection and Filtering Outliers\n",
"Filtering or transforming outliers is largely a matter of applying array operations. Consider a DataFrame with some normally distributed data:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data = pd.DataFrame(np.random.randn(1000, 4))\n",
"data.describe()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Suppose you now want to lower all absolute values exceeding 3 from one of the columns"
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"col = data[2]\n",
"col[np.abs(col) > 3]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data[np.abs(data) > 3] = np.sign(data) * 3\n",
"data.describe()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Permutation and Random Sampling"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Permuting (randomly reordering) of rows in pandas is easy to do using the `numpy.random.permutation` function. Calling permutation with the length of the axis you want to permute produces an array of integers indicating the new ordering:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df = pd.DataFrame(np.arange(5 * 4).reshape((5, 4)))\n",
"df"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# generate random order\n",
"sampler = np.random.permutation(5)\n",
"sampler"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df.take(sampler)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To select a random subset without replacement, you can use the sample method:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df.sample(n=3)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# String manipulation <a name=\"strings\"></a>\n",
"Python has long been popular for its raw data manipulation in part due to its ease of use for string and text processing. Most text operations are made simple with the string object's built-in methods. For more complex pattern matching and text manipulations, regular expressions may be needed."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Basics\n",
"Let's refresh what normal `str` (String objects) are capable of in Python"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# complex strings can be broken into small bits\n",
"val = \"Edinburgh is great\"\n",
"val.split(\" \")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# substrings can be concatinated together with +\n",
"first, second, last = val.split(\" \")\n",
"first + \"::\" + second + \"::\" + last"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Remember that Strings are just lists of individual charecters"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"val = \"Edinburgh\"\n",
"for each in val:\n",
" print(each)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can use standard list operations with them"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"val.find(\"n\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"val.find(\"x\") # -1 means that there is no such element"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# and of course remember about upper() and lower()\n",
"val.upper()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you want to learn more about strings you can always refer to the [Python manual](https://docs.python.org/2/library/string.html)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Regular expressions\n",
"provide a flexible way to search or match (often more complex) string patterns in text. A single expression, commonly called *regex*, is a string formed according to the regular expression language. Python's built-in module is responsible for applying regular expression of strings via the `re` package"
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import re\n",
"text = \"foo bar\\t baz \\tqux\"\n",
"text"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"re.split(\"\\s+\", text)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"this expression effectively removed all whitespaces and tab characters (`\\t`) which was stated with the `\\s` regex and then the `+` after it means to remove any number of sequential occurrences of that character.\n",
"Let's have a look at a more complex example - identifying email addresses in a text file:"
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"text = \"\"\"Dave dave@google.com\n",
"Steve steve@gmail.com\n",
"Rob rob@gmail.com\n",
"Ryan ryan@yahoo.com\n",
"\"\"\"\n",
"\n",
"# pattern to be used for searching\n",
"pattern = r'[A-Z0-9._%+-]+@[A-Z0-9.-]+\\.[A-Z]{2,4}'\n",
"\n",
"# re.IGNORECASE makes the regex case-insensitive\n",
"regex = re.compile(pattern, flags=re.IGNORECASE)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"regex.findall(text)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's dissect the regex part by part:\n",
"```\n",
"pattern = r'[A-Z0-9._%+-]+@[A-Z0-9.-]+\\.[A-Z]{2,4}'\n",
"```\n",
"\n",
"- the `r` prefix before the string signals that the string should keep special characters such as the newline character `\\n`. Otherwise, Python would just treat it as a newline\n",
"- `A-Z` means all letters from A to Z including lowercase and uppercase\n",
"- `0-9` similarly means all characters from 0 to 9\n",
"- the concatenation `._%+-` means just include those characters\n",
"- the square brackets [ ] means to combine all of the regular expressions inside. For example `[A-Z0-9._%+-]` would mean include all letters A to Z, all numbers 0 to 9, and the characters ._%+-\n",
"- `+` means to concatenate the strings patterns\n",
"- `{2,4}` means consider only 2 to 4 character strings\n",
"\n",
"To summarise the pattern above searches for any combination of letters and numbers, followed by a `@`, then any combination of letters and numbers followed by a `.` with only 2 to 4 letters after it."
]
},
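{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick check of the `+` and `{2,4}` quantifiers on a toy string (made up for the example):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# {2,4} matches runs of 2 to 4 letters; the 5-letter run is cut at 4\n",
"re.findall(r\"[a-z]{2,4}\", \"a ab abc abcde\")"
]
},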
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Regular expressions and pandas\n",
"Let's see how they can be combined. Replicating the example above"
]
},
{
"cell_type": "code",
"source": [
"data = pd.Series({'Dave': 'Daves email dave@google.com', 'Steve': 'Steves email steve@gmail.com',\n",
" 'Rob': 'Robs rob@gmail.com', 'Wes': np.nan})\n",
"data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can reuse the same `pattern` variable from above"
]
},
{
"cell_type": "code",
"source": [
"data.str.findall(pattern, flags=re.IGNORECASE)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"pandas also offers more standard string operations. For example, we can check if a string is contained within a data row:"
"source": [
"data.str.contains(\"gmail\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Many more of these methods exist:\n",
" \n",
" \n",
"| -- | -- |\n",
"| cat | Concatenate strings element-wise with optional delimiter |\n",
"| contains | Return boolean array if each string contains pattern/regex |\n",
"| extract | Use a regex with groups to extract one or more strings from a Series |\n",
"| findall | Computer list of all occurrences of pattern/regex for each string |\n",
"| get | Index into each element |\n",
"| isdecimal | Checks if the string is a decimal number |\n",
"| isdigit | Checks if the string is a digit |\n",
"| islower | Checks if the string is in lower case |\n",
"| isupper | Checks if the string is in upper case |\n",
"| join | Join strings in each element of the Series with passed seperator |\n",
"| lower, upper | Convert cases |\n",
"| match | Returns matched groups as a list |\n",
"| pad | Adds whitespace to left, right or both sides of strings |\n",
"| repeat | Duplicate string values |\n",
"| slice | Slice each string in the Series |"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Exercise 12\n",
"There is a `dataset data/yob2012.txt` which lists the number of newborns registered in 2018 with their names and sex. Using regular expressions, extract all names from the dataset which start with letters A to C. How many names did you find?\n",
"\n",
"Note: `^` is the \"starting with\" operator in regular expressions, "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"Thanks"
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",