{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Notebook 5 - pandas\n", "[pandas](http://pandas.pydata.org) provides high-level data structures and functions designed to make working with structured or tabular data fast, easy and expressive. The primary objects in pandas that we will be using are the `DataFrame`, a tabular, column-oriented data structure with both row and column labels, and the `Series`, a one-dimensional labeled array object.\n", "\n", "pandas blends the high-performance, array-computing ideas of NumPy with the flexible data manipulation capabilities of spreadsheets and relational databases. It provides sophisticated indexing functinoality to make it easy to reshape , slice and perform aggregations.\n", "\n", "While pandas adopts many coding idioms from NumPy, the biggest difference is that pandas is designed for working with tabular or heterogeneous data. NumPy, by contrast, is best suited for working with homogeneous numerical array data.\n", "<br>\n", "\n", "## Table of Contents:\n", "- Data Structures\n", " - Series\n", " - DataFrame\n", "- Essential Functionality\n", " - Reindexing\n", " - Dropping Entries\n", " - Indexing, Slicing and Filtering\n", " - Arithmetic Operations\n", " - Sorting and ranking\n", "- Summarizing and Computing Descriptive Statistics\n", " - Correlation and Covariance\n", " - Unique values, value counts and Membership\n", "- Reading and storing data\n", " - Text Format\n", " - Text Format Writing\n", " - XML and HTML Web Scraping\n", " - Reading excel files\n", " - mention that pandas allow interfacing with web APIs and SQL databases\n", "- Data Cleaning and preperation\n", " - Missing data\n", " - Data transformation\n", " - String manipulation incl. 
regexp\n", "- Data wrangling\n", "- Plotting?\n", " " ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "# Common pandas import statement\n", "import numpy as np\n", "import pandas as pd" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Data Structures\n", "## Series\n", "a one-dimensional array-like object containing a sequence of values and an associated array of data labels, called its index.\n", "\n", "The easiest way to make a Series is from an array of data:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "data = pd.Series([4, 7, -5, 3])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now try printing out data" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The string representation of a Seires displayed interactively shows the index on the left and the values on the right. Because we didn't specify an index, the default on is simply integers 0 through N-1.\n", "\n", "You can output only the values of a Series using \n", "```python\n", "data.values\n", "```\n", "or you can get only the indeces using\n", "```python\n", "data.index\n", "```\n", "Try it out below!" 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can specify custom indeces when intialising the Series" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "data2 = pd.Series([4, 7, -5, 3], index=[\"a\", \"b\", \"c\", \"d\"])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now you can use these labels to access the data similar to a normal array" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "4" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "data2[\"a\"]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Another way to think about Serieses is as a fixed-length ordered dictionary. Furthermore, you can actually define a Series in a similar manner to a dictionary" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "cities = {\"Glasgow\" : 599650, \"Edinburgh\" : 464990, \"Abardeen\" : 196670, \"Dundee\" : 147710}\n", "data3 = pd.Series(cities)" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Glasgow 599650\n", "Edinburgh 464990\n", "Abardeen 196670\n", "Dundee 147710\n", "dtype: int64" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "data3" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can do arithmetic operations between Serieses similar to NumPy arrays. 
Even if you have 2 datasets with different data, arithmetic operations will be aligned according to their indeces.\n", "\n", "Let's look at an example" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "cities_uk = {\"Birmingham\" : 1092330, \"Leeds\": 751485, \"Glasgow\" : 599650,\n", " \"Manchester\" : 503127, \"Edinburgh\" : 464990}\n", "data4 = pd.Series(cities_uk)" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Abardeen NaN\n", "Birmingham NaN\n", "Dundee NaN\n", "Edinburgh 929980.0\n", "Glasgow 1199300.0\n", "Leeds NaN\n", "Manchester NaN\n", "dtype: float64" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "data3 + data4" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Notice how some of the results are NaN? Well that is because there were no instances of those cities within both of the datasets. You can usually extract NaNs from a Series with\n", "```python\n", "data4.isnull()\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## DataFrame\n", "A DataFrame represents a rectangular table of data and contains an ordered collection of columns, each of which can be a different value type. 
The DataFrame has both row and column index and can be thought of as a dict of Series all sharing the same index.\n", "\n", "The most common way to create a DataFrame is with dicts" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [], "source": [ "data = {\"cities\" : [\"Glasgow\", \"Edinburgh\", \"Abardeen\", \"Dundee\"],\n", " \"population\" : [599650, 464990, 196670, 147710],\n", " \"year\" : [2011, 2013, 2013, 2013]}\n", "frame = pd.DataFrame(data)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Try printing it out" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<div>\n", "<style scoped>\n", " .dataframe tbody tr th:only-of-type {\n", " vertical-align: middle;\n", " }\n", "\n", " .dataframe tbody tr th {\n", " vertical-align: top;\n", " }\n", "\n", " .dataframe thead th {\n", " text-align: right;\n", " }\n", "</style>\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>cities</th>\n", " <th>population</th>\n", " <th>year</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>0</th>\n", " <td>Glasgow</td>\n", " <td>599650</td>\n", " <td>2011</td>\n", " </tr>\n", " <tr>\n", " <th>1</th>\n", " <td>Edinburgh</td>\n", " <td>464990</td>\n", " <td>2013</td>\n", " </tr>\n", " <tr>\n", " <th>2</th>\n", " <td>Abardeen</td>\n", " <td>196670</td>\n", " <td>2013</td>\n", " </tr>\n", " <tr>\n", " <th>3</th>\n", " <td>Dundee</td>\n", " <td>147710</td>\n", " <td>2013</td>\n", " </tr>\n", " </tbody>\n", "</table>\n", "</div>" ], "text/plain": [ " cities population year\n", "0 Glasgow 599650 2011\n", "1 Edinburgh 464990 2013\n", "2 Abardeen 196670 2013\n", "3 Dundee 147710 2013" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "frame" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Jupyter Notebooks prints it out in a nice table but the 
basic version of this is also just as readable!\n", "\n", "Additionally you can also specify the order of columns during initialisation" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [], "source": [ "frame2 = pd.DataFrame(data, columns=[\"year\", \"cities\", \"population\"])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can retrieve a particular column from a DataFrame with\n", "```python\n", "frame[\"cities\"]\n", "```\n", "The result is going to be a Series\n", "\n", "Additionally you can retrieve a row from the dataset using\n", "```python\n", "frame[1]\n", "```\n", "Try it out below" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It is also possible to add and modify the columns of a DataFrame" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [], "source": [ "frame2[\"size\"] = 100" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<div>\n", "<style scoped>\n", " .dataframe tbody tr th:only-of-type {\n", " vertical-align: middle;\n", " }\n", "\n", " .dataframe tbody tr th {\n", " vertical-align: top;\n", " }\n", "\n", " .dataframe thead th {\n", " text-align: right;\n", " }\n", "</style>\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>year</th>\n", " <th>cities</th>\n", " <th>population</th>\n", " <th>size</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>0</th>\n", " <td>2011</td>\n", " <td>Glasgow</td>\n", " <td>599650</td>\n", " <td>100</td>\n", " </tr>\n", " <tr>\n", " <th>1</th>\n", " <td>2013</td>\n", " <td>Edinburgh</td>\n", " <td>464990</td>\n", " <td>100</td>\n", " </tr>\n", " <tr>\n", " <th>2</th>\n", " <td>2013</td>\n", " <td>Abardeen</td>\n", " <td>196670</td>\n", " <td>100</td>\n", " </tr>\n", " <tr>\n", " 
<th>3</th>\n", " <td>2013</td>\n", " <td>Dundee</td>\n", " <td>147710</td>\n", " <td>100</td>\n", " </tr>\n", " </tbody>\n", "</table>\n", "</div>" ], "text/plain": [ " year cities population size\n", "0 2011 Glasgow 599650 100\n", "1 2013 Edinburgh 464990 100\n", "2 2013 Abardeen 196670 100\n", "3 2013 Dundee 147710 100" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "frame2" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [], "source": [ "frame2[\"size\"] = [175, 264, 65.1, 60] # in km^2" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Similar to dicts, columns can be deleted using\n", "```python\n", "del frame2[\"size\"]\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Another common way of creating DataFrames is from a nested dict of dicts:" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "ename": "SyntaxError", "evalue": "invalid syntax (<ipython-input-16-a08ab2dd9de2>, line 7)", "output_type": "error", "traceback": [ "\u001b[0;36m File \u001b[0;32m\"<ipython-input-16-a08ab2dd9de2>\"\u001b[0;36m, line \u001b[0;32m7\u001b[0m\n\u001b[0;31m \"Abardeen\": }\u001b[0m\n\u001b[0m ^\u001b[0m\n\u001b[0;31mSyntaxError\u001b[0m\u001b[0;31m:\u001b[0m invalid syntax\n" ] } ], "source": [ "data2 = {\"cities\" : [\"Glasgow\", \"Edinburgh\", \"Abardeen\", \"Dundee\"],\n", " \"population\" : [599650, 464990, 196670, 147710],\n", " \"year\" : [2011, 2013, 2013, 2013]}\n", "\n", "data2 = {\"Glasgow\": {2011: 599650},\n", " \"Edinburgh\": {2013:464990},\n", " \"Abardeen\": }\n", "\n", "frame3 = pd.DataFrame(data)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here is a table of different ways of intialising a DataFrame for your reference\n", "\n", "| Type | Notes |\n", "| --- | --- |\n", "| 2D ndarray | A matrix of data; passing 
optional row and column labels |\n", "| dict of arrays, lists, or tuples | Each sequence becomes a column in the DataFrame; all sequences must be the same length |\n", "| NumPy structured/recorded array | Treated as with the \"dict of arrays, lists or tuples\" case |\n", "| dict of Serires | Each value becomes a column; indexes from each Series are unioned together to<br>form the result's row index if not explicit index is passed |\n", "| dict of dicts | Each inner dict becomes a column; keys are unioned to form the row<br>index as in the \"dict of Series\" case |\n", "| List of dicts or Series | Each item becomes a row in the DataFrame; union of dict keys or<br>Series indeces become the DataFrame's column labels |\n", "| List of lists or tuples | Treated as the \"2D ndarray\" case |\n", "| Another DataFrame | The DataFrame's indexes are used unless different ones are passed |\n", "| NumPy MaskedArray | Like the \"2D ndarray\" case except masked values become NA/missing in the DataFrame |" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Essential Functionality\n", "In this section we will go through the fundemental mechanics of interacting with the data contained in a Series or DaraFrame." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Reindexing\n", "With pandas it is easy to restructure the order of your columns and rows using the `reindex` function. 
Let's have a look at an example:" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "a 1\n", "b 2\n", "c 3\n", "d 4\n", "e 5\n", "dtype: int64" ] }, "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# first define a new Series\n", "s = pd.Series([1, 2, 3, 4, 5], index=['a', 'b', 'c', 'd', 'e'])\n", "s" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "d 4\n", "b 2\n", "a 1\n", "c 3\n", "e 5\n", "dtype: int64" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Now you can reshuffle the indices\n", "s = s.reindex(['d', 'b', 'a', 'c', 'e'])\n", "s" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Easy as that! This can also be extended for DataFrames, where you can reorder both the columns and indices at the same time!" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<div>\n", "<style scoped>\n", " .dataframe tbody tr th:only-of-type {\n", " vertical-align: middle;\n", " }\n", "\n", " .dataframe tbody tr th {\n", " vertical-align: top;\n", " }\n", "\n", " .dataframe thead th {\n", " text-align: right;\n", " }\n", "</style>\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>Edinburgh</th>\n", " <th>Glasgow</th>\n", " <th>Aberdeen</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>a</th>\n", " <td>0</td>\n", " <td>1</td>\n", " <td>2</td>\n", " </tr>\n", " <tr>\n", " <th>b</th>\n", " <td>3</td>\n", " <td>4</td>\n", " <td>5</td>\n", " </tr>\n", " <tr>\n", " <th>c</th>\n", " <td>6</td>\n", " <td>7</td>\n", " <td>8</td>\n", " </tr>\n", " </tbody>\n", "</table>\n", "</div>" ], "text/plain": [ " Edinburgh Glasgow Aberdeen\n", "a 0 1 2\n", "b 3 4 5\n", "c 6 7 8" ] }, "execution_count": 19, "metadata": {}, "output_type": 
"execute_result" } ], "source": [ "# first define a new Dataframe\n", "data = np.reshape(np.arange(9), (3,3))\n", "df = pd.DataFrame(data, index=[\"a\", \"b\", \"c\"],\n", " columns=[\"Edinburgh\", \"Glasgow\", \"Aberdeen\"])\n", "df" ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [], "source": [ "# Now we can restructure it with reindex\n", "df = df.reindex(index=[\"a\", \"d\", \"c\", \"b\"],\n", " columns=[\"Aberdeen\", \"Glasgow\", \"Edinburgh\", \"Dundee\"])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Notice something interesting? We can actually add new indices and columns using the `reindex` method. This results in the new slots in our table to be filled in with `NaN` values." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Removing columns/indices\n", "Similarl to above, it is easy to remove entries. This is done with the `drop` method and can be applied to both columns and indices:" ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<div>\n", "<style scoped>\n", " .dataframe tbody tr th:only-of-type {\n", " vertical-align: middle;\n", " }\n", "\n", " .dataframe tbody tr th {\n", " vertical-align: top;\n", " }\n", "\n", " .dataframe thead th {\n", " text-align: right;\n", " }\n", "</style>\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>Edinburgh</th>\n", " <th>Glasgow</th>\n", " <th>Aberdeen</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>a</th>\n", " <td>0</td>\n", " <td>1</td>\n", " <td>2</td>\n", " </tr>\n", " <tr>\n", " <th>c</th>\n", " <td>6</td>\n", " <td>7</td>\n", " <td>8</td>\n", " </tr>\n", " </tbody>\n", "</table>\n", "</div>" ], "text/plain": [ " Edinburgh Glasgow Aberdeen\n", "a 0 1 2\n", "c 6 7 8" ] }, "execution_count": 21, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# define new DataFrame\n", "data = 
np.reshape(np.arange(9), (3,3))\n", "df = pd.DataFrame(data, index=[\"a\", \"b\", \"c\"],\n", " columns=[\"Edinburgh\", \"Glasgow\", \"Aberdeen\"])\n", "\n", "df.drop(\"b\")" ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<div>\n", "<style scoped>\n", " .dataframe tbody tr th:only-of-type {\n", " vertical-align: middle;\n", " }\n", "\n", " .dataframe tbody tr th {\n", " vertical-align: top;\n", " }\n", "\n", " .dataframe thead th {\n", " text-align: right;\n", " }\n", "</style>\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>Glasgow</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>a</th>\n", " <td>1</td>\n", " </tr>\n", " <tr>\n", " <th>b</th>\n", " <td>4</td>\n", " </tr>\n", " <tr>\n", " <th>c</th>\n", " <td>7</td>\n", " </tr>\n", " </tbody>\n", "</table>\n", "</div>" ], "text/plain": [ " Glasgow\n", "a 1\n", "b 4\n", "c 7" ] }, "execution_count": 22, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# You can also drop from a column\n", "df.drop([\"Aberdeen\", \"Edinburgh\"], axis=\"columns\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Indexing\n", "Series indexing works analogously to NumPy array indexing (ie. data[...]). 
You can also use the Series' index values instead of only integers:" ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "a 0\n", "b 1\n", "c 2\n", "d 3\n", "dtype: int64" ] }, "execution_count": 23, "metadata": {}, "output_type": "execute_result" } ], "source": [ "s = pd.Series(np.arange(4), index=['a', 'b', 'c', 'd'])\n", "s" ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "1" ] }, "execution_count": 24, "metadata": {}, "output_type": "execute_result" } ], "source": [ "s[1]" ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "3" ] }, "execution_count": 25, "metadata": {}, "output_type": "execute_result" } ], "source": [ "s[3]" ] }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "2" ] }, "execution_count": 26, "metadata": {}, "output_type": "execute_result" } ], "source": [ "s[\"c\"]" ] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "b 1\n", "d 3\n", "dtype: int64" ] }, "execution_count": 27, "metadata": {}, "output_type": "execute_result" } ], "source": [ "s[[1,3]]" ] }, { "cell_type": "code", "execution_count": 28, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "a 0\n", "b 1\n", "dtype: int64" ] }, "execution_count": 28, "metadata": {}, "output_type": "execute_result" } ], "source": [ "s[s<2]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A subtle difference when indexing in pandas is that, unlike in normal Python, slicing with labels is inclusive at the end-point." 
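, "\n", "\n", "For example, with the Series `s` defined above:\n", "```python\n", "s[\"b\":\"c\"]   # label-based slice: the end-point \"c\" is included\n", "s[1:2]        # integer-based slice: end-point excluded, only \"b\"\n", "```"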
] }, { "cell_type": "code", "execution_count": 29, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "b 1\n", "c 2\n", "dtype: int64" ] }, "execution_count": 29, "metadata": {}, "output_type": "execute_result" } ], "source": [ "s[\"b\":\"c\"]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "All of the above also apply to DataFrames:" ] }, { "cell_type": "code", "execution_count": 30, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<div>\n", "<style scoped>\n", " .dataframe tbody tr th:only-of-type {\n", " vertical-align: middle;\n", " }\n", "\n", " .dataframe tbody tr th {\n", " vertical-align: top;\n", " }\n", "\n", " .dataframe thead th {\n", " text-align: right;\n", " }\n", "</style>\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>Edinburgh</th>\n", " <th>Glasgow</th>\n", " <th>Aberdeen</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>a</th>\n", " <td>0</td>\n", " <td>1</td>\n", " <td>2</td>\n", " </tr>\n", " <tr>\n", " <th>b</th>\n", " <td>3</td>\n", " <td>4</td>\n", " <td>5</td>\n", " </tr>\n", " <tr>\n", " <th>c</th>\n", " <td>6</td>\n", " <td>7</td>\n", " <td>8</td>\n", " </tr>\n", " </tbody>\n", "</table>\n", "</div>" ], "text/plain": [ " Edinburgh Glasgow Aberdeen\n", "a 0 1 2\n", "b 3 4 5\n", "c 6 7 8" ] }, "execution_count": 30, "metadata": {}, "output_type": "execute_result" } ], "source": [ "data = np.reshape(np.arange(9), (3,3))\n", "df = pd.DataFrame(data, index=[\"a\", \"b\", \"c\"],\n", " columns=[\"Edinburgh\", \"Glasgow\", \"Aberdeen\"])\n", "\n", "df" ] }, { "cell_type": "code", "execution_count": 31, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<div>\n", "<style scoped>\n", " .dataframe tbody tr th:only-of-type {\n", " vertical-align: middle;\n", " }\n", "\n", " .dataframe tbody tr th {\n", " vertical-align: top;\n", " }\n", "\n", " .dataframe thead th {\n", " text-align: right;\n", " }\n", "</style>\n", 
"<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>Edinburgh</th>\n", " <th>Glasgow</th>\n", " <th>Aberdeen</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>a</th>\n", " <td>0</td>\n", " <td>1</td>\n", " <td>2</td>\n", " </tr>\n", " <tr>\n", " <th>b</th>\n", " <td>3</td>\n", " <td>4</td>\n", " <td>5</td>\n", " </tr>\n", " </tbody>\n", "</table>\n", "</div>" ], "text/plain": [ " Edinburgh Glasgow Aberdeen\n", "a 0 1 2\n", "b 3 4 5" ] }, "execution_count": 31, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df[:2]" ] }, { "cell_type": "code", "execution_count": 32, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "a 1\n", "b 4\n", "c 7\n", "Name: Glasgow, dtype: int64" ] }, "execution_count": 32, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df[\"Glasgow\"]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## loc and iloc\n", "For DataFrame label-indexing on the rows, you can use `loc` for labels and `iloc` for integer-indexing." 
] }, { "cell_type": "code", "execution_count": 33, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Edinburgh 3\n", "Glasgow 4\n", "Aberdeen 5\n", "Name: b, dtype: int64" ] }, "execution_count": 33, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df.loc[\"b\"]" ] }, { "cell_type": "code", "execution_count": 34, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Glasgow 4\n", "Aberdeen 5\n", "Name: b, dtype: int64" ] }, "execution_count": 34, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df.loc[\"b\", [\"Glasgow\", \"Aberdeen\"]]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's try `iloc`" ] }, { "cell_type": "code", "execution_count": 35, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Edinburgh 3\n", "Glasgow 4\n", "Aberdeen 5\n", "Name: b, dtype: int64" ] }, "execution_count": 35, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df.iloc[1]" ] }, { "cell_type": "code", "execution_count": 36, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Glasgow 4\n", "Aberdeen 5\n", "Name: b, dtype: int64" ] }, "execution_count": 36, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df.iloc[1, [1,2]]" ] }, { "cell_type": "code", "execution_count": 37, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<div>\n", "<style scoped>\n", " .dataframe tbody tr th:only-of-type {\n", " vertical-align: middle;\n", " }\n", "\n", " .dataframe tbody tr th {\n", " vertical-align: top;\n", " }\n", "\n", " .dataframe thead th {\n", " text-align: right;\n", " }\n", "</style>\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>Edinburgh</th>\n", " <th>Glasgow</th>\n", " <th>Aberdeen</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>a</th>\n", " <td>0</td>\n", " <td>1</td>\n", " <td>2</td>\n", " </tr>\n", " <tr>\n", " <th>b</th>\n", " <td>3</td>\n", " <td>4</td>\n", " 
<td>5</td>\n", " </tr>\n", " </tbody>\n", "</table>\n", "</div>" ], "text/plain": [ " Edinburgh Glasgow Aberdeen\n", "a 0 1 2\n", "b 3 4 5" ] }, "execution_count": 37, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df.iloc[:2]" ] }, { "cell_type": "code", "execution_count": 38, "metadata": {}, "outputs": [], "source": [ "# TODO : add table summary" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Arithmetic\n", "When you are performing arithmetic operations between two objects, if any index pairs are not the same, the respective index in the result will be the union of the index pair. Let's have a look" ] }, { "cell_type": "code", "execution_count": 39, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "a NaN\n", "b 1.0\n", "c 3.0\n", "d 5.0\n", "e NaN\n", "k NaN\n", "dtype: float64" ] }, "execution_count": 39, "metadata": {}, "output_type": "execute_result" } ], "source": [ "s1 = pd.Series(np.arange(5), index=[\"a\", \"b\", \"c\", \"d\", \"e\"])\n", "s2 = pd.Series(np.arange(4), index=[\"b\", \"c\", \"d\", \"k\"])\n", "s1 + s2" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The internal data alignment introduces missing values in the label locations that don't overlap. 
It is similar for DataFrames:" ] }, { "cell_type": "code", "execution_count": 42, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<div>\n", "<style scoped>\n", " .dataframe tbody tr th:only-of-type {\n", " vertical-align: middle;\n", " }\n", "\n", " .dataframe tbody tr th {\n", " vertical-align: top;\n", " }\n", "\n", " .dataframe thead th {\n", " text-align: right;\n", " }\n", "</style>\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>a</th>\n", " <th>b</th>\n", " <th>c</th>\n", " <th>d</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>0</th>\n", " <td>0</td>\n", " <td>1</td>\n", " <td>2</td>\n", " <td>3</td>\n", " </tr>\n", " <tr>\n", " <th>1</th>\n", " <td>4</td>\n", " <td>5</td>\n", " <td>6</td>\n", " <td>7</td>\n", " </tr>\n", " <tr>\n", " <th>2</th>\n", " <td>8</td>\n", " <td>9</td>\n", " <td>10</td>\n", " <td>11</td>\n", " </tr>\n", " </tbody>\n", "</table>\n", "</div>" ], "text/plain": [ " a b c d\n", "0 0 1 2 3\n", "1 4 5 6 7\n", "2 8 9 10 11" ] }, "execution_count": 42, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df1 = pd.DataFrame(np.arange(12).reshape((3,4)),\n", " columns=list(\"abcd\"))\n", "df1" ] }, { "cell_type": "code", "execution_count": 47, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<div>\n", "<style scoped>\n", " .dataframe tbody tr th:only-of-type {\n", " vertical-align: middle;\n", " }\n", "\n", " .dataframe tbody tr th {\n", " vertical-align: top;\n", " }\n", "\n", " .dataframe thead th {\n", " text-align: right;\n", " }\n", "</style>\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>c</th>\n", " <th>d</th>\n", " <th>e</th>\n", " <th>f</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>0</th>\n", " <td>0</td>\n", " <td>1</td>\n", " <td>2</td>\n", " <td>3</td>\n", " </tr>\n", " <tr>\n", " <th>1</th>\n", " 
<td>4</td>\n", " <td>5</td>\n", " <td>6</td>\n", " <td>7</td>\n", " </tr>\n", " <tr>\n", " <th>2</th>\n", " <td>8</td>\n", " <td>9</td>\n", " <td>10</td>\n", " <td>11</td>\n", " </tr>\n", " <tr>\n", " <th>3</th>\n", " <td>12</td>\n", " <td>13</td>\n", " <td>14</td>\n", " <td>15</td>\n", " </tr>\n", " </tbody>\n", "</table>\n", "</div>" ], "text/plain": [ " c d e f\n", "0 0 1 2 3\n", "1 4 5 6 7\n", "2 8 9 10 11\n", "3 12 13 14 15" ] }, "execution_count": 47, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df2 = pd.DataFrame(np.arange(16).reshape((4,4)),\n", " columns=list(\"cdef\"))\n", "df2" ] }, { "cell_type": "code", "execution_count": 48, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<div>\n", "<style scoped>\n", " .dataframe tbody tr th:only-of-type {\n", " vertical-align: middle;\n", " }\n", "\n", " .dataframe tbody tr th {\n", " vertical-align: top;\n", " }\n", "\n", " .dataframe thead th {\n", " text-align: right;\n", " }\n", "</style>\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>a</th>\n", " <th>b</th>\n", " <th>c</th>\n", " <th>d</th>\n", " <th>e</th>\n", " <th>f</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>0</th>\n", " <td>NaN</td>\n", " <td>NaN</td>\n", " <td>2.0</td>\n", " <td>4.0</td>\n", " <td>NaN</td>\n", " <td>NaN</td>\n", " </tr>\n", " <tr>\n", " <th>1</th>\n", " <td>NaN</td>\n", " <td>NaN</td>\n", " <td>10.0</td>\n", " <td>12.0</td>\n", " <td>NaN</td>\n", " <td>NaN</td>\n", " </tr>\n", " <tr>\n", " <th>2</th>\n", " <td>NaN</td>\n", " <td>NaN</td>\n", " <td>18.0</td>\n", " <td>20.0</td>\n", " <td>NaN</td>\n", " <td>NaN</td>\n", " </tr>\n", " <tr>\n", " <th>3</th>\n", " <td>NaN</td>\n", " <td>NaN</td>\n", " <td>NaN</td>\n", " <td>NaN</td>\n", " <td>NaN</td>\n", " <td>NaN</td>\n", " </tr>\n", " </tbody>\n", "</table>\n", "</div>" ], "text/plain": [ " a b c d e f\n", "0 NaN NaN 2.0 4.0 NaN NaN\n", "1 NaN NaN 10.0 12.0 NaN 
NaN\n", "2 NaN NaN 18.0 20.0 NaN NaN\n", "3 NaN NaN NaN NaN NaN NaN" ] }, "execution_count": 48, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# adding the two\n", "df1+df2" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Notice how where we don't have matching values from `df1` and `df2` the output of the addition opretion is `NaN`, since there are no two numbers to add.\n", "\n", "Well, we can \"fix\" that by filling in the `NaN` values. This effectively tells pandas where there are no two values to add, assume that the missing value is just zero." ] }, { "cell_type": "code", "execution_count": 50, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<div>\n", "<style scoped>\n", " .dataframe tbody tr th:only-of-type {\n", " vertical-align: middle;\n", " }\n", "\n", " .dataframe tbody tr th {\n", " vertical-align: top;\n", " }\n", "\n", " .dataframe thead th {\n", " text-align: right;\n", " }\n", "</style>\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>a</th>\n", " <th>b</th>\n", " <th>c</th>\n", " <th>d</th>\n", " <th>e</th>\n", " <th>f</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>0</th>\n", " <td>0.0</td>\n", " <td>1.0</td>\n", " <td>2.0</td>\n", " <td>4.0</td>\n", " <td>2.0</td>\n", " <td>3.0</td>\n", " </tr>\n", " <tr>\n", " <th>1</th>\n", " <td>4.0</td>\n", " <td>5.0</td>\n", " <td>10.0</td>\n", " <td>12.0</td>\n", " <td>6.0</td>\n", " <td>7.0</td>\n", " </tr>\n", " <tr>\n", " <th>2</th>\n", " <td>8.0</td>\n", " <td>9.0</td>\n", " <td>18.0</td>\n", " <td>20.0</td>\n", " <td>10.0</td>\n", " <td>11.0</td>\n", " </tr>\n", " <tr>\n", " <th>3</th>\n", " <td>NaN</td>\n", " <td>NaN</td>\n", " <td>12.0</td>\n", " <td>13.0</td>\n", " <td>14.0</td>\n", " <td>15.0</td>\n", " </tr>\n", " </tbody>\n", "</table>\n", "</div>" ], "text/plain": [ " a b c d e f\n", "0 0.0 1.0 2.0 4.0 2.0 3.0\n", "1 4.0 5.0 10.0 12.0 6.0 7.0\n", "2 8.0 9.0 
18.0 20.0 10.0 11.0\n", "3 NaN NaN 12.0 13.0 14.0 15.0" ] }, "execution_count": 50, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df1.add(df2, fill_value=0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Another important point: although the normal arithmetic operators work here, there also exist dedicated methods like `DataFrame.add()` which achieve the same functionality plus a bit extra (such as the `fill_value` argument used above).\n", "\n", "Here's a list of all arithmetic operations within pandas:\n", "\n", "| Operator | Method | Description |\n", "| -- | -- | -- |\n", "| + | add, radd | Addition |\n", "| - | sub, rsub | Subtraction |\n", "| / | div, rdiv | Division |\n", "| // | floordiv, rfloordiv | Floor division |\n", "| * | mul, rmul | Multiplication |\n", "| ** | pow, rpow | Exponentiation |\n", "\n", "Notice how some of the methods have `r` in front of them? That stands for reversed, and it effectively reverses the operands. For example,\n", "\n", "```python\n", "df1.rdiv(df2)\n", "```\n", "would be the same as\n", "```python\n", "df2/df1\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Broadcasting\n", "Similar to NumPy, in pandas you can also broadcast data structures. 
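As a quick reminder of the NumPy behaviour this mirrors, subtracting a 1-D array from a 2-D array applies it to every row (a minimal sketch):

```python
import numpy as np

arr = np.arange(16).reshape((4, 4))
# the 1-D array [0, 1, 2, 3] is subtracted from every row of arr
res = arr - np.arange(4)
res
```

pandas extends this idea to labelled rows and columns.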
Let's consider a simple example:" ] }, { "cell_type": "code", "execution_count": 59, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<div>\n", "<style scoped>\n", " .dataframe tbody tr th:only-of-type {\n", " vertical-align: middle;\n", " }\n", "\n", " .dataframe tbody tr th {\n", " vertical-align: top;\n", " }\n", "\n", " .dataframe thead th {\n", " text-align: right;\n", " }\n", "</style>\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>0</th>\n", " <th>1</th>\n", " <th>2</th>\n", " <th>3</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>0</th>\n", " <td>0</td>\n", " <td>1</td>\n", " <td>2</td>\n", " <td>3</td>\n", " </tr>\n", " <tr>\n", " <th>1</th>\n", " <td>4</td>\n", " <td>5</td>\n", " <td>6</td>\n", " <td>7</td>\n", " </tr>\n", " <tr>\n", " <th>2</th>\n", " <td>8</td>\n", " <td>9</td>\n", " <td>10</td>\n", " <td>11</td>\n", " </tr>\n", " <tr>\n", " <th>3</th>\n", " <td>12</td>\n", " <td>13</td>\n", " <td>14</td>\n", " <td>15</td>\n", " </tr>\n", " </tbody>\n", "</table>\n", "</div>" ], "text/plain": [ " 0 1 2 3\n", "0 0 1 2 3\n", "1 4 5 6 7\n", "2 8 9 10 11\n", "3 12 13 14 15" ] }, "execution_count": 59, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df1 = pd.DataFrame(np.arange(16).reshape((4,4)))\n", "df1" ] }, { "cell_type": "code", "execution_count": 60, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0 0\n", "1 1\n", "2 2\n", "3 3\n", "dtype: int64" ] }, "execution_count": 60, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df2 = pd.Series(np.arange(4))\n", "df2" ] }, { "cell_type": "code", "execution_count": 61, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<div>\n", "<style scoped>\n", " .dataframe tbody tr th:only-of-type {\n", " vertical-align: middle;\n", " }\n", "\n", " .dataframe tbody tr th {\n", " vertical-align: top;\n", " }\n", "\n", " .dataframe thead th {\n", " text-align: 
right;\n", " }\n", "</style>\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>0</th>\n", " <th>1</th>\n", " <th>2</th>\n", " <th>3</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>0</th>\n", " <td>0</td>\n", " <td>0</td>\n", " <td>0</td>\n", " <td>0</td>\n", " </tr>\n", " <tr>\n", " <th>1</th>\n", " <td>4</td>\n", " <td>4</td>\n", " <td>4</td>\n", " <td>4</td>\n", " </tr>\n", " <tr>\n", " <th>2</th>\n", " <td>8</td>\n", " <td>8</td>\n", " <td>8</td>\n", " <td>8</td>\n", " </tr>\n", " <tr>\n", " <th>3</th>\n", " <td>12</td>\n", " <td>12</td>\n", " <td>12</td>\n", " <td>12</td>\n", " </tr>\n", " </tbody>\n", "</table>\n", "</div>" ], "text/plain": [ " 0 1 2 3\n", "0 0 0 0 0\n", "1 4 4 4 4\n", "2 8 8 8 8\n", "3 12 12 12 12" ] }, "execution_count": 61, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df1 - df2" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Notice how the Series of [0, 1, 2, 3] got subtracted from each row? 
That is called broadcasting.\n", "\n", "It can also be used along the columns, but for that you have to use the arithmetic methods together with the `axis` argument" ] }, { "cell_type": "code", "execution_count": 62, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<div>\n", "<style scoped>\n", " .dataframe tbody tr th:only-of-type {\n", " vertical-align: middle;\n", " }\n", "\n", " .dataframe tbody tr th {\n", " vertical-align: top;\n", " }\n", "\n", " .dataframe thead th {\n", " text-align: right;\n", " }\n", "</style>\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>0</th>\n", " <th>1</th>\n", " <th>2</th>\n", " <th>3</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>0</th>\n", " <td>0</td>\n", " <td>1</td>\n", " <td>2</td>\n", " <td>3</td>\n", " </tr>\n", " <tr>\n", " <th>1</th>\n", " <td>3</td>\n", " <td>4</td>\n", " <td>5</td>\n", " <td>6</td>\n", " </tr>\n", " <tr>\n", " <th>2</th>\n", " <td>6</td>\n", " <td>7</td>\n", " <td>8</td>\n", " <td>9</td>\n", " </tr>\n", " <tr>\n", " <th>3</th>\n", " <td>9</td>\n", " <td>10</td>\n", " <td>11</td>\n", " <td>12</td>\n", " </tr>\n", " </tbody>\n", "</table>\n", "</div>" ], "text/plain": [ " 0 1 2 3\n", "0 0 1 2 3\n", "1 3 4 5 6\n", "2 6 7 8 9\n", "3 9 10 11 12" ] }, "execution_count": 62, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df1.sub(df2, axis=\"index\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Sorting\n", "Sorting is an important built-in operation of pandas. 
Let's have a look at how you can do it:" ] }, { "cell_type": "code", "execution_count": 68, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<div>\n", "<style scoped>\n", " .dataframe tbody tr th:only-of-type {\n", " vertical-align: middle;\n", " }\n", "\n", " .dataframe tbody tr th {\n", " vertical-align: top;\n", " }\n", "\n", " .dataframe thead th {\n", " text-align: right;\n", " }\n", "</style>\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>0</th>\n", " <th>1</th>\n", " <th>2</th>\n", " <th>3</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>b</th>\n", " <td>0</td>\n", " <td>1</td>\n", " <td>2</td>\n", " <td>3</td>\n", " </tr>\n", " <tr>\n", " <th>a</th>\n", " <td>4</td>\n", " <td>5</td>\n", " <td>6</td>\n", " <td>7</td>\n", " </tr>\n", " <tr>\n", " <th>d</th>\n", " <td>8</td>\n", " <td>9</td>\n", " <td>10</td>\n", " <td>11</td>\n", " </tr>\n", " <tr>\n", " <th>c</th>\n", " <td>12</td>\n", " <td>13</td>\n", " <td>14</td>\n", " <td>15</td>\n", " </tr>\n", " </tbody>\n", "</table>\n", "</div>" ], "text/plain": [ " 0 1 2 3\n", "b 0 1 2 3\n", "a 4 5 6 7\n", "d 8 9 10 11\n", "c 12 13 14 15" ] }, "execution_count": 68, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df1 = pd.DataFrame(np.arange(16).reshape((4,4)), index=[\"b\", \"a\", \"d\", \"c\"])\n", "df1" ] }, { "cell_type": "code", "execution_count": 71, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<div>\n", "<style scoped>\n", " .dataframe tbody tr th:only-of-type {\n", " vertical-align: middle;\n", " }\n", "\n", " .dataframe tbody tr th {\n", " vertical-align: top;\n", " }\n", "\n", " .dataframe thead th {\n", " text-align: right;\n", " }\n", "</style>\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>0</th>\n", " <th>1</th>\n", " <th>2</th>\n", " <th>3</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " 
<tr>\n", " <th>a</th>\n", " <td>4</td>\n", " <td>5</td>\n", " <td>6</td>\n", " <td>7</td>\n", " </tr>\n", " <tr>\n", " <th>b</th>\n", " <td>0</td>\n", " <td>1</td>\n", " <td>2</td>\n", " <td>3</td>\n", " </tr>\n", " <tr>\n", " <th>c</th>\n", " <td>12</td>\n", " <td>13</td>\n", " <td>14</td>\n", " <td>15</td>\n", " </tr>\n", " <tr>\n", " <th>d</th>\n", " <td>8</td>\n", " <td>9</td>\n", " <td>10</td>\n", " <td>11</td>\n", " </tr>\n", " </tbody>\n", "</table>\n", "</div>" ], "text/plain": [ " 0 1 2 3\n", "a 4 5 6 7\n", "b 0 1 2 3\n", "c 12 13 14 15\n", "d 8 9 10 11" ] }, "execution_count": 71, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df2 = df1.sort_index()\n", "df2" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Easy as that. Furthermore, you can also sort along the column axis with\n", "```python\n", "df1.sort_index(axis=1)\n", "```\n", "\n", "You can also sort by the actual values inside, but then you have to give the column by which you want to sort." ] }, { "cell_type": "code", "execution_count": 81, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<div>\n", "<style scoped>\n", " .dataframe tbody tr th:only-of-type {\n", " vertical-align: middle;\n", " }\n", "\n", " .dataframe tbody tr th {\n", " vertical-align: top;\n", " }\n", "\n", " .dataframe thead th {\n", " text-align: right;\n", " }\n", "</style>\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>a</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>0</th>\n", " <td>4</td>\n", " </tr>\n", " <tr>\n", " <th>1</th>\n", " <td>3</td>\n", " </tr>\n", " <tr>\n", " <th>2</th>\n", " <td>6</td>\n", " </tr>\n", " <tr>\n", " <th>3</th>\n", " <td>1</td>\n", " </tr>\n", " <tr>\n", " <th>4</th>\n", " <td>3</td>\n", " </tr>\n", " <tr>\n", " <th>5</th>\n", " <td>5</td>\n", " </tr>\n", " </tbody>\n", "</table>\n", "</div>" ], "text/plain": [ " a\n", "0 4\n", "1 3\n", "2 6\n", "3 1\n", "4 3\n", 
"5 5" ] }, "execution_count": 81, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df1 = pd.DataFrame([4, 3, 6, 1, 3, 5], columns=[\"a\"])\n", "df1" ] }, { "cell_type": "code", "execution_count": 82, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<div>\n", "<style scoped>\n", " .dataframe tbody tr th:only-of-type {\n", " vertical-align: middle;\n", " }\n", "\n", " .dataframe tbody tr th {\n", " vertical-align: top;\n", " }\n", "\n", " .dataframe thead th {\n", " text-align: right;\n", " }\n", "</style>\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>a</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>3</th>\n", " <td>1</td>\n", " </tr>\n", " <tr>\n", " <th>1</th>\n", " <td>3</td>\n", " </tr>\n", " <tr>\n", " <th>4</th>\n", " <td>3</td>\n", " </tr>\n", " <tr>\n", " <th>0</th>\n", " <td>4</td>\n", " </tr>\n", " <tr>\n", " <th>5</th>\n", " <td>5</td>\n", " </tr>\n", " <tr>\n", " <th>2</th>\n", " <td>6</td>\n", " </tr>\n", " </tbody>\n", "</table>\n", "</div>" ], "text/plain": [ " a\n", "3 1\n", "1 3\n", "4 3\n", "0 4\n", "5 5\n", "2 6" ] }, "execution_count": 82, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df1.sort_values(by=\"a\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Summarizing and computing descriptive stats\n", "`pandas` is equipped with common mathematical and statistical methods. Most of them fall into the category of reductions or summary statistics: methods that extract a single value from a list of values. 
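The simplest case is a `Series`, where a reduction collapses all of the values into one number. A quick sketch, reusing the Series from the start of this notebook:

```python
import pandas as pd

s = pd.Series([4, 7, -5, 3])
# a reduction: one number out of four values
s.mean()  # (4 + 7 - 5 + 3) / 4 = 2.25
```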
For example, let's create a small `DataFrame` and sum up each of its columns:" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<div>\n", "<style scoped>\n", " .dataframe tbody tr th:only-of-type {\n", " vertical-align: middle;\n", " }\n", "\n", " .dataframe tbody tr th {\n", " vertical-align: top;\n", " }\n", "\n", " .dataframe thead th {\n", " text-align: right;\n", " }\n", "</style>\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>a</th>\n", " <th>b</th>\n", " <th>c</th>\n", " <th>d</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>0</th>\n", " <td>0</td>\n", " <td>1</td>\n", " <td>2</td>\n", " <td>3</td>\n", " </tr>\n", " <tr>\n", " <th>1</th>\n", " <td>4</td>\n", " <td>5</td>\n", " <td>6</td>\n", " <td>7</td>\n", " </tr>\n", " <tr>\n", " <th>2</th>\n", " <td>8</td>\n", " <td>9</td>\n", " <td>10</td>\n", " <td>11</td>\n", " </tr>\n", " <tr>\n", " <th>3</th>\n", " <td>12</td>\n", " <td>13</td>\n", " <td>14</td>\n", " <td>15</td>\n", " </tr>\n", " <tr>\n", " <th>4</th>\n", " <td>16</td>\n", " <td>17</td>\n", " <td>18</td>\n", " <td>19</td>\n", " </tr>\n", " </tbody>\n", "</table>\n", "</div>" ], "text/plain": [ " a b c d\n", "0 0 1 2 3\n", "1 4 5 6 7\n", "2 8 9 10 11\n", "3 12 13 14 15\n", "4 16 17 18 19" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df = pd.DataFrame(np.arange(20).reshape(5,4),\n", " columns=[\"a\", \"b\", \"c\", \"d\"])\n", "df" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "a 40\n", "b 45\n", "c 50\n", "d 55\n", "dtype: int64" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df.sum()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Notice how that created the sum of each column?\n", "\n", "Well, you can actually sum the other way 
around, by passing the `axis` argument to `sum()`:" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0 6\n", "1 22\n", "2 38\n", "3 54\n", "4 70\n", "dtype: int64" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df.sum(axis=\"columns\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A similar method also exists for obtaining the mean of data:" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "a 8.0\n", "b 9.0\n", "c 10.0\n", "d 11.0\n", "dtype: float64" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df.mean()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, the mother of all the methods discussed here is `describe()`:" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<div>\n", "<style scoped>\n", " .dataframe tbody tr th:only-of-type {\n", " vertical-align: middle;\n", " }\n", "\n", " .dataframe tbody tr th {\n", " vertical-align: top;\n", " }\n", "\n", " .dataframe thead th {\n", " text-align: right;\n", " }\n", "</style>\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>a</th>\n", " <th>b</th>\n", " <th>c</th>\n", " <th>d</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>count</th>\n", " <td>5.000000</td>\n", " <td>5.000000</td>\n", " <td>5.000000</td>\n", " <td>5.000000</td>\n", " </tr>\n", " <tr>\n", " <th>mean</th>\n", " <td>8.000000</td>\n", " <td>9.000000</td>\n", " <td>10.000000</td>\n", " <td>11.000000</td>\n", " </tr>\n", " <tr>\n", " <th>std</th>\n", " <td>6.324555</td>\n", " <td>6.324555</td>\n", " <td>6.324555</td>\n", " <td>6.324555</td>\n", " </tr>\n", " <tr>\n", " <th>min</th>\n", " <td>0.000000</td>\n", " <td>1.000000</td>\n", " <td>2.000000</td>\n", " 
<td>3.000000</td>\n", " </tr>\n", " <tr>\n", " <th>25%</th>\n", " <td>4.000000</td>\n", " <td>5.000000</td>\n", " <td>6.000000</td>\n", " <td>7.000000</td>\n", " </tr>\n", " <tr>\n", " <th>50%</th>\n", " <td>8.000000</td>\n", " <td>9.000000</td>\n", " <td>10.000000</td>\n", " <td>11.000000</td>\n", " </tr>\n", " <tr>\n", " <th>75%</th>\n", " <td>12.000000</td>\n", " <td>13.000000</td>\n", " <td>14.000000</td>\n", " <td>15.000000</td>\n", " </tr>\n", " <tr>\n", " <th>max</th>\n", " <td>16.000000</td>\n", " <td>17.000000</td>\n", " <td>18.000000</td>\n", " <td>19.000000</td>\n", " </tr>\n", " </tbody>\n", "</table>\n", "</div>" ], "text/plain": [ " a b c d\n", "count 5.000000 5.000000 5.000000 5.000000\n", "mean 8.000000 9.000000 10.000000 11.000000\n", "std 6.324555 6.324555 6.324555 6.324555\n", "min 0.000000 1.000000 2.000000 3.000000\n", "25% 4.000000 5.000000 6.000000 7.000000\n", "50% 8.000000 9.000000 10.000000 11.000000\n", "75% 12.000000 13.000000 14.000000 15.000000\n", "max 16.000000 17.000000 18.000000 19.000000" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df.describe()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here are all of the summary methods:\n", "\n", "| Method | Description |\n", "| -- | -- |\n", "| count | Number of non-NA values |\n", "| describe | Set of summary statistics |\n", "| min, max | Minimum, maximum values |\n", "| argmin, argmax | Index locations at which the minimum or maximum value is obtained | \n", "| quantile | Compute sample quantile ranging from 0 to 1 |\n", "| sum | Sum of values |\n", "| mean | Mean of values |\n", "| median | Arithmetic median of values |\n", "| mad | Mean absolute deviation from mean value |\n", "| prod | Product of all values |\n", "| var | Sample variance of values |\n", "| std | Sample standard deviation of values |\n", "| cumsum | Cumulative sum of values |\n", "| cummin, cummax | Cumulative minimum or maximum of values, respectively 
|\n", "| cumprod | Cumulative product of values |" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Data Loading and Storing\n", "Accessing data is a necessary first step for data science. In this section the focus will be on data input and output in various formats using `pandas`.\n", "\n", "Data usually fall into these categories:\n", "- text files\n", "- binary files (more efficient space-wise)\n", "- web data\n", "\n", "## Text formats\n", "The most common format in this category is by far `.csv`. This is an easy-to-read file format which is usually visualised as a spreadsheet. The data itself is usually separated with a `,`, which is called the **delimiter**.\n", "\n", "Here is an example of a `.csv` file:\n", "\n", "```\n", "\"Sell\", \"List\", \"Living\", \"Rooms\", \"Beds\", \"Baths\", \"Age\", \"Acres\", \"Taxes\"\n", "142, 160, 28, 10, 5, 3, 60, 0.28, 3167\n", "175, 180, 18, 8, 4, 1, 12, 0.43, 4033\n", "129, 132, 13, 6, 3, 1, 41, 0.33, 1471\n", "138, 140, 17, 7, 3, 1, 22, 0.46, 3204\n", "232, 240, 25, 8, 4, 3, 5, 2.05, 3613\n", "135, 140, 18, 7, 4, 3, 9, 0.57, 3028\n", "150, 160, 20, 8, 4, 3, 18, 4.00, 3131\n", "207, 225, 22, 8, 4, 2, 16, 2.22, 5158\n", "271, 285, 30, 10, 5, 2, 30, 0.53, 5702\n", " 89, 90, 10, 5, 3, 1, 43, 0.30, 2054\n", "```\n", "\n", "It details home sale statistics. The first line is called the header, and you can imagine that it is the name of the columns of a spreadsheet.\n", "\n", "Let's now see how we can load this data and analyse it. The file is located in the folder `data` and is called `homes.csv`. 
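Before loading the file with pandas, it helps to see what the delimiter actually does. Python's built-in `csv` module splits each line on the delimiter; here is a minimal sketch using an inline sample instead of the actual file:

```python
import csv
import io

# a small inline stand-in for the first lines of homes.csv
sample = '"Sell", "List", "Living"\n142, 160, 28\n175, 180, 18\n'
# skipinitialspace ignores the blank after each comma
for row in csv.reader(io.StringIO(sample), skipinitialspace=True):
    print(row)
```

Each line becomes a list of strings; pandas does this splitting (and the type conversion) for us.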
We can read it like this:" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [], "source": [ "homes = pd.read_csv(\"data/homes.csv\")" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<div>\n", "<style scoped>\n", " .dataframe tbody tr th:only-of-type {\n", " vertical-align: middle;\n", " }\n", "\n", " .dataframe tbody tr th {\n", " vertical-align: top;\n", " }\n", "\n", " .dataframe thead th {\n", " text-align: right;\n", " }\n", "</style>\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>Sell</th>\n", " <th>List</th>\n", " <th>Living</th>\n", " <th>Rooms</th>\n", " <th>Beds</th>\n", " <th>Baths</th>\n", " <th>Age</th>\n", " <th>Acres</th>\n", " <th>Taxes</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>0</th>\n", " <td>142</td>\n", " <td>160</td>\n", " <td>28</td>\n", " <td>10</td>\n", " <td>5</td>\n", " <td>3</td>\n", " <td>60</td>\n", " <td>0.28</td>\n", " <td>3167</td>\n", " </tr>\n", " <tr>\n", " <th>1</th>\n", " <td>175</td>\n", " <td>180</td>\n", " <td>18</td>\n", " <td>8</td>\n", " <td>4</td>\n", " <td>1</td>\n", " <td>12</td>\n", " <td>0.43</td>\n", " <td>4033</td>\n", " </tr>\n", " <tr>\n", " <th>2</th>\n", " <td>129</td>\n", " <td>132</td>\n", " <td>13</td>\n", " <td>6</td>\n", " <td>3</td>\n", " <td>1</td>\n", " <td>41</td>\n", " <td>0.33</td>\n", " <td>1471</td>\n", " </tr>\n", " <tr>\n", " <th>3</th>\n", " <td>138</td>\n", " <td>140</td>\n", " <td>17</td>\n", " <td>7</td>\n", " <td>3</td>\n", " <td>1</td>\n", " <td>22</td>\n", " <td>0.46</td>\n", " <td>3204</td>\n", " </tr>\n", " <tr>\n", " <th>4</th>\n", " <td>232</td>\n", " <td>240</td>\n", " <td>25</td>\n", " <td>8</td>\n", " <td>4</td>\n", " <td>3</td>\n", " <td>5</td>\n", " <td>2.05</td>\n", " <td>3613</td>\n", " </tr>\n", " <tr>\n", " <th>5</th>\n", " <td>135</td>\n", " <td>140</td>\n", " <td>18</td>\n", " 
<td>7</td>\n", " <td>4</td>\n", " <td>3</td>\n", " <td>9</td>\n", " <td>0.57</td>\n", " <td>3028</td>\n", " </tr>\n", " <tr>\n", " <th>6</th>\n", " <td>150</td>\n", " <td>160</td>\n", " <td>20</td>\n", " <td>8</td>\n", " <td>4</td>\n", " <td>3</td>\n", " <td>18</td>\n", " <td>4.00</td>\n", " <td>3131</td>\n", " </tr>\n", " <tr>\n", " <th>7</th>\n", " <td>207</td>\n", " <td>225</td>\n", " <td>22</td>\n", " <td>8</td>\n", " <td>4</td>\n", " <td>2</td>\n", " <td>16</td>\n", " <td>2.22</td>\n", " <td>5158</td>\n", " </tr>\n", " <tr>\n", " <th>8</th>\n", " <td>271</td>\n", " <td>285</td>\n", " <td>30</td>\n", " <td>10</td>\n", " <td>5</td>\n", " <td>2</td>\n", " <td>30</td>\n", " <td>0.53</td>\n", " <td>5702</td>\n", " </tr>\n", " <tr>\n", " <th>9</th>\n", " <td>89</td>\n", " <td>90</td>\n", " <td>10</td>\n", " <td>5</td>\n", " <td>3</td>\n", " <td>1</td>\n", " <td>43</td>\n", " <td>0.30</td>\n", " <td>2054</td>\n", " </tr>\n", " <tr>\n", " <th>10</th>\n", " <td>153</td>\n", " <td>157</td>\n", " <td>22</td>\n", " <td>8</td>\n", " <td>3</td>\n", " <td>3</td>\n", " <td>18</td>\n", " <td>0.38</td>\n", " <td>4127</td>\n", " </tr>\n", " <tr>\n", " <th>11</th>\n", " <td>87</td>\n", " <td>90</td>\n", " <td>16</td>\n", " <td>7</td>\n", " <td>3</td>\n", " <td>1</td>\n", " <td>50</td>\n", " <td>0.65</td>\n", " <td>1445</td>\n", " </tr>\n", " <tr>\n", " <th>12</th>\n", " <td>234</td>\n", " <td>238</td>\n", " <td>25</td>\n", " <td>8</td>\n", " <td>4</td>\n", " <td>2</td>\n", " <td>2</td>\n", " <td>1.61</td>\n", " <td>2087</td>\n", " </tr>\n", " <tr>\n", " <th>13</th>\n", " <td>106</td>\n", " <td>116</td>\n", " <td>20</td>\n", " <td>8</td>\n", " <td>4</td>\n", " <td>1</td>\n", " <td>13</td>\n", " <td>0.22</td>\n", " <td>2818</td>\n", " </tr>\n", " <tr>\n", " <th>14</th>\n", " <td>175</td>\n", " <td>180</td>\n", " <td>22</td>\n", " <td>8</td>\n", " <td>4</td>\n", " <td>2</td>\n", " <td>15</td>\n", " <td>2.06</td>\n", " <td>3917</td>\n", " </tr>\n", " <tr>\n", " <th>15</th>\n", " 
<td>165</td>\n", " <td>170</td>\n", " <td>17</td>\n", " <td>8</td>\n", " <td>4</td>\n", " <td>2</td>\n", " <td>33</td>\n", " <td>0.46</td>\n", " <td>2220</td>\n", " </tr>\n", " <tr>\n", " <th>16</th>\n", " <td>166</td>\n", " <td>170</td>\n", " <td>23</td>\n", " <td>9</td>\n", " <td>4</td>\n", " <td>2</td>\n", " <td>37</td>\n", " <td>0.27</td>\n", " <td>3498</td>\n", " </tr>\n", " <tr>\n", " <th>17</th>\n", " <td>136</td>\n", " <td>140</td>\n", " <td>19</td>\n", " <td>7</td>\n", " <td>3</td>\n", " <td>1</td>\n", " <td>22</td>\n", " <td>0.63</td>\n", " <td>3607</td>\n", " </tr>\n", " <tr>\n", " <th>18</th>\n", " <td>148</td>\n", " <td>160</td>\n", " <td>17</td>\n", " <td>7</td>\n", " <td>3</td>\n", " <td>2</td>\n", " <td>13</td>\n", " <td>0.36</td>\n", " <td>3648</td>\n", " </tr>\n", " <tr>\n", " <th>19</th>\n", " <td>151</td>\n", " <td>153</td>\n", " <td>19</td>\n", " <td>8</td>\n", " <td>4</td>\n", " <td>2</td>\n", " <td>24</td>\n", " <td>0.34</td>\n", " <td>3561</td>\n", " </tr>\n", " <tr>\n", " <th>20</th>\n", " <td>180</td>\n", " <td>190</td>\n", " <td>24</td>\n", " <td>9</td>\n", " <td>4</td>\n", " <td>2</td>\n", " <td>10</td>\n", " <td>1.55</td>\n", " <td>4681</td>\n", " </tr>\n", " <tr>\n", " <th>21</th>\n", " <td>293</td>\n", " <td>305</td>\n", " <td>26</td>\n", " <td>8</td>\n", " <td>4</td>\n", " <td>3</td>\n", " <td>6</td>\n", " <td>0.46</td>\n", " <td>7088</td>\n", " </tr>\n", " <tr>\n", " <th>22</th>\n", " <td>167</td>\n", " <td>170</td>\n", " <td>20</td>\n", " <td>9</td>\n", " <td>4</td>\n", " <td>2</td>\n", " <td>46</td>\n", " <td>0.46</td>\n", " <td>3482</td>\n", " </tr>\n", " <tr>\n", " <th>23</th>\n", " <td>190</td>\n", " <td>193</td>\n", " <td>22</td>\n", " <td>9</td>\n", " <td>5</td>\n", " <td>2</td>\n", " <td>37</td>\n", " <td>0.48</td>\n", " <td>3920</td>\n", " </tr>\n", " <tr>\n", " <th>24</th>\n", " <td>184</td>\n", " <td>190</td>\n", " <td>21</td>\n", " <td>9</td>\n", " <td>5</td>\n", " <td>2</td>\n", " <td>27</td>\n", " <td>1.30</td>\n", " 
<td>4162</td>\n", " </tr>\n", " <tr>\n", " <th>25</th>\n", " <td>157</td>\n", " <td>165</td>\n", " <td>20</td>\n", " <td>8</td>\n", " <td>4</td>\n", " <td>2</td>\n", " <td>7</td>\n", " <td>0.30</td>\n", " <td>3785</td>\n", " </tr>\n", " <tr>\n", " <th>26</th>\n", " <td>110</td>\n", " <td>115</td>\n", " <td>16</td>\n", " <td>8</td>\n", " <td>4</td>\n", " <td>1</td>\n", " <td>26</td>\n", " <td>0.29</td>\n", " <td>3103</td>\n", " </tr>\n", " <tr>\n", " <th>27</th>\n", " <td>135</td>\n", " <td>145</td>\n", " <td>18</td>\n", " <td>7</td>\n", " <td>4</td>\n", " <td>1</td>\n", " <td>35</td>\n", " <td>0.43</td>\n", " <td>3363</td>\n", " </tr>\n", " <tr>\n", " <th>28</th>\n", " <td>567</td>\n", " <td>625</td>\n", " <td>64</td>\n", " <td>11</td>\n", " <td>4</td>\n", " <td>4</td>\n", " <td>4</td>\n", " <td>0.85</td>\n", " <td>12192</td>\n", " </tr>\n", " <tr>\n", " <th>29</th>\n", " <td>180</td>\n", " <td>185</td>\n", " <td>20</td>\n", " <td>8</td>\n", " <td>4</td>\n", " <td>2</td>\n", " <td>11</td>\n", " <td>1.00</td>\n", " <td>3831</td>\n", " </tr>\n", " <tr>\n", " <th>30</th>\n", " <td>183</td>\n", " <td>188</td>\n", " <td>17</td>\n", " <td>7</td>\n", " <td>3</td>\n", " <td>2</td>\n", " <td>16</td>\n", " <td>3.00</td>\n", " <td>3564</td>\n", " </tr>\n", " <tr>\n", " <th>31</th>\n", " <td>185</td>\n", " <td>193</td>\n", " <td>20</td>\n", " <td>9</td>\n", " <td>3</td>\n", " <td>2</td>\n", " <td>56</td>\n", " <td>6.49</td>\n", " <td>3765</td>\n", " </tr>\n", " <tr>\n", " <th>32</th>\n", " <td>152</td>\n", " <td>155</td>\n", " <td>17</td>\n", " <td>8</td>\n", " <td>4</td>\n", " <td>1</td>\n", " <td>33</td>\n", " <td>0.70</td>\n", " <td>3361</td>\n", " </tr>\n", " <tr>\n", " <th>33</th>\n", " <td>148</td>\n", " <td>153</td>\n", " <td>13</td>\n", " <td>6</td>\n", " <td>3</td>\n", " <td>2</td>\n", " <td>22</td>\n", " <td>0.39</td>\n", " <td>3950</td>\n", " </tr>\n", " <tr>\n", " <th>34</th>\n", " <td>152</td>\n", " <td>159</td>\n", " <td>15</td>\n", " <td>7</td>\n", " 
<td>3</td>\n", " <td>1</td>\n", " <td>25</td>\n", " <td>0.59</td>\n", " <td>3055</td>\n", " </tr>\n", " <tr>\n", " <th>35</th>\n", " <td>146</td>\n", " <td>150</td>\n", " <td>16</td>\n", " <td>7</td>\n", " <td>3</td>\n", " <td>1</td>\n", " <td>31</td>\n", " <td>0.36</td>\n", " <td>2950</td>\n", " </tr>\n", " <tr>\n", " <th>36</th>\n", " <td>170</td>\n", " <td>190</td>\n", " <td>24</td>\n", " <td>10</td>\n", " <td>3</td>\n", " <td>2</td>\n", " <td>33</td>\n", " <td>0.57</td>\n", " <td>3346</td>\n", " </tr>\n", " <tr>\n", " <th>37</th>\n", " <td>127</td>\n", " <td>130</td>\n", " <td>20</td>\n", " <td>8</td>\n", " <td>4</td>\n", " <td>1</td>\n", " <td>65</td>\n", " <td>0.40</td>\n", " <td>3334</td>\n", " </tr>\n", " <tr>\n", " <th>38</th>\n", " <td>265</td>\n", " <td>270</td>\n", " <td>36</td>\n", " <td>10</td>\n", " <td>6</td>\n", " <td>3</td>\n", " <td>33</td>\n", " <td>1.20</td>\n", " <td>5853</td>\n", " </tr>\n", " <tr>\n", " <th>39</th>\n", " <td>157</td>\n", " <td>163</td>\n", " <td>18</td>\n", " <td>8</td>\n", " <td>4</td>\n", " <td>2</td>\n", " <td>12</td>\n", " <td>1.13</td>\n", " <td>3982</td>\n", " </tr>\n", " <tr>\n", " <th>40</th>\n", " <td>128</td>\n", " <td>135</td>\n", " <td>17</td>\n", " <td>9</td>\n", " <td>4</td>\n", " <td>1</td>\n", " <td>25</td>\n", " <td>0.52</td>\n", " <td>3374</td>\n", " </tr>\n", " <tr>\n", " <th>41</th>\n", " <td>110</td>\n", " <td>120</td>\n", " <td>15</td>\n", " <td>8</td>\n", " <td>4</td>\n", " <td>2</td>\n", " <td>11</td>\n", " <td>0.59</td>\n", " <td>3119</td>\n", " </tr>\n", " <tr>\n", " <th>42</th>\n", " <td>123</td>\n", " <td>130</td>\n", " <td>18</td>\n", " <td>8</td>\n", " <td>4</td>\n", " <td>2</td>\n", " <td>43</td>\n", " <td>0.39</td>\n", " <td>3268</td>\n", " </tr>\n", " <tr>\n", " <th>43</th>\n", " <td>212</td>\n", " <td>230</td>\n", " <td>39</td>\n", " <td>12</td>\n", " <td>5</td>\n", " <td>3</td>\n", " <td>202</td>\n", " <td>4.29</td>\n", " <td>3648</td>\n", " </tr>\n", " <tr>\n", " <th>44</th>\n", " 
<td>145</td>\n", " <td>145</td>\n", " <td>18</td>\n", " <td>8</td>\n", " <td>4</td>\n", " <td>2</td>\n", " <td>44</td>\n", " <td>0.22</td>\n", " <td>2783</td>\n", " </tr>\n", " <tr>\n", " <th>45</th>\n", " <td>129</td>\n", " <td>135</td>\n", " <td>10</td>\n", " <td>6</td>\n", " <td>3</td>\n", " <td>1</td>\n", " <td>15</td>\n", " <td>1.00</td>\n", " <td>2438</td>\n", " </tr>\n", " <tr>\n", " <th>46</th>\n", " <td>143</td>\n", " <td>145</td>\n", " <td>21</td>\n", " <td>7</td>\n", " <td>4</td>\n", " <td>2</td>\n", " <td>10</td>\n", " <td>1.20</td>\n", " <td>3529</td>\n", " </tr>\n", " <tr>\n", " <th>47</th>\n", " <td>247</td>\n", " <td>252</td>\n", " <td>29</td>\n", " <td>9</td>\n", " <td>4</td>\n", " <td>2</td>\n", " <td>4</td>\n", " <td>1.25</td>\n", " <td>4626</td>\n", " </tr>\n", " <tr>\n", " <th>48</th>\n", " <td>111</td>\n", " <td>120</td>\n", " <td>15</td>\n", " <td>8</td>\n", " <td>3</td>\n", " <td>1</td>\n", " <td>97</td>\n", " <td>1.11</td>\n", " <td>3205</td>\n", " </tr>\n", " <tr>\n", " <th>49</th>\n", " <td>133</td>\n", " <td>145</td>\n", " <td>26</td>\n", " <td>7</td>\n", " <td>3</td>\n", " <td>1</td>\n", " <td>42</td>\n", " <td>0.36</td>\n", " <td>3059</td>\n", " </tr>\n", " </tbody>\n", "</table>\n", "</div>" ], "text/plain": [ " Sell List Living Rooms Beds Baths Age Acres Taxes\n", "0 142 160 28 10 5 3 60 0.28 3167\n", "1 175 180 18 8 4 1 12 0.43 4033\n", "2 129 132 13 6 3 1 41 0.33 1471\n", "3 138 140 17 7 3 1 22 0.46 3204\n", "4 232 240 25 8 4 3 5 2.05 3613\n", "5 135 140 18 7 4 3 9 0.57 3028\n", "6 150 160 20 8 4 3 18 4.00 3131\n", "7 207 225 22 8 4 2 16 2.22 5158\n", "8 271 285 30 10 5 2 30 0.53 5702\n", "9 89 90 10 5 3 1 43 0.30 2054\n", "10 153 157 22 8 3 3 18 0.38 4127\n", "11 87 90 16 7 3 1 50 0.65 1445\n", "12 234 238 25 8 4 2 2 1.61 2087\n", "13 106 116 20 8 4 1 13 0.22 2818\n", "14 175 180 22 8 4 2 15 2.06 3917\n", "15 165 170 17 8 4 2 33 0.46 2220\n", "16 166 170 23 9 4 2 37 0.27 3498\n", "17 136 140 19 7 3 1 22 0.63 3607\n", "18 148 160 
17 7 3 2 13 0.36 3648\n", "19 151 153 19 8 4 2 24 0.34 3561\n", "20 180 190 24 9 4 2 10 1.55 4681\n", "21 293 305 26 8 4 3 6 0.46 7088\n", "22 167 170 20 9 4 2 46 0.46 3482\n", "23 190 193 22 9 5 2 37 0.48 3920\n", "24 184 190 21 9 5 2 27 1.30 4162\n", "25 157 165 20 8 4 2 7 0.30 3785\n", "26 110 115 16 8 4 1 26 0.29 3103\n", "27 135 145 18 7 4 1 35 0.43 3363\n", "28 567 625 64 11 4 4 4 0.85 12192\n", "29 180 185 20 8 4 2 11 1.00 3831\n", "30 183 188 17 7 3 2 16 3.00 3564\n", "31 185 193 20 9 3 2 56 6.49 3765\n", "32 152 155 17 8 4 1 33 0.70 3361\n", "33 148 153 13 6 3 2 22 0.39 3950\n", "34 152 159 15 7 3 1 25 0.59 3055\n", "35 146 150 16 7 3 1 31 0.36 2950\n", "36 170 190 24 10 3 2 33 0.57 3346\n", "37 127 130 20 8 4 1 65 0.40 3334\n", "38 265 270 36 10 6 3 33 1.20 5853\n", "39 157 163 18 8 4 2 12 1.13 3982\n", "40 128 135 17 9 4 1 25 0.52 3374\n", "41 110 120 15 8 4 2 11 0.59 3119\n", "42 123 130 18 8 4 2 43 0.39 3268\n", "43 212 230 39 12 5 3 202 4.29 3648\n", "44 145 145 18 8 4 2 44 0.22 2783\n", "45 129 135 10 6 3 1 15 1.00 2438\n", "46 143 145 21 7 4 2 10 1.20 3529\n", "47 247 252 29 9 4 2 4 1.25 4626\n", "48 111 120 15 8 3 1 97 1.11 3205\n", "49 133 145 26 7 3 1 42 0.36 3059" ] }, "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ "homes" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Easy right?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise\n", "Find the mean selling price of the homes in `data/homes.csv`" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `read_csv` function has a lot of optional arguments (more than 50). It's impossible to memorise all of them - it's usually best just to look up the particular functionality when you need it. 
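For a taste of those optional arguments, here is a minimal sketch; it reads from an inline string standing in for `data/homes.csv`, so the particular choices of `nrows` and `usecols` are just illustrative:

```python
import io

import pandas as pd

csv_text = "Sell,List,Taxes\n142,160,3167\n175,180,4033\n129,132,1471\n"
# nrows limits how many data rows are parsed; usecols picks specific columns
subset = pd.read_csv(io.StringIO(csv_text), nrows=2, usecols=["Sell", "Taxes"])
subset
```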
\n", "\n", "You can search `pandas read_csv` online and find all of the documentation.\n", "\n", "There's also many other functions that can read textual data. Here are some of them:\n", "\n", "| Function | Description\n", "| -- | -- |\n", "| read_csv | Load delimited data from a file, URL, or file-like object. Default delimiter is comma `,` |\n", "| read_table | Load delimited data from a file, URL, or file-like object. Default delimiter is tab `\\t` |\n", "| read_fwf | Read data in fixed0width column format (i.e. no delimiters |\n", "| read_clipboard | Reads the last object you have copied (Ctrl-C) |\n", "| read_excel | Read tabular data from Excel XLS or XLSX file |\n", "| read_hdf | Read HDF5 file written by pandas |\n", "| read_html | Read all tables found in the given HTML document |\n", "| read_json | Read dara from a JSON string representation |\n", "| read_sql | Read the results of a SQL query |\n", "\n", "*Note: there are also other loading functions which are not touched upon here*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise\n", "There is another file in the data folder called `homes.xlsx`. Can you read it? Can you spot anythin different?" 
] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<div>\n", "<style scoped>\n", " .dataframe tbody tr th:only-of-type {\n", " vertical-align: middle;\n", " }\n", "\n", " .dataframe tbody tr th {\n", " vertical-align: top;\n", " }\n", "\n", " .dataframe thead th {\n", " text-align: right;\n", " }\n", "</style>\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>Sell</th>\n", " <th>List</th>\n", " <th>Living</th>\n", " <th>Rooms</th>\n", " <th>Beds</th>\n", " <th>Baths</th>\n", " <th>Age</th>\n", " <th>Acres</th>\n", " <th>Taxes</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>0</th>\n", " <td>142</td>\n", " <td>160.0</td>\n", " <td>28.0</td>\n", " <td>10.0</td>\n", " <td>5.0</td>\n", " <td>3.0</td>\n", " <td>60.0</td>\n", " <td>0.28</td>\n", " <td>3167.0</td>\n", " </tr>\n", " <tr>\n", " <th>1</th>\n", " <td>175</td>\n", " <td>180.0</td>\n", " <td>18.0</td>\n", " <td>8.0</td>\n", " <td>4.0</td>\n", " <td>1.0</td>\n", " <td>12.0</td>\n", " <td>0.43</td>\n", " <td>4033.0</td>\n", " </tr>\n", " <tr>\n", " <th>2</th>\n", " <td>129</td>\n", " <td>132.0</td>\n", " <td>13.0</td>\n", " <td>6.0</td>\n", " <td>3.0</td>\n", " <td>1.0</td>\n", " <td>41.0</td>\n", " <td>0.33</td>\n", " <td>1471.0</td>\n", " </tr>\n", " <tr>\n", " <th>3</th>\n", " <td>138</td>\n", " <td>140.0</td>\n", " <td>17.0</td>\n", " <td>7.0</td>\n", " <td>3.0</td>\n", " <td>1.0</td>\n", " <td>22.0</td>\n", " <td>0.46</td>\n", " <td>3204.0</td>\n", " </tr>\n", " <tr>\n", " <th>4</th>\n", " <td>232</td>\n", " <td>240.0</td>\n", " <td>25.0</td>\n", " <td>8.0</td>\n", " <td>4.0</td>\n", " <td>3.0</td>\n", " <td>5.0</td>\n", " <td>2.05</td>\n", " <td>3613.0</td>\n", " </tr>\n", " <tr>\n", " <th>5</th>\n", " <td>135</td>\n", " <td>140.0</td>\n", " <td>18.0</td>\n", " <td>7.0</td>\n", " <td>4.0</td>\n", " <td>3.0</td>\n", " <td>9.0</td>\n", " <td>0.57</td>\n", " 
<td>3028.0</td>\n", " </tr>\n", " <tr>\n", " <th>6</th>\n", " <td>150</td>\n", " <td>160.0</td>\n", " <td>20.0</td>\n", " <td>8.0</td>\n", " <td>4.0</td>\n", " <td>3.0</td>\n", " <td>18.0</td>\n", " <td>4.00</td>\n", " <td>3131.0</td>\n", " </tr>\n", " <tr>\n", " <th>7</th>\n", " <td>207</td>\n", " <td>225.0</td>\n", " <td>22.0</td>\n", " <td>8.0</td>\n", " <td>4.0</td>\n", " <td>2.0</td>\n", " <td>16.0</td>\n", " <td>2.22</td>\n", " <td>5158.0</td>\n", " </tr>\n", " <tr>\n", " <th>8</th>\n", " <td>271</td>\n", " <td>285.0</td>\n", " <td>30.0</td>\n", " <td>10.0</td>\n", " <td>5.0</td>\n", " <td>2.0</td>\n", " <td>30.0</td>\n", " <td>0.53</td>\n", " <td>5702.0</td>\n", " </tr>\n", " <tr>\n", " <th>9</th>\n", " <td>89</td>\n", " <td>90.0</td>\n", " <td>10.0</td>\n", " <td>5.0</td>\n", " <td>3.0</td>\n", " <td>1.0</td>\n", " <td>43.0</td>\n", " <td>0.30</td>\n", " <td>2054.0</td>\n", " </tr>\n", " <tr>\n", " <th>10</th>\n", " <td>153</td>\n", " <td>157.0</td>\n", " <td>22.0</td>\n", " <td>8.0</td>\n", " <td>3.0</td>\n", " <td>3.0</td>\n", " <td>18.0</td>\n", " <td>0.38</td>\n", " <td>4127.0</td>\n", " </tr>\n", " <tr>\n", " <th>11</th>\n", " <td>87</td>\n", " <td>90.0</td>\n", " <td>16.0</td>\n", " <td>7.0</td>\n", " <td>3.0</td>\n", " <td>1.0</td>\n", " <td>50.0</td>\n", " <td>0.65</td>\n", " <td>1445.0</td>\n", " </tr>\n", " <tr>\n", " <th>12</th>\n", " <td>234</td>\n", " <td>238.0</td>\n", " <td>25.0</td>\n", " <td>8.0</td>\n", " <td>4.0</td>\n", " <td>2.0</td>\n", " <td>2.0</td>\n", " <td>1.61</td>\n", " <td>2087.0</td>\n", " </tr>\n", " <tr>\n", " <th>13</th>\n", " <td>106</td>\n", " <td>116.0</td>\n", " <td>20.0</td>\n", " <td>8.0</td>\n", " <td>4.0</td>\n", " <td>1.0</td>\n", " <td>13.0</td>\n", " <td>0.22</td>\n", " <td>2818.0</td>\n", " </tr>\n", " <tr>\n", " <th>14</th>\n", " <td>175</td>\n", " <td>180.0</td>\n", " <td>22.0</td>\n", " <td>8.0</td>\n", " <td>4.0</td>\n", " <td>2.0</td>\n", " <td>15.0</td>\n", " <td>2.06</td>\n", " <td>3917.0</td>\n", " 
</tr>\n", " <tr>\n", " <th>15</th>\n", " <td>165</td>\n", " <td>170.0</td>\n", " <td>17.0</td>\n", " <td>8.0</td>\n", " <td>4.0</td>\n", " <td>2.0</td>\n", " <td>33.0</td>\n", " <td>0.46</td>\n", " <td>2220.0</td>\n", " </tr>\n", " <tr>\n", " <th>16</th>\n", " <td>166</td>\n", " <td>170.0</td>\n", " <td>23.0</td>\n", " <td>9.0</td>\n", " <td>4.0</td>\n", " <td>2.0</td>\n", " <td>37.0</td>\n", " <td>0.27</td>\n", " <td>3498.0</td>\n", " </tr>\n", " <tr>\n", " <th>17</th>\n", " <td>136</td>\n", " <td>140.0</td>\n", " <td>19.0</td>\n", " <td>7.0</td>\n", " <td>3.0</td>\n", " <td>1.0</td>\n", " <td>22.0</td>\n", " <td>0.63</td>\n", " <td>3607.0</td>\n", " </tr>\n", " <tr>\n", " <th>18</th>\n", " <td>148</td>\n", " <td>160.0</td>\n", " <td>17.0</td>\n", " <td>7.0</td>\n", " <td>3.0</td>\n", " <td>2.0</td>\n", " <td>13.0</td>\n", " <td>0.36</td>\n", " <td>3648.0</td>\n", " </tr>\n", " <tr>\n", " <th>19</th>\n", " <td>151</td>\n", " <td>153.0</td>\n", " <td>19.0</td>\n", " <td>8.0</td>\n", " <td>4.0</td>\n", " <td>2.0</td>\n", " <td>24.0</td>\n", " <td>0.34</td>\n", " <td>3561.0</td>\n", " </tr>\n", " <tr>\n", " <th>20</th>\n", " <td>180</td>\n", " <td>190.0</td>\n", " <td>24.0</td>\n", " <td>9.0</td>\n", " <td>4.0</td>\n", " <td>2.0</td>\n", " <td>10.0</td>\n", " <td>1.55</td>\n", " <td>4681.0</td>\n", " </tr>\n", " <tr>\n", " <th>21</th>\n", " <td>293</td>\n", " <td>305.0</td>\n", " <td>26.0</td>\n", " <td>8.0</td>\n", " <td>4.0</td>\n", " <td>3.0</td>\n", " <td>6.0</td>\n", " <td>0.46</td>\n", " <td>7088.0</td>\n", " </tr>\n", " <tr>\n", " <th>22</th>\n", " <td>167</td>\n", " <td>170.0</td>\n", " <td>20.0</td>\n", " <td>9.0</td>\n", " <td>4.0</td>\n", " <td>2.0</td>\n", " <td>46.0</td>\n", " <td>0.46</td>\n", " <td>3482.0</td>\n", " </tr>\n", " <tr>\n", " <th>23</th>\n", " <td>190</td>\n", " <td>193.0</td>\n", " <td>22.0</td>\n", " <td>9.0</td>\n", " <td>5.0</td>\n", " <td>2.0</td>\n", " <td>37.0</td>\n", " <td>0.48</td>\n", " <td>3920.0</td>\n", " </tr>\n", " <tr>\n", 
" <th>24</th>\n", " <td>184</td>\n", " <td>190.0</td>\n", " <td>21.0</td>\n", " <td>9.0</td>\n", " <td>5.0</td>\n", " <td>2.0</td>\n", " <td>27.0</td>\n", " <td>1.30</td>\n", " <td>4162.0</td>\n", " </tr>\n", " <tr>\n", " <th>25</th>\n", " <td>157</td>\n", " <td>165.0</td>\n", " <td>20.0</td>\n", " <td>8.0</td>\n", " <td>4.0</td>\n", " <td>2.0</td>\n", " <td>7.0</td>\n", " <td>0.30</td>\n", " <td>3785.0</td>\n", " </tr>\n", " <tr>\n", " <th>26</th>\n", " <td>110</td>\n", " <td>115.0</td>\n", " <td>16.0</td>\n", " <td>8.0</td>\n", " <td>4.0</td>\n", " <td>1.0</td>\n", " <td>26.0</td>\n", " <td>0.29</td>\n", " <td>3103.0</td>\n", " </tr>\n", " <tr>\n", " <th>27</th>\n", " <td>135</td>\n", " <td>145.0</td>\n", " <td>18.0</td>\n", " <td>7.0</td>\n", " <td>4.0</td>\n", " <td>1.0</td>\n", " <td>35.0</td>\n", " <td>0.43</td>\n", " <td>3363.0</td>\n", " </tr>\n", " <tr>\n", " <th>28</th>\n", " <td>567</td>\n", " <td>625.0</td>\n", " <td>64.0</td>\n", " <td>11.0</td>\n", " <td>4.0</td>\n", " <td>4.0</td>\n", " <td>4.0</td>\n", " <td>0.85</td>\n", " <td>12192.0</td>\n", " </tr>\n", " <tr>\n", " <th>29</th>\n", " <td>180</td>\n", " <td>185.0</td>\n", " <td>20.0</td>\n", " <td>8.0</td>\n", " <td>4.0</td>\n", " <td>2.0</td>\n", " <td>11.0</td>\n", " <td>1.00</td>\n", " <td>3831.0</td>\n", " </tr>\n", " <tr>\n", " <th>30</th>\n", " <td>183</td>\n", " <td>188.0</td>\n", " <td>17.0</td>\n", " <td>7.0</td>\n", " <td>3.0</td>\n", " <td>2.0</td>\n", " <td>16.0</td>\n", " <td>3.00</td>\n", " <td>3564.0</td>\n", " </tr>\n", " <tr>\n", " <th>31</th>\n", " <td>185</td>\n", " <td>193.0</td>\n", " <td>20.0</td>\n", " <td>9.0</td>\n", " <td>3.0</td>\n", " <td>2.0</td>\n", " <td>56.0</td>\n", " <td>6.49</td>\n", " <td>3765.0</td>\n", " </tr>\n", " <tr>\n", " <th>32</th>\n", " <td>152</td>\n", " <td>155.0</td>\n", " <td>17.0</td>\n", " <td>8.0</td>\n", " <td>4.0</td>\n", " <td>1.0</td>\n", " <td>33.0</td>\n", " <td>0.70</td>\n", " <td>3361.0</td>\n", " </tr>\n", " <tr>\n", " <th>33</th>\n", " 
<td>148</td>\n", " <td>153.0</td>\n", " <td>13.0</td>\n", " <td>6.0</td>\n", " <td>3.0</td>\n", " <td>2.0</td>\n", " <td>22.0</td>\n", " <td>0.39</td>\n", " <td>3950.0</td>\n", " </tr>\n", " <tr>\n", " <th>34</th>\n", " <td>152</td>\n", " <td>159.0</td>\n", " <td>15.0</td>\n", " <td>7.0</td>\n", " <td>3.0</td>\n", " <td>1.0</td>\n", " <td>25.0</td>\n", " <td>0.59</td>\n", " <td>3055.0</td>\n", " </tr>\n", " <tr>\n", " <th>35</th>\n", " <td>146</td>\n", " <td>150.0</td>\n", " <td>16.0</td>\n", " <td>7.0</td>\n", " <td>3.0</td>\n", " <td>1.0</td>\n", " <td>31.0</td>\n", " <td>0.36</td>\n", " <td>2950.0</td>\n", " </tr>\n", " <tr>\n", " <th>36</th>\n", " <td>170</td>\n", " <td>190.0</td>\n", " <td>24.0</td>\n", " <td>10.0</td>\n", " <td>3.0</td>\n", " <td>2.0</td>\n", " <td>33.0</td>\n", " <td>0.57</td>\n", " <td>3346.0</td>\n", " </tr>\n", " <tr>\n", " <th>37</th>\n", " <td>127</td>\n", " <td>130.0</td>\n", " <td>20.0</td>\n", " <td>8.0</td>\n", " <td>4.0</td>\n", " <td>1.0</td>\n", " <td>65.0</td>\n", " <td>0.40</td>\n", " <td>3334.0</td>\n", " </tr>\n", " <tr>\n", " <th>38</th>\n", " <td>265</td>\n", " <td>270.0</td>\n", " <td>36.0</td>\n", " <td>10.0</td>\n", " <td>6.0</td>\n", " <td>3.0</td>\n", " <td>33.0</td>\n", " <td>1.20</td>\n", " <td>5853.0</td>\n", " </tr>\n", " <tr>\n", " <th>39</th>\n", " <td>157</td>\n", " <td>163.0</td>\n", " <td>18.0</td>\n", " <td>8.0</td>\n", " <td>4.0</td>\n", " <td>2.0</td>\n", " <td>12.0</td>\n", " <td>1.13</td>\n", " <td>3982.0</td>\n", " </tr>\n", " <tr>\n", " <th>40</th>\n", " <td>128</td>\n", " <td>135.0</td>\n", " <td>17.0</td>\n", " <td>9.0</td>\n", " <td>4.0</td>\n", " <td>1.0</td>\n", " <td>25.0</td>\n", " <td>0.52</td>\n", " <td>3374.0</td>\n", " </tr>\n", " <tr>\n", " <th>41</th>\n", " <td>110</td>\n", " <td>120.0</td>\n", " <td>15.0</td>\n", " <td>8.0</td>\n", " <td>4.0</td>\n", " <td>2.0</td>\n", " <td>11.0</td>\n", " <td>0.59</td>\n", " <td>3119.0</td>\n", " </tr>\n", " <tr>\n", " <th>42</th>\n", " <td>123</td>\n", 
" <td>130.0</td>\n", " <td>18.0</td>\n", " <td>8.0</td>\n", " <td>4.0</td>\n", " <td>2.0</td>\n", " <td>43.0</td>\n", " <td>0.39</td>\n", " <td>3268.0</td>\n", " </tr>\n", " <tr>\n", " <th>43</th>\n", " <td>212</td>\n", " <td>230.0</td>\n", " <td>39.0</td>\n", " <td>12.0</td>\n", " <td>5.0</td>\n", " <td>3.0</td>\n", " <td>202.0</td>\n", " <td>4.29</td>\n", " <td>3648.0</td>\n", " </tr>\n", " <tr>\n", " <th>44</th>\n", " <td>145</td>\n", " <td>145.0</td>\n", " <td>18.0</td>\n", " <td>8.0</td>\n", " <td>4.0</td>\n", " <td>2.0</td>\n", " <td>44.0</td>\n", " <td>0.22</td>\n", " <td>2783.0</td>\n", " </tr>\n", " <tr>\n", " <th>45</th>\n", " <td>129</td>\n", " <td>135.0</td>\n", " <td>10.0</td>\n", " <td>6.0</td>\n", " <td>3.0</td>\n", " <td>1.0</td>\n", " <td>15.0</td>\n", " <td>1.00</td>\n", " <td>2438.0</td>\n", " </tr>\n", " <tr>\n", " <th>46</th>\n", " <td>143</td>\n", " <td>145.0</td>\n", " <td>21.0</td>\n", " <td>7.0</td>\n", " <td>4.0</td>\n", " <td>2.0</td>\n", " <td>10.0</td>\n", " <td>1.20</td>\n", " <td>3529.0</td>\n", " </tr>\n", " <tr>\n", " <th>47</th>\n", " <td>247</td>\n", " <td>252.0</td>\n", " <td>29.0</td>\n", " <td>9.0</td>\n", " <td>4.0</td>\n", " <td>2.0</td>\n", " <td>4.0</td>\n", " <td>1.25</td>\n", " <td>4626.0</td>\n", " </tr>\n", " <tr>\n", " <th>48</th>\n", " <td>111</td>\n", " <td>120.0</td>\n", " <td>15.0</td>\n", " <td>8.0</td>\n", " <td>3.0</td>\n", " <td>1.0</td>\n", " <td>97.0</td>\n", " <td>1.11</td>\n", " <td>3205.0</td>\n", " </tr>\n", " <tr>\n", " <th>49</th>\n", " <td>133</td>\n", " <td>145.0</td>\n", " <td>26.0</td>\n", " <td>7.0</td>\n", " <td>3.0</td>\n", " <td>1.0</td>\n", " <td>42.0</td>\n", " <td>0.36</td>\n", " <td>3059.0</td>\n", " </tr>\n", " <tr>\n", " <th>50</th>\n", " <td></td>\n", " <td>NaN</td>\n", " <td>NaN</td>\n", " <td>NaN</td>\n", " <td>NaN</td>\n", " <td>NaN</td>\n", " <td>NaN</td>\n", " <td>NaN</td>\n", " <td>NaN</td>\n", " </tr>\n", " </tbody>\n", "</table>\n", "</div>" ], "text/plain": [ " Sell List Living 
Rooms Beds Baths Age Acres Taxes\n", "0 142 160.0 28.0 10.0 5.0 3.0 60.0 0.28 3167.0\n", "1 175 180.0 18.0 8.0 4.0 1.0 12.0 0.43 4033.0\n", "2 129 132.0 13.0 6.0 3.0 1.0 41.0 0.33 1471.0\n", "3 138 140.0 17.0 7.0 3.0 1.0 22.0 0.46 3204.0\n", "4 232 240.0 25.0 8.0 4.0 3.0 5.0 2.05 3613.0\n", "5 135 140.0 18.0 7.0 4.0 3.0 9.0 0.57 3028.0\n", "6 150 160.0 20.0 8.0 4.0 3.0 18.0 4.00 3131.0\n", "7 207 225.0 22.0 8.0 4.0 2.0 16.0 2.22 5158.0\n", "8 271 285.0 30.0 10.0 5.0 2.0 30.0 0.53 5702.0\n", "9 89 90.0 10.0 5.0 3.0 1.0 43.0 0.30 2054.0\n", "10 153 157.0 22.0 8.0 3.0 3.0 18.0 0.38 4127.0\n", "11 87 90.0 16.0 7.0 3.0 1.0 50.0 0.65 1445.0\n", "12 234 238.0 25.0 8.0 4.0 2.0 2.0 1.61 2087.0\n", "13 106 116.0 20.0 8.0 4.0 1.0 13.0 0.22 2818.0\n", "14 175 180.0 22.0 8.0 4.0 2.0 15.0 2.06 3917.0\n", "15 165 170.0 17.0 8.0 4.0 2.0 33.0 0.46 2220.0\n", "16 166 170.0 23.0 9.0 4.0 2.0 37.0 0.27 3498.0\n", "17 136 140.0 19.0 7.0 3.0 1.0 22.0 0.63 3607.0\n", "18 148 160.0 17.0 7.0 3.0 2.0 13.0 0.36 3648.0\n", "19 151 153.0 19.0 8.0 4.0 2.0 24.0 0.34 3561.0\n", "20 180 190.0 24.0 9.0 4.0 2.0 10.0 1.55 4681.0\n", "21 293 305.0 26.0 8.0 4.0 3.0 6.0 0.46 7088.0\n", "22 167 170.0 20.0 9.0 4.0 2.0 46.0 0.46 3482.0\n", "23 190 193.0 22.0 9.0 5.0 2.0 37.0 0.48 3920.0\n", "24 184 190.0 21.0 9.0 5.0 2.0 27.0 1.30 4162.0\n", "25 157 165.0 20.0 8.0 4.0 2.0 7.0 0.30 3785.0\n", "26 110 115.0 16.0 8.0 4.0 1.0 26.0 0.29 3103.0\n", "27 135 145.0 18.0 7.0 4.0 1.0 35.0 0.43 3363.0\n", "28 567 625.0 64.0 11.0 4.0 4.0 4.0 0.85 12192.0\n", "29 180 185.0 20.0 8.0 4.0 2.0 11.0 1.00 3831.0\n", "30 183 188.0 17.0 7.0 3.0 2.0 16.0 3.00 3564.0\n", "31 185 193.0 20.0 9.0 3.0 2.0 56.0 6.49 3765.0\n", "32 152 155.0 17.0 8.0 4.0 1.0 33.0 0.70 3361.0\n", "33 148 153.0 13.0 6.0 3.0 2.0 22.0 0.39 3950.0\n", "34 152 159.0 15.0 7.0 3.0 1.0 25.0 0.59 3055.0\n", "35 146 150.0 16.0 7.0 3.0 1.0 31.0 0.36 2950.0\n", "36 170 190.0 24.0 10.0 3.0 2.0 33.0 0.57 3346.0\n", "37 127 130.0 20.0 8.0 4.0 1.0 65.0 0.40 3334.0\n", 
"38 265 270.0 36.0 10.0 6.0 3.0 33.0 1.20 5853.0\n", "39 157 163.0 18.0 8.0 4.0 2.0 12.0 1.13 3982.0\n", "40 128 135.0 17.0 9.0 4.0 1.0 25.0 0.52 3374.0\n", "41 110 120.0 15.0 8.0 4.0 2.0 11.0 0.59 3119.0\n", "42 123 130.0 18.0 8.0 4.0 2.0 43.0 0.39 3268.0\n", "43 212 230.0 39.0 12.0 5.0 3.0 202.0 4.29 3648.0\n", "44 145 145.0 18.0 8.0 4.0 2.0 44.0 0.22 2783.0\n", "45 129 135.0 10.0 6.0 3.0 1.0 15.0 1.00 2438.0\n", "46 143 145.0 21.0 7.0 4.0 2.0 10.0 1.20 3529.0\n", "47 247 252.0 29.0 9.0 4.0 2.0 4.0 1.25 4626.0\n", "48 111 120.0 15.0 8.0 3.0 1.0 97.0 1.11 3205.0\n", "49 133 145.0 26.0 7.0 3.0 1.0 42.0 0.36 3059.0\n", "50 NaN NaN NaN NaN NaN NaN NaN NaN" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "pd.read_excel(\"data/homes.xlsx\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Writing CSV files\n", "Easy!" ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [], "source": [ "homes.to_csv(\"test.csv\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise \n", "Create a DataFrame which consists of all numbers 0 to 1000. Reshape it into 50 rows and save it to a `.csv` file. How many columns did you end up with?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Web scraping\n", "It is also very easy to scrape webpages and extract tables from them.\n", "\n", "For example, let's consdier extracting the table of failed American banks." 
] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [], "source": [ "url = \"https://www.fdic.gov/bank/individual/failed/banklist.html\"\n", "banks = pd.read_html(url)\n", "banks = banks[0]" ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<div>\n", "<style scoped>\n", " .dataframe tbody tr th:only-of-type {\n", " vertical-align: middle;\n", " }\n", "\n", " .dataframe tbody tr th {\n", " vertical-align: top;\n", " }\n", "\n", " .dataframe thead th {\n", " text-align: right;\n", " }\n", "</style>\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>Bank Name</th>\n", " <th>City</th>\n", " <th>ST</th>\n", " <th>CERT</th>\n", " <th>Acquiring Institution</th>\n", " <th>Closing Date</th>\n", " <th>Updated Date</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>0</th>\n", " <td>Washington Federal Bank for Savings</td>\n", " <td>Chicago</td>\n", " <td>IL</td>\n", " <td>30570</td>\n", " <td>Royal Savings Bank</td>\n", " <td>December 15, 2017</td>\n", " <td>February 21, 2018</td>\n", " </tr>\n", " <tr>\n", " <th>1</th>\n", " <td>The Farmers and Merchants State Bank of Argonia</td>\n", " <td>Argonia</td>\n", " <td>KS</td>\n", " <td>17719</td>\n", " <td>Conway Bank</td>\n", " <td>October 13, 2017</td>\n", " <td>February 21, 2018</td>\n", " </tr>\n", " <tr>\n", " <th>2</th>\n", " <td>Fayette County Bank</td>\n", " <td>Saint Elmo</td>\n", " <td>IL</td>\n", " <td>1802</td>\n", " <td>United Fidelity Bank, fsb</td>\n", " <td>May 26, 2017</td>\n", " <td>July 26, 2017</td>\n", " </tr>\n", " <tr>\n", " <th>3</th>\n", " <td>Guaranty Bank, (d/b/a BestBank in Georgia & Mi...</td>\n", " <td>Milwaukee</td>\n", " <td>WI</td>\n", " <td>30003</td>\n", " <td>First-Citizens Bank & Trust Company</td>\n", " <td>May 5, 2017</td>\n", " <td>March 22, 2018</td>\n", " </tr>\n", " <tr>\n", " <th>4</th>\n", " <td>First NBC 
Bank</td>\n", " <td>New Orleans</td>\n", " <td>LA</td>\n", " <td>58302</td>\n", " <td>Whitney Bank</td>\n", " <td>April 28, 2017</td>\n", " <td>December 5, 2017</td>\n", " </tr>\n", " <tr>\n", " <th>5</th>\n", " <td>Proficio Bank</td>\n", " <td>Cottonwood Heights</td>\n", " <td>UT</td>\n", " <td>35495</td>\n", " <td>Cache Valley Bank</td>\n", " <td>March 3, 2017</td>\n", " <td>March 7, 2018</td>\n", " </tr>\n", " <tr>\n", " <th>6</th>\n", " <td>Seaway Bank and Trust Company</td>\n", " <td>Chicago</td>\n", " <td>IL</td>\n", " <td>19328</td>\n", " <td>State Bank of Texas</td>\n", " <td>January 27, 2017</td>\n", " <td>May 18, 2017</td>\n", " </tr>\n", " <tr>\n", " <th>7</th>\n", " <td>Harvest Community Bank</td>\n", " <td>Pennsville</td>\n", " <td>NJ</td>\n", " <td>34951</td>\n", " <td>First-Citizens Bank & Trust Company</td>\n", " <td>January 13, 2017</td>\n", " <td>May 18, 2017</td>\n", " </tr>\n", " <tr>\n", " <th>8</th>\n", " <td>Allied Bank</td>\n", " <td>Mulberry</td>\n", " <td>AR</td>\n", " <td>91</td>\n", " <td>Today's Bank</td>\n", " <td>September 23, 2016</td>\n", " <td>September 25, 2017</td>\n", " </tr>\n", " <tr>\n", " <th>9</th>\n", " <td>The Woodbury Banking Company</td>\n", " <td>Woodbury</td>\n", " <td>GA</td>\n", " <td>11297</td>\n", " <td>United Bank</td>\n", " <td>August 19, 2016</td>\n", " <td>June 1, 2017</td>\n", " </tr>\n", " <tr>\n", " <th>10</th>\n", " <td>First CornerStone Bank</td>\n", " <td>King of Prussia</td>\n", " <td>PA</td>\n", " <td>35312</td>\n", " <td>First-Citizens Bank & Trust Company</td>\n", " <td>May 6, 2016</td>\n", " <td>November 13, 2018</td>\n", " </tr>\n", " <tr>\n", " <th>11</th>\n", " <td>Trust Company Bank</td>\n", " <td>Memphis</td>\n", " <td>TN</td>\n", " <td>9956</td>\n", " <td>The Bank of Fayette County</td>\n", " <td>April 29, 2016</td>\n", " <td>September 6, 2016</td>\n", " </tr>\n", " <tr>\n", " <th>12</th>\n", " <td>North Milwaukee State Bank</td>\n", " <td>Milwaukee</td>\n", " <td>WI</td>\n", " 
<td>20364</td>\n", " <td>First-Citizens Bank & Trust Company</td>\n", " <td>March 11, 2016</td>\n", " <td>March 13, 2017</td>\n", " </tr>\n", " <tr>\n", " <th>13</th>\n", " <td>Hometown National Bank</td>\n", " <td>Longview</td>\n", " <td>WA</td>\n", " <td>35156</td>\n", " <td>Twin City Bank</td>\n", " <td>October 2, 2015</td>\n", " <td>February 19, 2018</td>\n", " </tr>\n", " <tr>\n", " <th>14</th>\n", " <td>The Bank of Georgia</td>\n", " <td>Peachtree City</td>\n", " <td>GA</td>\n", " <td>35259</td>\n", " <td>Fidelity Bank</td>\n", " <td>October 2, 2015</td>\n", " <td>July 9, 2018</td>\n", " </tr>\n", " <tr>\n", " <th>15</th>\n", " <td>Premier Bank</td>\n", " <td>Denver</td>\n", " <td>CO</td>\n", " <td>34112</td>\n", " <td>United Fidelity Bank, fsb</td>\n", " <td>July 10, 2015</td>\n", " <td>February 20, 2018</td>\n", " </tr>\n", " <tr>\n", " <th>16</th>\n", " <td>Edgebrook Bank</td>\n", " <td>Chicago</td>\n", " <td>IL</td>\n", " <td>57772</td>\n", " <td>Republic Bank of Chicago</td>\n", " <td>May 8, 2015</td>\n", " <td>July 12, 2016</td>\n", " </tr>\n", " <tr>\n", " <th>17</th>\n", " <td>Doral Bank En Espanol</td>\n", " <td>San Juan</td>\n", " <td>PR</td>\n", " <td>32102</td>\n", " <td>Banco Popular de Puerto Rico</td>\n", " <td>February 27, 2015</td>\n", " <td>May 13, 2015</td>\n", " </tr>\n", " <tr>\n", " <th>18</th>\n", " <td>Capitol City Bank & Trust Company</td>\n", " <td>Atlanta</td>\n", " <td>GA</td>\n", " <td>33938</td>\n", " <td>First-Citizens Bank & Trust Company</td>\n", " <td>February 13, 2015</td>\n", " <td>April 21, 2015</td>\n", " </tr>\n", " <tr>\n", " <th>19</th>\n", " <td>Highland Community Bank</td>\n", " <td>Chicago</td>\n", " <td>IL</td>\n", " <td>20290</td>\n", " <td>United Fidelity Bank, fsb</td>\n", " <td>January 23, 2015</td>\n", " <td>November 15, 2017</td>\n", " </tr>\n", " <tr>\n", " <th>20</th>\n", " <td>First National Bank of Crestview</td>\n", " <td>Crestview</td>\n", " <td>FL</td>\n", " <td>17557</td>\n", " <td>First NBC 
Bank</td>\n", " <td>January 16, 2015</td>\n", " <td>November 15, 2017</td>\n", " </tr>\n", " <tr>\n", " <th>21</th>\n", " <td>Northern Star Bank</td>\n", " <td>Mankato</td>\n", " <td>MN</td>\n", " <td>34983</td>\n", " <td>BankVista</td>\n", " <td>December 19, 2014</td>\n", " <td>January 3, 2018</td>\n", " </tr>\n", " <tr>\n", " <th>22</th>\n", " <td>Frontier Bank, FSB D/B/A El Paseo Bank</td>\n", " <td>Palm Desert</td>\n", " <td>CA</td>\n", " <td>34738</td>\n", " <td>Bank of Southern California, N.A.</td>\n", " <td>November 7, 2014</td>\n", " <td>November 10, 2016</td>\n", " </tr>\n", " <tr>\n", " <th>23</th>\n", " <td>The National Republic Bank of Chicago</td>\n", " <td>Chicago</td>\n", " <td>IL</td>\n", " <td>916</td>\n", " <td>State Bank of Texas</td>\n", " <td>October 24, 2014</td>\n", " <td>January 6, 2016</td>\n", " </tr>\n", " <tr>\n", " <th>24</th>\n", " <td>NBRS Financial</td>\n", " <td>Rising Sun</td>\n", " <td>MD</td>\n", " <td>4862</td>\n", " <td>Howard Bank</td>\n", " <td>October 17, 2014</td>\n", " <td>February 19, 2018</td>\n", " </tr>\n", " <tr>\n", " <th>25</th>\n", " <td>GreenChoice Bank, fsb</td>\n", " <td>Chicago</td>\n", " <td>IL</td>\n", " <td>28462</td>\n", " <td>Providence Bank, LLC</td>\n", " <td>July 25, 2014</td>\n", " <td>December 12, 2016</td>\n", " </tr>\n", " <tr>\n", " <th>26</th>\n", " <td>Eastside Commercial Bank</td>\n", " <td>Conyers</td>\n", " <td>GA</td>\n", " <td>58125</td>\n", " <td>Community & Southern Bank</td>\n", " <td>July 18, 2014</td>\n", " <td>October 6, 2017</td>\n", " </tr>\n", " <tr>\n", " <th>27</th>\n", " <td>The Freedom State Bank</td>\n", " <td>Freedom</td>\n", " <td>OK</td>\n", " <td>12483</td>\n", " <td>Alva State Bank & Trust Company</td>\n", " <td>June 27, 2014</td>\n", " <td>February 21, 2018</td>\n", " </tr>\n", " <tr>\n", " <th>28</th>\n", " <td>Valley Bank</td>\n", " <td>Fort Lauderdale</td>\n", " <td>FL</td>\n", " <td>21793</td>\n", " <td>Landmark Bank, National Association</td>\n", " <td>June 20, 
2014</td>\n", " <td>February 14, 2018</td>\n", " </tr>\n", " <tr>\n", " <th>29</th>\n", " <td>Valley Bank</td>\n", " <td>Moline</td>\n", " <td>IL</td>\n", " <td>10450</td>\n", " <td>Great Southern Bank</td>\n", " <td>June 20, 2014</td>\n", " <td>June 26, 2015</td>\n", " </tr>\n", " <tr>\n", " <th>...</th>\n", " <td>...</td>\n", " <td>...</td>\n", " <td>...</td>\n", " <td>...</td>\n", " <td>...</td>\n", " <td>...</td>\n", " <td>...</td>\n", " </tr>\n", " <tr>\n", " <th>525</th>\n", " <td>ANB Financial, NA</td>\n", " <td>Bentonville</td>\n", " <td>AR</td>\n", " <td>33901</td>\n", " <td>Pulaski Bank and Trust Company</td>\n", " <td>May 9, 2008</td>\n", " <td>August 28, 2012</td>\n", " </tr>\n", " <tr>\n", " <th>526</th>\n", " <td>Hume Bank</td>\n", " <td>Hume</td>\n", " <td>MO</td>\n", " <td>1971</td>\n", " <td>Security Bank</td>\n", " <td>March 7, 2008</td>\n", " <td>August 28, 2012</td>\n", " </tr>\n", " <tr>\n", " <th>527</th>\n", " <td>Douglass National Bank</td>\n", " <td>Kansas City</td>\n", " <td>MO</td>\n", " <td>24660</td>\n", " <td>Liberty Bank and Trust Company</td>\n", " <td>January 25, 2008</td>\n", " <td>October 26, 2012</td>\n", " </tr>\n", " <tr>\n", " <th>528</th>\n", " <td>Miami Valley Bank</td>\n", " <td>Lakeview</td>\n", " <td>OH</td>\n", " <td>16848</td>\n", " <td>The Citizens Banking Company</td>\n", " <td>October 4, 2007</td>\n", " <td>September 12, 2016</td>\n", " </tr>\n", " <tr>\n", " <th>529</th>\n", " <td>NetBank</td>\n", " <td>Alpharetta</td>\n", " <td>GA</td>\n", " <td>32575</td>\n", " <td>ING DIRECT</td>\n", " <td>September 28, 2007</td>\n", " <td>August 28, 2012</td>\n", " </tr>\n", " <tr>\n", " <th>530</th>\n", " <td>Metropolitan Savings Bank</td>\n", " <td>Pittsburgh</td>\n", " <td>PA</td>\n", " <td>35353</td>\n", " <td>Allegheny Valley Bank of Pittsburgh</td>\n", " <td>February 2, 2007</td>\n", " <td>October 27, 2010</td>\n", " </tr>\n", " <tr>\n", " <th>531</th>\n", " <td>Bank of Ephraim</td>\n", " <td>Ephraim</td>\n", " 
<td>UT</td>\n", " <td>1249</td>\n", " <td>Far West Bank</td>\n", " <td>June 25, 2004</td>\n", " <td>April 9, 2008</td>\n", " </tr>\n", " <tr>\n", " <th>532</th>\n", " <td>Reliance Bank</td>\n", " <td>White Plains</td>\n", " <td>NY</td>\n", " <td>26778</td>\n", " <td>Union State Bank</td>\n", " <td>March 19, 2004</td>\n", " <td>April 9, 2008</td>\n", " </tr>\n", " <tr>\n", " <th>533</th>\n", " <td>Guaranty National Bank of Tallahassee</td>\n", " <td>Tallahassee</td>\n", " <td>FL</td>\n", " <td>26838</td>\n", " <td>Hancock Bank of Florida</td>\n", " <td>March 12, 2004</td>\n", " <td>April 17, 2018</td>\n", " </tr>\n", " <tr>\n", " <th>534</th>\n", " <td>Dollar Savings Bank</td>\n", " <td>Newark</td>\n", " <td>NJ</td>\n", " <td>31330</td>\n", " <td>No Acquirer</td>\n", " <td>February 14, 2004</td>\n", " <td>April 9, 2008</td>\n", " </tr>\n", " <tr>\n", " <th>535</th>\n", " <td>Pulaski Savings Bank</td>\n", " <td>Philadelphia</td>\n", " <td>PA</td>\n", " <td>27203</td>\n", " <td>Earthstar Bank</td>\n", " <td>November 14, 2003</td>\n", " <td>October 6, 2017</td>\n", " </tr>\n", " <tr>\n", " <th>536</th>\n", " <td>First National Bank of Blanchardville</td>\n", " <td>Blanchardville</td>\n", " <td>WI</td>\n", " <td>11639</td>\n", " <td>The Park Bank</td>\n", " <td>May 9, 2003</td>\n", " <td>June 5, 2012</td>\n", " </tr>\n", " <tr>\n", " <th>537</th>\n", " <td>Southern Pacific Bank</td>\n", " <td>Torrance</td>\n", " <td>CA</td>\n", " <td>27094</td>\n", " <td>Beal Bank</td>\n", " <td>February 7, 2003</td>\n", " <td>October 20, 2008</td>\n", " </tr>\n", " <tr>\n", " <th>538</th>\n", " <td>Farmers Bank of Cheneyville</td>\n", " <td>Cheneyville</td>\n", " <td>LA</td>\n", " <td>16445</td>\n", " <td>Sabine State Bank & Trust</td>\n", " <td>December 17, 2002</td>\n", " <td>October 20, 2004</td>\n", " </tr>\n", " <tr>\n", " <th>539</th>\n", " <td>Bank of Alamo</td>\n", " <td>Alamo</td>\n", " <td>TN</td>\n", " <td>9961</td>\n", " <td>No Acquirer</td>\n", " <td>November 8, 
2002</td>\n", " <td>March 18, 2005</td>\n", " </tr>\n", " <tr>\n", " <th>540</th>\n", " <td>AmTrade International Bank En Espanol</td>\n", " <td>Atlanta</td>\n", " <td>GA</td>\n", " <td>33784</td>\n", " <td>No Acquirer</td>\n", " <td>September 30, 2002</td>\n", " <td>September 11, 2006</td>\n", " </tr>\n", " <tr>\n", " <th>541</th>\n", " <td>Universal Federal Savings Bank</td>\n", " <td>Chicago</td>\n", " <td>IL</td>\n", " <td>29355</td>\n", " <td>Chicago Community Bank</td>\n", " <td>June 27, 2002</td>\n", " <td>October 6, 2017</td>\n", " </tr>\n", " <tr>\n", " <th>542</th>\n", " <td>Connecticut Bank of Commerce</td>\n", " <td>Stamford</td>\n", " <td>CT</td>\n", " <td>19183</td>\n", " <td>Hudson United Bank</td>\n", " <td>June 26, 2002</td>\n", " <td>February 14, 2012</td>\n", " </tr>\n", " <tr>\n", " <th>543</th>\n", " <td>New Century Bank</td>\n", " <td>Shelby Township</td>\n", " <td>MI</td>\n", " <td>34979</td>\n", " <td>No Acquirer</td>\n", " <td>March 28, 2002</td>\n", " <td>March 18, 2005</td>\n", " </tr>\n", " <tr>\n", " <th>544</th>\n", " <td>Net 1st National Bank</td>\n", " <td>Boca Raton</td>\n", " <td>FL</td>\n", " <td>26652</td>\n", " <td>Bank Leumi USA</td>\n", " <td>March 1, 2002</td>\n", " <td>April 9, 2008</td>\n", " </tr>\n", " <tr>\n", " <th>545</th>\n", " <td>NextBank, NA</td>\n", " <td>Phoenix</td>\n", " <td>AZ</td>\n", " <td>22314</td>\n", " <td>No Acquirer</td>\n", " <td>February 7, 2002</td>\n", " <td>February 5, 2015</td>\n", " </tr>\n", " <tr>\n", " <th>546</th>\n", " <td>Oakwood Deposit Bank Co.</td>\n", " <td>Oakwood</td>\n", " <td>OH</td>\n", " <td>8966</td>\n", " <td>The State Bank & Trust Company</td>\n", " <td>February 1, 2002</td>\n", " <td>October 25, 2012</td>\n", " </tr>\n", " <tr>\n", " <th>547</th>\n", " <td>Bank of Sierra Blanca</td>\n", " <td>Sierra Blanca</td>\n", " <td>TX</td>\n", " <td>22002</td>\n", " <td>The Security State Bank of Pecos</td>\n", " <td>January 18, 2002</td>\n", " <td>November 6, 2003</td>\n", " </tr>\n", 
" <tr>\n", " <th>548</th>\n", " <td>Hamilton Bank, NA En Espanol</td>\n", " <td>Miami</td>\n", " <td>FL</td>\n", " <td>24382</td>\n", " <td>Israel Discount Bank of New York</td>\n", " <td>January 11, 2002</td>\n", " <td>September 21, 2015</td>\n", " </tr>\n", " <tr>\n", " <th>549</th>\n", " <td>Sinclair National Bank</td>\n", " <td>Gravette</td>\n", " <td>AR</td>\n", " <td>34248</td>\n", " <td>Delta Trust & Bank</td>\n", " <td>September 7, 2001</td>\n", " <td>October 6, 2017</td>\n", " </tr>\n", " <tr>\n", " <th>550</th>\n", " <td>Superior Bank, FSB</td>\n", " <td>Hinsdale</td>\n", " <td>IL</td>\n", " <td>32646</td>\n", " <td>Superior Federal, FSB</td>\n", " <td>July 27, 2001</td>\n", " <td>August 19, 2014</td>\n", " </tr>\n", " <tr>\n", " <th>551</th>\n", " <td>Malta National Bank</td>\n", " <td>Malta</td>\n", " <td>OH</td>\n", " <td>6629</td>\n", " <td>North Valley Bank</td>\n", " <td>May 3, 2001</td>\n", " <td>November 18, 2002</td>\n", " </tr>\n", " <tr>\n", " <th>552</th>\n", " <td>First Alliance Bank & Trust Co.</td>\n", " <td>Manchester</td>\n", " <td>NH</td>\n", " <td>34264</td>\n", " <td>Southern New Hampshire Bank & Trust</td>\n", " <td>February 2, 2001</td>\n", " <td>February 18, 2003</td>\n", " </tr>\n", " <tr>\n", " <th>553</th>\n", " <td>National State Bank of Metropolis</td>\n", " <td>Metropolis</td>\n", " <td>IL</td>\n", " <td>3815</td>\n", " <td>Banterra Bank of Marion</td>\n", " <td>December 14, 2000</td>\n", " <td>March 17, 2005</td>\n", " </tr>\n", " <tr>\n", " <th>554</th>\n", " <td>Bank of Honolulu</td>\n", " <td>Honolulu</td>\n", " <td>HI</td>\n", " <td>21029</td>\n", " <td>Bank of the Orient</td>\n", " <td>October 13, 2000</td>\n", " <td>March 17, 2005</td>\n", " </tr>\n", " </tbody>\n", "</table>\n", "<p>555 rows × 7 columns</p>\n", "</div>" ], "text/plain": [ " Bank Name City \\\n", "0 Washington Federal Bank for Savings Chicago \n", "1 The Farmers and Merchants State Bank of Argonia Argonia \n", "2 Fayette County Bank Saint Elmo \n", "3 
Guaranty Bank, (d/b/a BestBank in Georgia & Mi... Milwaukee \n", "4 First NBC Bank New Orleans \n", "5 Proficio Bank Cottonwood Heights \n", "6 Seaway Bank and Trust Company Chicago \n", "7 Harvest Community Bank Pennsville \n", "8 Allied Bank Mulberry \n", "9 The Woodbury Banking Company Woodbury \n", "10 First CornerStone Bank King of Prussia \n", "11 Trust Company Bank Memphis \n", "12 North Milwaukee State Bank Milwaukee \n", "13 Hometown National Bank Longview \n", "14 The Bank of Georgia Peachtree City \n", "15 Premier Bank Denver \n", "16 Edgebrook Bank Chicago \n", "17 Doral Bank En Espanol San Juan \n", "18 Capitol City Bank & Trust Company Atlanta \n", "19 Highland Community Bank Chicago \n", "20 First National Bank of Crestview Crestview \n", "21 Northern Star Bank Mankato \n", "22 Frontier Bank, FSB D/B/A El Paseo Bank Palm Desert \n", "23 The National Republic Bank of Chicago Chicago \n", "24 NBRS Financial Rising Sun \n", "25 GreenChoice Bank, fsb Chicago \n", "26 Eastside Commercial Bank Conyers \n", "27 The Freedom State Bank Freedom \n", "28 Valley Bank Fort Lauderdale \n", "29 Valley Bank Moline \n", ".. ... ... 
\n", "525 ANB Financial, NA Bentonville \n", "526 Hume Bank Hume \n", "527 Douglass National Bank Kansas City \n", "528 Miami Valley Bank Lakeview \n", "529 NetBank Alpharetta \n", "530 Metropolitan Savings Bank Pittsburgh \n", "531 Bank of Ephraim Ephraim \n", "532 Reliance Bank White Plains \n", "533 Guaranty National Bank of Tallahassee Tallahassee \n", "534 Dollar Savings Bank Newark \n", "535 Pulaski Savings Bank Philadelphia \n", "536 First National Bank of Blanchardville Blanchardville \n", "537 Southern Pacific Bank Torrance \n", "538 Farmers Bank of Cheneyville Cheneyville \n", "539 Bank of Alamo Alamo \n", "540 AmTrade International Bank En Espanol Atlanta \n", "541 Universal Federal Savings Bank Chicago \n", "542 Connecticut Bank of Commerce Stamford \n", "543 New Century Bank Shelby Township \n", "544 Net 1st National Bank Boca Raton \n", "545 NextBank, NA Phoenix \n", "546 Oakwood Deposit Bank Co. Oakwood \n", "547 Bank of Sierra Blanca Sierra Blanca \n", "548 Hamilton Bank, NA En Espanol Miami \n", "549 Sinclair National Bank Gravette \n", "550 Superior Bank, FSB Hinsdale \n", "551 Malta National Bank Malta \n", "552 First Alliance Bank & Trust Co. 
Manchester \n", "553 National State Bank of Metropolis Metropolis \n", "554 Bank of Honolulu Honolulu \n", "\n", " ST CERT Acquiring Institution Closing Date \\\n", "0 IL 30570 Royal Savings Bank December 15, 2017 \n", "1 KS 17719 Conway Bank October 13, 2017 \n", "2 IL 1802 United Fidelity Bank, fsb May 26, 2017 \n", "3 WI 30003 First-Citizens Bank & Trust Company May 5, 2017 \n", "4 LA 58302 Whitney Bank April 28, 2017 \n", "5 UT 35495 Cache Valley Bank March 3, 2017 \n", "6 IL 19328 State Bank of Texas January 27, 2017 \n", "7 NJ 34951 First-Citizens Bank & Trust Company January 13, 2017 \n", "8 AR 91 Today's Bank September 23, 2016 \n", "9 GA 11297 United Bank August 19, 2016 \n", "10 PA 35312 First-Citizens Bank & Trust Company May 6, 2016 \n", "11 TN 9956 The Bank of Fayette County April 29, 2016 \n", "12 WI 20364 First-Citizens Bank & Trust Company March 11, 2016 \n", "13 WA 35156 Twin City Bank October 2, 2015 \n", "14 GA 35259 Fidelity Bank October 2, 2015 \n", "15 CO 34112 United Fidelity Bank, fsb July 10, 2015 \n", "16 IL 57772 Republic Bank of Chicago May 8, 2015 \n", "17 PR 32102 Banco Popular de Puerto Rico February 27, 2015 \n", "18 GA 33938 First-Citizens Bank & Trust Company February 13, 2015 \n", "19 IL 20290 United Fidelity Bank, fsb January 23, 2015 \n", "20 FL 17557 First NBC Bank January 16, 2015 \n", "21 MN 34983 BankVista December 19, 2014 \n", "22 CA 34738 Bank of Southern California, N.A. November 7, 2014 \n", "23 IL 916 State Bank of Texas October 24, 2014 \n", "24 MD 4862 Howard Bank October 17, 2014 \n", "25 IL 28462 Providence Bank, LLC July 25, 2014 \n", "26 GA 58125 Community & Southern Bank July 18, 2014 \n", "27 OK 12483 Alva State Bank & Trust Company June 27, 2014 \n", "28 FL 21793 Landmark Bank, National Association June 20, 2014 \n", "29 IL 10450 Great Southern Bank June 20, 2014 \n", ".. .. ... ... ... 
\n", "525 AR 33901 Pulaski Bank and Trust Company May 9, 2008 \n", "526 MO 1971 Security Bank March 7, 2008 \n", "527 MO 24660 Liberty Bank and Trust Company January 25, 2008 \n", "528 OH 16848 The Citizens Banking Company October 4, 2007 \n", "529 GA 32575 ING DIRECT September 28, 2007 \n", "530 PA 35353 Allegheny Valley Bank of Pittsburgh February 2, 2007 \n", "531 UT 1249 Far West Bank June 25, 2004 \n", "532 NY 26778 Union State Bank March 19, 2004 \n", "533 FL 26838 Hancock Bank of Florida March 12, 2004 \n", "534 NJ 31330 No Acquirer February 14, 2004 \n", "535 PA 27203 Earthstar Bank November 14, 2003 \n", "536 WI 11639 The Park Bank May 9, 2003 \n", "537 CA 27094 Beal Bank February 7, 2003 \n", "538 LA 16445 Sabine State Bank & Trust December 17, 2002 \n", "539 TN 9961 No Acquirer November 8, 2002 \n", "540 GA 33784 No Acquirer September 30, 2002 \n", "541 IL 29355 Chicago Community Bank June 27, 2002 \n", "542 CT 19183 Hudson United Bank June 26, 2002 \n", "543 MI 34979 No Acquirer March 28, 2002 \n", "544 FL 26652 Bank Leumi USA March 1, 2002 \n", "545 AZ 22314 No Acquirer February 7, 2002 \n", "546 OH 8966 The State Bank & Trust Company February 1, 2002 \n", "547 TX 22002 The Security State Bank of Pecos January 18, 2002 \n", "548 FL 24382 Israel Discount Bank of New York January 11, 2002 \n", "549 AR 34248 Delta Trust & Bank September 7, 2001 \n", "550 IL 32646 Superior Federal, FSB July 27, 2001 \n", "551 OH 6629 North Valley Bank May 3, 2001 \n", "552 NH 34264 Southern New Hampshire Bank & Trust February 2, 2001 \n", "553 IL 3815 Banterra Bank of Marion December 14, 2000 \n", "554 HI 21029 Bank of the Orient October 13, 2000 \n", "\n", " Updated Date \n", "0 February 21, 2018 \n", "1 February 21, 2018 \n", "2 July 26, 2017 \n", "3 March 22, 2018 \n", "4 December 5, 2017 \n", "5 March 7, 2018 \n", "6 May 18, 2017 \n", "7 May 18, 2017 \n", "8 September 25, 2017 \n", "9 June 1, 2017 \n", "10 November 13, 2018 \n", "11 September 6, 2016 \n", "12 March 13, 
2017 \n", "13 February 19, 2018 \n", "14 July 9, 2018 \n", "15 February 20, 2018 \n", "16 July 12, 2016 \n", "17 May 13, 2015 \n", "18 April 21, 2015 \n", "19 November 15, 2017 \n", "20 November 15, 2017 \n", "21 January 3, 2018 \n", "22 November 10, 2016 \n", "23 January 6, 2016 \n", "24 February 19, 2018 \n", "25 December 12, 2016 \n", "26 October 6, 2017 \n", "27 February 21, 2018 \n", "28 February 14, 2018 \n", "29 June 26, 2015 \n", ".. ... \n", "525 August 28, 2012 \n", "526 August 28, 2012 \n", "527 October 26, 2012 \n", "528 September 12, 2016 \n", "529 August 28, 2012 \n", "530 October 27, 2010 \n", "531 April 9, 2008 \n", "532 April 9, 2008 \n", "533 April 17, 2018 \n", "534 April 9, 2008 \n", "535 October 6, 2017 \n", "536 June 5, 2012 \n", "537 October 20, 2008 \n", "538 October 20, 2004 \n", "539 March 18, 2005 \n", "540 September 11, 2006 \n", "541 October 6, 2017 \n", "542 February 14, 2012 \n", "543 March 18, 2005 \n", "544 April 9, 2008 \n", "545 February 5, 2015 \n", "546 October 25, 2012 \n", "547 November 6, 2003 \n", "548 September 21, 2015 \n", "549 October 6, 2017 \n", "550 August 19, 2014 \n", "551 November 18, 2002 \n", "552 February 18, 2003 \n", "553 March 17, 2005 \n", "554 March 17, 2005 \n", "\n", "[555 rows x 7 columns]" ] }, "execution_count": 21, "metadata": {}, "output_type": "execute_result" } ], "source": [ "banks" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Powerful no? 
Now let's turn that into an exercise.\n", "\n", "### Exercise\n", "Given the data you just extracted above, can you analyse how many banks have failed per state?\n", "\n", "Georgia (GA) should be the state with the most failed banks!\n", "\n", "*Hint: try searching the web for pandas counting occurrences* " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Data Cleaning\n", "In the course of data analysis and modeling, a significant amount of time is spent on data preparation: loading, cleaning, transforming and rearranging. Such tasks are often reported to take **up to 80%** or more of a data analyst's time. Often the way the data is stored in files isn't in the correct format and needs to be modified. Researchers usually do this on an ad-hoc basis using programming languages like Python.\n", "\n", "In this chapter we will discuss tools for handling missing data, duplicate data, string manipulation, and some other analytical data transformations.\n", "\n", "## Handling missing data\n", "Missing data occurs commonly in many data analysis applications. 
One of the goals of pandas is to make working with missing data as painless as possible.\n", "\n", "In pandas, missing numeric data is represented by `NaN` (Not a Number) and can easily be handled:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0     orange\n", "1     tomato\n", "2        NaN\n", "3    avocado\n", "dtype: object" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "string_data = pd.Series(['orange', 'tomato', np.nan, 'avocado'])\n", "string_data" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0    False\n", "1    False\n", "2     True\n", "3    False\n", "dtype: bool" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "string_data.isnull()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Furthermore, pandas treats the standard Python value `None` (of type `NoneType`, assigned with `x = None`) as missing data, functionally equivalent to `NaN`."
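This equivalence is easy to check directly — a minimal sketch (assuming the usual `np`/`pd` imports from the top of the notebook):

```python
import numpy as np
import pandas as pd

# pandas recognises both np.nan and Python's None as missing values
s = pd.Series(["orange", None, np.nan])
print(s.isnull().tolist())  # [False, True, True]
```

Both the `None` and the `np.nan` entry come back as missing, even though they are different Python objects.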
] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0 None\n", "1 tomato\n", "2 NaN\n", "3 avocado\n", "dtype: object" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "string_data[0] = None\n", "string_data" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0 True\n", "1 False\n", "2 True\n", "3 False\n", "dtype: bool" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "string_data.isnull()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here are some other methods which you can find useful:\n", " \n", "| Method | Decription |\n", "| -- | -- |\n", "| dropna | Filter axis labels based on whether values of each label have missing data|\n", "| fillna | Fill in missing data with some vallue |\n", "| isnull | Return boolean values indicating which vallues are missin |\n", "| notnull | Negation of isnull |\n", "\n", "### Exercise\n", "Remove the missing data below using the appropriate method" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0 1.0\n", "2 3.0\n", "3 4.0\n", "5 6.0\n", "dtype: float64" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "data = pd.Series([1, None, 3, 4, None, 6])\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`dropna()` by default removes any row/column that has a missing value. What if we want to remove only rows in which all of the data is missing though?" 
] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<div>\n", "<style scoped>\n", " .dataframe tbody tr th:only-of-type {\n", " vertical-align: middle;\n", " }\n", "\n", " .dataframe tbody tr th {\n", " vertical-align: top;\n", " }\n", "\n", " .dataframe thead th {\n", " text-align: right;\n", " }\n", "</style>\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>0</th>\n", " <th>1</th>\n", " <th>2</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>0</th>\n", " <td>1.0</td>\n", " <td>6.5</td>\n", " <td>3.0</td>\n", " </tr>\n", " <tr>\n", " <th>1</th>\n", " <td>1.0</td>\n", " <td>NaN</td>\n", " <td>NaN</td>\n", " </tr>\n", " <tr>\n", " <th>2</th>\n", " <td>NaN</td>\n", " <td>NaN</td>\n", " <td>NaN</td>\n", " </tr>\n", " <tr>\n", " <th>3</th>\n", " <td>NaN</td>\n", " <td>6.5</td>\n", " <td>3.0</td>\n", " </tr>\n", " </tbody>\n", "</table>\n", "</div>" ], "text/plain": [ " 0 1 2\n", "0 1.0 6.5 3.0\n", "1 1.0 NaN NaN\n", "2 NaN NaN NaN\n", "3 NaN 6.5 3.0" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "data = pd.DataFrame([[1., 6.5, 3.], [1., None, None],\n", " [None, None, None], [None, 6.5, 3.]])\n", "data" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<div>\n", "<style scoped>\n", " .dataframe tbody tr th:only-of-type {\n", " vertical-align: middle;\n", " }\n", "\n", " .dataframe tbody tr th {\n", " vertical-align: top;\n", " }\n", "\n", " .dataframe thead th {\n", " text-align: right;\n", " }\n", "</style>\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>0</th>\n", " <th>1</th>\n", " <th>2</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>0</th>\n", " <td>1.0</td>\n", " <td>6.5</td>\n", " <td>3.0</td>\n", " </tr>\n", " 
</tbody>\n", "</table>\n", "</div>" ], "text/plain": [ " 0 1 2\n", "0 1.0 6.5 3.0" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "data.dropna()" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<div>\n", "<style scoped>\n", " .dataframe tbody tr th:only-of-type {\n", " vertical-align: middle;\n", " }\n", "\n", " .dataframe tbody tr th {\n", " vertical-align: top;\n", " }\n", "\n", " .dataframe thead th {\n", " text-align: right;\n", " }\n", "</style>\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>0</th>\n", " <th>1</th>\n", " <th>2</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>0</th>\n", " <td>1.0</td>\n", " <td>6.5</td>\n", " <td>3.0</td>\n", " </tr>\n", " <tr>\n", " <th>1</th>\n", " <td>1.0</td>\n", " <td>NaN</td>\n", " <td>NaN</td>\n", " </tr>\n", " <tr>\n", " <th>3</th>\n", " <td>NaN</td>\n", " <td>6.5</td>\n", " <td>3.0</td>\n", " </tr>\n", " </tbody>\n", "</table>\n", "</div>" ], "text/plain": [ " 0 1 2\n", "0 1.0 6.5 3.0\n", "1 1.0 NaN NaN\n", "3 NaN 6.5 3.0" ] }, "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ "data.dropna(how=\"all\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise\n", "That's fine if we want to remove missing data, what if we want to fill in missing data? Do you know of a way? Try to fill in all of the missing values from the data below with **0s**" ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [], "source": [ "data = pd.DataFrame([[1., 6.5, 3.], [2., None, None],\n", " [None, None, None], [None, 1.5, 9.]])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "pandas also allows us to interpolate the data instead of just filling it with a constant. 
The easiest way to do that is shown below, but there are more complex ones that are not covered in this course." ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<div>\n", "<style scoped>\n", " .dataframe tbody tr th:only-of-type {\n", " vertical-align: middle;\n", " }\n", "\n", " .dataframe tbody tr th {\n", " vertical-align: top;\n", " }\n", "\n", " .dataframe thead th {\n", " text-align: right;\n", " }\n", "</style>\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>0</th>\n", " <th>1</th>\n", " <th>2</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>0</th>\n", " <td>1.0</td>\n", " <td>6.5</td>\n", " <td>3.0</td>\n", " </tr>\n", " <tr>\n", " <th>1</th>\n", " <td>2.0</td>\n", " <td>6.5</td>\n", " <td>3.0</td>\n", " </tr>\n", " <tr>\n", " <th>2</th>\n", " <td>2.0</td>\n", " <td>6.5</td>\n", " <td>3.0</td>\n", " </tr>\n", " <tr>\n", " <th>3</th>\n", " <td>2.0</td>\n", " <td>1.5</td>\n", " <td>9.0</td>\n", " </tr>\n", " </tbody>\n", "</table>\n", "</div>" ], "text/plain": [ " 0 1 2\n", "0 1.0 6.5 3.0\n", "1 2.0 6.5 3.0\n", "2 2.0 6.5 3.0\n", "3 2.0 1.5 9.0" ] }, "execution_count": 22, "metadata": {}, "output_type": "execute_result" } ], "source": [ "data.fillna(method=\"ffill\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you want you can explore the other capabilities of [`fillna`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.fillna.html)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Data Transformation\n", "### Removing duplicates\n", "Duplicate data can be a serious issue, luckily pandas offers a simple way to remove duplicates" ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<div>\n", "<style scoped>\n", " .dataframe tbody tr th:only-of-type {\n", " vertical-align: middle;\n", " }\n", 
"\n", " .dataframe tbody tr th {\n", " vertical-align: top;\n", " }\n", "\n", " .dataframe thead th {\n", " text-align: right;\n", " }\n", "</style>\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>0</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>0</th>\n", " <td>1</td>\n", " </tr>\n", " <tr>\n", " <th>1</th>\n", " <td>2</td>\n", " </tr>\n", " <tr>\n", " <th>2</th>\n", " <td>3</td>\n", " </tr>\n", " <tr>\n", " <th>3</th>\n", " <td>4</td>\n", " </tr>\n", " <tr>\n", " <th>4</th>\n", " <td>3</td>\n", " </tr>\n", " <tr>\n", " <th>5</th>\n", " <td>2</td>\n", " </tr>\n", " <tr>\n", " <th>6</th>\n", " <td>1</td>\n", " </tr>\n", " </tbody>\n", "</table>\n", "</div>" ], "text/plain": [ " 0\n", "0 1\n", "1 2\n", "2 3\n", "3 4\n", "4 3\n", "5 2\n", "6 1" ] }, "execution_count": 23, "metadata": {}, "output_type": "execute_result" } ], "source": [ "data = pd.DataFrame([1, 2, 3, 4, 3, 2, 1])\n", "data" ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<div>\n", "<style scoped>\n", " .dataframe tbody tr th:only-of-type {\n", " vertical-align: middle;\n", " }\n", "\n", " .dataframe tbody tr th {\n", " vertical-align: top;\n", " }\n", "\n", " .dataframe thead th {\n", " text-align: right;\n", " }\n", "</style>\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>0</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>0</th>\n", " <td>1</td>\n", " </tr>\n", " <tr>\n", " <th>1</th>\n", " <td>2</td>\n", " </tr>\n", " <tr>\n", " <th>2</th>\n", " <td>3</td>\n", " </tr>\n", " <tr>\n", " <th>3</th>\n", " <td>4</td>\n", " </tr>\n", " </tbody>\n", "</table>\n", "</div>" ], "text/plain": [ " 0\n", "0 1\n", "1 2\n", "2 3\n", "3 4" ] }, "execution_count": 24, "metadata": {}, "output_type": "execute_result" } ], "source": [ "data.drop_duplicates()" ] }, { 
"cell_type": "markdown", "metadata": {}, "source": [ "You can also select which rows to keep" ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<div>\n", "<style scoped>\n", " .dataframe tbody tr th:only-of-type {\n", " vertical-align: middle;\n", " }\n", "\n", " .dataframe tbody tr th {\n", " vertical-align: top;\n", " }\n", "\n", " .dataframe thead th {\n", " text-align: right;\n", " }\n", "</style>\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>0</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>3</th>\n", " <td>4</td>\n", " </tr>\n", " <tr>\n", " <th>4</th>\n", " <td>3</td>\n", " </tr>\n", " <tr>\n", " <th>5</th>\n", " <td>2</td>\n", " </tr>\n", " <tr>\n", " <th>6</th>\n", " <td>1</td>\n", " </tr>\n", " </tbody>\n", "</table>\n", "</div>" ], "text/plain": [ " 0\n", "3 4\n", "4 3\n", "5 2\n", "6 1" ] }, "execution_count": 25, "metadata": {}, "output_type": "execute_result" } ], "source": [ "data.drop_duplicates(keep=\"last\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Replacing data\n", "You've already seen how you can fill in missing data with `fillna`. That is actually a special case of more general value replacement. That is done via the `replace` method.\n", "\n", "Let's consider an example where the dataset given to us had `-999` as sentinel values for missing data instead of `NaN`." 
] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<div>\n", "<style scoped>\n", " .dataframe tbody tr th:only-of-type {\n", " vertical-align: middle;\n", " }\n", "\n", " .dataframe tbody tr th {\n", " vertical-align: top;\n", " }\n", "\n", " .dataframe thead th {\n", " text-align: right;\n", " }\n", "</style>\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>0</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>0</th>\n", " <td>1.0</td>\n", " </tr>\n", " <tr>\n", " <th>1</th>\n", " <td>-999.0</td>\n", " </tr>\n", " <tr>\n", " <th>2</th>\n", " <td>2.0</td>\n", " </tr>\n", " <tr>\n", " <th>3</th>\n", " <td>-999.0</td>\n", " </tr>\n", " <tr>\n", " <th>4</th>\n", " <td>3.0</td>\n", " </tr>\n", " <tr>\n", " <th>5</th>\n", " <td>4.0</td>\n", " </tr>\n", " <tr>\n", " <th>6</th>\n", " <td>-999.0</td>\n", " </tr>\n", " <tr>\n", " <th>7</th>\n", " <td>-999.0</td>\n", " </tr>\n", " <tr>\n", " <th>8</th>\n", " <td>7.0</td>\n", " </tr>\n", " </tbody>\n", "</table>\n", "</div>" ], "text/plain": [ " 0\n", "0 1.0\n", "1 -999.0\n", "2 2.0\n", "3 -999.0\n", "4 3.0\n", "5 4.0\n", "6 -999.0\n", "7 -999.0\n", "8 7.0" ] }, "execution_count": 27, "metadata": {}, "output_type": "execute_result" } ], "source": [ "data = pd.DataFrame([1., -999., 2., -999., 3., 4., -999, -999, 7.])\n", "data" ] }, { "cell_type": "code", "execution_count": 29, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<div>\n", "<style scoped>\n", " .dataframe tbody tr th:only-of-type {\n", " vertical-align: middle;\n", " }\n", "\n", " .dataframe tbody tr th {\n", " vertical-align: top;\n", " }\n", "\n", " .dataframe thead th {\n", " text-align: right;\n", " }\n", "</style>\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>0</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " 
<th>0</th>\n", " <td>1.0</td>\n", " </tr>\n", " <tr>\n", " <th>1</th>\n", " <td>NaN</td>\n", " </tr>\n", " <tr>\n", " <th>2</th>\n", " <td>2.0</td>\n", " </tr>\n", " <tr>\n", " <th>3</th>\n", " <td>NaN</td>\n", " </tr>\n", " <tr>\n", " <th>4</th>\n", " <td>3.0</td>\n", " </tr>\n", " <tr>\n", " <th>5</th>\n", " <td>4.0</td>\n", " </tr>\n", " <tr>\n", " <th>6</th>\n", " <td>NaN</td>\n", " </tr>\n", " <tr>\n", " <th>7</th>\n", " <td>NaN</td>\n", " </tr>\n", " <tr>\n", " <th>8</th>\n", " <td>7.0</td>\n", " </tr>\n", " </tbody>\n", "</table>\n", "</div>" ], "text/plain": [ " 0\n", "0 1.0\n", "1 NaN\n", "2 2.0\n", "3 NaN\n", "4 3.0\n", "5 4.0\n", "6 NaN\n", "7 NaN\n", "8 7.0" ] }, "execution_count": 29, "metadata": {}, "output_type": "execute_result" } ], "source": [ "data.replace(-999, np.nan)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Renaming axis indexes\n", "Similar to `replace` you can also rename the labels of your axis" ] }, { "cell_type": "code", "execution_count": 31, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<div>\n", "<style scoped>\n", " .dataframe tbody tr th:only-of-type {\n", " vertical-align: middle;\n", " }\n", "\n", " .dataframe tbody tr th {\n", " vertical-align: top;\n", " }\n", "\n", " .dataframe thead th {\n", " text-align: right;\n", " }\n", "</style>\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>0</th>\n", " <th>1</th>\n", " <th>2</th>\n", " <th>3</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>Edinburgh</th>\n", " <td>0</td>\n", " <td>1</td>\n", " <td>2</td>\n", " <td>3</td>\n", " </tr>\n", " <tr>\n", " <th>Glasgow</th>\n", " <td>4</td>\n", " <td>5</td>\n", " <td>6</td>\n", " <td>7</td>\n", " </tr>\n", " <tr>\n", " <th>Aberdeen</th>\n", " <td>8</td>\n", " <td>9</td>\n", " <td>10</td>\n", " <td>11</td>\n", " </tr>\n", " </tbody>\n", "</table>\n", "</div>" ], "text/plain": [ " 0 1 2 3\n", "Edinburgh 0 1 2 3\n", 
"Glasgow 4 5 6 7\n", "Aberdeen 8 9 10 11" ] }, "execution_count": 31, "metadata": {}, "output_type": "execute_result" } ], "source": [ "data = pd.DataFrame(np.arange(12).reshape((3, 4)),\n", " index=['Edinburgh', 'Glasgow', 'Aberdeen'])\n", "data" ] }, { "cell_type": "code", "execution_count": 34, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<div>\n", "<style scoped>\n", " .dataframe tbody tr th:only-of-type {\n", " vertical-align: middle;\n", " }\n", "\n", " .dataframe tbody tr th {\n", " vertical-align: top;\n", " }\n", "\n", " .dataframe thead th {\n", " text-align: right;\n", " }\n", "</style>\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>one</th>\n", " <th>two</th>\n", " <th>three</th>\n", " <th>four</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>Edinburgh</th>\n", " <td>0</td>\n", " <td>1</td>\n", " <td>2</td>\n", " <td>3</td>\n", " </tr>\n", " <tr>\n", " <th>Glasgow</th>\n", " <td>4</td>\n", " <td>5</td>\n", " <td>6</td>\n", " <td>7</td>\n", " </tr>\n", " <tr>\n", " <th>Aberdeen</th>\n", " <td>8</td>\n", " <td>9</td>\n", " <td>10</td>\n", " <td>11</td>\n", " </tr>\n", " </tbody>\n", "</table>\n", "</div>" ], "text/plain": [ " one two three four\n", "Edinburgh 0 1 2 3\n", "Glasgow 4 5 6 7\n", "Aberdeen 8 9 10 11" ] }, "execution_count": 34, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# create a map using a standard Python dictionary\n", "mapping = { 0 : \"one\",\n", " 1 : \"two\",\n", " 2 : \"three\",\n", " 3 : \"four\"}\n", "\n", "# now rename the columns\n", "data.rename(columns=mapping)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Rows can be renamed in a similar fashion" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Detection and Filtering Outliers\n", "Filtering or transforming outliers is largely a matter of applying array operations. 
Consider a DataFrame with some normally distributed data:" ] }, { "cell_type": "code", "execution_count": 46, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<div>\n", "<style scoped>\n", " .dataframe tbody tr th:only-of-type {\n", " vertical-align: middle;\n", " }\n", "\n", " .dataframe tbody tr th {\n", " vertical-align: top;\n", " }\n", "\n", " .dataframe thead th {\n", " text-align: right;\n", " }\n", "</style>\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>0</th>\n", " <th>1</th>\n", " <th>2</th>\n", " <th>3</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>count</th>\n", " <td>1000.000000</td>\n", " <td>1000.000000</td>\n", " <td>1000.000000</td>\n", " <td>1000.000000</td>\n", " </tr>\n", " <tr>\n", " <th>mean</th>\n", " <td>0.010460</td>\n", " <td>0.007875</td>\n", " <td>-0.032370</td>\n", " <td>-0.004241</td>\n", " </tr>\n", " <tr>\n", " <th>std</th>\n", " <td>0.972291</td>\n", " <td>1.001179</td>\n", " <td>1.008105</td>\n", " <td>1.025515</td>\n", " </tr>\n", " <tr>\n", " <th>min</th>\n", " <td>-3.144009</td>\n", " <td>-3.090688</td>\n", " <td>-3.085149</td>\n", " <td>-3.287256</td>\n", " </tr>\n", " <tr>\n", " <th>25%</th>\n", " <td>-0.610280</td>\n", " <td>-0.662137</td>\n", " <td>-0.625019</td>\n", " <td>-0.701817</td>\n", " </tr>\n", " <tr>\n", " <th>50%</th>\n", " <td>0.041220</td>\n", " <td>0.069653</td>\n", " <td>-0.047272</td>\n", " <td>0.014475</td>\n", " </tr>\n", " <tr>\n", " <th>75%</th>\n", " <td>0.678561</td>\n", " <td>0.721522</td>\n", " <td>0.621929</td>\n", " <td>0.728463</td>\n", " </tr>\n", " <tr>\n", " <th>max</th>\n", " <td>2.978960</td>\n", " <td>3.103928</td>\n", " <td>3.185358</td>\n", " <td>3.081887</td>\n", " </tr>\n", " </tbody>\n", "</table>\n", "</div>" ], "text/plain": [ " 0 1 2 3\n", "count 1000.000000 1000.000000 1000.000000 1000.000000\n", "mean 0.010460 0.007875 -0.032370 -0.004241\n", "std 0.972291 1.001179 1.008105 
1.025515\n", "min      -3.144009    -3.090688    -3.085149    -3.287256\n", "25%      -0.610280    -0.662137    -0.625019    -0.701817\n", "50%       0.041220     0.069653    -0.047272     0.014475\n", "75%       0.678561     0.721522     0.621929     0.728463\n", "max       2.978960     3.103928     3.185358     3.081887" ] }, "execution_count": 46, "metadata": {}, "output_type": "execute_result" } ], "source": [ "data = pd.DataFrame(np.random.randn(1000, 4))\n", "data.describe()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Suppose you now want to cap any values exceeding 3 in absolute value in one of the columns" ] }, { "cell_type": "code", "execution_count": 47, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "151    3.185358\n", "808   -3.085149\n", "Name: 2, dtype: float64" ] }, "execution_count": 47, "metadata": {}, "output_type": "execute_result" } ], "source": [ "col = data[2]\n", "col[np.abs(col) > 3]" ] }, { "cell_type": "code", "execution_count": 48, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<div>\n", "<style scoped>\n", "    .dataframe tbody tr th:only-of-type {\n", "        vertical-align: middle;\n", "    }\n", "\n", "    .dataframe tbody tr th {\n", "        vertical-align: top;\n", "    }\n", "\n", "    .dataframe thead th {\n", "        text-align: right;\n", "    }\n", "</style>\n", "<table border=\"1\" class=\"dataframe\">\n", "  <thead>\n", "    <tr style=\"text-align: right;\">\n", "      <th></th>\n", "      <th>0</th>\n", "      <th>1</th>\n", "      <th>2</th>\n", "      <th>3</th>\n", "    </tr>\n", "  </thead>\n", "  <tbody>\n", "    <tr>\n", "      <th>count</th>\n", "      <td>1000.000000</td>\n", "      <td>1000.000000</td>\n", "      <td>1000.000000</td>\n", "      <td>1000.000000</td>\n", "    </tr>\n", "    <tr>\n", "      <th>mean</th>\n", "      <td>0.010674</td>\n", "      <td>0.007862</td>\n", "      <td>-0.032470</td>\n", "      <td>-0.004035</td>\n", "    </tr>\n", "    <tr>\n", "      <th>std</th>\n", "      <td>0.971615</td>\n", "      <td>1.000586</td>\n", "      <td>1.007275</td>\n", "      <td>1.024391</td>\n", "    </tr>\n", "    <tr>\n", "      <th>min</th>\n", "      <td>-3.000000</td>\n", "      <td>-3.000000</td>\n", "      <td>-3.000000</td>\n", "      <td>-3.000000</td>\n", "    </tr>\n", "    <tr>\n", "      <th>25%</th>\n", "      <td>-0.610280</td>\n", "      <td>-0.662137</td>\n", "      <td>-0.625019</td>\n", "      <td>-0.701817</td>\n", "    </tr>\n", "    <tr>\n", "      <th>50%</th>\n", "      <td>0.041220</td>\n", "      <td>0.069653</td>\n", "      <td>-0.047272</td>\n", "      <td>0.014475</td>\n", "    </tr>\n", "    <tr>\n", "      <th>75%</th>\n", "      <td>0.678561</td>\n", "      <td>0.721522</td>\n", "      <td>0.621929</td>\n", "      <td>0.728463</td>\n", "    </tr>\n", "    <tr>\n", "      <th>max</th>\n", "      <td>2.978960</td>\n", "      <td>3.000000</td>\n", "      <td>3.000000</td>\n", "      <td>3.000000</td>\n", "    </tr>\n", "  </tbody>\n", "</table>\n", "</div>" ], "text/plain": [ "                 0            1            2            3\n", "count  1000.000000  1000.000000  1000.000000  1000.000000\n", "mean      0.010674     0.007862    -0.032470    -0.004035\n", "std       0.971615     1.000586     1.007275     1.024391\n", "min      -3.000000    -3.000000    -3.000000    -3.000000\n", "25%      -0.610280    -0.662137    -0.625019    -0.701817\n", "50%       0.041220     0.069653    -0.047272     0.014475\n", "75%       0.678561     0.721522     0.621929     0.728463\n", "max       2.978960     3.000000     3.000000     3.000000" ] }, "execution_count": 48, "metadata": {}, "output_type": "execute_result" } ], "source": [ "data[np.abs(data) > 3] = np.sign(data) * 3\n", "data.describe()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Permutation and Random Sampling" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Permuting (randomly reordering) rows in pandas is easy using the `numpy.random.permutation` function. 
Calling permutation with the length of the axis you want to permute produces an array of integers indicating the new ordering:" ] }, { "cell_type": "code", "execution_count": 50, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<div>\n", "<style scoped>\n", " .dataframe tbody tr th:only-of-type {\n", " vertical-align: middle;\n", " }\n", "\n", " .dataframe tbody tr th {\n", " vertical-align: top;\n", " }\n", "\n", " .dataframe thead th {\n", " text-align: right;\n", " }\n", "</style>\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>0</th>\n", " <th>1</th>\n", " <th>2</th>\n", " <th>3</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>0</th>\n", " <td>0</td>\n", " <td>1</td>\n", " <td>2</td>\n", " <td>3</td>\n", " </tr>\n", " <tr>\n", " <th>1</th>\n", " <td>4</td>\n", " <td>5</td>\n", " <td>6</td>\n", " <td>7</td>\n", " </tr>\n", " <tr>\n", " <th>2</th>\n", " <td>8</td>\n", " <td>9</td>\n", " <td>10</td>\n", " <td>11</td>\n", " </tr>\n", " <tr>\n", " <th>3</th>\n", " <td>12</td>\n", " <td>13</td>\n", " <td>14</td>\n", " <td>15</td>\n", " </tr>\n", " <tr>\n", " <th>4</th>\n", " <td>16</td>\n", " <td>17</td>\n", " <td>18</td>\n", " <td>19</td>\n", " </tr>\n", " </tbody>\n", "</table>\n", "</div>" ], "text/plain": [ " 0 1 2 3\n", "0 0 1 2 3\n", "1 4 5 6 7\n", "2 8 9 10 11\n", "3 12 13 14 15\n", "4 16 17 18 19" ] }, "execution_count": 50, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df = pd.DataFrame(np.arange(5 * 4).reshape((5, 4)))\n", "df" ] }, { "cell_type": "code", "execution_count": 51, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([3, 0, 2, 1, 4])" ] }, "execution_count": 51, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# generate random order\n", "sampler = np.random.permutation(5)\n", "sampler" ] }, { "cell_type": "code", "execution_count": 52, "metadata": {}, "outputs": [ { "data": { "text/html": [ 
"<div>\n", "<style scoped>\n", " .dataframe tbody tr th:only-of-type {\n", " vertical-align: middle;\n", " }\n", "\n", " .dataframe tbody tr th {\n", " vertical-align: top;\n", " }\n", "\n", " .dataframe thead th {\n", " text-align: right;\n", " }\n", "</style>\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>0</th>\n", " <th>1</th>\n", " <th>2</th>\n", " <th>3</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>3</th>\n", " <td>12</td>\n", " <td>13</td>\n", " <td>14</td>\n", " <td>15</td>\n", " </tr>\n", " <tr>\n", " <th>0</th>\n", " <td>0</td>\n", " <td>1</td>\n", " <td>2</td>\n", " <td>3</td>\n", " </tr>\n", " <tr>\n", " <th>2</th>\n", " <td>8</td>\n", " <td>9</td>\n", " <td>10</td>\n", " <td>11</td>\n", " </tr>\n", " <tr>\n", " <th>1</th>\n", " <td>4</td>\n", " <td>5</td>\n", " <td>6</td>\n", " <td>7</td>\n", " </tr>\n", " <tr>\n", " <th>4</th>\n", " <td>16</td>\n", " <td>17</td>\n", " <td>18</td>\n", " <td>19</td>\n", " </tr>\n", " </tbody>\n", "</table>\n", "</div>" ], "text/plain": [ " 0 1 2 3\n", "3 12 13 14 15\n", "0 0 1 2 3\n", "2 8 9 10 11\n", "1 4 5 6 7\n", "4 16 17 18 19" ] }, "execution_count": 52, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df.take(sampler)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To select a random subset without replacement, you can use the sample method:" ] }, { "cell_type": "code", "execution_count": 54, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<div>\n", "<style scoped>\n", " .dataframe tbody tr th:only-of-type {\n", " vertical-align: middle;\n", " }\n", "\n", " .dataframe tbody tr th {\n", " vertical-align: top;\n", " }\n", "\n", " .dataframe thead th {\n", " text-align: right;\n", " }\n", "</style>\n", "<table border=\"1\" class=\"dataframe\">\n", " <thead>\n", " <tr style=\"text-align: right;\">\n", " <th></th>\n", " <th>0</th>\n", " <th>1</th>\n", " <th>2</th>\n", " 
<th>3</th>\n", " </tr>\n", " </thead>\n", " <tbody>\n", " <tr>\n", " <th>4</th>\n", " <td>16</td>\n", " <td>17</td>\n", " <td>18</td>\n", " <td>19</td>\n", " </tr>\n", " <tr>\n", " <th>2</th>\n", " <td>8</td>\n", " <td>9</td>\n", " <td>10</td>\n", " <td>11</td>\n", " </tr>\n", " <tr>\n", " <th>0</th>\n", " <td>0</td>\n", " <td>1</td>\n", " <td>2</td>\n", " <td>3</td>\n", " </tr>\n", " </tbody>\n", "</table>\n", "</div>" ], "text/plain": [ " 0 1 2 3\n", "4 16 17 18 19\n", "2 8 9 10 11\n", "0 0 1 2 3" ] }, "execution_count": 54, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df.sample(n=3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## String manipulation\n", "Python has long been popular for raw data manipulation, in part due to its ease of use for string and text processing. Most text operations are made simple with the string object's built-in methods. For more complex pattern matching and text manipulation, regular expressions may be needed." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Basics\n", "Let's refresh what normal `str` (string) objects are capable of in Python" ] }, { "cell_type": "code", "execution_count": 55, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "['Edinburgh', 'is', 'great']" ] }, "execution_count": 55, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# complex strings can be broken into small bits\n", "val = \"Edinburgh is great\"\n", "val.split(\" \")" ] }, { "cell_type": "code", "execution_count": 56, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'Edinburgh::is::great'" ] }, "execution_count": 56, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# substrings can be concatenated together with +\n", "first, second, last = val.split(\" \")\n", "first + \"::\" + second + \"::\" + last" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Remember that strings are just sequences of individual characters" ] }, { "cell_type": 
"code", "execution_count": 57, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "E\n", "d\n", "i\n", "n\n", "b\n", "u\n", "r\n", "g\n", "h\n" ] } ], "source": [ "val = \"Edinburgh\"\n", "for each in val:\n", " print(each)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Strings also provide methods for locating substrings:" ] }, { "cell_type": "code", "execution_count": 58, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "3" ] }, "execution_count": 58, "metadata": {}, "output_type": "execute_result" } ], "source": [ "val.find(\"n\")" ] }, { "cell_type": "code", "execution_count": 60, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "-1" ] }, "execution_count": 60, "metadata": {}, "output_type": "execute_result" } ], "source": [ "val.find(\"x\") # -1 means the substring was not found" ] }, { "cell_type": "code", "execution_count": 61, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'EDINBURGH'" ] }, "execution_count": 61, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# and of course remember about upper() and lower()\n", "val.upper()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you want to learn more about strings you can always refer to the [Python manual](https://docs.python.org/2/library/string.html)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Regular expressions\n", "provide a flexible way to search or match (often more complex) string patterns in text. A single expression, commonly called *regex*, is a string formed according to the regular expression language. 
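", "\n", "\n", "As a small illustrative sketch (the example string here is invented), the pattern `[0-9]+` matches one or more consecutive digits:\n", "\n", "```python\n", "import re\n", "re.findall('[0-9]+', 'room 101, floor 7')  # ['101', '7']\n", "```\n", "\n", "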
Python's built-in `re` module is responsible for applying regular expressions to strings" ] }, { "cell_type": "code", "execution_count": 62, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'foo bar\\t baz \\tqux'" ] }, "execution_count": 62, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import re\n", "text = \"foo bar\\t baz \\tqux\"\n", "text" ] }, { "cell_type": "code", "execution_count": 67, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "['foo', 'bar', 'baz', 'qux']" ] }, "execution_count": 67, "metadata": {}, "output_type": "execute_result" } ], "source": [ "re.split(r\"\\s+\", text)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This expression splits the text on any whitespace, including tab characters (`\\t`): the `\\s` pattern matches a single whitespace character, and the `+` after it matches one or more sequential occurrences.\n", "\n", "Let's have a look at a more complex example - identifying email addresses in a text file:" ] }, { "cell_type": "code", "execution_count": 68, "metadata": {}, "outputs": [], "source": [ "text = \"\"\"Dave dave@google.com\n", "Steve steve@gmail.com\n", "Rob rob@gmail.com\n", "Ryan ryan@yahoo.com\n", "\"\"\"\n", "\n", "# pattern to be used for searching\n", "pattern = r'[A-Z0-9._%+-]+@[A-Z0-9.-]+\\.[A-Z]{2,4}'\n", "\n", "# re.IGNORECASE makes the regex case-insensitive\n", "regex = re.compile(pattern, flags=re.IGNORECASE)" ] }, { "cell_type": "code", "execution_count": 69, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "['dave@google.com', 'steve@gmail.com', 'rob@gmail.com', 'ryan@yahoo.com']" ] }, "execution_count": 69, "metadata": {}, "output_type": "execute_result" } ], "source": [ "regex.findall(text)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's dissect the regex part by part:\n", "```\n", "pattern = r'[A-Z0-9._%+-]+@[A-Z0-9.-]+\\.[A-Z]{2,4}'\n", "```\n", "\n", "- the `r` prefix before the string signals that the 
string is a *raw* string: backslashes are kept as literal characters, so `\\n` stays as a backslash followed by `n` instead of being interpreted as a newline\n", "- `A-Z` matches any letter from A to Z; on its own this covers only uppercase letters, but the `re.IGNORECASE` flag used above makes the match case-insensitive\n", "- `0-9` similarly matches any digit from 0 to 9\n", "- the characters `._%+-` are matched literally\n", "- the square brackets `[ ]` define a *character class* that matches any single character listed inside them. For example `[A-Z0-9._%+-]` matches one character that is a letter, a digit, or one of `._%+-`\n", "- `+` means one or more repetitions of the preceding pattern\n", "- `{2,4}` means between 2 and 4 repetitions of the preceding pattern\n", "\n", "To summarise, the pattern above matches one or more letters, digits or characters `._%+-`, followed by `@`, then one or more letters, digits, dots or hyphens, followed by a `.` and a top-level domain of 2 to 4 letters." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Regular expressions and pandas\n", "Let's see how they can be combined by replicating the example above:" ] }, { "cell_type": "code", "execution_count": 76, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Dave Daves email dave@google.com\n", "Steve Steves email steve@gmail.com\n", "Rob Robs rob@gmail.com\n", "Wes NaN\n", "dtype: object" ] }, "execution_count": 76, "metadata": {}, "output_type": "execute_result" } ], "source": [ "data = pd.Series({'Dave': 'Daves email dave@google.com', 'Steve': 'Steves email steve@gmail.com',\n", " 'Rob': 'Robs rob@gmail.com', 'Wes': np.nan})\n", "data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can reuse the same `pattern` variable from above:" ] }, { "cell_type": "code", "execution_count": 77, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Dave [dave@google.com]\n", "Steve [steve@gmail.com]\n", "Rob [rob@gmail.com]\n", "Wes NaN\n", "dtype: object" ] }, "execution_count": 77, "metadata": {}, "output_type": "execute_result" } ], "source": [ "data.str.findall(pattern, flags=re.IGNORECASE)" ] }, { 
"cell_type": "markdown", "metadata": {}, "source": [ "pandas also offers the more standard string operations. For example, we can check whether a string is contained in each row:" ] }, { "cell_type": "code", "execution_count": 78, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Dave False\n", "Steve True\n", "Rob True\n", "Wes NaN\n", "dtype: object" ] }, "execution_count": 78, "metadata": {}, "output_type": "execute_result" } ], "source": [ "data.str.contains(\"gmail\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Many more of these methods exist:\n", " \n", " \n", "| Methods | Description |\n", "| -- | -- |\n", "| cat | Concatenate strings element-wise with optional delimiter |\n", "| contains | Return boolean array if each string contains pattern/regex |\n", "| count | Count occurrences of a pattern |\n", "| extract | Use a regex with groups to extract one or more strings from a Series |\n", "| findall | Compute list of all occurrences of pattern/regex for each string |\n", "| get | Index into each element |\n", "| isdecimal | Checks if the string is a decimal number |\n", "| isdigit | Checks if the string is a digit |\n", "| islower | Checks if the string is in lower case |\n", "| isupper | Checks if the string is in upper case |\n", "| join | Join strings in each element of the Series with passed separator |\n", "| len | Compute length of each string |\n", "| lower, upper | Convert cases |\n", "| match | Returns matched groups as a list |\n", "| pad | Adds whitespace to left, right or both sides of strings |\n", "| repeat | Duplicate string values |\n", "| slice | Slice each string in the Series |\n", "| split | Split strings on delimiter or regular expression |" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": 
"3.6.6" } }, "nbformat": 4, "nbformat_minor": 2 }