pull/1/head
cif2cif 8 years ago
parent 034d3fe808
commit de9e582e18

@@ -0,0 +1,110 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![](files/images/EscUpmPolit_p.gif \"UPM\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Course Notes for Learning Intelligent Systems"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Department of Telematic Engineering Systems, Universidad Politécnica de Madrid, © 2016 Carlos A. Iglesias"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Introduction to Machine Learning\n",
"\n",
"This lecture provides a quick introduction to Machine Learning in Python using the Iris dataset as an example. \n",
"\n",
"In this session we will focus on applying multiclass classification algorithms."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Table of Contents"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"1. [Home](2_0_0_Intro_ML.ipynb)\n",
" 1. [Objectives](2_0_1_Objectives.ipynb)\n",
"1. [What is scikit-learn](2_1_Intro_ScikitLearn.ipynb)\n",
"1. [Reading Data](2_2_Read_Data.ipynb)\n",
"2. [Visualisation](2_3_0_Visualisation.ipynb)\n",
" 1. [Advanced visualisation](2_3_1_Advanced_Visualisation.ipynb)\n",
"3. [Preprocessing](2_4_Preprocessing.ipynb)\n",
"4. [Machine learning](2_5_0_Machine_Learning.ipynb)\n",
" 1. [kNN Model](2_5_1_kNN_Model.ipynb)\n",
" 1. [Decision Tree Learning Model](2_5_2_Decision_Tree_Model.ipynb)\n",
"4. [Model tuning](2_6_Model_Tuning.ipynb)\n",
"4. [Model persistence](2_7_Model_Persistence.ipynb)\n",
"4. [Conclusions](2_8_Conclusions.ipynb)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## References"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"* [Scikit-learn web page](http://scikit-learn.org/stable/)\n",
"* [Scikit-learn videos](http://blog.kaggle.com/author/kevin-markham/) and [notebooks](https://github.com/justmarkham/scikit-learn-videos) by Kevin Marham\n",
"* [Learning scikit-learn: Machine Learning in Python](http://proquest.safaribooksonline.com/book/programming/python/9781783281930/1dot-machine-learning-a-gentle-introduction/ch01s02_html), Raúl Garreta; Guillermo Moncecchi, Packt Publishing, 2013.\n",
"* [Python Machine Learning](http://proquest.safaribooksonline.com/book/programming/python/9781783555130), Sebastian Raschka, Packt Publishing, 2015."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Licence\n",
"The notebook is freely licensed under under the [Creative Commons Attribution Share-Alike license](https://creativecommons.org/licenses/by/2.0/). \n",
"\n",
"© 2016 Carlos A. Iglesias, Universidad Politécnica de Madrid."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.5.1"
}
},
"nbformat": 4,
"nbformat_minor": 0
}

@@ -0,0 +1,103 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![](files/images/EscUpmPolit_p.gif \"UPM\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Course Notes for Learning Intelligent Systems"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Department of Telematic Engineering Systems, Universidad Politécnica de Madrid, © 2016 Carlos A. Iglesias"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## [Introduction to Machine Learning](2_0_0_Intro_ML.ipynb)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Introduction to Machine Learning\n",
"\n",
"This lecture provides a quick introduction to Machine Learning in Python using the Iris dataset as an example. In this session we will focus on applying multiclass classification algorithms.\n",
"\n",
"The main objectives of this session are:\n",
"\n",
"* Learn to use scikit-learn\n",
"* Learn the basic steps to apply machine learning techniques: dataset analysis, load, preprocessing, training, validation, optimization and persistence.\n",
"* Learn how to do a exploratory data analysis\n",
"* Learn how to visualise a dataset\n",
"* Learn how to load a bundled dataset\n",
"* Learn how to separate the dataset into traning and testing datasets\n",
"* Learn how to train a classifier\n",
"* Learn how to predict with a trained classifier\n",
"* Learn how to evaluate the predictions\n",
"* Learn how to optimize the configuration of a classifier\n",
"* Learn how to save a model\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## References"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"* [Scikit-learn web page](http://scikit-learn.org/stable/)\n",
"* [Scikit-learn videos](http://blog.kaggle.com/author/kevin-markham/) and [notebooks](https://github.com/justmarkham/scikit-learn-videos) by Kevin Marham\n",
"* [Learning scikit-learn: Machine Learning in Python](http://proquest.safaribooksonline.com/book/programming/python/9781783281930/1dot-machine-learning-a-gentle-introduction/ch01s02_html), Raúl Garreta; Guillermo Moncecchi, Packt Publishing, 2013.\n",
"* [Python Machine Learning](http://proquest.safaribooksonline.com/book/programming/python/9781783555130), Sebastian Raschka, Packt Publishing, 2015."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## LIcence\n",
"The notebook is freely licensed under under the [Creative Commons Attribution Share-Alike license](https://creativecommons.org/licenses/by/2.0/). \n",
"\n",
"© 2016 Carlos A. Iglesias, Universidad Politécnica de Madrid."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.5.1"
}
},
"nbformat": 4,
"nbformat_minor": 0
}

@@ -0,0 +1,184 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![](files/images/EscUpmPolit_p.gif \"UPM\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Course Notes for Learning Intelligent Systems"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Department of Telematic Engineering Systems, Universidad Politécnica de Madrid, © 2016 Carlos A. Iglesias"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## [Introduction to Machine Learning](2_0_0_Intro_ML.ipynb)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Table of Contents\n",
"* [Introduction to scikit-learn](#Introduction-to-scikit-learn)\n",
"* [What is scikit-learn?](#What-is-scikit-learn?)\n",
"* [Problems that scikit-learn can solve](#Problems-that-scikit-learn-can-solve)\n",
"* [Helpers for Machine Learning](#Helpers-for-Machine-Learning)\n",
"* [How to install scikit-learn](#How-to-install-scikit-learn)\n",
"* [References](#References)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Introduction to scikit-learn"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This lecture provides a quick introduction to [scikit-learn](http://scikit-learn.org/stable/), a Python library for machine learning."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## What is scikit-learn?"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Scikit-Learn is a Python library that provides a wealth of machine learning algorithms. \n",
"\n",
"The library is built upon SciPy (Scientific Python) that should be installed before using scikit-learn.\n",
"\n",
"In particular, scikit-learn uses:\n",
"* **NumPy**: package for managing n-dimensional arrays (http://www.numpy.org/)\n",
"* **pandas**: data analysis toolkit (http://pandas.pydata.org/pandas-docs/stable/index.html)"
]
},
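{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal sketch of how these pieces fit together, the following cell builds a tiny feature matrix as a NumPy array and wraps it in a pandas DataFrame for analysis (the data values and column names are merely illustrative)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Sketch: NumPy holds the raw feature matrix, pandas wraps it for analysis\n",
"import numpy as np\n",
"import pandas as pd\n",
"\n",
"X = np.array([[5.1, 3.5], [4.9, 3.0]])  # 2 samples x 2 features (illustrative values)\n",
"df = pd.DataFrame(X, columns=['sepal length (cm)', 'sepal width (cm)'])\n",
"print(df.describe())"
]
},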
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Problems that scikit-learn can solve"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Scikit-learn provides algorithms for solving the following problems:\n",
"* **Classification**: Identifying to which category an object belongs to. Some of the available [classification algorithms](http://scikit-learn.org/stable/supervised_learning.html#supervised-learning) are decision trees (ID3, kNN, ...), SVM, Random forest, Perceptron, etc. \n",
"* **Clustering**: Automatic grouping of similar objects into sets. Some of the available [clustering algorithms](http://scikit-learn.org/stable/modules/clustering.html#clustering) are k-Means, Affinity propagation, etc.\n",
"* **Regression**: Predicting a continuous-valued attribute associated with an object. Some of the available [regression algorithms](http://scikit-learn.org/stable/supervised_learning.html#supervised-learning) are linear regression, logistic regression, etc.\n",
"* ** Dimensionality reduction**: Reducing the number of random variables to consider. Some of the available [dimensionality reduction algorithms](http://scikit-learn.org/stable/modules/decomposition.html#decompositions) are SVD, PCA, etc."
]
},
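{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a small taste of the unsupervised side (clustering is not covered later in this session), the sketch below groups the bundled Iris samples into three clusters with k-Means. The resulting cluster indices are arbitrary and need not match the real classes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Sketch: unsupervised clustering of the bundled Iris data with k-Means\n",
"from sklearn import datasets\n",
"from sklearn.cluster import KMeans\n",
"\n",
"iris = datasets.load_iris()\n",
"kmeans = KMeans(n_clusters=3, random_state=0)\n",
"labels = kmeans.fit_predict(iris.data)  # one cluster index per sample\n",
"print(labels[:10])"
]
},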
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Helpers for Machine Learning"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In addition, scikit-learn helps in several tasks:\n",
"* ** Model selection**: Comparing, validating, choosing parameters and models, and persisting models. Some of the [available functionalities](http://scikit-learn.org/stable/model_selection.html#model-selection) are cross-validation or grid search for optimizing the parameters. \n",
"* ** Preprocessing**: Several common utility functions and transformer classes to change raw feature vectors into a representation that is more suitable for the downstream estimators. Some of the available [preprocessing functions](http://scikit-learn.org/stable/modules/preprocessing.html#preprocessing) are scaling and normalizing data, or imputing missing values."
]
},
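{
"cell_type": "markdown",
"metadata": {},
"source": [
"The sketch below illustrates one of these helpers, cross-validation, on the bundled Iris data. It assumes the `sklearn.cross_validation` module used throughout these notes (renamed `model_selection` in later releases)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Sketch: 5-fold cross-validation of a kNN classifier on Iris\n",
"from sklearn import datasets\n",
"from sklearn.cross_validation import cross_val_score  # model_selection in newer versions\n",
"from sklearn.neighbors import KNeighborsClassifier\n",
"\n",
"iris = datasets.load_iris()\n",
"scores = cross_val_score(KNeighborsClassifier(), iris.data, iris.target, cv=5)\n",
"print(scores, scores.mean())"
]
},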
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## How to install scikit-learn"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you installed the conda distribution, scikit-learn is already installed! This is the best option.\n",
"\n",
"In case it is an old installation, you can updated it using conda: `conda update scikit-learn`.\n",
"\n",
"If it is not installed, install it with conda: `conda install scikit-learn`.\n",
"\n",
"If you have installed scipy and numpy, you can also installed using pip: `pip install -U scikit-learn`.\n",
"\n",
"It is not recommended to use pip for installing scipy and numpy. Instead, use conda or install the linux package *python-sklearn*."
]
},
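{
"cell_type": "markdown",
"metadata": {},
"source": [
"Whatever the installation method, the cell below verifies it by importing the stack and printing the installed versions (the output depends on your environment)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Verify the installation; the versions will vary with your environment\n",
"import numpy\n",
"import scipy\n",
"import sklearn\n",
"print('numpy', numpy.__version__)\n",
"print('scipy', scipy.__version__)\n",
"print('scikit-learn', sklearn.__version__)"
]
},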
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## References\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"* [Scikit-learn site](http://scikit-learn.org/stable/index.html)\n",
"* [How to install Scikit-learn](http://scikit-learn.org/stable/install.html/)\n",
"* [An introduction to NumPy and Scipy](http://www.engr.ucsb.edu/~shell/che210d/numpy.pdf)\n",
"* [NumPy tutorial](https://docs.scipy.org/doc/numpy-dev/user/quickstart.html)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Licence\n",
"The notebook is freely licensed under under the [Creative Commons Attribution Share-Alike license](https://creativecommons.org/licenses/by/2.0/). \n",
"\n",
"© 2016 Carlos A. Iglesias, Universidad Politécnica de Madrid."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.5.1"
}
},
"nbformat": 4,
"nbformat_minor": 0
}

@@ -0,0 +1,633 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![](files/images/EscUpmPolit_p.gif \"UPM\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Course Notes for Learning Intelligent Systems"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Department of Telematic Engineering Systems, Universidad Politécnica de Madrid, © 2016 Carlos A. Iglesias"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## [Introduction to Machine Learning](2_0_0_Intro_ML.ipynb)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Table of Contents\n",
"* [Reading Data](#Reading-Data)\n",
"* [Iris flower dataset](#Iris-flower-dataset)\n",
"* [References](#References)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Reading Data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The goal of this notebook is to learn how to read and load a sample dataset.\n",
"\n",
"Scikit-learn come with some bundled [datasets](http://scikit-learn.org/stable/datasets/): iris, digits, boston, etc.\n",
"\n",
"In this notebook we are going to use the Iris dataset."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Iris flower dataset"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The [Iris flower dataset](https://en.wikipedia.org/wiki/Iris_flower_data_set), available at [UCI dataset repository](https://archive.ics.uci.edu/ml/datasets/Iris), is a classic dataset for classification.\n",
"\n",
"The dataset consists of 50 samples from each of three species of Iris (Iris setosa, Iris virginica and Iris versicolor). Four features were measured from each sample: the length and the width of the sepals and petals, in centimetres. Based on the combination of these four features.\n",
"\n",
"![Iris](files/images/iris-dataset.jpg)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In ordert to read the dataset, we import the bundle datasets and then load the Iris dataset. "
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# import datasets from scikit-learn\n",
"from sklearn import datasets\n",
"\n",
"# load iris dataset\n",
"iris = datasets.load_iris()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A dataset is a dictionary-like object that holds all the data and some metadata about the data. This data is stored in the `.data` member, which is a 2D (`n_samples`, `n_features`) array. In the case of supervised problem, one or more response variables are stored in the `.target` member."
]
},
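{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since the dataset object is dictionary-like, we can list the fields it exposes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# List the fields of the dictionary-like dataset object\n",
"print(iris.keys())"
]
},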
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
"sklearn.datasets.base.Bunch"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"#type 'bunch' of a dataset\n",
"type(iris)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Iris Plants Database\n",
"\n",
"Notes\n",
"-----\n",
"Data Set Characteristics:\n",
" :Number of Instances: 150 (50 in each of three classes)\n",
" :Number of Attributes: 4 numeric, predictive attributes and the class\n",
" :Attribute Information:\n",
" - sepal length in cm\n",
" - sepal width in cm\n",
" - petal length in cm\n",
" - petal width in cm\n",
" - class:\n",
" - Iris-Setosa\n",
" - Iris-Versicolour\n",
" - Iris-Virginica\n",
" :Summary Statistics:\n",
"\n",
" ============== ==== ==== ======= ===== ====================\n",
" Min Max Mean SD Class Correlation\n",
" ============== ==== ==== ======= ===== ====================\n",
" sepal length: 4.3 7.9 5.84 0.83 0.7826\n",
" sepal width: 2.0 4.4 3.05 0.43 -0.4194\n",
" petal length: 1.0 6.9 3.76 1.76 0.9490 (high!)\n",
" petal width: 0.1 2.5 1.20 0.76 0.9565 (high!)\n",
" ============== ==== ==== ======= ===== ====================\n",
"\n",
" :Missing Attribute Values: None\n",
" :Class Distribution: 33.3% for each of 3 classes.\n",
" :Creator: R.A. Fisher\n",
" :Donor: Michael Marshall (MARSHALL%PLU@io.arc.nasa.gov)\n",
" :Date: July, 1988\n",
"\n",
"This is a copy of UCI ML iris datasets.\n",
"http://archive.ics.uci.edu/ml/datasets/Iris\n",
"\n",
"The famous Iris database, first used by Sir R.A Fisher\n",
"\n",
"This is perhaps the best known database to be found in the\n",
"pattern recognition literature. Fisher's paper is a classic in the field and\n",
"is referenced frequently to this day. (See Duda & Hart, for example.) The\n",
"data set contains 3 classes of 50 instances each, where each class refers to a\n",
"type of iris plant. One class is linearly separable from the other 2; the\n",
"latter are NOT linearly separable from each other.\n",
"\n",
"References\n",
"----------\n",
" - Fisher,R.A. \"The use of multiple measurements in taxonomic problems\"\n",
" Annual Eugenics, 7, Part II, 179-188 (1936); also in \"Contributions to\n",
" Mathematical Statistics\" (John Wiley, NY, 1950).\n",
" - Duda,R.O., & Hart,P.E. (1973) Pattern Classification and Scene Analysis.\n",
" (Q327.D83) John Wiley & Sons. ISBN 0-471-22361-1. See page 218.\n",
" - Dasarathy, B.V. (1980) \"Nosing Around the Neighborhood: A New System\n",
" Structure and Classification Rule for Recognition in Partially Exposed\n",
" Environments\". IEEE Transactions on Pattern Analysis and Machine\n",
" Intelligence, Vol. PAMI-2, No. 1, 67-71.\n",
" - Gates, G.W. (1972) \"The Reduced Nearest Neighbor Rule\". IEEE Transactions\n",
" on Information Theory, May 1972, 431-433.\n",
" - See also: 1988 MLC Proceedings, 54-64. Cheeseman et al\"s AUTOCLASS II\n",
" conceptual clustering system finds 3 classes in the data.\n",
" - Many, many more ...\n",
"\n"
]
}
],
"source": [
"# print descrition of the dataset\n",
"print (iris.DESCR)"
]
},
{
"cell_type": "code",
"execution_count": 35,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)']\n"
]
}
],
"source": [
"# names of the features (attributes of the entities)\n",
"print(iris.feature_names)"
]
},
{
"cell_type": "code",
"execution_count": 36,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"['setosa' 'versicolor' 'virginica']\n"
]
}
],
"source": [
"#names of the targets(classes of the classifier)\n",
"print(iris.target_names)"
]
},
{
"cell_type": "code",
"execution_count": 33,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
"numpy.ndarray"
]
},
"execution_count": 33,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"#type numpy array\n",
"type(iris.data)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we are going to inspect the dataset. You can consult the NumPy tutorial listed in the references."
]
},
{
"cell_type": "code",
"execution_count": 37,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[[ 5.1 3.5 1.4 0.2]\n",
" [ 4.9 3. 1.4 0.2]\n",
" [ 4.7 3.2 1.3 0.2]\n",
" [ 4.6 3.1 1.5 0.2]\n",
" [ 5. 3.6 1.4 0.2]\n",
" [ 5.4 3.9 1.7 0.4]\n",
" [ 4.6 3.4 1.4 0.3]\n",
" [ 5. 3.4 1.5 0.2]\n",
" [ 4.4 2.9 1.4 0.2]\n",
" [ 4.9 3.1 1.5 0.1]\n",
" [ 5.4 3.7 1.5 0.2]\n",
" [ 4.8 3.4 1.6 0.2]\n",
" [ 4.8 3. 1.4 0.1]\n",
" [ 4.3 3. 1.1 0.1]\n",
" [ 5.8 4. 1.2 0.2]\n",
" [ 5.7 4.4 1.5 0.4]\n",
" [ 5.4 3.9 1.3 0.4]\n",
" [ 5.1 3.5 1.4 0.3]\n",
" [ 5.7 3.8 1.7 0.3]\n",
" [ 5.1 3.8 1.5 0.3]\n",
" [ 5.4 3.4 1.7 0.2]\n",
" [ 5.1 3.7 1.5 0.4]\n",
" [ 4.6 3.6 1. 0.2]\n",
" [ 5.1 3.3 1.7 0.5]\n",
" [ 4.8 3.4 1.9 0.2]\n",
" [ 5. 3. 1.6 0.2]\n",
" [ 5. 3.4 1.6 0.4]\n",
" [ 5.2 3.5 1.5 0.2]\n",
" [ 5.2 3.4 1.4 0.2]\n",
" [ 4.7 3.2 1.6 0.2]\n",
" [ 4.8 3.1 1.6 0.2]\n",
" [ 5.4 3.4 1.5 0.4]\n",
" [ 5.2 4.1 1.5 0.1]\n",
" [ 5.5 4.2 1.4 0.2]\n",
" [ 4.9 3.1 1.5 0.1]\n",
" [ 5. 3.2 1.2 0.2]\n",
" [ 5.5 3.5 1.3 0.2]\n",
" [ 4.9 3.1 1.5 0.1]\n",
" [ 4.4 3. 1.3 0.2]\n",
" [ 5.1 3.4 1.5 0.2]\n",
" [ 5. 3.5 1.3 0.3]\n",
" [ 4.5 2.3 1.3 0.3]\n",
" [ 4.4 3.2 1.3 0.2]\n",
" [ 5. 3.5 1.6 0.6]\n",
" [ 5.1 3.8 1.9 0.4]\n",
" [ 4.8 3. 1.4 0.3]\n",
" [ 5.1 3.8 1.6 0.2]\n",
" [ 4.6 3.2 1.4 0.2]\n",
" [ 5.3 3.7 1.5 0.2]\n",
" [ 5. 3.3 1.4 0.2]\n",
" [ 7. 3.2 4.7 1.4]\n",
" [ 6.4 3.2 4.5 1.5]\n",
" [ 6.9 3.1 4.9 1.5]\n",
" [ 5.5 2.3 4. 1.3]\n",
" [ 6.5 2.8 4.6 1.5]\n",
" [ 5.7 2.8 4.5 1.3]\n",
" [ 6.3 3.3 4.7 1.6]\n",
" [ 4.9 2.4 3.3 1. ]\n",
" [ 6.6 2.9 4.6 1.3]\n",
" [ 5.2 2.7 3.9 1.4]\n",
" [ 5. 2. 3.5 1. ]\n",
" [ 5.9 3. 4.2 1.5]\n",
" [ 6. 2.2 4. 1. ]\n",
" [ 6.1 2.9 4.7 1.4]\n",
" [ 5.6 2.9 3.6 1.3]\n",
" [ 6.7 3.1 4.4 1.4]\n",
" [ 5.6 3. 4.5 1.5]\n",
" [ 5.8 2.7 4.1 1. ]\n",
" [ 6.2 2.2 4.5 1.5]\n",
" [ 5.6 2.5 3.9 1.1]\n",
" [ 5.9 3.2 4.8 1.8]\n",
" [ 6.1 2.8 4. 1.3]\n",
" [ 6.3 2.5 4.9 1.5]\n",
" [ 6.1 2.8 4.7 1.2]\n",
" [ 6.4 2.9 4.3 1.3]\n",
" [ 6.6 3. 4.4 1.4]\n",
" [ 6.8 2.8 4.8 1.4]\n",
" [ 6.7 3. 5. 1.7]\n",
" [ 6. 2.9 4.5 1.5]\n",
" [ 5.7 2.6 3.5 1. ]\n",
" [ 5.5 2.4 3.8 1.1]\n",
" [ 5.5 2.4 3.7 1. ]\n",
" [ 5.8 2.7 3.9 1.2]\n",
" [ 6. 2.7 5.1 1.6]\n",
" [ 5.4 3. 4.5 1.5]\n",
" [ 6. 3.4 4.5 1.6]\n",
" [ 6.7 3.1 4.7 1.5]\n",
" [ 6.3 2.3 4.4 1.3]\n",
" [ 5.6 3. 4.1 1.3]\n",
" [ 5.5 2.5 4. 1.3]\n",
" [ 5.5 2.6 4.4 1.2]\n",
" [ 6.1 3. 4.6 1.4]\n",
" [ 5.8 2.6 4. 1.2]\n",
" [ 5. 2.3 3.3 1. ]\n",
" [ 5.6 2.7 4.2 1.3]\n",
" [ 5.7 3. 4.2 1.2]\n",
" [ 5.7 2.9 4.2 1.3]\n",
" [ 6.2 2.9 4.3 1.3]\n",
" [ 5.1 2.5 3. 1.1]\n",
" [ 5.7 2.8 4.1 1.3]\n",
" [ 6.3 3.3 6. 2.5]\n",
" [ 5.8 2.7 5.1 1.9]\n",
" [ 7.1 3. 5.9 2.1]\n",
" [ 6.3 2.9 5.6 1.8]\n",
" [ 6.5 3. 5.8 2.2]\n",
" [ 7.6 3. 6.6 2.1]\n",
" [ 4.9 2.5 4.5 1.7]\n",
" [ 7.3 2.9 6.3 1.8]\n",
" [ 6.7 2.5 5.8 1.8]\n",
" [ 7.2 3.6 6.1 2.5]\n",
" [ 6.5 3.2 5.1 2. ]\n",
" [ 6.4 2.7 5.3 1.9]\n",
" [ 6.8 3. 5.5 2.1]\n",
" [ 5.7 2.5 5. 2. ]\n",
" [ 5.8 2.8 5.1 2.4]\n",
" [ 6.4 3.2 5.3 2.3]\n",
" [ 6.5 3. 5.5 1.8]\n",
" [ 7.7 3.8 6.7 2.2]\n",
" [ 7.7 2.6 6.9 2.3]\n",
" [ 6. 2.2 5. 1.5]\n",
" [ 6.9 3.2 5.7 2.3]\n",
" [ 5.6 2.8 4.9 2. ]\n",
" [ 7.7 2.8 6.7 2. ]\n",
" [ 6.3 2.7 4.9 1.8]\n",
" [ 6.7 3.3 5.7 2.1]\n",
" [ 7.2 3.2 6. 1.8]\n",
" [ 6.2 2.8 4.8 1.8]\n",
" [ 6.1 3. 4.9 1.8]\n",
" [ 6.4 2.8 5.6 2.1]\n",
" [ 7.2 3. 5.8 1.6]\n",
" [ 7.4 2.8 6.1 1.9]\n",
" [ 7.9 3.8 6.4 2. ]\n",
" [ 6.4 2.8 5.6 2.2]\n",
" [ 6.3 2.8 5.1 1.5]\n",
" [ 6.1 2.6 5.6 1.4]\n",
" [ 7.7 3. 6.1 2.3]\n",
" [ 6.3 3.4 5.6 2.4]\n",
" [ 6.4 3.1 5.5 1.8]\n",
" [ 6. 3. 4.8 1.8]\n",
" [ 6.9 3.1 5.4 2.1]\n",
" [ 6.7 3.1 5.6 2.4]\n",
" [ 6.9 3.1 5.1 2.3]\n",
" [ 5.8 2.7 5.1 1.9]\n",
" [ 6.8 3.2 5.9 2.3]\n",
" [ 6.7 3.3 5.7 2.5]\n",
" [ 6.7 3. 5.2 2.3]\n",
" [ 6.3 2.5 5. 1.9]\n",
" [ 6.5 3. 5.2 2. ]\n",
" [ 6.2 3.4 5.4 2.3]\n",
" [ 5.9 3. 5.1 1.8]]\n"
]
}
],
"source": [
"#Data in the iris dataset. The value of the features of the samples.\n",
"print(iris.data)"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
" 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n",
" 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2\n",
" 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2\n",
" 2 2]\n"
]
}
],
"source": [
"# Target. Category of every sample\n",
"print(iris.target)"
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"(150, 4)\n"
]
}
],
"source": [
"# Iris data is a numpy array\n",
"# We can inspect its shape (rows, columns). In our case, (n_samples, n_features)\n",
"print(iris.data.shape)"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"2\n"
]
}
],
"source": [
"#Using numpy, I can print the dimensions (here we are working with 2D matriz)\n",
"print(iris.data.ndim)"
]
},
{
"cell_type": "code",
"execution_count": 27,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"150\n"
]
}
],
"source": [
"# I can print n_samples\n",
"print(iris.data.shape[0])"
]
},
{
"cell_type": "code",
"execution_count": 28,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"4\n"
]
}
],
"source": [
"# ... n_features\n",
"print(iris.data.shape[1])"
]
},
{
"cell_type": "code",
"execution_count": 31,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)']\n"
]
}
],
"source": [
"# names of the features\n",
"print(iris.feature_names)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In another session, we will learn how to load a dataset from a file (csv, excel, ...). We will use the library pandas for this purpose."
]
},
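{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a preview sketch, the cell below writes the Iris data to a CSV file and reads it back with pandas; the file name *iris.csv* is merely illustrative."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Preview sketch: round-trip the Iris data through a CSV file with pandas\n",
"# ('iris.csv' is just an illustrative file name)\n",
"import pandas as pd\n",
"\n",
"df = pd.DataFrame(iris.data, columns=iris.feature_names)\n",
"df.to_csv('iris.csv', index=False)  # write a CSV copy\n",
"print(pd.read_csv('iris.csv').head())  # ... and read it back"
]
},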
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## References"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"* [Iris flower data set](https://en.wikipedia.org/wiki/Iris_flower_data_set)\n",
"* [How to load an example dataset with scikit-learn](http://scikit-learn.org/stable/tutorial/basic/tutorial.html#loading-example-dataset)\n",
"* [Dataset loading utilities in scikit-learn](http://scikit-learn.org/stable/datasets/)\n",
"* [How to plot the Iris dataset](http://scikit-learn.org/stable/auto_examples/datasets/plot_iris_dataset.html)\n",
"* [An introduction to NumPy and Scipy](http://www.engr.ucsb.edu/~shell/che210d/numpy.pdf)\n",
"* [NumPy tutorial](https://docs.scipy.org/doc/numpy-dev/user/quickstart.html)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Licence\n",
"\n",
"The notebook is freely licensed under under the [Creative Commons Attribution Share-Alike license](https://creativecommons.org/licenses/by/2.0/). \n",
"\n",
"© 2016 Carlos A. Iglesias, Universidad Politécnica de Madrid."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.5.1"
}
},
"nbformat": 4,
"nbformat_minor": 0
}

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

@@ -0,0 +1,314 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![](files/images/EscUpmPolit_p.gif \"UPM\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Course Notes for Learning Intelligent Systems"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Department of Telematic Engineering Systems, Universidad Politécnica de Madrid, © 2016 Carlos A. Iglesias"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## [Introduction to Machine Learning](2_0_0_Intro_ML.ipynb)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Table of Contents\n",
"* [Preprocessing](#Preprocessing)\n",
"* [Training set and Test set](#Training-set-and-Test-set)\n",
"* [Preprocessing](#Preprocessing)\n",
"* [References](#References)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Preprocessing"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The goal of this notebook is to learn how separate the dataset into training and test datasets and then preprocess the data."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"from sklearn import datasets\n",
"iris = datasets.load_iris()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Training set and Test set"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A common practice in machine learning to evaluate an algorithm is to split the data at hand into two sets, one that we call the **training set** on which we learn data properties and one that we call the **testing set** on which we test these properties. \n",
"\n",
"We are going to use *scikit-learn* to split the data into random training and testing sets. We follow the ration 75% for training and 25% for testing. We use `random_state` to ensure that the result is always the same and it is reproducible. (Otherwise, we would get different training and testing sets every time)."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from sklearn.cross_validation import train_test_split\n",
"x_iris, y_iris = iris.data, iris.target\n",
"# Test set will be the 25% taken randomly\n",
"x_train, x_test, y_train, y_test = train_test_split(x_iris, y_iris, test_size=0.25, random_state=33)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"(112, 4) (38, 4)\n"
]
}
],
"source": [
"# Dimensions of train and testing\n",
"print(x_train.shape, x_test.shape)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[[ 5.7 2.9 4.2 1.3]\n",
" [ 6.7 3.1 4.4 1.4]\n",
" [ 4.7 3.2 1.6 0.2]\n",
" [ 6.5 2.8 4.6 1.5]\n",
" [ 6.1 2.6 5.6 1.4]\n",
" [ 6.3 3.3 6. 2.5]\n",
" [ 4.8 3.4 1.9 0.2]\n",
" [ 5.1 3.5 1.4 0.3]\n",
" [ 6.4 3.1 5.5 1.8]\n",
" [ 6.9 3.2 5.7 2.3]\n",
" [ 6.8 3.2 5.9 2.3]\n",
" [ 4.4 3. 1.3 0.2]\n",
" [ 6.3 3.4 5.6 2.4]\n",
" [ 6.1 2.9 4.7 1.4]\n",
" [ 6.9 3.1 5.1 2.3]\n",
" [ 6.4 2.9 4.3 1.3]\n",
" [ 6. 3. 4.8 1.8]\n",
" [ 5.2 3.5 1.5 0.2]\n",
" [ 6.3 3.3 4.7 1.6]\n",
" [ 7.2 3.2 6. 1.8]\n",
" [ 4.9 3.1 1.5 0.1]\n",
" [ 5.7 3.8 1.7 0.3]\n",
" [ 6.5 3. 5.8 2.2]\n",
" [ 4.8 3. 1.4 0.1]\n",
" [ 6. 2.2 5. 1.5]\n",
" [ 6.2 2.8 4.8 1.8]\n",
" [ 6.1 3. 4.6 1.4]\n",
" [ 6.1 2.8 4. 1.3]\n",
" [ 6.5 3. 5.2 2. ]\n",
" [ 5.9 3. 5.1 1.8]\n",
" [ 5.6 2.7 4.2 1.3]\n",
" [ 6.7 3.1 4.7 1.5]\n",
" [ 5.6 2.8 4.9 2. ]\n",
" [ 6.4 3.2 5.3 2.3]\n",
" [ 6.7 3.1 5.6 2.4]\n",
" [ 6.7 3. 5.2 2.3]\n",
" [ 5.8 2.7 5.1 1.9]\n",
" [ 5.7 3. 4.2 1.2]]\n"
]
}
],
"source": [
"#Test set\n",
"print (x_test)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Preprocessing"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Standardization of datasets is a common requirement for many machine learning estimators implemented in the scikit; they might behave badly if the individual features do not more or less look like standard normally distributed data: Gaussian with zero mean and unit variance.\n",
"\n",
"The preprocessing module further provides a utility class `StandardScaler` to compute the mean and standard deviation on a training set so as to be able to later reapply the same transformation on the testing set."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Standardize the features\n",
"from sklearn import preprocessing\n",
"scaler = preprocessing.StandardScaler().fit(x_train)\n",
"x_train = scaler.transform(x_train)\n",
"x_test = scaler.transform(x_test)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[[-0.09752318 -0.32858743 0.34599443 0.25682671]\n",
" [ 1.06445511 0.09442168 0.45718919 0.39124069]\n",
" [-1.25950146 0.30592623 -1.09953753 -1.22172707]\n",
" [ 0.83205945 -0.54009199 0.56838396 0.52565467]\n",
" [ 0.36726814 -0.9631011 1.12435779 0.39124069]\n",
" [ 0.59966379 0.51743079 1.34674732 1.86979447]\n",
" [-1.14330363 0.72893534 -0.93274538 -1.22172707]\n",
" [-0.79471015 0.9404399 -1.2107323 -1.08731309]\n",
" [ 0.71586162 0.09442168 1.06876041 0.92889661]\n",
" [ 1.29685076 0.30592623 1.17995517 1.60096651]\n",
" [ 1.18065293 0.30592623 1.29114994 1.60096651]\n",
" [-1.60809495 -0.11708288 -1.26632968 -1.22172707]\n",
" [ 0.59966379 0.72893534 1.12435779 1.73538049]\n",
" [ 0.36726814 -0.32858743 0.62398134 0.39124069]\n",
" [ 1.29685076 0.09442168 0.84637087 1.60096651]\n",
" [ 0.71586162 -0.32858743 0.40159181 0.25682671]\n",
" [ 0.25107031 -0.11708288 0.67957873 0.92889661]\n",
" [-0.67851232 0.9404399 -1.15513491 -1.22172707]\n",
" [ 0.59966379 0.51743079 0.62398134 0.66006865]\n",
" [ 1.64544425 0.30592623 1.34674732 0.92889661]\n",
" [-1.0271058 0.09442168 -1.15513491 -1.35614105]\n",
" [-0.09752318 1.57495356 -1.04394015 -1.08731309]\n",
" [ 0.83205945 -0.11708288 1.23555256 1.46655253]\n",
" [-1.14330363 -0.11708288 -1.2107323 -1.35614105]\n",
" [ 0.25107031 -1.80911932 0.79077349 0.52565467]\n",
" [ 0.48346596 -0.54009199 0.67957873 0.92889661]\n",
" [ 0.36726814 -0.11708288 0.56838396 0.39124069]\n",
" [ 0.36726814 -0.54009199 0.23479966 0.25682671]\n",
" [ 0.83205945 -0.11708288 0.90196826 1.19772457]\n",
" [ 0.13487248 -0.11708288 0.84637087 0.92889661]\n",
" [-0.21372101 -0.75159654 0.34599443 0.25682671]\n",
" [ 1.06445511 0.09442168 0.62398134 0.52565467]\n",
" [-0.21372101 -0.54009199 0.73517611 1.19772457]\n",
" [ 0.71586162 0.30592623 0.95756564 1.60096651]\n",
" [ 1.06445511 0.09442168 1.12435779 1.73538049]\n",
" [ 1.06445511 -0.11708288 0.90196826 1.60096651]\n",
" [ 0.01867465 -0.75159654 0.84637087 1.06331059]\n",
" [-0.09752318 -0.11708288 0.34599443 0.12241273]]\n"
]
}
],
"source": [
"# As we see, the iris dataset is now normalized\n",
"print(x_test)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## References"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"* [Feature selection](http://scikit-learn.org/stable/modules/feature_selection.html)\n",
"* [Classification probability](http://scikit-learn.org/stable/auto_examples/classification/plot_classification_probability.html)\n",
"* [Mastering Pandas](http://proquest.safaribooksonline.com/book/programming/python/9781783981960), Femi Anthony, Packt Publishing, 2015.\n",
"* [Matplotlib web page](http://matplotlib.org/index.html)\n",
"* [Using matlibplot in IPython](http://ipython.readthedocs.org/en/stable/interactive/plotting.html)\n",
"* [Seaborn Tutorial](https://stanford.edu/~mwaskom/software/seaborn/tutorial.html)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Licences\n",
"The notebook is freely licensed under under the [Creative Commons Attribution Share-Alike license](https://creativecommons.org/licenses/by/2.0/). \n",
"\n",
"© 2016 Carlos A. Iglesias, Universidad Politécnica de Madrid."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.5.1"
}
},
"nbformat": 4,
"nbformat_minor": 0
}

@@ -0,0 +1,201 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![](files/images/EscUpmPolit_p.gif \"UPM\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Course Notes for Learning Intelligent Systems"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Department of Telematic Engineering Systems, Universidad Politécnica de Madrid, © 2016 Carlos A. Iglesias"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## [Introduction to Machine Learning](2_0_0_Intro_ML.ipynb)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Table of Contents\n",
"\n",
"* [Machine Learning](#Machine-Learning)\n",
"* [Machine learning algorithms](#Machine-learning-algorithms)\n",
"\t\t* [Supervised machine learning model](#Supervised-machine-learning-model)\n",
"\t\t* [Unsupervised machine learning model](#Unsupervised-machine-learning-model)\n",
"* [sklearn interface](#sklearn-interface)\n",
"* [References](#References)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Machine Learning"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This is an introduction of general ideas about machine learning and the general interface of scikit-learn, taken from the [scikit-learn tutorial](http://www.astroml.org/sklearn_tutorial/general_concepts.html). \n",
"\n",
"You can skip it during the lab session and read it later,"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Machine learning algorithms"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Machine learning algorithms are programs that learn a model from a dataset with the aim of making predictions or learning structures to organize the data.\n",
"\n",
"In scikit-learn, machine learning algorithms take as an input a *numpy* array (n_samples, n_features), where\n",
"* **n_samples**: number of samples. Each sample is an item to process (i.e. classify). A sample can be a document, a picture, a sound, a video, a row in database or CSV file, or whatever you can describe with a fixed set of quantitative traits.\n",
"* **n_features**: The number of features or distinct traits that can be used to describe each item in a quantitative manner.\n",
"\n",
"The number of features should be defined in advanced and it can be very high dimensional (e.g. millions of features) with most of them being zeros for a given sample. In this case we may use (scipy.sparse) sparse matrices instead of (numpy) arrays so as to make the data fit in memory.\n",
"\n",
"The first step in machine learning is **identifying the relevant features** from the input data, and the second step is **extracting the features** from the input data. \n",
"\n",
"[Machine learning algorithms](http://machinelearningmastery.com/a-tour-of-machine-learning-algorithms/) can be classified according to learning style into:\n",
"* **Supervised learning**: input data (training dataset) has a known label or result. Example problems are classification and regression. A model is prepared through a training process where it is required to make predictions and is corrected when those predictions are wrong. The training process continues until the model achieves a desired level of accuracy on the training data.\n",
"* **Unsupervised learning**: input data is not labeled. A model is prepared by deducing structures present in the input data. This may be to extract general rules. Example problems are clustering, dimensionality reduction and association rule learning.\n",
"* **Semi-supervised learning**:i nput data is a mixture of labeled and unlabeled examples. There is a desired prediction problem but the model must learn the structures to organize the data as well as make predictions. Example problems are classification and regression."
]
},
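{
"cell_type": "markdown",
"metadata": {},
"source": [
"The following cell is a minimal sketch of the (n_samples, n_features) convention, and of holding a mostly-zero feature matrix in a scipy.sparse matrix instead of a dense numpy array."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Sketch: the (n_samples, n_features) convention, dense and sparse\n",
"import numpy as np\n",
"from scipy import sparse\n",
"\n",
"X = np.zeros((3, 1000))  # 3 samples, 1000 features, mostly zeros\n",
"X[0, 5] = 1.0\n",
"X_sparse = sparse.csr_matrix(X)  # stores only the non-zero entries\n",
"print(X.shape, X_sparse.nnz)"
]
},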
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Supervised machine learning model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In *supervised machine learning models*, the machine learning algorithm takes as an input a training dataset, composed of feature vectors and labels, and produces a predictive model which is used for make prediction on new data.\n",
"![](files/images/plot_ML_flow_chart_1.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Unsupervised machine learning model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In *unsupervised machine learning models*, the machine learning model algorithm takes as an input the feature vectors and produces a predictive model that is used to fit its parameters so as to best summarize regularities found in the data.\n",
"![](files/images/plot_ML_flow_chart_3.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## sklearn interface"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"scikit-learn has a uniform interface for all the estimators, some methods are only available is the estimator is supervised or unsupervised:\n",
"\n",
"* Available in *all estimators*:\n",
" * **model.fit()**: fit training data. For supervised learning applications, this accepts two arguments: the data X and the labels y (e.g. model.fit(X, y)). For unsupervised learning applications, this accepts only a single argument, the data X (e.g. model.fit(X)).\n",
"\n",
"* Available in *supervised estimators*:\n",
" * **model.predict()**: given a trained model, predict the label of a new set of data. This method accepts one argument, the new data X_new (e.g. model.predict(X_new)), and returns the learned label for each object in the array.\n",
" * **model.predict_proba()**: For classification problems, some estimators also provide this method, which returns the probability that a new observation has each categorical label. In this case, the label with the highest probability is returned by model.predict().\n",
"\n",
"* Available in *unsupervised estimators*:\n",
" * **model.transform()**: given an unsupervised model, transform new data into the new basis. This also accepts one argument X_new, and returns the new representation of the data based on the unsupervised model.\n",
" * **model.fit_transform()**: some estimators implement this method, which performs a fit and a transform on the same input data.\n",
"\n",
"\n",
"![](files/images/plot_ML_flow_chart_2.png)"
]
},
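{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of this uniform interface on the bundled Iris data, using kNN as a supervised estimator and PCA as an unsupervised one."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Sketch of the uniform estimator interface on Iris\n",
"from sklearn import datasets\n",
"from sklearn.neighbors import KNeighborsClassifier  # supervised\n",
"from sklearn.decomposition import PCA  # unsupervised\n",
"\n",
"iris = datasets.load_iris()\n",
"X, y = iris.data, iris.target\n",
"\n",
"model = KNeighborsClassifier().fit(X, y)  # supervised: fit(X, y)\n",
"print(model.predict(X[:2]))  # predicted labels\n",
"print(model.predict_proba(X[:2]))  # per-class probabilities\n",
"\n",
"pca = PCA(n_components=2)  # unsupervised: fit(X)\n",
"X_new = pca.fit_transform(X)  # fit and transform in one step\n",
"print(X_new.shape)"
]
},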
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"## References"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"* [General concepts of machine learning with scikit-learn](http://www.astroml.org/sklearn_tutorial/general_concepts.html)\n",
"* [A Tour of Machine Learning Algorithms](http://machinelearningmastery.com/a-tour-of-machine-learning-algorithms/)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Licence"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The notebook is freely licensed under under the [Creative Commons Attribution Share-Alike license](https://creativecommons.org/licenses/by/2.0/). \n",
"\n",
"© 2016 Carlos A. Iglesias, Universidad Politécnica de Madrid."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.5.1"
}
},
"nbformat": 4,
"nbformat_minor": 0
}

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large Load Diff

@@ -0,0 +1,204 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![](files/images/EscUpmPolit_p.gif \"UPM\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Course Notes for Learning Intelligent Systems"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Department of Telematic Engineering Systems, Universidad Politécnica de Madrid, © 2016 Carlos A. Iglesias"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## [Introduction to Machine Learning](2_0_0_Intro_ML.ipynb)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Table of Contents\n",
"* [Model Persistence](#Model-Persistence)\n",
"* [References](#References)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Model Persistence"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The goal of this notebook is to learn how to save a model in the the scikit by using Pythons built-in persistence model, namely pickle\n",
"\n",
"First we recap the previous tasks: load data, preprocess and train the model."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
"Pipeline(steps=[('scaler', StandardScaler(copy=True, with_mean=True, with_std=True)), ('KNN', KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski',\n",
" metric_params=None, n_jobs=1, n_neighbors=5, p=2,\n",
" weights='uniform'))])"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# load iris\n",
"from sklearn import datasets\n",
"iris = datasets.load_iris()\n",
"\n",
"# Training and test spliting\n",
"from sklearn.cross_validation import train_test_split\n",
"x_iris, y_iris = iris.data, iris.target\n",
"# Test set will be the 25% taken randomly\n",
"x_train, x_test, y_train, y_test = train_test_split(x_iris, y_iris, test_size=0.25, random_state=33)\n",
"\n",
"# Create the model using the pipeline\n",
"from sklearn.pipeline import Pipeline\n",
"from sklearn.preprocessing import StandardScaler\n",
"from sklearn.neighbors import KNeighborsClassifier\n",
"\n",
"# create a composite estimator made by a pipeline of preprocessing and the KNN model\n",
"model = Pipeline([\n",
" ('scaler', StandardScaler()),\n",
" ('KNN', KNeighborsClassifier())\n",
"])\n",
"\n",
"# Train the model\n",
"model.fit(x_train, y_train) \n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we are going to save the model to a data structure called *pickle*. A pickle is a dictionary and can be used as a file or a string."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
"array([0])"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import pickle\n",
"s = pickle.dumps(model)\n",
"model2 = pickle.loads(s)\n",
"model2.predict(x_iris[0:1])"
]
},
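{
"cell_type": "markdown",
"metadata": {},
"source": [
"Pickle can also serialize the model to a file on disk, as in the following sketch (*model.pkl* is an illustrative file name)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Sketch: the same model serialized to a file ('model.pkl' is illustrative)\n",
"with open('model.pkl', 'wb') as f:\n",
"    pickle.dump(model, f)\n",
"with open('model.pkl', 'rb') as f:\n",
"    model3 = pickle.load(f)\n",
"model3.predict(x_iris[0:1])"
]
},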
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A more efficient alternative to pickle is joblib, especially for big data problems. In this case the model can only be saved to a file and not to a string."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# save model\n",
"from sklearn.externals import joblib\n",
"joblib.dump(model, 'filename.pkl') \n",
"\n",
"#load model\n",
"model2 = joblib.load('filename.pkl') "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## References"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"* [Tutorial scikit-learn](http://scikit-learn.org/stable/tutorial/basic/tutorial.html)\n",
"* [Model persistence in scikit-learn](http://scikit-learn.org/stable/modules/model_persistence.html#model-persistence)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Licence\n",
"The notebook is freely licensed under under the [Creative Commons Attribution Share-Alike license](https://creativecommons.org/licenses/by/2.0/). \n",
"\n",
"© 2016 Carlos A. Iglesias, Universidad Politécnica de Madrid."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.5.1"
}
},
"nbformat": 4,
"nbformat_minor": 0
}

@@ -0,0 +1,130 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![](files/images/EscUpmPolit_p.gif \"UPM\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Course Notes for Learning Intelligent Systems"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Department of Telematic Engineering Systems, Universidad Politécnica de Madrid, © 2016 Carlos A. Iglesias"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## [Introduction to Machine Learning](2_0_0_Intro_ML.ipynb)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Table of Contents\n",
"* [Conclusions](#Conclusions)\n",
"* [References](#References)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Conclusions"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this chapter, we have introduced the essentials of machine learning in a practical way. \n",
"\n",
"We have gone through some of the most interesting features offered by scikit-learn. They essentially concern the machine learning features, and the visualisation features brought by the matplotlib and seaborn libraries. In the following session we will analyse other machine learning algorithms, such as SVM and Perceptron.\n",
"\n",
"Before concluding this session, we include a comparison of the algorithms reviewed in this session on synthetic datasets, based on the sample code of [sklearn](http://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html#example-classification-plot-classifier-comparison-py).\n",
"\n",
"Particularly in high-dimensional spaces, data can more easily be separated linearly and the simplicity of classifiers such as naive Bayes and linear SVMs might lead to better generalization than is achieved by other classifiers.\n",
"\n",
"The plots show training points in solid colors and testing points semi-transparent. The lower right shows the classification accuracy on the test set.\n",
"\n",
"The [DummyClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.dummy.DummyClassifier.html#sklearn.dummy.DummyClassifier) is a classifier that makes predictions using simple rules. It is useful as a simple baseline to compare with other (real) classifiers. \n",
"\n",
"As previosly, we import a function defined in the file [plotml.py](files/plotml.py) using the *magic command* **%run**."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# display plots in the notebook \n",
"#%matplotlib inline\n",
"\n",
"# Run in a separate window to make it bigger\n",
"%matplotlib qt\n",
"%run plotml\n",
"plot_classifiers()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## References"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"* [Classifier comparison¶](http://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html#example-classification-plot-classifier-comparison-py)\n",
"* [DummyClassifier ](http://scikit-learn.org/stable/modules/generated/sklearn.dummy.DummyClassifier.html)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Licence\n",
"The notebook is freely licensed under under the [Creative Commons Attribution Share-Alike license](https://creativecommons.org/licenses/by/2.0/). \n",
"\n",
"© 2016 Carlos A. Iglesias, Universidad Politécnica de Madrid."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.5.1"
}
},
"nbformat": 4,
"nbformat_minor": 0
}

Eight binary image files added under files/images/ (previews not shown).

@@ -0,0 +1,112 @@
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn.cross_validation import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import make_moons, make_circles, make_classification
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.dummy import DummyClassifier
# Taken from http://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html#example-classification-plot-classifier-comparison-py
def plot_classifiers():
    """
    Plot classifiers on synthetic datasets, taken from http://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html
    A comparison of several classifiers in scikit-learn on synthetic datasets. The point of this example is to illustrate the nature of decision boundaries of different classifiers. This should be taken with a grain of salt, as the intuition conveyed by these examples does not necessarily carry over to real datasets.
    Particularly in high-dimensional spaces, data can more easily be separated linearly and the simplicity of classifiers such as naive Bayes and linear SVMs might lead to better generalization than is achieved by other classifiers.
    The plots show training points in solid colors and testing points semi-transparent. The lower right shows the classification accuracy on the test set.
    """
    h = .02  # step size in the mesh
    names = ["DummyClassifier", "Nearest Neighbors", "Decision Tree", "Naive Bayes",
             "Linear SVM", "RBF SVM", "Random Forest"]
    classifiers = [
        DummyClassifier(strategy="prior"),
        KNeighborsClassifier(3),
        DecisionTreeClassifier(max_depth=5),
        GaussianNB(),
        SVC(kernel="linear", C=0.025),
        SVC(gamma=2, C=1),
        RandomForestClassifier(max_depth=5, n_estimators=10, max_features=1)
    ]
    X, y = make_classification(n_features=2, n_redundant=0, n_informative=2,
                               random_state=1, n_clusters_per_class=1)
    rng = np.random.RandomState(2)
    X += 2 * rng.uniform(size=X.shape)
    linearly_separable = (X, y)
    datasets = [make_moons(noise=0.3, random_state=0),
                make_circles(noise=0.2, factor=0.5, random_state=1),
                linearly_separable]
    ds_names = ["Dataset moons", "Dataset circles", "Dataset linearly_separable"]
    figure = plt.figure(figsize=(27, 9))
    i = 1
    # iterate over datasets
    for ds_name, ds in zip(ds_names, datasets):
        # preprocess dataset, split into training and test part
        X, y = ds
        X = StandardScaler().fit_transform(X)
        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.4)
        x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
        y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
        xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
                             np.arange(y_min, y_max, h))
        # just plot the dataset first
        cm = plt.cm.RdBu
        cm_bright = ListedColormap(['#FF0000', '#0000FF'])
        ax = plt.subplot(len(datasets), len(classifiers) + 1, i)
        ax.set_title(ds_name)
        # Plot the training points
        ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright)
        # and testing points
        ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright, alpha=0.6)
        ax.set_xlim(xx.min(), xx.max())
        ax.set_ylim(yy.min(), yy.max())
        ax.set_xticks(())
        ax.set_yticks(())
        i += 1
        # iterate over classifiers
        for name, clf in zip(names, classifiers):
            ax = plt.subplot(len(datasets), len(classifiers) + 1, i)
            clf.fit(X_train, y_train)
            score = clf.score(X_test, y_test)
            # Plot the decision boundary. For that, we will assign a color to each
            # point in the mesh [x_min, x_max]x[y_min, y_max].
            if hasattr(clf, "decision_function"):
                Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
            else:
                Z = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]
            # Put the result into a color plot
            Z = Z.reshape(xx.shape)
            ax.contourf(xx, yy, Z, cmap=cm, alpha=.8)
            # Plot also the training points
            ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright)
            # and testing points
            ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright,
                       alpha=0.6)
            ax.set_xlim(xx.min(), xx.max())
            ax.set_ylim(yy.min(), yy.max())
            ax.set_xticks(())
            ax.set_yticks(())
            ax.set_title(name)
            ax.text(xx.max() - .3, yy.min() + .3, ('%.2f' % score).lstrip('0'),
                    size=15, horizontalalignment='right')
            i += 1
    figure.subplots_adjust(left=.02, right=.98)
    plt.suptitle("Comparison of Classifiers in synthetic datasets", fontsize=18)
    plt.show()

@@ -0,0 +1,117 @@
import numpy as np
# Taken from http://chrisstrelioff.ws/sandbox/2015/06/25/decision_trees_in_python_again_cross_validation.html
def get_code(tree, feature_names, target_names,
             spacer_base=" "):
    """Produce pseudo-code for a decision tree.

    Args
    ----
    tree -- scikit-learn DecisionTree.
    feature_names -- list of feature names.
    target_names -- list of target (class) names.
    spacer_base -- used for spacing code (default: " ").

    Notes
    -----
    based on http://stackoverflow.com/a/30104792.
    """
    left = tree.tree_.children_left
    right = tree.tree_.children_right
    threshold = tree.tree_.threshold
    features = [feature_names[i] for i in tree.tree_.feature]
    value = tree.tree_.value

    def recurse(left, right, threshold, features, node, depth):
        spacer = spacer_base * depth
        if (threshold[node] != -2):
            print(spacer + "if ( " + features[node] + " <= " +
                  str(threshold[node]) + " ) {")
            if left[node] != -1:
                recurse(left, right, threshold, features,
                        left[node], depth+1)
            print(spacer + "}\n" + spacer + "else {")
            if right[node] != -1:
                recurse(left, right, threshold, features,
                        right[node], depth+1)
            print(spacer + "}")
        else:
            target = value[node]
            for i, v in zip(np.nonzero(target)[1],
                            target[np.nonzero(target)]):
                target_name = target_names[i]
                target_count = int(v)
                print(spacer + "return " + str(target_name) +
                      " ( " + str(target_count) + " examples )")

    recurse(left, right, threshold, features, 0, 0)


# Taken from http://scikit-learn.org/stable/auto_examples/tree/plot_iris.html#example-tree-plot-iris-py
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier


def plot_tree_iris():
    """
    Taken from http://scikit-learn.org/stable/auto_examples/tree/plot_iris.html
    """
    # Parameters
    n_classes = 3
    plot_colors = "bry"
    plot_step = 0.02

    # Load data
    iris = load_iris()

    for pairidx, pair in enumerate([[0, 1], [0, 2], [0, 3],
                                    [1, 2], [1, 3], [2, 3]]):
        # We only take the two corresponding features
        X = iris.data[:, pair]
        y = iris.target

        # Shuffle
        idx = np.arange(X.shape[0])
        np.random.seed(13)
        np.random.shuffle(idx)
        X = X[idx]
        y = y[idx]

        # Standardize
        mean = X.mean(axis=0)
        std = X.std(axis=0)
        X = (X - mean) / std

        # Train
        model = DecisionTreeClassifier(max_depth=3, random_state=1).fit(X, y)

        # Plot the decision boundary
        plt.subplot(2, 3, pairidx + 1)
        x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
        y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
        xx, yy = np.meshgrid(np.arange(x_min, x_max, plot_step),
                             np.arange(y_min, y_max, plot_step))
        Z = model.predict(np.c_[xx.ravel(), yy.ravel()])
        Z = Z.reshape(xx.shape)
        cs = plt.contourf(xx, yy, Z, cmap=plt.cm.Paired)

        plt.xlabel(iris.feature_names[pair[0]])
        plt.ylabel(iris.feature_names[pair[1]])
        plt.axis("tight")

        # Plot the training points
        for i, color in zip(range(n_classes), plot_colors):
            idx = np.where(y == i)
            plt.scatter(X[idx, 0], X[idx, 1], c=color, label=iris.target_names[i],
                        cmap=plt.cm.Paired)
        plt.axis("tight")

    plt.suptitle("Decision surface of a decision tree using paired features")
    plt.legend()
    plt.show()

@@ -0,0 +1,51 @@
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn import neighbors, datasets
from sklearn.neighbors import KNeighborsClassifier
# Taken from http://scikit-learn.org/stable/auto_examples/neighbors/plot_classification.html
def plot_classification_iris():
    """
    Plot kNN classification of the iris dataset
    """
    # import some data to play with
    iris = datasets.load_iris()
    # we only take the first two features; we could avoid this
    # ugly slicing by using a two-dimensional dataset
    X = iris.data[:, :2]
    y = iris.target

    h = .02  # step size in the mesh
    n_neighbors = 15

    # Create color maps
    cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
    cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])

    for weights in ['uniform', 'distance']:
        # we create an instance of Neighbours Classifier and fit the data.
        clf = KNeighborsClassifier(n_neighbors, weights=weights)
        clf.fit(X, y)

        # Plot the decision boundary. For that, we will assign a color to each
        # point in the mesh [x_min, x_max]x[y_min, y_max].
        x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
        y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
        xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
                             np.arange(y_min, y_max, h))
        Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])

        # Put the result into a color plot
        Z = Z.reshape(xx.shape)
        plt.figure()
        plt.pcolormesh(xx, yy, Z, cmap=cmap_light)

        # Plot also the training points
        plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap_bold)
        plt.xlim(xx.min(), xx.max())
        plt.ylim(yy.min(), yy.max())
        plt.title("3-Class classification (k = %i, weights = '%s')"
                  % (n_neighbors, weights))
    plt.show()