mirror of https://github.com/gsi-upm/sitc synced 2025-12-15 09:38:16 +00:00
This commit is contained in:
J. Fernando Sánchez
2016-03-28 12:26:20 +02:00
parent 65d1dc162f
commit 62f4fce1ed
12 changed files with 816 additions and 773 deletions

@@ -4,27 +4,12 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"![](files/images/EscUpmPolit_p.gif \"UPM\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Course Notes for Learning Intelligent Systems"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Department of Telematic Engineering Systems, Universidad Politécnica de Madrid, © 2016 Carlos A. Iglesias"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![](files/images/EscUpmPolit_p.gif \"UPM\")\n",
"\n",
"# Course Notes for Learning Intelligent Systems\n",
"\n",
"Department of Telematic Engineering Systems, Universidad Politécnica de Madrid, © 2016 Carlos A. Iglesias\n",
"\n",
"## [Introduction to Machine Learning](2_0_0_Intro_ML.ipynb)"
]
},
@@ -51,7 +36,7 @@
"source": [
"The goal of this notebook is to learn how to read and load a sample dataset.\n",
"\n",
"Scikit-learn come with some bundled [datasets](http://scikit-learn.org/stable/datasets/): iris, digits, boston, etc.\n",
"Scikit-learn comes with some bundled [datasets](http://scikit-learn.org/stable/datasets/): iris, digits, boston, etc.\n",
"\n",
"In this notebook we are going to use the Iris dataset."
]
@@ -78,12 +63,12 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"In ordert to read the dataset, we import the bundle datasets and then load the Iris dataset. "
"In ordert to read the dataset, we import the datasets bundle and then load the Iris dataset. "
]
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 8,
"metadata": {
"collapsed": false
},
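The hunk above only records a changed execution count, so the body of the loading cell is not visible in this diff. The sketch below shows how the Iris dataset is presumably imported and loaded from scikit-learn's bundled datasets, matching the markdown cell's description; the exact cell contents are an assumption.

```python
# Minimal sketch of the loading step described above (assumed cell body;
# the diff only shows execution counts).
from sklearn import datasets

iris = datasets.load_iris()  # bundled Iris dataset, returned as a Bunch object
```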
@@ -105,7 +90,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 9,
"metadata": {
"collapsed": false
},
@@ -116,7 +101,7 @@
"sklearn.datasets.base.Bunch"
]
},
"execution_count": 2,
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
@@ -128,7 +113,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 10,
"metadata": {
"collapsed": false
},
@@ -204,12 +189,12 @@
],
"source": [
"# print descrition of the dataset\n",
"print (iris.DESCR)"
"print(iris.DESCR)"
]
},
{
"cell_type": "code",
"execution_count": 35,
"execution_count": 11,
"metadata": {
"collapsed": false
},
@@ -229,7 +214,7 @@
},
{
"cell_type": "code",
"execution_count": 36,
"execution_count": 12,
"metadata": {
"collapsed": false
},
@@ -249,7 +234,7 @@
},
{
"cell_type": "code",
"execution_count": 33,
"execution_count": 13,
"metadata": {
"collapsed": false
},
@@ -260,7 +245,7 @@
"numpy.ndarray"
]
},
"execution_count": 33,
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
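The cells whose execution counts change around here inspect the loaded Bunch object (the `sklearn.datasets.base.Bunch` and `numpy.ndarray` outputs above come from them). Their bodies are not shown in the diff, so the following is only a hedged sketch of the typical attribute accesses.

```python
# Sketch of inspecting the loaded Bunch (assumed cell contents).
print(type(iris))            # sklearn.datasets.base.Bunch in this scikit-learn version
print(iris.keys())           # 'data', 'target', 'target_names', 'DESCR', 'feature_names'
print(type(iris.data))       # numpy.ndarray, as the output above shows
print(iris.data.shape)       # (150, 4): 150 samples, 4 features
print(iris.feature_names)    # sepal/petal length and width, in cm
print(iris.target_names)     # ['setosa' 'versicolor' 'virginica']
```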
@@ -279,7 +264,7 @@
},
{
"cell_type": "code",
"execution_count": 37,
"execution_count": 14,
"metadata": {
"collapsed": false
},
@@ -472,7 +457,7 @@
},
{
"cell_type": "code",
"execution_count": 20,
"execution_count": 16,
"metadata": {
"collapsed": false
},
@@ -493,7 +478,7 @@
},
{
"cell_type": "code",
"execution_count": 22,
"execution_count": 17,
"metadata": {
"collapsed": false
},
@@ -513,7 +498,7 @@
},
{
"cell_type": "code",
"execution_count": 27,
"execution_count": 18,
"metadata": {
"collapsed": false
},
@@ -533,7 +518,7 @@
},
{
"cell_type": "code",
"execution_count": 28,
"execution_count": 19,
"metadata": {
"collapsed": false
},
@@ -553,7 +538,7 @@
},
{
"cell_type": "code",
"execution_count": 31,
"execution_count": 20,
"metadata": {
"collapsed": false
},
@@ -575,7 +560,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"In another session, we will learn how to load a dataset from a file (csv, excel, ...). We will use the library pandas for this purpose."
"In following sessions we will learn how to load a dataset from a file (csv, excel, ...) using the pandas library."
]
},
{
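As a preview of that pandas-based workflow, here is a minimal sketch; the file name `iris.csv` is hypothetical and used only for illustration.

```python
# Hedged sketch of loading a dataset from a file with pandas (covered in later sessions).
import pandas as pd

df = pd.read_csv('iris.csv')   # hypothetical CSV file; pandas also provides read_excel, etc.
print(df.head())               # first rows of the loaded DataFrame
print(df.describe())           # basic summary statistics
```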
@@ -625,7 +610,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.5.1"
"version": "3.5.1+"
}
},
"nbformat": 4,