Lecture 3. Supervised Learning 3
Learning Goals: You should understand the basics of bagging and how to train random forests. You should also be able to explain the kernel trick and how it can be used to generalize many algorithms, such as ridge regression. You should be able to run and make modifications to basic MATLAB and Python (scikit-learn) code for such algorithms.
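For orientation, here is a minimal scikit-learn sketch touching both learning goals. This is our own illustration, not the lecture code; the dataset, hyperparameters, and toy regression problem are arbitrary choices:

```python
# Minimal sketch (not the lecture code): a random forest classifier and a
# kernel ridge regressor, using scikit-learn's standard fit/predict API.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split

# Random forest: bagged decision trees with random feature subsets at each split.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("random forest test accuracy:", rf.score(X_test, y_test))

# Kernel ridge regression: ridge regression generalized via the kernel trick.
rng = np.random.default_rng(0)
X_reg = rng.uniform(0.0, 1.0, size=(50, 1))
y_reg = np.sin(2 * np.pi * X_reg[:, 0]) + 0.1 * rng.standard_normal(50)
kr = KernelRidge(kernel="rbf", alpha=0.1, gamma=10.0).fit(X_reg, y_reg)
print("kernel ridge prediction at x=0.5:", kr.predict([[0.5]]))
```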
Lecture 3 slides: Lecture03small.pdf
Lecture 3 video:
Also available on YouTube: https://youtu.be/4nxBGiJpn_I
Reading Assignment (this refers to the book Supervised Machine Learning by Lindholm et al.):
- Ch 7.1-7.2 Bagging, Random Forests
- Ch 8 Nonlinear Input Transformations and Kernels
Code used in the lecture:
- bagging.ipynb
- irisfeatures.ipynb
- kernelridge.ipynb (see the standalone sketch below)
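Since the notebooks are external links, here is a minimal self-contained NumPy sketch of what kernel ridge regression computes. This is our own illustration and may differ from the actual kernelridge.ipynb; the RBF kernel, gamma, and regularization strength are arbitrary choices:

```python
# From-scratch kernel ridge regression (illustrative sketch). With kernel
# matrix K_ij = k(x_i, x_j), the dual weights solve (K + lambda*I) alpha = y,
# and a prediction at x* is sum_i alpha_i k(x*, x_i): only kernel evaluations
# are needed, never the nonlinear feature map itself -- the kernel trick.
import numpy as np

def rbf_kernel(A, B, gamma=10.0):
    # Pairwise squared Euclidean distances via broadcasting, then RBF.
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(40, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(40)

lam = 0.1
K = rbf_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)  # dual solution

X_new = np.array([[0.25], [0.5]])
print("predictions:", rbf_kernel(X_new, X) @ alpha)
```

This computes the same closed-form dual solution that scikit-learn's KernelRidge (used in the sketch above) implements.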
A correction: at the end of the lecture, it should say that you can start with Exercise 3 now (you need to wait until after you have studied Lecture 4 before starting Exercise 4).
Meeting plan: first, a quiz on Lectures 2 and 3. Then we go through the main points of Lectures 2 and 3, and we run and discuss some of the *.ipynb notebooks available for Lecture 2 and Lecture 3. We also briefly introduce Lab 1, the music taste prediction competition.
We recommend that you have a look at the Data Science Cheat-sheets available under Additional Material, which might be useful for your coding work.
Additional Material on supervised learning is available here.