Lab 3: Neural vision sensor and iterative learning control
For Lab 3, we will return to the omnibot of Lab 1.
The lab has two parts:
Part 1 is about controlling the omnibot while estimating its position using a neural-network-driven camera sensor.
Part 2 is about using iterative learning control (ILC) to iteratively improve the trajectory of the robot.
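For background, recall the basic idea of ILC: the input signal is refined from one iteration to the next using the tracking error recorded on the previous run. A common first-order (P-type) update law is

$$u_{k+1}(t) = u_k(t) + \gamma\, e_k(t),$$

where $u_k$ is the input applied on iteration $k$, $e_k = r - y_k$ is the tracking error with respect to the reference $r$, and $\gamma$ is a learning gain. The update used in the lab may include additional filtering or a time shift, so treat this only as a reminder of the principle, not as the exact algorithm you will implement.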
The lab is held in the same room as the previous labs (Lab B, M-building floor 2, eastern corridor). You will once again work in pairs, and you may use either your own laptops or the lab computers; note, however, that the first part of the lab requires a lab computer.
The lab content can be found on the JupyterHub servers under the latest version of frtn75-vision-sensor-and-ilc.
Mandatory preparation
There are preparatory exercises for the lab. Make sure that you have completed them before the session; otherwise you will kindly be asked to leave.
- Complete computer exercise 10. It is important that you have saved your_model.bson, small_model.bson and big_model.bson and that you can access them during the lab session (see the loading sketch after this list).
- Complete the mandatory paper exercise found here (establish a model for our omnibot, and sketch the block diagrams that describe the system). You will need these models at the lab.
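As a quick self-check of the first preparation item, the snippet below is a minimal sketch of how the saved networks could be loaded, assuming (as the .bson extension suggests) that they were saved with Julia's BSON.jl in computer exercise 10. The key :model is an assumption; use whatever variable name you actually saved under.

```julia
using BSON  # package commonly used to serialize Flux models to .bson files

# BSON.load returns a Dict mapping the saved variable names (as Symbols)
# to their values. The key :model is a guess; if it fails, inspect the
# file with keys(BSON.load("your_model.bson")) to see what you saved.
your_model  = BSON.load("your_model.bson")[:model]
small_model = BSON.load("small_model.bson")[:model]
big_model   = BSON.load("big_model.bson")[:model]

@show typeof(your_model)  # sanity check that a model object came back
```

If loading fails on a lab computer, make sure the files are copied to your JupyterHub workspace before the session.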
Questions?
If you have questions about the preparation, you are encouraged to ask them during one of the exercise sessions preceding the lab.
Lab responsible: Yde Sinnema (yde.sinnema@control.lth.se)