Lab 3: SDF Scheduler Implementation
- Due 2 May 2023 by 17:00
- Points 1
Estimated effort: approx 3h
This lab is based on Lab 2. Your assignment is to write a C (alternatively Python, Julia) implementation of the system in Lab 2.
To repeat: you have the SDF graph in the figure below, with actors A, B, C, and D and the token production/consumption rates given in the figure. For this exercise, disregard the actors' actual functionality and treat them as generic SDF actors that can fire whenever there are enough tokens on their inputs.
For the moment, let's work on a single processor. You will need to implement:
- a buffer model. You do not have to manage actual token data; you only need to keep track of the number of tokens in each buffer. I suggest you implement at least three functions (but feel free to add any other functions you need to check buffer occupancy or extract other types of information):
init(b, M): initialize buffer b with maximum capacity M
push(b, N): store N tokens in the buffer if there is space for all of them, otherwise store none; return true/false depending on success
pop(b, N): remove N tokens from the buffer if all are available, otherwise remove none; return true/false depending on success
- actor firing functions for A, B, C, D, built on the buffer model above. These functions should model your specific SDF graph. For instance, the firing function for actor B should consume 3 tokens from buffer AB, perform a computation (logging printouts or an empty block will be enough, since this is just a model), and produce 1 token in buffer BC. Note that when using buffers of limited size, there are some tricky situations you have to handle properly...
- a sequencing controller implementing your schedules. In its simplest form this means just repeatedly calling the firing functions for actors A, B, C, D in sequence.
Once you have this basic structure in place, you can extract some information about your system and compare it to the numbers from Lab 2. For instance, you should be able to show:
- the number of tokens in each buffer, after each call to a firing function (as well as the maximum number of tokens in each buffer)
- the number of times each actor fired successfully (and failed to fire)
Modeling of Time
To make this implementation more useful, you should add a time variable to your system. Each successful firing of an actor then needs to advance the time by a value corresponding to that actor's latency. Failed firings should also advance time somewhat, since checking the buffers takes time too. For example: add 10 to the global time whenever an actor function returns true, and add 1 whenever it returns false.
Your Job
Your task is to write the code as specified above for a single-processor system, run it for a sufficiently long (simulated) time interval, and compare your buffer occupancies and firing data to the Lab 2 data.
- Run your code for the round-robin and the optimal schedule (as in Lab 2) over a longer time and check that it matches the Lab 2 simulation. What can you say about successful vs. failed firings in the two scheduling cases?
- What are the "tricky situations" mentioned above and how do these influence the way you fire your actors?
- Limit each buffer to its minimum (found in Lab 2) to see whether the system still runs as in the Lab 2 simulation. What happens if you limit one of the buffers even more?
- Think of the changes you would need to make to your code in order to simulate a multi-processor model as in Lab 2, and discuss these with your TA.
- Compare your implementation to the Ptolemy II model from Lab 2. What are the advantages and drawbacks of working with one or the other?