diff --git a/lectures/parallelism/slides.qmd b/lectures/parallelism/slides.qmd
index 385e12eaad02b519051988cade459e62df4b6d0b..59f9c6a2d1495d69407c0018f285f4cda411002b 100644
--- a/lectures/parallelism/slides.qmd
+++ b/lectures/parallelism/slides.qmd
@@ -107,28 +107,31 @@ N = 8
 ```bash
 module load gcc
 ```
-2. Compile and run the serial example 
+2. Compile and run the serial example
 ```bash
-gcc main.c -o serial.x -lm 
+gcc main.c -o serial.x -lm
 time ./serial.x   # use `time` to check the runtime
 ```
 3. Compile and run the example using OpenMP
 ```bash
-gcc -fopenmp main.c -o parallel.x -lm 
-OMP_NUM_THREADS=2 time ./parallel.x 
+gcc -fopenmp main.c -o parallel.x -lm
+OMP_NUM_THREADS=2 time ./parallel.x
 ```
 4. See next slide!
 
 # Hands-on Session! {background-color=var(--dark-bg-color) .leftalign}
-4. Now add
-   * `schedule(static,1)`
-   * `schedule(static,10)`
-   * `schedule(FIXMEsomethingelse)`
-   * `schedule(FIXMEsomethingelse)`
-and find out how the OpenMP runtime decomposes the problem domain.
 
-FIXME: Maybe add something varying the number of threads, so that one can see
-first ideas of strong/weak scaling.
+4. Now compile/run with
+```bash
+gcc -fopenmp main.c -o parallel.x -lm -DWRITE_DECOMP
+OMP_NUM_THREADS=4 time ./parallel.x
+```
+5. What does the additional output mean?
+
+6. Now uncomment/adapt
+   * `schedule(static,100)`
+   * `schedule(static,10)`
+and interpret the results.
 
 # Scaling
 
@@ -257,7 +260,7 @@ Wikipedia
 "Parallel computing is the simultaneous use of multiple compute resources to solve a computational problem"
 
 :::{.smaller}
--- *Introduction to Parallel Computing Tutorial, LLNL * 
+-- *Introduction to Parallel Computing Tutorial, LLNL*
 :::
 
 
@@ -267,7 +270,7 @@ Wikipedia
   * what we've been discussing
 * Task-level parallelism
   * Example: Atmosphere ocean coupling
-  
+
 
 ## Precondition for parallel execution
 
@@ -441,9 +444,13 @@ S1 and S2 can NOT be executed in parallel!
 
 # FIXME
 * Homework:
-    * Do something where you run into hardware-constraints (i.e. Numa, too many threads, ...)
-    * Give some example with race condition or stuff and have them find it.
-    * Have them discuss the concepts from the lecture using the metaphor of a kitchen workflow?
+    * Revisit `schedule`, try `dynamic`, and explain the behavior you observe.
+    * Parallelize the loop in `maxval`.
+    * Do a strong-scaling experiment from 2 up to 32 threads and plot the
+      result.
+    * If you kept increasing the number of threads, would you expect the
+      speedup to continue indefinitely? If not, what limits can you imagine?
+      Feel free to use kitchen metaphors.
 
 
 # Additional reading