diff --git a/lectures/hardware/slides.qmd b/lectures/hardware/slides.qmd
index 9594b6e8e7919703f4a189c341f6f1d9ba04ab40..dd8aab478fbe4fe7cbaa4b44620ae4d129e34fc6 100644
--- a/lectures/hardware/slides.qmd
+++ b/lectures/hardware/slides.qmd
@@ -296,27 +296,34 @@ FPGA = Field-Programmable Gata Arrays
 
 ## CPU vs GPU (on Levante)
 
+::: {.smaller}
 ::::{.columns}
-:::{.column width="50"}
+:::{.column width="40%"}
+1 CPU node has 2x [AMD EPYC 7763 Milan](https://www.amd.com/en/products/processors/server/epyc/7003-series/amd-epyc-7763.html)
 :::
-:::{.column width="50%"}
-[NVIDIA A100](https://images.nvidia.com/aem-dam/en-zz/Solutions/data-center/nvidia-ampere-architecture-whitepaper.pdf) has 128 SM
+:::{.column width="40%"}
+1 GPU node has 4x
+[NVIDIA A100](https://images.nvidia.com/aem-dam/en-zz/Solutions/data-center/nvidia-ampere-architecture-whitepaper.pdf), each with 128 SM
 {width="85%"}
 :::
-
 ::::
+:::{.info}
+More insights in the "Memory hierarchies" lecture on July 2nd
+:::
+:::
+
 ## Categorization
 
 * MIMD: Multiple Instructions, Multiple Data
   * Each unit executes a **different** instruction on **different** chunks of data
   * E.g. multiple cores in a CPU
 * SIMD: Single Instruction, Multiple Data
-  * Each unit executes **the same instruction** on **different** chunks of data
+  * Each unit executes **the same** instruction on **different** chunks of data
   * E.g. vector engines
 <!--* "Warps"? inside GPUs
 * Vectorization? inside a CPU core-->
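
The SIMD/MIMD bullets on the Categorization slide can be made concrete with a small example. Below is a minimal sketch, not part of the patched slides: a plain C loop that applies the same addition to different array elements, which an auto-vectorizing compiler can typically map onto SIMD vector instructions. The MIMD case would instead correspond to independent cores each running their own instruction stream, e.g. separate OpenMP threads or MPI ranks. The array size and compiler flags mentioned in the comments are illustrative assumptions.

```c
#include <stdio.h>

#define N 8

int main(void) {
    float a[N], b[N], c[N];

    /* Fill the input arrays with some data. */
    for (int i = 0; i < N; ++i) {
        a[i] = (float)i;
        b[i] = 2.0f * (float)i;
    }

    /* SIMD pattern: the SAME instruction (a floating-point add) is applied
     * to DIFFERENT chunks of data. A vectorizing compiler can turn this
     * loop into vector instructions that process several elements at once. */
    for (int i = 0; i < N; ++i) {
        c[i] = a[i] + b[i];
    }

    for (int i = 0; i < N; ++i) {
        printf("c[%d] = %.1f\n", i, c[i]);
    }
    return 0;
}
```

Compiling with optimization enabled (for example `gcc -O3 -march=native`) usually triggers auto-vectorization of such loops; GCC's `-fopt-info-vec` flag reports which loops were vectorized.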