To put the numbers into context, one tinybox offers 37% of Nvidia H100 compute performance (FP16) but slightly more memory (96GB instead of 80GB) and considerably higher peak memory bandwidth (21T ...
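As a rough sanity check on the "37%" figure, here is a minimal sketch of the arithmetic. The excerpt is truncated, so both inputs are assumptions rather than numbers quoted above: the tinybox's published FP16 rating (roughly 738 TFLOPS per tinycorp's spec sheet) and the H100 SXM FP16 tensor rating with sparsity (roughly 1979 TFLOPS).

```python
# Sketch only: reproduce the "~37% of H100 FP16" claim from assumed ratings.
# Neither figure appears in the excerpt above; both are assumptions.
TINYBOX_FP16_TFLOPS = 738.0   # assumed tinybox published FP16 rating
H100_FP16_TFLOPS = 1979.0     # assumed H100 SXM FP16 tensor rating (sparse)

ratio = TINYBOX_FP16_TFLOPS / H100_FP16_TFLOPS
print(f"tinybox / H100 FP16: {ratio:.0%}")  # prints ~37%
```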
As for key AI performance metrics, AMD said the MI300X is 30 percent faster than the H100 for TensorFloat-32 or TF32 (653.7 teraflops), half-precision floating point or FP16 (1307.4 teraflops ...
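AMD's "30 percent faster" claim can be checked against the H100's dense (no-sparsity) tensor ratings in the same way. The quoted MI300X numbers come from the excerpt above; the H100 baselines below are assumptions taken from Nvidia's public spec sheet (roughly 494.7 TFLOPS TF32 and 989.5 TFLOPS FP16, dense), and the resulting speedup lands a little above 30 percent.

```python
# Sketch only: compare quoted MI300X throughput to assumed H100 dense ratings.
MI300X = {"TF32": 653.7, "FP16": 1307.4}  # TFLOPS, quoted in the excerpt above
H100 = {"TF32": 494.7, "FP16": 989.5}     # TFLOPS, assumed dense tensor ratings

for fmt, mi300x_tflops in MI300X.items():
    speedup = mi300x_tflops / H100[fmt] - 1.0
    print(f"{fmt}: MI300X ~{speedup:.0%} faster than H100")  # ~32% for both
```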