“Phison’s booth at GTC 2024 held an unexpected surprise: the company demoed a single workstation with four GPUs that uses SSDs and DRAM to expand the effective memory space for AI workloads, allowing it to run a workload that would typically require 1.4TB of VRAM spread across 24 H100 GPUs. The company’s new aiDaptiv+ platform is designed to lower the barriers to AI LLM training by employing system DRAM and SSDs to augment the GPU VRAM available for training. Phison says this will let users run intensive generative AI training workloads at a fraction of the cost of using standard GPUs alone, trading the lower cost of entry for reduced performance and thus longer training times.”
Source: Tom’s Hardware
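
The article does not detail how aiDaptiv+ works internally, but the general idea it describes, spilling model state from GPU VRAM to system DRAM and then to SSD, can be sketched in a few lines. Below is a minimal, hypothetical illustration in PyTorch of such a tiered tensor store; the class name, budgets, and file layout are assumptions for illustration only, not Phison’s implementation or API.

```python
# Minimal sketch of a VRAM -> DRAM -> SSD tensor tier, assuming a PyTorch
# environment. This is NOT Phison's aiDaptiv+ code; names and budgets here
# are hypothetical, chosen only to illustrate the offloading concept.
import os
import torch


class TieredTensorStore:
    """Keep a bounded number of tensors on the fast device; spill the rest
    to host DRAM, and beyond that to an SSD-backed file."""

    def __init__(self, device_budget: int, dram_budget: int, spill_dir: str = "spill"):
        self.device_budget = device_budget      # max tensors resident on the GPU
        self.dram_budget = dram_budget          # max tensors resident in host DRAM
        self.spill_dir = spill_dir
        os.makedirs(spill_dir, exist_ok=True)
        self.device = "cuda" if torch.cuda.is_available() else "cpu"
        self.gpu = {}      # name -> tensor on the fast device
        self.dram = {}     # name -> tensor in host memory
        self.disk = set()  # names spilled to SSD files

    def put(self, name: str, tensor: torch.Tensor) -> None:
        # Fill the fastest tier first, then DRAM, then the SSD.
        if len(self.gpu) < self.device_budget:
            self.gpu[name] = tensor.to(self.device)
        elif len(self.dram) < self.dram_budget:
            self.dram[name] = tensor.cpu()
        else:
            torch.save(tensor.cpu(), os.path.join(self.spill_dir, f"{name}.pt"))
            self.disk.add(name)

    def get(self, name: str) -> torch.Tensor:
        # Fetch from the fastest tier that holds the tensor.
        if name in self.gpu:
            return self.gpu[name]
        if name in self.dram:
            return self.dram[name].to(self.device)   # promote from DRAM on demand
        path = os.path.join(self.spill_dir, f"{name}.pt")
        return torch.load(path).to(self.device)      # slowest tier: SSD read


# Usage: store more layer weights than the "VRAM" budget allows.
store = TieredTensorStore(device_budget=2, dram_budget=2)
for i in range(6):
    store.put(f"layer{i}.weight", torch.randn(1024, 1024))
print(store.get("layer5.weight").shape)  # this one comes back from the SSD tier
```

The trade-off the article mentions is visible in the sketch: anything pushed to the DRAM or SSD tiers must be copied back to the GPU before it can be used, which is what makes training cheaper in hardware but slower in wall-clock time.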