Phison Demonstrates 405B Parameter LLM Fine-Tuning with aiDAPTIV+ on Just Two GPUs
At SC25, Phison showed off the potential of its aiDAPTIV+ hardware and software solution by fine-tuning the Llama 3.1 405-billion-parameter model on a single server equipped with two GPUs and 192 GB of VRAM. This task normally requires a combined VRAM pool...
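To see why 192 GB of VRAM is so far below the usual requirement, a rough back-of-envelope calculation helps. The sketch below assumes a common mixed-precision full fine-tuning setup (bf16 weights and gradients plus fp32 Adam optimizer state, roughly 16 bytes per parameter) and ignores activations entirely, so the true footprint would be even larger; the byte counts are illustrative assumptions, not figures from Phison's demo.

```python
# Back-of-envelope memory estimate for full fine-tuning of a 405B-parameter
# model, assuming bf16 weights (2 B) + bf16 gradients (2 B) + fp32 master
# weights and Adam moments (~12 B) per parameter. Activations are excluded.

PARAMS = 405e9                 # 405 billion parameters
BYTES_PER_PARAM = 2 + 2 + 12   # weights + gradients + optimizer state

total_bytes = PARAMS * BYTES_PER_PARAM
total_tb = total_bytes / 1e12

print(f"Estimated footprint: ~{total_tb:.1f} TB")   # ~6.5 TB
print(f"Demo server VRAM:     0.192 TB (192 GB)")
```

Even this conservative estimate lands in the multi-terabyte range, which is why fine-tuning a model of this size conventionally demands dozens of GPUs, and why extending GPU memory with flash storage, as aiDAPTIV+ does, is notable.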
