{"id":87940,"date":"2026-01-14T06:00:00","date_gmt":"2026-01-14T14:00:00","guid":{"rendered":"https:\/\/phisonblog.com\/?p=87940"},"modified":"2026-01-22T10:04:10","modified_gmt":"2026-01-22T18:04:10","slug":"phison-demonstrates-405b-parameter-llm-fine-tuning-with-aidaptiv-on-just-two-gpus","status":"publish","type":"post","link":"https:\/\/phisonblog.com\/zh-tw\/phison-demonstrates-405b-parameter-llm-fine-tuning-with-aidaptiv-on-just-two-gpus\/","title":{"rendered":"\u7fa4\u806f\u96fb\u5b50\u5229\u7528 aiDAPTIV+ \u5728\u50c5\u5169\u584a GPU \u4e0a\u5c55\u793a\u4e86 4050 \u5104\u53c3\u6578 LLM \u5fae\u8abf\u00a0"},"content":{"rendered":"<p>[et_pb_section fb_built=&#8221;1&#8243; _builder_version=&#8221;4.16&#8243; _module_preset=&#8221;default&#8221; custom_margin=&#8221;0px||||false|false&#8221; custom_padding=&#8221;0px||||false|false&#8221; locked=&#8221;off&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_row _builder_version=&#8221;4.16&#8243; _module_preset=&#8221;default&#8221; width=&#8221;100%&#8221; max_width=&#8221;100%&#8221; custom_margin=&#8221;||||false|false&#8221; custom_padding=&#8221;0px||||false|false&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;4_4&#8243; _builder_version=&#8221;4.16&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_text _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; header_2_line_height=&#8221;1.7em&#8221; header_3_line_height=&#8221;1.7em&#8221; custom_margin=&#8221;||-10px||false|false&#8221; custom_padding=&#8221;||0px||false|false&#8221; locked=&#8221;off&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p><span data-contrast=\"auto\">At SC25, Phison showed off the potential of its aiDAPTIV+ hardware and software solution by fine-tune training the Llama 3.1 405 billion parameter model on a single server equipped with two GPUs and 192 GB of VRAM. 
This task normally requires a combined VRAM pool of over 7 TB and multiple NVIDIA servers \u2014 a large and costly setup. Instead, it was all done on a single system costing around $50,000, thanks to the power of aiDAPTIV+.<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">The secret sauce in making this work was an 8TB aiDAPTIVCache SSD, used to store the calculated weights and values for the LLM. aiDAPTIV+ breaks down the complex task of fine-tuning into smaller, more manageable chunks rather than trying to fit everything into VRAM. The result is the democratization of access to more powerful AI models for a wide range of organizations.<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">As an example of the versatility that aiDAPTIV+ brings to LLM fine-tune training, this system completed a task that would normally require a dozen NVIDIA HGX H100 servers, each with eight H100 GPUs. These servers require far more space and electricity, and cost significantly more than the aiDAPTIV+ server that accomplished the same task inside the Phison booth.<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<p>&nbsp;<\/p>\n<div class=\"banner_wrapper\" style=\"height: 83px;\"><div class=\"banner  banner-71399 bottom vert custom-banners-theme-default_style\" style=\"\"><img decoding=\"async\" width=\"1080\" height=\"150\" src=\"https:\/\/phisonblog.com\/wp-content\/uploads\/2024\/03\/964_2464965148.jpg\" class=\"attachment-full size-full\" alt=\"\" style=\"height: 83px;\" srcset=\"https:\/\/phisonblog.com\/wp-content\/uploads\/2024\/03\/964_2464965148.jpg 1080w, https:\/\/phisonblog.com\/wp-content\/uploads\/2024\/03\/964_2464965148-980x136.jpg 980w, https:\/\/phisonblog.com\/wp-content\/uploads\/2024\/03\/964_2464965148-480x67.jpg 480w\" sizes=\"(min-width: 0px) and (max-width: 480px) 480px, (min-width: 481px) and (max-width: 980px) 980px, (min-width: 981px) 1080px, 100vw\" \/><a 
class=\"custom_banners_big_link\" href=\"https:\/\/www.phison.com\/en\/aidaptiv-plus-ai-data-storage-solution\" target=\"_blank\" rel=\"noopener\"><\/a><div class=\"banner_caption\" style=\"\"><div class=\"banner_caption_inner\"><div class=\"banner_caption_text\" style=\"\">View:  Phison aiDAPTIV+ Solution<\/div><\/div><\/div><\/div><\/div>\n<h3 aria-level=\"2\">\u00a0<\/h3>\n<h3 aria-level=\"2\"><span data-contrast=\"none\">What is aiDAPTIV+?<\/span><span data-ccp-props=\"{&quot;134245418&quot;:true,&quot;134245529&quot;:true,&quot;335559738&quot;:160,&quot;335559739&quot;:80}\">\u00a0<\/span><\/h3>\n<p><span data-contrast=\"auto\">aiDAPTIV+ is a hardware and software solution engineered to extend the memory space for fine-tune training of AI models and accelerate time to first token (TTFT) for inference workloads. A specialized aiDAPTIVCache SSD serves as a staging area for the unpacked LLM\u2019s tensor weights and vectors for training. The aiDAPTIVLink middleware manages memory allocation and determines how to best utilize available resources, shifting data to and from the aiDAPTIVCache SSD into VRAM or DRAM as needed.<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">aiDAPTIVCache SSDs are purpose built for this task, with an extremely high endurance rating of 100 DWPD (drive writes per day). The SSDs achieve this level of endurance and availability of constant high-speed throughput utilizing SLC NAND. It\u2019s possible to write over 300TB per day to an aiDAPTIVCache SSD under heavy use, which could burn out regular SSDs in less than two months. aiDAPTIVCache drives are rated to handle this demanding scenario for five years of constant use.<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">While the time to complete a fine-tune training process increases compared with utilizing pure GPU VRAM, most of the reduced performance comes from having fewer GPUs and less compute available. 
The aiDAPTIVCache SSDs are fast enough not to cause a severe bottleneck, resulting in training times around 5% longer than pure GPU-based training on a similar number of GPUs. Direct comparisons aren\u2019t particularly meaningful, however, as aiDAPTIV+ enables fine-tuning of significantly larger models on GPUs that otherwise couldn\u2019t handle them at all.<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><img decoding=\"async\" class=\"alignnone wp-image-87950 size-full\" src=\"https:\/\/phisonblog.com\/wp-content\/uploads\/2026\/01\/Screenshot-2026-01-13-132253.png\" alt=\"\" width=\"709\" height=\"403\" srcset=\"https:\/\/phisonblog.com\/wp-content\/uploads\/2026\/01\/Screenshot-2026-01-13-132253.png 709w, https:\/\/phisonblog.com\/wp-content\/uploads\/2026\/01\/Screenshot-2026-01-13-132253-480x273.png 480w\" sizes=\"(min-width: 0px) and (max-width: 480px) 480px, (min-width: 481px) 709px, 100vw\" \/><\/p>\n<p><i><span data-contrast=\"auto\">aiDAPTIV+ AITPC running Pro Suite on two RTX 4000 Ada GPUs.<\/span><\/i><span data-ccp-props=\"{&quot;335551550&quot;:2,&quot;335551620&quot;:2}\">\u00a0<\/span><\/p>\n<h3 aria-level=\"2\">\u00a0<\/h3>\n<h3 aria-level=\"2\"><span data-contrast=\"none\">Llama 3.1 405B aiDAPTIV+ server training details<\/span><span data-ccp-props=\"{&quot;134245418&quot;:true,&quot;134245529&quot;:true,&quot;335559738&quot;:160,&quot;335559739&quot;:80}\">\u00a0<\/span><\/h3>\n<p><span data-contrast=\"auto\">The server utilized by Phison at SC25 featured the following hardware that enabled training of the Llama 3.1 405B model.<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<p><img decoding=\"async\" class=\"alignnone wp-image-87976 \" src=\"https:\/\/phisonblog.com\/wp-content\/uploads\/2026\/01\/Llama-3.1-405B-model-Image-scaled.png\" alt=\"\" width=\"711\" height=\"400\" \/><\/p>\n<p>&nbsp;<\/p>\n<p><span data-contrast=\"auto\">aiDAPTIV+ supports a variety of modern platforms, including Intel and AMD CPUs, ARM processors, many motherboards and 
chipsets, and a wide range of memory capacities. The CPU isn\u2019t a major factor in running aiDAPTIV+, but workstation and server platforms provide significantly more connectivity options. Client solutions work but don\u2019t support as many PCIe lanes, which has a direct impact on the performance of the storage devices as well as the GPUs \u2014 especially when using more than one GPU and multiple SSDs.<\/span><span data-ccp-props=\"{&quot;335559738&quot;:240}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">The Phison SC25 server used Intel\u2019s Xeon W5-3435X processor from the Sapphire Rapids family, which provides 112 total lanes of PCI Express 5.0 connectivity. Each of the RTX Pro 6000 GPUs uses an x16 PCIe Gen5 slot, with the SSDs using U.2 x4 connections. The server as configured represents a baseline, with plenty of room for expansion, including additional aiDAPTIVCache SSDs and GPUs to increase performance.<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">The aiDAPTIVCache SSD used at SC25 was an 8TB U.2 model, the capacity required to handle the training of a 405 billion parameter model. Fine-tune training of an LLM requires roughly 20 bytes of memory per parameter, so 405 billion parameters equates to around 8 TB of memory. That memory is traditionally VRAM, or system RAM in some scenarios, both of which are very expensive in such quantities. aiDAPTIV+ augments the available memory with 8 TB of NAND flash storage.<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">Training a 405B model with the traditional approach of keeping all the data in VRAM, rather than using aiDAPTIV+, requires around 26 of the latest NVIDIA B300 Ultra accelerators (288 GB of VRAM each), which means using four servers with eight GPUs each \u2014 and each individual GPU would cost as much as the entire aiDAPTIV+ server. 
Alternatively, it takes about 40 NVIDIA B200 GPUs (192 GB each) spread over five servers, 53 H200 GPUs (141 GB each) running in seven servers, or 93 H100 accelerators (80 GB each) housed in a dozen HGX H100 servers. Any of those clusters would also require networking and interconnect hardware, with total power use in the 40 kW to 80 kW range. The aiDAPTIV+ server used only about 1 kW and required no additional infrastructure.<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<h3 aria-level=\"2\">\u00a0<\/h3>\n<h3 aria-level=\"2\"><span data-contrast=\"none\">aiDAPTIV+ training results and performance<\/span><span data-ccp-props=\"{&quot;134245418&quot;:true,&quot;134245529&quot;:true,&quot;335559738&quot;:160,&quot;335559739&quot;:80}\">\u00a0<\/span><\/h3>\n<p><span data-contrast=\"auto\">Phison used the Dolly creative writing dataset found on Hugging Face to fine-tune the Llama 3.1 405B LLM for the SC25 demonstration. Different datasets affect the amount of time required for training, though scaling would still be similar across different hardware solutions.<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">The base Llama-3.1-405B-Instruct model is 2 TB in size, using the Safetensors format. The first step of the training process consists of unpacking the LLM tensors into memory \u2014 or to the aiDAPTIVCache SSD in this case. This step generates over 5 TB of data and takes about 30 minutes, with peak memory utilization during training exceeding 7 TB.<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">The Dolly dataset consists of 771K characters and 672 Q\/A pairs, and the aiDAPTIV+ server completed the fine-tune training at a pace of 25 hours and 50 minutes per epoch. 
The entire process finished in two days and four hours with a training run of two epochs.<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">Over the four days the aiDAPTIV+ server ran at SC25, aiDAPTIV+ wrote hundreds of terabytes of data to the aiDAPTIVCache SSD. For typical enterprise SSDs rated at 1 DWPD, that level of writes would wear out the flash in a matter of weeks \u2014 less for lower-capacity drives. With an endurance rating of 100 DWPD, Pascari aiDAPTIVCache SSDs are tailor-made to handle this type of workload, running 24\/7\/365 for five years.<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><img decoding=\"async\" class=\"alignnone wp-image-87949 size-full\" src=\"https:\/\/phisonblog.com\/wp-content\/uploads\/2026\/01\/Screenshot-2026-01-13-132331.png\" alt=\"\" width=\"716\" height=\"540\" srcset=\"https:\/\/phisonblog.com\/wp-content\/uploads\/2026\/01\/Screenshot-2026-01-13-132331.png 716w, https:\/\/phisonblog.com\/wp-content\/uploads\/2026\/01\/Screenshot-2026-01-13-132331-480x362.png 480w\" sizes=\"(min-width: 0px) and (max-width: 480px) 480px, (min-width: 481px) 716px, 100vw\" \/><\/p>\n<p><i><span data-contrast=\"auto\">Pascari AI200E 8TB U.2 SSD used for fine-tune training of Llama 405B.<\/span><\/i><span data-ccp-props=\"{&quot;335551550&quot;:2,&quot;335551620&quot;:2}\">\u00a0<\/span><\/p>\n<h3 aria-level=\"2\">\u00a0<\/h3>\n<h3 aria-level=\"2\"><span data-contrast=\"none\">Start doing more with aiDAPTIV+<\/span><span data-ccp-props=\"{&quot;134245418&quot;:true,&quot;134245529&quot;:true,&quot;335559738&quot;:160,&quot;335559739&quot;:80}\">\u00a0<\/span><\/h3>\n<p><span data-contrast=\"auto\">aiDAPTIV+ provides an important path for expanding the available memory for fine-tune training, and with a global shortage of DRAM causing a spike in memory prices, it does so at a fraction of the cost. 
It opens up the potential to leverage AI at all levels of business, using on-premises hardware that doesn\u2019t require a massive financial investment, a data center, or large amounts of electricity. aiDAPTIV+ also works for research and educational facilities, allowing experimentation with larger models without expensive cloud contracts or data sovereignty concerns.<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">Phison\u2019s SC25 demonstration proves that a massive data center and costly infrastructure aren\u2019t needed to work with the biggest AI models. aiDAPTIV+ delivers a practical, affordable path to fine-tuning models once reserved for hyperscalers by combining purpose-built aiDAPTIVCache SSDs with intelligent memory orchestration. For organizations looking to unlock larger models and true on-premises AI independence, aiDAPTIV+ represents a compelling shift in what\u2019s possible. <\/span><\/p>\n<p><a href=\"https:\/\/www.phison.com\/en\/aidaptiv-plus-ai-data-storage-solution\" target=\"_blank\" rel=\"noopener\"><span data-contrast=\"none\">Visit Phison<\/span><\/a><span data-contrast=\"auto\"> to explore how aiDAPTIV+ can transform your AI workflows and see the technology in action.<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<p>[\/et_pb_text][\/et_pb_column][\/et_pb_row][et_pb_row disabled_on=&#8221;on|on|on&#8243; _builder_version=&#8221;4.16&#8243; _module_preset=&#8221;default&#8221; width=&#8221;100%&#8221; max_width=&#8221;100%&#8221; custom_margin=&#8221;||||false|false&#8221; custom_padding=&#8221;0px||||false|false&#8221; disabled=&#8221;on&#8221; saved_tabs=&#8221;all&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;4_4&#8243; _builder_version=&#8221;4.16&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_text _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; 
global_colors_info=&#8221;{}&#8221;]<\/p>\n<h3><strong>Frequently Asked Questions (FAQ):<\/strong><\/h3>\n<p>[\/et_pb_text][et_pb_toggle title=&#8221;How are universities driving the next wave of AI innovation?&#8221; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p><span class=\"NormalTextRun SCXW190708308 BCX0\" data-ccp-parastyle=\"Normal (Web)\">Universities are combining AI with interdisciplinary research, from genomics to climate modeling and the social sciences, to solve global problems. Their open, collaborative environments accelerate model training and experimentation at a scale once reserved for <\/span><span class=\"NormalTextRun SpellingErrorV2Themed SCXW190708308 BCX0\" data-ccp-parastyle=\"Normal (Web)\">hyperscalers<\/span><span class=\"NormalTextRun SCXW190708308 BCX0\" data-ccp-parastyle=\"Normal (Web)\">.<\/span><\/p>\n<p>[\/et_pb_toggle][et_pb_toggle title=&#8221;Why is data throughput critical in academic AI research?&#8221; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p><span class=\"NormalTextRun SCXW439013 BCX0\" data-ccp-parastyle=\"Normal (Web)\">AI models thrive on massive datasets. Without high-speed storage and efficient data movement, GPUs <\/span><span class=\"NormalTextRun SCXW439013 BCX0\" data-ccp-parastyle=\"Normal (Web)\">remain<\/span><span class=\"NormalTextRun SCXW439013 BCX0\" data-ccp-parastyle=\"Normal (Web)\"> idle. 
Each millisecond of delay increases costs and slows breakthroughs in genomics, climate prediction, and behavioral studies.<\/span><\/p>\n<p>[\/et_pb_toggle][et_pb_toggle title=&#8221;What challenges do universities face when scaling AI infrastructure?&#8221; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p><span class=\"NormalTextRun SCXW242238695 BCX0\" data-ccp-parastyle=\"Normal (Web)\">Budgets, compliance, and legacy storage architectures often constrain performance. Many academic clusters have expanded GPU capacity but neglected the I\/O bottlenecks that prevent full <\/span><span class=\"NormalTextRun SCXW242238695 BCX0\" data-ccp-parastyle=\"Normal (Web)\">utilization<\/span><span class=\"NormalTextRun SCXW242238695 BCX0\" data-ccp-parastyle=\"Normal (Web)\"> of compute power.<\/span><\/p>\n<p>[\/et_pb_toggle][et_pb_toggle title=&#8221;How does Phison\u2019s aiDAPTIV+ address these performance bottlenecks?&#8221; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p><span class=\"NormalTextRun SCXW209260692 BCX0\" data-ccp-parastyle=\"Normal (Web)\">aiDAPTIV<\/span><span class=\"NormalTextRun SCXW209260692 BCX0\" data-ccp-parastyle=\"Normal (Web)\">+ enhances throughput by offloading pre-processing and data management directly to <\/span><span class=\"NormalTextRun SCXW209260692 BCX0\" data-ccp-parastyle=\"Normal (Web)\">the SSD layer. 
This reduces <\/span><span class=\"NormalTextRun ContextualSpellingAndGrammarErrorV2Themed SCXW209260692 BCX0\" data-ccp-parastyle=\"Normal (Web)\">GPU<\/span><span class=\"NormalTextRun SCXW209260692 BCX0\" data-ccp-parastyle=\"Normal (Web)\"> idle time, accelerates workloads, and maximizes return on existing hardware investments.<\/span><\/p>\n<p>[\/et_pb_toggle][et_pb_toggle title=&#8221;What types of research benefit most from Phison\u2019s technology?&#8221; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p><span class=\"NormalTextRun SCXW180656999 BCX0\" data-ccp-parastyle=\"Normal (Web)\">aiDAPTIV<\/span><span class=\"NormalTextRun SCXW180656999 BCX0\" data-ccp-parastyle=\"Normal (Web)\">+ delivers measurable improvements across <\/span><span class=\"NormalTextRun SCXW180656999 BCX0\" data-ccp-parastyle=\"Normal (Web)\">genomic<\/span><span class=\"NormalTextRun SCXW180656999 BCX0\" data-ccp-parastyle=\"Normal (Web)\"> sequencing, high-resolution climate simulations, and social data analytics, anywhere rapid access to small or unstructured files <\/span><span class=\"NormalTextRun SCXW180656999 BCX0\" data-ccp-parastyle=\"Normal (Web)\">determines<\/span><span class=\"NormalTextRun SCXW180656999 BCX0\" data-ccp-parastyle=\"Normal (Web)\"> discovery speed.<\/span><\/p>\n<p>[\/et_pb_toggle][et_pb_toggle title=&#8221;Can universities maintain compliance while using aiDAPTIV+?&#8221; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p><span class=\"NormalTextRun SCXW222213171 BCX0\" data-ccp-parastyle=\"Normal (Web)\">Yes. <\/span><span class=\"NormalTextRun SpellingErrorV2Themed SCXW222213171 BCX0\" data-ccp-parastyle=\"Normal (Web)\">aiDAPTIV<\/span><span class=\"NormalTextRun SCXW222213171 BCX0\" data-ccp-parastyle=\"Normal (Web)\">+ is engineered for data sovereignty. 
Sensitive research data stays on premises or in private clouds, ensuring compliance with institutional and cross-border data protection mandates.<\/span><\/p>\n<p>[\/et_pb_toggle][et_pb_toggle title=&#8221;How does Phison\u2019s SSD controller technology differ from traditional storage?&#8221; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p><span class=\"NormalTextRun SCXW45286248 BCX0\" data-ccp-parastyle=\"Normal (Web)\">Unlike standard SSDs that act as passive storage, <\/span><span class=\"NormalTextRun SpellingErrorV2Themed SCXW45286248 BCX0\" data-ccp-parastyle=\"Normal (Web)\">Phison\u2019s<\/span><span class=\"NormalTextRun SCXW45286248 BCX0\" data-ccp-parastyle=\"Normal (Web)\"> controllers actively <\/span><span class=\"NormalTextRun SCXW45286248 BCX0\" data-ccp-parastyle=\"Normal (Web)\">optimize<\/span><span class=\"NormalTextRun SCXW45286248 BCX0\" data-ccp-parastyle=\"Normal (Web)\"> I\/O flow through intelligent data pathing. 
This transforms storage into a <\/span><span class=\"NormalTextRun ContextualSpellingAndGrammarErrorV2Themed SCXW45286248 BCX0\" data-ccp-parastyle=\"Normal (Web)\">high-performance<\/span><span class=\"NormalTextRun SCXW45286248 BCX0\" data-ccp-parastyle=\"Normal (Web)\"> processing engine for AI workloads.<\/span><\/p>\n<p>[\/et_pb_toggle][et_pb_toggle title=&#8221;How does aiDAPTIV+ lower total cost of research computing?&#8221; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p><span class=\"TextRun SCXW120266594 BCX0\" lang=\"EN-US\" xml:lang=\"EN-US\" data-contrast=\"auto\"><span class=\"NormalTextRun SCXW120266594 BCX0\" data-ccp-parastyle=\"Normal (Web)\">By increasing performance through smarter storage orchestration, universities achieve faster results without continuously expanding GPU fleets, reducing both capital and operational expenses.<\/span><\/span><\/p>\n<p>[\/et_pb_toggle][et_pb_toggle title=&#8221;What role will aiDAPTIV+ play in AI sustainability?&#8221; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p><span class=\"NormalTextRun SCXW189417198 BCX0\" data-ccp-parastyle=\"Normal (Web)\">aiDAPTIV<\/span><span class=\"NormalTextRun SCXW189417198 BCX0\" data-ccp-parastyle=\"Normal (Web)\">+ improves energy efficiency by minimizing GPU idle cycles and unnecessary data movement. 
This enables universities to advance AI research while lowering the carbon and energy footprint of their <\/span><span class=\"NormalTextRun ContextualSpellingAndGrammarErrorV2Themed SCXW189417198 BCX0\" data-ccp-parastyle=\"Normal (Web)\">compute<\/span><span class=\"NormalTextRun SCXW189417198 BCX0\" data-ccp-parastyle=\"Normal (Web)\"> operations.<\/span><\/p>\n<p>[\/et_pb_toggle][et_pb_toggle title=&#8221;How can universities get started with Phison\u2019s aiDAPTIV+?&#8221; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p><span class=\"TextRun SCXW81076065 BCX0\" lang=\"EN-US\" xml:lang=\"EN-US\" data-contrast=\"auto\"><span class=\"NormalTextRun SCXW81076065 BCX0\" data-ccp-parastyle=\"Normal (Web)\">Phison<\/span><span class=\"NormalTextRun SCXW81076065 BCX0\" data-ccp-parastyle=\"Normal (Web)\"> offers <\/span><span class=\"NormalTextRun SpellingErrorV2Themed SCXW81076065 BCX0\" data-ccp-parastyle=\"Normal (Web)\">aiDAPTIV<\/span><span class=\"NormalTextRun SCXW81076065 BCX0\" data-ccp-parastyle=\"Normal (Web)\">+ as a turnkey platform that <\/span><span class=\"NormalTextRun ContextualSpellingAndGrammarErrorV2Themed SCXW81076065 BCX0\" data-ccp-parastyle=\"Normal (Web)\">integrates with<\/span><span class=\"NormalTextRun SCXW81076065 BCX0\" data-ccp-parastyle=\"Normal (Web)\"> existing HPC and AI clusters. 
Universities can scale node counts as research grows, gaining hyperscale-level performance within academic budgets.<\/span><\/span><span class=\"EOP SCXW81076065 BCX0\" data-ccp-props=\"{&quot;134233117&quot;:true,&quot;134233118&quot;:true,&quot;201341983&quot;:0,&quot;335559740&quot;:240}\">\u00a0<\/span><\/p>\n<p>[\/et_pb_toggle][\/et_pb_column][\/et_pb_row][\/et_pb_section]<\/p>\n","protected":false},"excerpt":{"rendered":"<p>At SC25, Phison showed off the potential of its aiDAPTIV+ hardware and software solution by fine-tune training the Llama 3.1 405 billion parameter model on a single server equipped with two GPUs and 192 GB of VRAM. This task normally requires a combined VRAM pool of over 7 TB and multiple NVIDIA servers \u2014 a [&hellip;]<\/p>\n","protected":false},"author":78,"featured_media":87963,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"_et_pb_use_builder":"on","_et_pb_old_content":"","_et_gb_content_width":"","inline_featured_image":false,"footnotes":""},"categories":[120,23],"tags":[22],"class_list":["post-87940","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai","category-all-posts","tag-long-content"],"acf":[],"_links":{"self":[{"href":"https:\/\/phisonblog.com\/zh-tw\/wp-json\/wp\/v2\/posts\/87940","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/phisonblog.com\/zh-tw\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/phisonblog.com\/zh-tw\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/phisonblog.com\/zh-tw\/wp-json\/wp\/v2\/users\/78"}],"replies":[{"embeddable":true,"href":"https:\/\/phisonblog.com\/zh-tw\/wp-json\/wp\/v2\/comments?post=87940"}],"version-history":[{"count":17,"href":"https:\/\/phisonblog.com\/zh-tw\/wp-json\/wp\/v2\/posts\/87940\/revisions"}],"predecessor-version":[{"id":88077,"href":"https:\/\/phisonblog.com\/zh-tw\/wp-json\/wp\/v2\/posts\/8794
0\/revisions\/88077"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/phisonblog.com\/zh-tw\/wp-json\/wp\/v2\/media\/87963"}],"wp:attachment":[{"href":"https:\/\/phisonblog.com\/zh-tw\/wp-json\/wp\/v2\/media?parent=87940"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/phisonblog.com\/zh-tw\/wp-json\/wp\/v2\/categories?post=87940"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/phisonblog.com\/zh-tw\/wp-json\/wp\/v2\/tags?post=87940"}],"curies":[{"name":"\u53ef\u6fd5\u6027\u7c89\u5291","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}