{"id":85576,"date":"2025-05-10T18:00:55","date_gmt":"2025-05-11T01:00:55","guid":{"rendered":"https:\/\/phisonblog.com\/?p=85576"},"modified":"2025-06-02T10:50:24","modified_gmt":"2025-06-02T17:50:24","slug":"phison-using-ssds-to-reduce-ai-training-costs","status":"publish","type":"post","link":"https:\/\/phisonblog.com\/ja\/phison-using-ssds-to-reduce-ai-training-costs\/","title":{"rendered":"PHISON \u2013 Using SSDs to Reduce AI Training Costs"},"content":{"rendered":"<p>[et_pb_section fb_built=&#8221;1&#8243; _builder_version=&#8221;4.16&#8243; global_colors_info=&#8221;{}&#8221;][et_pb_row _builder_version=&#8221;4.16&#8243; background_size=&#8221;initial&#8221; background_position=&#8221;top_left&#8221; background_repeat=&#8221;repeat&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;4_4&#8243; _builder_version=&#8221;4.16&#8243; custom_padding=&#8221;|||&#8221; global_colors_info=&#8221;{}&#8221; custom_padding__hover=&#8221;|||&#8221;][et_pb_text _builder_version=&#8221;4.16&#8243; background_size=&#8221;initial&#8221; background_position=&#8221;top_left&#8221; background_repeat=&#8221;repeat&#8221; global_colors_info=&#8221;{}&#8221;]&#8221;Phison is leading this campaign with its\u00a0<a href=\"https:\/\/phisonblog.com\/phison-expands-aidaptiv-gpu-memory-extension-capabilities-for-additional-platforms-to-enable-llm-training-and-improve-inferencing-on-premises\/\" target=\"_blank\" rel=\"noopener\">aiDAPTIV+ LLM Training Integrative Solution<\/a>.\u00a0 That name is pretty long, though, so I\u2019ll simply use the word aiDAPTIV+ from here on.<\/p>\n<p>. . 
.<\/p>\n<p>Well, the folks at Phison decided to apply this principle to AI.\u00a0 Their aiDAPTIV+ design uses specialized SSDs to reduce the amount of HBM DRAM required in an LLM training system.<\/p>\n<p>Phison argues that the number of GPUs needed in a system is determined by the size of the DRAM the problem requires.\u00a0 Their argument is that if a GPU has 80GB of HBM, and it\u2019s running a model that needs 800GB of parameters, then you need to use ten GPUs.&#8221;- thessdguy.com<\/p>\n<p>Source: <a href=\"https:\/\/thessdguy.com\/using-ssds-to-reduce-ai-training-costs\/\" target=\"_blank\" rel=\"noopener\">thessdguy.com<\/a><br \/>\n[\/et_pb_text][\/et_pb_column][\/et_pb_row][\/et_pb_section]<\/p>\n","protected":false},"excerpt":{"rendered":"<p>&#8220;Phison is leading this campaign with its\u00a0aiDAPTIV+ LLM Training Integrative Solution.\u00a0 That name is pretty long, though, so I\u2019ll simply use the word aiDAPTIV+ from here on. . . . Well, the folks at Phison decided to apply this principle to AI.\u00a0 Their aiDAPTIV+ design uses specialized SSDs to reduce the amount of HBM DRAM [&hellip;]<\/p>\n","protected":false},"author":10,"featured_media":86057,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"_et_pb_use_builder":"on","_et_pb_old_content":"<p>\"Phison is leading this campaign with its\u00a0<a href=\"https:\/\/phisonblog.com\/phison-expands-aidaptiv-gpu-memory-extension-capabilities-for-additional-platforms-to-enable-llm-training-and-improve-inferencing-on-premises\/\" target=\"_blank\" rel=\"noopener\">aiDAPTIV+ LLM Training Integrative Solution<\/a>.\u00a0 That name is pretty long, though, so I\u2019ll simply use the word aiDAPTIV+ from here on.<\/p><p>. . 
.<\/p><p>Well, the folks at Phison decided to apply this principle to AI.\u00a0 Their aiDAPTIV+ design uses specialized SSDs to reduce the amount of HBM DRAM required in an LLM training system.<\/p><p>Phison argues that the number of GPUs needed in a system is determined by the size of the DRAM the problem requires.\u00a0 Their argument is that if a GPU has 80GB of HBM, and it\u2019s running a model that needs 800GB of parameters, then you need to use ten GPUs.\"- thessdguy.com<\/p><p>Source: <a href=\"https:\/\/thessdguy.com\/using-ssds-to-reduce-ai-training-costs\/\">thessdguy.com<\/a><\/p>","_et_gb_content_width":"","inline_featured_image":false,"footnotes":""},"categories":[114],"tags":[22],"class_list":["post-85576","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-phison-in-the-news","tag-long-content"],"acf":[],"_links":{"self":[{"href":"https:\/\/phisonblog.com\/ja\/wp-json\/wp\/v2\/posts\/85576","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/phisonblog.com\/ja\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/phisonblog.com\/ja\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/phisonblog.com\/ja\/wp-json\/wp\/v2\/users\/10"}],"replies":[{"embeddable":true,"href":"https:\/\/phisonblog.com\/ja\/wp-json\/wp\/v2\/comments?post=85576"}],"version-history":[{"count":4,"href":"https:\/\/phisonblog.com\/ja\/wp-json\/wp\/v2\/posts\/85576\/revisions"}],"predecessor-version":[{"id":86066,"href":"https:\/\/phisonblog.com\/ja\/wp-json\/wp\/v2\/posts\/85576\/revisions\/86066"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/phisonblog.com\/ja\/wp-json\/wp\/v2\/media\/86057"}],"wp:attachment":[{"href":"https:\/\/phisonblog.com\/ja\/wp-json\/wp\/v2\/media?parent=85576"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/phisonblog.com\/ja\/wp-json\/wp\/v2\/categories?post=85576"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/phisonblog.com\/ja
\/wp-json\/wp\/v2\/tags?post=85576"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}