{"id":84504,"date":"2025-03-03T18:00:00","date_gmt":"2025-03-04T02:00:00","guid":{"rendered":"https:\/\/phisonblog.com\/?p=84504"},"modified":"2025-04-18T09:54:24","modified_gmt":"2025-04-18T16:54:24","slug":"phison-aidaptiv-hybrid-ai-solution-combines-ssd-caching-and-gpus-for-training-llm","status":"publish","type":"post","link":"https:\/\/phisonblog.com\/zh\/phison-aidaptiv-hybrid-ai-solution-combines-ssd-caching-and-gpus-for-training-llm\/","title":{"rendered":"Phison aiDAPTIV+: Hybrid AI Solution \u2013 Combining SSD Caching and GPUs for LLM Training"},"content":{"rendered":"<p>[et_pb_section fb_built=&#8221;1&#8243; _builder_version=&#8221;4.16&#8243; global_colors_info=&#8221;{}&#8221;][et_pb_row _builder_version=&#8221;4.16&#8243; background_size=&#8221;initial&#8221; background_position=&#8221;top_left&#8221; background_repeat=&#8221;repeat&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;4_4&#8243; _builder_version=&#8221;4.16&#8243; custom_padding=&#8221;|||&#8221; global_colors_info=&#8221;{}&#8221; custom_padding__hover=&#8221;|||&#8221;][et_pb_text _builder_version=&#8221;4.16&#8243; background_size=&#8221;initial&#8221; background_position=&#8221;top_left&#8221; background_repeat=&#8221;repeat&#8221; global_colors_info=&#8221;{}&#8221;]&#8221;The Hybrid AI hardware solution allows small and medium-sized businesses to train significantly larger AI models beyond previous limitations, scaling up to\u00a0<strong>Llama-3 70B<\/strong>\u00a0and\u00a0<strong>Falcon 180B<\/strong>. It features seamless integration with\u00a0<strong>GPU memory<\/strong>, utilizing\u00a0<strong>aiDAPTIVCache<\/strong>\u00a0for low-latency processing. It also includes high-endurance\u00a0<strong>AI100E SSDs with a 100 DWPD<\/strong>\u00a0rating for durability. 
Additionally, with its focus on\u00a0<strong>security, ease of deployment, and cost savings<\/strong>, aiDAPTIV+ is ideal for\u00a0<strong>AI applications<\/strong>\u00a0in industries requiring on-premises data control and scalable model training.<\/p>\n<p>This design enables organizations to train complex models like\u00a0<strong>Llama-2 7B<\/strong>\u00a0on standard off-the-shelf hardware while ensuring low-latency processing and high-endurance storage\u00a0<strong>(100 DWPD) with SLC NAND<\/strong>.<\/p>\n<p>. . .<\/p>\n<p>According to the company, the approach is similar to\u00a0<strong>OS-level swapping<\/strong>, where less frequently used data is offloaded from RAM to storage to free up memory for active processes. Likewise, aiDAPTIV+ utilizes high-speed, high-capacity\u00a0<strong>AI100E M.2 SSDs<\/strong>\u00a0as\u00a0<strong>aiDAPTIVCache<\/strong>, effectively extending GPU memory. Engineers specifically optimize these SSDs to handle the intensive demands of AI workloads, ensuring efficient data management and seamless model training.&#8221; &#8211; electronics-lab.com<\/p>\n<p>Source: <a href=\"https:\/\/www.electronics-lab.com\/phison-aidaptiv-hybrid-ai-solution-combines-ssd-caching-and-gpus-for-training-large-language-models\/\" target=\"_blank\" rel=\"noopener\">electronics-lab.com<\/a><br \/>\n[\/et_pb_text][\/et_pb_column][\/et_pb_row][\/et_pb_section]<\/p>\n","protected":false},"excerpt":{"rendered":"<p>&#8220;The Hybrid AI hardware solution allows small and medium-sized businesses to train significantly larger AI models beyond previous limitations, scaling up to\u00a0Llama-3 70B\u00a0and\u00a0Falcon 180B. It features seamless integration with\u00a0GPU memory, utilizing\u00a0aiDAPTIVCache\u00a0for low-latency processing. It also includes high-endurance\u00a0AI100E SSDs with a 100 DWPD\u00a0rating for durability. 
Additionally, with its focus on\u00a0security, ease of deployment, and cost savings, [&hellip;]<\/p>\n","protected":false},"author":10,"featured_media":84505,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"_et_pb_use_builder":"on","_et_pb_old_content":"<p>\"The Hybrid AI hardware solution allows small and medium-sized businesses to train significantly larger AI models beyond previous limitations, scaling up to\u00a0<strong>Llama-3 70B<\/strong>\u00a0and\u00a0<strong>Falcon 180B<\/strong>. It features seamless integration with\u00a0<strong>GPU memory<\/strong>, utilizing\u00a0<strong>aiDAPTIVCache<\/strong>\u00a0for low-latency processing. It also includes high-endurance\u00a0<strong>AI100E SSDs with a 100 DWPD<\/strong>\u00a0rating for durability. Additionally, with its focus on\u00a0<strong>security, ease of deployment, and cost savings<\/strong>, aiDAPTIV+ is ideal for\u00a0<strong>AI applications<\/strong>\u00a0in industries requiring on-premises data control and scalable model training.<\/p><p>This design enables organizations to train complex models like\u00a0<strong>Llama-2 7B<\/strong>\u00a0on standard off-the-shelf hardware while ensuring low-latency processing and high-endurance storage\u00a0<strong>(100 DWPD) with SLC NAND<\/strong>.<\/p><p>. . .<\/p><p>According to the company, the approach is similar to\u00a0<strong>OS-level swapping<\/strong>, where less frequently used data is offloaded from RAM to storage to free up memory for active processes. Likewise, aiDAPTIV+ utilizes high-speed, high-capacity\u00a0<strong>AI100E M.2 SSDs<\/strong>\u00a0as\u00a0<strong>aiDAPTIVCache<\/strong>, effectively extending GPU memory. 
Engineers specifically optimize these SSDs to handle the intensive demands of AI workloads, ensuring efficient data management and seamless model training.\" - electronics-lab.com<\/p><p>Source: <a href=\"https:\/\/www.electronics-lab.com\/phison-aidaptiv-hybrid-ai-solution-combines-ssd-caching-and-gpus-for-training-large-language-models\/\">electronics-lab.com<\/a><\/p>","_et_gb_content_width":"","inline_featured_image":false,"footnotes":""},"categories":[114],"tags":[22],"class_list":["post-84504","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-phison-in-the-news","tag-long-content"],"acf":[],"_links":{"self":[{"href":"https:\/\/phisonblog.com\/zh\/wp-json\/wp\/v2\/posts\/84504","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/phisonblog.com\/zh\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/phisonblog.com\/zh\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/phisonblog.com\/zh\/wp-json\/wp\/v2\/users\/10"}],"replies":[{"embeddable":true,"href":"https:\/\/phisonblog.com\/zh\/wp-json\/wp\/v2\/comments?post=84504"}],"version-history":[{"count":3,"href":"https:\/\/phisonblog.com\/zh\/wp-json\/wp\/v2\/posts\/84504\/revisions"}],"predecessor-version":[{"id":85289,"href":"https:\/\/phisonblog.com\/zh\/wp-json\/wp\/v2\/posts\/84504\/revisions\/85289"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/phisonblog.com\/zh\/wp-json\/wp\/v2\/media\/84505"}],"wp:attachment":[{"href":"https:\/\/phisonblog.com\/zh\/wp-json\/wp\/v2\/media?parent=84504"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/phisonblog.com\/zh\/wp-json\/wp\/v2\/categories?post=84504"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/phisonblog.com\/zh\/wp-json\/wp\/v2\/tags?post=84504"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}