{"id":86180,"date":"2025-05-31T18:00:29","date_gmt":"2025-06-01T01:00:29","guid":{"rendered":"https:\/\/phisonblog.com\/?p=86180"},"modified":"2025-07-30T11:56:08","modified_gmt":"2025-07-30T18:56:08","slug":"phison-research-note-phison-aidaptiv-is-cost-effective-with-large-models-to-enable-gen-ai","status":"publish","type":"post","link":"https:\/\/phisonblog.com\/ko\/phison-research-note-phison-aidaptiv-is-cost-effective-with-large-models-to-enable-gen-ai\/","title":{"rendered":"PHISON \u2013 Research Note: Phison aiDAPTIV+ is Cost-Effective With Large Models to Enable Gen-AI"},"content":{"rendered":"<p>&#8220;Phison aiDAPTIV+ is Cost-Effective With Large Models to Enable Generative AI<\/p>\n<p class=\"ai-optimize-13\"><strong>Analyst Take:<\/strong> Using a fast SSD as a tier to expand usable RAM is not a new concept; most operating systems have swap files to enable more system memory than the installed RAM. Phison aiDAPTIV+ is a set of hardware and software that enables the same concept for GPU memory, making AI cost-effective with large models. 
The high-bandwidth memory in a GPU is a significant part of the GPU\u2019s price. As AI models become larger, the need for more memory follows. The fastest option is still to keep the entire model and data in GPU memory, which requires high-end GPUs for large models and often involves clustering GPUs to accommodate the whole model. Phison aiDAPTIV+ is not intended to address use cases that require the highest possible training throughput or the lowest inference latency, where the performance justifies the cost of the GPUs. Phison aiDAPTIV+ will deliver cost-effective results in use cases where lower throughput or longer response latency is acceptable for a significant cost saving. For example, for a 66% cost reduction, a weekly task of fine-tuning a model might be completed overnight rather than in two hours. The aiDAPTIV+ solution comprises the aiDAPTIVCache hardware and aiDAPTIVLink drivers, with the aiDAPTIVProSuite as an optional AI software development environment.&#8221; - futurumgroup.com<\/p>\n<p>Source: <a href=\"https:\/\/futurumgroup.com\/insights\/phison-aidaptiv-is-cost-effective-with-large-models-to-enable-generative-ai\/\" target=\"_blank\" rel=\"noopener\">futurumgroup.com<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>&#8220;Phison aiDAPTIV+ is Cost-Effective With Large Models to Enable Generative AI Analyst Take: Using a fast SSD as a tier to expand usable RAM is not a new concept; most operating systems have swap files to enable more system memory than the installed RAM. 
Phison aiDAPTIV+ is a set of hardware and software that enables [&hellip;]<\/p>\n","protected":false},"author":10,"featured_media":86181,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"_et_pb_use_builder":"on","_et_pb_old_content":"<p>\"What is Covered in this Article:<\/p><ul><li class=\"ai-optimize-8\">Phison aiDAPTIV+ enables cost-effective running of large AI models on modest GPUs<\/li><li class=\"ai-optimize-9\">The combination of the aiDAPTIVCache hardware and aiDAPTIVLink drivers runs unmodified PyTorch applications<\/li><li class=\"ai-optimize-10\">Training is vital to the successful adoption of AI into an organization<\/li><\/ul><p class=\"ai-optimize-11\"><strong>The Event \u2013 Major Themes & Vendor Moves:<\/strong>\u00a0AI Infrastructure Field Day is a semiannual, invitation-only event held in Santa Clara, organized by Tech Field Day. Independent industry experts join presenting companies to learn about product innovations. The event is live-streamed, and then videos are published on the Tech Field Day YouTube channel.<\/p><p class=\"ai-optimize-11\">Phison aiDAPTIV+ is Cost-Effective With Large Models to Enable Generative AI<\/p><p class=\"ai-optimize-13\"><strong>Analyst Take:<\/strong> Using a fast SSD as a tier to expand usable RAM is not a new concept; most operating systems have swap files to enable more system memory than the installed RAM. Phison aiDAPTIV+ is a set of hardware and software that enables the same concept for GPU memory, making AI cost-effective with large models. The high-bandwidth memory in a GPU is a significant part of the GPU\u2019s price. As AI models become larger, the need for more memory follows. The fastest option is still to keep the entire model and data in GPU memory, which requires high-end GPUs for large models and often involves clustering GPUs to accommodate the whole model. 
Phison aiDAPTIV+ is not intended to address use cases that require the highest possible training throughput or the lowest inference latency, where the performance justifies the cost of the GPUs. Phison aiDAPTIV+ will deliver cost-effective results in use cases where lower throughput or longer response latency is acceptable for a significant cost saving. For example, for a 66% cost reduction, a weekly task of fine-tuning a model might be completed overnight rather than in two hours. The aiDAPTIV+ solution comprises the aiDAPTIVCache hardware and aiDAPTIVLink drivers, with the aiDAPTIVProSuite as an optional AI software development environment.\" - futurumgroup.com<\/p><p>Source: <a href=\"https:\/\/futurumgroup.com\/insights\/phison-aidaptiv-is-cost-effective-with-large-models-to-enable-generative-ai\/\">futurumgroup.com<\/a><\/p>","_et_gb_content_width":"","inline_featured_image":false,"footnotes":""},"categories":[114],"tags":[22],"class_list":["post-86180","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-phison-in-the-news","tag-long-content"],"acf":[],"_links":{"self":[{"href":"https:\/\/phisonblog.com\/ko\/wp-json\/wp\/v2\/posts\/86180","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/phisonblog.com\/ko\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/phisonblog.com\/ko\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/phisonblog.com\/ko\/wp-json\/wp\/v2\/users\/10"}],"replies":[{"embeddable":true,"href":"https:\/\/phisonblog.com\/ko\/wp-json\/wp\/v2\/comments?post=86180"}],"version-history":[{"count":6,"href":"https:\/\/phisonblog.com\/ko\/wp-json\/wp\/v2\/posts\/86180\/revisions"}],"predecessor-version":[{"id":86840,"href":"https:\/\/phisonblog.com\/ko\/wp-json\/wp\/v2\/posts\/86180\/revisions\/86840"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/phisonblog.com\/ko\/wp-json\/wp\/v2\/media\/86181"}],"wp:attachment":[{"href":"https:\/\/phisonblog.com\/
ko\/wp-json\/wp\/v2\/media?parent=86180"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/phisonblog.com\/ko\/wp-json\/wp\/v2\/categories?post=86180"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/phisonblog.com\/ko\/wp-json\/wp\/v2\/tags?post=86180"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}