Reasoning models are a new class of large language models (LLMs) designed to tackle highly complex tasks by employing chain-of-thought (CoT) reasoning, with the tradeoff of taking longer to respond. DeepSeek R1 is a recently released frontier "reasoning" model that has been distilled into highly capable smaller models. Deploying these DeepSeek R1 distilled models on AMD Ryzen™ AI processors and Radeon™ graphics cards is straightforward and available now through LM Studio.
Reasoning models add a "thinking" stage before the final output, which you can inspect by expanding the "thinking" window before the model gives its final answer. Unlike conventional LLMs, which generate the response in a single pass, CoT LLMs perform extensive reasoning before answering. The assumptions and self-reflection the LLM works through are visible to the user, and this improves the model's reasoning and analytical capability, albeit at the cost of a significantly longer time to the first token of the final answer.
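In their raw output, DeepSeek R1 distills emit this chain of thought between `<think>` and `</think>` tags, and the final answer follows the closing tag; that is what LM Studio renders as the expandable "thinking" window. A minimal sketch of separating the two in client code (the sample completion text is purely illustrative):

```python
import re

def split_reasoning(completion: str) -> tuple[str, str]:
    """Split a raw completion into (chain_of_thought, final_answer).

    DeepSeek R1 distills wrap their reasoning in <think>...</think>;
    everything after the closing tag is the final response.
    """
    match = re.search(r"<think>(.*?)</think>", completion, re.DOTALL)
    if match is None:
        # No reasoning block present; treat the whole completion as the answer.
        return "", completion.strip()
    reasoning = match.group(1).strip()
    answer = completion[match.end():].strip()
    return reasoning, answer

# Illustrative raw completion, not real model output.
raw = "<think>2 + 2: add the units digits.</think>The answer is 4."
thought, final = split_reasoning(raw)
print(final)  # The answer is 4.
```

This lets a UI hide the (often thousands of tokens of) reasoning behind a collapsible panel while showing only the final answer by default.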
A reasoning model may first spend thousands of tokens (and you can view this chain of thought!) analyzing the problem before giving a final response. This makes these models excellent at complex problem-solving tasks involving math and science, attacking a problem from all angles before committing to a response. Depending on your AMD hardware, each of these models will offer state-of-the-art reasoning capability on your AMD Ryzen™ AI processor or Radeon™ graphics card.
How to run DeepSeek R1 Distilled “Reasoning” Models on AMD Ryzen™ AI and Radeon™ Graphics Cards
Follow these simple steps to get up and running with DeepSeek R1 distillations in just a few minutes (depending on your download speed).
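Once a distill is downloaded and loaded, LM Studio can also serve it over an OpenAI-compatible local server (by default at `http://localhost:1234/v1`). A sketch of the request body you would POST to its `/chat/completions` endpoint; the model identifier below is a placeholder, so substitute the name shown in your LM Studio model list:

```python
import json

# Hypothetical model identifier; use the name LM Studio shows for your download.
payload = {
    "model": "deepseek-r1-distill-qwen-7b",
    "messages": [
        {"role": "user", "content": "Explain chain-of-thought reasoning in one sentence."}
    ],
    "temperature": 0.6,
}

# POST this JSON (Content-Type: application/json) to the local server, e.g.:
#   http://localhost:1234/v1/chat/completions
body = json.dumps(payload)
print(body)
```

Because the endpoint follows the OpenAI chat-completions shape, existing OpenAI client libraries can usually be pointed at the local base URL without other changes.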