👋 Welcome to Cuterwrite's Blog


RDMA: Shared Receive Queue

This article is reprinted from Zhihu Column: 11. RDMA Shared Receive Queue, author: Savir. Through the SRQ mechanism, the IB protocol significantly reduces memory requirements on the receiving end. This article introduces the principles of SRQ and how it compares with the ordinary RQ.


RDMA: Completion Queue

This article is reprinted from Zhihu Column: 10. RDMA and Completion Queue, author: Savir. CQ and QP are interdependent: the CQ is the medium through which hardware "reports task status" back to software. This article analyzes and explains most of the CQ-related content in the protocol.


RDMA: Queue Pair

This article is reprinted from Zhihu Column: 9. RDMA Queue Pair, author: Savir. QP is the most critical concept in RDMA technology, serving as the medium through which software "issues commands" to hardware. This article analyzes and explains most of the QP-related content in the protocol.


Implementing Hugo Progressive Web App Based on Workbox

This article discusses how to use Workbox to add PWA functionality to a Hugo static website, using a Service Worker to improve loading speed and user experience. The advantages of a PWA include fast loading, offline access, push notifications, and installation to the home screen. The article introduces how to register the Service Worker and use Workbox's caching strategies, with detailed configuration steps and example code.


Ollama: From Beginner to Advanced

This article introduces the basic concepts of Ollama and its notable advantages, including its open-source and free nature, ease of use, rich model library, and low resource consumption. It then provides a detailed installation and usage guide covering different operating systems and Docker environments, as well as how to download and run models. It also explains how to deploy Ollama on HPC clusters and how to build a local code-completion assistant by integrating IDE plugins.

Published 25 posts · 60.67k words in total

Built with Hugo
Theme Stack designed by Jimmy
Modified from the v3.27.0 branch