Message-ID: <CAGHCLaREA4xzP7CkJrpqu4C=PKw_3GppOUPWZKn0Fxom_3Z9Qw@mail.gmail.com>
Date: Sat, 24 Jan 2026 09:10:54 -0800
From: Cong Wang <cwang@...tikernel.io>
To: linux-fsdevel@...r.kernel.org
Cc: linux-kernel@...r.kernel.org, Cong Wang <xiyou.wangcong@...il.com>
Subject: [ANNOUNCE] DAXFS: A zero-copy, dmabuf-friendly filesystem for shared memory
Hello,
I would like to introduce DAXFS, a simple read-only filesystem
designed to operate directly on shared physical memory via DAX
(Direct Access).
Unlike ramfs or tmpfs, which operate within the kernel’s page cache
and result in fragmented, per-instance memory allocation, DAXFS
provides a mechanism for zero-copy reads from contiguous memory
regions. It bypasses the traditional block I/O stack, buffer heads,
and page cache entirely.
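To give a rough idea of what that read path looks like, here is a
minimal sketch of a ->read_iter that resolves to direct loads from a
memremap()'d image. This is illustrative only, not the actual DAXFS
code; daxfs_sb_info, ->virt_base and daxfs_data_offset() are made-up
names standing in for the module's internals:

#include <linux/fs.h>
#include <linux/uio.h>

static ssize_t daxfs_read_iter(struct kiocb *iocb, struct iov_iter *to)
{
	struct inode *inode = file_inode(iocb->ki_filp);
	struct daxfs_sb_info *sbi = inode->i_sb->s_fs_info;
	loff_t isize = i_size_read(inode);
	size_t len;

	if (iocb->ki_pos >= isize)
		return 0;
	len = min_t(loff_t, iov_iter_count(to), isize - iocb->ki_pos);

	/* No block I/O, no buffer heads, no page cache: copy straight
	 * out of the mapped image into the caller's buffer. */
	len = copy_to_iter(sbi->virt_base + daxfs_data_offset(inode) +
			   iocb->ki_pos, len, to);
	iocb->ki_pos += len;
	return len;
}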
Key Features
- Zero-Copy Efficiency: File reads resolve to direct memory loads,
eliminating page cache duplication and CPU-driven copies.
- True Physical Sharing: By mapping a contiguous physical address or a
dma-buf, multiple kernel instances or containers can share the same
physical pages (a rough sketch of the mapping step follows this list).
- Hardware Integration: Supports mounting memory exported by GPUs,
FPGAs, or CXL devices via the dma-buf API.
- Simplicity: Uses a self-contained, read-only image format with no
runtime allocation or complex device management.
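For the "True Physical Sharing" point above, the mapping step itself
can stay very small. A hedged sketch, assuming the physical base and
size arrive as mount options and reusing the hypothetical
daxfs_sb_info from the previous snippet (dma-buf import and most
error handling omitted):

#include <linux/io.h>

static int daxfs_map_region(struct daxfs_sb_info *sbi,
			    phys_addr_t base, size_t size)
{
	/* Write-back cached mapping of the shared physical range;
	 * every kernel instance mounting the same range sees the
	 * same pages, with no per-instance copies. */
	sbi->virt_base = memremap(base, size, MEMREMAP_WB);
	if (!sbi->virt_base)
		return -ENOMEM;

	sbi->size = size;
	return 0;
}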
Primary Use Cases
- Multikernel Environments: Sharing a common Docker image across
independent kernel instances via shared memory.
- CXL Memory Pooling: Accessing read-only data across multiple hosts
without network I/O.
- Container Rootfs Sharing: Using a single DAXFS base image for
multiple containers (via OverlayFS) to save physical RAM.
- Accelerator Data: Zero-copy access to model weights or lookup tables
stored in device memory.
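As a concrete illustration of the last use case, a consumer on a
mounted image could look like the following userspace snippet
(assuming the filesystem supports mmap() of its files; the mount
point and file name are invented for the example):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/mnt/daxfs/weights.bin", O_RDONLY);
	struct stat st;
	const float *weights;

	if (fd < 0 || fstat(fd, &st) < 0)
		return 1;

	/* The mapped pages are the shared physical pages backing the
	 * image, not per-process page cache copies. */
	weights = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
	if (weights == MAP_FAILED)
		return 1;

	printf("first weight: %f\n", weights[0]);
	munmap((void *)weights, st.st_size);
	close(fd);
	return 0;
}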
The source includes a kernel module and a mkdaxfs user-space tool for
image creation. It is available here:
https://github.com/multikernel/daxfs
I am looking forward to your feedback on the architecture and its
potential integration into the upstream Linux kernel.
Best regards,
Cong Wang