Message-ID: <20250604020850.1304633-2-yi.zhang@huaweicloud.com>
Date: Wed, 4 Jun 2025 10:08:41 +0800
From: Zhang Yi <yi.zhang@...weicloud.com>
To: linux-fsdevel@...r.kernel.org,
linux-ext4@...r.kernel.org,
linux-block@...r.kernel.org,
dm-devel@...ts.linux.dev,
linux-nvme@...ts.infradead.org,
linux-scsi@...r.kernel.org
Cc: linux-xfs@...r.kernel.org,
linux-kernel@...r.kernel.org,
hch@....de,
tytso@....edu,
djwong@...nel.org,
john.g.garry@...cle.com,
bmarzins@...hat.com,
chaitanyak@...dia.com,
shinichiro.kawasaki@....com,
brauner@...nel.org,
martin.petersen@...cle.com,
yi.zhang@...wei.com,
yi.zhang@...weicloud.com,
chengzhihao1@...wei.com,
yukuai3@...wei.com,
yangerkun@...wei.com
Subject: [PATCH 01/10] block: introduce BLK_FEAT_WRITE_ZEROES_UNMAP to queue limits features
From: Zhang Yi <yi.zhang@...wei.com>
Currently, disks primarily implement the write zeroes command (aka
REQ_OP_WRITE_ZEROES) through two mechanisms: the first involves
physically writing zeros to the disk media (e.g., HDDs), while the
second performs an unmap operation on the logical blocks, effectively
putting them into a deallocated state (e.g., SSDs). The first method is
generally slow, while the second method is typically very fast.
For example, on certain NVMe SSDs that support NVME_NS_DEAC, submitting
REQ_OP_WRITE_ZEROES requests with the NVME_WZ_DEAC bit can accelerate
the write zeroes operation by placing disk blocks into a deallocated
state, which opportunistically avoids writing zeroes to media while
still guaranteeing that subsequent reads from the specified block range
will return zeroed data. This is a best-effort optimization, not a
mandatory requirement; some devices may partially fall back to writing
physical zeroes due to factors such as misalignment or being asked to
clear a block range smaller than the device's internal allocation unit.
Therefore, the speed of this operation is not guaranteed.
It is difficult to determine whether a storage device supports the
unmap write zeroes operation; querying
bdev_limits(bdev)->max_write_zeroes_sectors alone is not sufficient.
First, add a new queue limit feature, BLK_FEAT_WRITE_ZEROES_UNMAP, to
indicate whether a device supports this unmap write zeroes operation.
Then, add a new counterpart flag, BLK_FLAG_WRITE_ZEROES_UNMAP_DISABLED,
and a sysfs entry, which allow users to disable this operation if it is
very slow on some special devices.
Finally, for stacked devices, BLK_FEAT_WRITE_ZEROES_UNMAP is retained
only when both the stacking driver and all underlying devices support
it.
Thanks to Martin K. Petersen for optimizing the documentation of the
write_zeroes_unmap sysfs interface.
Signed-off-by: Zhang Yi <yi.zhang@...wei.com>
---
Documentation/ABI/stable/sysfs-block | 20 ++++++++++++++++++++
block/blk-settings.c | 6 ++++++
block/blk-sysfs.c | 25 +++++++++++++++++++++++++
include/linux/blkdev.h | 18 ++++++++++++++++++
4 files changed, 69 insertions(+)
diff --git a/Documentation/ABI/stable/sysfs-block b/Documentation/ABI/stable/sysfs-block
index 4ba771b56b3b..8e7d513286c4 100644
--- a/Documentation/ABI/stable/sysfs-block
+++ b/Documentation/ABI/stable/sysfs-block
@@ -778,6 +778,26 @@ Description:
0, write zeroes is not supported by the device.
+What: /sys/block/<disk>/queue/write_zeroes_unmap
+Date: January 2025
+Contact: Zhang Yi <yi.zhang@...wei.com>
+Description:
+ [RW] When read, this file will display whether the device has
+ enabled the unmap write zeroes operation. This operation
+ indicates that the device supports zeroing data in a specified
+ block range without incurring the cost of physically writing
+ zeroes to media for each individual block. It implements a
+ zeroing operation which opportunistically avoids writing zeroes
+ to media while still guaranteeing that subsequent reads from the
+ specified block range will return zeroed data. This operation is
+ a best-effort optimization; a device may fall back to physically
+ writing zeroes to media due to other factors such as
+ misalignment or being asked to clear a block range smaller than
+ the device's internal allocation unit. So the speed of this
+ operation is not guaranteed. Writing a value of '0' to this file
+ disables this operation.
+
+
What: /sys/block/<disk>/queue/zone_append_max_bytes
Date: May 2020
Contact: linux-block@...r.kernel.org
diff --git a/block/blk-settings.c b/block/blk-settings.c
index a000daafbfb4..de99763fd668 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -698,6 +698,8 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
t->features &= ~BLK_FEAT_NOWAIT;
if (!(b->features & BLK_FEAT_POLL))
t->features &= ~BLK_FEAT_POLL;
+ if (!(b->features & BLK_FEAT_WRITE_ZEROES_UNMAP))
+ t->features &= ~BLK_FEAT_WRITE_ZEROES_UNMAP;
t->flags |= (b->flags & BLK_FLAG_MISALIGNED);
@@ -820,6 +822,10 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
t->zone_write_granularity = 0;
t->max_zone_append_sectors = 0;
}
+
+ if (!t->max_write_zeroes_sectors)
+ t->features &= ~BLK_FEAT_WRITE_ZEROES_UNMAP;
+
blk_stack_atomic_writes_limits(t, b, start);
return ret;
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index b2b9b89d6967..e918b2c93aed 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -457,6 +457,29 @@ static int queue_wc_store(struct gendisk *disk, const char *page,
return 0;
}
+static ssize_t queue_write_zeroes_unmap_show(struct gendisk *disk, char *page)
+{
+ return sysfs_emit(page, "%u\n",
+ blk_queue_write_zeroes_unmap(disk->queue));
+}
+
+static int queue_write_zeroes_unmap_store(struct gendisk *disk,
+ const char *page, size_t count, struct queue_limits *lim)
+{
+ unsigned long val;
+ ssize_t ret;
+
+ ret = queue_var_store(&val, page, count);
+ if (ret < 0)
+ return ret;
+
+ if (val)
+ lim->flags &= ~BLK_FLAG_WRITE_ZEROES_UNMAP_DISABLED;
+ else
+ lim->flags |= BLK_FLAG_WRITE_ZEROES_UNMAP_DISABLED;
+ return 0;
+}
+
#define QUEUE_RO_ENTRY(_prefix, _name) \
static struct queue_sysfs_entry _prefix##_entry = { \
.attr = { .name = _name, .mode = 0444 }, \
@@ -514,6 +537,7 @@ QUEUE_LIM_RO_ENTRY(queue_atomic_write_unit_min, "atomic_write_unit_min_bytes");
QUEUE_RO_ENTRY(queue_write_same_max, "write_same_max_bytes");
QUEUE_LIM_RO_ENTRY(queue_max_write_zeroes_sectors, "write_zeroes_max_bytes");
+QUEUE_LIM_RW_ENTRY(queue_write_zeroes_unmap, "write_zeroes_unmap");
QUEUE_LIM_RO_ENTRY(queue_max_zone_append_sectors, "zone_append_max_bytes");
QUEUE_LIM_RO_ENTRY(queue_zone_write_granularity, "zone_write_granularity");
@@ -662,6 +686,7 @@ static struct attribute *queue_attrs[] = {
&queue_atomic_write_unit_min_entry.attr,
&queue_atomic_write_unit_max_entry.attr,
&queue_max_write_zeroes_sectors_entry.attr,
+ &queue_write_zeroes_unmap_entry.attr,
&queue_max_zone_append_sectors_entry.attr,
&queue_zone_write_granularity_entry.attr,
&queue_rotational_entry.attr,
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 332b56f323d9..6f1cf97b1f00 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -340,6 +340,9 @@ typedef unsigned int __bitwise blk_features_t;
#define BLK_FEAT_ATOMIC_WRITES \
((__force blk_features_t)(1u << 16))
+/* supports unmap write zeroes command */
+#define BLK_FEAT_WRITE_ZEROES_UNMAP ((__force blk_features_t)(1u << 17))
+
/*
* Flags automatically inherited when stacking limits.
*/
@@ -360,6 +363,10 @@ typedef unsigned int __bitwise blk_flags_t;
/* passthrough command IO accounting */
#define BLK_FLAG_IOSTATS_PASSTHROUGH ((__force blk_flags_t)(1u << 2))
+/* disable the unmap write zeroes operation */
+#define BLK_FLAG_WRITE_ZEROES_UNMAP_DISABLED \
+ ((__force blk_flags_t)(1u << 3))
+
struct queue_limits {
blk_features_t features;
blk_flags_t flags;
@@ -1378,6 +1385,17 @@ static inline unsigned int bdev_write_zeroes_sectors(struct block_device *bdev)
return bdev_limits(bdev)->max_write_zeroes_sectors;
}
+static inline bool blk_queue_write_zeroes_unmap(struct request_queue *q)
+{
+ return (q->limits.features & BLK_FEAT_WRITE_ZEROES_UNMAP) &&
+ !(q->limits.flags & BLK_FLAG_WRITE_ZEROES_UNMAP_DISABLED);
+}
+
+static inline bool bdev_write_zeroes_unmap(struct block_device *bdev)
+{
+ return blk_queue_write_zeroes_unmap(bdev_get_queue(bdev));
+}
+
static inline bool bdev_nonrot(struct block_device *bdev)
{
return blk_queue_nonrot(bdev_get_queue(bdev));
--
2.46.1