Message-Id: <20250106124119.1318428-4-john.g.garry@oracle.com>
Date: Mon, 6 Jan 2025 12:41:17 +0000
From: John Garry <john.g.garry@...cle.com>
To: axboe@...nel.dk, agk@...hat.com, snitzer@...nel.org, hch@....de
Cc: mpatocka@...hat.com, martin.petersen@...cle.com,
linux-block@...r.kernel.org, dm-devel@...ts.linux.dev,
linux-kernel@...r.kernel.org, John Garry <john.g.garry@...cle.com>
Subject: [PATCH RFC 3/5] dm-table: Atomic writes support
Support stacking atomic write limits for DM devices.
The pre-existing code in blk_stack_atomic_writes_limits() already handles
computing the aggregate limits from the bottom devices.
Feature flag DM_TARGET_ATOMIC_WRITES is introduced so that atomic writes
can be enabled selectively per personality. This ensures that atomic
writes are only enabled once verified to work properly for a specific
personality. In addition, atomic writes may simply not make sense for
some personalities (so this flag also helps there).
When testing for bottom device atomic writes support, only the bdev
queue limits are tested. There is no need to test the bottom bdev
start sector (as bdev_can_atomic_write() does), since this is
already checked in the dm_calculate_queue_limits() -> ..
iterate_devices() -> dm_set_device_limits() -> blk_stack_limits()
callchain.
Signed-off-by: John Garry <john.g.garry@...cle.com>
---
drivers/md/dm-table.c | 12 ++++++++++++
include/linux/device-mapper.h | 3 +++
2 files changed, 15 insertions(+)
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index bd8b796ae683..1e0b7e364546 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -1593,6 +1593,7 @@ int dm_calculate_queue_limits(struct dm_table *t,
struct queue_limits ti_limits;
unsigned int zone_sectors = 0;
bool zoned = false;
+ bool atomic_writes = true;
dm_set_stacking_limits(limits);
@@ -1602,8 +1603,12 @@ int dm_calculate_queue_limits(struct dm_table *t,
if (!dm_target_passes_integrity(ti->type))
t->integrity_supported = false;
+ if (!dm_target_supports_atomic_writes(ti->type))
+ atomic_writes = false;
}
+ if (atomic_writes)
+ limits->features |= BLK_FEAT_ATOMIC_WRITES_STACKED;
for (unsigned int i = 0; i < t->num_targets; i++) {
struct dm_target *ti = dm_table_get_target(t, i);
@@ -1616,6 +1621,13 @@ int dm_calculate_queue_limits(struct dm_table *t,
goto combine_limits;
}
+ /*
+ * dm_set_device_limits() -> blk_stack_limits() considers
+ * ti_limits as 'top', so set BLK_FEAT_ATOMIC_WRITES_STACKED
+ * here also.
+ */
+ if (atomic_writes)
+ ti_limits.features |= BLK_FEAT_ATOMIC_WRITES_STACKED;
/*
* Combine queue limits of all the devices this target uses.
*/
diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h
index 8321f65897f3..bcc6d7b69470 100644
--- a/include/linux/device-mapper.h
+++ b/include/linux/device-mapper.h
@@ -299,6 +299,9 @@ struct target_type {
#define dm_target_supports_mixed_zoned_model(type) (false)
#endif
+#define DM_TARGET_ATOMIC_WRITES 0x00000400
+#define dm_target_supports_atomic_writes(type) ((type)->features & DM_TARGET_ATOMIC_WRITES)
+
struct dm_target {
struct dm_table *table;
struct target_type *type;
--
2.31.1