Message-Id: <20250605150857.4061971-5-john.g.garry@oracle.com>
Date: Thu, 5 Jun 2025 15:08:57 +0000
From: John Garry <john.g.garry@...cle.com>
To: agk@...hat.com, snitzer@...nel.org, mpatocka@...hat.com, song@...nel.org,
yukuai3@...wei.com, hch@....de, nilay@...ux.ibm.com, axboe@...nel.dk
Cc: dm-devel@...ts.linux.dev, linux-kernel@...r.kernel.org,
linux-raid@...r.kernel.org, linux-block@...r.kernel.org,
ojaswin@...ux.ibm.com, martin.petersen@...cle.com,
John Garry <john.g.garry@...cle.com>
Subject: [PATCH RFC 4/4] block: use chunk_sectors when evaluating stacked atomic write limits
The atomic write unit max is limited by any stacked device stripe size.
It is required that the atomic write unit be a power-of-2 factor of the
stripe size.
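As a minimal standalone illustration (not kernel code; the helper name and
main() below are made up for this sketch), the power-of-2 unit max is halved
until it divides the stripe size, so e.g. a 16K unit max against a 24K stripe
ends up at 8K:

#include <stdio.h>

/*
 * Sketch of the halving step: reduce a power-of-2 unit_max until it
 * evenly divides the stripe (chunk) size, both in bytes.
 */
static unsigned int fit_unit_to_stripe(unsigned int unit_max,
				       unsigned int stripe_size)
{
	while (unit_max && stripe_size % unit_max)
		unit_max /= 2;
	return unit_max;
}

int main(void)
{
	/* 16K unit max vs 24K stripe -> reduced to 8K (prints 8192) */
	printf("%u\n", fit_unit_to_stripe(16 * 1024, 24 * 1024));
	return 0;
}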
Currently we use the io_min limit to hold the stripe size, and check for
io_min <= SECTOR_SIZE when deciding whether we have a striped stacked device.
Nilay reports that this causes a problem when the physical block size is
greater than SECTOR_SIZE [0].
Furthermore, io_min may be mutated when stacking devices, which makes it
a poor candidate to hold the stripe size. One such example is when io_min
is less than the physical block size and is rounded up to it.
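To make that mutation concrete, here is a simplified, assumed illustration
(not the actual limits validation/stacking code; the struct and helper names
are invented): io_min is kept at least as large as the physical block size,
so whatever value it held can be silently rounded up:

#include <stdio.h>

/* Invented, simplified limits struct for illustration only. */
struct limits {
	unsigned int io_min;
	unsigned int physical_block_size;
};

/*
 * Assumed, simplified validation step: io_min is rounded up to the
 * physical block size, so it no longer reliably records a stripe size
 * (or the "no stripe" SECTOR_SIZE default).
 */
static void validate(struct limits *lim)
{
	if (lim->io_min < lim->physical_block_size)
		lim->io_min = lim->physical_block_size;
}

int main(void)
{
	struct limits lim = { .io_min = 512, .physical_block_size = 4096 };

	validate(&lim);
	printf("io_min=%u\n", lim.io_min);	/* prints 4096, not 512 */
	return 0;
}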
Use chunk_sectors to hold the stripe size, which is more appropriate.
[0] https://lore.kernel.org/linux-block/888f3b1d-7817-4007-b3b3-1a2ea04df771@linux.ibm.com/T/#mecca17129f72811137d3c2f1e477634e77f06781
Signed-off-by: John Garry <john.g.garry@...cle.com>
---
block/blk-settings.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/block/blk-settings.c b/block/blk-settings.c
index a000daafbfb4..5b0f1a854e81 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -594,11 +594,13 @@ static bool blk_stack_atomic_writes_boundary_head(struct queue_limits *t,
static bool blk_stack_atomic_writes_head(struct queue_limits *t,
struct queue_limits *b)
{
+ unsigned int chunk_size = t->chunk_sectors << SECTOR_SHIFT;
+
if (b->atomic_write_hw_boundary &&
!blk_stack_atomic_writes_boundary_head(t, b))
return false;
- if (t->io_min <= SECTOR_SIZE) {
+ if (!t->chunk_sectors) {
/* No chunk sectors, so use bottom device values directly */
t->atomic_write_hw_unit_max = b->atomic_write_hw_unit_max;
t->atomic_write_hw_unit_min = b->atomic_write_hw_unit_min;
@@ -617,12 +619,12 @@ static bool blk_stack_atomic_writes_head(struct queue_limits *t,
* aligned with both limits, i.e. 8K in this example.
*/
t->atomic_write_hw_unit_max = b->atomic_write_hw_unit_max;
- while (t->io_min % t->atomic_write_hw_unit_max)
+ while (chunk_size % t->atomic_write_hw_unit_max)
t->atomic_write_hw_unit_max /= 2;
t->atomic_write_hw_unit_min = min(b->atomic_write_hw_unit_min,
t->atomic_write_hw_unit_max);
- t->atomic_write_hw_max = min(b->atomic_write_hw_max, t->io_min);
+ t->atomic_write_hw_max = min(b->atomic_write_hw_max, chunk_size);
return true;
}
--
2.31.1