Message-Id: <20240903150748.2179966-5-john.g.garry@oracle.com>
Date: Tue,  3 Sep 2024 15:07:48 +0000
From: John Garry <john.g.garry@...cle.com>
To: axboe@...nel.dk, song@...nel.org, yukuai3@...wei.com, kbusch@...nel.org,
        hch@....de, sagi@...mberg.me, James.Bottomley@...senPartnership.com,
        martin.petersen@...cle.com
Cc: linux-block@...r.kernel.org, linux-kernel@...r.kernel.org,
        linux-raid@...r.kernel.org, linux-nvme@...ts.infradead.org,
        linux-scsi@...r.kernel.org, John Garry <john.g.garry@...cle.com>
Subject: [PATCH RFC 4/4] md/raid0: Atomic write support

Set BLK_FEAT_ATOMIC_WRITES to enable atomic writes. All other stacked
device request queue limits should automatically be set properly. The
atomic write max bytes limit will be set from hw_max_sectors, which
is capped by the stripe width, as desired.
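
As background (not part of this patch), a minimal userspace sketch of
what this enables: an atomic write issued to the md device via
pwritev2() with RWF_ATOMIC. The device path and I/O size are
illustrative only, RWF_ATOMIC is assumed available (v6.11+ uapi
headers), and the length must fall within the device's advertised
atomic write unit min/max:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/uio.h>
#include <unistd.h>

#ifndef RWF_ATOMIC
#define RWF_ATOMIC 0x00000040	/* from uapi linux/fs.h, v6.11+ */
#endif

int main(void)
{
	struct iovec iov;
	void *buf;
	int fd;

	fd = open("/dev/md0", O_WRONLY | O_DIRECT); /* illustrative path */
	if (fd < 0)
		return 1;
	if (posix_memalign(&buf, 4096, 4096))
		return 1;
	memset(buf, 0xab, 4096);

	iov.iov_base = buf;
	iov.iov_len = 4096; /* must be within atomic write unit min/max */

	/* All-or-nothing: the kernel must reject, not split, this write */
	if (pwritev2(fd, &iov, 1, 0, RWF_ATOMIC) != 4096)
		return 1;
	close(fd);
	return 0;
}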

Atomic writes must not be split, so fail any attempt to do so.

Note that returning false from the .make_request callback results in
bio_io_error() being called for the bio, which yields BLK_STS_IOERR.
This is not suitable for atomic writes, for which BLK_STS_INVAL
should be returned instead.
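
One possible direction (hypothetical, not what this patch does) would
be for raid0_make_request() to complete the bio itself with the
desired status instead of returning false:

	if (bio->bi_opf & REQ_ATOMIC) {
		/*
		 * Fail without splitting, with the status atomic
		 * writes expect, rather than the BLK_STS_IOERR that
		 * bio_io_error() would set.
		 */
		bio->bi_status = BLK_STS_INVAL;
		bio_endio(bio);
		return true;
	}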

Signed-off-by: John Garry <john.g.garry@...cle.com>
---
 drivers/md/raid0.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
index 32d587524778..caf1ecb55d11 100644
--- a/drivers/md/raid0.c
+++ b/drivers/md/raid0.c
@@ -384,6 +384,7 @@ static int raid0_set_limits(struct mddev *mddev)
 	lim.max_write_zeroes_sectors = mddev->chunk_sectors;
 	lim.io_min = mddev->chunk_sectors << 9;
 	lim.io_opt = lim.io_min * mddev->raid_disks;
+	lim.features |= BLK_FEAT_ATOMIC_WRITES;
 	err = mddev_stack_rdev_limits(mddev, &lim, MDDEV_STACK_INTEGRITY);
 	if (err) {
 		queue_limits_cancel_update(mddev->gendisk->queue);
@@ -606,7 +607,12 @@ static bool raid0_make_request(struct mddev *mddev, struct bio *bio)
 		 : sector_div(sector, chunk_sects));
 
 	if (sectors < bio_sectors(bio)) {
-		struct bio *split = bio_split(bio, sectors, GFP_NOIO,
+		struct bio *split;
+
+		if (bio->bi_opf & REQ_ATOMIC)
+			return false;
+
+		split = bio_split(bio, sectors, GFP_NOIO,
 					      &mddev->bio_set);
 		bio_chain(split, bio);
 		raid0_map_submit_bio(mddev, bio);
-- 
2.31.1

