Message-Id: <20190111131058.120784730@linuxfoundation.org>
Date:   Fri, 11 Jan 2019 15:08:41 +0100
From:   Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To:     linux-kernel@...r.kernel.org
Cc:     Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        stable@...r.kernel.org, Ming Lin <ming.l@....samsung.com>,
        Mike Snitzer <snitzer@...hat.com>,
        Christoph Hellwig <hch@....de>, Xiao Ni <xni@...hat.com>,
        Mariusz Dabrowski <mariusz.dabrowski@...el.com>,
        Ming Lei <ming.lei@...hat.com>, Jens Axboe <axboe@...nel.dk>,
        Sudip Mukherjee <sudipm.mukherjee@...il.com>
Subject: [PATCH 4.4 72/88] block: don't deal with discard limit in blkdev_issue_discard()

4.4-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Ming Lei <ming.lei@...hat.com>

commit 744889b7cbb56a64f957e65ade7cb65fe3f35714 upstream.

blk_queue_split() already respects this limit via bio splitting, so
there is no need to do that in blkdev_issue_discard(); this lets it
follow the normal bio submit pattern (bio_add_page() & submit_bio()).
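
(Illustration only, not part of the patch: a minimal userspace model of
the splitting that blk_queue_split() already performs for discard bios,
with made-up queue limits and granularity/alignment expressed directly
in sectors.  It only sketches the idea; the real code lives in
blk-merge.c.)

#include <stdio.h>

typedef unsigned long long sector_t;

struct fake_limits {
	unsigned int max_discard_sectors;
	unsigned int granularity;	/* discard granularity, in sectors */
	unsigned int alignment;		/* discard alignment, in sectors */
};

/* How many sectors of [sector, sector + nr_sects) one chunk may cover. */
static sector_t model_discard_split(const struct fake_limits *l,
				    sector_t sector, sector_t nr_sects)
{
	unsigned int granularity = l->granularity ? l->granularity : 1;
	unsigned int alignment = l->alignment % granularity;
	sector_t max = l->max_discard_sectors;
	sector_t split, tmp;

	/* Honour the hardware limit, rounded down to the granularity. */
	max -= max % granularity;
	if (!max || nr_sects <= max)
		return nr_sects;

	/* Stop the chunk at the previous granularity-aligned sector. */
	split = max;
	tmp = (sector + split - alignment) % granularity;
	if (split > tmp)
		split -= tmp;
	return split;
}

int main(void)
{
	struct fake_limits l = {
		.max_discard_sectors = 8192,	/* 4 MiB in 512-byte sectors */
		.granularity = 2048,		/* 1 MiB */
		.alignment = 0,
	};
	sector_t sector = 0, nr_sects = 20000;

	while (nr_sects) {
		sector_t chunk = model_discard_split(&l, sector, nr_sects);

		printf("discard [%llu, %llu)\n", sector, sector + chunk);
		sector += chunk;
		nr_sects -= chunk;
	}
	return 0;
}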

More importantly, this patch fixes an issue introduced by commit
a22c4d7e34402cc ("block: re-add discard_granularity and alignment
checks"), in which a zero-length discard bio may be generated when the
alignment is zero.
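
(Illustration only, not part of the patch: rerunning the arithmetic
removed below in userspace, with made-up queue limits where
max_discard_sectors is smaller than the discard granularity, shows
req_sects collapsing to 0, i.e. a zero-length discard bio.)

#include <stdio.h>

typedef unsigned long long sector_t;

int main(void)
{
	/* Made-up example limits, in 512-byte sectors. */
	unsigned int granularity = 2048;		/* 1 MiB */
	unsigned int max_discard_sectors = 1024;	/* 512 KiB */
	unsigned int alignment = 0;

	sector_t sector = 0, nr_sects = 4096;
	sector_t req_sects, end_sect, tmp;

	/* Same steps as the code this patch removes. */
	req_sects = nr_sects < max_discard_sectors ? nr_sects
						   : max_discard_sectors;
	end_sect = sector + req_sects;
	tmp = end_sect;
	if (req_sects < nr_sects && tmp % granularity != alignment) {
		end_sect = end_sect - alignment;
		end_sect /= granularity;	/* sector_div() analogue */
		end_sect = end_sect * granularity + alignment;
		req_sects = end_sect - sector;
	}

	/* Prints req_sects = 0: the discard bio would cover no sectors. */
	printf("req_sects = %llu\n", req_sects);
	return 0;
}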

Fixes: a22c4d7e34402ccdf3 ("block: re-add discard_granularity and alignment checks")
Cc: stable@...r.kernel.org
Cc: Ming Lin <ming.l@....samsung.com>
Cc: Mike Snitzer <snitzer@...hat.com>
Cc: Christoph Hellwig <hch@....de>
Cc: Xiao Ni <xni@...hat.com>
Tested-by: Mariusz Dabrowski <mariusz.dabrowski@...el.com>
Signed-off-by: Ming Lei <ming.lei@...hat.com>
Signed-off-by: Jens Axboe <axboe@...nel.dk>
Signed-off-by: Sudip Mukherjee <sudipm.mukherjee@...il.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
 block/blk-lib.c |   28 ++--------------------------
 1 file changed, 2 insertions(+), 26 deletions(-)

--- a/block/blk-lib.c
+++ b/block/blk-lib.c
@@ -43,8 +43,6 @@ int blkdev_issue_discard(struct block_de
 	DECLARE_COMPLETION_ONSTACK(wait);
 	struct request_queue *q = bdev_get_queue(bdev);
 	int type = REQ_WRITE | REQ_DISCARD;
-	unsigned int granularity;
-	int alignment;
 	struct bio_batch bb;
 	struct bio *bio;
 	int ret = 0;
@@ -56,10 +54,6 @@ int blkdev_issue_discard(struct block_de
 	if (!blk_queue_discard(q))
 		return -EOPNOTSUPP;
 
-	/* Zero-sector (unknown) and one-sector granularities are the same.  */
-	granularity = max(q->limits.discard_granularity >> 9, 1U);
-	alignment = (bdev_discard_alignment(bdev) >> 9) % granularity;
-
 	if (flags & BLKDEV_DISCARD_SECURE) {
 		if (!blk_queue_secdiscard(q))
 			return -EOPNOTSUPP;
@@ -72,8 +66,8 @@ int blkdev_issue_discard(struct block_de
 
 	blk_start_plug(&plug);
 	while (nr_sects) {
-		unsigned int req_sects;
-		sector_t end_sect, tmp;
+		unsigned int req_sects = nr_sects;
+		sector_t end_sect;
 
 		bio = bio_alloc(gfp_mask, 1);
 		if (!bio) {
@@ -81,28 +75,10 @@ int blkdev_issue_discard(struct block_de
 			break;
 		}
 
-		/*
-		 * Issue in chunks of the user defined max discard setting,
-		 * ensuring that bi_size doesn't overflow
-		 */
-		req_sects = min_t(sector_t, nr_sects,
-					q->limits.max_discard_sectors);
 		if (req_sects > UINT_MAX >> 9)
 			req_sects = UINT_MAX >> 9;
 
-		/*
-		 * If splitting a request, and the next starting sector would be
-		 * misaligned, stop the discard at the previous aligned sector.
-		 */
 		end_sect = sector + req_sects;
-		tmp = end_sect;
-		if (req_sects < nr_sects &&
-		    sector_div(tmp, granularity) != alignment) {
-			end_sect = end_sect - alignment;
-			sector_div(end_sect, granularity);
-			end_sect = end_sect * granularity + alignment;
-			req_sects = end_sect - sector;
-		}
 
 		bio->bi_iter.bi_sector = sector;
 		bio->bi_end_io = bio_batch_end_io;

