Message-Id: <20200901151000.152159836@linuxfoundation.org>
Date: Tue, 1 Sep 2020 17:10:30 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Ming Lei <ming.lei@...hat.com>,
Christoph Hellwig <hch@....de>, Coly Li <colyli@...e.de>,
Hannes Reinecke <hare@...e.com>, Xiao Ni <xni@...hat.com>,
"Martin K. Petersen" <martin.petersen@...cle.com>,
Evan Green <evgreen@...omium.org>,
Gwendal Grignou <gwendal@...omium.org>,
Chaitanya Kulkarni <chaitanya.kulkarni@....com>,
Andrzej Pietrasiewicz <andrzej.p@...labora.com>,
Jens Axboe <axboe@...nel.dk>
Subject: [PATCH 5.4 150/214] block: loop: set discard granularity and alignment for block device backed loop
From: Ming Lei <ming.lei@...hat.com>
commit bcb21c8cc9947286211327d663ace69f07d37a76 upstream.
In the case of a block device backend, if the backend supports write zeroes, the
loop device sets the QUEUE_FLAG_DISCARD queue flag. However,
limits.discard_granularity is never set up, which is wrong; see the following
description in Documentation/ABI/testing/sysfs-block:

	A discard_granularity of 0 means that the device does not support
	discard functionality.
In particular, commit 9b15d109a6b2 ("block: improve discard bio alignment in
__blkdev_issue_discard()") started using q->limits.discard_granularity to
compute the maximum number of discard sectors. A discard granularity of zero
can therefore cause a kernel oops, or make discard requests fail even though
the loop queue claims discard support via QUEUE_FLAG_DISCARD, as the sketch
below illustrates.
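
As a rough userspace-only illustration (not the kernel code path itself): the
splitting added by 9b15d109a6b2 rounds the per-bio discard limit down to the
queue's discard granularity, and the kernel's power-of-two round_down() is
essentially x & ~(y - 1), so a granularity of 0 collapses that limit to 0
sectors. The helper name and values below are made up for the demonstration.

/*
 * Userspace sketch of why a zero discard_granularity breaks the
 * granularity-aligned discard limit.  Illustrative only.
 */
#include <stdio.h>
#include <limits.h>

#define SECTOR_SHIFT 9
/* power-of-two round_down, mirroring the kernel macro */
#define round_down(x, y) ((x) & ~((__typeof__(x))((y) - 1)))

/* hypothetical helper: bytes of granularity in, sector limit out */
static unsigned int aligned_discard_max_sectors(unsigned int granularity)
{
	return round_down(UINT_MAX, granularity) >> SECTOR_SHIFT;
}

int main(void)
{
	printf("granularity 512 -> max %u sectors\n",
	       aligned_discard_max_sectors(512));
	/* granularity 0 -> limit of 0 sectors: discards cannot make progress */
	printf("granularity 0   -> max %u sectors\n",
	       aligned_discard_max_sectors(0));
	return 0;
}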
Fix the issue by setting up the discard granularity and alignment.
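
For reference, a minimal stand-alone sketch of the granularity fallback used in
the block-device-backed branch of the diff below. The real code reads these
values from the backing request_queue; the function name and sample values here
are hypothetical.

/*
 * Prefer the backend's own discard granularity; fall back to its
 * physical block size when the backend reports 0.  The diff uses the
 * GNU "a ?: b" form, which is shorthand for "a ? a : b".
 */
#include <stdio.h>

static unsigned int pick_granularity(unsigned int backing_discard_granularity,
				     unsigned int backing_phys_block_size)
{
	return backing_discard_granularity ? backing_discard_granularity
					   : backing_phys_block_size;
}

int main(void)
{
	/* hypothetical backend values */
	printf("%u\n", pick_granularity(0, 4096));      /* backend reports 0 -> 4096 */
	printf("%u\n", pick_granularity(131072, 4096)); /* backend value wins -> 131072 */
	return 0;
}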
Fixes: c52abf563049 ("loop: Better discard support for block devices")
Signed-off-by: Ming Lei <ming.lei@...hat.com>
Reviewed-by: Christoph Hellwig <hch@....de>
Acked-by: Coly Li <colyli@...e.de>
Cc: Hannes Reinecke <hare@...e.com>
Cc: Xiao Ni <xni@...hat.com>
Cc: Martin K. Petersen <martin.petersen@...cle.com>
Cc: Evan Green <evgreen@...omium.org>
Cc: Gwendal Grignou <gwendal@...omium.org>
Cc: Chaitanya Kulkarni <chaitanya.kulkarni@....com>
Cc: Andrzej Pietrasiewicz <andrzej.p@...labora.com>
Cc: Christoph Hellwig <hch@....de>
Cc: <stable@...r.kernel.org>
Signed-off-by: Jens Axboe <axboe@...nel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
drivers/block/loop.c | 33 ++++++++++++++++++---------------
1 file changed, 18 insertions(+), 15 deletions(-)
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -863,6 +863,7 @@ static void loop_config_discard(struct l
struct file *file = lo->lo_backing_file;
struct inode *inode = file->f_mapping->host;
struct request_queue *q = lo->lo_queue;
+ u32 granularity, max_discard_sectors;
/*
* If the backing device is a block device, mirror its zeroing
@@ -875,11 +876,10 @@ static void loop_config_discard(struct l
struct request_queue *backingq;
backingq = bdev_get_queue(inode->i_bdev);
- blk_queue_max_discard_sectors(q,
- backingq->limits.max_write_zeroes_sectors);
- blk_queue_max_write_zeroes_sectors(q,
- backingq->limits.max_write_zeroes_sectors);
+ max_discard_sectors = backingq->limits.max_write_zeroes_sectors;
+ granularity = backingq->limits.discard_granularity ?:
+ queue_physical_block_size(backingq);
/*
* We use punch hole to reclaim the free space used by the
@@ -888,23 +888,26 @@ static void loop_config_discard(struct l
* useful information.
*/
} else if (!file->f_op->fallocate || lo->lo_encrypt_key_size) {
- q->limits.discard_granularity = 0;
- q->limits.discard_alignment = 0;
- blk_queue_max_discard_sectors(q, 0);
- blk_queue_max_write_zeroes_sectors(q, 0);
+ max_discard_sectors = 0;
+ granularity = 0;
} else {
- q->limits.discard_granularity = inode->i_sb->s_blocksize;
- q->limits.discard_alignment = 0;
-
- blk_queue_max_discard_sectors(q, UINT_MAX >> 9);
- blk_queue_max_write_zeroes_sectors(q, UINT_MAX >> 9);
+ max_discard_sectors = UINT_MAX >> 9;
+ granularity = inode->i_sb->s_blocksize;
}
- if (q->limits.max_write_zeroes_sectors)
+ if (max_discard_sectors) {
+ q->limits.discard_granularity = granularity;
+ blk_queue_max_discard_sectors(q, max_discard_sectors);
+ blk_queue_max_write_zeroes_sectors(q, max_discard_sectors);
blk_queue_flag_set(QUEUE_FLAG_DISCARD, q);
- else
+ } else {
+ q->limits.discard_granularity = 0;
+ blk_queue_max_discard_sectors(q, 0);
+ blk_queue_max_write_zeroes_sectors(q, 0);
blk_queue_flag_clear(QUEUE_FLAG_DISCARD, q);
+ }
+ q->limits.discard_alignment = 0;
}
static void loop_unprepare_queue(struct loop_device *lo)