Message-Id: <1406067727-19683-80-git-send-email-kamal@canonical.com>
Date: Tue, 22 Jul 2014 15:21:30 -0700
From: Kamal Mostafa <kamal@...onical.com>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org, kernel-team@...ts.ubuntu.com
Cc: Lukas Czerner <lczerner@...hat.com>, Mike Snitzer <snitzer@...hat.com>, Kamal Mostafa <kamal@...onical.com>
Subject: [PATCH 3.8 079/116] dm thin: update discard_granularity to reflect the thin-pool blocksize

3.8.13.27 -stable review patch.  If anyone has any objections, please let me know.

------------------

From: Lukas Czerner <lczerner@...hat.com>

commit 09869de57ed2728ae3c619803932a86cb0e2c4f8 upstream.

DM thinp already checks whether the discard_granularity of the data
device is a factor of the thin-pool block size.  But when using the
dm-thin-pool's discard passdown support, DM thinp was not selecting the
max of the underlying data device's discard_granularity and the
thin-pool's block size.

Update set_discard_limits() to set discard_granularity to the max of
these values.  This enables blkdev_issue_discard() to properly align
the discards that are sent to the DM thin device on a full block
boundary.  As such each discard will now cover an entire DM thin-pool
block and the block will be reclaimed.

Reported-by: Zdenek Kabelac <zkabelac@...hat.com>
Signed-off-by: Lukas Czerner <lczerner@...hat.com>
Signed-off-by: Mike Snitzer <snitzer@...hat.com>
Signed-off-by: Kamal Mostafa <kamal@...onical.com>
---
 drivers/md/dm-thin.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
index ca954b1..3d183ca 100644
--- a/drivers/md/dm-thin.c
+++ b/drivers/md/dm-thin.c
@@ -2479,7 +2479,8 @@ static void set_discard_limits(struct pool_c *pt, struct queue_limits *limits)
 	 */
 	if (pt->adjusted_pf.discard_passdown) {
 		data_limits = &bdev_get_queue(pt->data_dev->bdev)->limits;
-		limits->discard_granularity = data_limits->discard_granularity;
+		limits->discard_granularity = max(data_limits->discard_granularity,
+						  pool->sectors_per_block << SECTOR_SHIFT);
 	} else if (block_size_is_power_of_two(pool))
 		limits->discard_granularity = pool->sectors_per_block << SECTOR_SHIFT;
 	else
-- 
1.9.1

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
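[Editor's note: the following is a minimal userspace sketch, not kernel code,
illustrating the effect of the max() change in the hunk above. The concrete
numbers (a 64KiB thin-pool block, i.e. 128 sectors, over a data device that
advertises a 4KiB discard_granularity) are hypothetical examples, not values
taken from the patch.]

/* Sketch: why max(data granularity, pool block size) matters. */
#include <stdio.h>

#define SECTOR_SHIFT 9

static unsigned int max_u(unsigned int a, unsigned int b)
{
	return a > b ? a : b;
}

int main(void)
{
	unsigned int data_discard_granularity = 4096; /* hypothetical data device */
	unsigned int sectors_per_block = 128;         /* hypothetical 64KiB thin block */
	unsigned int block_bytes = sectors_per_block << SECTOR_SHIFT;

	/* Before the patch: the data device's value was passed through as-is. */
	unsigned int old_granularity = data_discard_granularity;

	/* After the patch: never advertise less than a full thin-pool block. */
	unsigned int new_granularity = max_u(data_discard_granularity, block_bytes);

	printf("old discard_granularity: %u bytes\n", old_granularity); /* 4096  */
	printf("new discard_granularity: %u bytes\n", new_granularity); /* 65536 */

	/*
	 * With the larger granularity, blkdev_issue_discard() aligns each
	 * discard to a 64KiB boundary, so a discard covers whole thin-pool
	 * blocks and those blocks can actually be reclaimed by the pool.
	 */
	return 0;
}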