Message-Id: <20171003114241.776433866@linuxfoundation.org>
Date:   Tue,  3 Oct 2017 14:28:31 +0200
From:   Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To:     linux-kernel@...r.kernel.org
Cc:     Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        stable@...r.kernel.org, Seraphime Kirkovski <kirkseraph@...il.com>,
        Adrian Hunter <adrian.hunter@...el.com>,
        Ulf Hansson <ulf.hansson@...aro.org>,
        Pavel Machek <pavel@....cz>
Subject: [PATCH 4.13 009/110] mmc: block: Fix incorrectly initialized requests

4.13-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Adrian Hunter <adrian.hunter@...el.com>

commit 01f5bbd17a8066b58dba9b5049fad504bce67322 upstream.

mmc_init_request() depends on card->bouncesz so it must be calculated
before blk_init_allocated_queue() starts allocating requests.

Reported-by: Seraphime Kirkovski <kirkseraph@...il.com>
Fixes: 304419d8a7e9 ("mmc: core: Allocate per-request data using the..")
Signed-off-by: Adrian Hunter <adrian.hunter@...el.com>
Tested-by: Seraphime Kirkovski <kirkseraph@...il.com>
Signed-off-by: Ulf Hansson <ulf.hansson@...aro.org>
Tested-by: Pavel Machek <pavel@....cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>

---
 drivers/mmc/core/queue.c |    7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -242,6 +242,12 @@ int mmc_init_queue(struct mmc_queue *mq,
 	if (mmc_dev(host)->dma_mask && *mmc_dev(host)->dma_mask)
 		limit = (u64)dma_max_pfn(mmc_dev(host)) << PAGE_SHIFT;
 
+	/*
+	 * mmc_init_request() depends on card->bouncesz so it must be calculated
+	 * before blk_init_allocated_queue() starts allocating requests.
+	 */
+	card->bouncesz = mmc_queue_calc_bouncesz(host);
+
 	mq->card = card;
 	mq->queue = blk_alloc_queue(GFP_KERNEL);
 	if (!mq->queue)
@@ -265,7 +271,6 @@ int mmc_init_queue(struct mmc_queue *mq,
 	if (mmc_can_erase(card))
 		mmc_queue_setup_discard(mq->queue, card);
 
-	card->bouncesz = mmc_queue_calc_bouncesz(host);
 	if (card->bouncesz) {
 		blk_queue_max_hw_sectors(mq->queue, card->bouncesz / 512);
 		blk_queue_max_segments(mq->queue, card->bouncesz / 512);

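For context, here is a minimal sketch of why the ordering matters. It is not the upstream code verbatim; the function body is simplified and assumes the 4.13-era helper names (req_to_mmc_queue_req(), mq_rq->bounce_buf). mmc_init_request() runs as the queue's per-request init callback, so it reads card->bouncesz while blk_init_allocated_queue() is allocating requests:

	/* Simplified sketch of the request-init callback, for illustration only. */
	static int mmc_init_request(struct request_queue *q, struct request *req,
				    gfp_t gfp)
	{
		struct mmc_queue_req *mq_rq = req_to_mmc_queue_req(req);
		struct mmc_queue *mq = q->queuedata;
		struct mmc_card *card = mq->card;

		/*
		 * If card->bouncesz is still 0 here (as it was before this patch,
		 * because it was only computed after blk_init_allocated_queue()),
		 * no bounce buffer gets allocated for the request.
		 */
		if (card->bouncesz) {
			mq_rq->bounce_buf = kmalloc(card->bouncesz, gfp);
			if (!mq_rq->bounce_buf)
				return -ENOMEM;
		}

		return 0;
	}

With the hunk above, card->bouncesz is already set by the time the block layer invokes this callback, so requests are initialized with correctly sized bounce buffers instead of none at all.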
