Date:	Mon, 28 Feb 2011 14:25:28 -0500
From:	Vivek Goyal <vgoyal@...hat.com>
To:	linux-kernel@...r.kernel.org, jaxboe@...ionio.com
Cc:	vgoyal@...hat.com, neilb@...e.de
Subject: [PATCH 1/3] block: Initialize ->queue_lock to internal lock at queue allocation time

o There does not seem to be a clear convention on whether q->queue_lock is
  initialized when blk_cleanup_queue() is called. In the past this did not
  matter, but now blk_throtl_exit() takes the queue lock by default and
  needs it to be available.

  elevator_exit() has a similar requirement, though a less stringent one:
  it is called only if an elevator was actually initialized.

o Two problems have been noticed because of this ambiguity about the
  spin lock status.

	- If a driver calls blk_alloc_queue() and then blk_cleanup_queue()
	  almost immediately (because allocation of some other driver
	  structure failed, or some other error happened), blk_throtl_exit()
	  runs into trouble because the queue lock was never initialized.
	  The loop driver ran into this issue recently, and I noticed similar
	  error paths in the md driver; other drivers likely have them too.

	- If a driver provided an external spin lock and zapped that lock
	  before blk_cleanup_queue(), the same issue arises.

o So this patch initializes ->queue_lock to the default internal lock at
  queue allocation time. The block throttling code is one of the users of
  the queue lock and its state is already set up at queue allocation time,
  so it makes sense to initialize ->queue_lock to the internal lock there
  as well. A driver can override that lock later. With this, a driver no
  longer has to worry about resetting the queue lock to the default before
  calling blk_cleanup_queue().
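The locking pattern above can be modeled in userspace as a minimal sketch.
This is not kernel code: `request_queue_model` and the `model_*` functions
are hypothetical names that merely mirror the block-layer structure, with a
pthread mutex standing in for the queue spinlock. The point it illustrates
is that setting the lock pointer at allocation time makes cleanup safe on
early error paths where the driver-side init step never runs.

```c
/*
 * Userspace model of the queue_lock pattern (not kernel code).
 * Names are illustrative stand-ins for the block-layer functions.
 */
#include <pthread.h>
#include <stdlib.h>

struct request_queue_model {
	pthread_mutex_t __queue_lock;	/* embedded internal lock */
	pthread_mutex_t *queue_lock;	/* internal or driver-supplied lock */
};

/* Models blk_alloc_queue_node(): with the fix, ->queue_lock points at
 * the internal lock right away, so cleanup is safe even if the driver
 * never installs its own lock. */
static struct request_queue_model *model_alloc_queue(void)
{
	struct request_queue_model *q = calloc(1, sizeof(*q));

	if (!q)
		return NULL;
	pthread_mutex_init(&q->__queue_lock, NULL);
	q->queue_lock = &q->__queue_lock;
	return q;
}

/* Models blk_init_allocated_queue_node(): override the internal lock
 * only when the driver actually supplies one. */
static void model_init_queue(struct request_queue_model *q,
			     pthread_mutex_t *lock)
{
	if (lock)
		q->queue_lock = lock;
}

/* Models blk_cleanup_queue() -> blk_throtl_exit(): takes the queue
 * lock unconditionally; before the fix, ->queue_lock could still be
 * NULL here on an early error path. */
static void model_cleanup_queue(struct request_queue_model *q)
{
	pthread_mutex_lock(q->queue_lock);
	pthread_mutex_unlock(q->queue_lock);
	free(q);
}
```

In this model the early-error path (alloc followed immediately by cleanup,
with no init step in between) no longer dereferences a NULL lock pointer,
which is exactly the failure mode described for the loop and md drivers.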

Signed-off-by: Vivek Goyal <vgoyal@...hat.com>
---
 block/blk-core.c     |   16 +++++++++++++++-
 block/blk-settings.c |    7 -------
 2 files changed, 15 insertions(+), 8 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 3cc17e6..bc2b7c5 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -446,6 +446,11 @@ void blk_put_queue(struct request_queue *q)
 	kobject_put(&q->kobj);
 }
 
+/*
+ * Note: If a driver supplied the queue lock, it should not zap that lock
+ * unexpectedly as some queue cleanup components like elevator_exit() and
+ * blk_throtl_exit() need queue lock.
+ */
 void blk_cleanup_queue(struct request_queue *q)
 {
 	/*
@@ -540,6 +545,12 @@ struct request_queue *blk_alloc_queue_node(gfp_t gfp_mask, int node_id)
 	mutex_init(&q->sysfs_lock);
 	spin_lock_init(&q->__queue_lock);
 
+	/*
+	 * By default initialize queue_lock to internal lock and driver can
+	 * override it later if need be.
+	 */
+	q->queue_lock = &q->__queue_lock;
+
 	return q;
 }
 EXPORT_SYMBOL(blk_alloc_queue_node);
@@ -624,7 +635,10 @@ blk_init_allocated_queue_node(struct request_queue *q, request_fn_proc *rfn,
 	q->unprep_rq_fn		= NULL;
 	q->unplug_fn		= generic_unplug_device;
 	q->queue_flags		= QUEUE_FLAG_DEFAULT;
-	q->queue_lock		= lock;
+
+	/* Override internal queue lock with supplied lock pointer */
+	if (lock)
+		q->queue_lock		= lock;
 
 	/*
 	 * This also sets hw/phys segments, boundary and size
diff --git a/block/blk-settings.c b/block/blk-settings.c
index 36c8c1f..df649fa 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -176,13 +176,6 @@ void blk_queue_make_request(struct request_queue *q, make_request_fn *mfn)
 	blk_queue_max_hw_sectors(q, BLK_SAFE_MAX_SECTORS);
 
 	/*
-	 * If the caller didn't supply a lock, fall back to our embedded
-	 * per-queue locks
-	 */
-	if (!q->queue_lock)
-		q->queue_lock = &q->__queue_lock;
-
-	/*
 	 * by default assume old behaviour and bounce for any highmem page
 	 */
 	blk_queue_bounce_limit(q, BLK_BOUNCE_HIGH);
-- 
1.7.2.3
