Message-ID: <298b6ff6-9feb-4b70-ec4c-d1295a0e1f41@kernel.dk>
Date:   Mon, 31 Oct 2016 11:24:02 -0600
From:   Jens Axboe <axboe@...nel.dk>
To:     Kashyap Desai <kashyap.desai@...adcom.com>,
        Omar Sandoval <osandov@...ndov.com>
Cc:     linux-scsi@...r.kernel.org, linux-kernel@...r.kernel.org,
        linux-block@...r.kernel.org, Christoph Hellwig <hch@...radead.org>,
        paolo.valente@...aro.org
Subject: Re: Device or HBA level QD throttling creates randomness in sequential
 workload

Hi,

One guess would be that this isn't about a requeue condition, but
rather the fact that we don't really guarantee any sort of hard FIFO
behavior between the software queues. Can you try this test patch to see
if it changes the behavior for you? Warning: untested...

diff --git a/block/blk-mq.c b/block/blk-mq.c
index f3d27a6dee09..5404ca9c71b2 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -772,6 +772,14 @@ static inline unsigned int queued_to_index(unsigned int queued)
  	return min(BLK_MQ_MAX_DISPATCH_ORDER - 1, ilog2(queued) + 1);
  }

+static int rq_pos_cmp(void *priv, struct list_head *a, struct list_head *b)
+{
+	struct request *rqa = container_of(a, struct request, queuelist);
+	struct request *rqb = container_of(b, struct request, queuelist);
+
+	return blk_rq_pos(rqa) > blk_rq_pos(rqb);
+}
+
  /*
   * Run this hardware queue, pulling any software queues mapped to it in.
   * Note that this function currently has various problems around ordering
@@ -812,6 +820,14 @@ static void __blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx)
  	}

  	/*
+	 * If the device is rotational, sort the list sanely to avoid
+	 * unnecessary seeks. The software queues are roughly FIFO, but
+	 * only roughly, there are no hard guarantees.
+	 */
+	if (!blk_queue_nonrot(q))
+		list_sort(NULL, &rq_list, rq_pos_cmp);
+
+	/*
  	 * Start off with dptr being NULL, so we start the first request
  	 * immediately, even if we have more pending.
  	 */
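
As a side note, the sort above only kicks in when the queue is flagged
rotational, so it's worth double-checking how the kernel classifies the
device before running the test. A minimal userspace sketch for that,
assuming the device shows up as sda (adjust the path to your setup):

/* Check the queue's rotational flag via sysfs; if it prints 1, the
 * !blk_queue_nonrot() branch in the patch above will be taken.
 * "sda" is just an example device name. */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/sys/block/sda/queue/rotational", "r");
	int rot;

	if (!f) {
		perror("fopen");
		return 1;
	}
	if (fscanf(f, "%d", &rot) == 1)
		printf("rotational=%d\n", rot);
	fclose(f);
	return 0;
}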

-- 
Jens Axboe
