Message-ID: <20110509130316.GB5975@redhat.com>
Date:	Mon, 9 May 2011 09:03:16 -0400
From:	Vivek Goyal <vgoyal@...hat.com>
To:	Tejun Heo <tj@...nel.org>
Cc:	shaohua.li@...el.com, linux-kernel@...r.kernel.org,
	linux-ide@...r.kernel.org, jaxboe@...ionio.com, hch@...radead.org,
	jgarzik@...ox.com, djwong@...ibm.com, sshtylyov@...sta.com,
	James Bottomley <James.Bottomley@...senPartnership.com>,
	linux-scsi@...r.kernel.org, ricwheeler@...il.com
Subject: Re: [patch v3 2/3] block: hold queue if flush is running for
 non-queueable flush drive

On Thu, May 05, 2011 at 10:38:53AM +0200, Tejun Heo wrote:

[..]
> Similarly, I'd like to suggest something like the following.
> 
> 		/*
> 		 * Hold dispatching of regular requests if non-queueable
> 		 * flush is in progress; otherwise, the low level driver
> 		 * would keep dispatching IO requests just to requeue them
> 		 * until the flush finishes, which not only adds
> 		 * dispatching / requeueing overhead but may also
> 		 * significantly affect throughput when multiple flushes
> 		 * are issued back-to-back.  Please consider the following
> 		 * scenario.
> 		 *
> 		 * - flush1 is dispatched with write1 in the elevator.
> 		 *
> 		 * - Driver dispatches write1 and requeues it.
> 		 *
> 		 * - flush2 is issued and appended to dispatch queue after
> 		 *   the requeued write1.  As write1 has been requeued
> 		 *   flush2 can't be put in front of it.
> 		 *
> 		 * - When flush1 finishes, the driver has to process write1
> 		 *   before flush2 even though there's no fundamental
> 		 *   reason flush2 can't be processed first and, when two
> 		 *   flushes are issued back-to-back without intervening
> 		 *   writes, the second one essentially becomes noop.
> 		 *
> 		 * This phenomenon becomes quite visible under heavy
> 		 * concurrent fsync workload and holding the queue while
> 		 * flush is in progress leads to significant throughput
> 		 * gain.
> 		 */
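
The hold described in the comment above can be sketched as follows
(a minimal toy model only; the struct fields and the function name
are hypothetical, not the real block-layer API):

```c
#include <stdbool.h>

/* Toy stand-in for the real request_queue; fields are invented
 * for illustration. */
struct request_queue {
	bool flush_not_queueable;	/* device cannot queue flushes */
	bool flush_in_progress;		/* a flush is currently dispatched */
	int  nr_pending;		/* requests waiting in the elevator */
};

/*
 * Called from the dispatch path: may we hand more regular requests
 * to the driver right now?
 */
static bool blk_should_dispatch(struct request_queue *q)
{
	/*
	 * Hold regular requests while a non-queueable flush is in
	 * flight; the driver would only requeue them until the
	 * flush completes, adding overhead and preventing flush2
	 * from jumping ahead of the requeued write.
	 */
	if (q->flush_not_queueable && q->flush_in_progress)
		return false;
	return q->nr_pending > 0;
}
```

In the flush1/write1/flush2 scenario above, the check keeps write1 in
the elevator until flush1 completes, so flush2 never ends up queued
behind a requeued write.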

Tejun,

I am assuming that these back-to-back flushes are independent of each
other; otherwise a write request would end up between the two flushes
anyway.

If that's the case, then should we solve the problem by improving the
flush merge logic instead? (Say, idle briefly before issuing a flush,
but only if the request queue is not empty.)

That way multiple back-to-back flushes can be merged without taking a
throughput hit, and we can avoid special-casing on whether the driver
can queue the flush or not.
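
The idle-and-merge idea could look roughly like this (again a toy
sketch; FLUSH_IDLE_MS, the struct, and the function are made up for
illustration, and the real code would arm a timer rather than poll):

```c
#include <stdbool.h>

#define FLUSH_IDLE_MS 1	/* invented hold-off period */

/* Invented bookkeeping for pending flushes. */
struct flush_state {
	int  pending_flushes;	/* flushes waiting to be issued */
	bool idling;		/* holding off the first flush */
};

/* Returns the number of flushes actually sent to the device. */
static int issue_flushes(struct flush_state *fs, bool queue_empty)
{
	if (fs->pending_flushes == 0)
		return 0;

	/*
	 * If other requests are queued, hold the flush briefly
	 * (FLUSH_IDLE_MS) so a back-to-back flush can piggyback on
	 * it; with an empty queue there is nothing to gain, so
	 * issue immediately.
	 */
	if (!queue_empty && !fs->idling) {
		fs->idling = true;	/* caller would arm a timer here */
		return 0;
	}

	/* All pending flushes collapse into a single cache flush. */
	fs->idling = false;
	fs->pending_flushes = 0;
	return 1;
}
```

With that shape, two fsync-driven flushes arriving within the idle
window become a single device flush, without the dispatch path having
to know whether the device can queue flushes.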

Thanks
Vivek
