Date:	Thu, 09 Dec 2010 21:17:00 +0800
From:	Shaohua Li <shaohua.li@...el.com>
To:	Jens Axboe <jaxboe@...ionio.com>
Cc:	lkml <linux-kernel@...r.kernel.org>,
	"vgoyal@...hat.com" <vgoyal@...hat.com>
Subject: Re: [RFC]block: change sort order of elv_dispatch_sort

On Wed, 2010-12-08 at 22:44 +0800, Jens Axboe wrote:
> On 2010-12-08 22:39, Shaohua Li wrote:
> > On Wed, 2010-12-08 at 16:01 +0800, Jens Axboe wrote:
> >> On 2010-12-08 15:50, Shaohua Li wrote:
> >>> On Wed, 2010-12-08 at 14:56 +0800, Jens Axboe wrote:
> >>>> On 2010-12-08 13:42, Shaohua Li wrote:
> >>>>> Change the sort order a little: sort requests with sectors above the
> >>>>> boundary in ascending order, and requests with sectors below the boundary
> >>>>> in descending order. The goal is less disk head (spindle) movement.
> >>>>> For example, suppose the boundary is 7 and we add sectors 8, 1, 9, 2, 3, 4, 10, 12, 5, 11, 6.
> >>>>> With the original sort, the sorted list is:
> >>>>> 8, 9, 10, 11, 12, 1, 2, 3, 4, 5, 6
> >>>>> so the head moves 8->12->1->6, roughly 12*2 sectors in total.
> >>>>> With the new sort, the list is:
> >>>>> 8, 9, 10, 11, 12, 6, 5, 4, 3, 2, 1
> >>>>> so the head moves 8->12->6->1, roughly 12*1.5 sectors in total.
> >>>>
> >>>> It was actually done this way on purpose; it's been a while since we
> >>>> have done two-way elevators, even outside the dispatch-list sorting
> >>>> itself.
> >>>>
> >>>> Do you have any results to back this change up? I'd argue that
> >>>> continuing to the end, sweeping back, and reading forwards again will
> >>>> usually be faster than doing backwards reads.
> >>> No, I have no data; that is why this is an RFC patch. Part of the
> >>> reason is that I don't know when we dispatch several requests to the
> >>> list. It appears the driver only takes one request at a time. What
> >>> kind of test do you suggest?
> >>
> >> Yes, that is usually the case. It's mainly meant as a holding point for
> >> dispatch, or for requeues, or for requests that don't allow sort ordering.
> >> Or on I/O scheduler switches, for instance.
> >
> > I ran a test in a hacked-up way: I used a modified noop iosched, and
> > every time noop tries to dispatch a request, it dispatches all requests
> > in its list. The test does random reads. The result is actually quite
> > stable: the changed order always gives slightly better throughput, but
> > the improvement is quite small (<1%).
> 
> First of all, I think 1% is too close to call, unless your results are
> REALLY stable. Secondly, a truly random workload is not a good test
> case, as requests are going to be all over the map anyway. For something
> more realistic (like your example, but of course not fully contiguous)
> it would be interesting to see.
I tried random reads with block sizes ranging from 4k to 64k, which I
thought was more realistic. The results for the two sort methods still
show only a slight difference, so I'll give up on the patch unless there
is a better workload to try.
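For reference, here is a small standalone Python sketch (my illustration,
not the kernel's elv_dispatch_sort() code) of the two orderings from the
example earlier in the thread, with a per-hop seek-distance count. Note the
numbers in the original example round to multiples of the full 12-sector
stroke, so the exact totals differ, but the comparison comes out the same way:

```python
def one_way_sort(sectors, boundary):
    """Original order: above-boundary ascending, then below-boundary ascending."""
    above = sorted(s for s in sectors if s >= boundary)
    below = sorted(s for s in sectors if s < boundary)
    return above + below

def two_way_sort(sectors, boundary):
    """Proposed order: above-boundary ascending, then below-boundary descending."""
    above = sorted(s for s in sectors if s >= boundary)
    below = sorted((s for s in sectors if s < boundary), reverse=True)
    return above + below

def seek_distance(order):
    """Total head movement, summing the gap between consecutive requests."""
    return sum(abs(b - a) for a, b in zip(order, order[1:]))

sectors = [8, 1, 9, 2, 3, 4, 10, 12, 5, 11, 6]
print(one_way_sort(sectors, 7), seek_distance(one_way_sort(sectors, 7)))
# [8, 9, 10, 11, 12, 1, 2, 3, 4, 5, 6] 20
print(two_way_sort(sectors, 7), seek_distance(two_way_sort(sectors, 7)))
# [8, 9, 10, 11, 12, 6, 5, 4, 3, 2, 1] 15
```

Counting per-hop gaps, the two-way order wins on this input too; whether
that holds for real workloads is exactly what the benchmark question above
is about.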

Thanks,
Shaohua

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
