Date:	Thu, 17 Apr 2008 11:14:55 +0200
From:	Fabio Checconi <fchecconi@...il.com>
To:	Pavel Machek <pavel@....cz>
Cc:	axboe@...nel.dk, linux-kernel@...r.kernel.org,
	paolo.valente@...more.it
Subject: Re: [RESEND][RFC] BFQ I/O Scheduler

> From: Pavel Machek <pavel@....cz>
> Date: Wed, Apr 16, 2008 08:44:41PM +0200
>
> On Tue 2008-04-01 17:29:03, Fabio Checconi wrote:
> ...
> > In the first type of tests, to achieve a higher throughput than CFQ
> > (with the default 100 ms time slice), the maximum budget for BFQ
> > had to be set to at least 4k sectors.  Using the same value for the
> 
> Hmm, 4k sectors is ~40 seconds worst case, no? That's quite long...


Yes: a worst-case, fully seeky workload issuing single-sector
requests at roughly 10 ms per seek would need about
4096 * 10 ms ~= 41 s to consume such a budget.  A process with
such a low throughput, however, would be marked as seeky by the
heuristics implemented in cfq/bfq.  Seeky processes are not treated
in the same way as sequential ones: they should not get their full
slice allocated, since the scheduler idles waiting for their next
request only for very short periods.
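
For reference, the classification works along these lines (a minimal
sketch, not the actual cfq/bfq code: the names, the EWMA weight and
the 8192-sector threshold are made up for illustration):

#include <stdint.h>

#define SEEK_THR	8192ULL	/* sectors; illustrative cutoff */
#define SEEK_SAMPLES	32ULL	/* weight of the running average */

struct queue_stats {
	uint64_t last_sector;	/* end of the previous request */
	uint64_t seek_mean;	/* EWMA of the seek distance */
};

/* Called for each request issued by the queue's process. */
static void update_seek_stats(struct queue_stats *qs, uint64_t sector)
{
	uint64_t dist = sector > qs->last_sector ?
			sector - qs->last_sector : qs->last_sector - sector;

	qs->seek_mean = (qs->seek_mean * (SEEK_SAMPLES - 1) + dist) /
			SEEK_SAMPLES;
	qs->last_sector = sector;
}

/* A queue whose requests are, on average, far apart is seeky. */
static int queue_is_seeky(const struct queue_stats *qs)
{
	return qs->seek_mean > SEEK_THR;
}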

BTW, looking at the code, they can get a full slice if they always
reissue requests fast enough - within BFQ_MIN_TT - and this is
definitely a bug in the current implementation (one we didn't
notice when converting the code from time-based to service-based
allocation :) ).
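
In pseudo-C, the decision after a request completes is roughly the
following (a sketch of the logic as described above, not the real
bfq source; the value of BFQ_MIN_TT here is illustrative):

#define BFQ_MIN_TT	2	/* ms; illustrative value */

/* Keep idling on (i.e. keep the slice of) this queue? */
static int keep_queue(const struct queue_stats *qs, uint64_t think_time)
{
	/*
	 * BUG: a queue that always reissues within BFQ_MIN_TT passes
	 * this check no matter how seeky it is, so a seeky process can
	 * consume its whole budget one small request at a time.
	 */
	if (think_time < BFQ_MIN_TT)
		return 1;

	return !queue_is_seeky(qs);
}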

An easy solution (without changing the nature of bfq) would be
to use shorter slices for seeky queues, with the same mechanism
we already use for the async ones.
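
Concretely, something like this (illustrative sketch, the divisor is
made up; the point is just to cap seeky queues below the full budget,
as is already done for the async ones):

/* Scale the assigned budget down for seeky queues. */
static uint64_t queue_max_budget(const struct queue_stats *qs,
				 uint64_t max_budget)
{
	if (queue_is_seeky(qs))
		return max_budget / 8;	/* e.g. 1/8 of the full budget */

	return max_budget;
}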
