Message-ID: <4807173E.4060300@unimore.it>
Date:	Thu, 17 Apr 2008 11:24:14 +0200
From:	Paolo Valente <paolo.valente@...more.it>
To:	Jens Axboe <jens.axboe@...cle.com>
CC:	Pavel Machek <pavel@....cz>, linux-kernel@...r.kernel.org
Subject: Re: [RESEND][RFC] BFQ I/O Scheduler

Jens Axboe wrote:
> I was thinking about that too. Generally I've been opposed to making
> scheduling decisions on anything but time, since that is always
> relevant. When to hand out slices, and to which process: that algorithm
> is really basic in CFQ and could do with an improvement.
>
Maybe there is also another middle-ground solution. I'll try to sketch
it out (see the code sketch below):
. use sectors instead of time;
. impose a penalty on each thread in proportion to the distance between
its disk requests;
. reduce the maximum budget of each thread as a function of this seek
penalty, so as to prevent the thread from stealing more than a given
time slice (a simple mechanism to limit per-thread budgets is already
implemented in BFQ).
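To make the idea concrete, here is a minimal user-space mock-up in C of
how the seek penalty and the budget cap could interact. Every
identifier, constant, and weighting below is hypothetical and chosen
only for illustration; none of this is actual BFQ code:

#include <stdio.h>
#include <stdlib.h>	/* for llabs() */

/* All quantities are in sectors unless noted otherwise. */
struct thread_sched {
	long long last_sector;	/* position of the previous request */
	long long seek_penalty;	/* running measure of seekiness */
	long long max_budget;	/* per-thread budget cap, in sectors */
};

/* Reference-disk parameters (assumed, for illustration only). */
#define REF_PEAK_RATE	100000LL	/* sectors served per time slice */
#define ABS_MAX_BUDGET	REF_PEAK_RATE	/* a fully sequential thread */
#define MIN_BUDGET	1024LL		/* never starve a thread entirely */

/* Charge a penalty proportional to the distance from the previous
 * request, smoothed with a simple exponential moving average (the
 * 7/8 weighting is a placeholder that would need tuning). */
static void update_seek_penalty(struct thread_sched *t, long long sector)
{
	long long dist = llabs(sector - t->last_sector);

	t->seek_penalty = (7 * t->seek_penalty + dist) / 8;
	t->last_sector = sector;
}

/* Shrink the budget as the penalty grows, so that even a seeky
 * thread cannot consume more than one time slice on the reference
 * disk. The divisor scaling is again a placeholder. */
static void update_max_budget(struct thread_sched *t)
{
	long long budget = ABS_MAX_BUDGET / (1 + t->seek_penalty / 4096);

	t->max_budget = budget > MIN_BUDGET ? budget : MIN_BUDGET;
}

int main(void)
{
	struct thread_sched t = { .last_sector = 0, .seek_penalty = 0,
				  .max_budget = ABS_MAX_BUDGET };
	/* A seeky pattern: requests far apart shrink the budget. */
	long long reqs[] = { 0, 500000, 20000, 900000 };

	for (int i = 0; i < 4; i++) {
		update_seek_penalty(&t, reqs[i]);
		update_max_budget(&t);
		printf("req %lld -> penalty %lld, max_budget %lld\n",
		       reqs[i], t.seek_penalty, t.max_budget);
	}
	return 0;
}

In this mock-up a sequential thread keeps the full budget, while a
thread jumping across the disk sees its cap shrink quickly toward the
minimum, bounding the wall-clock time it can hold the disk.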

This way, both fairness and time isolation should be guaranteed.
Finally, the policy should be safe in the following sense: if we bound
the time that a seeky thread may take to consume its maximum budget on a
reference disk, then the time it takes on any faster disk can only be
shorter.
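Spelled out, with B_max denoting the (assumed) per-thread budget cap in
sectors and R a disk's peak transfer rate, the service time for a full
budget decreases monotonically in R:

  $T_{\mathrm{ref}} = \frac{B_{\max}}{R_{\mathrm{ref}}}, \qquad
   R \ge R_{\mathrm{ref}} \;\Rightarrow\;
   T = \frac{B_{\max}}{R} \le T_{\mathrm{ref}}.$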

Does it seem reasonable?
