Date:	Thu, 17 Apr 2008 18:51:01 +0300
From:	Avi Kivity <avi@...ranet.com>
To:	Paolo Valente <paolo.valente@...more.it>
CC:	Jens Axboe <jens.axboe@...cle.com>, Pavel Machek <pavel@....cz>,
	linux-kernel@...r.kernel.org
Subject: Re: [RESEND][RFC] BFQ I/O Scheduler

Paolo Valente wrote:
> Avi Kivity wrote:
>> Jumping in at random, does "process" here mean task or mm_struct?  
>> If the former, doesn't that mean that a 100-thread process can starve 
>> out a single-threaded process?
>>
>> Perhaps we need hierarchical I/O scheduling, like CFS has for the CPU.
>>
> Hierarchical scheduling would simplify isolating groups of threads or 
> processes. However, a simple solution is already available with bfq. 
> For example, if you have to fairly share the disk bandwidth between the 
> above 100 threads and another important thread, you can get it by just 
> assigning weight 1 to each of these 100 threads, and weight 100 to the 
> important one.
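
(A back-of-the-envelope check of that weighting, assuming ideal
proportional sharing over per-thread weights; the sketch and its numbers
are purely illustrative, not taken from bfq itself:)

#include <stdio.h>

int main(void)
{
	/* Suggested weights: 100 threads at weight 1 each, plus one
	 * important thread at weight 100. */
	int nr_threads = 100, thread_weight = 1, important_weight = 100;
	int total = nr_threads * thread_weight + important_weight;  /* 200 */

	/* Under ideal proportional sharing each contender receives
	 * weight / total of the disk bandwidth. */
	printf("each of the %d threads: %.1f%%\n",
	       nr_threads, 100.0 * thread_weight / total);   /* 0.5% each, 50% as a group */
	printf("important thread:       %.1f%%\n",
	       100.0 * important_weight / total);             /* 50.0% */
	return 0;
}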

Doesn't work.  If the 100-thread process wants to use just one thread for 
issuing I/O, it will be starved by the single-threaded process.

[my example has process A with 100 threads, and process B with 1 thread, 
not a 101-thread process with one important thread]
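
(The same idealized arithmetic applied to this counterexample, with the
weights proposed above and assuming only threads that actually issue I/O
compete for service; again purely illustrative:)

#include <stdio.h>

int main(void)
{
	/* Process A (100 threads at weight 1 each) funnels all of its I/O
	 * through a single thread, so only that one weight-1 thread competes
	 * against process B's single weight-100 thread. */
	int a_active_weight = 1;                        /* A's one issuing thread */
	int b_weight = 100;                             /* B's single thread */
	int total = a_active_weight + b_weight;         /* 101 */

	printf("process A: %.1f%%\n", 100.0 * a_active_weight / total);  /* ~1%  */
	printf("process B: %.1f%%\n", 100.0 * b_weight / total);         /* ~99% */
	return 0;
}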


-- 
error compiling committee.c: too many arguments to function

