Message-ID: <20080417135448.GF12774@kernel.dk>
Date:	Thu, 17 Apr 2008 15:54:50 +0200
From:	Jens Axboe <jens.axboe@...cle.com>
To:	Aaron Carroll <aaronc@....unsw.edu.au>
Cc:	Fabio Checconi <fchecconi@...il.com>,
	Paolo Valente <paolo.valente@...more.it>,
	Pavel Machek <pavel@....cz>, linux-kernel@...r.kernel.org
Subject: Re: [RESEND][RFC] BFQ I/O Scheduler

On Thu, Apr 17 2008, Aaron Carroll wrote:
> Fabio Checconi wrote:
> >>From: Aaron Carroll <aaronc@....unsw.edu.au>
> >>How do you figure that?  This is a situation where time-slices work
> >>nicely, because they implicitly account for the performance penalty
> >>of poor access patterns.  The sequential-accessing processes (and
> >>the system overall) end up with higher throughput.
> >>
> >
> >The unfairness is not WRT tasks generating poor access patterns.
> >If you have two tasks doing sequential accesses on two different
> >regions of the disk, the exact amount of service they receive in
> >the same amount of time depends on the transfer rate of the disk
> >in those regions, and, depending on the media, it is not always
> >the same.
> 
> Ok... you're talking about ZBR.
> 
> I'm not convinced this should be treated differently to, say, random
> vs. sequential workloads.  You still end up with reduced global
> throughput, as you've shown in the ``Short-term time guarantees''
> table.  It is an interesting case though... since the lower
> performance is not through any fault of the process, it doesn't seem
> fair to ``punish'' it.

It is indeed a valid observation, but I think we are still getting into
details here. CFQ wants to provide fair access to the drive in terms of
time; it doesn't claim to be 100% fair wrt throughput or transfer totals
at all costs. This is where fairness and real life diverge somewhat for
an all-round scheduler.
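
To make that concrete, here's a throwaway userspace sketch (my
illustration, not actual CFQ code) of the difference between charging
by time and charging by bytes on zoned media:

#include <stdio.h>

/*
 * Two sequential readers pinned to different disk zones.  The rates
 * are made-up ZBR-style numbers; slice_ms is a CFQ-like time slice.
 */
int main(void)
{
	double rate_mbs[2] = { 65.0, 40.0 };	/* outer vs inner zone */
	double slice_ms = 100.0;		/* per-slice disk time */
	double served_mb[2] = { 0.0, 0.0 };
	int i, round;

	/* hand each task 10 equal time slices, round-robin */
	for (round = 0; round < 10; round++)
		for (i = 0; i < 2; i++)
			served_mb[i] += rate_mbs[i] * slice_ms / 1000.0;

	for (i = 0; i < 2; i++)
		printf("task %d: same disk time, %.0f MB served\n",
		       i, served_mb[i]);
	return 0;
}

Both tasks get identical disk time, yet one moves 65MB to the other's
40MB. That is the 'unfairness' in question, and it is by design.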

So while it IS true that you could have 40MB/sec at one end of the disk
and 65MB/sec at the other, and thus give the process at the faster end
an 'unfair' share of the bandwidth, it's honestly mostly a theoretical
problem. I can envision some valid concerns for media streaming that
fills the entire drive, but then my solution would be to just bump the
time slice if you are not meeting deadlines. I've never heard anyone
complain about this issue.
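
In rough terms, the bump could look like this (hypothetical helper and
fields, nothing CFQ actually exposes):

#include <stdio.h>
#include <stdbool.h>

#define BASE_SLICE_MS	100
#define MAX_SLICE_MS	400

struct stream {
	unsigned int slice_ms;	/* current time slice */
	bool missed_deadline;	/* underran its buffer last period? */
};

/*
 * Grow the slice while the stream misses deadlines, decay it back
 * toward the base value once it catches up again.
 */
static void adjust_slice(struct stream *s)
{
	if (s->missed_deadline) {
		s->slice_ms *= 2;
		if (s->slice_ms > MAX_SLICE_MS)
			s->slice_ms = MAX_SLICE_MS;
	} else if (s->slice_ms > BASE_SLICE_MS) {
		s->slice_ms -= (s->slice_ms - BASE_SLICE_MS) / 2;
	}
}

int main(void)
{
	struct stream s = { .slice_ms = BASE_SLICE_MS };
	bool missed[] = { true, true, false, false, false };
	size_t i;

	for (i = 0; i < sizeof(missed) / sizeof(missed[0]); i++) {
		s.missed_deadline = missed[i];
		adjust_slice(&s);
		printf("period %zu: slice = %u ms\n", i, s.slice_ms);
	}
	return 0;
}

Just a sketch of the feedback idea, of course, not a proposal for an
actual interface.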

-- 
Jens Axboe
