Open Source and information security mailing list archives
 
Date:	Thu, 14 Jan 2010 14:49:02 +0100
From:	Jens Axboe <jens.axboe@...cle.com>
To:	Vivek Goyal <vgoyal@...hat.com>
Cc:	Shaohua Li <shaohua.li@...el.com>,
	Corrado Zoccolo <czoccolo@...il.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"Zhang, Yanmin" <yanmin.zhang@...el.com>
Subject: Re: [RFC]cfq-iosched: quantum check tweak

On Thu, Jan 14 2010, Vivek Goyal wrote:
> On Thu, Jan 14, 2010 at 12:16:24PM +0800, Shaohua Li wrote:
> > On Wed, Jan 13, 2010 at 07:18:07PM +0800, Vivek Goyal wrote:
> > > On Wed, Jan 13, 2010 at 04:17:35PM +0800, Shaohua Li wrote:
> > > [..]
> > > > > >  static bool cfq_may_dispatch(struct cfq_data *cfqd, struct cfq_queue *cfqq)
> > > > > >  {
> > > > > >  	unsigned int max_dispatch;
> > > > > > @@ -2258,7 +2273,10 @@ static bool cfq_may_dispatch(struct cfq_
> > > > > >  	if (cfqd->sync_flight && !cfq_cfqq_sync(cfqq))
> > > > > >  		return false;
> > > > > >  
> > > > > > -	max_dispatch = cfqd->cfq_quantum;
> > > > > > +	max_dispatch = cfqd->cfq_quantum / 2;
> > > > > > +	if (max_dispatch < CFQ_SOFT_QUANTUM)
> > > > > 
> > > > > We don't have to hardcode CFQ_SOFT_QUANTUM or in fact we don't need it. We can
> > > > > derive the soft limit from hard limit (cfq_quantum). Say soft limit will be
> > > > > 50% of cfq_quantum value.
> > > > I'm hoping this doesn't surprise users. Say cfq_quantum is set to 7; then we
> > > > start throttling at 3 requests. Keeping CFQ_SOFT_QUANTUM at least preserves
> > > > compatibility with the old behavior. Am I overthinking this?
> > > >  
> > > 
> > > I would not worry too much about that. If you are really worried about
> > > it, then create a Documentation/block/cfq-iosched.txt and document
> > > how cfq_quantum works, so that users know that cfq_quantum is the upper
> > > hard limit and the internal soft limit is cfq_quantum/2.
> > Good idea. It looks like we don't document the cfq tunables; I'll try to do that later.
> > 
> > Currently a queue can only dispatch up to 4 requests if there are other queues.
> > This isn't optimal; the device can handle more requests. For example, AHCI can
> > handle 31 requests. I understand the limit is there for fairness, but we could
> > apply a tweak: if the queue still has a lot of its slice left, it sounds like
> > we could ignore the limit.
> 
> Hi Shaohua,
> 
> This looks much better, though I find the usage of "slice_idle" as a
> measure of service time a little unintuitive. In particular, I did some
> testing with slice_idle=0; in that case, we will allow dispatch of 8
> requests from each queue even if the slice is about to expire.

I agree this is problematic, but I also think we need to do something
about controlling queue depth. For most users, keeping it low is what
they want: performance doesn't change much with higher depths; you only
pay a latency cost when switching to a new queue. And they don't want
that.

But for other hardware, driving up the queue depth to what the hardware
supports (potentially) can be a big win, and CFQ definitely needs to be
able to do that.

Write caches are again problematic in this area... For reads, and for
writes with write-through caching, just looking at what this cfqq has
already dispatched and completed in this slice would be sufficient. It
could even be carried over to the next slice as a seed value, so you
could dispatch more earlier. What we want to avoid is stuffing the
device queue with tons of writes that complete immediately, only to
move the penalty of those requests into the slices of other queues.

-- 
Jens Axboe

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
