Message-ID: <20091124153902.GD9595@redhat.com>
Date:	Tue, 24 Nov 2009 10:39:02 -0500
From:	Vivek Goyal <vgoyal@...hat.com>
To:	Corrado Zoccolo <czoccolo@...il.com>
Cc:	Linux-Kernel <linux-kernel@...r.kernel.org>,
	Jens Axboe <jens.axboe@...cle.com>,
	Jeff Moyer <jmoyer@...hat.com>
Subject: Re: [PATCH 3/4] cfq-iosched: idling on deep seeky sync queues

On Tue, Nov 24, 2009 at 04:24:23PM +0100, Corrado Zoccolo wrote:
> Hi Vivek,
> On Tue, Nov 24, 2009 at 3:33 PM, Vivek Goyal <vgoyal@...hat.com> wrote:
> > Hi Corrado,
> >
> > Thinking more about it, this clearing of the flag when idling expires might
> > create issues with queues which send down an initial burst of requests,
> > forcing the "deep" flag to be set, and then fall back to a low depth. In
> > that case, enable_idle will continue to be 1 and we will be driving the
> > queue depth at 1.
> >
> > This is a theoretical explanation looking at the patch. I don't know if
> > in real life we have workloads that do this frequently. At least in my
> > testing, this patch did make sure we don't switch the workload type of
> > the queue very frequently.
> >
> I thought about this scenario when developing the patch, but considered
> it too infrequent (and not so costly) to justify the added complexity
> of having a moving average.
> 
> For me, wasting idle time is something to be punished for, while
> driving the queue at a lower depth is not, if the requests are arriving
> in a timely manner.

Agreed that the penalty of idling and not dispatching anything to the
disk/array is higher than the penalty of driving a smaller queue depth. But
in general we don't want to drive shallow queue depths unless required.

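Just to make the scenario above concrete, here is a rough standalone sketch
of the interaction (a simplified model, not the actual cfq-iosched code; the
field names and the threshold are purely illustrative):

#include <stdbool.h>

/* Simplified model of the relevant per-queue state (not the real cfq_queue). */
struct model_cfqq {
	int in_flight;		/* requests currently dispatched to the device */
	bool deep;		/* set once the queue ever drives a deep depth */
	bool enable_idle;	/* idle on this queue between requests */
};

#define MODEL_DEEP_THRESHOLD	4	/* illustrative value only */

/* On dispatch: a single early burst is enough to mark the queue "deep",
 * and with the current patch nothing clears it afterwards. */
static void model_on_dispatch(struct model_cfqq *q)
{
	q->in_flight++;
	if (q->in_flight >= MODEL_DEEP_THRESHOLD)
		q->deep = true;
}

/* Seeky queues normally get idling disabled; "deep" re-enables it. Once
 * deep sticks, we keep idling and effectively serve the queue at depth 1
 * even if it has fallen back to shallow submission. */
static void model_update_idle(struct model_cfqq *q, bool seeky)
{
	q->enable_idle = !seeky || q->deep;
}
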
> 
> > Maybe keeping track of the average queue depth of a seeky process might
> > help here, like thinktime. If the average queue depth is low over a
> > period of time, we move the queue to the sync-noidle group to achieve
> > better overall throughput, and if the average queue depth is high, make
> > it sync-idle.
> >
> > Currently we seem to be taking queue depth into account only for enabling
> > the flag. We don't want too frequent switching of the "deep" flag, so some
> > kind of slow-moving average might help.
> >
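
To spell out what I meant, the slow-moving average could look roughly like
the below. This is only a standalone sketch; the field names and the 7/8
weighting are made up for illustration and not lifted from the thinktime
code.

#include <stdbool.h>

/* Exponentially weighted depth average, in the spirit of the thinktime
 * mean; a new sample moves the mean by only 1/8, so the classification
 * would flip only after a sustained change in behaviour. */
struct depth_stats {
	unsigned long depth_samples;
	unsigned long depth_mean;	/* slow-moving average of queue depth */
};

static void depth_sample(struct depth_stats *st, unsigned long depth)
{
	st->depth_samples++;
	st->depth_mean = (7 * st->depth_mean + depth) / 8;
}

/* Decision point: treat the queue as deep only if the average stays
 * above the threshold. */
static bool depth_is_deep(const struct depth_stats *st, unsigned long thresh)
{
	return st->depth_mean >= thresh;
}
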
> Averages can still change in the middle of a slice.
> A simpler way could be to reset the deep flag after a full slice, if
> the depth never reached the threshold during that slice.

That's fine. For the time being we can stick to this patch and observe
whether there are significant cases which hit this condition. If so, we can
go for your second suggestion of resetting the flag if the queue never
reaches the deeper depths again during the slice.
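
If we do go that way later, I imagine something along these lines at slice
expiry. Again this is only a standalone sketch with invented names, not a
patch against the real cfqq flags:

#include <stdbool.h>

/* Minimal per-queue state for the slice-expiry reset idea. */
struct slice_state {
	bool deep;		/* current "deep" classification */
	bool reached_depth;	/* hit the depth threshold in this slice? */
};

/* Called whenever the driven depth is observed during the slice. */
static void slice_note_depth(struct slice_state *s, int depth, int thresh)
{
	if (depth >= thresh) {
		s->deep = true;
		s->reached_depth = true;
	}
}

/* At slice expiry: if the queue never reached the threshold for a whole
 * slice, drop the deep flag so it goes back to the sync-noidle treatment
 * instead of being idled on at depth 1. */
static void slice_expire(struct slice_state *s)
{
	if (!s->reached_depth)
		s->deep = false;
	s->reached_depth = false;
}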

Thanks
Vivek
