Message-Id: <200910271821.18521.elendil@planet.nl>
Date:	Tue, 27 Oct 2009 18:21:13 +0100
From:	Frans Pop <elendil@...net.nl>
To:	Chris Mason <chris.mason@...cle.com>, Mel Gorman <mel@....ul.ie>,
	David Rientjes <rientjes@...gle.com>,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	"Rafael J. Wysocki" <rjw@...k.pl>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Kernel Testers List <kernel-testers@...r.kernel.org>,
	Pekka Enberg <penberg@...helsinki.fi>,
	Reinette Chatre <reinette.chatre@...el.com>,
	Bartlomiej Zolnierkiewicz <bzolnier@...il.com>,
	Karol Lewandowski <karol.k.lewandowski@...il.com>,
	Mohamed Abbas <mohamed.abbas@...el.com>,
	Jens Axboe <jens.axboe@...cle.com>,
	"John W. Linville" <linville@...driver.com>, linux-mm@...ck.org
Subject: Re: [Bug #14141] order 2 page allocation failures in iwlagn

On Tuesday 27 October 2009, Chris Mason wrote:
> On Tue, Oct 27, 2009 at 03:52:24PM +0000, Mel Gorman wrote:
> > > So, after the move to async/sync, a lot more pages are getting
> > > queued for writeback - more than three times the number of pages are
> > > queued for writeback with the vanilla kernel. This amount of
> > > congestion might be why direct reclaimers and kswapd's timings have
> > > changed so much.
> >
> > Or more accurately, the vanilla kernel has queued up a lot more pages
> > for IO than when the patch is reverted. I'm not seeing yet why this
> > is.
>
> [ sympathies over confusion about congestion...lots of variables here ]
>
> If wb_kupdate has been able to queue more writes, it is because the
> congestion logic isn't stopping it.  We have congestion_wait(), but
> before calling it the writeback paths first ask the device whether it
> is congested, and back off only if the answer is yes.
>
> Ideally, direct reclaim will never do writeback.  We want it to be able
> to find clean pages that kupdate and friends have already processed.
>
> Waiting for congestion is a funny thing: it only tells us the device has
> managed to finish some IO or that a timeout has passed.  Neither event
> has any relation to figuring out whether the IO for reclaimable pages has
> finished.
>
> One option is to have the VM remember the hashed waitqueue for one of
> the pages it direct reclaims and then wait on it.
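
For readers who don't have the writeback code in front of them, the back-off
Chris describes looks roughly like this (a minimal sketch against a 2.6.31-ish
tree, not the actual wb_kupdate code; the function name is made up):

#include <linux/backing-dev.h>

/* Illustrative only: check for congestion before queueing more IO. */
static void maybe_queue_more_writeback(struct backing_dev_info *bdi)
{
	if (bdi_write_congested(bdi)) {
		/* Device says it is busy: sleep for up to 100ms, or until
		 * the block layer clears the congestion bit. */
		congestion_wait(BLK_RW_ASYNC, HZ/10);
		return;
	}
	/* Not congested: keep queueing dirty pages for writeback here. */
}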

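The alternative Chris mentions, remembering one of the pages direct reclaim
just sent to writeback and waiting for its IO instead of for device-wide
congestion, would be along these lines (again only a sketch and a made-up
helper name, but wait_on_page_writeback() really does sleep on the page's
hashed waitqueue):

#include <linux/pagemap.h>
#include <linux/page-flags.h>

/* Illustrative only: wait for one specific reclaimed page's writeback. */
static void wait_for_reclaimed_page(struct page *page)
{
	if (page && PageWriteback(page))
		/* Sleeps on the hashed waitqueue for this page until its
		 * PG_writeback bit is cleared by end_page_writeback(). */
		wait_on_page_writeback(page);
}
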
What people should be aware of is the behavior I see on my system at this 
point. I've already mentioned this in other mails, but it's probably worth 
repeating here.

While gitk is reading commits with vanilla .31 and .32 kernels, there is at 
some point a fairly long period (10-20 seconds) during which I see:
- a completely frozen desktop, including a frozen mouse cursor
- very little disk activity (the HD LED flashes very briefly, less than
  once per second)
- reading of commits stops completely
- no music.
After that there is a period (another 5-15 seconds) with a huge amount of 
disk activity, during which the system gradually becomes responsive again 
and the count of commits read in gitk starts increasing again (without a 
jump in the counter, which confirms that no commits were read during the 
freeze).

I cannot really tell what the system is doing during those freezes. Because 
of the frozen desktop I cannot, for example, see CPU usage. I suspect that, 
as there is hardly any disk activity, the system must be reorganizing RAM 
or something. But it seems quite bad that this work gets "bunched up" 
instead of happening more gradually.

With the congestion_wait() change reverted I never see these freezes, only 
much more normal minor latencies (< 2 seconds; mostly < 0.5 seconds), which 
are probably unavoidable during heavy swapping.

Hth,
FJP
