lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-Id: <20100312020526.d424f2a8.akpm@linux-foundation.org>
Date:	Fri, 12 Mar 2010 02:05:26 -0500
From:	Andrew Morton <akpm@...ux-foundation.org>
To:	Christian Ehrhardt <ehrhardt@...ux.vnet.ibm.com>
Cc:	Mel Gorman <mel@....ul.ie>, linux-mm@...ck.org,
	Nick Piggin <npiggin@...e.de>,
	Chris Mason <chris.mason@...cle.com>,
	Jens Axboe <jens.axboe@...cle.com>,
	linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 0/3] Avoid the use of congestion_wait under zone
 pressure

On Fri, 12 Mar 2010 07:39:26 +0100 Christian Ehrhardt <ehrhardt@...ux.vnet.ibm.com> wrote:

> 
> 
> Andrew Morton wrote:
> > On Mon,  8 Mar 2010 11:48:20 +0000
> > Mel Gorman <mel@....ul.ie> wrote:
> > 
> >> Under memory pressure, the page allocator and kswapd can go to sleep using
> >> congestion_wait(). In two of these cases, it may not be the appropriate
> >> action as congestion may not be the problem.
> > 
> > clear_bdi_congested() is called each time a write completes and the
> > queue is below the congestion threshold.
> > 
> > So if the page allocator or kswapd call congestion_wait() against a
> > non-congested queue, they'll wake up on the very next write completion.
> 
> Well the issue came up in all kind of loads where you don't have any 
> writes at all that can wake up congestion_wait.
> Thats true for several benchmarks, but also real workload as well e.g. A 
> backup job reading almost all files sequentially and pumping out stuff 
> via network.

Why is reclaim going into congestion_wait() at all if there's heaps of
clean reclaimable pagecache lying around?

(I don't thing the read side of the congestion_wqh[] has ever been used, btw)

> > Hence the above-quoted claim seems to me to be a significant mis-analysis and
> > perhaps explains why the patchset didn't seem to help anything?
> 
> While I might have misunderstood you and it is a mis-analysis in your 
> opinion, it fixes a -80% Throughput regression on sequential read 
> workloads, thats not nothing - its more like absolutely required :-)
> 
> You might check out the discussion with the subject "Performance 
> regression in scsi sequential throughput (iozone)	due to "e084b - 
> page-allocator: preserve PFN ordering when	__GFP_COLD is set"".
> While the original subject is misleading from todays point of view, it 
> contains a lengthy discussion about exactly when/why/where time is lost 
> due to congestion wait with a lot of traces, counters, data attachments 
> and such stuff.

Well if we're not encountering lots of dirty pages in reclaim then we
shouldn't be waiting for writes to retire, of course.

But if we're not encountering lots of dirty pages in reclaim, we should
be reclaiming pages, normally.

I could understand reclaim accidentally going into congestion_wait() if
it hit a large pile of pages which are unreclaimable for reasons other
than being dirty, but is that happening in this case?

If not, we broke it again.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ