Message-Id: <20130726155319.21e8a191456bf8a0ff724199@linux-foundation.org>
Date:	Fri, 26 Jul 2013 15:53:19 -0700
From:	Andrew Morton <akpm@...ux-foundation.org>
To:	Johannes Weiner <hannes@...xchg.org>
Cc:	Mel Gorman <mgorman@...e.de>, Rik van Riel <riel@...hat.com>,
	Andrea Arcangeli <aarcange@...hat.com>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: Re: [patch 1/3] mm: vmscan: fix numa reclaim balance problem in kswapd

On Fri, 19 Jul 2013 16:55:23 -0400 Johannes Weiner <hannes@...xchg.org> wrote:

> When the page allocator fails to get a page from all zones in its
> given zonelist, it wakes up the per-node kswapds for all zones that
> are at their low watermark.
> 
> However, with a system under load and the free page counters being
> per-cpu approximations, the observed counter value in a zone can
> fluctuate enough that the allocation fails but the kswapd wakeup is
> also skipped while the zone is still really close to the low
> watermark.
> 
> When one node misses a wakeup like this, it won't be aged before all
> the other nodes' zones are down to their low watermarks again.  And
> skipping a full aging cycle is an obvious fairness problem.
> 
> Kswapd runs until the high watermarks are restored, so it should also
> be woken when the high watermarks are not met.  This ages nodes more
> equally and creates a safety margin for the page counter fluctuation.
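
In concrete terms, the proposal moves the kswapd wakeup check from the
low watermark to the high watermark.  A standalone sketch of the two
conditions (illustrative only: struct zone_model and the function names
below are made up here; the real logic lives in mm/vmscan.c and checks
the observed free-page counter with zone_watermark_ok()):

#include <stdbool.h>

/* Toy model of a zone as the allocator sees it; free_pages stands in
 * for the per-cpu-approximated counter discussed above. */
struct zone_model {
	long free_pages;
	long low_wmark;
	long high_wmark;
};

/* Old behaviour: kswapd is woken only once the observed counter has
 * dropped below the low watermark. */
static bool should_wake_kswapd_old(const struct zone_model *z)
{
	return z->free_pages < z->low_wmark;
}

/* Proposed behaviour: wake kswapd whenever the high watermark is not
 * met.  Kswapd reclaims until the high watermark is restored, so this
 * makes the wake and stop conditions symmetric and leaves the
 * low-to-high gap as a margin for counter fluctuation. */
static bool should_wake_kswapd_new(const struct zone_model *z)
{
	return z->free_pages < z->high_wmark;
}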

Well yes, but what guarantee is there that the per-cpu counter error
problem is reliably fixed?  AFAICT this patch "fixes" it because the
gap between the low and high watermarks happens to be larger than the
per-cpu counter fluctuation, yes?  If so, there are surely all sorts of
situations where it will break again.
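
To put rough numbers on that fluctuation (figures assumed for
illustration, not taken from the patch): each CPU batches its counter
updates and folds them into the global value only when a per-cpu
threshold is crossed, so the globally visible count can lag by up to
nr_cpus * threshold pages:

#include <stdio.h>

int main(void)
{
	long nr_cpus = 64;		/* assumed machine size */
	long stat_threshold = 32;	/* assumed pages per CPU before a
					   fold; the kernel sizes this per
					   zone in mm/vmstat.c */

	/* Worst case: every CPU holds one full unfolded batch. */
	long max_drift = nr_cpus * stat_threshold;

	printf("worst-case counter drift: %ld pages (~%ld MB with 4K pages)\n",
	       max_drift, max_drift * 4 / 1024);
	return 0;
}

Whether the low-to-high watermark gap exceeds a drift of that size
depends on the zone size, which is exactly why the margin is not
guaranteed.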

To fix this reliably, shouldn't we be looking at constraining the
counter batch sizes, or at performing a full counter summation to get a
more accurate estimate?
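
A summation in that spirit, modelled loosely on the kernel's existing
zone_page_state_snapshot() (the helper and pcpu_delta below are
hypothetical; the idea is to add every CPU's unfolded delta back onto
the cheap global approximation, trading a walk over all CPUs for
accuracy):

/* pcpu_delta[i] models CPU i's not-yet-folded contribution. */
static long counter_snapshot(long global_approx,
			     const long *pcpu_delta, int nr_cpus)
{
	long value = global_approx;
	int i;

	for (i = 0; i < nr_cpus; i++)
		value += pcpu_delta[i];
	return value;
}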

