Message-ID: <20110720191858.GO5349@suse.de>
Date:	Wed, 20 Jul 2011 20:18:58 +0100
From:	Mel Gorman <mgorman@...e.de>
To:	Christoph Lameter <cl@...ux.com>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Minchan Kim <minchan.kim@...il.com>,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] mm: page allocator: Initialise ZLC for first zone
 eligible for zone_reclaim

On Wed, Jul 20, 2011 at 01:08:46PM -0500, Christoph Lameter wrote:
> Hmmm... Looking at get_page_from_freelist and considering speeding that up
> in general: Could we move the whole watermark logic into the slow path?
> Only check when we refill the per cpu queues?

Each CPU list can hold 186 pages (on my currently running
kernel at least) which is 744K. As I'm running with THP enabled,
the min watermark is 25852K so with 34 or more CPUs, there is a
risk that a zone would be fully depleted due to lack of watermark
checking. It's a bit unlikely that 34 CPUs would be on one node but the
risk is there. Without THP, the min watermark would have been something
like 32K where it would be much easier to accidentally consume all
memory.
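For reference, the arithmetic behind those figures can be sketched as follows (a sketch assuming 4K pages; the 186-page per-CPU capacity and the 25852K watermark are the numbers quoted above):

```python
# Back-of-envelope check of the depletion risk described above.
PAGE_SIZE_KB = 4          # assumption: 4K pages
PCP_LIST_PAGES = 186      # per-CPU list capacity quoted in the mail

per_cpu_kb = PCP_LIST_PAGES * PAGE_SIZE_KB     # memory one CPU's list can hold
min_watermark_kb = 25852                        # THP-enabled min watermark

# Number of CPUs whose per-CPU lists, filled together, cover the watermark
cpus_to_deplete = min_watermark_kb / per_cpu_kb

print(per_cpu_kb)                  # 744
print(round(cpus_to_deplete, 1))   # ~34.7
```

So roughly 34-35 CPUs refilling their lists at once without watermark checks could eat through the entire min watermark on a node, which is why the checks cannot simply move to the slow path.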

Yes, moving the watermark checks to the slow path would be faster
but under some conditions, the system will lock up.

-- 
Mel Gorman
SUSE Labs