Date:	Wed, 20 Aug 2008 11:31:31 -0700
From:	Andrew Morton <akpm@...ux-foundation.org>
To:	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
Cc:	linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	cl@...ux-foundation.org, kosaki.motohiro@...fujitsu.com,
	tokunaga.keiich@...fujitsu.com, stable@...nel.org
Subject: Re: [RFC][PATCH 0/2] Quicklist is slighly problematic.

On Wed, 20 Aug 2008 20:05:51 +0900
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com> wrote:

> Hi Christoph,
> 
> Thank you for explaining your quicklist plan at OLS.
> 
> So, I made a summary of the quicklist issues.
> If you have a bit of time, could you please read this mail and the patches?
> And, if possible, could you please tell me what you think?
> 
> 
> --------------------------------------------------------------------
> 
> Currently, quicklist stores some pages on each CPU as a cache.
> (Each CPU can hold node_free_pages/16 pages.)
> 
> It is used as a page table cache:
> exit() grows the cache, while fork() consumes it.
> 
> So, if apache-type middleware (one parent, many children) runs,
> one CPU does the fork()s while the other CPUs do the middleware work and exit().
> 
> In that case, one CPU has no page table cache at all,
> while the others hold the maximum amount.
> 
> 	QList_max = (#ofCPUs - 1) x Free / 16
> 	=> QList_max / (Free + QList_max) = (#ofCPUs - 1) / (16 + #ofCPUs - 1)
> 
> So, how much memory can quicklist consume in the worst case?
> It is proportional to #ofCPUs because it is a per-CPU cache, but the per-CPU cache size calculation doesn't take #ofCPUs into account.
> 
> 	The above calculation means:
> 
> 	 Number of CPUs per node            2    4    8   16
> 	 ==============================  ====================
> 	 QList_max / (Free + QList_max)   5.8%  16%  30%  48%
> 
> 
> Wow! Quicklist can consume about 50% of memory in the worst case.
> Even more unfortunately, it doesn't have any cache shrinking mechanism.
> This causes two problems:
> 
> 1. End users may mistake it for a memory leak.
> 	=> /proc/meminfo should display the amount of memory held in quicklists.
> 
> 2. It can trigger the OOM killer.
> 	=> The amount held in quicklists shouldn't be proportional to #ofCPUs.
> 

OK, that's a fatal bug and it's present in 2.6.25.x and 2.6.26.x.  A
serious issue.

The patches do apply to both stable kernels and I have tagged them for
backporting into them.  They're nice and small, but I didn't get a
really solid yes-this-is-what-we-should-do from Christoph?


This (from [patch 2/2]): "(Although its patch applied, quicklist can
waste 64GB on 1TB server (= 1TB / 16), it is still too much??)" is a
bit of a worry.  Yes, 64GB is too much!  But at least this is now only
a performance issue rather than a stability issue, yes?
