lists.openwall.net - Open Source and information security mailing list archives
Date:	Fri, 17 Sep 2010 08:56:06 -0500 (CDT)
From:	Christoph Lameter <cl@...ux.com>
To:	Robert Mueller <robm@...tmail.fm>
cc:	Shaohua Li <shaohua.li@...el.com>,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Bron Gondwana <brong@...tmail.fm>,
	linux-mm <linux-mm@...ck.org>, Mel Gorman <mel@....ul.ie>
Subject: Re: Default zone_reclaim_mode = 1 on NUMA kernel is bad for
 file/email/web servers

On Fri, 17 Sep 2010, Robert Mueller wrote:

> > > I don't think this is any fault of how the software works. It's a
> > > *very* standard "pre-fork child processes, allocate incoming
> > > connections to a child process, open and mmap one or more files to
> > > read data from them". That's not exactly a weird programming model,
> > > and it's bad that the kernel handles that case so poorly with
> > > everything at defaults.
> >
> > Maybe your incoming connections always happen on one CPU and the
> > page allocations happen on that CPU, so some nodes run out of
> > memory while others have plenty free. Binding the child processes
> > to different nodes might help.
>
> There are 5000+ child processes (it's a cyrus IMAP server). Neither
> the parent nor any of the children is bound to a particular CPU. It
> uses a standard fcntl lock to make sure only one spare child at a time
> calls accept(). I don't think that's the problem.

From a first look that does seem to be the problem. The process does not
need to be bound to a particular CPU: by default the scheduler will simply
leave a single process on the same CPU. If all memory is then allocated
from that one process, you get exactly the scenario you described.

There should be multiple processes allocating memory from all processors
to take full advantage of fast local memory. If you cannot do that, then
the only choice is to trade away some performance with some form of
interleaving, either at the BIOS or the OS level. OS-level interleaving
for this particular application only would be best, because then the OS
can at least still allocate its own data in memory local to each processor.


--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
