Message-ID: <20100917140916.GA8474@brong.net>
Date:	Sat, 18 Sep 2010 00:09:16 +1000
From:	Bron Gondwana <brong@...tmail.fm>
To:	Christoph Lameter <cl@...ux.com>
Cc:	Robert Mueller <robm@...tmail.fm>,
	Shaohua Li <shaohua.li@...el.com>,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Bron Gondwana <brong@...tmail.fm>,
	linux-mm <linux-mm@...ck.org>, Mel Gorman <mel@....ul.ie>
Subject: Re: Default zone_reclaim_mode = 1 on NUMA kernel is bad for
 file/email/web servers

On Fri, Sep 17, 2010 at 08:56:06AM -0500, Christoph Lameter wrote:
> On Fri, 17 Sep 2010, Robert Mueller wrote:
> 
> > > > I don't think this is any fault of how the software works. It's a
> > > > *very* standard "pre-fork child processes, allocate incoming
> > > > connections to a child process, open and mmap one or more files to
> > > > read data from them". That's not exactly a weird programming model,
> > > > and it's bad that the kernel handles that case so poorly with
> > > > everything at its defaults.
> > >
> > > Maybe your incoming connections always arrive on one CPU and you do
> > > the page allocation on that CPU, so some nodes run out of memory
> > > while others have plenty free. Trying to bind the child processes to
> > > different nodes might help.
> >
> > There are 5000+ child processes (it's a Cyrus IMAP server). Neither
> > the parent nor any of the children is bound to any particular CPU. It
> > uses a standard fcntl lock to make sure only one spare child at a time
> > calls accept(). I don't think that's the problem.
> 
> From the first look that seems to be the problem. You do not need to be
> bound to a particular cpu, the scheduler will just leave a single process
> on the same cpu by default. If you then allocate all memory from this
> one process, you get the scenario you described.

Huh?  Which bit of a forking server makes you think one process is allocating
lots of memory?  They're opening and reading from files.  Unless you're
calling the kernel a "single process".
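For concreteness, the pattern Robert describes can be sketched like this — a
minimal, purely illustrative sketch, not the actual Cyrus code: the child
count, the lock file, and the pipe standing in for real accept()/work are all
made up here.

```python
import fcntl, os, tempfile

# Pre-fork model: each idle child blocks on an exclusive fcntl lock
# before it would call accept(), so only one child is in accept() at a
# time.  fcntl record locks are owned per-process, so after fork() each
# child acquires the lock independently.  The listening socket and the
# real accept() are elided; writing the child id to a pipe stands in.
fd, lockpath = tempfile.mkstemp()
os.close(fd)
read_fd, write_fd = os.pipe()

NCHILDREN = 4                              # illustrative; Cyrus runs thousands
for i in range(NCHILDREN):
    if os.fork() == 0:                     # child
        os.close(read_fd)
        with open(lockpath, "r+") as f:
            fcntl.lockf(f, fcntl.LOCK_EX)  # only one child gets past here
            os.write(write_fd, ("%d\n" % i).encode())  # "accept()" here
            fcntl.lockf(f, fcntl.LOCK_UN)  # next idle child's turn
        os.close(write_fd)
        os._exit(0)

os.close(write_fd)
for _ in range(NCHILDREN):                 # reap all children
    os.wait()
served = os.read(read_fd, 4096).decode().split()
os.unlink(lockpath)
print("children that took a turn:", sorted(served))
# prints: children that took a turn: ['0', '1', '2', '3']
```

Nothing in this model concentrates memory allocation in one process: each
child does its own opens, reads, and mmaps once it wins the lock.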
 
> There should be multiple processes allocating memory from all processors
> to take full advantage of fast local memory. If you cannot do that then
> the only choice is to reduce performance by some sort of interleaving
> either at the BIOS or OS level. OS-level interleaving only for this
> particular application would be best because then the OS can at least
> allocate its own data in memory local to the processors.
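For reference, the knobs being discussed look roughly like this (assuming a
Linux box with the numactl package installed; the Cyrus master path below is
illustrative, not a real install location):

```shell
# Turn off per-node reclaim, so allocations spill over to other NUMA
# nodes instead of reclaiming local page cache first (the subject line
# is about this defaulting to 1 on some NUMA machines):
sysctl -w vm.zone_reclaim_mode=0

# Or interleave just one application's memory across all nodes, leaving
# the global policy alone (path is illustrative):
numactl --interleave=all /usr/cyrus/bin/master -d
```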

In actual fact we're running 20 different Cyrus instances on this
machine, each with its own config file and own master file.  The only
"parentage" they share is that they were most likely started from a
single bash shell at one point, because we start them up from a
management script after the server is already running.

So we're talking 20 Cyrus master processes, each of which forks off
hundreds of imapd processes, each of which listens, opens mailboxes
as required, reads and writes files.

You can't seriously tell me that the scheduler is putting ALL THESE
PROCESSES on a single CPU.
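A quick way to check that claim, and where the memory actually sits
(assuming numactl and standard procps tools are available on the box):

```shell
numactl --hardware                  # free vs. total memory per NUMA node
ps -eo pid,psr,comm | grep imapd    # which CPU each imapd last ran on
numastat                            # per-node allocation hit/miss counters
```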

Bron.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
