Message-ID: <20100917230148.GA10636@brong.net>
Date: Sat, 18 Sep 2010 09:01:48 +1000
From: Bron Gondwana <brong@...tmail.fm>
To: Christoph Lameter <cl@...ux.com>
Cc: Bron Gondwana <brong@...tmail.fm>,
Robert Mueller <robm@...tmail.fm>,
Shaohua Li <shaohua.li@...el.com>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
linux-mm <linux-mm@...ck.org>, Mel Gorman <mel@....ul.ie>
Subject: Re: Default zone_reclaim_mode = 1 on NUMA kernel is bad for
file/email/web servers
On Fri, Sep 17, 2010 at 09:22:00AM -0500, Christoph Lameter wrote:
> On Sat, 18 Sep 2010, Bron Gondwana wrote:
>
> > > From the first look that seems to be the problem. You do not need to be
> > > bound to a particular cpu, the scheduler will just leave a single process
> > > on the same cpu by default. If you then allocate all memory only from this
> > > process then you get the scenario that you described.
> >
> > Huh? Which bit of forking server makes you think one process is allocating
> > lots of memory? They're opening and reading from files. Unless you're
> > calling the kernel a "single process".
>
> I have no idea what your app does.
Ok - Cyrus IMAPd has been around for ages. It's an open source email
server built on a very traditional process-per-connection model:
* a master process which reads config files and manages the other processes
* multiple imapd processes, one per connection
* multiple pop3d processes, one per connection
* multiple lmtpd processes, one per connection
* periodic "cleanup" processes.
Each of these is started by the lightweight master forking and then
execing the appropriate daemon.
In our configuration we run 20 separate "master" processes, each
managing a single disk partition's worth of email. The reason
for this is reduced locking contention for the central mailboxes
database, and also better replication concurrency, because each
instance runs a single replication process - so replication is
sequential.
> The data that I glanced over looks as
> if most allocations happen for a particular memory node
Sorry, which data?
> and since the
> memory is optimized to be local to that node other memory is not used
> intensively. This can occur because of allocations through one process /
> thread that is always running on the same cpu and therefore always
> allocates from the memory node local to that cpu.
As Rob said, there are thousands of independent processes, each opening
a single mailbox (3 separate metadata files plus possibly hundreds of
individual email files). It's likely that different processes will open
the same mailbox over time - for example an email client opening multiple
concurrent connections, and at the same time an lmtpd connecting and
delivering new emails to the mailbox.
> It can also happen f.e. if a driver always allocates memory local to the
> I/O bus that it is using.
None of what we're doing is super weird advanced stuff; it's a vanilla
forking daemon where a single process runs and does stuff on behalf of
a user. The only slightly interesting things:
1) each "service" has a single lock file, and all the idle processes of
that type (i.e. imapd) block on that lock while they're waiting for
a connection. This is to avoid the thundering herd problem on operating
systems which aren't nice about it. The winner does the accept and handles
the connection.
2) once it's finished processing a request, the process will wait for
another connection rather than closing.
Nothing sounds like what you're talking about (one giant process that's
all on one CPU), and I don't know why you keep talking about it. It's
nothing like what we're running on these machines.
Bron.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/