Message-Id: <1154949908.29877.130.camel@hole.melbourne.sgi.com>
Date: Mon, 07 Aug 2006 21:25:09 +1000
From: Greg Banks <gnb@...bourne.sgi.com>
To: Andrew Morton <akpm@...l.org>
Cc: Neil Brown <neilb@...e.de>,
Linux NFS Mailing List <nfs@...ts.sourceforge.net>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 010 of 11] knfsd: make rpc threads pools numa aware

On Sun, 2006-08-06 at 19:47, Andrew Morton wrote:
> On Mon, 31 Jul 2006 10:42:34 +1000
> NeilBrown <neilb@...e.de> wrote:
>
> > knfsd: Actually implement multiple pools. On NUMA machines, allocate
> > a svc_pool per NUMA node; on SMP a svc_pool per CPU; otherwise a single
> > global pool. Enqueue sockets on the svc_pool corresponding to the CPU
> > on which the socket bh is run (i.e. the NIC interrupt CPU). Threads
> > have their cpu mask set to limit them to the CPUs in the svc_pool that
> > owns them.
> >
> > This is the patch that allows an Altix to scale NFS traffic linearly
> > beyond 4 CPUs and 4 NICs.
> >
> > Incorporates changes and feedback from Neil Brown, Trond Myklebust,
> > and Christoph Hellwig.
>
> This makes the NFS client go BUG. Simple nfsv3 workload (ie: mount, read
> stuff). Uniproc, FC5.
>
> + BUG_ON(m->mode == SVC_POOL_NONE);
>
Reproduced on RHAS4; this patch fixes it for me.
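
To spell out the trigger: a pure client only brings up RPC services
through the plain svc_create() path (that's how lockd starts), which
never initialises the global svc_pool_map, so the first socket
enqueue walks straight into the BUG_ON().  A mocked-up userspace
sketch of that sequence (the enum and the names mirror the kernel
code, the rest is scaffolding; assert() stands in for BUG_ON()):

#include <assert.h>

enum { SVC_POOL_NONE = -1, SVC_POOL_GLOBAL, SVC_POOL_PERCPU,
       SVC_POOL_PERNODE };

/* the one global map; the unpooled svc_create() path never touches it */
static struct { int mode; } svc_pool_map = { .mode = SVC_POOL_NONE };

/* old svc_pool_for_cpu(), reduced to the failing check */
static unsigned int svc_pool_for_cpu(unsigned int cpu)
{
	assert(svc_pool_map.mode != SVC_POOL_NONE);	/* BUG_ON() */
	return 0;
}

int main(void)
{
	/* lockd is up, a request arrives, the socket bh enqueues it... */
	return svc_pool_for_cpu(0);	/* ...and we abort here */
}
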
--
knfsd: Fix a regression on an NFS client where mounting an
NFS filesystem trips a spurious BUG_ON() in the server code.
Tested using cthon04 lock tests on RHAS4-U2 userspace.
Signed-off-by: Greg Banks <gnb@...bourne.sgi.com>
---
net/sunrpc/svc.c | 11 ++++++++++-
1 files changed, 10 insertions(+), 1 deletion(-)
Index: linux-2.6.18-rc2/net/sunrpc/svc.c
===================================================================
--- linux-2.6.18-rc2.orig/net/sunrpc/svc.c
+++ linux-2.6.18-rc2/net/sunrpc/svc.c
@@ -211,6 +211,11 @@ svc_pool_map_set_cpumask(unsigned int pi
 	struct svc_pool_map *m = &svc_pool_map;
 	unsigned int node;		/* or cpu */
 
+	/*
+	 * The caller checks for sv_nrpools > 1, which
+	 * implies that we've been initialized and the
+	 * map mode is not NONE.
+	 */
 	BUG_ON(m->mode == SVC_POOL_NONE);
 
 	switch (m->mode)
@@ -241,7 +246,11 @@ svc_pool_for_cpu(struct svc_serv *serv,
 	struct svc_pool_map *m = &svc_pool_map;
 	unsigned int pidx = 0;
 
-	BUG_ON(m->mode == SVC_POOL_NONE);
+	/*
+	 * SVC_POOL_NONE happens in a pure client when
+	 * lockd is brought up, so silently treat it the
+	 * same as SVC_POOL_GLOBAL.
+	 */
 
 	switch (m->mode) {
 	case SVC_POOL_PERCPU:
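
For clarity, the behaviour after the patch: with the BUG_ON() gone,
SVC_POOL_NONE simply falls out of the switch with pidx still 0,
i.e. exactly what SVC_POOL_GLOBAL would have done, so the caller ends
up on the single global pool.  A standalone sketch of the patched
logic (userspace mock; cpu_to_node() is faked, and it returns a pool
index rather than a struct svc_pool * to keep the sketch small):

#include <stdio.h>

enum { SVC_POOL_NONE = -1, SVC_POOL_GLOBAL, SVC_POOL_PERCPU,
       SVC_POOL_PERNODE };

struct svc_pool_map {
	int mode;
	unsigned int *to_pool;	/* cpu or node -> pool index */
};

static struct svc_pool_map svc_pool_map = { .mode = SVC_POOL_NONE };

static unsigned int cpu_to_node(unsigned int cpu)
{
	return cpu / 2;		/* fake topology: two cpus per node */
}

static unsigned int svc_pool_for_cpu(unsigned int cpu)
{
	struct svc_pool_map *m = &svc_pool_map;
	unsigned int pidx = 0;

	switch (m->mode) {
	case SVC_POOL_PERCPU:
		pidx = m->to_pool[cpu];
		break;
	case SVC_POOL_PERNODE:
		pidx = m->to_pool[cpu_to_node(cpu)];
		break;
	/* SVC_POOL_NONE, SVC_POOL_GLOBAL: pidx stays 0 */
	}
	return pidx;
}

int main(void)
{
	printf("uninitialised map: cpu 3 -> pool %u\n", svc_pool_for_cpu(3));
	return 0;
}
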
Greg.
--
Greg Banks, R&D Software Engineer, SGI Australian Software Group.
I don't speak for SGI.