Message-ID: <20121115133408.23db9ebb@corrin.poochiereds.net>
Date: Thu, 15 Nov 2012 13:34:08 -0500
From: Jeff Layton <jlayton@...hat.com>
To: "J. Bruce Fields" <bfields@...ldses.org>
Cc: Stanislav Kinsbursky <skinsbursky@...allels.com>,
linux-nfs@...r.kernel.org, devel@...nvz.org,
Trond.Myklebust@...app.com, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH v2 00/15] NFSd state containerization
On Wed, 14 Nov 2012 17:00:36 -0500
"J. Bruce Fields" <bfields@...ldses.org> wrote:
> On Wed, Nov 14, 2012 at 06:20:59PM +0300, Stanislav Kinsbursky wrote:
> > This patch set is my first attempt to containerize NFSv4 state - i.e. to make it
> > work in a network namespace context.
> > I admit that some of this new code may be partially rewritten during future
> > NFSd containerization,
> > but the overall idea looks more or less correct to me.
> > So, the main things here are:
> > 1) making nfs4_client network namespace aware.
> > 2) Allocating all hashes (except file_hashtbl and reclaim_str_hashtbl) per
> > network namespace context on NFSd start (not init) and destroying them on
> > NFSd state shutdown.
> > 3) Allocating reclaim_str_hashtbl on legacy tracker start and destroying it
> > on legacy tracker stop.
> > 4) Moving the client_lru and close_lru lists to per-net data.
> > 5) Making the laundromat network namespace aware.
>
> These look OK and pass my tests. Jeff, do the revised recovery bits
> look OK?
>
> Have you done any testing?
>
> It'd be interesting, for example, to know if there are any pynfs tests that
> fail against the server in a non-init network namespace but pass
> normally.
>
> --b.
>
I looked over the patches and they look sane to me. I move that they go
into your -next branch to soak for a bit.
Cheers,
--
Jeff Layton <jlayton@...hat.com>