Message-ID: <50574085.90104@parallels.com>
Date: Mon, 17 Sep 2012 19:23:49 +0400
From: Stanislav Kinsbursky <skinsbursky@...allels.com>
To: Chuck Lever <chuck.lever@...cle.com>
CC: "Myklebust, Trond" <Trond.Myklebust@...app.com>,
"bfields@...ldses.org" <bfields@...ldses.org>,
"linux-nfs@...r.kernel.org" <linux-nfs@...r.kernel.org>,
"devel@...nvz.org" <devel@...nvz.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"jlayton@...hat.com" <jlayton@...hat.com>
Subject: Re: [PATCH 0/3] lockd: use per-net reference-counted NSM clients
17.09.2012 19:10, Chuck Lever wrote:
>
> On Sep 17, 2012, at 6:49 AM, Stanislav Kinsbursky wrote:
>
>> 14.09.2012 23:10, Chuck Lever wrote:
>>>
>>> On Sep 14, 2012, at 1:38 PM, Myklebust, Trond wrote:
>>>
>>>> On Fri, 2012-09-14 at 13:01 -0400, Chuck Lever wrote:
>>>>> What happens if statd is restarted?
>>>>
>>>> Nothing unusual. Why?
>>>
>>> The NSM upcall transport is a potential application for TCP + softconn, now that a persistent rpc_clnt is used. It just depends on what failure mode we'd like to optimize for.
>>>
>>
>> I don't understand where the problem is.
>> Could you be more specific, please?
>
> I'm suggesting an enhancement.
>
> The change is to use TCP for the NSM upcall transport, and set RPC_TASK_SOFTCONN on the individual RPCs. The advantage of this is that the kernel could discover when statd is not running and fail the upcall immediately, rather than waiting possibly many seconds for each upcall RPC to time out.
>
> The client already has a check in the mount.nfs command to verify that statd is running, likely to avoid this lengthy timeout. Since the client already has long-standing logic to avoid it, I think the benefit would be mostly on the server side.
>
> But this change can be done at some later point.
>
Ok, thanks.
Sounds reasonable to me.
I'll do so.
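Something along these lines, I guess (just a rough, untested sketch against
fs/lockd/mon.c; the exact rpc_create_args layout and the surrounding code may
differ a bit from the current tree):

	static struct rpc_clnt *nsm_create(struct net *net)
	{
		struct sockaddr_in sin = {
			.sin_family		= AF_INET,
			.sin_addr.s_addr	= htonl(INADDR_LOOPBACK),
		};
		struct rpc_create_args args = {
			.net		= net,
			/* TCP instead of UDP so a refused connect shows up at once */
			.protocol	= XPRT_TRANSPORT_TCP,
			.address	= (struct sockaddr *)&sin,
			.addrsize	= sizeof(sin),
			.servername	= "rpc.statd",
			.program	= &nsm_program,
			.version	= NSM_VERSION,
			.authflavor	= RPC_AUTH_NULL,
			.flags		= RPC_CLNT_CREATE_NOPING,
		};

		return rpc_create(&args);
	}

and in nsm_mon_unmon():

		/* RPC_TASK_SOFTCONN: fail the upcall right away if statd is
		 * not listening, instead of retrying until the major timeout */
		status = rpc_call_sync(clnt, &msg, RPC_TASK_SOFTCONN);

With TCP the connect attempt gets refused immediately when nothing is listening
on statd's port, and RPC_TASK_SOFTCONN turns that into an immediate error for
the caller rather than many seconds of retries.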
--
Best regards,
Stanislav Kinsbursky