Message-ID: <20210309134118.GA31041@axis.com>
Date: Tue, 9 Mar 2021 14:41:18 +0100
From: Vincent Whitchurch <vincent.whitchurch@...s.com>
To: ronnie sahlberg <ronniesahlberg@...il.com>
CC: Shyam Prasad N <nspmangalore@...il.com>,
CIFS <linux-cifs@...r.kernel.org>,
samba-technical <samba-technical@...ts.samba.org>,
LKML <linux-kernel@...r.kernel.org>,
Steve French <sfrench@...ba.org>, kernel <kernel@...s.com>,
Pavel Shilovsky <pshilov@...rosoft.com>
Subject: Re: [PATCH] CIFS: Prevent error log on spurious oplock break
On Tue, Mar 09, 2021 at 01:05:11AM +0100, ronnie sahlberg wrote:
> On Sun, Mar 7, 2021 at 8:52 PM Shyam Prasad N via samba-technical
> <samba-technical@...ts.samba.org> wrote:
> > The reason for rejecting the request may be a number of things like:
> > corrupted request, stale request (for some old session), or for a
> > wrong handle.
> > I don't think we should treat any of these cases as a success.
>
> I agree with Shyam here.
> We shouldn't change the return value to pretend success just to
> suppress a warning.
Thank you all for your comments. I see that everyone agrees that the
error print is useful for SMB2, so I will drop this patch.
> However, if it is common to trigger with false positives we might want
> to do something to prevent it from spamming the logs.
> These messages could be useful if we encounter bugs in our leasing
> code, or bugs in server lease code, so we shouldn't throw them away
> completely. But if false positives are common ...
>
> Some thoughts Steve and I brainstormed about could be to change the code
> in the demultiplex thread where we currently dump the packets that were
> "invalid" to maybe:
> * log once as VFS and then log any future ones as FYI
> * log once as VFS and then only make the others available via dynamic
> trace points
> * rate limit it so we only log it once every n minutes? (this is overkill?)
Thank you for the suggestions. In my case, I've only received some
reports of this error being emitted very rarely (a couple of times a month
in our stability tests). Right now it looks like the problem may only
be with a particular NAS, and we're looking into triggering oplock
breaks more often and catching the problem with some more logging.
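
If rate limiting turns out to be the preferred option, I think the generic
ratelimit helpers would be enough.  Below is a rough, untested sketch of
what I have in mind; the helper name, the message text and the 5-minute
interval are placeholders of mine, not the actual cifs code:

#include <linux/jiffies.h>
#include <linux/printk.h>
#include <linux/ratelimit.h>

/* Allow at most one error-level message every 5 minutes. */
static DEFINE_RATELIMIT_STATE(spurious_oplock_rs, 300 * HZ, 1);

/*
 * Placeholder helper; in reality this would sit in the demultiplex
 * thread where the "invalid" packet is currently dumped.
 */
static void log_spurious_oplock_break(void)
{
	if (__ratelimit(&spurious_oplock_rs))
		pr_err("CIFS: oplock break for unknown file handle\n");
	else
		pr_debug("CIFS: oplock break for unknown file handle (rate limited)\n");
}

The "log once as VFS, then only FYI" variant could be done similarly with a
static flag, but I have not tried either approach yet.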