Message-Id: <159BAB74-7CB7-4DD5-AB3D-6503106E0A90@linuxhacker.ru>
Date: Tue, 7 Jun 2016 20:46:55 -0400
From: Oleg Drokin <green@...uxhacker.ru>
To: Jeff Layton <jlayton@...chiereds.net>
Cc: "J. Bruce Fields" <bfields@...ldses.org>,
linux-nfs@...r.kernel.org,
"<linux-kernel@...r.kernel.org> Mailing List"
<linux-kernel@...r.kernel.org>
Subject: Re: Files leak from nfsd in 4.7.1-rc1 (and more?)

On Jun 7, 2016, at 8:03 PM, Jeff Layton wrote:
> On Tue, 2016-06-07 at 19:39 -0400, Oleg Drokin wrote:
>> On Jun 7, 2016, at 4:04 PM, Jeff Layton wrote:
>>> So, the way this is supposed to work is that the stateids each hold a
>>> reference to the nfs4_file. They also take an fi_access reference for
>>> the open modes that they represent. This is how we know when it's OK to
>>> close one of the files in the fd[] slots. It might be interesting to
>>> also print out the fi_access counters if the slot hasn't been zeroed
>>> out if you feel up to it.
>> Ok, I have primed that and will let you know next time it triggers.
> Thanks. Maybe that will give us some idea of where to look…
Well, the counters are 0 1 0 at the warning site, which makes sense,
really.
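For concreteness, the print is along these lines (a sketch: the field names
are from the 4.7-era struct nfs4_file in fs/nfsd/state.h, and reading the
three values as read access, write access, and fi_ref is my interpretation
of them):

/* Debug print at the warning site; fp is the struct nfs4_file being
 * torn down.  fi_access[] is indexed by O_RDONLY/O_WRONLY. */
printk(KERN_WARNING "nfsd: fi_access %d %d fi_ref %d\n",
       atomic_read(&fp->fi_access[O_RDONLY]),
       atomic_read(&fp->fi_access[O_WRONLY]),
       atomic_read(&fp->fi_ref));

On that reading there is still one write access reference outstanding when
the file goes away, which fits a write open whose put never happened.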
>>> I think though that the open upgrade accounting may be wrong.
>>> __nfs4_file_get_access does an unconditional atomic_inc, so if the
>>> client does an open for write and then opens again for read/write, then
>>> we may end up with too many write fi_access references taken.
>>
>> The read-write access is all fine, I think, because when dropping read or
>> write access it also checks whether the read-write slot is filled?
>> Also, whoever did that atomic_inc in get_access would eventually do the
>> put, and that should free the file, no?
>> I was mostly thinking there's an error exit path somewhere that forgets
>> to do the put.
>> That would also explain the other leaked pieces that are not directly
>> attached to the nfs4_file.
>>
>>>
>>> That said, this code is quite subtle. I'd need to look over it in more
>>> detail before I offer up any fixes. I'd also appreciate it if anyone
>>> else wants to sanity check my analysis there.
>>>
>
> Yeah, I think you're right. It's fine since r/w opens have a distinct
> slot, even though the refcounting just tracks the number of read and
> write references. So yeah, the leak is probably in an error path
> somewhere, or maybe a race.
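For anyone following along, this is roughly the helper in question
(paraphrased from memory of the 4.7-era fs/nfsd/nfs4state.c, so treat it
as a sketch rather than verbatim source):

/* Bumps the access counters unconditionally, so an upgrade from WRITE
 * to READ|WRITE takes a second write reference.  That stays balanced
 * as long as every matching put drops the same counts -- hence the
 * suspicion of an error path that skips the put.
 */
static void __nfs4_file_get_access(struct nfs4_file *fp, u32 access)
{
	lockdep_assert_held(&fp->fi_lock);

	if (access & NFS4_SHARE_ACCESS_WRITE)
		atomic_inc(&fp->fi_access[O_WRONLY]);
	if (access & NFS4_SHARE_ACCESS_READ)
		atomic_inc(&fp->fi_access[O_RDONLY]);
}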
For lack of better ideas, I guess I'll put some tracers in the code to try
to match the refcount takers against the refcount droppers.
Hopefully that'll lead us to whoever took the reference that was never
dropped.
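Something along these lines, probably: a trace_printk() in the get/put
pair so the ftrace log shows who takes and who drops (untested sketch; the
hook points and the put helper's oflag signature are from memory):

/* In __nfs4_file_get_access(): log the file, the access bits, and the
 * caller for every reference taken. */
trace_printk("get fp=%p access=%x caller=%pS\n",
	     fp, access, __builtin_return_address(0));

/* In __nfs4_file_put_access(fp, oflag): the matching line for every
 * reference dropped. */
trace_printk("put fp=%p oflag=%d caller=%pS\n",
	     fp, oflag, __builtin_return_address(0));

Then it's just a matter of diffing the gets against the puts for the fp
that trips the warning.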