Message-ID: <CAHk-=whJ56_YdH-hqgAuV5WkS0r3Tq2CFX+AQGJXGxrihOLb_Q@mail.gmail.com>
Date: Sun, 28 Jan 2024 14:07:49 -0800
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: Masami Hiramatsu <mhiramat@...nel.org>, Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
LKML <linux-kernel@...r.kernel.org>,
Linux Trace Devel <linux-trace-devel@...r.kernel.org>, Christian Brauner <brauner@...nel.org>,
Ajay Kaher <ajay.kaher@...adcom.com>, Geert Uytterhoeven <geert@...ux-m68k.org>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>
Subject: Re: [PATCH] eventfs: Have inodes have unique inode numbers
On Sun, 28 Jan 2024 at 13:43, Linus Torvalds
<torvalds@...ux-foundation.org> wrote:
>
> That's just wrong.
>
> Either you look things up under your own locks, in which case the SRCU
> dance is unnecessary and pointless.
>
> Or you use refcounts.
>
> In which case SRCU is also unnecessary and pointless.
So from what I can see, you actually protect almost everything with
the eventfs_mutex, but the problem is that you then occasionally drop
that mutex in the middle.
The one valid reason for dropping it is the readdir callback, which
does need to write to user space memory.
But no, that's not a valid reason to use SRCU. It's a very *bad*
reason to use SRCU.
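
The shape of that pattern is roughly the following (just a sketch to
illustrate the problem, not the actual eventfs code - the function name,
locals and the entry lookup are all elided or made up):

        static int eventfs_iterate(struct file *file, struct dir_context *ctx)
        {
                const char *name;
                unsigned int type;
                u64 ino;

                for (;;) {
                        mutex_lock(&eventfs_mutex);
                        /* ... find the next entry's name/ino/type, or stop ... */
                        mutex_unlock(&eventfs_mutex);

                        /* dir_emit() writes to the user buffer and can fault,
                           which is why the mutex can't be held across it */
                        if (!dir_emit(ctx, name, strlen(name), ino, type))
                                break;
                }
                return 0;
        }
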
The thing is, you can fix it two ways:
- either refcount things properly, ie when you do that lookup under your lock:
        mutex_lock(&eventfs_mutex);
        ei = READ_ONCE(ti->private);
        if (ei && ei->is_freed)
                ei = NULL;
        mutex_unlock(&eventfs_mutex);
you just go "I now have a ref" to the ei, and you increment the
refcount like you should, and then you decrement it at the end when
you're done.
Btw, what's with the READ_ONCE()? You have locking.
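
Spelled out, the refcounting variant would be something like this (again
just a sketch: it assumes a kref in the eventfs_inode, and the release
function name is made up):

        mutex_lock(&eventfs_mutex);
        ei = ti->private;
        if (ei && ei->is_freed)
                ei = NULL;
        if (ei)
                kref_get(&ei->kref);    /* pin it before dropping the mutex */
        mutex_unlock(&eventfs_mutex);

        /* ... do the work that needs the mutex dropped (readdir etc) ... */

        if (ei)
                kref_put(&ei->kref, release_ei);  /* release_ei() is whatever frees it */
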
The other option is to simply re-lookup the ei when you re-get the
eventfs_mutex anyway.
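
Ie something like this (same caveats - illustrative only):

        /* don't trust any 'ei' from before the mutex was dropped */
        mutex_lock(&eventfs_mutex);
        ei = ti->private;
        if (ei && ei->is_freed)
                ei = NULL;              /* it went away while we slept */
        if (!ei) {
                mutex_unlock(&eventfs_mutex);
                return -ENOENT;         /* or whatever error fits the caller */
        }
        /* ... carry on with 'ei', still holding eventfs_mutex ... */
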
In either of those cases, the SRCU is entirely pointless. It really
looks wrong, because you seem to take that eventfs_mutex everywhere
anyway.
Linus