[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <b175faae4bb98d3379a8642fe5f4e00587c3a734.camel@kernel.org>
Date: Fri, 26 Apr 2019 13:30:53 -0400
From: Jeff Layton <jlayton@...nel.org>
To: Al Viro <viro@...iv.linux.org.uk>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Ilya Dryomov <idryomov@...il.com>, ceph-devel@...r.kernel.org,
Linux List Kernel Mailing <linux-kernel@...r.kernel.org>
Subject: Re: [GIT PULL] Ceph fixes for 5.1-rc7
On Fri, 2019-04-26 at 17:50 +0100, Al Viro wrote:
> On Fri, Apr 26, 2019 at 12:25:03PM -0400, Jeff Layton wrote:
>
> > It turns out though that using name_snapshot from ceph is a bit more
> > tricky. In some cases, we have to call ceph_mdsc_build_path to build up
> > a full path string. We can't easily populate a name_snapshot from there
> > because struct external_name is only defined in fs/dcache.c.
>
> Explain, please. For ceph_mdsc_build_path() you don't need name
> snapshots at all and existing code is, AFAICS, just fine, except
> for pointless pr_err() there.
>
Eventually we have to pass back the result of all the
build_dentry_path() shenanigans to create_request_message(), and then
free whatever that result is after using it.
Today we can get back a string+length from ceph_mdsc_build_path or
clone_dentry_name, or we might get direct pointers into the dentry if
the situation allows for it.
Now we want to rip out clone_dentry_name() and start using
take_dentry_name_snapshot(). That returns a name_snapshot that we'll
need to pass back to create_request_message. It will need to deal with
the fact that it could get one of those instead of just a string+length.
My original thought was to always pass back a name_snapshot, but that
turns out to be difficult because its innards are not public. The other
potential solutions that I've tried make this code look even worse than
it already is.
> I _probably_ would take allocation out of the loop (e.g. make it
> __getname(), called unconditionally) and turned it into the
> d_path.c-style read_seqbegin_or_lock()/need_seqretry()/done_seqretry()
> loop, so that the first pass would go under rcu_read_lock(), while
> the second (if needed) would just hold rename_lock exclusive (without
> bumping the refcount). But that's a matter of (theoretical) livelock
> avoidance, not the locking correctness for ->d_name accesses.
>
Yeah, that does sound better. I want to think about this code a bit
> Oh, and
> *base = ceph_ino(d_inode(temp));
> *plen = len;
> probably belongs in critical section - _that_ might be a correctness
> issue, since temp is not held by anything once you are out of there.
>
Good catch. I'll fix that up.
> > I could add some routines to do this, but it feels a lot like I'm
> > abusing internal dcache interfaces. I'll keep thinking about it though.
> >
> > While we're on the subject though:
> >
> > struct external_name {
> > union {
> > atomic_t count;
> > struct rcu_head head;
> > } u;
> > unsigned char name[];
> > };
> >
> > Is it really ok to union the count and rcu_head there?
> >
> > I haven't trawled through all of the code yet, but what prevents someone
> > from trying to access the count inside an RCU critical section, after
> > call_rcu has been called on it?
>
> The fact that no lockless accesses to ->count are ever done?
Thanks,
--
Jeff Layton <jlayton@...nel.org>
Powered by blists - more mailing lists