Message-ID: <20200428203843.pe7d4zbki2ihnq2m@ast-mbp.dhcp.thefacebook.com>
Date: Tue, 28 Apr 2020 13:38:43 -0700
From: Alexei Starovoitov <alexei.starovoitov@...il.com>
To: Andrii Nakryiko <andrii.nakryiko@...il.com>
Cc: Andrii Nakryiko <andriin@...com>, bpf <bpf@...r.kernel.org>,
Networking <netdev@...r.kernel.org>,
Alexei Starovoitov <ast@...com>,
Daniel Borkmann <daniel@...earbox.net>,
Kernel Team <kernel-team@...com>
Subject: Re: [PATCH v2 bpf-next 02/10] bpf: allocate ID for bpf_link
On Tue, Apr 28, 2020 at 11:56:52AM -0700, Andrii Nakryiko wrote:
> On Tue, Apr 28, 2020 at 10:31 AM Alexei Starovoitov
> <alexei.starovoitov@...il.com> wrote:
> >
> > On Mon, Apr 27, 2020 at 10:49:36PM -0700, Andrii Nakryiko wrote:
> > > +int bpf_link_settle(struct bpf_link_primer *primer)
> > > +{
> > > + /* make bpf_link fetchable by ID */
> > > + WRITE_ONCE(primer->link->id, primer->id);
> >
> > what purpose does WRITE_ONCE serve here?
>
> To prevent the compiler from reordering this write with fd_install(), so
> that by the time the FD is exposed to user-space, the link has its ID
> properly set.
if you wanted a compiler barrier then it should have been barrier(),
but even that wouldn't be enough, since patches 2 and 3 race to read and
write that 32-bit int.
> > bpf_link_settle can only be called at the end of attach.
> > If attach is slow then a parallel get_fd_by_id can get a new FD
> > instance for a link with zero id.
> > In such a case the deref of link->id will race with the above assignment?
>
> Yes, it does race, but the reader can either see zero and assume the
> bpf_link is not ready (which is fine to do) or see the correct link ID
> and proceed to create a new FD for it. By the time we context-switch
> back to user-space and return the link FD, the ID will definitely be
> visible due to the context switch and its associated memory barriers.
> If anyone is guessing the FD and trying GET_FD_BY_ID before the
> LINK_CREATE syscall returns -- then returning failure due to the link
> ID not yet being set is totally fine, IMO.
>
> > But I don't see READ_ONCE in patch 3.
> > It's under link_idr_lock there.
>
> It doesn't need READ_ONCE because the read happens under a spinlock, so
> the compiler can't reorder it with code outside of the spinlock.
The spin_lock in patch 3 doesn't guarantee that the link->id deref in
that patch will be atomic.
So the WRITE_ONCE into link->id in patch 2 still races with the plain
read in patch 3.
Just wait for KCSAN to start complaining about it.
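IOW the reader in patch 3 needs something like this (a sketch, assuming
the link_idr and bpf_link_inc_not_zero names from this series):

    spin_lock_bh(&link_idr_lock);
    link = idr_find(&link_idr, id);
    /* READ_ONCE() pairs with WRITE_ONCE() in bpf_link_settle();
     * zero means the link is not settled yet, so pretend it
     * doesn't exist
     */
    if (link && READ_ONCE(link->id))
            link = bpf_link_inc_not_zero(link);
    else
            link = ERR_PTR(-ENOENT);
    spin_unlock_bh(&link_idr_lock);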
> > How about grabbing link_idr_lock here as well ?
> > otherwise it's still racy since WRITE_ONCE is not paired.
>
> As indicated above, it seems unnecessary? But I also don't object
> strongly; I don't expect this lock for links to be a major bottleneck
> or anything like that.
Either READ_ONCE has to be paired with WRITE_ONCE
(or, even better, smp_load_acquire with smp_store_release),
or use spin_lock on both sides.
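With the stronger variant that would look like:

    /* writer side (bpf_link_settle): publish the ID with release
     * semantics */
    smp_store_release(&link->id, id);

    /* reader side (get_fd_by_id): acquire pairs with the release
     * above */
    id = smp_load_acquire(&link->id);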
> >
> > The mix of spin_lock_irqsave(&link_idr_lock)
> > and spin_lock_bh(&link_idr_lock) looks weird.
> > We do the same for map_idr because maps have complicated freeing logic,
> > but prog_idr is consistent.
> > If you see the need for the irqsave variant then please use it in all cases.
>
> No, my bad, I don't see any need to intermix them. I'll stick to
> spin_lock_bh, thanks for catching!
I think that should be fine.
Please double check that the situation described in
commit 930651a75bf1 ("bpf: do not disable/enable BH in bpf_map_free_id()")
doesn't apply to link_idr.
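That commit dealt with map teardown being reachable with IRQs disabled,
where the BH re-enable in spin_unlock_bh() splats. A link_idr counterpart
like the sketch below is only ok as long as bpf_link teardown never runs
in such a context:

    static void bpf_link_free_id(int id)
    {
            if (!id)
                    return;

            /* spin_lock_bh() is fine here only if this is never
             * called with IRQs disabled; see commit 930651a75bf1
             * for the map_idr counterexample
             */
            spin_lock_bh(&link_idr_lock);
            idr_remove(&link_idr, id);
            spin_unlock_bh(&link_idr_lock);
    }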