Message-ID: <20200428224309.pod67otmp77mcspp@ast-mbp.dhcp.thefacebook.com>
Date: Tue, 28 Apr 2020 15:43:09 -0700
From: Alexei Starovoitov <alexei.starovoitov@...il.com>
To: Andrii Nakryiko <andrii.nakryiko@...il.com>
Cc: Andrii Nakryiko <andriin@...com>, bpf <bpf@...r.kernel.org>,
Networking <netdev@...r.kernel.org>,
Alexei Starovoitov <ast@...com>,
Daniel Borkmann <daniel@...earbox.net>,
Kernel Team <kernel-team@...com>
Subject: Re: [PATCH v2 bpf-next 02/10] bpf: allocate ID for bpf_link
On Tue, Apr 28, 2020 at 03:33:07PM -0700, Andrii Nakryiko wrote:
> On Tue, Apr 28, 2020 at 1:38 PM Alexei Starovoitov
> <alexei.starovoitov@...il.com> wrote:
> >
> > On Tue, Apr 28, 2020 at 11:56:52AM -0700, Andrii Nakryiko wrote:
> > > On Tue, Apr 28, 2020 at 10:31 AM Alexei Starovoitov
> > > <alexei.starovoitov@...il.com> wrote:
> > > >
> > > > On Mon, Apr 27, 2020 at 10:49:36PM -0700, Andrii Nakryiko wrote:
> > > > > +int bpf_link_settle(struct bpf_link_primer *primer)
> > > > > +{
> > > > > + /* make bpf_link fetchable by ID */
> > > > > + WRITE_ONCE(primer->link->id, primer->id);
> > > >
> > > > what does WRITE_ONCE serve here?
> > >
> > > To prevent the compiler from reordering this write with fd_install,
> > > so that by the time the FD is exposed to user-space, the link has its
> > > ID properly set.
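[ In other words, the intended order in bpf_link_settle() is (sketch;
  primer->fd and primer->file are assumed field names not shown in the
  quoted hunk):

	/* 1) publish the ID, 2) only then expose the FD to user-space */
	WRITE_ONCE(primer->link->id, primer->id);
	fd_install(primer->fd, primer->file);
]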
> >
> > if you wanted a compiler barrier then it should have been barrier(),
> > but even that wouldn't be enough, since patches 2 and 3 race to read
> > and write that 32-bit int.
> >
> > > > bpf_link_settle can only be called at the end of attach.
> > > > If attach is slow then a parallel get_fd_by_id can get a new FD
> > > > instance for the link with a zero id.
> > > > In that case, won't the deref of link->id race with the above
> > > > assignment?
> > >
> > > Yes, it does race, but the reader can either see zero and assume the
> > > bpf_link is not ready (which is fine to do) or see the correct link
> > > ID and proceed to create a new FD for it. By the time we
> > > context-switch back to user-space and return the link FD, the ID
> > > will definitely be visible due to the context switch and its
> > > associated memory barriers. If anyone is guessing the FD and trying
> > > to do GET_FD_BY_ID before the LINK_CREATE syscall returns -- then
> > > failing because the link ID is not yet set is totally fine, IMO.
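[ For reference, the patch 3 reader is roughly the following (sketch;
  bpf_link_inc_not_zero is a placeholder name for the refcount-grabbing
  helper, error codes are illustrative):

	spin_lock_bh(&link_idr_lock);
	link = idr_find(&link_idr, id);
	/* link->id == 0 means the link is not "settled" yet */
	if (link && link->id)
		link = bpf_link_inc_not_zero(link);
	else
		link = ERR_PTR(link ? -EAGAIN : -ENOENT);
	spin_unlock_bh(&link_idr_lock);
]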
> > >
> > > > But I don't see READ_ONCE in patch 3.
> > > > It's under link_idr_lock there.
> > >
> > > It doesn't need READ_ONCE because it does the read under a spinlock,
> > > so the compiler can't reorder it with code outside of the spinlock.
> >
> > The spin_lock in patch 3 doesn't guarantee that the link->id deref in
> > that patch will be atomic.
>
> What do you mean by "atomic" here? Are you saying that we can get a
> torn read of a u32 on some architectures?
The compiler doesn't guarantee that a plain 32-bit load/store will stay
a single 32-bit access, even on 64-bit archs.
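For example (illustrative only, not from the patch):

	/* A plain access like
	 *	link->id = primer->id;
	 * may legally be split, fused, or redone by the compiler, so a
	 * concurrent plain reader can observe a torn value.  The _ONCE
	 * macros force a single access of the full width:
	 */
	WRITE_ONCE(link->id, primer->id);	/* writer: one 32-bit store */
	id = READ_ONCE(link->id);		/* reader: one 32-bit load */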
> If that were the case, neither
> WRITE_ONCE/READ_ONCE nor smp_store_release/smp_load_acquire would
> help.
What do you mean? They will; that's the point of these macros.
> But I don't think that's the case; we have code in the verifier that
> does a similar racy u32 write/read (it uses READ_ONCE/WRITE_ONCE) and
> it seems to be working fine.
You mean in btf_resolve_helper_id()?
What kind of race do you see there?
> > So the WRITE_ONCE into link->id in patch 2 still races with the plain
> > read in patch 3.
> > Just wait and see KCSAN complain about it.
> >
> > > > How about grabbing link_idr_lock here as well?
> > > > Otherwise it's still racy, since the WRITE_ONCE is not paired.
> > >
> > > As indicated above, it seems unnecessary? But I also don't object
> > > strongly; I don't expect this lock for links to be a major bottleneck
> > > or anything like that.
> >
> > Either READ_ONCE has to be paired with WRITE_ONCE
> > (or, even better, smp_load_acquire with smp_store_release),
> > or use a spin_lock.
>
> Sure, let me use smp_load_acquire/smp_store_release.
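That pairing would look like (sketch, not from the patch):

	/* writer in bpf_link_settle(): publish the ID with release semantics */
	smp_store_release(&primer->link->id, primer->id);

	/* reader in get_fd_by_id: the acquire pairs with the release above */
	id = smp_load_acquire(&link->id);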
Since there are locks in the other places, though, I would use
spin_lock_bh to update the id as well.
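Roughly (sketch; primer->fd and primer->file are assumed field names not
shown in the quoted hunk, error handling omitted):

	int bpf_link_settle(struct bpf_link_primer *primer)
	{
		/* make bpf_link fetchable by ID */
		spin_lock_bh(&link_idr_lock);
		primer->link->id = primer->id;
		spin_unlock_bh(&link_idr_lock);
		/* make bpf_link fetchable by FD */
		fd_install(primer->fd, primer->file);
		return primer->fd;
	}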
>
> >
> > > >
> > > > The mix of spin_lock_irqsave(&link_idr_lock)
> > > > and spin_lock_bh(&link_idr_lock) looks weird.
> > > > We do the same for map_idr because maps have complicated freeing logic,
> > > > but prog_idr is consistent.
> > > > If you see the need for the irqsave variant then please use it in
> > > > all cases.
> > >
> > > No, my bad, I don't see any need to intermix them. I'll stick to
> > > spin_lock_bh. Thanks for catching that!
> >
> > I think that should be fine.
> > Please double-check that the situation described in
> > commit 930651a75bf1 ("bpf: do not disable/enable BH in bpf_map_free_id()")
> > doesn't apply to link_idr.
>
> If I understand what the problem was for BPF maps, we were taking the
> lock and trying to disable softirqs while softirqs were already
> disabled by the caller. This doesn't seem to be the case for links, as
> far as I can tell. So I'll just go with spin_lock_bh() everywhere for
> consistency.
Sounds good.