Message-ID: <20220729190605.6fxlhdetyytv34zm@kafai-mbp.dhcp.thefacebook.com>
Date: Fri, 29 Jul 2022 12:06:05 -0700
From: Martin KaFai Lau <kafai@...com>
To: David Laight <David.Laight@...lab.com>
Cc: 'Jakub Kicinski' <kuba@...nel.org>,
Stanislav Fomichev <sdf@...gle.com>,
"bpf@...r.kernel.org" <bpf@...r.kernel.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
Alexei Starovoitov <ast@...nel.org>,
Andrii Nakryiko <andrii@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
David Miller <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
"kernel-team@...com" <kernel-team@...com>,
Paolo Abeni <pabeni@...hat.com>
Subject: Re: [PATCH bpf-next 02/14] bpf: net: Avoid sock_setsockopt() taking
sk lock when called from bpf
On Fri, Jul 29, 2022 at 10:04:29AM +0000, David Laight wrote:
> From: Jakub Kicinski
> > Sent: 28 July 2022 17:56
> >
> > On Thu, 28 Jul 2022 09:31:04 -0700 Martin KaFai Lau wrote:
> > > If I understand the concern correctly, it may not be straight forward to
> > > grip the reason behind the testings at in_bpf() [ the in_task() and
> > > the current->bpf_ctx test ] ? Yes, it is a valid point.
> > >
> > > The optval.is_bpf bit can be directly traced back to the bpf_setsockopt
> > > helper and should be easier to reason about.
> >
> > I think we're saying the opposite thing. in_bpf() the context checking
> > function is fine. There is a clear parallel to in_task() and combined
> > with the capability check it should be pretty obvious what the code
> > is intending to achieve.
> >
> > sockptr_t::in_bpf which randomly implies that the lock is already held
> > will be hard to understand for anyone not intimately familiar with the
> > BPF code. Naming that bit is_locked seems much clearer.
> >
> > Which is what I believe Stan was proposing.
>
> Or make sk_setsockopt() be called after the integer value
> has been read and with the lock held.
>
> That saves any (horrid) conditional locking.
>
> Also sockptr_t should probably have been a structure with separate
> user and kernel address fields.
> Putting the length in there would (probably) save code.
>
> There then might be scope for pre-copying short user buffers
> into a kernel buffer while still allowing the requests that
> ignore the length copy directly from a user buffer.
Some optnames take their own lock, e.g. some in do_tcp_setsockopt().
Those would need to be broken down into their own locked and unlocked
functions. This applies not only to setsockopt but also to the future
[g]etsockopt() refactoring, where most optnames are not under one
lock_sock() and each optname may take or release the lock on its own.
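As a rough sketch of the kind of split that would require (the
_locked/_unlocked names are made up here for illustration only):

	/* Hypothetical: everything in here relies on the caller holding
	 * lock_sock(sk) for the whole switch.
	 */
	static int do_tcp_setsockopt_locked(struct sock *sk, int optname,
					    sockptr_t optval,
					    unsigned int optlen)
	{
		switch (optname) {
		case TCP_NODELAY:	/* and the other sk-lock-only cases */
			/* real handling omitted in this sketch */
			return 0;
		default:
			return -ENOPROTOOPT;
		}
	}

	/* Hypothetical: optnames like TCP_CONGESTION and TCP_ULP, which
	 * take their own locks today, would have to stay out here.
	 */
	static int do_tcp_setsockopt_unlocked(struct sock *sk, int optname,
					      sockptr_t optval,
					      unsigned int optlen)
	{
		switch (optname) {
		case TCP_CONGESTION:
		case TCP_ULP:
			/* real handling omitted in this sketch */
			return 0;
		default:
			return -ENOPROTOOPT;
		}
	}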
imo, that is unnecessary code churn for long switch statements like
setsockopt, without a clear benefit. The patch is not the first
conditional locking in the kernel, but I would like to hear what others
think about doing it in a helper like lock_sock_sockopt() for
set/getsockopt().
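Something along these lines, as a sketch only (the helper names and the
sockptr_t bit are not final):

	static inline void lock_sock_sockopt(struct sock *sk,
					     sockptr_t optval)
	{
		/* bpf_setsockopt() already owns the sk lock, so skip it */
		if (!optval.is_bpf)
			lock_sock(sk);
	}

	static inline void release_sock_sockopt(struct sock *sk,
						sockptr_t optval)
	{
		if (!optval.is_bpf)
			release_sock(sk);
	}

sk_setsockopt() and the future sk_getsockopt() would then call these
instead of lock_sock()/release_sock() directly.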
With the in_bpf() helper suggested by Stan, is_locked could be passed as
an additional argument instead. Then there is no need to change
sockptr_t, and sockptr_t keeps containing only the optval itself.
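For example (the signatures here are assumptions, just to show the shape
of that alternative):

	/* The caller states whether it already holds the sk lock,
	 * instead of encoding that in sockptr_t.
	 */
	int sk_setsockopt(struct sock *sk, int level, int optname,
			  sockptr_t optval, unsigned int optlen,
			  bool is_locked);

	/* From bpf_setsockopt(), which runs with the sk lock held: */
	err = sk_setsockopt(sk, SOL_SOCKET, optname, optval, optlen,
			    true /* is_locked */);

That keeps the "lock already held" decision explicit at the call site
instead of implied by a bit in the pointer type.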