Date: Fri, 19 Apr 2024 17:14:52 +0200
From: Benjamin Tissoires <bentiss@...nel.org>
To: Alexei Starovoitov <alexei.starovoitov@...il.com>
Cc: Alexei Starovoitov <ast@...nel.org>, 
	Daniel Borkmann <daniel@...earbox.net>, Andrii Nakryiko <andrii@...nel.org>, 
	Martin KaFai Lau <martin.lau@...ux.dev>, Eduard Zingerman <eddyz87@...il.com>, Song Liu <song@...nel.org>, 
	Yonghong Song <yonghong.song@...ux.dev>, John Fastabend <john.fastabend@...il.com>, 
	KP Singh <kpsingh@...nel.org>, Stanislav Fomichev <sdf@...gle.com>, Hao Luo <haoluo@...gle.com>, 
	Jiri Olsa <jolsa@...nel.org>, Mykola Lysenko <mykolal@...com>, Shuah Khan <shuah@...nel.org>, 
	bpf@...r.kernel.org, linux-kernel@...r.kernel.org, linux-kselftest@...r.kernel.org
Subject: Re: [PATCH bpf-next 17/18] bpf: add bpf_wq_start

On Apr 18 2024, Alexei Starovoitov wrote:
> On Tue, Apr 16, 2024 at 04:08:30PM +0200, Benjamin Tissoires wrote:
> > again, copy/paste from bpf_timer_start().
> > 
> > Signed-off-by: Benjamin Tissoires <bentiss@...nel.org>
> > ---
> >  kernel/bpf/helpers.c | 24 ++++++++++++++++++++++++
> >  1 file changed, 24 insertions(+)
> > 
> > diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> > index e5c8adc44619..ed5309a37eda 100644
> > --- a/kernel/bpf/helpers.c
> > +++ b/kernel/bpf/helpers.c
> > @@ -2728,6 +2728,29 @@ __bpf_kfunc int bpf_wq_init(struct bpf_wq *wq, void *map, unsigned int flags)
> >  	return __bpf_async_init(async, map, flags, BPF_ASYNC_TYPE_WQ);
> >  }
> >  
> > +__bpf_kfunc int bpf_wq_start(struct bpf_wq *wq, unsigned int flags)
> > +{
> > +	struct bpf_async_kern *async = (struct bpf_async_kern *)wq;
> > +	struct bpf_work *w;
> > +	int ret = 0;
> > +
> > +	if (in_nmi())
> > +		return -EOPNOTSUPP;
> > +	if (flags)
> > +		return -EINVAL;
> > +	__bpf_spin_lock_irqsave(&async->lock);
> > +	w = async->work;
> > +	if (!w || !w->cb.prog) {
> > +		ret = -EINVAL;
> > +		goto out;
> > +	}
> > +
> > +	schedule_work(&w->work);
> > +out:
> > +	__bpf_spin_unlock_irqrestore(&async->lock);
> 
> Looks like you're not adding wq_cancel kfunc in this patch set and
> it's probably a good thing not to expose async cancel to bpf users,
> since it's a foot gun.

Honestly, I just felt the patch series was already big enough for a PoC and
a comparison with sleepable bpf_timer. But if we think it shouldn't be
added at all, that works for me too :)

> Even when we eventually add wq_cancel_sync kfunc it will not be
> removing a callback.

Yeah, I understood that bit :)

> So we can drop spinlock here.
> READ_ONCE of w and cb would be enough.
> Since they cannot get back to NULL once init-ed and cb is set.

Great, thanks for the review (and the other patches).

I'll work toward v2.
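
For v2, a rough (untested) sketch of the lock-free version, following your
READ_ONCE suggestion and reusing the field names from the current patch,
could look like this:

__bpf_kfunc int bpf_wq_start(struct bpf_wq *wq, unsigned int flags)
{
	struct bpf_async_kern *async = (struct bpf_async_kern *)wq;
	struct bpf_work *w;

	if (in_nmi())
		return -EOPNOTSUPP;
	if (flags)
		return -EINVAL;

	/*
	 * No spinlock: async->work and w->cb.prog are never reset to NULL
	 * once initialized, so READ_ONCE on both should be enough here.
	 */
	w = READ_ONCE(async->work);
	if (!w || !READ_ONCE(w->cb.prog))
		return -EINVAL;

	schedule_work(&w->work);

	return 0;
}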

Cheers,
Benjamin
