Message-ID: <20220810011510.c3chrli27e6ebftt@maniforge>
Date: Tue, 9 Aug 2022 20:15:10 -0500
From: David Vernet <void@...ifault.com>
To: Hao Luo <haoluo@...gle.com>
Cc: bpf@...r.kernel.org, ast@...nel.org, daniel@...earbox.net,
andrii@...nel.org, john.fastabend@...il.com, martin.lau@...ux.dev,
song@...nel.org, yhs@...com, kpsingh@...nel.org, sdf@...gle.com,
jolsa@...nel.org, tj@...nel.org, joannelkoong@...il.com,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/5] bpf: Add user-space-publisher ringbuffer map type

Hi Hao,

On Mon, Aug 08, 2022 at 11:57:53AM -0700, Hao Luo wrote:
> > Note that one thing that is not included in this patch-set is the ability
> > to kick the kernel from user-space to have it drain messages. The selftests
> > included in this patch-set currently just use progs with syscall hooks to
> > "kick" the kernel and have it drain samples from a user-producer
> > ringbuffer, but being able to kick the kernel using some other mechanism
> > that doesn't rely on such hooks would be very useful as well. I'm planning
> > on adding this in a future patch-set.
> >
>
> This could be done using iters. Basically, you can perform draining in
> bpf_iter programs and export iter links as bpffs files. Then to kick
> the kernel, you simply just read() the file.
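
Just to make sure I'm following: I think you mean something roughly like
the below (an untested sketch that uses the map type and drain helper from
this series; I'm approximating the exact drain-callback signature):

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>

/* The user-space-producer ringbuffer added in this series. */
struct {
	__uint(type, BPF_MAP_TYPE_USER_RINGBUF);
	__uint(max_entries, 256 * 1024);
} user_ringbuf SEC(".maps");

static long handle_sample(struct bpf_dynptr *dynptr, void *ctx)
{
	/* ...consume one sample published by the user-space producer... */
	return 0;
}

SEC("iter/task")
int drain_on_read(struct bpf_iter__task *ctx)
{
	/* The iter prog is invoked once per task (plus a final pass) for
	 * each read(), so only drain on the first invocation.
	 */
	if (ctx->meta->seq_num > 0)
		return 0;

	bpf_user_ringbuf_drain(&user_ringbuf, handle_sample, NULL, 0);
	return 0;
}

char _license[] SEC("license") = "GPL";

with the iter link pinned to bpffs, and user-space doing a read() on the
pinned file whenever it wants the kernel to drain?
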
Thanks for pointing this out. I agree that iters could be used this way to
kick the kernel, and perhaps that would be a sufficient solution. My
thinking, however, was that it would be useful to provide some APIs that
are a bit more ergonomic, and specifically meant to enable kicking
arbitrary "pre-attached" callbacks in a BPF prog, possibly along with some
payload from userspace.
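
As a (very hand-wavy) strawman of what that could look like from the
user-space side (none of this exists yet, and every name below is made up
for illustration, including user_ring_buffer__kick()):

/* Publish a sample with the producer APIs from this series (names
 * approximate), then explicitly "kick" the kernel so that a pre-attached
 * BPF callback runs and drains it, with no syscall-hook prog or iter
 * read() needed as the trigger.
 */
enum { MY_OP_DO_SOMETHING = 1 };

struct my_sample {
	int operation;
};

static int publish_and_kick(struct user_ring_buffer *rb)
{
	struct my_sample *sample;

	sample = user_ring_buffer__reserve(rb, sizeof(*sample));
	if (!sample)
		return -errno;

	sample->operation = MY_OP_DO_SOMETHING;
	user_ring_buffer__submit(rb, sample);

	/* Hypothetical API: backed by e.g. a new bpf() command or an
	 * ioctl() on the map fd, it would invoke the kernel-side
	 * callback(s) attached to this ringbuffer.
	 */
	return user_ring_buffer__kick(rb);
}
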
Iters do allow user-space to kick the kernel, but IMO they're really meant
for extracting data from the kernel and dumping it into user-space.
What I'm proposing is a more generalizable way of driving logic in the
kernel from user-space.

Does that make sense? Looking forward to hearing your thoughts.

Thanks,
David