Message-ID: <c323bce9-a04e-b1c3-580a-783fde259d60@fb.com>
Date: Wed, 2 Mar 2022 13:23:35 -0800
From: Yonghong Song <yhs@...com>
To: Hao Luo <haoluo@...gle.com>, Alexei Starovoitov <ast@...nel.org>,
Andrii Nakryiko <andrii@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>
Cc: Martin KaFai Lau <kafai@...com>, Song Liu <songliubraving@...com>,
KP Singh <kpsingh@...nel.org>,
Shakeel Butt <shakeelb@...gle.com>,
Joe Burton <jevburton.kernel@...il.com>,
Tejun Heo <tj@...nel.org>, joshdon@...gle.com, sdf@...gle.com,
bpf@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH bpf-next v1 4/9] bpf: Introduce sleepable tracepoints
On 2/25/22 3:43 PM, Hao Luo wrote:
> Add a new type of bpf tracepoint: sleepable tracepoints, which allow
> the handler to make calls that may sleep. With sleepable tracepoints, a
> set of syscall helpers (which may sleep) may also be called from the
> handlers.
There are some old discussions on sleepable tracepoints; it may be
worthwhile to take a look:
https://lore.kernel.org/bpf/20210218222125.46565-5-mjeanson@efficios.com/T/
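
To make sure the intended usage is clear: I assume a handler would be
written as a normal raw tracepoint program loaded as sleepable, roughly
like the sketch below (the "raw_tp.s" section name, the cgroup_mkdir
tracepoint, and the available helper set are assumptions on my part,
not something this patch defines):

  /* sleepable_tp.bpf.c -- hypothetical sleepable tracepoint handler */
  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  char LICENSE[] SEC("license") = "GPL";

  /* the ".s" suffix asks libbpf to load the program with
   * BPF_F_SLEEPABLE, so helpers that may sleep (e.g. the syscall
   * helpers whitelisted later in this series) become callable
   */
  SEC("raw_tp.s/cgroup_mkdir")
  int BPF_PROG(on_cgroup_mkdir, struct cgroup *cgrp, const char *path)
  {
          bpf_printk("cgroup created: %s", path);
          return 0;
  }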
>
> In the following patches, we will whitelist some tracepoints to be
> sleepable.
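
Since the diffstat touches include/trace/bpf_probe.h, I assume the
whitelisting boils down to a variant of the event-definition macro that
sets the new flag in struct bpf_raw_event_map, along these lines
(hypothetical macro name, simplified from the existing __DEFINE_EVENT
pattern in bpf_probe.h; not the patch's actual code):

  /* sketch only: mark one tracepoint's raw event map as sleepable */
  #define __DEFINE_EVENT_SLEEPABLE(template, call, proto, args)	\
  static struct bpf_raw_event_map __used				\
  __section("__bpf_raw_tp_map")						\
  __bpf_trace_tp_map_##call = {						\
  	.tp		= &__tracepoint_##call,				\
  	.bpf_func	= __bpf_trace_##template,			\
  	.num_args	= COUNT_ARGS(args),				\
  	.sleepable	= 1,						\
  };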
>
> Signed-off-by: Hao Luo <haoluo@...gle.com>
> ---
> include/linux/bpf.h | 10 +++++++-
> include/linux/tracepoint-defs.h | 1 +
> include/trace/bpf_probe.h | 22 ++++++++++++++----
> kernel/bpf/syscall.c | 41 +++++++++++++++++++++++----------
> kernel/trace/bpf_trace.c | 5 ++++
> 5 files changed, 61 insertions(+), 18 deletions(-)
>
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index c36eeced3838..759ade7b24b3 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -1810,6 +1810,9 @@ struct bpf_prog *bpf_prog_by_id(u32 id);
> struct bpf_link *bpf_link_by_id(u32 id);
>
> const struct bpf_func_proto *bpf_base_func_proto(enum bpf_func_id func_id);
> +const struct bpf_func_proto *
> +tracing_prog_syscall_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog);
> +
> void bpf_task_storage_free(struct task_struct *task);
> bool bpf_prog_has_kfunc_call(const struct bpf_prog *prog);
> const struct btf_func_model *
> @@ -1822,7 +1825,6 @@ struct bpf_core_ctx {
>
> int bpf_core_apply(struct bpf_core_ctx *ctx, const struct bpf_core_relo *relo,
> int relo_idx, void *insn);
> -
> #else /* !CONFIG_BPF_SYSCALL */
> static inline struct bpf_prog *bpf_prog_get(u32 ufd)
> {
> @@ -2011,6 +2013,12 @@ bpf_base_func_proto(enum bpf_func_id func_id)
> return NULL;
> }
>
> +static inline const struct bpf_func_proto *
> +tracing_prog_syscall_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
> +{
> + return NULL;
> +}
> +
> static inline void bpf_task_storage_free(struct task_struct *task)
> {
> }
> diff --git a/include/linux/tracepoint-defs.h b/include/linux/tracepoint-defs.h
> index e7c2276be33e..c73c7ab3680e 100644
> --- a/include/linux/tracepoint-defs.h
> +++ b/include/linux/tracepoint-defs.h
> @@ -51,6 +51,7 @@ struct bpf_raw_event_map {
> void *bpf_func;
> u32 num_args;
> u32 writable_size;
> +	u32 sleepable;	/* bpf handler may sleep */
> } __aligned(32);
>
> /*
[...]