Message-ID: <20190124035600.ag4d7glkyysiluuu@ast-mbp.dhcp.thefacebook.com>
Date: Wed, 23 Jan 2019 19:56:00 -0800
From: Alexei Starovoitov <alexei.starovoitov@...il.com>
To: Stanislav Fomichev <sdf@...gle.com>
Cc: netdev@...r.kernel.org, davem@...emloft.net, ast@...nel.org,
daniel@...earbox.net
Subject: Re: [PATCH bpf-next 2/3] bpf: add BPF_PROG_TEST_RUN support for flow
dissector
On Tue, Jan 22, 2019 at 01:23:14PM -0800, Stanislav Fomichev wrote:
> The input is packet data, the output is struct bpf_flow_keys. This
> should make it easy to test flow dissector programs without an
> elaborate setup.
>
> Signed-off-by: Stanislav Fomichev <sdf@...gle.com>
> ---
>  include/linux/bpf.h |  3 ++
>  net/bpf/test_run.c  | 75 ++++++++++++++++++++++++++++++++++++++++++++-
>  net/core/filter.c   |  1 +
>  3 files changed, 78 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index e734f163bd0b..701ef954a258 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -397,6 +397,9 @@ int bpf_prog_test_run_xdp(struct bpf_prog *prog, const union bpf_attr *kattr,
>  			  union bpf_attr __user *uattr);
>  int bpf_prog_test_run_skb(struct bpf_prog *prog, const union bpf_attr *kattr,
>  			  union bpf_attr __user *uattr);
> +int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog,
> +				     const union bpf_attr *kattr,
> +				     union bpf_attr __user *uattr);
> 
>  /* an array of programs to be executed under rcu_lock.
>   *
> diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
> index fa2644d276ef..ecad72885f23 100644
> --- a/net/bpf/test_run.c
> +++ b/net/bpf/test_run.c
> @@ -16,12 +16,26 @@
>  static __always_inline u32 bpf_test_run_one(struct bpf_prog *prog, void *ctx,
>  		struct bpf_cgroup_storage *storage[MAX_BPF_CGROUP_STORAGE_TYPE])
>  {
> +	struct bpf_skb_data_end *cb;
> +	struct sk_buff *skb;
>  	u32 ret;
> 
>  	preempt_disable();
>  	rcu_read_lock();
>  	bpf_cgroup_storage_set(storage);
> -	ret = BPF_PROG_RUN(prog, ctx);
> +
> +	switch (prog->type) {
> +	case BPF_PROG_TYPE_FLOW_DISSECTOR:
> +		skb = (struct sk_buff *)ctx;
> +		cb = (struct bpf_skb_data_end *)skb->cb;
> +		ret = __skb_flow_bpf_dissect(prog, ctx, &flow_keys_dissector,
> +					     cb->qdisc_cb.flow_keys);
> +		break;
> +	default:
> +		ret = BPF_PROG_RUN(prog, ctx);
> +		break;
> +	}
What is the benefit of this minimal code reuse?
It seems to me bpf_test_run_one() gets slower for every program type,
since prog->type now has to be checked before each prog run.
The whole point of bpf_prog_ops->test_run was to avoid this per-run overhead.
Are you somehow expecting a flow_dissector prog to use cgroup local storage?
I don't think that's possible.
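
If flow_dissector needs its own semantics, a dedicated ->test_run
handler keeps the generic path intact. Completely untested sketch
(reusing the existing bpf_test_init()/bpf_test_finish() helpers in
net/bpf/test_run.c; timing and most error handling omitted), just to
illustrate the direction:

int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog,
				     const union bpf_attr *kattr,
				     union bpf_attr __user *uattr)
{
	struct bpf_flow_keys flow_keys = {};
	u32 size = kattr->test.data_size_in;
	u32 repeat = kattr->test.repeat ? : 1;
	struct sk_buff *skb;
	u32 retval = 0;
	void *data;
	int ret;
	u32 i;

	/* copy the input packet from user space into fresh skb data */
	data = bpf_test_init(kattr, size, NET_SKB_PAD + NET_IP_ALIGN,
			     sizeof(struct skb_shared_info));
	if (IS_ERR(data))
		return PTR_ERR(data);

	skb = build_skb(data, 0);
	if (!skb) {
		kfree(data);
		return -ENOMEM;
	}
	skb_reserve(skb, NET_SKB_PAD + NET_IP_ALIGN);
	__skb_put(skb, size);

	/* run the prog directly; no per-run prog->type switch needed */
	preempt_disable();
	rcu_read_lock();
	for (i = 0; i < repeat; i++)
		retval = __skb_flow_bpf_dissect(prog, skb,
						&flow_keys_dissector,
						&flow_keys);
	rcu_read_unlock();
	preempt_enable();

	/* copy struct bpf_flow_keys back as data_out */
	ret = bpf_test_finish(kattr, uattr, &flow_keys, sizeof(flow_keys),
			      retval, 0);
	kfree_skb(skb);
	return ret;
}

Then user space can drive it with libbpf's existing bpf_prog_test_run()
wrapper, roughly like below (prog_fd and pkt[] are placeholders for a
loaded flow_dissector prog and the raw test packet bytes):

	struct bpf_flow_keys flow_keys = {};
	__u32 size_out = sizeof(flow_keys), retval, duration;
	int err;

	err = bpf_prog_test_run(prog_fd, 1, pkt, sizeof(pkt),
				&flow_keys, &size_out, &retval,
				&duration);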