Date:   Wed, 15 Sep 2021 13:58:37 -0700
From:   Martin KaFai Lau <kafai@...com>
To:     Hou Tao <houtao1@...wei.com>
CC:     Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        Andrii Nakryiko <andrii@...nel.org>, <netdev@...r.kernel.org>,
        <bpf@...r.kernel.org>
Subject: Re: [RFC PATCH bpf-next 1/3] bpf: add dummy BPF STRUCT_OPS for test
 purpose

On Wed, Sep 15, 2021 at 11:37:51AM +0800, Hou Tao wrote:
> Currently the test of BPF STRUCT_OPS depends on the specific bpf
> implementation of tcp_congestion_ops, and it can not cover all
> basic functionalities (e.g, return value handling), so introduce
> a dummy BPF STRUCT_OPS for test purpose.
> 
> Dummy BPF STRUCT_OPS may not be needed in a release kernel, so
> add a kconfig option BPF_DUMMY_STRUCT_OPS to enable it separately.
Thanks for the patches!

> diff --git a/include/linux/bpf_dummy_ops.h b/include/linux/bpf_dummy_ops.h
> new file mode 100644
> index 000000000000..b2aad3e6e2fe
> --- /dev/null
> +++ b/include/linux/bpf_dummy_ops.h
> @@ -0,0 +1,28 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/*
> + * Copyright (C) 2021. Huawei Technologies Co., Ltd
> + */
> +#ifndef _BPF_DUMMY_OPS_H
> +#define _BPF_DUMMY_OPS_H
> +
> +#ifdef CONFIG_BPF_DUMMY_STRUCT_OPS
> +#include <linux/module.h>
> +
> +struct bpf_dummy_ops_state {
> +	int val;
> +};
> +
> +struct bpf_dummy_ops {
> +	int (*init)(struct bpf_dummy_ops_state *state);
> +	struct module *owner;
> +};
> +
> +extern struct bpf_dummy_ops *bpf_get_dummy_ops(void);
> +extern void bpf_put_dummy_ops(struct bpf_dummy_ops *ops);
> +#else
> +struct bpf_dummy_ops {};
This ';' looks different ;)

It probably has dodged the compiler due to the kconfig.
I think CONFIG_BPF_DUMMY_STRUCT_OPS and the bpf_(get|put)_dummy_ops
are not needed.  More on this later.

> diff --git a/kernel/bpf/bpf_dummy_struct_ops.c b/kernel/bpf/bpf_dummy_struct_ops.c
> new file mode 100644
> index 000000000000..f76c4a3733f0
> --- /dev/null
> +++ b/kernel/bpf/bpf_dummy_struct_ops.c
> @@ -0,0 +1,173 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2021. Huawei Technologies Co., Ltd
> + */
> +#include <linux/kernel.h>
> +#include <linux/spinlock.h>
> +#include <linux/bpf_verifier.h>
> +#include <linux/bpf.h>
> +#include <linux/btf.h>
> +#include <linux/bpf_dummy_ops.h>
> +
> +static struct bpf_dummy_ops *bpf_dummy_ops_singletion;
> +static DEFINE_SPINLOCK(bpf_dummy_ops_lock);
> +
> +static const struct btf_type *dummy_ops_state;
> +
> +struct bpf_dummy_ops *bpf_get_dummy_ops(void)
> +{
> +	struct bpf_dummy_ops *ops;
> +
> +	spin_lock(&bpf_dummy_ops_lock);
> +	ops = bpf_dummy_ops_singletion;
> +	if (ops && !bpf_try_module_get(ops, ops->owner))
> +		ops = NULL;
> +	spin_unlock(&bpf_dummy_ops_lock);
> +
> +	return ops ? ops : ERR_PTR(-ENXIO);
> +}
> +EXPORT_SYMBOL_GPL(bpf_get_dummy_ops);
> +
> +void bpf_put_dummy_ops(struct bpf_dummy_ops *ops)
> +{
> +	bpf_module_put(ops, ops->owner);
> +}
> +EXPORT_SYMBOL_GPL(bpf_put_dummy_ops);

[ ... ]

> +static int bpf_dummy_reg(void *kdata)
> +{
> +	struct bpf_dummy_ops *ops = kdata;
> +	int err = 0;
> +
> +	spin_lock(&bpf_dummy_ops_lock);
> +	if (!bpf_dummy_ops_singletion)
> +		bpf_dummy_ops_singletion = ops;
> +	else
> +		err = -EEXIST;
> +	spin_unlock(&bpf_dummy_ops_lock);
> +
> +	return err;
> +}
I don't think we are interested in testing register/unregister
of a struct_ops.  This common infra logic should already be
covered by bpf_tcp_ca.  Let's see if it can be avoided, so that
the above singleton instance and the EXPORT_SYMBOL_GPL can also
be removed.

It can reuse bpf_prog_test_run(), which can run a particular
bpf prog.  That allows a flexible way to select which prog to
call, instead of creating a file and then triggering an
individual prog by writing a name string into this new file.

For bpf_prog_test_run(), it needs a ".test_run" implementation in
"const struct bpf_prog_ops bpf_struct_ops_prog_ops".
This to-be-implemented ".test_run" can check the prog->aux->attach_btf_id
to ensure it is the bpf_dummy_ops.  The prog->expected_attach_type can
tell which "func" ptr within the bpf_dummy_ops to use, so ".test_run"
will know how to call it.  The extra thing for the struct_ops's ".test_run"
is to first call arch_prepare_bpf_trampoline() to prepare the trampoline
before calling into the bpf prog.

You can take a look at the other ".test_run" implementations,
e.g. bpf_prog_test_run_skb() and bpf_prog_test_run_tracing().

test_skb_pkt_end.c and fentry_test.c (and likely others) can be
used as references for prog_tests/ purposes.  The dummy_ops test in
prog_tests/ does not need to call bpf_map__attach_struct_ops(), since
there is no need to reg().  Instead, directly call bpf_prog_test_run()
to exercise each prog in bpf_dummy_ops.skel.h.

bpf_dummy_init_member() should return -ENOTSUPP.
bpf_dummy_reg() and bpf_dummy_unreg() should then be never called.

bpf_dummy_struct_ops.c should be moved into net/bpf/.
There is no need for CONFIG_BPF_DUMMY_STRUCT_OPS.  In the future, a
generic one could be created for the test_run related code, if there
is a need.

> +
> +static void bpf_dummy_unreg(void *kdata)
> +{
> +	struct bpf_dummy_ops *ops = kdata;
> +
> +	spin_lock(&bpf_dummy_ops_lock);
> +	if (bpf_dummy_ops_singletion == ops)
> +		bpf_dummy_ops_singletion = NULL;
> +	else
> +		WARN_ON(1);
> +	spin_unlock(&bpf_dummy_ops_lock);
> +}
> +
> +extern struct bpf_struct_ops bpf_bpf_dummy_ops;
> +
> +struct bpf_struct_ops bpf_bpf_dummy_ops = {
> +	.verifier_ops = &bpf_dummy_verifier_ops,
> +	.init = bpf_dummy_init,
> +	.init_member = bpf_dummy_init_member,
> +	.check_member = bpf_dummy_check_member,
> +	.reg = bpf_dummy_reg,
> +	.unreg = bpf_dummy_unreg,
> +	.name = "bpf_dummy_ops",
> +};
