Message-ID: <20190301222739.GA24192@roeck-us.net>
Date: Fri, 1 Mar 2019 14:27:39 -0800
From: Guenter Roeck <linux@...ck-us.net>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>, netdev@...r.kernel.org,
bpf@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] bpf: enable program stats
On Fri, Mar 01, 2019 at 02:17:40PM -0800, Eric Dumazet wrote:
>
>
> On 03/01/2019 02:03 PM, Guenter Roeck wrote:
> > Hi,
> >
> > On Mon, Feb 25, 2019 at 02:28:39PM -0800, Alexei Starovoitov wrote:
> >> JITed BPF programs are indistinguishable from kernel functions, but unlike
> >> kernel code, BPF code can be changed often.
> >> The typical "perf record" + "perf report" approach to profiling and tuning
> >> kernel code works just as well for BPF programs, but kernel code doesn't
> >> need to be monitored whereas BPF programs do.
> >> Users load and run large numbers of BPF programs.
> >> These BPF stats allow tools to monitor the usage of BPF on the server.
> >> The monitoring tools will turn the sysctl kernel.bpf_stats_enabled
> >> on and off for a few seconds to sample the average cost of the programs.
> >> Data aggregated over hours and days will provide insight into the cost of
> >> BPF, and alarms can trigger in case a given program suddenly gets more expensive.
> >>
> >> The cost of two sched_clock() calls per program invocation adds ~20 nsec.
> >> Fast BPF progs (like selftests/bpf/progs/test_pkt_access.c) will slow down
> >> from ~10 nsec to ~30 nsec.
> >> A static_key minimizes the cost of the stats collection.
> >> There is no measurable difference before/after this patch
> >> with kernel.bpf_stats_enabled=0.
> >>
> >
> > This patch causes my qemu tests for 'parisc' to crash. Reverting this patch
> > as well as "bpf: expose program stats via bpf_prog_info" fixes the problem.
> >
> > Crash log and bisect results are attached. Bisect ends with the merge;
> > I identified the two patches manually.
> >
> > I suspect that
> > prog->aux->stats = alloc_percpu_gfp(struct bpf_prog_stats, gfp_flags);
> > ...
> > u64_stats_init(&prog->aux->stats->syncp);
> > may be wrong. At the very least it looks odd, and I don't find a similar use
> > of u64_stats_init() anywhere else in the kernel.
>
> Yes, a loop is needed there.
>
> Something like :
>
> diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> index 1c14c347f3cfe1f7c0cf8a7eccff8135b16df81f..3f08c257858e1570339cd64a6351824bcc332ee3 100644
> --- a/kernel/bpf/core.c
> +++ b/kernel/bpf/core.c
> @@ -109,6 +109,7 @@ struct bpf_prog *bpf_prog_alloc(unsigned int size, gfp_t gfp_extra_flags)
> {
> gfp_t gfp_flags = GFP_KERNEL | __GFP_ZERO | gfp_extra_flags;
> struct bpf_prog *prog;
> + int cpu;
>
> prog = bpf_prog_alloc_no_stats(size, gfp_extra_flags);
> if (!prog)
> @@ -121,7 +122,12 @@ struct bpf_prog *bpf_prog_alloc(unsigned int size, gfp_t gfp_extra_flags)
> return NULL;
> }
>
> - u64_stats_init(&prog->aux->stats->syncp);
> + for_each_possible_cpu(cpu) {
> + struct bpf_prog_stats *pstats;
> +
> + pstats = per_cpu_ptr(prog->aux->stats, cpu);
> + u64_stats_init(&pstats->syncp);
> + }
> return prog;
> }
> EXPORT_SYMBOL_GPL(bpf_prog_alloc);
>
Yes, that works, or at least my test no longer crashes after applying the
above patch. Feel free to add
Tested-by: Guenter Roeck <linux@...ck-us.net>
Thanks,
Guenter