Open Source and information security mailing list archives
 
Date:   Fri, 1 Mar 2019 14:17:32 -0800
From:   Eric Dumazet <erdnetdev@...il.com>
To:     Guenter Roeck <linux@...ck-us.net>,
        Alexei Starovoitov <ast@...nel.org>
Cc:     Daniel Borkmann <daniel@...earbox.net>, netdev@...r.kernel.org,
        bpf@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] bpf: enable program stats



On 03/01/2019 02:03 PM, Guenter Roeck wrote:
> Hi,
> 
> On Mon, Feb 25, 2019 at 02:28:39PM -0800, Alexei Starovoitov wrote:
>> JITed BPF programs are indistinguishable from kernel functions, but unlike
>> kernel code, BPF code can change often.
>> The typical "perf record" + "perf report" approach to profiling and tuning
>> kernel code works just as well for BPF programs, but kernel code doesn't
>> need to be monitored, whereas BPF programs do.
>> Users load and run large numbers of BPF programs.
>> These BPF stats allow tools to monitor BPF usage on the server.
>> The monitoring tools will turn the sysctl kernel.bpf_stats_enabled
>> on and off for a few seconds to sample the average cost of the programs.
>> Data aggregated over hours and days will provide insight into the cost of
>> BPF, and alarms can trigger if a given program suddenly becomes more expensive.
>>
>> The cost of two sched_clock() calls per program invocation adds ~20 nsec.
>> Fast BPF progs (like selftests/bpf/progs/test_pkt_access.c) will slow down
>> from ~10 nsec to ~30 nsec.
>> A static_key minimizes the cost of the stats collection.
>> There is no measurable difference before/after this patch
>> with kernel.bpf_stats_enabled=0.
>>
> 
> This patch causes my qemu tests for 'parisc' to crash. Reverting this patch
> as well as "bpf: expose program stats via bpf_prog_info" fixes the problem.
> 
> Crash log and bisect results are attached. Bisect ends with the merge;
> I identified the two patches manually.
> 
> I suspect that
> 	prog->aux->stats = alloc_percpu_gfp(struct bpf_prog_stats, gfp_flags);
> 	...
> 	u64_stats_init(&prog->aux->stats->syncp);
> may be wrong. At the very least it looks odd, and I don't find a similar use
> of u64_stats_init() anywhere else in the kernel.

Yes, a loop is needed there.

Something like:

diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 1c14c347f3cfe1f7c0cf8a7eccff8135b16df81f..3f08c257858e1570339cd64a6351824bcc332ee3 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -109,6 +109,7 @@ struct bpf_prog *bpf_prog_alloc(unsigned int size, gfp_t gfp_extra_flags)
 {
        gfp_t gfp_flags = GFP_KERNEL | __GFP_ZERO | gfp_extra_flags;
        struct bpf_prog *prog;
+       int cpu;
 
        prog = bpf_prog_alloc_no_stats(size, gfp_extra_flags);
        if (!prog)
@@ -121,7 +122,12 @@ struct bpf_prog *bpf_prog_alloc(unsigned int size, gfp_t gfp_extra_flags)
                return NULL;
        }
 
-       u64_stats_init(&prog->aux->stats->syncp);
+       for_each_possible_cpu(cpu) {
+               struct bpf_prog_stats *pstats;
+
+               pstats = per_cpu_ptr(prog->aux->stats, cpu);
+               u64_stats_init(&pstats->syncp);
+       }
        return prog;
 }
 EXPORT_SYMBOL_GPL(bpf_prog_alloc);
