Date:   Fri, 14 Feb 2020 20:56:12 +0100
From:   Thomas Gleixner <tglx@...utronix.de>
To:     Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Cc:     LKML <linux-kernel@...r.kernel.org>,
        David Miller <davem@...emloft.net>, bpf@...r.kernel.org,
        netdev@...r.kernel.org, Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        Sebastian Sewior <bigeasy@...utronix.de>,
        Peter Zijlstra <peterz@...radead.org>,
        Clark Williams <williams@...hat.com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Juri Lelli <juri.lelli@...hat.com>,
        Ingo Molnar <mingo@...nel.org>
Subject: Re: [RFC patch 14/19] bpf: Use migrate_disable() in hashtab code

Mathieu Desnoyers <mathieu.desnoyers@...icios.com> writes:
> On 14-Feb-2020 02:39:31 PM, Thomas Gleixner wrote:
>> Replace the preempt_disable/enable() pairs with migrate_disable/enable()
>> pairs to prepare BPF to work on PREEMPT_RT enabled kernels. On a non-RT
>> kernel this maps to preempt_disable/enable(), i.e. no functional change.

...
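For illustration, a minimal sketch of the kind of replacement the
patch description refers to. The actual hunk is elided above;
struct bucket and the helper names here are hypothetical:

  struct bucket {
          raw_spinlock_t lock;
  };

  static inline void htab_bucket_lock(struct bucket *b)
  {
          /*
           * Was preempt_disable(). migrate_disable() pins the task
           * to the current CPU, so per-CPU data stays stable, but
           * on PREEMPT_RT preemption stays enabled and sleeping
           * locks remain legal. On !RT it maps to
           * preempt_disable(), i.e. no functional change.
           */
          migrate_disable();
          raw_spin_lock(&b->lock);
  }

  static inline void htab_bucket_unlock(struct bucket *b)
  {
          raw_spin_unlock(&b->lock);
          migrate_enable();
  }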

> Having all those events randomly and silently discarded might be quite
> unexpected from a user standpoint. This turns the deadlock prevention
> mechanism into a random tracepoint-dropping facility, which is
> unsettling.

Well, today it already randomly drops events which might be
unrelated to the syscall target; this will just drop some more.
Shrug.

> One alternative approach we could consider to solve this is to make
> the deadlock-prevention nesting counter per-thread rather than
> per-cpu.

That should work both on !RT and RT.
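
Roughly, with a per-task guard the enter/exit path could look like
this; 'bpf_nest_level' is a made-up task_struct field, sketch only:

  /* Hypothetical per-task recursion guard as suggested above. */
  static inline bool bpf_recursion_enter(void)
  {
          if (current->bpf_nest_level)
                  return false;   /* nested invocation, skip prog */
          current->bpf_nest_level = 1;
          return true;
  }

  static inline void bpf_recursion_exit(void)
  {
          current->bpf_nest_level = 0;
  }

Only the owning task ever touches the field, so no preemption or
migration protection is required, and it only trips on real
per-task nesting instead of whenever two unrelated contexts happen
to share a CPU.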

> Also, I don't think using __this_cpu_inc() without preempt-disable or
> irq off is safe. You'll probably want to move to this_cpu_inc/dec
> instead, which can be heavier on some architectures.

Good catch.
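
In short: __this_cpu_inc() is the unprotected variant and relies
on the caller having preemption (and possibly interrupts) disabled
already, while this_cpu_inc() is preempt- and IRQ-safe on its own,
at the price of a heavier implementation on architectures without
cheap per-CPU atomics. Using the existing bpf_prog_active counter
as the example:

  DEFINE_PER_CPU(int, bpf_prog_active);

  /* Requires the caller to prevent reschedule/interruption: */
  __this_cpu_inc(bpf_prog_active);

  /* Safe standalone; may be slower on some architectures: */
  this_cpu_inc(bpf_prog_active);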

Thanks,

        tglx
