Date:   Wed, 04 Oct 2017 21:13:47 +0200
From:   Daniel Borkmann <daniel@...earbox.net>
To:     Alexei Starovoitov <alexei.starovoitov@...il.com>,
        Eric Dumazet <eric.dumazet@...il.com>
CC:     Jakub Kicinski <jakub.kicinski@...ronome.com>, dsahern@...il.com,
        netdev@...r.kernel.org, oss-drivers@...ronome.com,
        david.beckett@...ronome.com
Subject: Re: [RFC] bpf: remove global verifier state

On 10/04/2017 05:43 AM, Alexei Starovoitov wrote:
> On Tue, Oct 03, 2017 at 08:24:06PM -0700, Eric Dumazet wrote:
>> On Tue, 2017-10-03 at 19:52 -0700, Alexei Starovoitov wrote:
>>
>>> yep. looks great.
>>> Please test it and submit officially :)
>>> The commit aafe6ae9cee3 ("bpf: dynamically allocate digest scratch buffer")
>>> fixed the other case where we were relying on the above mutex.
>>> The only other spot to be adjusted is to add spin_lock/mutex or DO_ONCE() to
>>> bpf_get_skb_set_tunnel_proto() to protect md_dst init.
>>> imo that would be it.
>>> Daniel, anything else comes to mind?

Yes, this should be all. DO_ONCE() for the tunnel proto seems a
good choice.
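
Untested sketch of how that could look, just to illustrate (one thing
to keep in mind is that DO_ONCE() runs the init func under a spinlock
with IRQs off, so the allocation would need to be atomic):

  static struct metadata_dst __percpu *md_dst;

  static void bpf_md_dst_init_once(void)
  {
          /* Must not sleep, DO_ONCE() holds a spinlock w/ IRQs off. */
          md_dst = metadata_dst_alloc_percpu(IP_TUNNEL_OPTS_MAX,
                                             METADATA_IP_TUNNEL,
                                             GFP_ATOMIC);
  }

  static const struct bpf_func_proto *
  bpf_get_skb_set_tunnel_proto(enum bpf_func_id which)
  {
          DO_ONCE(bpf_md_dst_init_once);
          if (!md_dst)
                  /* Allocation failed, helper stays unusable. */
                  return NULL;
          /* existing switch (which) { ... } as before */
  }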

>> 16 MB of log (unswappable kernel memory) per active checker.
>>
>> We might offer a way to oom hosts.
>
> right. good point!
> we need to switch to continuous copy_to_user() after a page or so.
> Can even do it after every vscnprintf()
> but page at a time is probably faster.
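
Agreed, streaming it out sounds much better than pinning the full
buffer. Rough, untested sketch of the per-chunk flush (struct/field
names made up here):

  struct bpf_verifier_log {
          char kbuf[1024];        /* small kernel staging buffer */
          char __user *ubuf;      /* user-supplied log buffer */
          u32 len_used;           /* bytes copied out so far */
          u32 len_total;          /* size of the user buffer */
  };

  static void bpf_verifier_vlog(struct bpf_verifier_log *log,
                                const char *fmt, va_list args)
  {
          unsigned int n;

          /* Assumes len_total > 0 was sanity checked at load time. */
          if (!log->ubuf || log->len_used >= log->len_total - 1)
                  return;

          n = vscnprintf(log->kbuf, sizeof(log->kbuf), fmt, args);
          n = min(n, log->len_total - log->len_used - 1);
          log->kbuf[n] = '\0';

          /* Flush chunk incl. NUL right away, don't buffer it up. */
          if (copy_to_user(log->ubuf + log->len_used, log->kbuf, n + 1))
                  log->ubuf = NULL;       /* fault, stop logging */
          else
                  log->len_used += n;
  }

copy_to_user() should be fine there since verification runs in
process context with the log buffer coming straight from the syscall.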

Also, worst-case upper limits for the state the verifier holds aside
from the log would need to be checked, i.e. how much memory we end up
holding that is not accounted against any process (and no longer
"rate-limited" to one verification at a time once we drop the mutex).
