Date:   Sun, 17 Mar 2019 12:42:01 -0400
From:   Willem de Bruijn <willemdebruijn.kernel@...il.com>
To:     David Miller <davem@...emloft.net>
Cc:     Maxime Chevallier <maxime.chevallier@...tlin.com>,
        LKML <linux-kernel@...r.kernel.org>,
        Network Development <netdev@...r.kernel.org>,
        Willem de Bruijn <willemb@...gle.com>,
        Eric Dumazet <edumazet@...gle.com>,
        Antoine Tenart <antoine.tenart@...tlin.com>,
        Thomas Petazzoni <thomas.petazzoni@...tlin.com>
Subject: Re: [PATCH net] packets: Always register packet sk in the same order

On Sat, Mar 16, 2019 at 9:21 PM David Miller <davem@...emloft.net> wrote:
>
> From: Willem de Bruijn <willemdebruijn.kernel@...il.com>
> Date: Sat, 16 Mar 2019 14:09:33 -0400
>
> > Note that another consequence of this patch is that insertion on
> > packet create is now O(N) with the number of active packet sockets,
> > due to sklist being an hlist.
>
> Exploitable...

With root in a userns? The running time is limited by the open file
rlimit. This pattern is already in use with sk_add_node_rcu in some
form, so it is important to be sure.
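
To spell out where the O(N) above comes from: an hlist head only has a
first pointer, so tail insertion (needed to preserve registration
order) has to walk the entire chain before linking the new node. A
simplified standalone sketch of that walk, not the kernel's actual
rculist code:

struct node {
        struct node *next;
};

struct head {
        struct node *first;             /* hlist-style head: no tail pointer */
};

/* Appending must traverse every existing node first: O(N) per insert. */
static void add_tail(struct head *h, struct node *n)
{
        struct node **pos = &h->first;

        while (*pos)                    /* walk to the end of the chain */
                pos = &(*pos)->next;
        n->next = NULL;
        *pos = n;                       /* link in as the new last node */
}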

In practice I see no significant wall clock time difference when
inserting up to a fairly standard default limit of 16K. Regardless of
insertion order, running time is dominated by cleanup on process exit
(synchronize_net barriers?). At higher rlimits it does become
problematic.
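
For reference, the userspace timing I have in mind is roughly the
sketch below (not the exact test I ran; it needs CAP_NET_RAW and
assumes RLIMIT_NOFILE has already been raised to the value under test):

#include <stdio.h>
#include <time.h>
#include <arpa/inet.h>
#include <sys/resource.h>
#include <sys/socket.h>
#include <linux/if_ether.h>

int main(void)
{
        struct timespec start, end;
        struct rlimit rl;
        long i, n;

        if (getrlimit(RLIMIT_NOFILE, &rl))
                return 1;
        n = rl.rlim_cur - 16;           /* leave headroom for stdio etc. */

        clock_gettime(CLOCK_MONOTONIC, &start);
        for (i = 0; i < n; i++) {
                if (socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL)) < 0) {
                        perror("socket");
                        break;
                }
        }
        clock_gettime(CLOCK_MONOTONIC, &end);

        printf("created %ld packet sockets in %.3f sec\n", i,
               (end.tv_sec - start.tv_sec) +
               (end.tv_nsec - start.tv_nsec) / 1e9);

        return 0;                       /* cleanup cost is paid on exit */
}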

The packet socket sklist is not easily converted from an hlist to a
regular list, due to the use of seq_hlist_next_rcu in packet_seq_ops.
There is no equivalent seq_list_next_rcu. One option might instead be
to leave the insertion order as is, but traverse the list in reverse in
packet_notifier on NETDEV_DOWN. That would require an
sk_for_each_reverse_rcu and hlist_for_each_entry_reverse_rcu. These do
not exist, but since hlist_pprev_rcu does, it is probably feasible,
though not a trivial change.
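
To make that a bit more concrete, one purely hypothetical shape for the
missing pieces could be the sketch below. None of these helpers exist
upstream, and the sketch deliberately ignores the RCU side (stability
of pprev against concurrent deletion), which is exactly the non-trivial
part:

/* Hypothetical, non-RCU reverse walk over an hlist. */
static struct hlist_node *hlist_tail(struct hlist_head *head)
{
        struct hlist_node *n, *last = NULL;

        for (n = head->first; n; n = n->next)   /* O(N) to find the tail */
                last = n;
        return last;
}

static struct hlist_node *hlist_prev(struct hlist_head *head,
                                     struct hlist_node *n)
{
        if (n->pprev == &head->first)
                return NULL;                    /* n is the first node */
        return container_of(n->pprev, struct hlist_node, next);
}

#define hlist_for_each_reverse(pos, head) \
        for (pos = hlist_tail(head); pos; pos = hlist_prev(head, pos))

An sk_for_each_reverse variant would then wrap this the same way
sk_for_each wraps hlist_for_each_entry.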

Another, narrower, option may be to work around the ordering in
fanout itself, e.g., record in the socket the initially assigned
location in the fanout array and try to reclaim that spot on
re-insertion.
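
Very loosely, and only to illustrate the idea, that could look like the
sketch below. fanout_idx is not an existing field, and the
spin_lock/smp_wmb handling of the real __fanout_link(), as well as what
a reshuffled array means for the demux functions, is glossed over
entirely:

/* Hypothetical: po->fanout_idx would record the slot first assigned. */
static void fanout_link_preferred(struct packet_fanout *f,
                                  struct packet_sock *po, struct sock *sk)
{
        unsigned int i = min(po->fanout_idx, f->num_members);

        /* shift members at or after the preferred slot up by one */
        memmove(&f->arr[i + 1], &f->arr[i],
                (f->num_members - i) * sizeof(f->arr[0]));
        f->arr[i] = sk;
        f->num_members++;
}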
