Message-ID: <df31afed-13a2-a02b-a5f8-4b76c57631d3@gmail.com>
Date:   Tue, 4 Jan 2022 17:18:12 -0700
From:   David Ahern <dsahern@...il.com>
To:     Lahav Schlesinger <lschlesinger@...venets.com>,
        Eric Dumazet <eric.dumazet@...il.com>
Cc:     netdev@...r.kernel.org, kuba@...nel.org, idosch@...sch.org,
        nicolas.dichtel@...nd.com, nikolay@...dia.com
Subject: Re: [PATCH net-next v6] rtnetlink: Support fine-grained netdevice
 bulk deletion

On 1/4/22 1:40 PM, Lahav Schlesinger wrote:
> I tried using dev->unreg_list, but it doesn't work, e.g. for veth
> pairs, where a veth's ->dellink() automatically queues the peer as
> well. Therefore, if @ifindices contains both peers, the first
> ->dellink() will remove the next device from @list_kill. This caused
> a page fault when @list_kill was iterated further.
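
Right, and that is the hazard in a nutshell: veth_dellink() calls
unregister_netdevice_queue() for both the device and its peer, and
unregister_netdevice_queue() does a list_move_tail() of the peer's
unreg_list node. A rough sketch of the failing shape (illustrative
only, not your patch):

        struct net_device *dev;
        LIST_HEAD(list_kill);

        /* @list_kill built from @ifindices, threaded through
         * dev->unreg_list */
        list_for_each_entry(dev, &list_kill, unreg_list) {
                /* for a veth this also queues the peer:
                 * unregister_netdevice_queue(peer, &list_kill)
                 * relinks &peer->unreg_list, a node the iterator
                 * may still reference as its next element */
                dev->rtnl_link_ops->dellink(dev, &list_kill);
        }

list_for_each_entry_safe() does not save you here either; it only
protects against removal of the current entry, not relinking of a
later one.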

Make sure you add a selftest for the bulk delete, covering cases with
veth, vlan, vrf, dummy, bridge, ...

> 
> I opted to add a flag to struct net_device, as David suggested, in
> order to avoid increasing sizeof(struct net_device), but perhaps the
> size increase is not that big of an issue.
> If it's fine, I'll update the patch.

I was hoping to avoid bloating net_device with 16B for such a limited
need. In one config I use, net_device is 2048B, a nice power-of-two
size, and an additional 16B pushes it past that boundary, making
netdevs much more expensive. An Ubuntu config comes in at 2368B, so it
is not really an issue there.

Staring at the existing list_head options, close_list seems like a
candidate for a union with bulk_kill_list. If that does not work, we
can add a new one.
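
Something like this, using the bulk_kill_list name from above (just
the shape, untested):

        /* in struct net_device: overlay rather than a new 16B member */
        union {
                struct list_head close_list;     /* dev_close_many() */
                struct list_head bulk_kill_list; /* bulk RTM_DELLINK */
        };

The thing to audit is that the unregister path itself puts devices on
close_list for dev_close_many() during teardown, so the two uses may
overlap in exactly the window bulk delete needs; if they do, a new
list_head it is.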
