Date:   Thu, 29 Aug 2019 16:02:39 +0200
From:   Jiri Pirko <jiri@...nulli.us>
To:     David Ahern <dsahern@...il.com>
Cc:     David Ahern <dsahern@...nel.org>, davem@...emloft.net,
        netdev@...r.kernel.org
Subject: Re: [PATCH net] netdevsim: Restore per-network namespace accounting
 for fib entries

Thu, Aug 29, 2019 at 02:54:39PM CEST, dsahern@...il.com wrote:
>On 8/29/19 12:28 AM, Jiri Pirko wrote:
>> Wed, Aug 28, 2019 at 11:26:03PM CEST, dsahern@...il.com wrote:
>>> On 8/28/19 4:37 AM, Jiri Pirko wrote:
>>>> Tue, Aug 06, 2019 at 09:15:17PM CEST, dsahern@...nel.org wrote:
>>>>> From: David Ahern <dsahern@...il.com>
>>>>>
>>>>> Prior to the commit in the fixes tag, the resource controller in netdevsim
>>>>> tracked fib entries and rules per network namespace. Restore that behavior.
>>>>
>>>> David, please help me understand. If the counters are per-device, not
>>>> per-netns, they are effectively the same. If a device (devlink instance)
>>>> is in a netns and we take only things happening in that netns into
>>>> account, it should count exactly the same number of fib entries,
>>>> shouldn't it?
>>>
>>> if you are only changing where the counters are stored - net_generic vs
>>> devlink private - then yes, they should be equivalent.
>> 
>> Okay.
>> 
>>>
>>>>
>>>> I rethought the devlink netns patchset and currently I'm going in a
>>>> slightly different direction: netns becomes an attribute of
>>>> devlink reload. So all the port netdevices and everything gets
>>>> re-instantiated into the new netns. Works fine with mlxsw. There we also
>>>> re-register the fib notifier.
>>>>
>>>> I think that this can work for your usecase in netdevsim too:
>>>> 1) devlink instance is registering a fib notifier to track all fib
>>>>    entries in a namespace it belongs to. The counters are per-device -
>>>>    counting fib entries in a namespace the device is in.
>>>> 2) another devlink instance can do the same tracking in the same
>>>>    namespace. No problem, it's a separate counter, but the numbers are
>>>>    the same. One can set different limits to different devlink
>>>>    instances, but you can have only one. That is the behaviour you have
>>>>    now.
>>>> 3) on devlink reload, netdevsim re-instantiates ports and re-registers
>>>>    fib notifier
>>>> 4) on devlink reload with netns change, all should be fine as the
>>>>    re-registered fib notifier replays the entries. The ports are
>>>>    re-instantiated in the new netns.
>>>>
>>>> This way, we would get consistent behaviour between netdevsim and real
>>>> devices (mlxsw), and a correct devlink-netns implementation (you also
>>>> suggested moving the ports to the namespace). Everyone should be happy.
>>>>
>>>> What do you think?
>>>>
>>>
>>> Right now, registering the fib notifier walks all namespaces. That is
>>> not a scalable solution. Are you changing that to replay only a given
>>> netns? Are you changing the notifiers to be per-namespace?
>> 
>> Eventually, that seems like a good idea. Currently I want to do a
>> 	if (net != nsim_dev->mynet)
>> 		return NOTIFY_DONE;
>> check at the beginning of the notifier.
>> 
>
>The per-namespace replay should be done as part of this rework. It
>should not be that big of a change: add a 'struct net' argument to
>register_fib_notifier and, if set, call fib_net_dump only for that
>namespace. The seq check should be made per-namespace as well.
>
>You mentioned mlxsw works fine with moving ports to a new network
>namespace, so that will be a 'real' example with a known scalability
>problem that should be addressed now.

Fair enough. Will include this now.

Thanks!
