Message-ID: <a053a34a-b014-45ca-87bd-425570f16da1@nvidia.com>
Date: Fri, 10 May 2024 14:30:50 -0700
From: William Tu <witu@...dia.com>
To: Stephen Hemminger <stephen@...workplumber.org>
Cc: netdev@...r.kernel.org, jiri@...dia.com, bodong@...dia.com,
kuba@...nel.org, Paolo Abeni <pabeni@...hat.com>
Subject: Re: [PATCH RFC net-next] net: cache the __dev_alloc_name()
On 5/8/24 8:27 PM, William Tu wrote:
> On 5/7/24 9:24 PM, Stephen Hemminger wrote:
>> On Mon, 6 May 2024 20:32:07 +0000
>> William Tu <witu@...dia.com> wrote:
>>
>>> When a system has around 1000 netdevs, adding the 1001st device becomes
>>> very slow. The devlink command to create an SF
>>> $ devlink port add pci/0000:03:00.0 flavour pcisf \
>>> pfnum 0 sfnum 1001
>>> takes around 5 seconds, and Linux perf and flamegraph show 19% of time
>>> spent on __dev_alloc_name() [1].
>>>
>>> The reason is that devlink first requests the next available "eth%d"
>>> name. __dev_alloc_name() then scans all existing netdevs, matches each
>>> name against "ethN", sets bit N in an 'inuse' bitmap, and returns the
>>> first available number, in our case eth0.
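>>>
>>> Roughly, the scan works like this (a simplified sketch of the logic
>>> in net/core/dev.c, not the exact code):
>>>
>>>     inuse = bitmap_zalloc(max_netdevices, GFP_ATOMIC);
>>>     for_each_netdev(net, d) {
>>>             /* match each name (and altname) against "eth%d" */
>>>             if (sscanf(d->name, name, &i) == 1)
>>>                     __set_bit(i, inuse);
>>>     }
>>>     i = find_first_zero_bit(inuse, max_netdevices);
>>>     bitmap_free(inuse);
>>>     snprintf(res, IFNAMSIZ, name, i);   /* e.g. "eth0" */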
>>>
>>> Later on, based on a udev rule, we rename it from eth0 to
>>> "en3f0pf0sf1001", with the altname below:
>>> 14: en3f0pf0sf1001: <BROADCAST,MULTICAST,UP,LOWER_UP> ...
>>> altname enp3s0f0npf0sf1001
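>>>
>>> (For reference, a hypothetical udev rule doing that rename could look
>>> like the following; the match keys here are illustrative, not the
>>> exact rule we use:)
>>>
>>>     SUBSYSTEM=="net", ACTION=="add", \
>>>         ATTR{phys_port_name}=="pf0sf1001", NAME="en3f0pf0sf1001"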
>>>
>>> So eth0 is actually never used, but since we have 1k "en3f0pf0sfN"
>>> devices + 1k altnames, __dev_alloc_name() spends lots of time going
>>> through all existing netdevs to build the 'inuse' bitmap for the
>>> pattern 'eth%d'. The bitmap barely has any bits set, and it is rebuilt
>>> from scratch every time.
>>>
>>> I want to see if it makes sense to save/cache the result, or whether
>>> there is any way to avoid the 'eth%d' pattern search. The RFC patch
>>> adds a name_pat (name pattern) hlist and saves the 'inuse' bitmap in
>>> it. It stores patterns, e.g. "eth%d" and "veth%d", together with their
>>> bitmaps, and looks the pattern up there before scanning all existing
>>> netdevs.
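>>>
>>> A cache entry could look something like this (illustrative sketch
>>> only; the struct and field names are made up, not the ones in the
>>> patch):
>>>
>>>     struct name_pat_node {
>>>             struct hlist_node hlist;
>>>             char pat[IFNAMSIZ];     /* pattern, e.g. "eth%d" */
>>>             unsigned long *inuse;   /* cached bitmap of used IDs */
>>>     };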
>>>
>>> Note: the code is working just well enough for a quick performance
>>> benchmark and is still missing lots of stuff. Using an hlist may be
>>> overkill, as I think we only have a few patterns:
>>> $ git grep alloc_netdev drivers/ net/ | grep %d
>>>
>>> 1. https://github.com/williamtu/net-next/issues/1
>>>
>>> Signed-off-by: William Tu <witu@...dia.com>
> Hi Stephen,
> Thanks for your feedback.
>> The actual patch is a bit of a mess, with commented-out code, leftover
>> printks, and random whitespace changes. Please fix that.
> Yes, working on it.
>>
>> The issue is that the bitmap gets to be large and adds bloat to
>> embedded devices.
> The bitmap size is fixed (8*PAGE_SIZE bits, i.e. 32768 bits or one
> 4 KiB page with 4 KiB pages), and set_bit() is also fast. It's just
> that for each new device we always re-scan all existing netdevs, set
> the bitmap, and then free it.
>>
>> Perhaps you could force devlink to use the same device name each time
>> (eth0) if it is going to be renamed anyway.
> It is working like that now (with udev) in my slow environment. It
> always gets eth0 (because the bitmap is all 0s), and udev renames it to
> enp0xxx. The next time, __dev_alloc_name() rescans and, since eth0 is
> still available, returns eth0 again; udev renames it again, every
> subsequent device creation follows the same path, and the time to
> rescan gets longer and longer.
>
> Regards,
> William
>
>
Hi Stephen and Paolo,
Today I realized this isn't an issue.
Basically, my perf result doesn't give the full picture. The 19% of
time spent in __dev_alloc_name() seems to be OK, because:
$ time devlink port add pci/0000:03:00.0 flavour pcisf \
      pfnum 0 sfnum 1001

real    0m1.440s
user    0m0.000s
sys     0m0.004s
It's just 19% of the 'sys' time, not of the real time.
Thanks for your suggestions.
William