lists.openwall.net mailing list archives
Date:   Wed, 29 May 2019 16:59:13 +0800
From:   Yunsheng Lin <linyunsheng@...wei.com>
To:     David Miller <davem@...emloft.net>
CC:     <hkallweit1@...il.com>, <f.fainelli@...il.com>,
        <netdev@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
        <linuxarm@...wei.com>
Subject: Re: [PATCH net-next] net: link_watch: prevent starvation when
 processing linkwatch wq

On 2019/5/29 14:58, David Miller wrote:
> From: Yunsheng Lin <linyunsheng@...wei.com>
> Date: Mon, 27 May 2019 09:47:54 +0800
> 
>> When a user has configured a large number of virtual netdevs,
>> such as 4K vlans, a carrier on/off operation on the real netdev
>> will also cause its virtual netdevs' link state to be processed
>> in linkwatch. Currently the processing is done in a work queue,
>> which may starve workers needed by other work queues.
>>
>> This patch releases the cpu when the link watch worker has
>> processed a fixed number of netdevs' link watch events, and
>> schedules the work queue again when link watch events remain.
>>
>> Signed-off-by: Yunsheng Lin <linyunsheng@...wei.com>
> 
> Why not rtnl_unlock(); yield(); rtnl_lock(); every "100" events
> processed?
> 
> That seems better than adding all of this overhead to reschedule the
> workqueue every 100 items.

One minor concern: the above solution does not seem to solve the cpu
starvation for other normal workqueues scheduled on the same cpu as
linkwatch. Maybe I misunderstand the workqueue, or there is another
consideration here? :)

Anyway, I will implement it as you suggested and test it before posting V2.
Thanks.

