Message-ID: <802fef29-d525-2559-f2fc-d88ac3193f06@huawei.com>
Date:   Mon, 3 Jun 2019 09:20:00 +0800
From:   Yunsheng Lin <linyunsheng@...wei.com>
To:     Salil Mehta <salil.mehta@...wei.com>,
        "davem@...emloft.net" <davem@...emloft.net>
CC:     "hkallweit1@...il.com" <hkallweit1@...il.com>,
        "f.fainelli@...il.com" <f.fainelli@...il.com>,
        "stephen@...workplumber.org" <stephen@...workplumber.org>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        Linuxarm <linuxarm@...wei.com>
Subject: Re: [PATCH v2 net-next] net: link_watch: prevent starvation when
 processing linkwatch wq

On 2019/5/31 17:54, Salil Mehta wrote:
>> From: netdev-owner@...r.kernel.org On Behalf Of Yunsheng Lin
>> Sent: Friday, May 31, 2019 10:01 AM
>> To: davem@...emloft.net
>> Cc: hkallweit1@...il.com; f.fainelli@...il.com;
>> stephen@...workplumber.org; netdev@...r.kernel.org; linux-
>> kernel@...r.kernel.org; Linuxarm <linuxarm@...wei.com>
>> Subject: [PATCH v2 net-next] net: link_watch: prevent starvation when
>> processing linkwatch wq
>>
>> When a user has configured a large number of virtual netdevs, such
>> as 4K vlans, a carrier on/off operation on the real netdev will
>> also cause the link state of its virtual netdevs to be processed
>> in linkwatch. Currently, the processing is done in a work queue,
>> which may cause CPU and rtnl lock starvation problems.
>>
>> This patch releases the CPU and the rtnl lock after the link watch
>> worker has processed a fixed number of netdevs' link watch events.
>>
>> Currently __linkwatch_run_queue is called with rtnl lock, so
>> enfore it with ASSERT_RTNL();
> 
> 
> Typo enfore --> enforce ?

My mistake.

Thanks.

> 
> 
> 
>> Signed-off-by: Yunsheng Lin <linyunsheng@...wei.com>
>> ---
>> V2: use cond_resched and rtnl_unlock after processing a fixed
>>     number of events
>> ---
>>  net/core/link_watch.c | 17 +++++++++++++++++
>>  1 file changed, 17 insertions(+)
>>
>> diff --git a/net/core/link_watch.c b/net/core/link_watch.c
>> index 7f51efb..07eebfb 100644
>> --- a/net/core/link_watch.c
>> +++ b/net/core/link_watch.c
>> @@ -168,9 +168,18 @@ static void linkwatch_do_dev(struct net_device *dev)
>>
>>  static void __linkwatch_run_queue(int urgent_only)
>>  {
>> +#define MAX_DO_DEV_PER_LOOP	100
>> +
>> +	int do_dev = MAX_DO_DEV_PER_LOOP;
>>  	struct net_device *dev;
>>  	LIST_HEAD(wrk);
>>
>> +	ASSERT_RTNL();
>> +
>> +	/* Give urgent case more budget */
>> +	if (urgent_only)
>> +		do_dev += MAX_DO_DEV_PER_LOOP;
>> +
>>  	/*
>>  	 * Limit the number of linkwatch events to one
>>  	 * per second so that a runaway driver does not
>> @@ -200,6 +209,14 @@ static void __linkwatch_run_queue(int urgent_only)
>>  		}
>>  		spin_unlock_irq(&lweventlist_lock);
>>  		linkwatch_do_dev(dev);
>> +
> 
> 
> A comment like the one below would be helpful in explaining the reasoning behind the code.
>  
> /* This function is called with rtnl_lock held. If excessive events
>  * are present on the watch list, their processing could monopolize
>  * the rtnl_lock, which could lead to starvation of other modules
>  * that want to acquire this lock. Hence, a co-operative scheme like
>  * the one below might be helpful in mitigating the problem. It also
>  * tries to be fair CPU-wise through conditional rescheduling.
>  */

Yes, thanks for the helpful comment. I have put a sketch with the
comment folded in at the end of this mail.

> 
> 
>> +		if (--do_dev < 0) {
>> +			rtnl_unlock();
>> +			cond_resched();
>> +			do_dev = MAX_DO_DEV_PER_LOOP;
>> +			rtnl_lock();
>> +		}
>> +
>>  		spin_lock_irq(&lweventlist_lock);
>>  	}
> 
> .
> 
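For completeness, the tail of the loop with your suggested comment
folded in would look roughly like this (a sketch only; the exact
comment wording and the MAX_DO_DEV_PER_LOOP value may still change
in v3):

	spin_unlock_irq(&lweventlist_lock);
	linkwatch_do_dev(dev);

	/* This function is called with rtnl_lock held. If excessive
	 * events are present on the watch list, processing them all
	 * under a single rtnl_lock hold could monopolize the lock and
	 * starve other modules that want to acquire it. Drop the lock
	 * and reschedule after every MAX_DO_DEV_PER_LOOP events to be
	 * fair both lock-wise and CPU-wise.
	 */
	if (--do_dev < 0) {
		rtnl_unlock();
		cond_resched();
		do_dev = MAX_DO_DEV_PER_LOOP;
		rtnl_lock();
	}

	spin_lock_irq(&lweventlist_lock);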
