Message-ID: <1adb8c68.a950.18d1d237182.Coremail.wangkeqi_chris@163.com>
Date: Thu, 18 Jan 2024 23:14:38 +0800 (CST)
From: wangkeqi <wangkeqi_chris@....com>
To: "Florian Westphal" <fw@...len.de>
Cc: davem@...emloft.net, edumazet@...gle.com, kuba@...nel.org, pabeni@...hat.com,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
wangkeqi <wangkeqiwang@...iglobal.com>,
"kernel test robot" <oliver.sang@...el.com>, fengwei.yin@...el.com
Subject: Re: Re: [PATCH net v2] connector: Change the judgment conditions for
	clearing proc_event_num_listeners
If cn_netlink_has_listeners() is used instead of proc_event_num_listeners, I think proc_event_num_listeners becomes completely meaningless.
I read the code and found nothing wrong with using cn_netlink_has_listeners() as the test for whether to send the msg:
sock_close updates the listeners state, whereas the previous proc_event_num_listeners count could go wrong, which is what made it meaningless in the first place.
But if I change the check to cn_netlink_has_listeners(), could it affect some low-probability scenarios?
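
For concreteness, a rough sketch of what that could look like
(cn_netlink_has_listeners() does not exist yet, it is the helper you
suggest below; the cdev.nls access is my assumption based on the
static struct cn_dev cdev in connector.c, and the send_msg() hunk is
only illustrative):

	/* connector.c: wrap netlink_has_listeners() so that cn_proc.c
	 * does not need direct access to the connector netlink socket.
	 */
	bool cn_netlink_has_listeners(unsigned int group)
	{
		return netlink_has_listeners(cdev.nls, group) != 0;
	}
	EXPORT_SYMBOL_GPL(cn_netlink_has_listeners);

	/* cn_proc.c, send_msg(): test the netlink state directly
	 * instead of maintaining a separate counter.
	 */
	if (!cn_netlink_has_listeners(CN_IDX_PROC))
		return;

	cn_netlink_send_mult(msg, msg->len, 0, CN_IDX_PROC, GFP_NOWAIT,
			     cn_filter, (void *)filter_data);

With a helper like that, the send path would not touch the counter at
all, which is why I think proc_event_num_listeners would be left with
no remaining purpose.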
At 2024-01-17 19:47:13, "Florian Westphal" <fw@...len.de> wrote:
>wangkeqi <wangkeqi_chris@....com> wrote:
>> From: wangkeqi <wangkeqiwang@...iglobal.com>
>>
>> Clearing proc_event_num_listeners when cn_netlink_send_mult()
>> returns -ESRCH is inaccurate.
>> In the stress-ng netlink-proc case, -ESRCH is always returned,
>> because netlink_broadcast_filtered() returns -ESRCH,
>> which can cause stress-ng netlink-proc performance degradation.
>> Therefore, change the condition to check whether
>> there is a listener.
>>
>> Reported-by: kernel test robot <oliver.sang@...el.com>
>> Closes: https://lore.kernel.org/oe-lkp/202401112259.b23a1567-oliver.sang@intel.com
>> Fixes: c46bfba133 ("connector: Fix proc_event_num_listeners count not cleared")
>> Signed-off-by: wangkeqi <wangkeqiwang@...iglobal.com>
>> Cc: fengwei.yin@...el.com
>> ---
>> drivers/connector/cn_proc.c | 6 ++++--
>> drivers/connector/connector.c | 6 ++++++
>> include/linux/connector.h | 1 +
>> 3 files changed, 11 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/connector/cn_proc.c b/drivers/connector/cn_proc.c
>> index 3d5e6d705..b09f74ed3 100644
>> --- a/drivers/connector/cn_proc.c
>> +++ b/drivers/connector/cn_proc.c
>> @@ -108,8 +108,10 @@ static inline void send_msg(struct cn_msg *msg)
>> filter_data[1] = 0;
>> }
>>
>> - if (cn_netlink_send_mult(msg, msg->len, 0, CN_IDX_PROC, GFP_NOWAIT,
>> - cn_filter, (void *)filter_data) == -ESRCH)
>> + if (netlink_has_listeners(get_cdev_nls(), CN_IDX_PROC))
>> + cn_netlink_send_mult(msg, msg->len, 0, CN_IDX_PROC, GFP_NOWAIT,
>> + cn_filter, (void *)filter_data);
>> + else
>> atomic_set(&proc_event_num_listeners, 0);
>
>How is that serialized vs. cn_proc_mcast_ctl?
>
>1. netlink_has_listeners() returns false
>2. other core handles PROC_CN_MCAST_LISTEN, atomic_inc called
>3. This core (re)sets the counter to 0, but there are listeners, so
> all functions that do
>
> if (atomic_read(&proc_event_num_listeners) < 1)
> return;
>
>will not get enabled/remain disabled.
>
>Probably better to add cn_netlink_has_listeners() function
>and use that instead of the (inaccurate) counter?
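
To make sure I follow, restating that interleaving as a timeline (my
own annotation of the scenario you describe):

	CPU0 (send_msg)                     CPU1 (cn_proc_mcast_ctl)
	---------------                     ------------------------
	netlink_has_listeners()
	  returns false
	                                    PROC_CN_MCAST_LISTEN:
	                                    atomic_inc(&proc_event_num_listeners)
	atomic_set(&proc_event_num_listeners, 0)

	-> the counter reads 0 while a listener exists, so every check
	   of atomic_read(&proc_event_num_listeners) < 1 returns early
	   and events stay disabled.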