Date: Wed, 17 Jan 2024 12:47:13 +0100
From: Florian Westphal <fw@...len.de>
To: wangkeqi <wangkeqi_chris@....com>
Cc: davem@...emloft.net, edumazet@...gle.com, kuba@...nel.org,
	pabeni@...hat.com, netdev@...r.kernel.org,
	linux-kernel@...r.kernel.org,
	wangkeqi <wangkeqiwang@...iglobal.com>,
	kernel test robot <oliver.sang@...el.com>, fengwei.yin@...el.com
Subject: Re: [PATCH net v2] connector: Change the judgment conditions for
 clearing proc_event_num_listeners

wangkeqi <wangkeqi_chris@....com> wrote:
> From: wangkeqi <wangkeqiwang@...iglobal.com>
> 
> Using the -ESRCH return value of cn_netlink_send_mult() to decide
> when to clear proc_event_num_listeners is inaccurate.
> Under the stress-ng netlink-proc workload, -ESRCH is always
> returned, because netlink_broadcast_filtered() returns -ESRCH,
> which can degrade stress-ng netlink-proc performance.
> Change the condition to instead check whether any listener exists.
> 
> Reported-by: kernel test robot <oliver.sang@...el.com>
> Closes: https://lore.kernel.org/oe-lkp/202401112259.b23a1567-oliver.sang@intel.com
> Fixes: c46bfba133 ("connector: Fix proc_event_num_listeners count not cleared")
> Signed-off-by: wangkeqi <wangkeqiwang@...iglobal.com>
> Cc: fengwei.yin@...el.com
> ---
>  drivers/connector/cn_proc.c   | 6 ++++--
>  drivers/connector/connector.c | 6 ++++++
>  include/linux/connector.h     | 1 +
>  3 files changed, 11 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/connector/cn_proc.c b/drivers/connector/cn_proc.c
> index 3d5e6d705..b09f74ed3 100644
> --- a/drivers/connector/cn_proc.c
> +++ b/drivers/connector/cn_proc.c
> @@ -108,8 +108,10 @@ static inline void send_msg(struct cn_msg *msg)
>  		filter_data[1] = 0;
>  	}
>  
> -	if (cn_netlink_send_mult(msg, msg->len, 0, CN_IDX_PROC, GFP_NOWAIT,
> -			     cn_filter, (void *)filter_data) == -ESRCH)
> +	if (netlink_has_listeners(get_cdev_nls(), CN_IDX_PROC))
> +		cn_netlink_send_mult(msg, msg->len, 0, CN_IDX_PROC, GFP_NOWAIT,
> +			     cn_filter, (void *)filter_data);
> +	else
>  		atomic_set(&proc_event_num_listeners, 0);

How is that serialized vs. cn_proc_mcast_ctl?

1. netlink_has_listeners() returns false
2. Another core handles PROC_CN_MCAST_LISTEN, atomic_inc is called
3. This core (re)sets the counter to 0, but there are now listeners,
   so all functions that do

 if (atomic_read(&proc_event_num_listeners) < 1)
    return;

will never get enabled / will remain disabled.

Probably better to add a cn_netlink_has_listeners() helper
and use that instead of the (inaccurate) counter?
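
Untested sketch of what I mean (cn_netlink_has_listeners() is a
made-up name, nothing like it exists yet; cdev is the static
struct cn_dev instance in drivers/connector/connector.c):

 /* connector.c: report the kernel's own per-group listener state
  * instead of maintaining a separate counter that can go stale.
  */
 bool cn_netlink_has_listeners(unsigned int group)
 {
        return netlink_has_listeners(cdev.nls, group) > 0;
 }
 EXPORT_SYMBOL_GPL(cn_netlink_has_listeners);

cn_proc.c could then replace the

 if (atomic_read(&proc_event_num_listeners) < 1)
    return;

checks with

 if (!cn_netlink_has_listeners(CN_IDX_PROC))
    return;

so the enable/disable decision always reflects the netlink socket's
listener bitmap and cannot race with PROC_CN_MCAST_LISTEN the way a
separately maintained counter can.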
