Date:   Tue, 9 Feb 2021 09:26:26 +0100
From:   Julian Wiedmann <jwi@...ux.ibm.com>
To:     Taehee Yoo <ap420073@...il.com>, davem@...emloft.net,
        kuba@...nel.org, netdev@...r.kernel.org, dsahern@...nel.org,
        xiyou.wangcong@...il.com, kgraul@...ux.ibm.com, hca@...ux.ibm.com,
        gor@...ux.ibm.com, borntraeger@...ibm.com,
        mareklindner@...mailbox.ch, sw@...onwunderlich.de, a@...table.cc,
        sven@...fation.org, yoshfuji@...ux-ipv6.org
Subject: Re: [PATCH net-next 8/8] mld: change context of mld module

On 08.02.21 18:59, Taehee Yoo wrote:
> The context of the MLD module is atomic, although most of its logic is
> called from the control path, not the data path; only a few functions
> are called from the data path. Furthermore, MLD responses are not
> processed immediately because the MLD protocol uses delayed responses:
> when a query is received, the node delays its response, and at that
> point the context can change. This means most of the functions can run
> in sleepable context and thus use sleepable functions.
> 
> Most resources are protected by spinlocks and rwlocks, which is why
> the context of the mld functions is atomic. So, in order to change the
> context, the locking scheme has to change as well: this patch switches
> from spinlock/rwlock to mutex and RCU.
> 
> The following locks are deleted and added:
> 1. ipv6_mc_socklist->sflock is deleted.
> This rwlock is unnecessary: it protects ipv6_mc_socklist->sflist,
> which is now protected by rtnl_lock().
>
> 2. ifmcaddr6->mca_work_lock is added.
> This spinlock protects ifmcaddr6->mca_work. The work can be scheduled
> from both the control path and the data path, so a mutex can't be
> used; hence mca_work_lock is a spinlock.
>
> 3. inet6_dev->mc_tomb_lock is deleted.
> It protects the inet6_dev->mc_tomb list, which is now protected by
> rtnl_lock().
>
> 4. inet6_dev->lock is used for protecting the work items.
> inet6_dev has its own work items (mc_gq_work, mc_ifc_work,
> mc_delrec_work), which can be started and stopped from both the
> control path and the data path, so a mutex can't be used here either.
> 
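
Just to double-check that I'm reading the new scheme right: control-path
writers run under RTNL and may sleep, the data path only walks the lists
under RCU, and spinlocks remain only where a delayed work can be
(re)armed from both paths. A minimal sketch of that pattern, with all
names (foo_dev, foo_entry, ...) made up for illustration and nothing
taken from the patch itself:

#include <linux/rculist.h>
#include <linux/rtnetlink.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

struct foo_entry {
	struct list_head	list;
	struct rcu_head		rcu;
};

struct foo_dev {
	struct list_head	mc_list;	/* writers: RTNL, readers: RCU */
	spinlock_t		work_lock;	/* work arming, control- and data-path */
	struct delayed_work	work;		/* init not shown */
};

/* control path: may sleep, writers are serialized by RTNL */
static void foo_add_entry(struct foo_dev *fdev, struct foo_entry *e)
{
	ASSERT_RTNL();
	list_add_rcu(&e->list, &fdev->mc_list);
}

static void foo_del_entry(struct foo_dev *fdev, struct foo_entry *e)
{
	ASSERT_RTNL();
	list_del_rcu(&e->list);
	kfree_rcu(e, rcu);
}

/* data path: atomic, read-only, no RTNL */
static bool foo_has_entries(struct foo_dev *fdev)
{
	struct foo_entry *e;
	bool found = false;

	rcu_read_lock();
	list_for_each_entry_rcu(e, &fdev->mc_list, list) {
		found = true;
		break;
	}
	rcu_read_unlock();
	return found;
}

/* both paths may (re)arm the delayed work, hence a spinlock, not a mutex */
static void foo_arm_work(struct foo_dev *fdev, unsigned long delay)
{
	spin_lock_bh(&fdev->work_lock);
	if (!delayed_work_pending(&fdev->work))
		schedule_delayed_work(&fdev->work, delay);
	spin_unlock_bh(&fdev->work_lock);
}

If I've misread that, a sentence or two more in the commit message would
help.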
> Suggested-by: Cong Wang <xiyou.wangcong@...il.com>
> Signed-off-by: Taehee Yoo <ap420073@...il.com>
> ---
>  drivers/s390/net/qeth_l3_main.c |   6 +-
>  include/net/if_inet6.h          |  29 +-
>  net/batman-adv/multicast.c      |   4 +-
>  net/ipv6/addrconf.c             |   4 +-
>  net/ipv6/mcast.c                | 785 ++++++++++++++++----------------
>  5 files changed, 411 insertions(+), 417 deletions(-)
> 
> diff --git a/drivers/s390/net/qeth_l3_main.c b/drivers/s390/net/qeth_l3_main.c
> index e49abdeff69c..085afb24482e 100644
> --- a/drivers/s390/net/qeth_l3_main.c
> +++ b/drivers/s390/net/qeth_l3_main.c
> @@ -1098,8 +1098,8 @@ static int qeth_l3_add_mcast_rtnl(struct net_device *dev, int vid, void *arg)
>  	tmp.disp_flag = QETH_DISP_ADDR_ADD;
>  	tmp.is_multicast = 1;
>  
> -	read_lock_bh(&in6_dev->lock);
> -	list_for_each_entry(im6, in6_dev->mc_list, list) {
> +	rcu_read_lock();
> +	list_for_each_entry_rcu(im6, in6_dev->mc_list, list) {

No need for the rcu_read_lock(), we're called under rtnl.
So if there's a v2, please just make this

	list_for_each_entry_rcu(im6, in6_dev->mc_list, list,
				lockdep_rtnl_is_held())
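
list_for_each_entry_rcu() takes an optional lockdep expression as its
last argument, so lockdep is satisfied by RTNL being held on this path
and the explicit rcu_read_lock()/rcu_read_unlock() pair can be dropped
entirely. Roughly (untested, relative to this patch):

	list_for_each_entry_rcu(im6, in6_dev->mc_list, list,
				lockdep_rtnl_is_held()) {
		tmp.u.a6.addr = im6->mca_addr;
		...
	}

with the rcu_read_unlock() after the loop removed as well.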

>  		tmp.u.a6.addr = im6->mca_addr;
>  
>  		ipm = qeth_l3_find_addr_by_ip(card, &tmp);
> @@ -1117,7 +1117,7 @@ static int qeth_l3_add_mcast_rtnl(struct net_device *dev, int vid, void *arg)
>  			 qeth_l3_ipaddr_hash(ipm));
>  
>  	}
> -	read_unlock_bh(&in6_dev->lock);
> +	rcu_read_unlock();
>  
>  out:
>  	return 0;

