Message-ID: <MWHPR11MB00291BA918E17176BDB12578E9E99@MWHPR11MB0029.namprd11.prod.outlook.com>
Date:   Fri, 8 Apr 2022 18:17:46 +0000
From:   "Saleem, Shiraz" <shiraz.saleem@...el.com>
To:     Duoming Zhou <duoming@....edu.cn>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
CC:     "linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
        "jgg@...pe.ca" <jgg@...pe.ca>,
        "Ismail, Mustafa" <mustafa.ismail@...el.com>,
        "dan.carpenter@...cle.com" <dan.carpenter@...cle.com>
Subject: RE: [PATCH V4 09/11] drivers: infiniband: hw: Fix deadlock in
 irdma_cleanup_cm_core()

> Subject: [PATCH V4 09/11] drivers: infiniband: hw: Fix deadlock in
> irdma_cleanup_cm_core()
> 
> There is a deadlock in irdma_cleanup_cm_core(), which is shown
> below:
> 
>    (Thread 1)              |      (Thread 2)
>                            | irdma_schedule_cm_timer()
> irdma_cleanup_cm_core()    |  add_timer()
>  spin_lock_irqsave() //(1) |  (wait a time)
>  ...                       | irdma_cm_timer_tick()
>  del_timer_sync()          |  spin_lock_irqsave() //(2)
>  (wait timer to stop)      |  ...
> 
> We hold cm_core->ht_lock at position (1) in thread 1 and call del_timer_sync() to
> wait for the timer to stop, but the timer handler also needs cm_core->ht_lock at
> position (2) in thread 2.
> As a result, irdma_cleanup_cm_core() will block forever.
> 
> This patch removes the timer_pending() check in irdma_cleanup_cm_core(),
> because del_timer_sync() simply returns if there is no pending timer. The lock is
> then redundant, because there is no resource left for it to protect.
> 
> What's more, we add mod_timer() to guarantee that the timer in
> irdma_schedule_cm_timer() and irdma_cm_timer_tick() can be executed.
> 
> Signed-off-by: Duoming Zhou <duoming@....edu.cn>
> ---
> Changes in V4:
>   - Add mod_timer() to guarantee the timer can be executed.
> 
>  drivers/infiniband/hw/irdma/cm.c | 9 +++++----
>  1 file changed, 5 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/infiniband/hw/irdma/cm.c b/drivers/infiniband/hw/irdma/cm.c
> index dedb3b7edd8..e4117b978bf 100644
> --- a/drivers/infiniband/hw/irdma/cm.c
> +++ b/drivers/infiniband/hw/irdma/cm.c
> @@ -1184,6 +1184,8 @@ int irdma_schedule_cm_timer(struct irdma_cm_node
> *cm_node,
>  	if (!was_timer_set) {
>  		cm_core->tcp_timer.expires = new_send->timetosend;
>  		add_timer(&cm_core->tcp_timer);
> +	} else {
> +		mod_timer(&cm_core->tcp_timer, new_send->timetosend);
>  	}

There is no need for a mod_timer() here. In the timer-pending case, the handler will fire when the currently armed timer expires.

The handler batch-processes the connection nodes of interest, and this connection node is already marked for processing.


>  	spin_unlock_irqrestore(&cm_core->ht_lock, flags);
> 
> @@ -1367,6 +1369,8 @@ static void irdma_cm_timer_tick(struct timer_list *t)
>  		if (!timer_pending(&cm_core->tcp_timer)) {
>  			cm_core->tcp_timer.expires = nexttimeout;
>  			add_timer(&cm_core->tcp_timer);
> +		} else {
> +			mod_timer(&cm_core->tcp_timer, nexttimeout);

ditto. Please remove.

>  		}
>  		spin_unlock_irqrestore(&cm_core->ht_lock, flags);
>  	}
> @@ -3251,10 +3255,7 @@ void irdma_cleanup_cm_core(struct irdma_cm_core
> *cm_core)
>  	if (!cm_core)
>  		return;
> 
> -	spin_lock_irqsave(&cm_core->ht_lock, flags);
> -	if (timer_pending(&cm_core->tcp_timer))
> -		del_timer_sync(&cm_core->tcp_timer);
> -	spin_unlock_irqrestore(&cm_core->ht_lock, flags);
> +	del_timer_sync(&cm_core->tcp_timer);

This is fine.

Shiraz

