Message-ID: <2024030623-smoking-marry-f4c5@gregkh>
Date: Wed, 6 Mar 2024 06:25:41 +0000
From: Greg KH <gregkh@...uxfoundation.org>
To: Mike Tipton <quic_mdtipton@...cinc.com>
Cc: djakov@...nel.org, robdclark@...omium.org, quic_rlaggysh@...cinc.com,
	quic_okukatla@...cinc.com, linux-pm@...r.kernel.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH] interconnect: Don't access req_list while it's being
 manipulated

On Tue, Mar 05, 2024 at 02:56:52PM -0800, Mike Tipton wrote:
> The icc_lock mutex was split into separate icc_lock and icc_bw_lock
> mutexes in [1] to avoid lockdep splats. However, this didn't adequately
> protect access to icc_node::req_list.
> 
> The icc_set_bw() function will eventually iterate over req_list while
> only holding icc_bw_lock, but req_list can be modified while only
> holding icc_lock. This causes races between icc_set_bw(), of_icc_get(),
> and icc_put().
> 
> Example A:
> 
>   CPU0                               CPU1
>   ----                               ----
>   icc_set_bw(path_a)
>     mutex_lock(&icc_bw_lock);
>                                      icc_put(path_b)
>                                        mutex_lock(&icc_lock);
>     aggregate_requests()
>       hlist_for_each_entry(r, ...
>                                        hlist_del(...
>         <r = invalid pointer>
> 
> Example B:
> 
>   CPU0                               CPU1
>   ----                               ----
>   icc_set_bw(path_a)
>     mutex_lock(&icc_bw_lock);
>                                      path_b = of_icc_get()
>                                        of_icc_get_by_index()
>                                          mutex_lock(&icc_lock);
>                                          path_find()
>                                            path_init()
>     aggregate_requests()
>       hlist_for_each_entry(r, ...
>                                              hlist_add_head(...
>         <r = invalid pointer>
> 
> Fix this by ensuring icc_bw_lock is always held before manipulating
> icc_node::req_list. The additional places icc_bw_lock is held don't
> perform any memory allocations, so we should still be safe from the
> original lockdep splats that motivated the separate locks.
> 
> [1] commit af42269c3523 ("interconnect: Fix locking for runpm vs reclaim")
> 
> Signed-off-by: Mike Tipton <quic_mdtipton@...cinc.com>
> Fixes: af42269c3523 ("interconnect: Fix locking for runpm vs reclaim")
> ---
>  drivers/interconnect/core.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
> 
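
The patch hunks themselves are not quoted above (only the diffstat). As a
rough sketch of the kind of change the commit message describes -- not the
actual diff; function and field names follow drivers/interconnect/core.c --
the req_list writers seen in the two examples would take icc_bw_lock as
well, nested inside the icc_lock they already hold:

  /* of_icc_get() path: requests are added to each node's req_list */
  static struct icc_path *path_init(struct device *dev, struct icc_node *dst,
                                    ssize_t num_nodes)
  {
          ...
          mutex_lock(&icc_bw_lock);       /* serialize against icc_set_bw() */
          for (i = num_nodes - 1; i >= 0; i--) {
                  ...
                  hlist_add_head(&path->reqs[i].req_node, &node->req_list);
                  ...
          }
          mutex_unlock(&icc_bw_lock);
          ...
  }

  /* icc_put(): requests are removed from each node's req_list */
  void icc_put(struct icc_path *path)
  {
          ...
          mutex_lock(&icc_lock);
          mutex_lock(&icc_bw_lock);       /* serialize against icc_set_bw() */
          for (i = 0; i < path->num_nodes; i++) {
                  ...
                  hlist_del(&path->reqs[i].req_node);
                  ...
          }
          mutex_unlock(&icc_bw_lock);
          mutex_unlock(&icc_lock);
          ...
  }

No allocations happen inside these critical sections, which is the commit
message's argument for why the original reclaim-related lockdep splats
cannot reappear.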

Hi,

This is the friendly patch-bot of Greg Kroah-Hartman.  You have sent him
a patch that has triggered this response.  He used to manually respond
to these common problems, but in order to save his sanity (he kept
writing the same thing over and over, yet to different people), I was
created.  Hopefully you will not take offence and will fix the problem
in your patch and resubmit it so that it can be accepted into the Linux
kernel tree.

You are receiving this message because of the following common error(s)
as indicated below:

- You have marked a patch with a "Fixes:" tag for a commit that is in an
  older released kernel, yet you do not have a cc: stable line in the
  signed-off-by area at all, which means that the patch will not be
  applied to any older kernel releases.  To fix this, please follow the
  documented rules in the Documentation/process/stable-kernel-rules.rst
  file.
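
Concretely, the missing line is a plain trailer next to the tags already in
the patch above; an optional "# <version>" suffix can be appended to restrict
which stable trees should pick it up:

  Fixes: af42269c3523 ("interconnect: Fix locking for runpm vs reclaim")
  Cc: stable@vger.kernel.org
  Signed-off-by: Mike Tipton <quic_mdtipton@...cinc.com>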

If you wish to discuss this problem further, or you have questions about
how to resolve this issue, please feel free to respond to this email and
Greg will reply once he has dug out from the pending patches received
from other developers.

thanks,

greg k-h's patch email bot
