Date: Thu, 06 Jul 2023 12:47:50 +0200
From: Paolo Abeni <pabeni@...hat.com>
To: Niklas Schnelle <schnelle@...ux.ibm.com>, Alexandra Winter
 <wintera@...ux.ibm.com>, Wenjia Zhang <wenjia@...ux.ibm.com>, Heiko
 Carstens <hca@...ux.ibm.com>, Vasily Gorbik <gor@...ux.ibm.com>, Alexander
 Gordeev <agordeev@...ux.ibm.com>, Christian Borntraeger
 <borntraeger@...ux.ibm.com>,  Sven Schnelle <svens@...ux.ibm.com>, Jan
 Karcher <jaka@...ux.ibm.com>, Stefan Raspl <raspl@...ux.ibm.com>,  "David
 S. Miller" <davem@...emloft.net>
Cc: Julian Ruess <julianr@...ux.ibm.com>, linux-s390@...r.kernel.org, 
	netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH net] s390/ism: Fix locking for forwarding of IRQs and
 events to clients

On Wed, 2023-07-05 at 14:17 +0200, Niklas Schnelle wrote:
> The clients array references all registered clients and is protected by
> the clients_lock. Besides its use as a general list of clients, the
> clients array is accessed in ism_handle_irq() to forward IRQs and events
> to clients. This use in an interrupt handler thus requires all code that
> takes the clients_lock to be IRQ safe.
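> 
> A minimal sketch of the pattern this forces on every taker of the
> clients_lock (illustrative only, not the exact driver code):
> 
> 	unsigned long flags;
> 
> 	/* ism_handle_irq() also takes clients_lock, so interrupts must
> 	 * be disabled here to avoid deadlocking against the handler
> 	 * on the same CPU.
> 	 */
> 	spin_lock_irqsave(&clients_lock, flags);
> 	clients[client->id] = client;
> 	spin_unlock_irqrestore(&clients_lock, flags);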
> 
> This is problematic since the add() and remove() callbacks, which are
> called for all clients when an ISM device is added or removed, cannot be
> called directly while iterating over the clients array and holding the
> clients_lock, since clients need to allocate and/or take mutexes in
> these callbacks. To deal with this, the calls are pushed to workqueues,
> with additional housekeeping to be able to wait for their completion
> outside the clients_lock.
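> 
> Roughly, the deferral and housekeeping look like this (helper and
> field names here are assumptions for illustration):
> 
> 	/* client->add() may sleep, so it cannot run under the
> 	 * IRQ-safe clients_lock; defer it to a workqueue instead
> 	 */
> 	INIT_WORK(&client->add_work, ism_dev_add_work_func);
> 	queue_work(system_wq, &client->add_work);
> 	/* ...and wait for it once the clients_lock is dropped */
> 	wait_for_completion(&client->add_done);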
> 
> Moreover, while the clients_lock is taken in the IRQ handler when
> calling handle_event(), it is incorrectly not held during the
> client->handle_irq() call and for the preceding clients[] access. This
> leaves the clients array unprotected. Similarly, the accesses to
> ism->sba_client_arr[] in ism_register_dmb() and ism_unregister_dmb() are
> also not protected by any lock. This is especially problematic as the
> client ID from ism->sba_client_arr[] is not checked against NO_CLIENT.
> 
> Instead of expanding the use of the clients_lock further, add a separate
> array in struct ism_dev which references clients subscribed to the
> device's events and IRQs. This array is protected by ism->lock, which is
> already taken in ism_handle_irq() and can be taken outside the IRQ
> handler when adding/removing subscribers or when accessing
> ism->sba_client_arr[].
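> 
> Sketched against the names quoted above (the subscriber array name
> "subs" is an assumption), the IRQ path then becomes:
> 
> 	/* ism->lock is already held in ism_handle_irq() */
> 	client_id = ism->sba_client_arr[bit];
> 	if (client_id != NO_CLIENT && ism->subs[client_id])
> 		ism->subs[client_id]->handle_irq(ism, bit, dmbemask);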
> 
> With the clients_lock no longer accessed from IRQ context, it is turned
> into a mutex, and the add and remove workqueues plus their housekeeping
> can be removed in favor of simple direct calls.
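> 
> E.g. (sketch only), client registration can then invoke the sleeping
> callbacks directly:
> 
> 	mutex_lock(&clients_lock);
> 	clients[client->id] = client;
> 	client->add(ism);	/* may sleep now that this is a mutex */
> 	mutex_unlock(&clients_lock);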
> 
> Fixes: 89e7d2ba61b7 ("net/ism: Add new API for client registration")
> Tested-by: Julian Ruess <julianr@...ux.ibm.com>
> Reviewed-by: Julian Ruess <julianr@...ux.ibm.com>
> Reviewed-by: Alexandra Winter <wintera@...ux.ibm.com>
> Reviewed-by: Wenjia Zhang <wenjia@...ux.ibm.com>
> Signed-off-by: Niklas Schnelle <schnelle@...ux.ibm.com>
> ---
> Note: I realize this is a rather large patch, so I'd understand if it's not
> acceptable as is and needs to be broken up. That said, it removes more lines
> than it adds, and in my opinion the complexity of the resulting code is
> reduced.

This is indeed unusually large for a -net patch. IMHO it would be
better to split it into 2 separate patches: one introducing the
ism->lock and one turning the clients_lock into a mutex. The series
should still target -net, but would be more easily reviewable.

Thanks,

Paolo

