Message-ID: <e7c9e904-c284-45ab-8022-2009a976a894@lucifer.local>
Date: Tue, 22 Apr 2025 18:55:17 +0100
From: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
To: Jeongjun Park <aha310510@...il.com>
Cc: Liam.Howlett@...cle.com, akpm@...ux-foundation.org, willy@...radead.org,
        linux-kernel@...r.kernel.org,
        syzbot+a2b84e569d06ca3a949c@...kaller.appspotmail.com
Subject: Re: [PATCH v2] ipc: fix to protect IPCS lookups using RCU

You sent this mail twice by mistake, and didn't link to the original
thread. Since you sent v1 two months ago, it's really useful to provide a
link (here it's [0]).

[0]: https://lore.kernel.org/all/20250214180157.10288-1-aha310510@gmail.com/

Your description is entirely missing any 'a syzkaller report encountered an
issue whereby [blah blah blah]'. Please add this.

On Tue, Apr 22, 2025 at 09:48:43PM +0900, Jeongjun Park wrote:
> idr_for_each() is protected by rwsem, but this is not enough. If it is not
> protected by the RCU read-critical region, when we call radix_tree_node_free()
> via call_rcu() to free the struct radix_tree_node, the node will be freed
> immediately, and when we read the next node in radix_tree_for_each_slot(),
> we can read the already freed memory.

What is calling call_rcu(), or radix_tree_for_each_slot() etc.? Presumably
idr_for_each()? Maybe worth saying so explicitly.
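
For what it's worth, idr_for_each() is (as far as I can tell) what ends up
doing the radix tree walk - roughly, paraphrasing lib/idr.c from memory, so
treat the details as approximate:

	int idr_for_each(const struct idr *idr,
			 int (*fn)(int id, void *p, void *data), void *data)
	{
		struct radix_tree_iter iter;
		void __rcu **slot;

		/*
		 * This traverses radix tree nodes which call_rcu() may
		 * free under us unless the caller is in an RCU read-side
		 * critical section.
		 */
		radix_tree_for_each_slot(slot, &idr->idr_rt, &iter, 0) {
			int ret = fn(iter.index + idr->idr_base,
				     rcu_dereference_raw(*slot), data);

			if (ret)
				return ret;
		}
		return 0;
	}

So spelling out the chain (shm_destroy_orphaned() -> idr_for_each() ->
radix_tree_for_each_slot()) in the commit message would make this much
clearer.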

>
> Therefore, we need to add code to make sure that idr_for_each() is protected
> within the RCU read-critical region when we call it in shm_destroy_orphaned().
>
> Reported-by: syzbot+a2b84e569d06ca3a949c@...kaller.appspotmail.com

In Matthew's reply to you in the original thread he says 'if anybody is
wondering what the hell this is about [link]'. It would have been useful to
include this link here [1].

[1]: https://lore.kernel.org/linux-kernel/67af13f8.050a0220.21dd3.0038.GAE@google.com/

> Fixes: b34a6b1da371 ("ipc: introduce shm_rmid_forced sysctl")
> Cc: Matthew Wilcox <willy@...radead.org>

You're cc-ing more people than this? There's no need for you to manually
add Cc: lines anyway. But this is just incorrect.

> Signed-off-by: Jeongjun Park <aha310510@...il.com>

Matthew seemed more aware of the behaviour in this code, so I'm going to
keep my review mostly to process nits.

Others can actually review the change... but it does seem you are broadly
doing what Matthew suggested.

Have you run with lockdep (i.e. CONFIG_PROVE_LOCKING) enabled? What testing
have you done?
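
For RCU-related changes like this it's worth testing against a debug
config - just a suggestion of the usual suspects, adjust as you see fit:

	CONFIG_PROVE_LOCKING=y		# lockdep
	CONFIG_PROVE_RCU=y		# follows from PROVE_LOCKING
	CONFIG_DEBUG_ATOMIC_SLEEP=y
	CONFIG_KASAN=y			# to actually catch the use-after-free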

Presumably it's ok to order the RCU read-side critical section and the
rwsem this way?

> ---
> v2: Change description and coding style

Actually, you've completely changed what you're doing here, so this is a
completely inaccurate changelog: in v1 you dropped the rwsem, now you
don't!! You should note this (i.e. what it is you're actually doing...)

Not sure what on earth you are changing in the coding style? You're adding
braces and 2 lines of code?

I don't think you really need to note a change in description, that's
implicit.

Also a:

v1:
https://lore.kernel.org/all/20250214180157.10288-1-aha310510@gmail.com/

is vital, esp. after such a delay, as noted above.

> ---
>  ipc/shm.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/ipc/shm.c b/ipc/shm.c
> index 99564c870084..492fcc699985 100644
> --- a/ipc/shm.c
> +++ b/ipc/shm.c
> @@ -431,8 +431,11 @@ static int shm_try_destroy_orphaned(int id, void *p, void *data)
>  void shm_destroy_orphaned(struct ipc_namespace *ns)
>  {
>  	down_write(&shm_ids(ns).rwsem);
> -	if (shm_ids(ns).in_use)
> +	if (shm_ids(ns).in_use) {
> +		rcu_read_lock();
>  		idr_for_each(&shm_ids(ns).ipcs_idr, &shm_try_destroy_orphaned, ns);
> +		rcu_read_unlock();
> +	}
>  	up_write(&shm_ids(ns).rwsem);
>  }
>
> --
