Open Source and information security mailing list archives
Message-ID: <Z0CcyfbPqmxJ9uJH@elver.google.com>
Date: Fri, 22 Nov 2024 16:01:29 +0100
From: Marco Elver <elver@...gle.com>
To: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc: Andrey Konovalov <andreyknvl@...il.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Vlastimil Babka <vbabka@...e.cz>,
	syzbot <syzbot+39f85d612b7c20d8db48@...kaller.appspotmail.com>,
	Liam.Howlett@...cle.com, akpm@...ux-foundation.org,
	jannh@...gle.com, linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	lorenzo.stoakes@...cle.com, syzkaller-bugs@...glegroups.com,
	kasan-dev <kasan-dev@...glegroups.com>,
	Andrey Ryabinin <ryabinin.a.a@...il.com>,
	Alexander Potapenko <glider@...gle.com>,
	Waiman Long <longman@...hat.com>, dvyukov@...gle.com,
	vincenzo.frascino@....com, paulmck@...nel.org, frederic@...nel.org,
	neeraj.upadhyay@...nel.org, joel@...lfernandes.org,
	josh@...htriplett.org, boqun.feng@...il.com, urezki@...il.com,
	rostedt@...dmis.org, mathieu.desnoyers@...icios.com,
	jiangshanlai@...il.com, qiang.zhang1211@...il.com, mingo@...hat.com,
	juri.lelli@...hat.com, vincent.guittot@...aro.org,
	dietmar.eggemann@....com, bsegall@...gle.com, mgorman@...e.de,
	vschneid@...hat.com, tj@...nel.org, cl@...ux.com,
	penberg@...nel.org, rientjes@...gle.com, iamjoonsoo.kim@....com,
	Thomas Gleixner <tglx@...utronix.de>, roman.gushchin@...ux.dev,
	42.hyeyoo@...il.com, rcu@...r.kernel.org
Subject: Re: [PATCH] kasan: Remove kasan_record_aux_stack_noalloc().

On Fri, Nov 22, 2024 at 12:32PM +0100, Sebastian Andrzej Siewior wrote:
> On 2024-11-19 20:36:56 [+0100], Andrey Konovalov wrote:
> > > diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
> > > index 6310a180278b6..b18b5944997f8 100644
> > > --- a/mm/kasan/generic.c
> > > +++ b/mm/kasan/generic.c
> > > @@ -521,7 +521,7 @@ size_t kasan_metadata_size(struct kmem_cache *cache, bool in_object)
> > >                         sizeof(struct kasan_free_meta) : 0);
> > >  }
> > >
> > > -static void __kasan_record_aux_stack(void *addr, depot_flags_t depot_flags)
> > 
> > Could you add a comment here that notes the usage, something like:
> > 
> > "This function avoids dynamic memory allocations and thus can be
> > called from contexts that do not allow allocating memory."
> > 
> > > +void kasan_record_aux_stack(void *addr)
> > >  {
> …
> Added but would prefer to add a pointer to stack_depot_save_flags()
> which has this Context: paragraph. Would that work?
> Now looking at it, it says:
> |  * Context: Any context, but setting STACK_DEPOT_FLAG_CAN_ALLOC is required if
> |  *          alloc_pages() cannot be used from the current context. Currently
> |  *          this is the case for contexts where neither %GFP_ATOMIC nor
> |  *          %GFP_NOWAIT can be used (NMI, raw_spin_lock).
> 
> If I understand this correctly, then STACK_DEPOT_FLAG_CAN_ALLOC must not
> be specified if invoked from NMI. This will stop
> stack_depot_save_flags() from allocating memory, but the function will
> still acquire pool_lock, right?
> Do we need to update the comment to say that it must not be used from
> NMI, or do we make it skip the locked section in the NMI case?

Good point. It was meant to also be usable from NMI, because saving is
very likely to succeed there: once a stack trace is already in the depot,
lookup takes the lock-less fast path and never touches pool_lock.

But I think we need a fix like this for initial saving of a stack trace:


diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index 5ed34cc963fc..245d5b416699 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -630,7 +630,15 @@ depot_stack_handle_t stack_depot_save_flags(unsigned long *entries,
 			prealloc = page_address(page);
 	}
 
-	raw_spin_lock_irqsave(&pool_lock, flags);
+	if (in_nmi()) {
+		/* We can never allocate in NMI context. */
+		WARN_ON_ONCE(can_alloc);
+		/* Best effort; bail if we fail to take the lock. */
+		if (!raw_spin_trylock_irqsave(&pool_lock, flags))
+			goto exit;
+	} else {
+		raw_spin_lock_irqsave(&pool_lock, flags);
+	}
 	printk_deferred_enter();
 
 	/* Try to find again, to avoid concurrently inserting duplicates. */


If that looks reasonable, I'll turn it into a patch.
