Message-ID: <5dbf6cf9e82ef15ce0febf070608da2d5b128763.camel@gmx.de>
Date:   Sun, 15 Aug 2021 06:17:01 +0200
From:   Mike Galbraith <efault@....de>
To:     Vlastimil Babka <vbabka@...e.cz>,
        Clark Williams <williams@...hat.com>
Cc:     Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
        Thomas Gleixner <tglx@...utronix.de>,
        LKML <linux-kernel@...r.kernel.org>,
        RT <linux-rt-users@...r.kernel.org>
Subject: Re: [ANNOUNCE] v5.14-rc5-rt8

On Sat, 2021-08-14 at 21:08 +0200, Vlastimil Babka wrote:
>
> Aha! That's helpful. Hopefully it's just a small issue where we
> opportunistically test flags on a page that's protected by the local
> lock we didn't take yet, and I didn't realize there's the VM_BUG_ON
> which can trigger if our page went away (which we would have realized
> after taking the lock).
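
IIUC, the shape of that race is something like the below (simplified
sketch with a made-up flag test and helper, not the actual slub code):

	/* unlocked snapshot of the percpu slab */
	page = c->page;

	/*
	 * Opportunistic flag test: nothing pins the page here, so it
	 * can be freed and recycled underneath us...
	 */
	if (some_page_flag_test(page))
		do_something(page);

	/*
	 * ...and a later VM_BUG_ON() that assumes the snapshot is
	 * still current can then fire.  Taking s->cpu_slab->lock
	 * first and rechecking c->page would catch the disappearance.
	 */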

Speaking of optimistic peeking perhaps going badly, why is the below
not true?  There's protection against ->partial disappearing during a
preemption... but can't it just as easily appear, so where is that
protection?
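
Concretely, the false negative window I have in mind (hypothetical
interleaving, not an observed trace):

	task A: ___slab_alloc()
	        slub_percpu_partial(c) == NULL	/* unlocked peek */
	        <A preempted>
	task B: put_cpu_partial()
	        populates c->partial
	        <A resumes>
	task A: falls through to get_partial() / new slab allocation,
	        never noticing the now populated ->partial list

Taking the lock before the peek closes that window, as below.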

If the other side of that window is safe, it could use a comment so
dummies reading this code don't end up asking mm folks why the heck
they don't just take the darn lock and be done with it instead of tap
dancing all around the thing :)

---
 mm/slub.c |   14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)

--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2937,17 +2937,16 @@ static void *___slab_alloc(struct kmem_c

 new_slab:

+	/*
+	 * To avoid false negative race with put_cpu_partial() during a
+	 * preemption, we must call slub_percpu_partial() under lock.
+	 */
+	local_lock_irqsave(&s->cpu_slab->lock, flags);
 	if (slub_percpu_partial(c)) {
-		local_lock_irqsave(&s->cpu_slab->lock, flags);
 		if (unlikely(c->page)) {
 			local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 			goto reread_page;
 		}
-		if (unlikely(!slub_percpu_partial(c))) {
-			local_unlock_irqrestore(&s->cpu_slab->lock, flags);
-			/* we were preempted and partial list got empty */
-			goto new_objects;
-		}

 		page = c->page = slub_percpu_partial(c);
 		slub_set_percpu_partial(c, page);
@@ -2955,8 +2954,7 @@ static void *___slab_alloc(struct kmem_c
 		stat(s, CPU_PARTIAL_ALLOC);
 		goto redo;
 	}
-
-new_objects:
+	local_unlock_irqrestore(&s->cpu_slab->lock, flags);

 	freelist = get_partial(s, gfpflags, node, &page);
 	if (freelist)

