Message-ID: <Zw9_imsl2KLf7_GY@J2N7QTR9R3>
Date: Wed, 16 Oct 2024 09:55:38 +0100
From: Mark Rutland <mark.rutland@....com>
To: Ard Biesheuvel <ardb@...nel.org>
Cc: Linus Walleij <linus.walleij@...aro.org>,
	Clement LE GOFFIC <clement.legoffic@...s.st.com>,
	Russell King <linux@...linux.org.uk>,
	"Russell King (Oracle)" <rmk+kernel@...linux.org.uk>,
	Kees Cook <kees@...nel.org>,
	AngeloGioacchino Del Regno <angelogioacchino.delregno@...labora.com>,
	Mark Brown <broonie@...nel.org>,
	linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
	linux-stm32@...md-mailman.stormreply.com,
	Antonio Borneo <antonio.borneo@...s.st.com>
Subject: Re: Crash on armv7-a using KASAN

On Tue, Oct 15, 2024 at 07:28:06PM +0200, Ard Biesheuvel wrote:
> On Tue, 15 Oct 2024 at 18:27, Mark Rutland <mark.rutland@....com> wrote:
> >
> > On Tue, Oct 15, 2024 at 06:07:00PM +0200, Ard Biesheuvel wrote:
> > > On Tue, 15 Oct 2024 at 17:26, Mark Rutland <mark.rutland@....com> wrote:
> > > > Looking some more, I don't see how VMAP_STACK guarantees that the
> > > > old/active stack is mapped in the new mm when switching from the old mm
> > > > to the new mm (which happens before __switch_to()).
> > > >
> > > > Either I'm missing something, or we have a latent bug. Maybe we have
> > > > some explicit copying/prefaulting elsewhere I'm missing?
> > >
> > > We bump the vmalloc_seq counter for that. Given that the top-level
> > > page table can only gain entries covering the kernel space, this
> > > should be sufficient for the old task's stack to be mapped in the new
> > > task's page tables.
> >
> > Ah, yep -- I had missed that. Thanks for the pointer!
> >
> > From a superficial look, it sounds like it should be possible to extend
> > that to also handle the KASAN shadow of the vmalloc area (which
> > __check_vmalloc_seq() currently doesn't copy), but I'm not sure of
> > exactly when we initialise the shadow for a vmalloc allocation relative
> > to updating vmalloc_seq.
> >
> 
> Indeed. It appears both vmalloc_seq() and arch_sync_kernel_mappings()
> need to take the vmalloc shadow into account specifically. And we may
> also need the dummy read from the stack's shadow in __switch_to - I am
> pretty sure I added that for a reason.
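
For context, the mechanism being discussed looks roughly like the below (a
simplified sketch of arch/arm's check_vmalloc_seq(), not the verbatim code):
when a task's counter lags behind init_mm's, the top-level entries covering
the vmalloc area get copied into that task's page tables before the switch.

	/*
	 * Simplified sketch (not verbatim): init_mm's vmalloc_seq is
	 * bumped whenever the top-level entries covering the vmalloc
	 * area change; an mm that has fallen behind catches up by
	 * copying those entries from init_mm.
	 */
	static inline void check_vmalloc_seq(struct mm_struct *mm)
	{
		if (unlikely(atomic_read(&mm->context.vmalloc_seq) !=
			     atomic_read(&init_mm.context.vmalloc_seq)))
			__check_vmalloc_seq(mm);
	}

As for the dummy read in __switch_to: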

I believe that's necessary for the lazy TLB switch, at least for SMP:

	// CPU 0			// CPU 1

	<< switches to task X's mm >>

					<< creates kthread task Y >>
					<< maps task Y's new stack >>
					<< maps task Y's new shadow >>

					// Y switched out
					context_switch(..., Y, ..., ...);

	// Switch from X to Y
	context_switch(..., X, Y, ...) {
		// prev = X
		// next = Y

		if (!next->mm) { 
			// Y has no mm
			// No switch_mm() here
			// ... so no check_vmalloc_seq()
		} else {
			// not taken
		}

		...

		// X's mm still lacks Y's stack + shadow here

		switch_to(prev, next, prev);
	}

... so probably worth a comment that we're faulting in the new
stack+shadow for lazy TLB when switching to a task with no mm?

In the lazy TLB case the current/old mappings don't disappear from the
active mm, so there's no new mm that we'd need to copy them into, which is
what check_vmalloc_seq() is for.
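
Concretely, the dummy reads in question amount to something like the below
(pseudo-C for the asm in __switch_to in entry-armv.S, illustrative only;
'new_stack' is just a stand-in name for the incoming task's stack):

	/*
	 * Pseudo-C sketch of the VMAP_STACK dummy reads in __switch_to:
	 * touch the new stack (and, with KASAN_VMALLOC, its shadow)
	 * while still running on the old stack, so any stale top-level
	 * entries are fixed up via the translation fault handler,
	 * covering the lazy TLB case above where switch_mm(), and hence
	 * check_vmalloc_seq(), is never called.
	 */
	READ_ONCE(*(unsigned long *)new_stack);	/* new_stack: stand-in name */
	if (IS_ENABLED(CONFIG_KASAN_VMALLOC))
		READ_ONCE(*(u8 *)kasan_mem_to_shadow((void *)new_stack));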

Mark.
