Message-ID: <CACRpkda8tO=QLF_zznoNjdNfNZJVntY_3+247E=qK6zNqRnVSA@mail.gmail.com>
Date: Wed, 16 Oct 2024 21:00:22 +0200
From: Linus Walleij <linus.walleij@...aro.org>
To: Mark Rutland <mark.rutland@....com>
Cc: Ard Biesheuvel <ardb@...nel.org>, Clement LE GOFFIC <clement.legoffic@...s.st.com>, 
	Russell King <linux@...linux.org.uk>, 
	"Russell King (Oracle)" <rmk+kernel@...linux.org.uk>, Kees Cook <kees@...nel.org>, 
	AngeloGioacchino Del Regno <angelogioacchino.delregno@...labora.com>, Mark Brown <broonie@...nel.org>, 
	linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org, 
	linux-stm32@...md-mailman.stormreply.com, 
	Antonio Borneo <antonio.borneo@...s.st.com>
Subject: Re: Crash on armv7-a using KASAN

On Wed, Oct 16, 2024 at 10:55 AM Mark Rutland <mark.rutland@....com> wrote:

> I believe that's necessary for the lazy TLB switch, at least for SMP:
>
>         // CPU 0                        // CPU 1
>
>         << switches to task X's mm >>
>
>                                         << creates kthread task Y >>
>                                         << maps task Y's new stack >>
>                                         << maps task Y's new shadow >>
>
>                                         // Y switched out
>                                         context_switch(..., Y, ..., ...);
>
>         // Switch from X to Y
>         context_switch(..., X, Y, ...) {
>                 // prev = X
>                 // next = Y
>
>                 if (!next->mm) {
>                         // Y has no mm
>                         // No switch_mm() here
>                         // ... so no check_vmalloc_seq()
>                 } else {
>                         // not taken
>                 }
>
>                 ...
>
>                 // X's mm still lacks Y's stack + shadow here
>
>                 switch_to(prev, next, prev);
>         }
>
> ... so probably worth a comment that we're faulting in the new
> stack+shadow for lazy tlb when switching to a task with no mm?

Switching to a task with no mm == switching to a kernel daemon.

And those only use kernel memory and rely on that always
being mapped in any previous mm context, right.

But where do we put that comment? In kernel/sched/core.c
context_switch()?

It's the same on every architecture, I think, and pretty much all
of them support KASAN these days.

Or in ARM32's enter_lazy_tlb() in arch/arm/include/asm/mmu_context.h?

I'm unsure. I would make it a separate patch.

> In the lazy tlb case the current/old mappings don't disappear from the
> active mm, and so we don't need to go add those to the new mm, which is what
> we need check_vmalloc_seq() for.

Yup, that's how I understand it too.

Yours,
Linus Walleij
