Message-ID: <a19c1338f2fa4cb19a4f8b7552ff54ded20b403a.camel@intel.com>
Date: Fri, 25 Jul 2025 18:05:22 +0000
From: "Edgecombe, Rick P" <rick.p.edgecombe@...el.com>
To: "debug@...osinc.com" <debug@...osinc.com>
CC: "nathan@...nel.org" <nathan@...nel.org>, "kito.cheng@...ive.com"
	<kito.cheng@...ive.com>, "jeffreyalaw@...il.com" <jeffreyalaw@...il.com>,
	"lorenzo.stoakes@...cle.com" <lorenzo.stoakes@...cle.com>, "mhocko@...e.com"
	<mhocko@...e.com>, "charlie@...osinc.com" <charlie@...osinc.com>,
	"david@...hat.com" <david@...hat.com>, "masahiroy@...nel.org"
	<masahiroy@...nel.org>, "samitolvanen@...gle.com" <samitolvanen@...gle.com>,
	"conor.dooley@...rochip.com" <conor.dooley@...rochip.com>,
	"bjorn@...osinc.com" <bjorn@...osinc.com>, "linux-riscv@...ts.infradead.org"
	<linux-riscv@...ts.infradead.org>, "nicolas.schier@...ux.dev"
	<nicolas.schier@...ux.dev>, "linux-kernel@...r.kernel.org"
	<linux-kernel@...r.kernel.org>, "andrew@...ive.com" <andrew@...ive.com>,
	"monk.chiang@...ive.com" <monk.chiang@...ive.com>, "justinstitt@...gle.com"
	<justinstitt@...gle.com>, "palmer@...belt.com" <palmer@...belt.com>,
	"morbo@...gle.com" <morbo@...gle.com>, "aou@...s.berkeley.edu"
	<aou@...s.berkeley.edu>, "nick.desaulniers+lkml@...il.com"
	<nick.desaulniers+lkml@...il.com>, "rppt@...nel.org" <rppt@...nel.org>,
	"broonie@...nel.org" <broonie@...nel.org>, "ved@...osinc.com"
	<ved@...osinc.com>, "heinrich.schuchardt@...onical.com"
	<heinrich.schuchardt@...onical.com>, "vbabka@...e.cz" <vbabka@...e.cz>,
	"Liam.Howlett@...cle.com" <Liam.Howlett@...cle.com>, "alex@...ti.fr"
	<alex@...ti.fr>, "fweimer@...hat.com" <fweimer@...hat.com>,
	"surenb@...gle.com" <surenb@...gle.com>, "linux-kbuild@...r.kernel.org"
	<linux-kbuild@...r.kernel.org>, "cleger@...osinc.com" <cleger@...osinc.com>,
	"samuel.holland@...ive.com" <samuel.holland@...ive.com>,
	"llvm@...ts.linux.dev" <llvm@...ts.linux.dev>, "paul.walmsley@...ive.com"
	<paul.walmsley@...ive.com>, "ajones@...tanamicro.com"
	<ajones@...tanamicro.com>, "linux-mm@...ck.org" <linux-mm@...ck.org>,
	"apatel@...tanamicro.com" <apatel@...tanamicro.com>,
	"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>
Subject: Re: [PATCH 10/11] scs: generic scs code updated to leverage hw
 assisted shadow stack

On Fri, 2025-07-25 at 10:19 -0700, Deepak Gupta wrote:
> > This doesn't update the direct map alias I think. Do you want to protect it?
> 
> Yes, any alternate address mapping which is writeable is a problem and dilutes
> the mechanism. How do I go about updating the direct map? (I'm pretty new to
> the linux kernel and have a limited understanding of which kernel APIs to use
> here to unmap the direct map.)

Here is some info on how it works:

set_memory_foo() variants should (I didn't check the riscv implementation, but
on x86 they do) update the target addresses passed in *and* the direct map
alias, and flush the TLB.
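
To make that concrete, here is a rough sketch of an alloc path that protects
both aliases. set_memory_shadow_stack() is an assumed helper; its riscv
implementation would need to cover the direct map the way x86's set_memory_*()
does:

```c
/*
 * Hypothetical sketch only. Assumes a set_memory_shadow_stack() helper
 * whose arch implementation, like x86's set_memory_*(), updates both
 * the vmalloc alias passed in and the direct map alias, and flushes
 * the TLB.
 */
static void *scs_alloc_protected(int node)
{
	void *s;

	s = __vmalloc_node_range(SCS_SIZE, 1, VMALLOC_START, VMALLOC_END,
				 GFP_SCS, PAGE_KERNEL, 0, node,
				 __builtin_return_address(0));
	if (!s)
		return NULL;

	/* Convert both aliases to shadow stack permissions */
	if (set_memory_shadow_stack((unsigned long)s, SCS_SIZE / PAGE_SIZE)) {
		vfree(s);
		return NULL;
	}
	return s;
}
```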

vmalloc_node_range() will just set the permission on the vmalloc alias and not
touch the direct map alias.

vfree() works by trying to batch the flushing for unmap operations to avoid
flushing the TLB too much. When memory is unmapped in userspace, it will only
flush on the CPUs with that MM (process address space). But for kernel memory
the mappings are shared between all CPUs. So, like on a big server or something,
it requires way more work and remote IPIs, etc. So vmalloc will try to be
efficient and keep zapped mappings unflushed until it has enough to clean them
up in bulk. In the meantime it won't reuse that vmalloc address space.

But this means there can also be other vmalloc aliases still in the TLB for any
page that gets allocated from the page allocator. If you want to be fully sure
there are no writable aliases, you need to call vm_unmap_aliases() each time you
change kernel permissions, which will do the vmalloc TLB flush immediately. Many
set_memory() implementations call this automatically, but it looks like riscv's
does not.
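
If riscv's set_memory_*() doesn't do it for you, the caller can flush the lazy
vmalloc aliases itself. Something like this (sketch, not tested; the conversion
helper is assumed):

```c
/*
 * Sketch: after changing permissions, flush any lazily-unmapped
 * vmalloc aliases so no stale writable TLB entries remain.
 * set_memory_shadow_stack() is the assumed conversion helper.
 */
set_memory_shadow_stack((unsigned long)s, SCS_SIZE / PAGE_SIZE);
vm_unmap_aliases();	/* immediate vmalloc TLB flush, not lazy */
```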


So doing something like vmalloc() + set_memory_shadow_stack() on alloc, and
set_memory_rw() + vfree() on free, is doing the expensive flush (how expensive
depends on the device) in a previously fast path. Ignoring the direct map alias
is faster. A middle ground would be to do the allocation/conversion and freeing
of a bunch of stacks at once, and recycle them.
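
The recycling idea could be as simple as a small pool of already-converted
stacks, so the convert + flush cost is paid once per batch rather than per
task. All names below are made up for illustration:

```c
/* Hypothetical sketch of a recycle pool for converted shadow stacks. */
#define SCS_POOL_SIZE	16

static void *scs_pool[SCS_POOL_SIZE];
static DEFINE_SPINLOCK(scs_pool_lock);

/* Returns a still-protected stack, or NULL if the pool is empty. */
static void *scs_pool_get(void)
{
	void *s = NULL;
	int i;

	spin_lock(&scs_pool_lock);
	for (i = 0; i < SCS_POOL_SIZE; i++) {
		if (scs_pool[i]) {
			s = scs_pool[i];
			scs_pool[i] = NULL;
			break;
		}
	}
	spin_unlock(&scs_pool_lock);
	return s;
}

/*
 * Returns true if the stack was cached still-protected; false means
 * the caller should fall back to set_memory_rw() + vfree().
 */
static bool scs_pool_put(void *s)
{
	bool cached = false;
	int i;

	spin_lock(&scs_pool_lock);
	for (i = 0; i < SCS_POOL_SIZE; i++) {
		if (!scs_pool[i]) {
			scs_pool[i] = s;
			cached = true;
			break;
		}
	}
	spin_unlock(&scs_pool_lock);
	return cached;
}
```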


You could make it tidy first and then optimize it later, or make it faster first
and maximally secure later. Or try to do it all at once. But there have long
been discussions on batching-type kernel memory permission solutions. So it
could be a whole project in itself.

> 
> > 
> > > 
> > >   out:
> > > @@ -59,7 +72,7 @@ void *scs_alloc(int node)
> > >   	if (!s)
> > >   		return NULL;
> > > 
> > > -	*__scs_magic(s) = SCS_END_MAGIC;
> > > +	__scs_store_magic(__scs_magic(s), SCS_END_MAGIC);
> > > 
> > >   	/*
> > >   	 * Poison the allocation to catch unintentional accesses to
> > > @@ -87,6 +100,16 @@ void scs_free(void *s)
> > >   			return;
> > > 
> > >   	kasan_unpoison_vmalloc(s, SCS_SIZE, KASAN_VMALLOC_PROT_NORMAL);
> > > +	/*
> > > +	 * Hardware protected shadow stack is not writeable by regular
> > > stores
> > > +	 * Thus adding this back to free list will raise faults by
> > > vmalloc
> > > +	 * It needs to be writeable again. It's good sanity as well
> > > because
> > > +	 * then it can't be inadvertently accesses and if done, it will
> > > fault.
> > > +	 */
> > > +#ifdef CONFIG_ARCH_HAS_KERNEL_SHADOW_STACK
> > > +	set_memory_rw((unsigned long)s, (SCS_SIZE/PAGE_SIZE));
> > 
> > Above you don't update the direct map permissions. So I don't think you need
> > this. vmalloc should flush the permissioned mapping before re-using it with
> > the
> > lazy cleanup scheme.
> 
> If I didn't do this, I was getting a page fault on this vmalloc address. It
> directly uses the first 8 bytes to add it into some list, and that was the
> location of the fault.

Ah right! Because it is using the vfree atomic variant.

You could create your own WQ in SCS and call vfree() from non-atomic context, if
you want to avoid the set_memory_rw() on free in the ignoring-the-direct-map
case.
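
I.e. something along these lines (sketch; names are hypothetical):

```c
/*
 * Sketch: defer the free to a workqueue so it runs in process context,
 * where plain vfree() is allowed, instead of using vfree_atomic().
 */
struct scs_free_req {
	struct work_struct work;
	void *addr;
};

static void scs_free_work_fn(struct work_struct *work)
{
	struct scs_free_req *req = container_of(work, struct scs_free_req, work);

	vfree(req->addr);	/* non-atomic context, plain vfree() is fine */
	kfree(req);
}

static void scs_free_deferred(void *s)
{
	struct scs_free_req *req = kmalloc(sizeof(*req), GFP_ATOMIC);

	if (!req)
		return;		/* real code would need a fallback here */

	req->addr = s;
	INIT_WORK(&req->work, scs_free_work_fn);
	queue_work(system_wq, &req->work);
}
```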
