Message-ID: <ZtJ1zJWV60NGI6vi@ghost>
Date: Fri, 30 Aug 2024 18:45:48 -0700
From: Charlie Jenkins <charlie@...osinc.com>
To: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
Cc: Arnd Bergmann <arnd@...db.de>,
	Richard Henderson <richard.henderson@...aro.org>,
	Ivan Kokshaysky <ink@...assic.park.msu.ru>,
	Matt Turner <mattst88@...il.com>, Vineet Gupta <vgupta@...nel.org>,
	Russell King <linux@...linux.org.uk>, Guo Ren <guoren@...nel.org>,
	Huacai Chen <chenhuacai@...nel.org>,
	WANG Xuerui <kernel@...0n.name>,
	Thomas Bogendoerfer <tsbogend@...ha.franken.de>,
	"James E.J. Bottomley" <James.Bottomley@...senpartnership.com>,
	Helge Deller <deller@....de>, Michael Ellerman <mpe@...erman.id.au>,
	Nicholas Piggin <npiggin@...il.com>,
	Christophe Leroy <christophe.leroy@...roup.eu>,
	Naveen N Rao <naveen@...nel.org>,
	Alexander Gordeev <agordeev@...ux.ibm.com>,
	Gerald Schaefer <gerald.schaefer@...ux.ibm.com>,
	Heiko Carstens <hca@...ux.ibm.com>,
	Vasily Gorbik <gor@...ux.ibm.com>,
	Christian Borntraeger <borntraeger@...ux.ibm.com>,
	Sven Schnelle <svens@...ux.ibm.com>,
	Yoshinori Sato <ysato@...rs.sourceforge.jp>,
	Rich Felker <dalias@...c.org>,
	John Paul Adrian Glaubitz <glaubitz@...sik.fu-berlin.de>,
	"David S. Miller" <davem@...emloft.net>,
	Andreas Larsson <andreas@...sler.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
	Dave Hansen <dave.hansen@...ux.intel.com>, x86@...nel.org,
	"H. Peter Anvin" <hpa@...or.com>, Andy Lutomirski <luto@...nel.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Muchun Song <muchun.song@...ux.dev>,
	Andrew Morton <akpm@...ux-foundation.org>,
	"Liam R. Howlett" <Liam.Howlett@...cle.com>,
	Vlastimil Babka <vbabka@...e.cz>, Shuah Khan <shuah@...nel.org>,
	linux-arch@...r.kernel.org, linux-kernel@...r.kernel.org,
	linux-alpha@...r.kernel.org, linux-snps-arc@...ts.infradead.org,
	linux-arm-kernel@...ts.infradead.org, linux-csky@...r.kernel.org,
	loongarch@...ts.linux.dev, linux-mips@...r.kernel.org,
	linux-parisc@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org,
	linux-s390@...r.kernel.org, linux-sh@...r.kernel.org,
	sparclinux@...r.kernel.org, linux-mm@...ck.org,
	linux-kselftest@...r.kernel.org
Subject: Re: [PATCH RFC v2 0/4] mm: Introduce MAP_BELOW_HINT

On Fri, Aug 30, 2024 at 10:52:01AM +0100, Lorenzo Stoakes wrote:
> On Thu, Aug 29, 2024 at 03:16:53PM GMT, Charlie Jenkins wrote:
> > On Thu, Aug 29, 2024 at 10:54:25AM +0100, Lorenzo Stoakes wrote:
> > > On Thu, Aug 29, 2024 at 09:42:22AM GMT, Lorenzo Stoakes wrote:
> > > > On Thu, Aug 29, 2024 at 12:15:57AM GMT, Charlie Jenkins wrote:
> > > > > Some applications rely on placing data in the free bits of addresses
> > > > > allocated by mmap. Various architectures (e.g. x86, arm64, powerpc) restrict the
> > > > > address returned by mmap to be less than the 48-bit address space,
> > > > > unless the hint address uses more than 47 bits (the 48th bit is reserved
> > > > > for the kernel address space).
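> > > > >
> > > > > As a minimal sketch of that existing opt-out behavior (the hint values and
> > > > > sizes here are only illustrative, and the second mapping assumes a kernel
> > > > > and CPU with 5-level page tables):
> > > > >
> > > > > 	#include <sys/mman.h>
> > > > >
> > > > > 	int main(void)
> > > > > 	{
> > > > > 		size_t len = 4096;
> > > > >
> > > > > 		/* Hint below the 48-bit boundary: the returned address stays below it. */
> > > > > 		void *lo = mmap((void *)(1UL << 40), len, PROT_READ | PROT_WRITE,
> > > > > 				MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
> > > > >
> > > > > 		/* Hint above the boundary: the full virtual address space may be used. */
> > > > > 		void *hi = mmap((void *)(1UL << 55), len, PROT_READ | PROT_WRITE,
> > > > > 				MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
> > > > >
> > > > > 		return lo == MAP_FAILED || hi == MAP_FAILED;
> > > > > 	}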
> > > >
> > > > I'm still confused as to why, if an mmap flag is desired, and thus programs
> > > > are having to be heavily modified and controlled to be able to do this,
> > > > you can't just do an mmap() with PROT_NONE early, around a hinted address
> > > > that sits below the required limit, and then mprotect() or mmap() over it?
> > > >
> > > > Your feature is a major adjustment to mmap(), it needs to be pretty
> > > > significantly justified, especially if taking up a new flag.
> > > >
> > > > >
> > > > > The riscv architecture needs a way to similarly restrict the virtual
> > > > > address space. The riscv port of OpenJDK throws an error if it is run on
> > > > > the 57-bit address space, called sv57 [1]. golang
> > > > > has a comment that sv57 support is not complete, but there are some
> > > > > workarounds to get it to mostly work [2].
> > > > >
> > > > > These applications work on x86 because x86 does an implicit 47-bit
> > > > > restriction of mmap() addresses when the hint address is less than
> > > > > 48 bits.
> > > >
> > > > You mean x86 _has_ to limit to physically available bits in a canonical
> > > > format :) this will not be the case with 5-level page tables though...
> >
> > I might be misunderstanding but I am not talking about pointer masking
> > or canonical addresses here. I am referring to the pattern of:
> >
> > 1. Getting an address from mmap()
> > 2. Writing data into bits assumed to be unused in the address
> > 3. Using the data stored in the address
> > 4. Clearing the data from the address and sign extending
> > 5. Dereferencing the now sign-extended address to conform to canonical
> >    addresses
> >
> > I am just talking about step 1 and 2 here -- getting an address from
> > mmap() that only uses bits that will allow your application to not
> > break. How canonicalization happens is a separate conversation that
> > can be handled by LAM for x86, TBI for arm64, or Ssnpm for riscv.
> > While LAM for x86 is only capable of masking addresses to 48 or 57 bits,
> > Ssnpm for riscv allows an arbitrary number of bits to be masked out.
> > A design goal here is to be able to support all of the pointer masking
> > flavors, and not just x86.
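> >
> > Concretely, steps 1-5 look roughly like this (a minimal sketch; the tag
> > value, the 48-bit assumption, and the sign-extension trick are only
> > illustrative):
> >
> > 	#include <sys/mman.h>
> > 	#include <stdint.h>
> >
> > 	int main(void)
> > 	{
> > 		/* 1. Get an address from mmap(); assume it fits in 48 bits. */
> > 		uint64_t *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
> > 				   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
> > 		if (p == MAP_FAILED)
> > 			return 1;
> >
> > 		/* 2. Stash data in the (assumed unused) top 16 bits. */
> > 		uintptr_t tagged = (uintptr_t)p | ((uintptr_t)0x2a << 48);
> >
> > 		/* 3. Use the stored data. */
> > 		unsigned tag = tagged >> 48;
> >
> > 		/* 4. Clear the tag and sign-extend back to a canonical pointer. */
> > 		uint64_t *clean = (uint64_t *)((intptr_t)(tagged << 16) >> 16);
> >
> > 		/* 5. Dereference the now-canonical address. */
> > 		*clean = tag;
> > 		return 0;
> > 	}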
> 
> Right, I get that, I was just saying that the implicit limitation in x86 is
> due to virtual addresses _having_ to be less than 48 bits. So that's why
> that is, right? I mean perhaps I'm mistaken?
> 
> Or is it such that x86 can provide a space for tagging for CPU technology
> that supports it (UAI perhaps?).
> 
> I agree with what Michal and others said about the decision to default to
> the reduced address space size and opt-in for higher bits. Your series
> doesn't do this...
> 
> >
> > > >
> > > > >
> > > > > Instead of implicitly restricting the address space on riscv (or any
> > > > > current/future architecture), a flag would allow users to opt-in to this
> > > > > behavior, rather than opt-out as is done on other architectures. This is
> > > > > desirable because only a small class of applications does pointer
> > > > > masking.
> > > >
> > > > I raised this last time and you didn't seem to address it so to be more
> > > > blunt:
> > > >
> > > > I don't understand why this needs to be an mmap() flag. From this it seems
> > > > the whole process needs allocations to be below a certain limit.
> >
> > Yeah making it per-process does seem logical, as it would help with
> > pointer masking.
> 
> To me it's the only feasible way forward. You can't control all libraries,
> and a map flag continues to seem a strange way to implement this. I
> understand that your justification is that it is the _least invasive_ way
> of doing this, but as I've said below, it's actually pretty invasive if you
> think about it; the current implementation seems to me to be insufficient
> without having VMA flags etc.
> 
> >
> > > >
> > > > That _could_ be achieved through a 'personality' or similar (though a
> > > > personality is on/off, rather than allowing configuration, so maybe
> > > > something else would be needed).
> > > >
> > > > From what you're saying 57-bit is all you really need right? So maybe
> > > > ADDR_LIMIT_57BIT?
> >
> > Addresses will always be limited to 57 bits on riscv and x86 (but not
> > necessarily on other architectures). A flag like that would have no
> > impact, I do not understand what you are suggesting. This patch is to
> > have a configurable number of bits be restricted.
> 
> I get that, but as I say below, I don't think a customisable limit is
> workable.
> 
> So I was trying to find a compromise that _might_ be more workable.
> 
> >
> > If anything, a personality that was ADDR_LIMIT_48BIT would be the
> > closest to what I am trying to achieve. Since the issue is that
> > applications fail to work when the address space is greater than 48
> > bits.
> 
> OK so this is at least some possible road forward given there is quite a
> bit of push-back to alternatives.
> 
> >
> > > >
> > > > I don't see how you're going to actually enforce this in a process either
> > > > via an mmap flag, as a library might decide not to use it, so you'd need to
> > > > control the allocator, the thread library implementation, and everything
> > > > that might allocate.
> >
> > It is reasonable to change the implementation to be per-process but that
> > is not the current proposal.
> 
> I mean maybe I wasn't direct enough - I oppose the current proposal as-is.
> 
> >
> > This flag was designed for applications which already directly manage
> > all of their addresses like OpenJDK and Go.
> >
> > This flag implementation was an attempt to make this feature as minimally
> > invasive as possible to reduce maintenance burden and implementation
> > complexity.
> 
> I realise, and as I said below, I don't think your implementation is
> correct in this form.
> 
> Also if you can control everything + for whatever reason can _absolutely
> know_ that no program will use an FFI or a 3rd party library or whatever that
> mremap()'s, I don't see why you can't use mmap() in a creative way to solve
> this rather than adding maintenance burden.
> 
> A couple ideas:
> 
> 1. mmap(high_address - domain_size - buffer, ..., PROT_NONE, MAP_FIXED,
>    ...) a vast domain. You will almost certainly get the hint you
>    want. Then mprotect() regions to PROT_READ | PROT_WRITE as you use (or
>    even mmap() with MAP_FIXED_REPLACE over them), all will have high bits
>    clear.
> 
> 2. (suggested by Liam separately) mmap() with PROT_NONE addresses in the
>    higher range, which prevents mmap() or any other means of allocating
>    memory from allocating there. Acting as a 'huge guard page'.
> 
> Neither requires any changes.
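> 
> For example, idea 2 might look something like the sketch below (assuming a
> 48-bit cutoff; the exact top-of-userspace value and sizes are illustrative
> and vary by architecture):
> 
> 	#include <sys/mman.h>
> 
> 	int main(void)
> 	{
> 		/* Reserve everything from 2^47 up towards the top of the user
> 		 * address space so nothing else can ever be mapped there,
> 		 * acting as one huge PROT_NONE guard region. */
> 		size_t guard_len = (1UL << 56) - (1UL << 47);
> 		void *guard = mmap((void *)(1UL << 47), guard_len, PROT_NONE,
> 				   MAP_PRIVATE | MAP_ANONYMOUS |
> 				   MAP_NORESERVE | MAP_FIXED_NOREPLACE, -1, 0);
> 		return guard == MAP_FAILED;
> 	}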
> 
> You kinda can't have it both ways - if you are absolutely controlling all
> allocations with no risk of a 3rd party library doing an allocation outside
> of this - then you can just use existing mechanics.
> 
> If you don't, then MAP_BELOW_HINT is insufficient.
> 
> >
> > > >
> > > > Liam also raised various points about VMA particulars that I'm not sure are
> > > > addressed either.
> > > >
> > > > I just find it hard to believe that everything will fit together.
> > > >
> > > > I'd _really_ need to be convinced that this MAP_ flag is justified, and I'm
> > > > just not.
> > > >
> > > > >
> > > > > This flag will also allow seamless compatibility between all
> > > > > architectures, so applications like Go and OpenJDK that use bits in a
> > > > > virtual address can request the exact number of bits they need in a
> > > > > generic way. The flag can be checked inside of vm_unmapped_area() so
> > > > > that this flag does not have to be handled individually by each
> > > > > architecture.
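> > > > >
> > > > > From userspace the proposed usage would be along these lines (a sketch only;
> > > > > MAP_BELOW_HINT is the new flag introduced by this series, not existing UAPI):
> > > > >
> > > > > 	/* Ask that the whole mapping be placed below the hint address, so
> > > > > 	 * every returned address fits in 48 bits on any architecture. */
> > > > > 	void *p = mmap((void *)(1UL << 47), 4096, PROT_READ | PROT_WRITE,
> > > > > 		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_BELOW_HINT, -1, 0);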
> > > >
> > > > I'm still very unconvinced and feel the bar needs to be high for making
> > > > changes like this that carry maintainership burden.
> > > >
> >
> > I may be naive but what is the burden here? It's two lines of code to
> > check MAP_BELOW_HINT and restrict the address. There are the additional
> > flags for hint and mmap_addr but those are also trivial to implement.
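> >
> > Something along these lines inside the generic search, as a sketch of the
> > idea rather than the actual patch (field names follow struct
> > vm_unmapped_area_info; the flag itself is the one proposed here):
> >
> > 	/* Clamp the search so the returned mapping falls below the hint. */
> > 	if (flags & MAP_BELOW_HINT)
> > 		info.high_limit = min(info.high_limit, addr);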
> 
> You're taking up a MAP_ flag (in short supply) which we have to maintain
> forever across all arches and have to respect a limited map range.
> 
> And everything in this realm has edge cases. I don't see how you can
> implement this correctly or usefully without a VMA flag, and see below for
> my concerns on that.
> 
> This is UAPI (and really UABI) so this is _forever_. The bar is high. To me
> this proposal does not hit that, and as you keep saying this isn't even
> what you want.
> 
> You want something per-process so I think the correct proposal is
> per-process.
> 
> A configurable per-process thing is horrible in itself, so I think the only
> workable proposal is a fixed personality.
> 
> >
> > > > So for me, it's a no really as an overall concept.
> > > >
> > > > Happy to be convinced otherwise, however... (I may be missing details or
> > > > context that provide more justification).
> > > >
> > >
> > > Some more thoughts:
> > >
> > > * If you absolutely must keep allocations below a certain limit, you'd
> > >   probably need to actually associate this information with the VMA so the
> > >   memory can't be mremap()'d somewhere invalid (you might not control all
> > >   code so you can't guarantee this won't happen).
> > > * Keeping a map limit associated with a VMA would be horrid and keeping
> > >   VMAs as small as possible is a key aim, so that'd be a no go. VMA flags
> > >   are in limited supply also.
> >
> > Yes that does seem like it would be challenging.
> 
> Right so to me this rules out the MAP_BELOW_HINT. And makes this
> implementation invalid.
> 
> >
> > > * If we did implement a per-process thing, but it were arbitrary, we'd then
> > >   have to handle all kinds of corner cases forever (this is UAPI, can't
> > >   break it etc.) with crazy-low values, or determine a minimum that might
> > >   vary by arch...
> >
> > Throwing an error if the value is determined to be "too low" seems
> > reasonable.
> 
> What's "too low"? This will vary by arch too right? Keep in mind this is
> 'forever'...
> 
> >
> > > * If we did this we'd absolutely have to implement a check in the brk()
> > >   implementation, which is a very very sensitive bit of code. And of
> > >   course, in mmap() and mremap()... and any arch-specific code that might
> > >   interface with this stuff (these functions are hooked).
> > > * A fixed address limit would make more sense, but it seems difficult to
> > >   know what would work for everybody, and again we'd have to deal with edge
> > >   cases and having a permanent maintenance burden.
> >
> > A fixed value is not ideal, since a single size probably would not be
> > sufficient for every application. However, if necessary we could fix it
> > to 48-bits since arm64 and x86 already do that, and that would still
> > allow a generic way of defining this behavior.
> 
> This is more acceptable. It avoids pretty much all of the rest of the
> issues.
> 
> >
> > > * If you did have a map flag what about merging between VMAs above the
> > >   limit and below it? To avoid that you'd need to implement some kind of a
> > >   'VMA flag that has an arbitrary characteristic' or a 'limit' field,
> > >   adjust all the 'can VMA merge' functions and write extensive testing and
> > >   none of that is frankly acceptable.
> > > * We have some 'weird' arches that might have problem with certain virtual
> > >   address ranges or require arbitrary mappings at a certain address range
> > >   that a limit might not be able to account for.
> > >
> > > I'm absolutely opposed to a new MAP_ flag for this, but even if you
> > > implemented that, it implies a lot of complexity.
> > >
> > > It implies even more complexity if you implement something per-process
> > > except if it were a fixed limit.
> > >
> > > And if you implement a fixed limit, it's hard to see that it'll be
> > > acceptable to everybody, and I suspect we'd still run into some possible
> > > weirdness.
> 
> > >
> > > So again, I'm struggling to see how this concept can be justified in any
> > > form.
> >
> > The piece I am missing here is that this idea is already being used by
> > x86 and arm64. They implicitly force all allocations to be below the
> > 47-bit boundary if the hint address is below 47 bits. This flag is much
> > less invasive because it is opt-in and will not impact any existing
> > code. I am not familiar enough with all of the interactions spread
> > throughout mm to know how these architectures have managed to ensure
> > that this 48-bit limit is enforced across things like mremap() as well.
> >
> 
> I just wrote a bunch above about this and did in the original email.
> 
> The 48-bit limit is much more workable and is across-the-board so it's easy
> to implement. It's the variable thing or map flag thing that's the problem.
> 
> > Are you against the idea that there should be a standard way for
> > applications to consistently obtain addresses that have free bits, or are
> > you just against this implementation? From your statement I assume you
> > mean that every architecture should continue to have varying behavior
> > and separate implementations for supporting larger address spaces.
> >
> 
> I'm against this implementation, or one with a variable limit.
> 
> An ADDR_LIMIT_48BIT personality I think is the most workable form of
> this. Others may have opinions on this also, but it avoids pretty much all
> of the aforementioned issues and is the least invasive.
> 
> > - Charlie
> >
> 
> Sorry to push back so much on your series, your efforts are appreciated, I
> am simply trying to head off issues in the future, especially those that end
> up exposed to userland! :)

No, I really appreciate it, thank you for your input! Having a variable
limit is not necessary. The motivation is to create a UABI that would stay
flexible into the future, for cases where an application may want more (or
fewer) bits reserved than ADDR_LIMIT_48BIT would provide. However, current
applications are happy to work with 48-bit address spaces, so having the
48-bit personality is sufficient.

There will still need to be some management in the userland software to
ensure that the personality has been set before any allocations, but
that is unavoidable with any solution.
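
As a rough sketch of that userland side (ADDR_LIMIT_48BIT is the personality
discussed above, not an existing constant, so this is illustrative only):

	#include <sys/personality.h>
	#include <sys/mman.h>

	int main(void)
	{
		/* Set the (proposed) personality before the runtime makes any
		 * allocations, mirroring how ADDR_LIMIT_32BIT is used today. */
		if (personality(PER_LINUX | ADDR_LIMIT_48BIT) == -1)
			return 1;

		/* All later mappings should now stay below 2^48. */
		void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		return p == MAP_FAILED;
	}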

- Charlie


