Message-ID: <ZTxGovqKdhA5hYMz@debug.ba.rivosinc.com>
Date:   Fri, 27 Oct 2023 16:24:18 -0700
From:   Deepak Gupta <debug@...osinc.com>
To:     "Szabolcs.Nagy@....com" <Szabolcs.Nagy@....com>
Cc:     Mark Brown <broonie@...nel.org>,
        "Edgecombe, Rick P" <rick.p.edgecombe@...el.com>,
        "dietmar.eggemann@....com" <dietmar.eggemann@....com>,
        "keescook@...omium.org" <keescook@...omium.org>,
        "brauner@...nel.org" <brauner@...nel.org>,
        "shuah@...nel.org" <shuah@...nel.org>,
        "mgorman@...e.de" <mgorman@...e.de>,
        "dave.hansen@...ux.intel.com" <dave.hansen@...ux.intel.com>,
        "fweimer@...hat.com" <fweimer@...hat.com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "vincent.guittot@...aro.org" <vincent.guittot@...aro.org>,
        "hjl.tools@...il.com" <hjl.tools@...il.com>,
        "rostedt@...dmis.org" <rostedt@...dmis.org>,
        "mingo@...hat.com" <mingo@...hat.com>,
        "tglx@...utronix.de" <tglx@...utronix.de>,
        "vschneid@...hat.com" <vschneid@...hat.com>,
        "catalin.marinas@....com" <catalin.marinas@....com>,
        "bristot@...hat.com" <bristot@...hat.com>,
        "will@...nel.org" <will@...nel.org>,
        "hpa@...or.com" <hpa@...or.com>,
        "peterz@...radead.org" <peterz@...radead.org>,
        "jannh@...gle.com" <jannh@...gle.com>,
        "bp@...en8.de" <bp@...en8.de>,
        "bsegall@...gle.com" <bsegall@...gle.com>,
        "linux-kselftest@...r.kernel.org" <linux-kselftest@...r.kernel.org>,
        "linux-api@...r.kernel.org" <linux-api@...r.kernel.org>,
        "x86@...nel.org" <x86@...nel.org>,
        "juri.lelli@...hat.com" <juri.lelli@...hat.com>
Subject: Re: [PATCH RFC RFT 2/5] fork: Add shadow stack support to clone3()

On Fri, Oct 27, 2023 at 12:49:59PM +0100, Szabolcs.Nagy@....com wrote:
>The 10/26/2023 13:40, Deepak Gupta wrote:
>> On Thu, Oct 26, 2023 at 06:53:37PM +0100, Mark Brown wrote:
>> > I'm not sure placement control is essential, but the other bit of it is
>> > the freeing of the shadow stack; especially if userspace is doing stack
>> > switches, the current behaviour where we free the stack when the thread
>> > is exiting doesn't feel great exactly.  It's mainly an issue for
>> > programs that pivot stacks, which isn't the common case, but it is a
>> > general sharp edge.
>>
>> In general, I am assuming such placement requirements emanate because
>> the regular stack holds data (local args, etc.) as well, and thus software
>> may make assumptions about how the stack frame is prepared and may worry
>> about layout and such. In case of shadow stack, it can only hold return
>
>no. the lifetime is the issue: a stack in principle can outlive
>a thread and be resumed even after the original thread has exited.
>for that to work the shadow stack has to outlive the thread too.
>

I understand that an application can pre-allocate a pool of stacks and
re-use them whenever it spawns new threads using the clone3 system call.
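
For concreteness, here is a minimal sketch (mine, not from the patch) of
what spawning on a caller-supplied stack with clone3 looks like; the
flags, size, and thread-entry handling are illustrative and error
handling is elided:

    #define _GNU_SOURCE
    #include <linux/sched.h>    /* struct clone_args, CLONE_* flags */
    #include <linux/types.h>    /* __u64 */
    #include <stdint.h>
    #include <string.h>
    #include <sys/syscall.h>    /* SYS_clone3 */
    #include <unistd.h>

    #define POOL_STACK_SIZE (8UL * 1024 * 1024)

    /* 'stack' is the lowest address of a region taken from the
     * application's pre-allocated pool; there is no glibc wrapper for
     * clone3, so the raw syscall is used. */
    static long spawn_on_pooled_stack(void *stack)
    {
            struct clone_args args;

            memset(&args, 0, sizeof(args));
            args.flags = CLONE_VM | CLONE_FS | CLONE_FILES |
                         CLONE_SIGHAND | CLONE_THREAD;
            args.exit_signal = 0;  /* must be 0 with CLONE_THREAD */
            args.stack       = (__u64)(uintptr_t)stack;
            args.stack_size  = POOL_STACK_SIZE;

            /* Returns the child's tid in the parent and 0 in the child.
             * The child comes back here already running on 'stack' and
             * must branch straight to its thread function instead of
             * returning through the parent's frames. */
            return syscall(SYS_clone3, &args, sizeof(args));
    }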

However, once a new thread has been spawned, how can it resume? By
resume I mean consume the call-stack context of an earlier thread. Or
did you mean something else by `resume` here?

Can you give an example of such an application or runtime where a newly
created thread consumes call-stack context created by a departing thread?

>(or the other way around: a stack can be freed before the thread
>exits, if the thread pivots away from that stack.)

This is simply a thread saying that it is moving to a different stack.
Again, I am interested in learning why a thread would do that. If I had
to speculate on reasons, I could think of a user runtime managing its
own pool of worker items (some people call them green threads), or the
current stack becoming too small.

JIT runtimes (and similar mechanisms such as goroutines) do such
things, but in those cases the kernel has no idea about it. From the
kernel's perspective there is a main thread stack (the hosting thread
for the JIT), and the main thread can decide to switch stacks to
execute JITted code. But in that case all it needs is a shadow stack,
and managing the lifetime of such a shadow stack via `clone` wouldn't
be helpful; perhaps `map_shadow_stack` should be used to create an
on-the-fly shadow stack.
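
A sketch of that path, with the caveat that `map_shadow_stack` and the
SHADOW_STACK_SET_TOKEN flag below are taken from the x86 support merged
in 6.6 (and need recent kernel headers); the RISC-V flag set may differ:

    #include <asm/mman.h>       /* SHADOW_STACK_SET_TOKEN (x86 uapi) */
    #include <sys/syscall.h>    /* __NR_map_shadow_stack */
    #include <unistd.h>

    /* Ask the kernel for an on-the-fly shadow stack. The flag places a
     * restore token at the top so software can later pivot onto it.
     * Returns the base address, or NULL on failure. */
    static void *alloc_shadow_stack(unsigned long size)
    {
            long ret = syscall(__NR_map_shadow_stack,
                               0UL /* let the kernel pick the address */,
                               size, SHADOW_STACK_SET_TOKEN);

            return ret == -1 ? NULL : (void *)ret;
    }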

Another case I can think of for a thread moving to a different stack
is when the current stack is too small and it wants more memory. In
such a case as well, I imagine that the thread would issue `mmap` to
allocate a larger stack, and thus that same thread can very well issue
`map_shadow_stack` too, as in the sketch above.

In both of these cases, freeing a stack actually means the thread
(application) issuing a system call to free the outgoing stack memory.
It can free the outgoing shadow stack memory in the same way, using
`unmap_shadow_stack`.
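
To illustrate, assuming the merged x86 semantics, where a shadow-stack
mapping can also be released with a plain munmap(), the teardown is
symmetric with the allocation:

    #include <stddef.h>
    #include <sys/mman.h>

    /* After pivoting away, the thread releases both mappings itself;
     * neither lifetime is tied to thread exit. */
    static int retire_old_stacks(void *stack, size_t stack_size,
                                 void *shadow_stack, size_t shadow_size)
    {
            if (munmap(stack, stack_size) != 0)
                    return -1;
            return munmap(shadow_stack, shadow_size);
    }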

Let me know if I misunderstood something or am missing some other use
case of a stack being freed before the thread exits.

>
>posix threads etc. don't allow this, but the linux syscall abi
>(clone) does allow it.
>
>i think it is reasonable to tie the shadow stack lifetime to the
>thread lifetime, but this clearly introduces a limitation on how
>the clone api can be used. such a constraint on the userspace
>programming model is normally a bad decision, but given that most
>software (including all posix conforming code) is not affected,
>i think it is acceptable for an opt-in feature like shadow stack.
