Message-ID: <ZT+V5VlXg/PsIfpM@arm.com>
Date:   Mon, 30 Oct 2023 11:39:17 +0000
From:   "Szabolcs.Nagy@....com" <Szabolcs.Nagy@....com>
To:     Deepak Gupta <debug@...osinc.com>
Cc:     Mark Brown <broonie@...nel.org>,
        "Edgecombe, Rick P" <rick.p.edgecombe@...el.com>,
        "dietmar.eggemann@....com" <dietmar.eggemann@....com>,
        "keescook@...omium.org" <keescook@...omium.org>,
        "brauner@...nel.org" <brauner@...nel.org>,
        "shuah@...nel.org" <shuah@...nel.org>,
        "mgorman@...e.de" <mgorman@...e.de>,
        "dave.hansen@...ux.intel.com" <dave.hansen@...ux.intel.com>,
        "fweimer@...hat.com" <fweimer@...hat.com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "vincent.guittot@...aro.org" <vincent.guittot@...aro.org>,
        "hjl.tools@...il.com" <hjl.tools@...il.com>,
        "rostedt@...dmis.org" <rostedt@...dmis.org>,
        "mingo@...hat.com" <mingo@...hat.com>,
        "tglx@...utronix.de" <tglx@...utronix.de>,
        "vschneid@...hat.com" <vschneid@...hat.com>,
        "catalin.marinas@....com" <catalin.marinas@....com>,
        "bristot@...hat.com" <bristot@...hat.com>,
        "will@...nel.org" <will@...nel.org>,
        "hpa@...or.com" <hpa@...or.com>,
        "peterz@...radead.org" <peterz@...radead.org>,
        "jannh@...gle.com" <jannh@...gle.com>,
        "bp@...en8.de" <bp@...en8.de>,
        "bsegall@...gle.com" <bsegall@...gle.com>,
        "linux-kselftest@...r.kernel.org" <linux-kselftest@...r.kernel.org>,
        "linux-api@...r.kernel.org" <linux-api@...r.kernel.org>,
        "x86@...nel.org" <x86@...nel.org>,
        "juri.lelli@...hat.com" <juri.lelli@...hat.com>, nd@....com
Subject: Re: [PATCH RFC RFT 2/5] fork: Add shadow stack support to clone3()

On 10/27/2023 16:24, Deepak Gupta wrote:
> On Fri, Oct 27, 2023 at 12:49:59PM +0100, Szabolcs.Nagy@....com wrote:
> > no. the lifetime is the issue: a stack in principle can outlive
> > a thread and be resumed even after the original thread has exited.
> > for that to work the shadow stack has to outlive the thread too.
> 
> I understand an application can pre-allocate a pool of stacks and re-use
> them whenever it spawns new threads using the clone3 system call.
> 
> However, once a new thread has been spawned, how can it resume?

a thread can getcontext then exit. later another thread
can setcontext and execute on the stack of the exited
thread and return to a previous stack frame there.

(unlikely to work on runtimes where tls or thread id is
exposed and thus may be cached on the stack. so not for
posix.. but e.g. a go runtime could do this)
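
a rough sketch of what i mean (glibc/linux specific, not valid posix
as noted above; all names and sizes here are made up for illustration):

#include <pthread.h>
#include <ucontext.h>
#include <sys/mman.h>
#include <stdio.h>
#include <stdlib.h>

#define STK_SZ (256 * 1024)

static ucontext_t saved;        /* frame captured on the worker's stack */
static volatile int captured;

static void *worker(void *arg)
{
        (void)arg;
        getcontext(&saved);             /* save registers + stack pointer */
        if (!captured) {
                captured = 1;
                pthread_exit(NULL);     /* worker exits, its stack stays mapped */
        }
        /* second pass: another thread jumped here via setcontext() and is
           now executing on the exited worker's stack.  a kernel-managed
           shadow stack that was freed at thread exit would no longer be
           around for the frames saved here. */
        printf("resumed a frame of the exited worker\n");
        exit(0);
}

int main(void)
{
        pthread_attr_t attr;
        pthread_t tid;
        void *stk = mmap(NULL, STK_SZ, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS | MAP_STACK, -1, 0);

        pthread_attr_init(&attr);
        pthread_attr_setstack(&attr, stk, STK_SZ);      /* caller-owned stack */
        pthread_create(&tid, &attr, worker, NULL);
        pthread_join(&tid, NULL);       /* worker is gone, its stack is not */

        setcontext(&saved);             /* resume on the exited worker's stack */
        return 1;                       /* not reached */
}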

> By resume I mean consuming the callstack context from an earlier thread.
> Or did you mean something else by `resume` here?
> 
> Can you give an example of such an application or runtime where a newly
> created thread consumes callstack context created by a thread that is
> going away?

my claim was not that existing runtimes are doing this,
but that the linux interface contract allows it, and
tying the stack lifetime to the thread is a change of
that contract.

> > (or the other way around: a stack can be freed before the thread
> > exits, if the thread pivots away from that stack.)
> 
> This is simply a thread saying that it is moving to a different stack.
> Again, I'm interested in learning why a thread would do that. If I have
> to speculate on reasons, I could think of a user runtime managing its
> own pool of worker items (some people call them green threads), or the
> current stack becoming too small.

switching stacks is common, freeing the original stack may not be,
but there is nothing that prevents this, and then the corresponding
shadow stack is clearly leaked if the kernel manages it. the amount
leaked is proportional to the number of live threads and the sum of
their original stack sizes, which can be big.
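
for illustration, a pivot-then-free sequence could look like the sketch
below (again glibc/linux specific and simplified; the top of the original
stack is only kept mapped because glibc stores the thread descriptor there
when a user stack is supplied; names and sizes are made up):

#include <pthread.h>
#include <ucontext.h>
#include <sys/mman.h>
#include <stdio.h>
#include <stdlib.h>

#define STK_SZ   (1024 * 1024)
#define KEEP_TOP (64 * 1024)    /* glibc keeps the thread descriptor at the
                                   top of a user-supplied stack, so keep it */

static char *orig_stack;        /* stack the thread was created on */
static ucontext_t pivot;

static void on_new_stack(void)
{
        /* the original stack is no longer in use: release (most of) it
           while the thread is still alive.  a kernel-managed shadow stack
           sized for the original stack would stay resident here, until
           the thread exits. */
        munmap(orig_stack, STK_SZ - KEEP_TOP);
        printf("pivoted and released the original stack\n");
        exit(0);                /* end the demo from the pivoted-to stack */
}

static void *worker(void *arg)
{
        static char new_stack[STK_SZ];  /* stack we pivot to */
        (void)arg;

        getcontext(&pivot);
        pivot.uc_stack.ss_sp = new_stack;
        pivot.uc_stack.ss_size = sizeof new_stack;
        pivot.uc_link = NULL;
        makecontext(&pivot, on_new_stack, 0);
        setcontext(&pivot);             /* pivot away from orig_stack */
        return NULL;                    /* not reached */
}

int main(void)
{
        pthread_attr_t attr;
        pthread_t tid;

        orig_stack = mmap(NULL, STK_SZ, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS | MAP_STACK, -1, 0);
        pthread_attr_init(&attr);
        pthread_attr_setstack(&attr, orig_stack, STK_SZ);
        pthread_create(&tid, &attr, worker, NULL);
        pthread_join(&tid, NULL);
        return 0;
}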

but as i said, i think this lifetime issue is minor compared
to the other shadow stack issues, so it is ok if the shadow
stack is kernel-managed.
