Message-ID: <5433D429.3020804@imgtec.com>
Date: Tue, 7 Oct 2014 12:53:13 +0100
From: James Hogan <james.hogan@...tec.com>
To: David Daney <david.s.daney@...il.com>,
"Kevin D. Kissell" <kevink@...alogos.com>,
David Daney <ddaney.cavm@...il.com>,
<libc-alpha@...rceware.org>, Leonid <Leonid.Yegoshin@...tec.com>
CC: <linux-kernel@...r.kernel.org>, <linux-mips@...ux-mips.org>,
David Daney <david.daney@...ium.com>
Subject: Re: [PATCH resend] MIPS: Allow FPU emulator to use non-stack area.

On 07/10/14 05:32, David Daney wrote:
> If the kernel automatically allocated the emulation locations, what
> would happen if there were a signal that interrupted the emulation, and
> the signal handler did a longjump to somewhere else? How would we clean
> up the now unused emulation memory allocations?

AFAICT, Leonid's implementation also has this problem, even though it
keeps a separate per-thread stack of emuframes managed completely by the
kernel. Essentially it is userland, not the kernel, that manages the
stack, and userland can choose to skip over sigframes and emuframes with
siglongjmp without telling the kernel.
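
For illustration, here's a minimal userland sketch of that scenario (the
names and the comments about where an emuframe would sit are purely
illustrative, not taken from any of the patches):

/*
 * A signal handler siglongjmp()s back to main(), abandoning everything
 * on the stack between the sigsetjmp() point and the point where the
 * signal arrived.  If the kernel had pushed an emuframe into that
 * region of the user stack, it never sees it "complete"; userland just
 * reuses the memory.
 */
#include <setjmp.h>
#include <signal.h>
#include <stdio.h>

static sigjmp_buf env;

static void handler(int sig)
{
	(void)sig;
	/* Unwind straight back to main(); nothing between here and the
	 * sigsetjmp() point gets a chance to clean up. */
	siglongjmp(env, 1);
}

static void deep_work(void)
{
	/* Imagine the kernel placed an emuframe on the stack around here
	 * while emulating an instruction in a branch delay slot, and a
	 * signal arrives before the emulated instruction completes. */
	raise(SIGUSR1);
	puts("unreached: the handler jumped past us");
}

int main(void)
{
	signal(SIGUSR1, handler);
	if (sigsetjmp(env, 1) == 0)
		deep_work();
	else
		puts("back in main(); the skipped stack region is dead");
	return 0;
}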

Userland can even switch between contexts (which include the stack) with
setcontext (coroutines etc.), which breaks the assumption in Leonid's
patches that emuframes will be completed in the reverse order to that in
which they were started, again demonstrating that it is essentially
userland that manages the stack.
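
A minimal sketch of that kind of switching (again purely illustrative,
just plain makecontext/swapcontext, nothing from the patches):

/*
 * Each context runs on its own stack and control ping-pongs between
 * them, so anything pushed on one stack is not "popped" before work
 * starts and finishes on the other - there is no single LIFO order.
 */
#include <stdio.h>
#include <stdlib.h>
#include <ucontext.h>

#define STACK_SIZE (64 * 1024)

static ucontext_t main_ctx, co_ctx;

static void coroutine(void)
{
	puts("coroutine: running on its own stack");
	/* Yield back to main; this stack stays live but idle. */
	swapcontext(&co_ctx, &main_ctx);
	puts("coroutine: resumed, finishing");
}

int main(void)
{
	char *stack = malloc(STACK_SIZE);

	if (!stack)
		return 1;

	getcontext(&co_ctx);
	co_ctx.uc_stack.ss_sp = stack;
	co_ctx.uc_stack.ss_size = STACK_SIZE;
	co_ctx.uc_link = &main_ctx;
	makecontext(&co_ctx, coroutine, 0);

	puts("main: switching to the coroutine's stack");
	swapcontext(&main_ctx, &co_ctx);
	puts("main: back on the original stack, doing more work");
	swapcontext(&main_ctx, &co_ctx);
	puts("main: done");
	free(stack);
	return 0;
}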

I think any attempt by the kernel to keep track of user stacks (e.g. by
storing a stack pointer along with each emuframe so that unused
emuframes can be discarded later when the stack pointer goes high again)
will be foiled by setcontext.
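
To make that concrete, here's a toy model of such a heuristic (entirely
hypothetical, not from any posted patch), showing how a setcontext onto
a malloc()ed stack makes the address comparison meaningless:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct emuframe_slot {
	bool in_use;
	uintptr_t user_sp;	/* SP recorded when the frame was handed out */
};

/*
 * Discard frames the (assumed single, downward-growing) user stack has
 * already unwound past: a recorded SP below the current SP is treated
 * as "stale".
 */
static void reclaim_stale(struct emuframe_slot *s, int n, uintptr_t sp)
{
	int i;

	for (i = 0; i < n; i++)
		if (s[i].in_use && s[i].user_sp < sp)
			s[i].in_use = false;
}

int main(void)
{
	/* Frame recorded while running on the original stack... */
	struct emuframe_slot slots[1] = { { true, 0x7fff0000 } };

	/*
	 * ...then the thread setcontext()s onto a malloc()ed stack whose
	 * addresses happen to be higher: the still-live frame is wrongly
	 * discarded (the converse misfire is equally possible).
	 */
	reclaim_stale(slots, 1, 0x80001000);
	printf("frame in_use after reclaim: %d\n", slots[0].in_use);
	return 0;
}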

Hmm, I can't see a way forward that doesn't involve invasive userland
handling & ABI changes, other than giving up on non-executable stacks or
limiting the instructions permitted in delay slots to those Linux knows
how to emulate directly.

Cheers
James