Message-ID: <CA+55aFzrKQm1vxLgnxvHtmmmVQ2L7E15bVkCRsr1FZ-jrdfnXg@mail.gmail.com>
Date: Thu, 13 Aug 2015 17:08:21 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Stas Sergeev <stsp@...t.ru>
Cc: Andy Lutomirski <luto@...capital.net>,
Raymond Jennings <shentino@...il.com>,
Cyrill Gorcunov <gorcunov@...il.com>,
Pavel Emelyanov <xemul@...allels.com>,
Linux kernel <linux-kernel@...r.kernel.org>
Subject: Re: [regression] x86/signal/64: Fix SS handling for signals delivered
to 64-bit programs breaks dosemu
On Thu, Aug 13, 2015 at 5:00 PM, Stas Sergeev <stsp@...t.ru> wrote:
> 14.08.2015 02:00, Andy Lutomirski wrote:
>>
>> DOSEMU, when you set that flag, WRFSBASE gets enabled, and glibc's
>> threading library starts using WRFSBASE instead of arch_prctl.
>
> Hmm, how about the following:
>
> prctl(ARCH_SET_SIGNAL_FS, my_tls)
> If my_tls==NULL - use current fsbase (including one set via WRFSBASE).
> If my_tls==(void *)-1 - don't restore.
>
> Can this work?
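
For concreteness, the knob proposed above would be used roughly like
this - ARCH_SET_SIGNAL_FS is only the name suggested in the quote (no
such option exists in the kernel), and my_tls stands for whatever fs
value the program wants in effect for its signal handlers:

    /* Hypothetical usage of the proposal above; a sketch, not a real API. */
    prctl(ARCH_SET_SIGNAL_FS, my_tls);      /* fs to use for signal delivery           */
    prctl(ARCH_SET_SIGNAL_FS, NULL);        /* use the current fsbase (incl. WRFSBASE) */
    prctl(ARCH_SET_SIGNAL_FS, (void *)-1);  /* don't restore fs at all                 */
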
I'm really inclined to wonder whether we need the change and such a flag at all.

Basically, no _normal_ application will ever play with segments at all
on x86-64. So our current behavior of not touching any segments at all
for signal handling would seem to be the right thing to do - because
it handles all the sane cases optimally.

And applications that *do* play with segments very much know they do
so, and we already put the onus on *them* to save/restore segments.
That's how dosemu clearly works today.
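
As a minimal sketch of that "onus on the application" approach - this
is not dosemu's actual code, just an illustration using the existing
arch_prctl() interface, with a made-up handler name - a handler that
plays with %fs can save and restore it entirely on its own:

    #include <unistd.h>
    #include <sys/syscall.h>
    #include <asm/prctl.h>          /* ARCH_GET_FS / ARCH_SET_FS */

    static void segv_handler(int sig)
    {
            unsigned short saved_fs;
            unsigned long saved_fsbase;

            (void)sig;

            /* Snapshot the %fs selector and the 64-bit fs base on entry. */
            asm volatile("mov %%fs, %0" : "=r"(saved_fs));
            syscall(SYS_arch_prctl, ARCH_GET_FS, &saved_fsbase);

            /* ... handler work that may clobber %fs goes here ... */

            /* Put the selector back first (reloading %fs also reloads its
               base from the descriptor), then restore the saved 64-bit base. */
            asm volatile("mov %0, %%fs" : : "r"(saved_fs));
            syscall(SYS_arch_prctl, ARCH_SET_FS, saved_fsbase);
    }

The kernel never has to know or care; the application that modified the
segments is the one that puts them back.
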
So why not just keep to that policy? It has worked fairly well so
far. Only when we tried to change that policy did we hit these
problems, because existing applications obviously already live with
what we do (or rather, what we _don't_ do) right now...
Linus