Message-ID: <20100922204323.GG19804@ZenIV.linux.org.uk>
Date: Wed, 22 Sep 2010 21:43:24 +0100
From: Al Viro <viro@...IV.linux.org.uk>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: David Miller <davem@...emloft.net>, akpm@...ux-foundation.org,
sparclinux@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [GIT] Sparc
On Wed, Sep 22, 2010 at 08:53:49PM +0100, Al Viro wrote:
> On Wed, Sep 22, 2010 at 12:08:53PM -0700, Linus Torvalds wrote:
> > On Wed, Sep 22, 2010 at 11:53 AM, Al Viro <viro@...iv.linux.org.uk> wrote:
> > >
> > > Um, no.  You've *already* called get_signal_to_deliver().  There had been
> > > no SIGSEGV in sight.  You happily went on to set up a sigframe for e.g.
> > > SIGHUP, but ran out of stack.  At that point you get force_sigsegv()
> > > from handle_signal().  _NOW_ you have a pending SIGSEGV.
> >
> > Ahh. Ok. Different case from the one I thought you were worried about.
> > And yeah, I guess that one does require us to mess with the low-level
> > asm code (although I do wonder if we could not make the whole
> > do_notify_resume + reschedule code be generic C code - it's a lot of
> > duplicated subtle asm as it is).
>
> Worse than just that... Note that on sparc you need to deal with
> fault_in_user_windows(), which can also trigger signals.
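For reference, the path in question looks roughly like this - a simplified
sketch, not the real arch code; setup_rt_frame()'s exact signature differs
per architecture and do_signal_sketch() is just an illustrative name:

	static void do_signal_sketch(struct pt_regs *regs)
	{
		struct k_sigaction ka;
		siginfo_t info;
		int signr;

		/* pick the signal to deliver (e.g. SIGHUP); no SIGSEGV yet */
		signr = get_signal_to_deliver(&info, &ka, regs, NULL);
		if (signr <= 0)
			return;		/* nothing to deliver */

		/* try to build the sigframe on the user stack */
		if (setup_rt_frame(signr, &ka, &info, regs) < 0) {
			/*
			 * No room on the user stack: only _now_ does a
			 * SIGSEGV become pending, well after
			 * get_signal_to_deliver() has made its choice.
			 */
			force_sigsegv(signr, current);
		}
	}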
Actually, I wonder why we don't do the following (rough sketch below):
1) check wsaved first, do fault_in_user_windows() if needed (and probably do
Something Cruel(tm) if we fail copy_to_user() in there)
2) in a loop check if we need to reschedule / if we need to handle signals
3) don't bother with wsaved checks in setup_frame() variants at all -
wsaved can't grow back at that point; we can also use flush_user_windows()
instead of a full synchronize_user_stack() in there.
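Something like this, perhaps - a completely untested sketch, just to show
the ordering; the get_thread_wsaved() accessor and the do_notify_resume()
signature are written from memory and may not match the tree exactly:

	void do_notify_resume(struct pt_regs *regs, unsigned long orig_i0,
			      unsigned long thread_info_flags)
	{
		/* 1) spill saved user windows up front; this can itself
		 *    raise a signal if copy_to_user() fails in there */
		if (get_thread_wsaved())
			fault_in_user_windows();

		/* 2) loop until there is nothing left to do */
		do {
			if (test_thread_flag(TIF_NEED_RESCHED))
				schedule();
			if (test_thread_flag(TIF_SIGPENDING))
				do_signal(regs, orig_i0);
		} while (test_thread_flag(TIF_NEED_RESCHED) ||
			 test_thread_flag(TIF_SIGPENDING));

		/* 3) from here on wsaved can't grow back, so setup_frame()
		 *    and friends only need flush_user_windows(), not a
		 *    full synchronize_user_stack() */
	}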
It's definitely a separate patch, but it looks like it might be worth
doing... Comments?