Message-ID: <20100624123804.GK578@basil.fritz.box>
Date: Thu, 24 Jun 2010 14:38:04 +0200
From: Andi Kleen <andi@...stfloor.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Andi Kleen <andi@...stfloor.org>,
Huang Ying <ying.huang@...el.com>, Ingo Molnar <mingo@...e.hu>,
"H. Peter Anvin" <hpa@zytor.com>, linux-kernel@...r.kernel.org
Subject: Re: [RFC][PATCH] irq_work
On Thu, Jun 24, 2010 at 02:18:07PM +0200, Peter Zijlstra wrote:
> On Thu, 2010-06-24 at 14:02 +0200, Andi Kleen wrote:
> > On Thu, Jun 24, 2010 at 01:57:29PM +0200, Peter Zijlstra wrote:
> > > On Thu, 2010-06-24 at 13:55 +0200, Andi Kleen wrote:
> > > > > but we don't have anything else that does that.
> > > >
> > > > Actually we do: audit in syscalls, and scheduling in interrupts and signals,
> > > > all work this way. Probably more; at some point adding more code to this
> > > > path was very popular.
> > >
> > > That's the return to user path, nothing to do with softirqs. Add a TIF
> > > flag and call your function there.
> >
> > It does that, but there are some cases where it's not enough.
>
> care to expand on that?
This is for error recovery in execution context.

TIF works for user space, but it's a bit ugly because it requires adding
more data to task_struct, since the CPU can change underneath us. A
sleepable softirq would have avoided that (not a show stopper, though).
The other case was recovering from a *_user() error in the kernel.
I originally had some fancy code for preemptive kernels that exploited
the fact that you can sleep there (unfortunately it doesn't work for
non-preemptive kernels, because we can't know whether locks are held,
and some *_user() callers are expected to never sleep).
But there were still ugly special cases for switching stacks,
which the sleepable softirqs could have avoided.

Anyway, the latter isn't fatal either, but it would have been
nice to solve.
-Andi
--
ak@...ux.intel.com -- Speaking for myself only.