Message-Id: <1185896412.12034.17.camel@twins>
Date: Tue, 31 Jul 2007 17:40:12 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Gregory Haskins <ghaskins@...ell.com>
Cc: Ingo Molnar <mingo@...e.hu>, linux-kernel@...r.kernel.org,
linux-rt-users@...r.kernel.org
Subject: Re: [PATCH 0/2][RFC] VFCIPI v3
On Tue, 2007-07-31 at 11:33 -0400, Gregory Haskins wrote:
> >>> On Tue, Jul 31, 2007 at 10:25 AM, in message <1185891957.12034.12.camel@...ns>,
> Peter Zijlstra <peterz@...radead.org> wrote:
> >(with mistakes).
>
> If there were any others beyond what you already mention here, please
> point them out so I don't "port" them over to the workqueues
> implementation ;)
The two that jumped out at me while skimming over the code were:
- use of raw_spinlock_t (while not always a bug, it does need _very_
  good justification; see the sketch below)
- not cpu hotplug safe
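
To illustrate the first point (the names here are made up, this is not
code from your patch): under -rt a plain spinlock_t turns into a
sleeping rtmutex, so the critical section stays preemptible, which is
almost always what you want; a raw_spinlock_t keeps truly spinning with
preemption off, hence the need for strong justification.

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(example_lock);	/* becomes a sleeping lock under -rt */

static void update_shared_state(void)
{
	unsigned long flags;

	spin_lock_irqsave(&example_lock, flags);
	/* short critical section; may be preempted on -rt, still serialized */
	spin_unlock_irqrestore(&example_lock, flags);
}
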
> > Your idea that GFP_ATOMIC is broken in -rt is unfortunate; it isn't. The
> > whole allocator is preemptible - that is a feature!
>
> Heh...that may be, and I can see how it is useful to have the
> allocator be generally preemptible. However, I would have thought
> that "GFP_ATOMIC" means I can call it from atomic code, which
> apparently is not the case. It's probably just my own ignorance again
> of what that flag should really mean ;)
>
> In any case, I abstracted the heap_allocator away so that I could make
> an in_atomic-friendly allocation. If the GFP_ATOMIC allocator ever
> supports in_atomic in the future, all we have to do is change
> vfcipi/heap.h. I am open to other suggestions for this problem, of
> course.
The thing is, we should never need to allocate from a real atomic
context; every time you end up in that situation, ask yourself whether
the problem can be solved differently. That is, it often points at a
design flaw.
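
As a (completely made up) sketch of the pattern I mean, nothing here is
from your patch: do the GFP_KERNEL allocation while you are still
preemptible, then take the lock only around the part that really has to
be atomic:

#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct item {
	struct list_head entry;
	/* payload ... */
};

static LIST_HEAD(item_list);
static DEFINE_SPINLOCK(item_lock);

static int queue_item(void)
{
	struct item *item;

	/* allocate while we may still sleep/be preempted */
	item = kmalloc(sizeof(*item), GFP_KERNEL);
	if (!item)
		return -ENOMEM;

	spin_lock(&item_lock);
	list_add_tail(&item->entry, &item_list);	/* no allocation in here */
	spin_unlock(&item_lock);

	return 0;
}

That way the atomic section never depends on the allocator at all.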
Peter