[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-Id: <46AF2756.BA47.005A.0@novell.com>
Date: Tue, 31 Jul 2007 12:14:45 -0400
From: "Gregory Haskins" <ghaskins@...ell.com>
To: "Peter Zijlstra" <peterz@...radead.org>
Cc: "Ingo Molnar" <mingo@...e.hu>, <linux-kernel@...r.kernel.org>,
<linux-rt-users@...r.kernel.org>
Subject: Re: [PATCH 0/2][RFC] VFCIPI v3
>>> On Tue, Jul 31, 2007 at 11:40 AM, in message <1185896412.12034.17.camel@...ns>,
Peter Zijlstra <peterz@...radead.org> wrote:
> On Tue, 2007-07-31 at 11:33 -0400, Gregory Haskins wrote:
>>
>> If there was anything more than what you already mention here, please
>> point them out so I don't "port" them over to the workqueues
>> implemenation ;)
>
> The two that jumped out at me while skimming over the code were:
>
> - use of raw_spinlock_t (while not always a bug, it does need _very_
> good justification)
Ah, yes. This was done on purpose based on the design parameters. I needed the calls to work from atomic context (which smp_call_function() often is called under). However, note that I am sensitive to the impact this decision causes and you will see that the lock scopes are all very tight and light (at least, IMHO).
> - not cpu hotplug safe
Yeah, agreed. I was aware of this potential race against the "for_each_online_cpu()". However, I am not knowledgeable enough (yet) to have even attempted a cursory stab at how to support the notification mechanism, so I just left the gaping hole. It will definitely need to be addressed before any serious merge consideration for whatever final form this thing takes. I should have commented that.
>
> The thing is, we should never need to allocate from a real atomic
> context
I agree with you on principle. Making unnecessary external calls within a lock scope should always be avoided when possible. However, in this case I had the following design parameters:
1) "heap" allocations over something like static/stack allocations allowed a higher degree of parallelization while supporting asynchronous calls, which was an existing feature of smp_call().
2) The context in which the function could be invoked is beyond my control, and is often atomic.
> every time you end up in that situation, ask yourself if the
> problem can be solved differently.
Ah, but it was. I wrote my own cheeseball heap manager ;) And it was nicely abstracted and conditionally compiled in case GFP_ATOMIC (or equivalent) ever popped up on the radar.
The fact is, when deciding between finer grained parallelism and managing a simple heap myself....the heap code really isn't rocket science ;)
Out of curiosity and for my own edification: What *is* GFP_ATOMIC meant for?
Regards,
-Greg
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Powered by blists - more mailing lists