Message-ID: <48A08270.7080303@goop.org>
Date: Mon, 11 Aug 2008 11:18:24 -0700
From: Jeremy Fitzhardinge <jeremy@...p.org>
To: Arjan van de Ven <arjan@...radead.org>
CC: Nick Piggin <nickpiggin@...oo.com.au>,
Venki Pallipadi <venkatesh.pallipadi@...el.com>,
Jens Axboe <jens.axboe@...cle.com>,
Ingo Molnar <mingo@...e.hu>, npiggin@...e.de,
linux-kernel <linux-kernel@...r.kernel.org>,
suresh.b.siddha@...el.com
Subject: Re: [PATCH] stack and rcu interaction bug in smp_call_function_mask()

Arjan van de Ven wrote:
> On Sun, 10 Aug 2008 21:26:18 -0700
> Jeremy Fitzhardinge <jeremy@...p.org> wrote:
>
>
>> Nick Piggin wrote:
>>
>>> Nice debugging work.
>>>
>>> I'd suggest something like the attached (untested) patch as the
>>> simple fix for now.
>>>
>>> I expect the benefits from the less synchronized,
>>> multiple-in-flight-data global queue will still outweigh the costs
>>> of dynamic allocations. But if worst comes to worst then we just go
>>> back to a globally synchronous one-at-a-time implementation, but
>>> that would be pretty sad!
>>>
>> What if we went the other way and strictly used queue-per-cpu? It
>> means multicast would require multiple enqueueing operations, which
>> is a bit heavy, but it does make dequeuing and lifetime management
>> very simple...
>>
>
> as long as send-to-all is still one apic operation.. otherwise it gets
> *really* expensive....
> (just think about waking all cpus up out of their C-states.. one by one
> getting the full exit latency sequentially)
>
Well, send-to-all can be special-cased (it already is at the apic IPI
level, but we could have a broadcast queue as well).
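
Just to make the queue-per-cpu idea concrete, here's a userspace toy of
the sort of thing I mean -- none of these names (call_entry, cpu_queue,
queue_call_mask) exist anywhere, they're purely for illustration. Each
cpu owns its own locked queue; a multicast send does one allocation and
one enqueue per destination, and the receiving cpu frees whatever it
dequeues, so entry lifetime never crosses cpus:

/* Toy model of queue-per-cpu function calls (userspace sketch, not kernel code). */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define NR_CPUS 4

struct call_entry {			/* one entry per destination cpu */
	void (*func)(void *);
	void *info;
	struct call_entry *next;
};

struct cpu_queue {			/* each cpu owns exactly one of these */
	pthread_mutex_t lock;
	struct call_entry *head;
} queues[NR_CPUS];

/* Multicast: one allocation and one enqueue per destination cpu. */
static void queue_call_mask(unsigned long mask, void (*func)(void *), void *info)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		if (!(mask & (1UL << cpu)))
			continue;
		struct call_entry *e = malloc(sizeof(*e));
		e->func = func;
		e->info = info;
		pthread_mutex_lock(&queues[cpu].lock);
		e->next = queues[cpu].head;
		queues[cpu].head = e;
		pthread_mutex_unlock(&queues[cpu].lock);
		/* ...here we would send an IPI to 'cpu'... */
	}
}

/* Receiver side: dequeue, run, free -- each entry is owned by this cpu alone. */
static void run_pending_calls(int cpu)
{
	pthread_mutex_lock(&queues[cpu].lock);
	struct call_entry *list = queues[cpu].head;
	queues[cpu].head = NULL;
	pthread_mutex_unlock(&queues[cpu].lock);

	while (list) {
		struct call_entry *next = list->next;
		list->func(list->info);
		free(list);
		list = next;
	}
}

static void say_hi(void *info) { printf("cpu got call: %s\n", (char *)info); }

int main(void)
{
	for (int i = 0; i < NR_CPUS; i++)
		pthread_mutex_init(&queues[i].lock, NULL);
	queue_call_mask(0x6, say_hi, "hello");	/* cpus 1 and 2 */
	run_pending_calls(1);
	run_pending_calls(2);
	return 0;
}

The obvious downside is the O(ncpus) enqueue loop on the send side,
which is exactly the cost being discussed above.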
But I wonder how common an operation that really is. Most calls to
smp_call_function_mask are sending to mm->cpu_vm_mask. For a small
number of cores that could well amount to a broadcast, but as the core
count goes up, the likelihood that every cpu has been involved with a
given mm goes down (very workload dependent, of course).
It could be that if we're sending to more than some proportion of the
cpus, it would be more efficient to just broadcast, and let the cpus
work out whether they need to do anything or not. But that's more or
less the scheme we have now...
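
Concretely, the cutoff might look something like this (pure sketch --
the 50% threshold and the helper names are invented, and picking the
real crossover point would need measurement):

/* Sketch of the "more than some proportion -> broadcast" idea.
 * The 50% threshold and the names here are made up for illustration. */
#include <stdbool.h>
#include <stdio.h>

static bool should_broadcast(unsigned long dest_mask, unsigned int nr_online)
{
	unsigned int targets = (unsigned int)__builtin_popcountl(dest_mask);

	/* Past half the online cpus, one broadcast IPI is probably cheaper
	 * than poking each destination individually; the cpus that don't
	 * care just return from the handler. */
	return targets * 2 > nr_online;
}

int main(void)
{
	printf("mask 0x00f of 16 cpus: %s\n",
	       should_broadcast(0x00f, 16) ? "broadcast" : "targeted");
	printf("mask 0xfff of 16 cpus: %s\n",
	       should_broadcast(0xfff, 16) ? "broadcast" : "targeted");
	return 0;
}

Where the crossover actually sits would depend a lot on the apic and on
C-state exit latencies, per Arjan's point above.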
J