Message-Id: <8A3221EE-0D94-487E-B53D-885A555634BD@amacapital.net>
Date: Sun, 29 Jul 2018 11:55:30 -0700
From: Andy Lutomirski <luto@...capital.net>
To: Rik van Riel <riel@...riel.com>, torvalds@...ux-foundation.org
Cc: Paolo Bonzini <pbonzini@...hat.com>,
Andy Lutomirski <luto@...nel.org>,
LKML <linux-kernel@...r.kernel.org>,
kernel-team <kernel-team@...com>,
Peter Zijlstra <peterz@...radead.org>, X86 ML <x86@...nel.org>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Ingo Molnar <mingo@...nel.org>, Mike Galbraith <efault@....de>,
Dave Hansen <dave.hansen@...el.com>, will.deacon@....com,
Catalin Marinas <catalin.marinas@....com>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>
Subject: Re: [PATCH 03/10] smp,cpumask: introduce on_each_cpu_cond_mask
> On Jul 29, 2018, at 10:51 AM, Rik van Riel <riel@...riel.com> wrote:
>
>> On Sun, 2018-07-29 at 08:36 -0700, Andy Lutomirski wrote:
>>> On Jul 29, 2018, at 5:00 AM, Rik van Riel <riel@...riel.com> wrote:
>>>
>>>> On Sat, 2018-07-28 at 19:57 -0700, Andy Lutomirski wrote:
>>>> On Sat, Jul 28, 2018 at 2:53 PM, Rik van Riel <riel@...riel.com>
>>>> wrote:
>>>>> Introduce a variant of on_each_cpu_cond that iterates only over
>>>>> the
>>>>> CPUs in a cpumask, in order to avoid making callbacks for every
>>>>> single
>>>>> CPU in the system when we only need to test a subset.
>>>> Nice.
>>>> Although, if you want to be really fancy, you could optimize this
>>>> (or
>>>> add a variant) that does the callback on the local CPU in
>>>> parallel
>>>> with the remote ones. That would give a small boost to TLB
>>>> flushes.
>>>
>>> The test_func callbacks are not run remotely, but on
>>> the local CPU, before deciding who to send callbacks
>>> to.
>>>
>>> The actual IPIs are sent in parallel, if the cpumask
>>> allocation succeeds (it always should in many kernel
>>> configurations, and almost always in the rest).
>>>
>>
>> What I meant is that on_each_cpu_mask does:
>>
>>     smp_call_function_many(mask, func, info, wait);
>>     if (cpumask_test_cpu(cpu, mask)) {
>>         unsigned long flags;
>>
>>         local_irq_save(flags);
>>         func(info);
>>         local_irq_restore(flags);
>>     }
>>
>> So it IPIs all the remote CPUs in parallel, then waits, then does the
>> local work. In principle, the local flush could be done after
>> triggering the IPIs but before they all finish.
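
A hedged sketch of that reordering (the function name is illustrative, and it assumes a send/wait split that smp_call_function_many() with wait=true does not expose today, so the closest current approximation is a non-waiting send):

```c
/*
 * Sketch only: run the local callback while the remote IPIs are in
 * flight, instead of after they complete.
 */
static void on_each_cpu_mask_parallel(const struct cpumask *mask,
				      smp_call_func_t func, void *info)
{
	int cpu = get_cpu();

	/* Fire the remote IPIs without waiting for completion. */
	smp_call_function_many(mask, func, info, false);

	/* Do the local work while remote CPUs handle their IPIs. */
	if (cpumask_test_cpu(cpu, mask)) {
		unsigned long flags;

		local_irq_save(flags);
		func(info);
		local_irq_restore(flags);
	}
	/*
	 * NOTE: with wait == false there is no completion barrier here;
	 * callers needing wait semantics would still have to wait for
	 * the remote calls, which is the piece the current API lacks.
	 */
	put_cpu();
}
```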
>
> Grepping around the code, I found a few examples where the
> calling code appears to expect that smp_call_function_many
> also calls "func" on the local CPU.
>
> For example, kvm_emulate_wbinvd_noskip has this:
>
> 	if (kvm_x86_ops->has_wbinvd_exit()) {
> 		int cpu = get_cpu();
>
> 		cpumask_set_cpu(cpu, vcpu->arch.wbinvd_dirty_mask);
> 		smp_call_function_many(vcpu->arch.wbinvd_dirty_mask,
> 				       wbinvd_ipi, NULL, 1);
> 		put_cpu();
> 		cpumask_clear(vcpu->arch.wbinvd_dirty_mask);
> 	} else
> 		wbinvd();
>
> This seems to result in systems with ->has_wbinvd_exit
> only calling wbinvd_ipi on OTHER CPUs, and not on the
> CPU where the guest exited with wbinvd?
>
> This seems unintended.
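
If the intent really is to flush the local CPU as well, one possible fix (an untested sketch, since smp_call_function_many() never runs func on the calling CPU) would be an explicit local wbinvd:

```c
	if (kvm_x86_ops->has_wbinvd_exit()) {
		int cpu = get_cpu();

		cpumask_set_cpu(cpu, vcpu->arch.wbinvd_dirty_mask);
		/* smp_call_function_many() skips 'cpu' itself ... */
		smp_call_function_many(vcpu->arch.wbinvd_dirty_mask,
				       wbinvd_ipi, NULL, 1);
		/* ... so flush the local CPU explicitly. */
		wbinvd();
		put_cpu();
		cpumask_clear(vcpu->arch.wbinvd_dirty_mask);
	} else
		wbinvd();
```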
>
> I guess looking into on_each_cpu_mask might be a little
> higher priority than waiting until the next Outreachy
> season :)
>
The right approach might be a tree-wide rename from smp_call_... to on_other_cpus_mask() or similar. The current naming and semantics are extremely confusing.
Linus, this is the kind of thing you seem to like taking outside the merge window. What do you think about a straight-up search-and-replace to rename the smp_call_... functions to exactly match the corresponding on_each_cpu functions, except with “each” replaced with “other”?
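
A minimal sketch of what the renamed entry point might look like (the name is purely illustrative; the semantics are identical to today's smp_call_function_many()):

```c
/* Same "remote CPUs only" semantics, with a name that says so. */
static inline void on_other_cpus_mask(const struct cpumask *mask,
				      smp_call_func_t func, void *info,
				      bool wait)
{
	smp_call_function_many(mask, func, info, wait);
}
```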