Message-ID: <48927260.2060302@qualcomm.com>
Date: Thu, 31 Jul 2008 19:18:08 -0700
From: Max Krasnyansky <maxk@...lcomm.com>
To: Dmitry Adamushko <dmitry.adamushko@...il.com>
CC: Peter Oruba <peter.oruba@....com>, Ingo Molnar <mingo@...e.hu>,
Thomas Gleixner <tglx@...utronix.de>,
Tigran Aivazian <tigran@...azian.fsnet.co.uk>,
"H. Peter Anvin" <hpa@...or.com>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [patch 0/4] x86: AMD microcode patch loading v2 fixes
Dmitry Adamushko wrote:
> 2008/7/30 Max Krasnyansky <maxk@...lcomm.com>:
>> Dmitry Adamushko wrote:
>>> 2008/7/30 Peter Oruba <peter.oruba@....com>:
>>>>> [ ... ]
>>>> Since ucode updates may fix severe issues, they are supposed to happen as
>>>> early as possible. If you re-plug your CPU into your socket, your BIOS also
>>>> applies a ucode patch, but that won't necessarily be the latest and most
>>>> critical one.
>> Sure. The question is: wouldn't a workqueue be soon enough?
>> I'd say it would, given the non-deterministic CPU hotplug callback sequence.
>
> Max, cpu-hotplug callbacks might not have been the best choice in the
> first place. So a comparison with them is not that relevant :-)
The reason I thought it was relevant is "hey, it has worked before" :)
I mean, it looks like people were happy with updating microcode from the
hotplug callbacks. Also, the original interface was driven entirely by
userspace, which tells me that the timing of the microcode update was not
considered critical.
>>> Hum, let's say we don't do it from cpu-hotplug handlers [1] but from
>>> start_secondary() before calling cpu_idle()? [*]
>>>
>>> This way, we do it before any other task has a chance to run on a
>>> cpu, which is not the case with cpu-hotplug handlers
>>> (and we don't mess up with cpu-hotplug events :-)
>>>
>>> [ the drawback is that 'microcode' subsystem is not local to
>>> microcode.c anymore ]
>>>
>>> [1] if we need a sync. operation in cpu-hotplug handlers and an IPI is
>>> not ok (say, we need to run in a sleepable context) then perhaps it's
>>> workqueues + wait_on_cpu_work(). But then it's a bit later than it
>>> could have been with [*].
>> Why would an IPI not be ok? From looking at the code, all we have to do is
>> factor request_firmware() out of the update path. So we'd do
>> collect_cpu_info() in the IPI, then do request_firmware() in place, and then
>> do apply_microcode() in the IPI. I.e. the only thing that sleeps is
>> request_firmware().
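
(To make that concrete: the split I mean would look roughly like this. It's
only a sketch -- update_one_cpu() and the "microcode.bin" name are made up,
and collect_cpu_info()/apply_microcode() stand in for the per-CPU helpers
in microcode.c.)

static void collect_info_ipi(void *unused)
{
	/* runs on the target CPU in IPI context, interrupts disabled */
	collect_cpu_info(smp_processor_id());
}

static void apply_ucode_ipi(void *unused)
{
	apply_microcode(smp_processor_id());
}

static int update_one_cpu(int cpu, struct device *dev)
{
	const struct firmware *fw;
	int err;

	smp_call_function_single(cpu, collect_info_ipi, NULL, 1);

	/* request_firmware() may sleep, so it stays in process context */
	err = request_firmware(&fw, "microcode.bin", dev);
	if (err)
		return err;

	/* the ucode image would have to be stashed where
	 * apply_microcode() can find it before this point */
	smp_call_function_single(cpu, apply_ucode_ipi, NULL, 1);

	release_firmware(fw);
	return 0;
}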
>
> I think it's quite a complicated scheme. I still wonder whether e.g.
> start_secondary() -> cpu_idle() would be a better place, or whether we just
> move set_cpu(cpu, cpu_active_map) a bit :^)
Sure, I'm ok with start_secondary() or whatever; I was just saying that an
IPI would work, and yes, maybe it's a bit more complicated. The
start_secondary() variant would be something like the sketch below.
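
(Rough sketch, not a real patch: microcode_update_self() is a hypothetical
hook that microcode.c would have to export -- it would run the collect/apply
steps for the current CPU, which is exactly the "subsystem is not local to
microcode.c anymore" drawback you mentioned.)

/* arch/x86/kernel/smpboot.c */
static void __cpuinit start_secondary(void *unused)
{
	/* ... existing secondary CPU bring-up ... */

	/* apply microcode before any user task can run on this cpu */
	microcode_update_self();

	cpu_idle();
}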
BTW, I still think a workqueue would work just fine.
> But you know, at least short-term, it'd be nice if someone came up with
> any working solution. It's already -rc1 and this thing is
> still broken ;-)
Agreed. I was going to implement/test a workqueue-based solution, roughly
along the lines below, but did/do not have the spare cycles (24x7 in the
lab these days).
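
(Again just a sketch under some assumptions: it uses schedule_work_on()
from the recent workqueue changes, and microcode_update_cpu() is a
hypothetical helper doing collect_cpu_info() + request_firmware() +
apply_microcode() for the given cpu.)

struct ucode_work {
	struct work_struct work;
	int cpu;
};

static void ucode_work_fn(struct work_struct *work)
{
	struct ucode_work *uw = container_of(work, struct ucode_work, work);

	/* process context, bound to uw->cpu, so sleeping in
	 * request_firmware() is fine here */
	microcode_update_cpu(uw->cpu);
}

static void ucode_update_on(int cpu)
{
	struct ucode_work uw = { .cpu = cpu };

	INIT_WORK(&uw.work, ucode_work_fn);
	schedule_work_on(cpu, &uw.work);
	flush_work(&uw.work);	/* on-stack work, so wait for it */
}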
> btw., I've grepped for "set_cpus_allowed_ptr()" and the following
> scheme seems to be quite widespread (I didn't check all of them, so
> maybe some of them do call it from cpu-hotplug notifications, heh)
>
> cpus_allowed = current->cpus_allowed;
> set_cpus_allowed_ptr(current, cpus);
> // do_something
> set_cpus_allowed_ptr(current, &cpus_allowed);
>
> but indeed _not_ used safely. argh
Uh, that's not good. We need to fix all of that. I can think of a bunch of
interesting races, like a task being added to a cpuset while it is doing
that "something" above.
Max