Message-ID: <487C0A76.8060401@qualcomm.com>
Date: Mon, 14 Jul 2008 19:24:54 -0700
From: Max Krasnyansky <maxk@...lcomm.com>
To: Heiko Carstens <heiko.carstens@...ibm.com>
CC: Jeremy Fitzhardinge <jeremy@...p.org>,
Rusty Russell <rusty@...tcorp.com.au>,
Christian Borntraeger <borntraeger@...ibm.com>,
Hidetoshi Seto <seto.hidetoshi@...fujitsu.com>,
linux-kernel@...r.kernel.org,
virtualization@...ts.linux-foundation.org,
Zachary Amsden <zach@...are.com>
Subject: Re: [PATCH] stopmachine: add stopmachine_timeout
Heiko Carstens wrote:
> On Mon, Jul 14, 2008 at 11:56:18AM -0700, Jeremy Fitzhardinge wrote:
>> Rusty Russell wrote:
>>> On Monday 14 July 2008 21:51:25 Christian Borntraeger wrote:
>>>> On Monday, 14 July 2008, Hidetoshi Seto wrote:
>>>>
>>>>> + /* Wait all others come to life */
>>>>> + while (cpus_weight(prepared_cpus) != num_online_cpus() - 1) {
>>>>> + if (time_is_before_jiffies(limit))
>>>>> + goto timeout;
>>>>> + cpu_relax();
>>>>> + }
>>>>> +
>>>>>
>>>> Hmm. I think this could become interesting on virtual machines. The
>>>> hypervisor might be too busy to schedule a specific cpu under certain load
>>>> scenarios. This would cause a failure even if the cpu is not really locked
>>>> up. We had similar problems with the soft lockup daemon on s390.
>>> 5 seconds is a fairly long time. If all else fails we could have a config
>>> option to simply disable this code.
>
> Hmm.. probably a stupid question: but what could cause a real cpu (not a
> virtual one) to become so unresponsive that it won't schedule a MAX_RT_PRIO-1
> prioritized task for 5 seconds?
I have a workload where a MAX_PRIO RT thread runs and never yields. That's what
my cpu isolation patches/tree address. Stopmachine is the only thing (that I know
of) that really breaks in that case. Btw, in case you're wondering, yes,
we've discussed workqueue thread starvation and related issues in the other threads.
So yes, it can happen.
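To make it concrete, here's a rough, untested userspace sketch of that kind of
workload (the cpu number argument is just for illustration): pin a SCHED_FIFO
thread at max priority to one cpu and spin forever; every per-cpu kthread on
that cpu, the stopmachine thread included, starves:

	/* Untested sketch: hog one cpu with a max-priority FIFO spinner. */
	#define _GNU_SOURCE
	#include <sched.h>
	#include <stdio.h>
	#include <stdlib.h>

	int main(int argc, char **argv)
	{
		struct sched_param sp = {
			.sched_priority = sched_get_priority_max(SCHED_FIFO)
		};
		cpu_set_t mask;

		CPU_ZERO(&mask);
		CPU_SET(argc > 1 ? atoi(argv[1]) : 1, &mask); /* cpu to hog */

		if (sched_setaffinity(0, sizeof(mask), &mask) ||
		    sched_setscheduler(0, SCHED_FIFO, &sp)) {
			perror("setup");
			return 1;
		}

		for (;;)
			; /* never yields, never blocks */
	}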
>>>> It would be good to not-use wall-clock time, but really used cpu time
>>>> instead. Unfortunately I have no idea, if that is possible in a generic
>>>> way. Heiko, any ideas?
>>> Ah, cpu time comes up again. Perhaps we should actually dig that up again;
>>> Zach and Jeremy CC'd.
>> Hm, yeah. But in this case, it's tricky. CPU time is an inherently
>> per-cpu quantity. If cpu A is waiting for cpu B, and wants to do the
>> timeout in cpu-seconds, then it has to be in *B*'s cpu-seconds (and if A
>> is waiting on B,C,D,E,F... it needs to measure separate timeouts with
>> separate timebases for each other CPU). It also means that if B is
>> unresponsive but also not consuming any time (blocked in IO,
>> administratively paused, etc), then the timeout will never trigger.
>>
>> So I think monotonic wallclock time actually makes the most sense here.
>
> This is asking for trouble... a config option to disable this would be
> nice. But as I don't know which problem this patch originally addresses,
> it might be that this is needed anyway. So let's see why we need it first.
How about this: we'll make it a sysctl, as Rusty already did, and set the
default to 0, which means "never time out". That way crazy people like me who
care about this scenario can enable the feature.
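Roughly like this (untested, on top of Rusty's sysctl; the variable name is
just what I'd pick), reusing the wait loop from Hidetoshi's hunk above:

	/* sysctl knob; 0 = never time out (current behaviour) */
	unsigned long stopmachine_timeout;	/* seconds */

	unsigned long limit = jiffies + stopmachine_timeout * HZ;

	/* Wait for all others to come to life */
	while (cpus_weight(prepared_cpus) != num_online_cpus() - 1) {
		if (stopmachine_timeout && time_is_before_jiffies(limit))
			goto timeout;
		cpu_relax();
	}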
Btw Rusty, I just had one of those "why didn't I think of that" moments. This is
actually another way of handling my workload. I mean, it certainly does not fix
the root cause of the problems, and we still need the other things we talked
about (non-blocking module delete, lock-free module insertion, etc), but at
least in the meantime it avoids wedging the machines for good.
Btw, I'd like that timeout in milliseconds. I think 5 seconds is way tooooo
long :).
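Something like this (again just a sketch, the _ms name is made up):

	/* sysctl value in milliseconds instead of seconds */
	unsigned long limit = jiffies + msecs_to_jiffies(stopmachine_timeout_ms);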
Max