Message-ID: <BANLkTi=npNjb06_yaYezUb28pKC7dnmGjA@mail.gmail.com>
Date: Tue, 5 Apr 2011 15:35:40 +0545
From: Ben Nagy <ben@...u.net>
To: Peter Zijlstra <a.p.zijlstra@...llo.nl>
Cc: Avi Kivity <avi@...hat.com>, Eric Dumazet <eric.dumazet@...il.com>,
Thomas Gleixner <tglx@...utronix.de>,
KVM list <kvm@...r.kernel.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
John Stultz <johnstul@...ibm.com>,
Richard Cochran <richard.cochran@...cron.at>,
Mike Galbraith <efault@....de>
Subject: Re: [PATCH] posix-timers: RCU conversion
On Tue, Apr 5, 2011 at 2:48 PM, Peter Zijlstra <a.p.zijlstra@...llo.nl> wrote:
> On Tue, 2011-04-05 at 11:56 +0300, Avi Kivity wrote:
>>
>> Could be waking up due to guest wakeups, or qemu internal wakeups
>> (display refresh) or due to guest timer sources which are masked away in
>> the guest (if that's the case we should optimize it away).
>
> Right, so I guess we're all clutching at straws here :-)
>
> Ben, how usable is that system when it's in that state? Could you run a
> function trace, or a trace with all kvm and sched trace-events enabled?
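[For reference, a capture along the lines Peter asks for can be done with trace-cmd; the function plugin plus the kvm and sched event subsystems are what the thread names, but the 10-second window and output filename below are my assumptions. The script only prints the recipe, so it can be reviewed before being run as root on the host.]

```shell
# Sketch of the requested trace: function tracer plus all kvm and
# sched trace-events. The 10s window and filenames are assumptions.
# Printed rather than executed so the commands can be inspected
# before running them as root on the loaded host.
recipe='trace-cmd record -p function -e kvm -e sched -o kvm-sched.dat sleep 10
trace-cmd report -i kvm-sched.dat | head -100'
printf '%s\n' "$recipe"
```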
I'm just rebuilding the storage on the network to work around an ocfs2
kernel oops (trying nfs/rdma), so I can't test anything at the
moment.
I ran some tests under load with the local ext4 SSD and, weirdly,
everything looked to be just fine - the huge bulk of the system time
was in svm_vcpu_run, which is as it should be, I guess, but that was
with only 60 loaded guests.
I'll be able to repeat the same workload test tomorrow, and I'll see
how the perf top output looks. I should also be able to repeat the '96
idle guests' test and see if it behaves the same - if so, we'll do that
tracing. My kernel's a moving target at the moment, sorry; we're
tracking the natty git (with Eric's RCU patch merged in).
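[The "where does system time go" check Ben describes can be reproduced non-interactively with perf record/report rather than perf top; the system-wide scope, call-graph option, and 10-second window below are my assumptions, not details from the thread. Printed as a recipe for the same reason as above.]

```shell
# Sketch: a repeatable version of the perf top check, confirming
# whether system time concentrates in svm_vcpu_run. Scope (-a),
# call graphs (-g), and the 10s window are assumptions.
recipe='perf record -a -g -- sleep 10
perf report --sort symbol | head -20'
printf '%s\n' "$recipe"
```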
Thanks, all,
ben
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/