Message-Id: <201003111546.44059.sheng@linux.intel.com>
Date: Thu, 11 Mar 2010 15:46:43 +0800
From: Sheng Yang <sheng@...ux.intel.com>
To: Avi Kivity <avi@...hat.com>
Cc: Marcelo Tosatti <mtosatti@...hat.com>, Ingo Molnar <mingo@...e.hu>,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
"Zhang, Yanmin" <yanmin.zhang@...el.com>
Subject: Re: [PATCH] x86/kvm: Show guest system/user cputime in cpustat
On Thursday 11 March 2010 15:36:01 Avi Kivity wrote:
> On 03/11/2010 09:20 AM, Sheng Yang wrote:
> > Currently we can only get the cpu_stat of the whole guest as one
> > value. This patch enhances cpu_stat with more detail, adding
> > guest_system and guest_user cpu time statistics at a small overhead.
> >
> > Signed-off-by: Sheng Yang<sheng@...ux.intel.com>
> > ---
> >
> > This draft patch is based on KVM upstream to show the idea. I will split
> > it into a more kernel-friendly version later.
> >
> > The overhead is the cost of one get_cpl() call after each exit from the guest.
>
> This can be very expensive in the nested virtualization case, so I
> wouldn't like this to be in normal paths. I think detailed profiling
> like that can be left to 'perf kvm', which only has overhead if enabled
> at runtime.
Yes, that's my concern too (though nested vmcs/vmcb reads are already too
expensive; they should be optimized...). The other concern is that a perf-like
mechanism would bring a lot more overhead compared to this.
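
A minimal sketch of the idea under discussion, for reference (get_cpl() is
the real KVM hook of that era; the vcpu->stat field names here are
hypothetical, not the actual patch):

static void kvm_note_guest_cpl(struct kvm_vcpu *vcpu)
{
	/* One VMCS/VMCB read per exit -- the overhead in question, and
	 * much more expensive when the read must go through a nested
	 * hypervisor. */
	if (kvm_x86_ops->get_cpl(vcpu) == 0)
		vcpu->stat.guest_system_exits++;	/* hypothetical field */
	else
		vcpu->stat.guest_user_exits++;		/* hypothetical field */
}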
> For example you can put the code to note the cpl in a tracepoint which
> is enabled dynamically.
Yanmin has already implemented "perf kvm" to support this. We are just
discussing whether a normal top-like mechanism is necessary.
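
For reference, a sketch of what Avi's tracepoint suggestion could look like
in arch/x86/kvm/trace.h (the event name kvm_exit_cpl is hypothetical, and
the surrounding trace-header boilerplate is omitted):

TRACE_EVENT(kvm_exit_cpl,
	TP_PROTO(struct kvm_vcpu *vcpu),
	TP_ARGS(vcpu),

	TP_STRUCT__entry(
		__field(unsigned int, cpl)
	),

	TP_fast_assign(
		/* TP_fast_assign only runs when the event is enabled,
		 * so the VMCS/VMCB read in get_cpl() costs nothing on
		 * the normal exit path. */
		__entry->cpl = kvm_x86_ops->get_cpl(vcpu);
	),

	TP_printk("cpl %u", __entry->cpl)
);

perf or ftrace could then aggregate guest user vs. system time from this
event, paying the get_cpl() cost only while tracing is enabled.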
I am also considering making it a feature that can be disabled, but that
seems to complicate things and result in inconsistent cpustat output.
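
One possible shape of such a switch, assuming a hypothetical module
parameter named cpl_accounting and the kvm_note_guest_cpl() sketch above:

static bool cpl_accounting = true;
module_param(cpl_accounting, bool, 0644);

	/* in the vmexit path: */
	if (cpl_accounting)
		kvm_note_guest_cpl(vcpu);

The catch is exactly the inconsistency mentioned above: with the knob off,
the guest_system/guest_user columns in /proc/stat would silently stop being
filled in.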
--
regards
Yang, Sheng