Message-ID: <20110512142449.GI8707@8bytes.org>
Date: Thu, 12 May 2011 16:24:49 +0200
From: Joerg Roedel <joro@...tes.org>
To: Avi Kivity <avi@...hat.com>
Cc: Jan Kiszka <jan.kiszka@...mens.com>, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org, Ingo Molnar <mingo@...e.hu>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Arnaldo Carvalho de Melo <acme@...stprotocols.net>
Subject: Re: [PATCH v1 0/5] KVM in-guest performance monitoring
On Thu, May 12, 2011 at 04:31:38PM +0300, Avi Kivity wrote:
> - when the cpu gains support for virtualizing the architectural feature,
> we transparently speed the guest up, including support for live
> migrating from a deployment that emulates the feature to a deployment
> that properly virtualizes the feature, and back. Usually the
> virtualized support will beat the pants off any paravirtualization we can
> do
> - following an existing spec is a lot easier to get right than doing
> something from scratch
> - no need to meticulously document the feature
Needs to be done, but I don't think that's problematic.
> - easier testing
Testing shouldn't differ between the two variants, I think.
> - existing guest support - only need to write the host side (sometimes
> the only one available to us)
Otherwise I agree.
> Paravirtualizing does have its advantages. For the PMU, for example, we
> can have a single hypercall read and reprogram all counters, saving
> *many* exits. But I think we need to start from the architectural PMU
> and see exactly what the problems are, before we optimize it to death.
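For illustration, a batched interface along those lines could look
roughly like the sketch below. The hypercall number, structure layout
and helper name are all made up for the example; nothing like this
exists in the ABI today:

/* Hypothetical guest-side sketch of a batched PMU hypercall.
 * KVM_HC_PMU_BATCH and struct pmu_batch are invented here and are
 * not part of any existing KVM interface. */

#include <linux/types.h>
#include <linux/kvm_para.h>
#include <asm/page.h>

#define KVM_HC_PMU_BATCH	42	/* invented hypercall number */
#define PMU_MAX_COUNTERS	8

struct pmu_batch {
	u64 eventsel[PMU_MAX_COUNTERS];	/* new event selector values */
	u64 counter[PMU_MAX_COUNTERS];	/* old counts, filled in by the host */
	u64 reprogram_mask;		/* which counters to reprogram */
};

static long pmu_batch_switch(struct pmu_batch *b)
{
	/*
	 * One exit would replace the ~3*N rdmsr/wrmsr exits of a
	 * trapped architectural PMU: the host saves the old counter
	 * values into b->counter and programs the new event selectors
	 * from b->eventsel in a single trap.
	 */
	return kvm_hypercall1(KVM_HC_PMU_BATCH, __pa(b));
}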
The problem certainly is that with the architectural PMU we add a lot
of MSR exits to the guest's context-switch path if the guest uses
per-task profiling. Depending on the workload, this can significantly
distort the results.
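To make that concrete, this is roughly what the guest's perf code does
on every task switch when per-task events are active (a simplified
sketch, not the real perf_events code paths). With a trapped
architectural PMU, each of these MSR accesses is a full exit:

#include <linux/types.h>
#include <asm/msr.h>
#include <asm/perf_event.h>

/* Simplified sketch of the guest's per-task counter switch; with the
 * architectural PMU trapped, every rdmsrl/wrmsrl below is a VM exit. */
static void guest_pmu_task_switch(u64 *sel, u64 *cnt, int n)
{
	int i;

	wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);		/* exit: stop counters */
	for (i = 0; i < n; i++) {
		rdmsrl(MSR_ARCH_PERFMON_PERFCTR0 + i, cnt[i]);	/* exit: save old count */
		wrmsrl(MSR_ARCH_PERFMON_EVENTSEL0 + i, sel[i]);	/* exit: new event */
		wrmsrl(MSR_ARCH_PERFMON_PERFCTR0 + i, 0);	/* exit: reset count */
	}
	wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, (1ULL << n) - 1);	/* exit: re-enable */
}

That is 3*n + 2 exits per context switch; with a context-switch heavy
workload, the exit overhead alone can easily dominate what the counters
are supposed to measure.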
Joerg