Message-ID: <4DCBF0B7.7080102@redhat.com>
Date: Thu, 12 May 2011 17:37:43 +0300
From: Avi Kivity <avi@...hat.com>
To: Joerg Roedel <joro@...tes.org>
CC: Jan Kiszka <jan.kiszka@...mens.com>, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org, Ingo Molnar <mingo@...e.hu>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Arnaldo Carvalho de Melo <acme@...stprotocols.net>
Subject: Re: [PATCH v1 0/5] KVM in-guest performance monitoring
On 05/12/2011 05:24 PM, Joerg Roedel wrote:
> > Paravirtualizing does have its advantages. For the PMU, for example, we
> > can have a single hypercall read and reprogram all counters, saving
> > *many* exits. But I think we need to start from the architectural PMU
> > and see exactly what the problems are, before we optimize it to death.
>
> The problem certainly is that with the arch PMU we add a lot of MSR exits
> to the guest context-switch path if the guest uses per-task profiling.
> Depending on the workload, this can significantly distort the results.
Right. The combination of per-task profiling with high context switch
rates is problematic.
One thing we could do is paravirtualize at a lower level: introduce a
hypercall for batched MSR reads and writes. That way we keep the existing
PMU semantics and code and only optimize the context switch. This is
similar to what Xen did with lazy CPU state updates, and to what kvm did
for paravirt pagetable writes.
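To make that concrete, a rough guest-side sketch might look like the
following; KVM_HC_MSR_BATCH, struct msr_batch_entry and wrmsr_batch() are
made up purely for illustration, only kvm_hypercall2() is the existing
paravirt call interface:

  #include <linux/types.h>
  #include <linux/mm.h>            /* __pa() */
  #include <asm/kvm_para.h>        /* kvm_hypercall2() */

  /* One entry per MSR write; the whole array is handed to the host in a
   * single hypercall, so reprogramming all counters costs one exit
   * instead of one exit per wrmsr. */
  struct msr_batch_entry {
          u32 msr;                 /* MSR index (counter/eventsel) */
          u32 pad;
          u64 value;               /* value to write */
  };

  #define KVM_HC_MSR_BATCH 42      /* hypothetical hypercall number */

  static long wrmsr_batch(struct msr_batch_entry *entries, unsigned int count)
  {
          /* Pass the guest-physical address of the array and its length. */
          return kvm_hypercall2(KVM_HC_MSR_BATCH, __pa(entries), count);
  }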
I've considered something similar for mmio - use hypercalls for ordinary
mmio to avoid calling into the emulator - but virtio uses pio, which isn't
emulated, and we don't have massive consumers of mmio (except perhaps the
hpet).
(and we can have a cpuid bit that advertises whether we recommend using this
feature for PMU MSRs; if/when we get hardware support, we turn it off)
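For illustration, guest-side detection could be as simple as the sketch
below; KVM_FEATURE_PV_PMU_BATCH is a made-up bit, while the
KVM_CPUID_FEATURES leaf and kvm_para_has_feature() already exist:

  #include <asm/kvm_para.h>        /* kvm_para_has_feature() */

  /* Hypothetical feature bit in the KVM_CPUID_FEATURES leaf: the host sets
   * it while batching PMU MSR accesses via hypercall is the recommended
   * path, and clears it once hardware support makes that unnecessary. */
  #define KVM_FEATURE_PV_PMU_BATCH 12

  static bool use_pv_pmu_batching(void)
  {
          return kvm_para_has_feature(KVM_FEATURE_PV_PMU_BATCH);
  }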
--
error compiling committee.c: too many arguments to function