Date:	Mon, 3 Oct 2011 17:41:13 +0200
From:	Gleb Natapov <gleb@...hat.com>
To:	Avi Kivity <avi@...hat.com>
Cc:	linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
	joerg.roedel@....com, mingo@...e.hu, a.p.zijlstra@...llo.nl
Subject: Re: [PATCH 6/9] perf, intel: Use GO/HO bits in perf-ctr

On Mon, Oct 03, 2011 at 05:06:27PM +0200, Avi Kivity wrote:
> On 10/03/2011 03:49 PM, Gleb Natapov wrote:
> >Intel does not have guest/host-only bit in perf counters like AMD
> >does.  To support GO/HO bits KVM needs to switch EVENTSELn values
> >(or PERF_GLOBAL_CTRL if available) at a guest entry. If a counter is
> >configured to count only in a guest mode it stays disabled in a host,
> >but VMX is configured to switch it to enabled value during guest entry.
> >
> >This patch adds GO/HO tracking to Intel perf code and provides interface
> >for KVM to get a list of MSRs that need to be switched on a guest entry.
> >
> >Only cpus with an architectural PMU (v1 or later) are supported by this
> >patch.  To my knowledge there are no P6 models with VMX but without an
> >architectural PMU, and P4s with VMX are rare; the interface is general
> >enough to support them if the need arises.
> >
> >+
> >+static int core_guest_get_msrs(int cnt, struct perf_guest_switch_msr *arr)
> >+{
> >+	struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
> >+	int idx;
> >+
> >+	if (cnt < x86_pmu.num_counters)
> >+		return -ENOMEM;
> >+
> >+	for (idx = 0; idx < x86_pmu.num_counters; idx++) {
> >+		struct perf_event *event = cpuc->events[idx];
> >+
> >+		arr[idx].msr = x86_pmu_config_addr(idx);
> >+		arr[idx].host = arr[idx].guest = 0;
> >+
> >+		if (!test_bit(idx, cpuc->active_mask))
> >+			continue;
> >+
> >+		arr[idx].host = arr[idx].guest =
> >+				event->hw.config | ARCH_PERFMON_EVENTSEL_ENABLE;
> >+
> >+		if (event->attr.exclude_host)
> >+			arr[idx].host &= ~ARCH_PERFMON_EVENTSEL_ENABLE;
> >+		else if (event->attr.exclude_guest)
> >+			arr[idx].guest &= ~ARCH_PERFMON_EVENTSEL_ENABLE;
> >+	}
> >+
> >+	return 0;
> >+}
> 
> Would be better to calculate these when the host msrs are
> calculated, instead of here, every vmentry.
>
For arch PMU v2 and greater it is precalculated. For v1 (which is almost
non-existent; even my oldest cpu with VMX has a v2 PMU) I am not sure it
will help, since we need to copy the information into the
perf_guest_switch_msr array here anyway.

--
			Gleb.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
