Message-ID: <20140610192207.GC8178@tassilo.jf.intel.com>
Date:	Tue, 10 Jun 2014 12:22:07 -0700
From:	Andi Kleen <ak@...ux.intel.com>
To:	Marcelo Tosatti <mtosatti@...hat.com>
Cc:	Andi Kleen <andi@...stfloor.org>, peterz@...radead.org,
	gleb@...nel.org, pbonzini@...hat.com, eranian@...gle.com,
	kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 4/4] kvm: Implement PEBS virtualization

On Tue, Jun 10, 2014 at 03:04:48PM -0300, Marcelo Tosatti wrote:
> On Thu, May 29, 2014 at 06:12:07PM -0700, Andi Kleen wrote:
> >  {
> >  	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> > @@ -407,6 +551,20 @@ int kvm_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> >  			return 0;
> >  		}
> >  		break;
> > +	case MSR_IA32_DS_AREA:
> > +		pmu->ds_area = data;
> > +		return 0;
> > +	case MSR_IA32_PEBS_ENABLE:
> > +		if (data & ~0xf0000000fULL)
> > +			break;
> 
> Bit 63 == PS_ENABLE ?

PEBS_EN is [3:0] for each counter, but only one bit on Silvermont.
LL_EN is [36:32], but currently unused.
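For reference, a small user-space sketch (hypothetical helper names, not the patch's actual kernel code) of how the 0xf0000000fULL mask in the hunk above decomposes: the low nibble is the per-counter PEBS_EN field, the nibble at bit 32 covers the (unused) LL_EN range, and a guest write setting any other bit is rejected:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Bits accepted by the MSR_IA32_PEBS_ENABLE check in the patch:
 * PEBS_EN in [3:0] (one enable bit per counter; only bit 0 is
 * meaningful on Silvermont) and the LL_EN range at bit 32.
 * Together these form the 0xf0000000fULL mask. */
#define PEBS_EN_MASK 0x0000000fULL  /* bits [3:0]  */
#define LL_EN_MASK   0xf00000000ULL /* bits at 32  */

/* Hypothetical stand-in for the guest-write validity test. */
static bool pebs_enable_valid(uint64_t data)
{
        return (data & ~(PEBS_EN_MASK | LL_EN_MASK)) == 0;
}
```

So a write of 0x1 (enable PEBS on counter 0) passes, while anything touching bit 63, or any other reserved bit, falls through to the `break` and is refused.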

> 
> >  void kvm_handle_pmu_event(struct kvm_vcpu *vcpu)
> > diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> > index 33e8c02..4f39917 100644
> > --- a/arch/x86/kvm/vmx.c
> > +++ b/arch/x86/kvm/vmx.c
> > @@ -7288,6 +7288,12 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
> >  	atomic_switch_perf_msrs(vmx);
> >  	debugctlmsr = get_debugctlmsr();
> >  
> > +	/* Move this somewhere else? */
> 
> Unless you hook into vcpu->arch.pmu.ds_area and perf_get_ds_area()
> writers, it has to be at every vcpu entry.
> 
> Could compare values in MSR save area to avoid switch.

Ok.

> 
> > +	if (vcpu->arch.pmu.ds_area)
> > +		add_atomic_switch_msr(vmx, MSR_IA32_DS_AREA,
> > +				      vcpu->arch.pmu.ds_area,
> > +				      perf_get_ds_area());
> 
> Should clear_atomic_switch_msr before 
> add_atomic_switch_msr.

Ok.
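To illustrate the clear-before-add semantics Marcelo is asking for, here is a minimal user-space model (simplified types and names, not the real vmx.c helpers) of an atomic-switch MSR list where adding an entry first removes any stale entry for the same MSR, so the list never accumulates duplicates across vcpu entries:

```c
#include <assert.h>
#include <stdint.h>

#define NR_AUTOLOAD_MSRS 8

/* Simplified stand-in for the vmx autoload list: each slot pairs an
 * MSR index with the guest and host values switched on entry/exit. */
struct switch_msr {
        uint32_t msr;
        uint64_t guest_val;
        uint64_t host_val;
};

struct switch_list {
        int nr;
        struct switch_msr entries[NR_AUTOLOAD_MSRS];
};

/* Drop any existing entry for 'msr' (models clear_atomic_switch_msr). */
static void clear_switch_msr(struct switch_list *l, uint32_t msr)
{
        for (int i = 0; i < l->nr; i++) {
                if (l->entries[i].msr == msr) {
                        /* Replace with the last entry and shrink. */
                        l->entries[i] = l->entries[--l->nr];
                        return;
                }
        }
}

/* Add an entry, clearing any stale one first so the list stays unique
 * (models add_atomic_switch_msr preceded by clear_atomic_switch_msr). */
static void add_switch_msr(struct switch_list *l, uint32_t msr,
                           uint64_t guest_val, uint64_t host_val)
{
        clear_switch_msr(l, msr);
        if (l->nr < NR_AUTOLOAD_MSRS) {
                l->entries[l->nr].msr = msr;
                l->entries[l->nr].guest_val = guest_val;
                l->entries[l->nr].host_val = host_val;
                l->nr++;
        }
}
```

Since the list is rebuilt this way on every vcpu entry, the value comparison suggested above (skipping the add when guest and host values already match what is in the save area) would slot naturally into add_switch_msr.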

BTW how about general PMU migration? As far as I can tell there 
is no code to save/restore the state for that currently, right?

-Andi

-- 
ak@...ux.intel.com -- Speaking for myself only
