Message-Id: <1277350560.2096.878.camel@ymzhang.sh.intel.com>
Date:	Thu, 24 Jun 2010 11:36:00 +0800
From:	"Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>
To:	Avi Kivity <avi@...hat.com>
Cc:	LKML <linux-kernel@...r.kernel.org>, kvm@...r.kernel.org,
	Ingo Molnar <mingo@...e.hu>,
	Frédéric Weisbecker <fweisbec@...il.com>,
	Arnaldo Carvalho de Melo <acme@...hat.com>,
	Cyrill Gorcunov <gorcunov@...il.com>,
	Lin Ming <ming.m.lin@...el.com>,
	Sheng Yang <sheng@...ux.intel.com>,
	Marcelo Tosatti <mtosatti@...hat.com>,
	Joerg Roedel <joro@...tes.org>,
	Jes Sorensen <Jes.Sorensen@...hat.com>,
	Gleb Natapov <gleb@...hat.com>,
	Zachary Amsden <zamsden@...hat.com>, zhiteng.huang@...el.com,
	tim.c.chen@...el.com, Alexander Graf <agraf@...e.de>,
	Carsten Otte <carsteno@...ibm.com>,
	"Zhang, Xiantao" <xiantao.zhang@...el.com>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>
Subject: Re: [PATCH V2 3/5] para virt interface of perf to support kvm guest
 os statistics collection in guest os

On Wed, 2010-06-23 at 08:51 +0300, Avi Kivity wrote:
> On 06/23/2010 06:12 AM, Zhang, Yanmin wrote:
> >>>
> >>> This design is to deal with a task context perf collection in guest os.
> >>> Scenario 1:
> >>> 1) guest os starts to collect statistics of process A on vcpu 0;
> >>> 2) process A is scheduled to vcpu 1. Then, the perf_event at the host side
> >>> needs to be moved to the vcpu 1 thread. With the per-KVM-instance design, we
> >>> needn't move host_perf_shadow among vcpus.
> >>>
> >>>        
> >> First, the guest already knows how to deal with per-cpu performance
> >> monitors, since that's how most (all) hardware works.  So we aren't
> >> making the guest more complex, and on the other hand we simplify the host.
> >>      
> > I agree that we need to keep things simple.
> >
> >    
> >> Second, if process A is migrated, and the guest uses per-process
> >> counters, the guest will need to stop/start the counter during the
> >> migration.  This will cause the host to migrate the counter,
> >>      
> > Agree. My patches do so.
> >
> > Question: where does the host migrate the counter to?
> > The perf event at the host side is bound to a specific vcpu thread.
> >    
> 
> If the perf event is bound to the vm, not a vcpu, then on guest process 
> migration you will have to disable it on one vcpu and enable it on the 
> other, no?
I think we are starting from different points. This patch implements a para virt
interface on top of the current perf implementation in the kernel.

Here is a diagram of the perf implementation layers. The picture below is not precise,
but it shows the layering. Ingo and Peter can correct me if something is wrong.

		-------------------------------------------------
		|  Perf Generic Layer                           |
		-------------------------------------------------
		|  PMU Abstraction Layer                        |
		|  (a couple of callbacks)                      |
		-------------------------------------------------
		|  x86_pmu                                      |
		|  (operate real PMU hardware)                  |
		-------------------------------------------------


The top layer is the perf generic layer. The 3rd layer is x86_pmu, which actually
manipulates the PMU hardware. In some cases the 1st layer calls the 3rd directly, for
example at event initialization and when enabling/disabling all events.
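
To make the 2nd layer concrete, below is a minimal sketch of the kind of callback
table the generic layer drives. The names are condensed from the kernel's struct pmu
of that era, so treat this as an illustration rather than the exact definition:

	struct perf_event;	/* opaque here; owned by the generic layer */

	/*
	 * Simplified "PMU Abstraction Layer": the generic layer drives
	 * any concrete PMU through a small table of callbacks.
	 */
	struct pmu_ops {
		int  (*enable)(struct perf_event *event);  /* put event on a counter */
		void (*disable)(struct perf_event *event); /* take it off again      */
		void (*read)(struct perf_event *event);    /* sync hw count to event */
	};

On bare metal, x86_pmu supplies these callbacks; the kvm_pmu idea is to supply
callbacks that hypercall to the host instead of touching MSRs, leaving the generic
layer above unchanged.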

My patch implements a kvm_pmu at the 2nd layer in the guest OS; its callbacks issue a
hypercall, which vmexits to the host. At the host side, the request then mostly goes
through all 3 layers until it reaches the real hardware.
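
As a rough sketch of the guest side (the hypercall number, sub-op, and argument ABI
below are made up for illustration; the real ones are defined by the patch series):

	#include <asm/kvm_para.h>	/* kvm_hypercall2() */

	struct perf_event;

	#define KVM_PERF_OP	100	/* hypothetical hypercall number */
	#define PV_PERF_ENABLE	1	/* hypothetical sub-operation    */

	static int kvm_pmu_enable(struct perf_event *event)
	{
		/*
		 * No MSR access in the guest: vmexit to the host, which
		 * owns the shadow perf_event bound to the current vcpu
		 * thread. A real implementation would pass a GPA or an
		 * event id rather than a raw guest pointer.
		 */
		return kvm_hypercall2(KVM_PERF_OP, PV_PERF_ENABLE,
				      (unsigned long)event);
	}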

Most of your comments disagree with the kvm_pmu design. Although you didn't say so
directly, I think you would prefer to implement the para virt interface at the 3rd
layer in the guest OS. That means the guest OS would maintain a mapping between guest
events and PMU counters, which is why you strongly prefer per-vcpu event management
and referring to events by idx. If we implement it at the 3rd layer (or something like
that, although you might say you don't like that layer...) in the guest, we need to
bypass the 1st and 2nd layers in the host kernel when processing guest OS events.
Eventually we would almost be adding a new layer under x86_pmu to arbitrate between
host perf PMU requests and KVM guest event requests.

My current patch arranges the call path to go through the whole perf stack at the host
side. The upper layer handles perf event scheduling on the PMU hardware; applications
don't know when their events will actually be scheduled onto the real hardware, and
they needn't know.
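
For illustration, the host side of that path can be sketched with the in-kernel
counter API. perf_event_create_kernel_counter() is the real entry point, but its
exact signature has varied across kernel versions, so take the call below as
indicative only:

	#include <linux/perf_event.h>
	#include <linux/sched.h>	/* current */

	/*
	 * On the hypercall, do NOT touch the PMU directly: create an
	 * ordinary kernel perf_event bound to the current vcpu thread
	 * and let the generic layer schedule it onto a hardware counter
	 * whenever it sees fit.
	 */
	static struct perf_event *pv_perf_enable_on_host(u32 type, u64 config)
	{
		struct perf_event_attr attr = {
			.type	= type,		/* e.g. PERF_TYPE_HARDWARE */
			.size	= sizeof(attr),
			.config	= config,	/* e.g. PERF_COUNT_HW_CPU_CYCLES */
		};

		/* cpu == -1: follow the vcpu thread, not one physical CPU */
		return perf_event_create_kernel_counter(&attr, -1, current->pid, NULL);
	}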


> 
> >>   so while we
> >> didn't move the counter to a different vcpu,
> >>      
> > Disagree here. If process A on vcpu 0 in the guest OS is migrated to vcpu 1,
> > the host has to move process A's perf_event to the vcpu 1 thread.
> >    
> 
> Sorry, I'm confused now (lost track of our example).  But whatever we 
> do, if a guest process is migrated, the host will have to migrate the 
> perf event, yes?
> 


