Date:	Wed, 4 Jul 2012 23:09:10 -0300
From:	Marcelo Tosatti <mtosatti@...hat.com>
To:	Nikunj A Dadhania <nikunj@...ux.vnet.ibm.com>
Cc:	peterz@...radead.org, mingo@...e.hu, avi@...hat.com,
	raghukt@...ux.vnet.ibm.com, kvm@...r.kernel.org,
	linux-kernel@...r.kernel.org, x86@...nel.org, jeremy@...p.org,
	vatsa@...ux.vnet.ibm.com, hpa@...or.com
Subject: Re: [PATCH v2 3/7] KVM: Add paravirt kvm_flush_tlb_others

On Tue, Jul 03, 2012 at 01:49:49PM +0530, Nikunj A Dadhania wrote:
> On Tue, 3 Jul 2012 04:55:35 -0300, Marcelo Tosatti <mtosatti@...hat.com> wrote:
> > > 
> > >    if (!zero_mask)
> > > 	   goto again;
> > 
> > Can you please measure the increased vmentry/vmexit overhead?
> > x86/vmexit.c of git://git.kernel.org/pub/scm/virt/kvm/kvm-unit-tests.git
> > should help.
> >
> Sure, will get back with the results.
> 
> > > +	/* 
> > > +	 * Guest might have seen us offline and would have set
> > > +	 * flush_on_enter. 
> > > +	 */
> > > +	kvm_read_guest_cached(vcpu->kvm, ghc, vs, 2*sizeof(__u32));
> > > +	if (vs->flush_on_enter) 
> > > +		kvm_x86_ops->tlb_flush(vcpu);
> > 
> > 
> > So flush_tlb_page, which was an invlpg, now flushes the entire TLB. Did
> > you take that into account?
> > 
> When the vcpu is sleeping/pre-empted out, multiple flush_tlb requests
> could have accumulated. So by the time we get here, flushing the entire
> TLB cleans them all up at once.

Yes, in cases where exits are frequent enough, transforming one TLB-entry
invalidation into a full TLB invalidation should go unnoticed.
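
To make the trade-off concrete, a minimal sketch of the guest-side
decision, assuming the per-cpu vcpu_state of your patch (the
pv_flush_tlb_page()/send_invlpg_ipi() names and the struct type are
illustrative, not from the patch):

	/* Sketch only: single-page flush vs. deferred full flush. */
	static void pv_flush_tlb_page(int cpu, unsigned long addr)
	{
		struct kvm_vcpu_state *v_state = &per_cpu(vcpu_state, cpu);

		if (ACCESS_ONCE(v_state->state)) {
			/* Target is running: invalidate one entry. */
			send_invlpg_ipi(cpu, addr);
		} else {
			/*
			 * Target is pre-empted: the address is dropped,
			 * and any number of pending single-page flushes
			 * collapse into one full flush on the next
			 * vmentry.
			 */
			v_state->flush_on_enter = 1;
		}
	}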

> One other approach would be to queue the addresses, which brings us to
> the question: how many requests do we queue? It would also require
> adding more synchronization between guest and host for updating the
> shared area where these addresses live.

Sounds unnecessarily complicated.
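
Just to make that cost concrete, a sketch of what a queued variant might
require (the ring layout, its size and all names here are assumptions,
not something from the patch):

	/* Hypothetical per-vcpu ring shared between guest and host. */
	#define PV_FLUSH_RING	16

	struct pv_flush_ring {
		__u32 head;		/* advanced by the guest */
		__u32 tail;		/* advanced by the host on vmentry */
		__u64 addr[PV_FLUSH_RING];
	};

	/* Guest side: returns false when full, forcing a full flush. */
	static bool pv_queue_flush(struct pv_flush_ring *ring,
				   unsigned long addr)
	{
		__u32 head = ring->head;

		if (head - ACCESS_ONCE(ring->tail) >= PV_FLUSH_RING)
			return false;
		ring->addr[head % PV_FLUSH_RING] = addr;
		smp_wmb();	/* publish addr before the head bump */
		ring->head = head + 1;
		return true;
	}

Every producer/consumer pair like this adds ordering rules on both
sides, which is the complication above.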

> > > +again:
> > > +		for_each_cpu(cpu, to_cpumask(f->flush_cpumask)) {
> > > +			v_state = &per_cpu(vcpu_state, cpu);
> > > +
> > > +			if (!v_state->state) {
> > 
> > Should use ACCESS_ONCE to make sure the value is not register cached.
> > 
> > > +				v_state->flush_on_enter = 1;
> > > +				smp_mb();
> > > +				if (!v_state->state)
> > 
> > And here.
> > 
> Sure, will add ACCESS_ONCE for both reads in my next version.
> 
> > > +					cpumask_clear_cpu(cpu, to_cpumask(f->flush_cpumask));
> > > +			}
> > > +		}
> > > +
> > > +		if (cpumask_empty(to_cpumask(f->flush_cpumask)))
> > > +			goto out;
> > > +
> > > +		apic->send_IPI_mask(to_cpumask(f->flush_cpumask),
> > > +				    INVALIDATE_TLB_VECTOR_START + sender);
> > > +
> > > +		loop = 1000;
> > > +		while (!cpumask_empty(to_cpumask(f->flush_cpumask)) && --loop)
> > > +			cpu_relax();
> > > +
> > > +		if (!cpumask_empty(to_cpumask(f->flush_cpumask)))
> > > +			goto again;
> > 
> > Is this necessary in addition to the in-guest-mode/out-guest-mode
> > detection? If so, why?
> > 
> Yes, for the "case 3": we initially saw the vcpu running, and a flush
> ipi was sent to it. In the meantime the vcpu might have been
> pre-empted, so we come out of the loop=1000 with a non-empty flushmask.
> We then re-verify the flushmask against the currently running vcpus,
> make sure any vcpu that was pre-empted is un-marked, and can proceed
> out of kvm_flush_tlb_others_ipi without waiting for sleeping/pre-empted
> vcpus.
> 
> Regards
> Nikunj
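
For reference, the loop with the ACCESS_ONCE annotations agreed above
would look something like this (a sketch of the agreed change, not the
actual next version):

	for_each_cpu(cpu, to_cpumask(f->flush_cpumask)) {
		v_state = &per_cpu(vcpu_state, cpu);

		/* Fresh read each pass; ->state changes under us. */
		if (!ACCESS_ONCE(v_state->state)) {
			v_state->flush_on_enter = 1;
			smp_mb();
			/* Re-check after the barrier. */
			if (!ACCESS_ONCE(v_state->state))
				cpumask_clear_cpu(cpu,
					to_cpumask(f->flush_cpumask));
		}
	}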
