Date:	Sun, 05 Dec 2010 14:58:14 +0200
From:	Avi Kivity <avi@...hat.com>
To:	Chris Wright <chrisw@...s-sol.org>
CC:	Rik van Riel <riel@...hat.com>, kvm@...r.kernel.org,
	linux-kernel@...r.kernel.org,
	Srivatsa Vaddagiri <vatsa@...ux.vnet.ibm.com>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Ingo Molnar <mingo@...e.hu>,
	Anthony Liguori <aliguori@...ux.vnet.ibm.com>
Subject: Re: [RFC PATCH 3/3] kvm: use yield_to instead of sleep in kvm_vcpu_on_spin

On 12/03/2010 04:24 AM, Chris Wright wrote:
> * Rik van Riel (riel@...hat.com) wrote:
> >  --- a/virt/kvm/kvm_main.c
> >  +++ b/virt/kvm/kvm_main.c
> >  @@ -1880,18 +1880,53 @@ void kvm_resched(struct kvm_vcpu *vcpu)
> >   }
> >   EXPORT_SYMBOL_GPL(kvm_resched);
> >
> >  -void kvm_vcpu_on_spin(struct kvm_vcpu *vcpu)
> >  +void kvm_vcpu_on_spin(struct kvm_vcpu *me)
> >   {
> >  -	ktime_t expires;
> >  -	DEFINE_WAIT(wait);
> >  +	struct kvm *kvm = me->kvm;
> >  +	struct kvm_vcpu *vcpu;
> >  +	int last_boosted_vcpu = me->kvm->last_boosted_vcpu;
>
> s/me->//
>
> >  +	int first_round = 1;
> >  +	int i;
> >
> >  -	prepare_to_wait(&vcpu->wq, &wait, TASK_INTERRUPTIBLE);
> >  +	me->spinning = 1;
> >  +
> >  +	/*
> >  +	 * We boost the priority of a VCPU that is runnable but not
> >  +	 * currently running, because it got preempted by something
> >  +	 * else and called schedule in __vcpu_run.  Hopefully that
> >  +	 * VCPU is holding the lock that we need and will release it.
> >  +	 * We approximate round-robin by starting at the last boosted VCPU.
> >  +	 */
> >  + again:
> >  +	kvm_for_each_vcpu(i, vcpu, kvm) {
> >  +		struct task_struct *task = vcpu->task;
> >  +		if (first_round && i < last_boosted_vcpu) {
> >  +			i = last_boosted_vcpu;
> >  +			continue;
> >  +		} else if (!first_round && i > last_boosted_vcpu)
> >  +			break;
> >  +		if (vcpu == me)
> >  +			continue;
> >  +		if (vcpu->spinning)
> >  +			continue;
> >  +		if (!task)
> >  +			continue;
> >  +		if (waitqueue_active(&vcpu->wq))
> >  +			continue;
> >  +		if (task->flags & PF_VCPU)
> >  +			continue;
>
> I wonder, if you set vcpu->task in sched_out and then NULLed it in
> sched_in, whether you'd get what you want and could simplify the
> checks.  Basically that would leave only the preempted runnable vcpus.
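
A rough sketch of that idea, assuming KVM's existing preempt notifier
hooks in virt/kvm/kvm_main.c and the vcpu->task field added by the
patch (illustrative only, locking and ordering ignored):

static void kvm_sched_out(struct preempt_notifier *pn,
			  struct task_struct *next)
{
	struct kvm_vcpu *vcpu = preempt_notifier_to_vcpu(pn);

	/* Remember the thread that was just descheduled for this VCPU. */
	vcpu->task = current;
	kvm_arch_vcpu_put(vcpu);
}

static void kvm_sched_in(struct preempt_notifier *pn, int cpu)
{
	struct kvm_vcpu *vcpu = preempt_notifier_to_vcpu(pn);

	/* Running again, so this VCPU is no longer a boost candidate. */
	vcpu->task = NULL;
	kvm_arch_vcpu_load(vcpu, cpu);
}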

They may be sleeping due to some other reason (HLT, major page fault).

A better check is that the task is runnable but not running.  Can we get
this information from a task?
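
One possible shape for such a check, as a sketch only (it assumes
task->state and task_curr() from <linux/sched.h> are usable in this
context, and it ignores races with the scheduler):

#include <linux/sched.h>

/*
 * Sketch only: "runnable but not running", i.e. the task sits on a
 * runqueue waiting for CPU time.  Racy without further locking.
 */
static bool task_runnable_but_not_running(struct task_struct *task)
{
	/* TASK_RUNNING covers both running and waiting on a runqueue. */
	if (task->state != TASK_RUNNING)
		return false;	/* sleeping: HLT, major page fault, ... */

	/* task_curr() is true only while the task occupies a CPU. */
	return !task_curr(task);
}

TASK_RUNNING alone is not enough, since a task currently on a CPU is
also in that state; the task_curr() test is what filters out the
"already running" case.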

-- 
error compiling committee.c: too many arguments to function

