[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <50641D84.2020807@redhat.com>
Date: Thu, 27 Sep 2012 11:33:56 +0200
From: Avi Kivity <avi@...hat.com>
To: Gleb Natapov <gleb@...hat.com>
CC: Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>,
Peter Zijlstra <peterz@...radead.org>,
Rik van Riel <riel@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...hat.com>,
Marcelo Tosatti <mtosatti@...hat.com>,
Srikar <srikar@...ux.vnet.ibm.com>,
"Nikunj A. Dadhania" <nikunj@...ux.vnet.ibm.com>,
KVM <kvm@...r.kernel.org>, Jiannan Ouyang <ouyang@...pitt.edu>,
chegu vinod <chegu_vinod@...com>,
"Andrew M. Theurer" <habanero@...ux.vnet.ibm.com>,
LKML <linux-kernel@...r.kernel.org>,
Srivatsa Vaddagiri <srivatsa.vaddagiri@...il.com>
Subject: Re: [PATCH RFC 1/2] kvm: Handle undercommitted guest case in PLE
handler
On 09/27/2012 11:11 AM, Gleb Natapov wrote:
>>
>> User return notifier is per-cpu, not per-task. There is a new task_work
>> (<linux/task_work.h>) that does what you want. With these
>> technicalities out of the way, I think it's the wrong idea. If a vcpu
>> thread is in userspace, that doesn't mean it's preempted, there's no
>> point in boosting it if it's already running.
>>
> Ah, so you want to set bit in kvm->preempted_vcpus if task is _not_
> TASK_RUNNING in sched_out (you wrote opposite in your email)? If a task
> is in userspace it is definitely not preempted.
No, as I originally wrote. If it's TASK_RUNNING when it saw sched_out,
then it is preempted (i.e. runnable), not sleeping on some waitqueue,
voluntarily (HLT) or involuntarily (page fault).
>
>> btw, we can have secondary effects. A vcpu can be waiting for a lock in
>> the host kernel, or for a host page fault. There's no point in boosting
>> anything for that. Or a vcpu in userspace can be waiting for a lock
>> that is held by another thread, which has been preempted.
> Do you mean userspace spinlock? Because otherwise task that's waits on
> a kernel lock will sleep in the kernel.
I meant a kernel mutex.
vcpu 0: take guest spinlock
vcpu 0: vmexit
vcpu 0: spin_lock(some_lock)
vcpu 1: take same guest spinlock
vcpu 1: PLE vmexit
vcpu 1: wtf?
Waiting on a host kernel spinlock is not too bad because we expect to be
out shortly. Waiting on a host kernel mutex can be a lot worse.
--
error compiling committee.c: too many arguments to function
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Powered by blists - more mailing lists