Message-ID: <20091102070720.GF29477@redhat.com>
Date: Mon, 2 Nov 2009 09:07:20 +0200
From: Gleb Natapov <gleb@...hat.com>
To: Rik van Riel <riel@...hat.com>
Cc: kvm@...r.kernel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 01/11] Add shared memory hypercall to PV Linux guest.
On Sun, Nov 01, 2009 at 11:27:15PM -0500, Rik van Riel wrote:
> On 11/01/2009 06:56 AM, Gleb Natapov wrote:
> >Add hypercall that allows guest and host to setup per cpu shared
> >memory.
>
> While it is pretty obvious that we should implement
> the asynchronous pagefaults for KVM, so a swap-in
> of a page the host swapped out does not stall the
> entire virtual CPU, I believe that adding extra
> data accesses at context switch time may not be
> the best tradeoff.
>
> It may be better to simply tell the guest what
> address is faulting (or give the guest some other
> random unique number as a token). Then, once the
> host brings that page into memory, we can send a
> signal to the guest with that same token.
>
> The problem of finding the task(s) associated with
> that token can be left to the guest. A little more
> complexity on the guest side, but probably worth it
> if we can avoid adding cost to the context switch
> path.
>
This is precisely what this series implements. The function below is a
leftover from a previous implementation; it is not used by the rest of
the patch and is removed by a later patch in the series. It simply
survived the rebase by mistake. Sorry about that; it will be fixed in
future submissions.
> >+static void kvm_end_context_switch(struct task_struct *next)
> >+{
> >+	struct kvm_vcpu_pv_shm *pv_shm =
> >+		per_cpu(kvm_vcpu_pv_shm, smp_processor_id());
> >+
> >+	if (!pv_shm)
> >+		return;
> >+
> >+	pv_shm->current_task = (u64)next;
> >+}
> >+
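For readers following along, here is a rough guest-side sketch of how such a
per-CPU shared area might be registered with the host via a hypercall. The
hypercall number, the structure layout, and the function names below are all
made up for illustration; they are not taken from this series.

/*
 * Illustrative only: register a per-CPU shared area with the host.
 * KVM_HC_PV_SHM, struct kvm_vcpu_pv_shm and kvm_register_pv_shm() are
 * hypothetical names, not part of this patch series.
 */
#include <linux/types.h>
#include <linux/percpu.h>
#include <linux/smp.h>
#include <asm/page.h>
#include <asm/kvm_para.h>

struct kvm_vcpu_pv_shm {
	u64 features;		/* negotiated host/guest features */
	u64 current_task;	/* guest task token, as in the hunk above */
};

static DEFINE_PER_CPU(struct kvm_vcpu_pv_shm, pv_shm_area);

#define KVM_HC_PV_SHM	42	/* made-up hypercall number */

/*
 * Runs on each CPU (e.g. via on_each_cpu()) so the host learns the
 * guest-physical address and size of that CPU's shared area.
 */
static void kvm_register_pv_shm(void *unused)
{
	struct kvm_vcpu_pv_shm *shm = &get_cpu_var(pv_shm_area);

	kvm_hypercall2(KVM_HC_PV_SHM, __pa(shm), sizeof(*shm));
	put_cpu_var(pv_shm_area);
}

With something along these lines, a call such as
on_each_cpu(kvm_register_pv_shm, NULL, 1) during guest boot would hand the
host one shared area per virtual CPU, which the host can then update behind
the guest's back.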
> --
> All rights reversed.
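To make the token scheme Rik describes a bit more concrete, here is a minimal
guest-side sketch of the bookkeeping it implies: the "page not present"
notification parks the faulting task under a token, and the later "page ready"
notification looks the token up and wakes the task. All names here
(apf_waiter, apf_token_wait, apf_token_wake) are invented for illustration and
are not what the series actually uses.

/*
 * Illustrative only: guest-side token -> task bookkeeping for async
 * page faults.  None of these names come from this patch series.
 */
#include <linux/types.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/sched.h>

struct apf_waiter {
	u64			token;	/* token the host handed us at fault time */
	struct task_struct	*task;	/* task that hit the async fault */
	bool			done;	/* set once the page is resident again */
	struct list_head	link;
};

static DEFINE_SPINLOCK(apf_lock);
static LIST_HEAD(apf_waiters);

/* "Page not present" path: remember the token and put the task to sleep. */
static void apf_token_wait(u64 token, struct apf_waiter *w)
{
	w->token = token;
	w->task  = current;
	w->done  = false;

	spin_lock(&apf_lock);
	list_add(&w->link, &apf_waiters);
	spin_unlock(&apf_lock);

	for (;;) {
		set_current_state(TASK_UNINTERRUPTIBLE);
		if (w->done)
			break;
		schedule();
	}
	__set_current_state(TASK_RUNNING);
}

/* "Page ready" path: find whoever sleeps on this token and wake it. */
static void apf_token_wake(u64 token)
{
	struct apf_waiter *w;

	spin_lock(&apf_lock);
	list_for_each_entry(w, &apf_waiters, link) {
		if (w->token == token) {
			list_del(&w->link);
			w->done = true;
			wake_up_process(w->task);
			break;
		}
	}
	spin_unlock(&apf_lock);
}

A real implementation would also have to cope with tokens that complete before
the guest ever sleeps and with faults taken in contexts that cannot sleep, but
the lookup-by-token idea is the same.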
--
Gleb.