[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <5390636D.2090809@suse.de>
Date: Thu, 05 Jun 2014 14:32:45 +0200
From: Alexander Graf <agraf@...e.de>
To: Benjamin Herrenschmidt <benh@...nel.crashing.org>
CC: Alexey Kardashevskiy <aik@...abs.ru>,
linuxppc-dev@...ts.ozlabs.org, Paul Mackerras <paulus@...ba.org>,
Gleb Natapov <gleb@...nel.org>,
Paolo Bonzini <pbonzini@...hat.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, kvm-ppc@...r.kernel.org
Subject: Re: [PATCH 3/3] PPC: KVM: Add support for 64bit TCE windows
On 05.06.14 14:30, Benjamin Herrenschmidt wrote:
> On Thu, 2014-06-05 at 13:56 +0200, Alexander Graf wrote:
>> What if we ask user space to give us a pointer to user space allocated
>> memory along with the TCE registration? We would still ask user space to
>> only use the returned fd for TCE modifications, but would have some
>> nicely swappable memory we can store the TCE entries in.
> That isn't going to work terribly well for VFIO :-) But yes, for
> emulated devices, we could improve things a bit, including for
> the 32-bit TCE tables.
>
> For emulated, the real mode path could walk the page tables and fallback
> to virtual mode & get_user if the page isn't present, thus operating
> directly on qemu memory TCE tables instead of the current pinned stuff.
>
> However that has a cost in performance, but since that's really only
> used for emulated devices and PAPR VIOs, it might not be a huge issue.
>
> But for VFIO we don't have much choice, we need to create something the
> HW can access.
But we need to create separate tables for VFIO anyways, because these
TCE tables contain virtual addresses, no?
Alex
>
>> In fact, the code as is today can allocate an arbitrary amount of pinned
>> kernel memory from within user space without any checks.
> Right. We should at least account it in the locked limit.
>
> Cheers,
> Ben.
>
>
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Powered by blists - more mailing lists