Message-ID: <4ACB46AD.8010405@gmail.com>
Date: Tue, 06 Oct 2009 09:31:25 -0400
From: Gregory Haskins <gregory.haskins@...il.com>
To: Avi Kivity <avi@...hat.com>
CC: Gregory Haskins <ghaskins@...ell.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org,
"alacrityvm-devel@...ts.sourceforge.net"
<alacrityvm-devel@...ts.sourceforge.net>,
David Howells <dhowells@...hat.com>
Subject: Re: [PATCH v2 2/4] KVM: introduce "xinterface" API for external interaction
with guests
Avi Kivity wrote:
> On 10/06/2009 01:57 AM, Gregory Haskins wrote:
>> Avi Kivity wrote:
>>
>>> On 10/02/2009 10:19 PM, Gregory Haskins wrote:
>>>
>>>> What: xinterface is a mechanism that allows kernel modules external to
>>>> the kvm.ko proper to interface with a running guest. It accomplishes
>>>> this by creating an abstracted interface which does not expose any
>>>> private details of the guest or its related KVM structures, and
>>>> provides
>>>> a mechanism to find and bind to this interface at run-time.
>>>>
>>>>
>>> If this is needed, it should be done as a virt_address_space to which
>>> kvm and other modules bind, instead of as something that kvm exports and
>>> other modules import. The virt_address_space can be identified by an fd
>>> and passed around to kvm and other modules.
>>>
>> IIUC, what you are proposing is something similar to generalizing the
>> vbus::memctx object. I had considered doing something like that in the
>> early design phase of vbus, but then decided it would be a hard-sell to
>> the mm crowd, and difficult to generalize.
>>
>> What do you propose as the interface to program the object?
>>
>
> Something like the current kvm interfaces, de-warted. It will be a hard
> sell indeed, for good reasons.
I am not convinced that generalizing this is the best step forward at
this point, since we only have a KVM client so far. Let me put some more
thought into it.
>
>>> So, under my suggestion above, you'd call
>>> sys_create_virt_address_space(), populate it, and pass the result to kvm
>>> and to foo. This allows the use of virt_address_space without kvm and
>>> doesn't require foo to interact with kvm.
>>>
>> The problem I see here is that the only way I can think to implement
>> this generally is something that looks very kvm-esque (slots-to-pages
>> kind of translation). Is there a way you can think of that does not
>> involve a kvm.ko originated vtable that is also not kvm centric?
>>
>
> slots would be one implementation, if you can think of others then you'd
> add them.
I'm more interested in *how* you'd add them than in *if* we would add
them. What I am getting at are the logistics of such a beast.
For instance, would I have /dev/slots-vas with ioctls for adding slots,
and /dev/foo-vas for adding foos? And would each one instantiate a
different vas_struct object with its own vas_struct->ops? Or were you
thinking of something different?
>
> If you can't, I think it indicates that the whole thing isn't necessary
> and we're better off with slots and virtual memory.
I'm not sure if we are talking about the same thing yet, but if we are,
there are uses of a generalized interface outside of slots/virtual
memory (Ira's physical box being a good example).
In any case, I think the best approach is what I already proposed:
KVM's arrangement of memory is going to tend to be KVM specific, and
what better place to implement the interface than close to the kvm.ko
core?
> The only thing missing is dma, which you don't deal with anyway.
>
AFAICT, I do support DMA in the generalized vbus::memctx, though I do
not use it for anything related to KVM or xinterface. Can you elaborate
on the problem here? Does the SG interface in 4/4 help get us closer to
what you envision?
>>>> +struct kvm_xinterface_ops {
>>>> + unsigned long (*copy_to)(struct kvm_xinterface *intf,
>>>> + unsigned long gpa, const void *src,
>>>> + unsigned long len);
>>>> + unsigned long (*copy_from)(struct kvm_xinterface *intf, void *dst,
>>>> + unsigned long gpa, unsigned long len);
>>>> + struct kvm_xvmap* (*vmap)(struct kvm_xinterface *intf,
>>>> + unsigned long gpa,
>>>> + unsigned long len);
>>>>
>>>>
>>> How would vmap() work with live migration?
>>>
>> vmap represents shmem regions, and is a per-guest-instance resource. So
>> my plan there is that the new and old guest instance would each have the
>> vmap region instated at the same GPA location (assumption: gpas are
>> stable across migration), and any state relevant data local to the shmem
>> (like ring head/tail position) is conveyed in the serialized stream for
>> the device model.
>>
>
> You'd have to copy the entire range since you don't know what the guest
> might put there. I guess it's acceptable for small areas.
? The vmap is presumably part of an ABI between guest and host, so the
host should always know what structure is present within the region, and
what is relevant from within that structure to migrate once that state
is "frozen".
These regions (for vbus, anyway) equate to things like virtqueue
metadata, and presumably the same problem exists for virtio-net in
userspace as it does here, since that is another form of a "vmap". So
whatever solution works for virtio-net migrating its virtqueues in
userspace should be close to what will work here. The primary
difference is the location of the serializer.
>
>>>> +
>>>> +static inline void
>>>> +_kvm_xinterface_release(struct kref *kref)
>>>> +{
>>>> + struct kvm_xinterface *intf;
>>>> + struct module *owner;
>>>> +
>>>> + intf = container_of(kref, struct kvm_xinterface, kref);
>>>> +
>>>> + owner = intf->owner;
>>>> + rmb();
>>>>
>>>>
>>> Why rmb?
>>>
>> the intf->ops->release() line may invalidate the intf pointer, so we
>> want to ensure that the read completes before the release() is called.
>>
>> TBH: I'm not 100% its needed, but I was being conservative.
>>
>
> rmb()s are only needed if an external agent can issue writes, otherwise
> you'd need one after every statement.
I was following the lessons learned here:
http://lkml.org/lkml/2009/7/7/175
Perhaps mb() or barrier() is more appropriate than rmb()? I'm CC'ing
David Howells in case he has more insight.
>
>
>
>
>>>
>>> A simple per-vcpu cache (in struct kvm_vcpu) is likely to give better
>>> results.
>>>
>> per-vcpu will not work well here, unfortunately, since this is an
>> external interface mechanism. The callers will generally be from a
>> kthread or some other non-vcpu related context. Even if we could figure
>> out a vcpu to use as a basis, we would require some kind of
>> heavier-weight synchronization which would not be as desirable.
>>
>> Therefore, I opted to go per-cpu and use the presumably lighterweight
>> get_cpu/put_cpu() instead.
>>
>
> This just assumes a low context switch rate.
It primarily assumes a low _migration_ rate, since you do not typically
have two contexts on the same cpu pounding on the memslots. And even if
you did, there's a good chance of locality between the threads, since
the IO activity is likely related. For the odd times where locality
fails to yield a hit, the time-slice or migration rate should be
sufficiently infrequent to still yield hit rates in the high 90s when it
matters. For low-volume callers, the lookup is in the noise, so I am not
as concerned.
IOW: Where the lookup hurts the most is trying to walk an SG list, since
each pointer is usually within the same slot. For this case, at least,
this cache helps immensely, at least according to profiles.
>
> How about a gfn_to_pfn_cached(..., struct gfn_to_pfn_cache *cache)?
> Each user can place it in a natural place.
Sounds good. I will incorporate this into the split patch.
>
>>>> +static unsigned long
>>>> +xinterface_copy_to(struct kvm_xinterface *intf, unsigned long gpa,
>>>> + const void *src, unsigned long n)
>>>> +{
>>>> + struct _xinterface *_intf = to_intf(intf);
>>>> + unsigned long dst;
>>>> + bool kthread = !current->mm;
>>>> +
>>>> + down_read(&_intf->kvm->slots_lock);
>>>> +
>>>> + dst = gpa_to_hva(_intf, gpa);
>>>> + if (!dst)
>>>> + goto out;
>>>> +
>>>> + if (kthread)
>>>> + use_mm(_intf->mm);
>>>> +
>>>> + if (kthread || _intf->mm == current->mm)
>>>> + n = copy_to_user((void *)dst, src, n);
>>>> + else
>>>> + n = _slow_copy_to_user(_intf, dst, src, n);
>>>>
>>>>
>>> Can't you switch the mm temporarily instead of this?
>>>
>> Thats actually what I do for the fast-path (use_mm() does a switch_to()
>> internally).
>>
>> The slow-path is only there for completeness for when switching is not
>> possible (such as if called with an mm already active i.e.
>> process-context).
>
> Still, why can't you switch temporarily?
I am not an mm expert, but IIUC you cannot call switch_to() from
anything other than kthread context. That's what the doc says, anyway.
>
>> In practice, however, this doesnt happen. Virtually
>> 100% of the calls in vbus hit the fast-path here, and I suspect most
>> xinterface clients would find the same conditions as well.
>>
>
> So you have 100% untested code here.
>
Actually, no. Before Michael enlightened me recently regarding
switch_to/use_mm, the '_slow_xx' functions were my _only_ path. So they
have indeed had multiple months (and multiple GB) of testing, as it
turns out. I only recently relegated them to "backup" duty.
Thanks Avi,
-Greg