Message-ID: <20150622091027.582a1549@nial.brq.redhat.com>
Date: Mon, 22 Jun 2015 09:10:27 +0200
From: Igor Mammedov <imammedo@...hat.com>
To: "Michael S. Tsirkin" <mst@...hat.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org, andrey@...l.ru
Subject: Re: [PATCH 3/5] vhost: support upto 509 memory regions
On Fri, 19 Jun 2015 18:33:39 +0200
"Michael S. Tsirkin" <mst@...hat.com> wrote:
> On Fri, Jun 19, 2015 at 06:26:27PM +0200, Paolo Bonzini wrote:
> >
> >
> > On 19/06/2015 18:20, Michael S. Tsirkin wrote:
> > > > We could, but I/O is just an example. It can be I/O, a network ring,
> > > > whatever. We cannot audit all address_space_map uses.
> > > >
> > >
> > > No need to audit them all: defer a device_add that uses an hva range
> > > until address_space_unmap on hvas in that range drops the reference
> > > count to 0.
> >
> > That could be forever. You certainly don't want to lock up the monitor
> > forever just because a device model isn't too friendly to memory hot-unplug.
>
> We can defer the addition, no need to lock up the monitor.
>
> > That's why you need to audit them (also, it's perfectly within the device
> > model's rights to use address_space_unmap this way: it's the guest that's
> > buggy and leaves a dangling reference to a region before unplugging it).
> >
> >
> > Paolo
>
> Then maybe it's not too bad that the guest will crash because the memory
> was unmapped.
So far HVA is unusable even if we make this assumption and let the guest crash:
virtio-net doesn't work with it anyway. Translation of GPA to HVA for
descriptors works as expected (correctly), but virtio backed by the vhost+HVA
hack still can't send/receive packets.
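
For reference, the GPA->HVA translation in question is the per-descriptor
lookup vhost does against the table installed via VHOST_SET_MEM_TABLE. A
minimal sketch of that lookup (struct vhost_memory_region mirrors
uapi/linux/vhost.h; the gpa_to_hva() helper itself is illustrative, not
actual kernel code):

    #include <stdint.h>
    #include <stddef.h>

    struct vhost_memory_region {
            uint64_t guest_phys_addr;  /* GPA where the region starts */
            uint64_t memory_size;      /* region length in bytes */
            uint64_t userspace_addr;   /* HVA backing the region in qemu */
            uint64_t flags_padding;    /* reserved, must be 0 */
    };

    /* Linear scan over the regions; returns the HVA for gpa,
     * or 0 if no region covers it. */
    static uint64_t gpa_to_hva(const struct vhost_memory_region *regions,
                               size_t nregions, uint64_t gpa)
    {
            size_t i;

            for (i = 0; i < nregions; i++) {
                    const struct vhost_memory_region *r = &regions[i];

                    if (gpa >= r->guest_phys_addr &&
                        gpa - r->guest_phys_addr < r->memory_size)
                            return r->userspace_addr +
                                   (gpa - r->guest_phys_addr);
            }
            return 0; /* descriptor points outside guest memory */
    }

Raising the region limit to 509 only grows the bound on nregions; the
lookup itself is unchanged.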
That's why I prefer to merge the kernel solution first, as a stable fix that
doesn't introduce any new issues, and to work on the userspace approach on
top of that.
Hopefully that can be done, but we would still need time to iron out the
side effects/issues it causes or could cause, so that the fix becomes stable
enough for production.
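
Returning to the deferral suggested upthread (hold back a device_add that
reuses an hva range until address_space_unmap drops that range's reference
count to 0), a hypothetical sketch of what it could look like; hva_range and
the helpers below are invented for illustration, only the refcounting idea
comes from the thread:

    #include <stdint.h>

    struct hva_range {
            uint64_t start, len;
            unsigned refs;               /* live address_space_map() users */
            void (*pending_add)(void *); /* deferred device_add, if any */
            void *pending_opaque;
    };

    /* Called when address_space_map() hands out a mapping in the range. */
    static void hva_range_ref(struct hva_range *r)
    {
            r->refs++;
    }

    /* Called from address_space_unmap(); fires the deferred add once
     * the last user is gone, so the monitor never blocks. */
    static void hva_range_unref(struct hva_range *r)
    {
            if (--r->refs == 0 && r->pending_add) {
                    r->pending_add(r->pending_opaque);
                    r->pending_add = NULL;
            }
    }

    /* device_add path: run immediately if the range is idle, else defer. */
    static void device_add_deferred(struct hva_range *r,
                                    void (*add)(void *), void *opaque)
    {
            if (r->refs == 0)
                    add(opaque);
            else {
                    r->pending_add = add;
                    r->pending_opaque = opaque;
            }
    }

Paolo's objection still applies in this form: if a buggy device model never
unmaps, refs never reaches 0 and the add stays deferred indefinitely; it just
no longer wedges the monitor.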