Message-ID: <20150217123212.GA6362@redhat.com>
Date: Tue, 17 Feb 2015 13:32:12 +0100
From: "Michael S. Tsirkin" <mst@...hat.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: Igor Mammedov <imammedo@...hat.com>, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org, netdev@...r.kernel.org
Subject: Re: [PATCH] vhost: support upto 509 memory regions
On Tue, Feb 17, 2015 at 11:59:48AM +0100, Paolo Bonzini wrote:
>
>
> On 17/02/2015 10:02, Michael S. Tsirkin wrote:
> > > Increasing VHOST_MEMORY_MAX_NREGIONS from 65 to 509
> > > to match KVM_USER_MEM_SLOTS fixes issue for vhost-net.
> > >
> > > Signed-off-by: Igor Mammedov <imammedo@...hat.com>
> >
> > This scares me a bit: each region is 32byte, we are talking
> > a 16K allocation that userspace can trigger.
>
> What's bad with a 16K allocation?
It can fail when memory is fragmented: a 16K kmalloc needs physically
contiguous pages, and an order-2 allocation isn't guaranteed to be
available on a long-running host.
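
Just to spell out where the 16K figure comes from, here is a rough
sketch, assuming the uapi layout stays four u64s per region as in
today's include/uapi/linux/vhost.h:

    /* Rough arithmetic only; mirrors the current uapi struct layout. */
    #include <stdio.h>
    #include <stdint.h>

    struct vhost_memory_region {
            uint64_t guest_phys_addr;
            uint64_t memory_size;
            uint64_t userspace_addr;
            uint64_t flags_padding;    /* 32 bytes per region in total */
    };

    int main(void)
    {
            size_t nregions = 509;
            size_t bytes = nregions * sizeof(struct vhost_memory_region);

            /* 509 * 32 = 16288 bytes, i.e. just under 16 KiB,
             * which is an order-2 kmalloc on 4K pages. */
            printf("%zu regions -> %zu bytes (~%zu KiB)\n",
                   nregions, bytes, bytes / 1024);
            return 0;
    }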
> > How does kvm handle this issue?
>
> It doesn't.
>
> Paolo
I'm guessing kvm doesn't scan the memory regions on the data path;
vhost does.
QEMU is just doing things that the kernel didn't expect it to need.
Instead, I suggest reducing the number of GPA<->HVA mappings:
say you have GPAs 1, 5 and 7, and you map them at HVAs 11, 15 and 17;
then a single slot (GPA 1 -> HVA 11) covers all of them, since the
GPA->HVA offset is the same everywhere.
To keep libc from reusing the memory holes in between, reserve them
with MAP_NORESERVE or something like that.
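
A minimal userspace sketch of that idea, assuming the VMM is free to
pick its own HVA layout (reserve_window/place_region below are
hypothetical helpers, not an existing QEMU API):

    #include <stdio.h>
    #include <sys/mman.h>

    /* Reserve one contiguous, unbacked HVA window covering the whole
     * guest physical address space, so nothing else lands in the holes. */
    static void *reserve_window(size_t guest_size)
    {
            return mmap(NULL, guest_size, PROT_NONE,
                        MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    }

    /* Back one guest RAM region at a fixed offset inside the window,
     * so HVA = window_base + GPA holds for every region: one slot total. */
    static void *place_region(void *window, size_t gpa, size_t size)
    {
            return mmap((char *)window + gpa, size, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
    }

    int main(void)
    {
            size_t guest_size = 1UL << 30;      /* 1 GiB of GPA space */
            void *window = reserve_window(guest_size);

            if (window == MAP_FAILED)
                    return 1;
            /* Two discontiguous GPA regions, same constant GPA->HVA offset. */
            place_region(window, 0x00000000, 1UL << 20);
            place_region(window, 0x10000000, 1UL << 20);
            printf("single slot: GPA 0 -> HVA %p\n", window);
            return 0;
    }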
We can discuss smarter lookup algorithms, but I'd rather userspace
didn't do things that we then have to work around in the kernel.
--
MST