Message-ID: <20150617083202-mutt-send-email-mst@redhat.com>
Date: Wed, 17 Jun 2015 08:34:26 +0200
From: "Michael S. Tsirkin" <mst@...hat.com>
To: Igor Mammedov <imammedo@...hat.com>
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
pbonzini@...hat.com
Subject: Re: [PATCH 3/5] vhost: support upto 509 memory regions
On Wed, Jun 17, 2015 at 12:00:56AM +0200, Igor Mammedov wrote:
> On Tue, 16 Jun 2015 23:14:20 +0200
> "Michael S. Tsirkin" <mst@...hat.com> wrote:
>
> > On Tue, Jun 16, 2015 at 06:33:37PM +0200, Igor Mammedov wrote:
> > > since commit
> > > 1d4e7e3 kvm: x86: increase user memory slots to 509
> > >
> > > it became possible to use a larger number of memory
> > > slots, which memory hotplug uses for registering
> > > hotplugged memory.
> > > However, QEMU crashes if it's used with more than ~60
> > > pc-dimm devices and vhost-net, since the host kernel's
> > > vhost-net module refuses to accept more than 65
> > > memory regions.
> > >
> > > Increase VHOST_MEMORY_MAX_NREGIONS from 65 to 509
> >
> > It was 64, not 65.
> >
> > > to match KVM_USER_MEM_SLOTS; this fixes the issue for vhost-net.
> > >
> > > Signed-off-by: Igor Mammedov <imammedo@...hat.com>
> >
> > Still thinking about this: can you reorder this to
> > be the last patch in the series please?
> sure
>
> >
> > Also - 509?
> It's the number of userspace memory slots in KVM terms; I made it
> match KVM's allotment of memory slots for the userspace side.
Maybe KVM has its reasons for this number, but I don't see
why we need to match it exactly.
> > I think if we are changing this, it'd be nice to
> > create a way for userspace to discover the support
> > and the number of regions supported.
> That was my first idea, before extending KVM's memslots:
> teach the kernel to tell QEMU this number, so that QEMU
> could at least check whether a new memory slot can be
> added. But I was redirected to the simpler solution of
> just raising the limit instead of overdoing things.
> Currently QEMU supports up to ~250 memslots, so 509 is
> about twice what we need; it should work for the near
> future
Yes, but old kernels are still around. It would be nice if you
could detect them.
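
For illustration only, a rough (untested) userspace probe along these
lines could already guess the limit on existing kernels, by feeding
VHOST_SET_MEM_TABLE dummy tables of increasing size and noting where
the kernel starts refusing them. The names and error handling here are
my assumptions, not a concrete proposal:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/vhost.h>

/* Ask the kernel to accept an all-dummy memory table with the given
 * number of (empty) regions; no backend is attached on this fd, so
 * only the region count itself should matter. */
static int mem_table_accepted(int fd, unsigned int nregions)
{
	size_t sz = sizeof(struct vhost_memory) +
		    nregions * sizeof(struct vhost_memory_region);
	struct vhost_memory *mem = calloc(1, sz);
	int ok;

	if (!mem)
		return 0;
	mem->nregions = nregions;
	ok = ioctl(fd, VHOST_SET_MEM_TABLE, mem) == 0;
	free(mem);
	return ok;
}

int main(void)
{
	int fd = open("/dev/vhost-net", O_RDWR);
	unsigned int n;

	if (fd < 0 || ioctl(fd, VHOST_SET_OWNER) < 0) {
		perror("vhost-net");
		return 1;
	}
	/* an unpatched kernel should stop at 64, a patched one at 509 */
	for (n = 1; n <= 512 && mem_table_accepted(fd, n); n++)
		;
	printf("kernel accepted up to %u memory regions\n", n - 1);
	close(fd);
	return 0;
}

(Since no backend is set up at that point, the count check should be
the only thing that can fail; other kernel versions may well behave
differently.)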
> but eventually we might still teach the kernel and QEMU
> to make things more robust.
A new ioctl would be easy to add; I think it's a good
idea generally.
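
From the userspace side it could look something like the sketch below.
VHOST_GET_MEM_MAX_NREGIONS and its request number are made up here
purely to illustrate the idea; nothing like it exists yet:

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/vhost.h>

/* Hypothetical request: read one int (the region limit) from the vhost
 * char device. 0x7f is invented; a real number would only be assigned
 * if/when such an ioctl is merged. */
#define VHOST_GET_MEM_MAX_NREGIONS _IOR(VHOST_VIRTIO, 0x7f, int)

int main(void)
{
	int fd = open("/dev/vhost-net", O_RDWR);
	int max_regions = 64;	/* conservative default for old kernels */

	if (fd < 0) {
		perror("open /dev/vhost-net");
		return 1;
	}
	/* an old kernel rejects the unknown ioctl; keep the default then */
	if (ioctl(fd, VHOST_GET_MEM_MAX_NREGIONS, &max_regions) < 0)
		fprintf(stderr, "no discovery ioctl, assuming %d regions\n",
			max_regions);
	printf("vhost supports %d memory regions\n", max_regions);
	close(fd);
	return 0;
}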
> >
> >
> > > ---
> > > drivers/vhost/vhost.c | 2 +-
> > > 1 file changed, 1 insertion(+), 1 deletion(-)
> > >
> > > diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> > > index 99931a0..6a18c92 100644
> > > --- a/drivers/vhost/vhost.c
> > > +++ b/drivers/vhost/vhost.c
> > > @@ -30,7 +30,7 @@
> > > #include "vhost.h"
> > >
> > > enum {
> > > - VHOST_MEMORY_MAX_NREGIONS = 64,
> > > + VHOST_MEMORY_MAX_NREGIONS = 509,
> > > VHOST_MEMORY_F_LOG = 0x1,
> > > };
> > >
> > > --
> > > 1.8.3.1