Message-ID: <20150730170842.7bf3a0f2@nial.brq.redhat.com>
Date:	Thu, 30 Jul 2015 17:08:42 +0200
From:	Igor Mammedov <imammedo@...hat.com>
To:	"Michael S. Tsirkin" <mst@...hat.com>
Cc:	linux-kernel@...r.kernel.org, pbonzini@...hat.com,
	kvm@...r.kernel.org
Subject: Re: [PATCH 2/2] vhost: increase default limit of nregions from 64
 to 509

On Thu, 30 Jul 2015 09:33:57 +0300
"Michael S. Tsirkin" <mst@...hat.com> wrote:

> On Thu, Jul 30, 2015 at 08:26:03AM +0200, Igor Mammedov wrote:
> > On Wed, 29 Jul 2015 18:28:26 +0300
> > "Michael S. Tsirkin" <mst@...hat.com> wrote:
> > 
> > > On Wed, Jul 29, 2015 at 04:29:23PM +0200, Igor Mammedov wrote:
> > > > Although there is now a vhost module max_mem_regions option
> > > > to set a custom limit, it doesn't help for default setups,
> > > > since it requires an administrator to manually set a higher
> > > > limit on each host, which complicates server deployment and
> > > > management.
> > > > Raise the limit to the same value KVM has (509 slots max),
> > > > so that default deployments work out of the box.
> > > > 
> > > > Signed-off-by: Igor Mammedov <imammedo@...hat.com>
> > > > ---
> > > > PS:
> > > > Users who want to lock down vhost can still use the
> > > > max_mem_regions option to set a lower limit, but I expect
> > > > they will be a minority.
> > > 
> > > I'm not inclined to merge this.
> > > 
> > > Once we change this we can't take it back. It's not a decision
> > > to be taken lightly.
> > considering that the continuous HVA idea has failed, why would you
> > want to take the limit back in the future if we raise it now?
> 
> I'm not sure.
> 
> I think you merely demonstrated it's a big change for userspace -
> not that it's unfeasible.
> 
> Alternatively, if we want an unlimited size table, we should keep it
> in userspace memory.
btw:
if the table were a simple array and the kernel did an inefficient
linear scan to do the translation, then I guess we could use
userspace memory.

But I'm afraid we can't trust userspace in the case of a more elaborate
structure. Even if it's just a binary search over a sorted array, it
would be possible for userspace to hang a kernel thread in
translate_desc() by providing a corrupted or wrongly sorted table.
And we can't afford table validation on the hot path.
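Purely for illustration, here is a rough userspace sketch of that
concern (this is not the vhost code; struct region_node, lookup_gpa()
and everything else in it are made up). It uses an index-linked lookup
structure as the "more elaborate structure": once userspace, which can
rewrite that memory at will, introduces a cycle, a walker that trusts
the links never terminates, which is what would happen to the vhost
worker thread inside translate_desc(). With a plain sorted array a
corrupted table "only" yields wrong translations, but anything the
kernel has to chase links through can be made to spin forever:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical node of an index-linked region "tree" living in memory
 * that userspace can rewrite at any time.  All names are invented. */
struct region_node {
        uint64_t gpa_start;     /* guest-physical start of the region */
        uint64_t size;          /* region length */
        int left, right;        /* child indices, -1 = no child */
};

/* The walk a validation-free fast path would do: follow the child
 * indices blindly until some region contains gpa or we fall off the
 * tree.  max_steps exists only so this demo terminates; a kernel
 * walker without such a cap would simply never return. */
static int lookup_gpa(const struct region_node *tab, int root,
                      uint64_t gpa, unsigned long max_steps)
{
        int i = root;
        unsigned long steps = 0;

        while (i >= 0 && steps++ < max_steps) {
                if (gpa < tab[i].gpa_start)
                        i = tab[i].left;
                else if (gpa >= tab[i].gpa_start + tab[i].size)
                        i = tab[i].right;
                else
                        return i;               /* translation found */
        }
        return -1;                              /* miss, or gave up */
}

int main(void)
{
        /* Well-formed table: root at index 0, one left child. */
        struct region_node good[2] = {
                { 0x2000, 0x1000,  1, -1 },
                { 0x1000, 0x1000, -1, -1 },
        };
        /* "Corrupted" table: the two nodes point at each other, so any
         * gpa outside both regions bounces between them forever. */
        struct region_node bad[2] = {
                { 0x2000, 0x1000, 1, 1 },
                { 0x1000, 0x1000, 0, 0 },
        };

        printf("good table, gpa 0x1800 -> node %d\n",
               lookup_gpa(good, 0, 0x1800, 64));
        printf("bad table,  gpa 0x9000 -> %d (gave up at the step cap)\n",
               lookup_gpa(bad, 0, 0x9000, 1000000));
        return 0;
}

Which is why the table has to stay in kernel memory (or be fully
validated on every update) as soon as it is anything smarter than a
flat array.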


> 
> > > 
> > > And memory hotplug users are a minority.  Out of these, users with a
> > > heavily fragmented PA space due to hotplug abuse are an even smaller
> > > minority.
> > > 
> > > > ---
> > > >  include/uapi/linux/vhost.h | 2 +-
> > > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > > > 
> > > > diff --git a/include/uapi/linux/vhost.h b/include/uapi/linux/vhost.h
> > > > index 2511954..92657bf 100644
> > > > --- a/include/uapi/linux/vhost.h
> > > > +++ b/include/uapi/linux/vhost.h
> > > > @@ -140,7 +140,7 @@ struct vhost_memory {
> > > >  #define VHOST_MEM_MAX_NREGIONS_NONE 0
> > > >  /* We support at least as many nregions in VHOST_SET_MEM_TABLE:
> > > >   * for use on legacy kernels without VHOST_GET_MEM_MAX_NREGIONS support. */
> > > > -#define VHOST_MEM_MAX_NREGIONS_DEFAULT 64
> > > > +#define VHOST_MEM_MAX_NREGIONS_DEFAULT 509
> > > >  
> > > >  /* VHOST_NET specific defines */
> > > >  
> > > > -- 
> > > > 1.8.3.1

