lists.openwall.net - Open Source and information security mailing list archives
Message-ID: <20150217154452.6f62dd77@nial.brq.redhat.com>
Date: Tue, 17 Feb 2015 15:44:52 +0100
From: Igor Mammedov <imammedo@...hat.com>
To: "Michael S. Tsirkin" <mst@...hat.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>, linux-kernel@...r.kernel.org,
	kvm@...r.kernel.org, netdev@...r.kernel.org
Subject: Re: [PATCH] vhost: support upto 509 memory regions

On Tue, 17 Feb 2015 13:32:12 +0100
"Michael S. Tsirkin" <mst@...hat.com> wrote:

> On Tue, Feb 17, 2015 at 11:59:48AM +0100, Paolo Bonzini wrote:
> >
> > On 17/02/2015 10:02, Michael S. Tsirkin wrote:
> > > > Increasing VHOST_MEMORY_MAX_NREGIONS from 65 to 509
> > > > to match KVM_USER_MEM_SLOTS fixes issue for vhost-net.
> > > >
> > > > Signed-off-by: Igor Mammedov <imammedo@...hat.com>
> > >
> > > This scares me a bit: each region is 32 bytes, we are talking
> > > a 16K allocation that userspace can trigger.
> >
> > What's bad with a 16K allocation?
>
> It fails when memory is fragmented.
>
> > How does kvm handle this issue?
> >
> > It doesn't.
> >
> > Paolo
>
> I'm guessing kvm doesn't do memory scans on the data path;
> vhost does.
>
> qemu is just doing things that the kernel didn't expect it to need.
>
> Instead, I suggest reducing the number of GPA<->HVA mappings:
>
> you have GPA 1,5,7
> map them at HVA 11,15,17
> then you can have 1 slot: 1->11
>
> To avoid libc reusing the memory holes, reserve them with MAP_NORESERVE
> or something like this.

Let's suppose we add an API to reserve the whole memory hotplug region
with MAP_NORESERVE and pass it to KVM as a memslot. What happens then
when the guest accesses a part of that region that is not really mapped?
This memslot would also be passed to vhost as a region; is that really
OK? I don't know what else it might break.

As an alternative, we could filter out hotplugged memory so that vhost
continues to work with only the initial memory. So the question is: do
we have to tell vhost about hotplugged memory at all?
> We can discuss smarter lookup algorithms but I'd rather
> userspace didn't do things that we then have to
> work around in kernel.