Date:	Wed, 17 Jun 2015 21:11:10 +0200
From:	"Michael S. Tsirkin" <mst@...hat.com>
To:	Paolo Bonzini <pbonzini@...hat.com>
Cc:	Igor Mammedov <imammedo@...hat.com>, linux-kernel@...r.kernel.org,
	kvm@...r.kernel.org
Subject: Re: [PATCH 3/5] vhost: support upto 509 memory regions

On Wed, Jun 17, 2015 at 06:47:18PM +0200, Paolo Bonzini wrote:
> 
> 
> On 17/06/2015 18:41, Michael S. Tsirkin wrote:
> > On Wed, Jun 17, 2015 at 06:38:25PM +0200, Paolo Bonzini wrote:
> >>
> >>
> >> On 17/06/2015 18:34, Michael S. Tsirkin wrote:
> >>> On Wed, Jun 17, 2015 at 06:31:32PM +0200, Paolo Bonzini wrote:
> >>>>
> >>>>
> >>>> On 17/06/2015 18:30, Michael S. Tsirkin wrote:
> >>>>> Meanwhile old tools are vulnerable to OOM attacks.
> >>>>
> >>>> For each vhost device there will be likely one tap interface, and I
> >>>> suspect that it takes way, way more than 16KB of memory.
> >>>
> >>> That's not true. We have a vhost device per queue, all queues
> >>> are part of a single tap device.
> >>
> >> s/tap/VCPU/ then.  A KVM VCPU also takes more than 16KB of memory.
> > 
> > That's up to you as a kvm maintainer :)
> 
> Not easy, when the CPU alone requires three (albeit non-consecutive)
> pages for the VMCS, the APIC access page and the EPT root.
> 
> > People are already concerned about vhost device
> > memory usage, I'm not happy to define our user/kernel interface
> > in a way that forces even more memory to be used up.
> 
> So, the questions to ask are:
> 
> 1) What is the memory usage like immediately after vhost is brought up,
> apart from these 16K?

About 24K, but most of that is iov pool arrays, kept around as an optimization
to avoid kmalloc on the data path. Under 1K of it tracks persistent state.
Recently people have been complaining about these pools
so I've been thinking about switching to a per-cpu array,
or something similar.

> 2) Is there anything in vhost that allocates a user-controllable amount
> of memory?

Definitely not in vhost-net.

> 3) What is the size of the data structures that support one virtqueue
> (there are two of them)?

Around 256 bytes.

>  Does it depend on the size of the virtqueues?

No.

> 4) Would it make sense to share memory regions between multiple vhost
> devices?  Would it be hard to implement?

It's not trivial. It would absolutely require userspace ABI
extensions.

>  It would also make memory
> operations O(1) rather than O(#cpus).
> 
> Paolo

We'd save the kmalloc/memcpy/kfree, that is true.

But we'd still need to flush all VQs so it's still O(#cpus),
we'd just be doing less stuff in that O(#cpus).

-- 
MST