Message-ID: <5581A496.5060503@redhat.com>
Date: Wed, 17 Jun 2015 18:47:18 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: "Michael S. Tsirkin" <mst@...hat.com>
CC: Igor Mammedov <imammedo@...hat.com>, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org
Subject: Re: [PATCH 3/5] vhost: support upto 509 memory regions
On 17/06/2015 18:41, Michael S. Tsirkin wrote:
> On Wed, Jun 17, 2015 at 06:38:25PM +0200, Paolo Bonzini wrote:
>>
>>
>> On 17/06/2015 18:34, Michael S. Tsirkin wrote:
>>> On Wed, Jun 17, 2015 at 06:31:32PM +0200, Paolo Bonzini wrote:
>>>>
>>>>
>>>> On 17/06/2015 18:30, Michael S. Tsirkin wrote:
>>>>> Meanwhile old tools are vulnerable to OOM attacks.
>>>>
>>>> For each vhost device there will be likely one tap interface, and I
>>>> suspect that it takes way, way more than 16KB of memory.
>>>
>>> That's not true. We have a vhost device per queue, all queues
>>> are part of a single tap device.
>>
>> s/tap/VCPU/ then. A KVM VCPU also takes more than 16KB of memory.
>
> That's up to you as a kvm maintainer :)
Not easy, when the CPU alone requires three (albeit non-consecutive)
pages for the VMCS, the APIC access page and the EPT root.
> People are already concerned about vhost device
> memory usage, I'm not happy to define our user/kernel interface
> in a way that forces even more memory to be used up.
So, the questions to ask are:
1) What is the memory usage like immediately after vhost is brought up,
apart from these 16K?
2) Is there anything in vhost that allocates a user-controllable amount
of memory?
3) What is the size of the data structures that support one virtqueue
(there are two of them)? Does it depend on the size of the virtqueues?
4) Would it make sense to share memory regions between multiple vhost
devices? Would it be hard to implement? It would also make memory
operations O(1) rather than O(#cpus).
Paolo