Message-ID: <782ac7e2-b8d9-139d-6182-cb4e2d082458@redhat.com>
Date: Thu, 5 Apr 2018 14:19:25 +0200
From: David Hildenbrand <david@...hat.com>
To: Pankaj Gupta <pagupta@...hat.com>
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
qemu-devel@...gnu.org, linux-nvdimm@...1.01.org, kwolf@...hat.com,
haozhong zhang <haozhong.zhang@...el.com>, jack@...e.cz,
xiaoguangrong eric <xiaoguangrong.eric@...il.com>,
riel@...riel.com, niteshnarayanlal@...mail.com, mst@...hat.com,
ross zwisler <ross.zwisler@...el.com>, hch@...radead.org,
stefanha@...hat.com, imammedo@...hat.com, marcel@...hat.com,
pbonzini@...hat.com, dan j williams <dan.j.williams@...el.com>,
nilal@...hat.com
Subject: Re: [Qemu-devel] [RFC] qemu: Add virtio pmem device
>>
>> So right now you're just using some memdev for testing.
>
> yes.
>
>>
>> I assume that the memory region we will provide to the guest will be a
>> simple memory mapped raw file. Dirty tracking (using the kvm slot) will
>> be used to detect which blocks actually changed and have to be flushed
>> to disk.
>
> Not really, we will perform fsync on the raw file. As this file is created
> on regular storage and not on an NVDIMM, the host page cache radix tree will
> have the dirty page information, which will be used for fsync.
Ah right. That makes things a lot easier!
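Just to spell out what I now understand the flush path to be, here is a
minimal user-space sketch (not the actual QEMU code; the file path and
sizes are made up for illustration): the raw file is mmap()ed MAP_SHARED
into the region handed to the guest, guest stores dirty the host page
cache through that mapping, and a flush request only has to fsync() the
backing fd.

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
    /* Back the guest-visible region with a plain file on regular storage
     * (hypothetical path, for illustration only). */
    int backing_fd = open("/var/lib/pmem-backend.raw", O_RDWR | O_CREAT, 0600);
    if (backing_fd < 0) {
        perror("open");
        return 1;
    }

    size_t region_size = 16 * 1024 * 1024;      /* 16 MiB for the example */
    if (ftruncate(backing_fd, region_size) < 0) {
        perror("ftruncate");
        return 1;
    }

    void *region = mmap(NULL, region_size, PROT_READ | PROT_WRITE,
                        MAP_SHARED, backing_fd, 0);
    if (region == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Guest stores land directly in the host page cache through this
     * mapping; the page cache radix tree remembers which pages are dirty. */
    memset(region, 0xab, 4096);

    /* On a guest flush request, the device model only has to fsync()
     * the backing file -- the kernel writes back exactly the dirty
     * pages it already tracks. */
    if (fsync(backing_fd) < 0) {
        perror("fsync");
        return 1;
    }

    munmap(region, region_size);
    close(backing_fd);
    return 0;
}

So no KVM dirty tracking is needed for the flush itself, which is what
makes things easier.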
>
>>
>> Will this raw file already have the "disk information header" (no idea
>> how that stuff is called) encoded? Are there any plans/possible ways to
>>
>> a) automatically create the headers? (if that's even possible)
>
> It's raw. Right now we are just supporting the raw format.
>
> As this is a direct mapping of memory into the guest address space, I don't
> think we can have an abstraction of headers for block-specific features.
> Or maybe we can get the opinion of others (QEMU block people) on whether it is at all possible?
>
>> b) support anything but raw files?
>>
>> Please note that under x86, a KVM memory slot still has a (in my
>> opinion) fairly big overhead depending on the size of the slot (rmap,
>> page_track). We might have to optimize that.
>
> I have not tried/observed this. Right now I just use a single memory slot and cold-add
> a few MBs of memory in QEMU. Can you please provide more details on this?
>
You can have a look at kvm_arch_create_memslot() in arch/x86/kvm/x86.c.
"npages" is used to allocate certain arrays (rmap for shadow page
tables). Also kvm_page_track_create_memslot() allocates data for page_track.
Having a big disk involves a lot of memory overhead due to the big kvm
memory slot. This is already the case for NVDIMMs as of now.
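To give a rough feeling for the numbers, here is a back-of-the-envelope
estimate (the per-entry sizes are my reading of the current code and
should be treated as approximations, not the exact layout):

#include <stdio.h>

int main(void)
{
    unsigned long long slot_bytes = 1ULL << 40;        /* a 1 TiB "disk"  */
    unsigned long long npages_4k  = slot_bytes >> 12;  /* 4 KiB pages     */
    unsigned long long npages_2m  = slot_bytes >> 21;  /* 2 MiB pages     */
    unsigned long long npages_1g  = slot_bytes >> 30;  /* 1 GiB pages     */

    /* rmap: roughly one pointer-sized entry per page, for each
     * supported page size level. */
    unsigned long long rmap_bytes =
        (npages_4k + npages_2m + npages_1g) * sizeof(void *);

    /* page_track: roughly one 16-bit counter per 4 KiB gfn. */
    unsigned long long track_bytes = npages_4k * sizeof(unsigned short);

    printf("rmap       ~ %llu MiB\n", rmap_bytes >> 20);
    printf("page_track ~ %llu MiB\n", track_bytes >> 20);
    return 0;
}

If those assumptions hold, a 1 TiB slot already costs on the order of a
couple of GiB of host memory just for per-slot bookkeeping.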
Other architectures (e.g. s390x) don't have this "problem". They don't
allocate any such data depending on the size of a memory slot.
This is certainly something to work on in the future.
--
Thanks,
David / dhildenb