Date:   Mon, 4 Jun 2018 11:55:33 +0200
From:   David Hildenbrand <david@...hat.com>
To:     Igor Mammedov <imammedo@...hat.com>,
        Pankaj Gupta <pagupta@...hat.com>
Cc:     linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
        qemu-devel@...gnu.org, linux-nvdimm@...1.01.org,
        linux-mm@...ck.org, kwolf@...hat.com, haozhong.zhang@...el.com,
        jack@...e.cz, xiaoguangrong.eric@...il.com, riel@...riel.com,
        niteshnarayanlal@...mail.com, ross.zwisler@...el.com,
        lcapitulino@...hat.com, hch@...radead.org, mst@...hat.com,
        stefanha@...hat.com, marcel@...hat.com, pbonzini@...hat.com,
        dan.j.williams@...el.com, nilal@...hat.com
Subject: Re: [Qemu-devel] [RFC v2 0/2] kvm "fake DAX" device flushing

On 01.06.2018 14:24, Igor Mammedov wrote:
> On Wed, 25 Apr 2018 16:54:12 +0530
> Pankaj Gupta <pagupta@...hat.com> wrote:
> 
> [...]
>> - Qemu virtio-pmem device
>>   It exposes a persistent memory range to the KVM guest which,
>>   on the host side, is file-backed memory and works as a
>>   persistent memory device. In addition to this, it provides
>>   virtio device handling of the flushing interface. The KVM
>>   guest performs Qemu-side asynchronous sync using this
>>   interface.
> A random high-level question:
> have you considered using a separate (from the memory itself) virtio
> device as a controller for exposing some memory and async flushing,
> and then just slaving pc-dimm devices to it, with the notification/ACPI
> code suppressed so that the guest won't touch them?

I don't think slaving pc-dimm devices would be the right thing to do
(e.g. slots, pc-dimm vs. nvdimm, bus(-less), etc.). However, the general
idea is interesting for virtio-pmem (as we might have a larger number of
disks).

We could have something like a virtio-pmem-bus to which you attach
virtio-pmem devices. By specifying the mapping, details such as the
thread used for async flushes would be implicit.
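
To make that concrete, here is a minimal userspace sketch of the idea,
purely illustrative and not QEMU code (the names pmem_bus,
flush_worker and pmem_bus_submit_flush are all made up): the "bus"
object owns a single flush thread, every attached device submits its
fsync work to it, so the device-to-thread mapping falls out of the
attachment rather than being configured per device.

#include <fcntl.h>
#include <pthread.h>
#include <stdlib.h>
#include <unistd.h>

struct flush_req {
	int memfd;                      /* fd of the file backing one pmem range */
	struct flush_req *next;
};

struct pmem_bus {
	pthread_t flusher;              /* one async-flush thread per bus */
	pthread_mutex_t lock;
	pthread_cond_t cond;
	struct flush_req *queue;
};

static void *flush_worker(void *arg)
{
	struct pmem_bus *bus = arg;

	for (;;) {
		pthread_mutex_lock(&bus->lock);
		while (!bus->queue)
			pthread_cond_wait(&bus->cond, &bus->lock);
		struct flush_req *req = bus->queue;
		bus->queue = req->next;
		pthread_mutex_unlock(&bus->lock);

		fsync(req->memfd);      /* persist the backing file */
		/* ...complete the virtio request back to the guest here... */
		free(req);
	}
	return NULL;
}

/* Any device attached to this bus submits here; which thread services
 * the flush is implicit in the attachment. */
static void pmem_bus_submit_flush(struct pmem_bus *bus, int memfd)
{
	struct flush_req *req = malloc(sizeof(*req));

	req->memfd = memfd;
	pthread_mutex_lock(&bus->lock);
	req->next = bus->queue;
	bus->queue = req;
	pthread_cond_signal(&bus->cond);
	pthread_mutex_unlock(&bus->lock);
}

int main(void)
{
	struct pmem_bus bus = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.cond = PTHREAD_COND_INITIALIZER,
	};
	int fd = open("/tmp/pmem-backing-file", O_RDWR | O_CREAT, 0600);

	pthread_create(&bus.flusher, NULL, flush_worker, &bus);
	pmem_bus_submit_flush(&bus, fd);   /* two devices sharing one bus... */
	pmem_bus_submit_flush(&bus, fd);   /* ...would both end up here */
	sleep(1);                          /* crude: let the worker drain */
	return 0;
}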

> 
> That way it might be more scalable: you consume only one PCI slot
> for the controller vs. multiple slots for virtio-pmem devices.
> 
>> Changes from previous RFC[1]:
>>
>> - Reuse existing 'pmem' code for registering persistent 
>>   memory and other operations instead of creating an entirely 
>>   new block driver.
>> - Use VIRTIO driver to register memory information with 
>>   nvdimm_bus and create region_type accordingly. 
>> - Call VIRTIO flush from existing pmem driver.
>>
>> Details of the project idea for the 'fake DAX' flushing interface
>> are shared in [2] & [3].
>>
>> Pankaj Gupta (2):
>>    Add virtio-pmem guest driver
>>    pmem: device flush over VIRTIO
>>
>> [1] https://marc.info/?l=linux-mm&m=150782346802290&w=2
>> [2] https://www.spinics.net/lists/kvm/msg149761.html
>> [3] https://www.spinics.net/lists/kvm/msg153095.html  
>>
>>  drivers/nvdimm/region_devs.c     |    7 ++
>>  drivers/virtio/Kconfig           |   12 +++
>>  drivers/virtio/Makefile          |    1 
>>  drivers/virtio/virtio_pmem.c     |  118 +++++++++++++++++++++++++++++++++++++++
>>  include/linux/libnvdimm.h        |    4 +
>>  include/uapi/linux/virtio_ids.h  |    1 
>>  include/uapi/linux/virtio_pmem.h |   58 +++++++++++++++++++
>>  7 files changed, 201 insertions(+)
>>
> 
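For context on why "call VIRTIO flush from existing pmem driver" (and
a flushing interface at all) is needed, here is a tiny userspace
analogy, not the patch code; the path and sizes are arbitrary. With a
file-backed "fake DAX" range, a guest store only reaches the host page
cache, so guest-side cache-flush instructions are not enough for
persistence; the guest has to ask the host to do the equivalent of an
fsync(), which is exactly what the virtio flush request conveys.

#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/tmp/fake-dax-backing", O_RDWR | O_CREAT, 0600);
	char *p;

	ftruncate(fd, 4096);
	p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

	memcpy(p, "guest write", 11);   /* the guest's store into the range */

	/* On a real NVDIMM a CLWB + fence would make this persistent; with
	 * a file-backed range the data may still sit in the host page
	 * cache, so the host must be asked to do the equivalent of: */
	fsync(fd);

	munmap(p, 4096);
	close(fd);
	return 0;
}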


-- 

Thanks,

David / dhildenb
