Message-ID: <CAPcyv4gvPhi4oe-KOo1HH+QbE-9whOvP7TypDshbbx3XZXh9=A@mail.gmail.com>
Date:   Thu, 27 Sep 2018 08:55:56 -0700
From:   Dan Williams <dan.j.williams@...el.com>
To:     Pankaj Gupta <pagupta@...hat.com>
Cc:     Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        KVM list <kvm@...r.kernel.org>,
        Qemu Developers <qemu-devel@...gnu.org>,
        linux-nvdimm <linux-nvdimm@...1.01.org>, Jan Kara <jack@...e.cz>,
        Stefan Hajnoczi <stefanha@...hat.com>,
        Rik van Riel <riel@...riel.com>,
        Nitesh Narayan Lal <nilal@...hat.com>,
        Kevin Wolf <kwolf@...hat.com>,
        Paolo Bonzini <pbonzini@...hat.com>,
        "Zwisler, Ross" <ross.zwisler@...el.com>,
        David Hildenbrand <david@...hat.com>,
        Xiao Guangrong <xiaoguangrong.eric@...il.com>,
        Christoph Hellwig <hch@...radead.org>,
        "Michael S. Tsirkin" <mst@...hat.com>,
        niteshnarayanlal@...mail.com, lcapitulino@...hat.com,
        Igor Mammedov <imammedo@...hat.com>,
        Eric Blake <eblake@...hat.com>
Subject: Re: [PATCH 3/3] virtio-pmem: Add virtio pmem driver

On Thu, Sep 27, 2018 at 6:07 AM Pankaj Gupta <pagupta@...hat.com> wrote:
[..]
> > We are plugging a VIRTIO based flush callback for the virtio_pmem driver. If
> > the pmem driver (pmem_make_request) has to queue requests, we have to plug
> > "blk_mq_ops" callbacks for the corresponding VIRTIO vqs. AFAICU there is no
> > existing multiqueue code merged for the pmem driver yet, though I could see
> > patches by Dave upstream.
> >
>
> I thought about this, and with the current infrastructure "make_request" releases
> the spinlock and puts the current thread/task to sleep. All other threads are free
> to call 'make_request'/flush and similarly wait after releasing the lock.

Which lock are you referring to?

> This actually works like a queue of threads
> waiting for notifications from the host.
>
> The current pmem code does not have multiqueue support, and I am not sure the
> core pmem code needs it. Adding multiqueue support just for virtio-pmem and not
> for pmem in the same driver would be confusing or require a lot of tweaking.

Why does the pmem driver need to be converted to multiqueue support?

> Could you please give your suggestions on this?

I was expecting that flush requests that cannot be completed
synchronously would be placed on a queue, with bio_endio() called at a
future time. I.e., use bio_chain() to manage the async portion of the
flush request. The guest block layer then just assumes the bio was
queued and will be completed at some point in the future.
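
Roughly something like the following sketch (not from the posted patch;
virtio_pmem_queue_flush() and the vpmem parameter are made-up names, and it
assumes the usual <linux/bio.h> / <linux/blk_types.h> definitions):

/*
 * Sketch only: handle REQ_PREFLUSH in the make_request path by
 * chaining a child bio and deferring its completion until the host
 * acks the flush.
 */
static blk_qc_t pmem_handle_flush(struct virtio_pmem *vpmem, struct bio *bio)
{
	struct bio *child = bio_alloc(GFP_ATOMIC, 0);

	if (!child) {
		bio_io_error(bio);
		return BLK_QC_T_NONE;
	}

	/* the parent's completion now also waits on the child */
	bio_chain(child, bio);

	/*
	 * Hypothetical helper: queue the flush on the virtio vq; the
	 * virtqueue callback is expected to call bio_endio(child) when
	 * the host acknowledges the flush.
	 */
	virtio_pmem_queue_flush(vpmem, child);

	/* drop our reference; the parent completes once the child does */
	bio_endio(bio);
	return BLK_QC_T_NONE;
}

That way no blk-mq conversion is needed: the submitting context never sleeps,
and the host notification path just completes the chained child bio.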
