Date: Thu, 27 Sep 2018 09:06:40 -0400 (EDT)
From: Pankaj Gupta <pagupta@...hat.com>
To: Dan Williams <dan.j.williams@...el.com>
Cc: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	KVM list <kvm@...r.kernel.org>,
	Qemu Developers <qemu-devel@...gnu.org>,
	linux-nvdimm <linux-nvdimm@...1.01.org>,
	Jan Kara <jack@...e.cz>,
	Stefan Hajnoczi <stefanha@...hat.com>,
	Rik van Riel <riel@...riel.com>,
	Nitesh Narayan Lal <nilal@...hat.com>,
	Kevin Wolf <kwolf@...hat.com>,
	Paolo Bonzini <pbonzini@...hat.com>,
	Ross Zwisler <ross.zwisler@...el.com>,
	David Hildenbrand <david@...hat.com>,
	Xiao Guangrong <xiaoguangrong.eric@...il.com>,
	Christoph Hellwig <hch@...radead.org>,
	"Michael S. Tsirkin" <mst@...hat.com>,
	niteshnarayanlal@...mail.com,
	lcapitulino@...hat.com,
	Igor Mammedov <imammedo@...hat.com>,
	Eric Blake <eblake@...hat.com>
Subject: Re: [PATCH 3/3] virtio-pmem: Add virtio pmem driver

Hello Dan,

> > > + /* The request submission function */
> > > +static int virtio_pmem_flush(struct nd_region *nd_region)
> > > +{
> > > +	int err;
[...]
> > > +	init_waitqueue_head(&req->host_acked);
> > > +	init_waitqueue_head(&req->wq_buf);
> > > +
> > > +	spin_lock_irqsave(&vpmem->pmem_lock, flags);
> > > +	sg_init_one(&sg, req->name, strlen(req->name));
> > > +	sgs[0] = &sg;
> > > +	sg_init_one(&ret, &req->ret, sizeof(req->ret));
> > > +	sgs[1] = &ret;
[...]
> > > +	spin_unlock_irqrestore(&vpmem->pmem_lock, flags);
> > > +	/* When host has read buffer, this completes via host_ack */
> > > +	wait_event(req->host_acked, req->done);
> >
> > Hmm, this seems awkward if this is called from pmem_make_request. If
> > we need to wait for completion that should be managed by the guest
> > block layer. I.e. make_request should just queue request and then
> > trigger bio_endio() when the response comes back.
>
> We are plugging VIRTIO based flush callback for virtio_pmem driver. If pmem
> driver (pmem_make_request) has to queue request we have to plug "blk_mq_ops"
> callbacks for corresponding VIRTIO vqs. AFAICU there is no existing
> multiqueue code merged for pmem driver yet, though i could see patches
> by Dave upstream.

I thought about this, and with the current infrastructure "make_request"
releases the spinlock and puts the current thread/task to sleep. All other
threads are free to call 'make_request'/flush and similarly wait after
releasing the lock. This effectively works like a queue of threads waiting
for notifications from the host.

The current pmem code does not have multiqueue support, and I am not sure
the core pmem code needs it. Adding multiqueue support just for virtio-pmem
and not for pmem in the same driver would be confusing or require a lot of
tweaking.

Could you please give your suggestions on this?

Thanks,
Pankaj
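
For reference, below is a minimal userspace model of the waiting scheme
described above: several threads each queue a flush request, drop the lock,
and sleep until the "host" acknowledges their particular request, i.e. a
queue of waiting threads rather than a blk-mq queue of requests. This is an
illustrative sketch only, not the actual virtio_pmem driver code; the names
(struct flush_req, flusher_thread, host_thread) and the pthread condition
variable standing in for wait_event()/host_ack are all made up for the
example.

	/*
	 * Userspace model (NOT driver code) of the described flush scheme.
	 * Build with: cc -pthread flush_model.c -o flush_model
	 */
	#include <pthread.h>
	#include <stdbool.h>
	#include <stdio.h>
	#include <unistd.h>

	#define NR_FLUSHERS 4

	struct flush_req {
		pthread_mutex_t lock;
		pthread_cond_t  host_acked;  /* stands in for wait_event(req->host_acked, ...) */
		bool            done;        /* stands in for req->done set on host ack */
		int             id;
	};

	static struct flush_req reqs[NR_FLUSHERS];

	/* Guest side: submit a flush request and sleep until the host acks it. */
	static void *flusher_thread(void *arg)
	{
		struct flush_req *req = arg;

		pthread_mutex_lock(&req->lock);
		printf("guest: queued flush request %d, waiting for host ack\n", req->id);
		while (!req->done)           /* sleep, like wait_event() */
			pthread_cond_wait(&req->host_acked, &req->lock);
		pthread_mutex_unlock(&req->lock);

		printf("guest: flush request %d completed\n", req->id);
		return NULL;
	}

	/* Host side: acknowledge the queued requests one by one. */
	static void *host_thread(void *arg)
	{
		(void)arg;
		for (int i = 0; i < NR_FLUSHERS; i++) {
			usleep(1000);        /* pretend the host does some work */
			pthread_mutex_lock(&reqs[i].lock);
			reqs[i].done = true; /* models the virtqueue ack callback */
			pthread_cond_signal(&reqs[i].host_acked);
			pthread_mutex_unlock(&reqs[i].lock);
		}
		return NULL;
	}

	int main(void)
	{
		pthread_t flushers[NR_FLUSHERS], host;

		for (int i = 0; i < NR_FLUSHERS; i++) {
			pthread_mutex_init(&reqs[i].lock, NULL);
			pthread_cond_init(&reqs[i].host_acked, NULL);
			reqs[i].done = false;
			reqs[i].id = i;
			pthread_create(&flushers[i], NULL, flusher_thread, &reqs[i]);
		}
		pthread_create(&host, NULL, host_thread, NULL);

		for (int i = 0; i < NR_FLUSHERS; i++)
			pthread_join(flushers[i], NULL);
		pthread_join(host, NULL);
		return 0;
	}

Running the model shows all flusher threads blocked concurrently and woken
individually as their acks arrive, which is the behaviour being contrasted
with the bio_endio()/blk_mq_ops approach suggested in the review above.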