Date:   Mon, 24 May 2021 23:33:33 -0700
From:   Dongli Zhang <dongli.zhang@...cle.com>
To:     Stefan Hajnoczi <stefanha@...hat.com>,
        Hannes Reinecke <hare@...e.de>
Cc:     virtualization@...ts.linux-foundation.org,
        linux-scsi@...r.kernel.org, linux-kernel@...r.kernel.org,
        linux-block@...r.kernel.org, mst@...hat.com, jasowang@...hat.com,
        pbonzini@...hat.com, jejb@...ux.ibm.com,
        martin.petersen@...cle.com, joe.jin@...cle.com,
        junxiao.bi@...cle.com, srinivas.eeda@...cle.com
Subject: Re: [RFC] virtio_scsi: to poll and kick the virtqueue in timeout
 handler

Hi Stefan and Hannes,

On 5/24/21 6:24 AM, Stefan Hajnoczi wrote:
> On Sun, May 23, 2021 at 09:39:51AM +0200, Hannes Reinecke wrote:
>> On 5/23/21 8:38 AM, Dongli Zhang wrote:
>>> This RFC is to trigger a discussion about deliberately polling and
>>> kicking the virtqueue in the virtio-scsi timeout handler.
>>>
>>> virtio-scsi relies on the virtio vring shared between the VM and the
>>> host. The VM side places requests on the vring and kicks the virtqueue,
>>> while the host side places responses on the vring and interrupts the VM.
>>>
>>> By default, the virtio-scsi timeout handler returns BLK_EH_RESET_TIMER
>>> to give the host a chance to perform error handling (EH).
>>>
>>> However, this does not help when the responses are already available on
>>> the vring but the host-to-VM notification has been lost.
>>>
>> How can this happen?
>> If responses are lost, the communication between the VM and the host is
>> broken, and we should instead reset the virtio rings themselves.
> 
> I agree. In principle it's fine to poll the virtqueue at any time, but I
> don't understand the failure scenario here. It's not clear to me why the
> device-to-driver vq notification could be lost.
> 

One example is the CPU hotplug issue before commit bf0beec0607d ("blk-mq:
drain I/O when all CPUs in a hctx are offline") was available. That issue is
equivalent to a lost interrupt: without the CPU hotplug fix, the NVMe driver
can still rely on its timeout handler to complete in-flight I/O requests,
while the paravirtualized virtio-scsi may hang permanently.

In addition, since virtio/vhost/QEMU is complex software, we cannot
guarantee there will be no further lost-interrupt/lost-kick issues in the
future. It is really painful when we encounter such an issue in a production
environment.


Regarding resetting the vring: if the problem is just a lost interrupt, I do
not think it is necessary to reset the entire vring. Polling the vring should
be enough. The NVMe PCI driver does the same, on the assumption that an
interrupt may have been lost.

When a request times out, the NVMe PCI driver first polls the completion
queue and checks whether the request has actually completed, instead of
resetting/aborting it immediately. From nvme_timeout() in
drivers/nvme/host/pci.c:


static enum blk_eh_timer_return nvme_timeout(struct request *req, bool reserved)
...
        /*
         * Did we miss an interrupt?
         */
        if (test_bit(NVMEQ_POLLED, &nvmeq->flags))
                nvme_poll(req->mq_hctx);
        else
                nvme_poll_irqdisable(nvmeq);

        if (blk_mq_request_completed(req)) {
                dev_warn(dev->ctrl.device,
                         "I/O %d QID %d timeout, completion polled\n",
                         req->tag, nvmeq->qid);
                return BLK_EH_DONE;
        }


Thank you very much!

Dongli Zhang
