Message-ID: <CACycT3uYbnrQDDbFmwdww8ukMU1t9RsAuutHsFT-UzK9_Mc=Kg@mail.gmail.com>
Date:   Tue, 28 Mar 2023 11:33:00 +0800
From:   Yongji Xie <xieyongji@...edance.com>
To:     Jason Wang <jasowang@...hat.com>
Cc:     "Michael S. Tsirkin" <mst@...hat.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Christoph Hellwig <hch@....de>,
        virtualization <virtualization@...ts.linux-foundation.org>,
        linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v4 03/11] virtio-vdpa: Support interrupt affinity
 spreading mechanism

On Tue, Mar 28, 2023 at 11:14 AM Jason Wang <jasowang@...hat.com> wrote:
>
> On Tue, Mar 28, 2023 at 11:03 AM Yongji Xie <xieyongji@...edance.com> wrote:
> >
> > On Fri, Mar 24, 2023 at 2:28 PM Jason Wang <jasowang@...hat.com> wrote:
> > >
> > > On Thu, Mar 23, 2023 at 1:31 PM Xie Yongji <xieyongji@...edance.com> wrote:
> > > >
> > > > To support the interrupt affinity spreading mechanism,
> > > > this patch makes use of group_cpus_evenly() to create
> > > > an irq callback affinity mask for each virtqueue of the
> > > > vdpa device. Then the unified set_vq_affinity callback is
> > > > used to pass the affinity to the vdpa device driver.
> > > >
> > > > Signed-off-by: Xie Yongji <xieyongji@...edance.com>
> > >
> > > Thinking hard about all the logic, I think I've found something interesting.
> > >
> > > Commit ad71473d9c437 ("virtio_blk: use virtio IRQ affinity") tries to
> > > pass irq_affinity to the transport-specific find_vqs().  This seems to
> > > be a layering violation, since the driver has no knowledge of
> > >
> > > 1) whether or not the callback is based on an IRQ
> > > 2) whether the device is PCI or not (the details are hidden by
> > > the transport driver)
> > > 3) how many vectors could be used by the device
> > >
> > > This means the driver can't actually pass a real affinity mask, so in
> > > fact the commit passes a zeroed irq_affinity structure as a hint, and
> > > the PCI layer builds a default affinity that groups CPUs evenly
> > > across the MSI-X vectors (the core logic being group_cpus_evenly()).
> > > I think we should fix this by replacing the irq_affinity structure
> > > with
> > >
> > > 1) a boolean like auto_cb_spreading
> > >
> > > or
> > >
> > > 2) a queue-to-cpu mapping
> > >
> >
> > But only the device driver knows which queues are used in the control
> > path and therefore don't need the automatic irq affinity assignment.
>
> Is this knowledge provided by the transport driver now?
>

This knowledge is provided by the device driver rather than the transport driver.

E.g. virtio-scsi uses:

    // vq0 is the control queue, vq1 is the event queue
    struct irq_affinity desc = { .pre_vectors = 2 };
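
To make that concrete, here is a minimal sketch (illustrative names,
not in-tree code) of how such a hint reaches the generic mask builder;
irq_create_affinity_masks() is the real kernel helper that calls
group_cpus_evenly() internally:

    #include <linux/interrupt.h>

    /*
     * Illustrative only: build one affinity mask per virtqueue vector.
     * The first desc.pre_vectors entries (vq0/vq1, the control and
     * event queues above) get the default all-CPU mask and are left
     * out of the spreading; the remaining vectors are spread evenly
     * over the online CPUs.  The caller must kfree() the result.
     */
    static struct irq_affinity_desc *example_spread(unsigned int nvqs)
    {
            struct irq_affinity desc = { .pre_vectors = 2 };

            return irq_create_affinity_masks(nvqs, &desc);
    }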

> E.g. virtio-blk uses:
>
>         struct irq_affinity desc = { 0, };
>
> At least we can tell the transport driver which vq requires automatic
> irq affinity.
>

I think that is what the current implementation does.

> > So I think the
> > irq_affinity structure can only be created by device drivers and
> > passed to the virtio-pci/virtio-vdpa driver.
>
> This may not be easy, since the driver doesn't even know how many
> interrupts will be used by the transport driver, so it can't build
> the actual affinity structure.
>

The actual affinity mask is built by the transport driver; the device
driver only passes a hint about which queues don't need the automatic
irq affinity assignment.
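
As a rough sketch of that division of labor on the virtio-vdpa side
(function and variable names are illustrative, not the exact patch
code; set_vq_affinity is the real vdpa config op):

    #include <linux/interrupt.h>
    #include <linux/slab.h>
    #include <linux/vdpa.h>

    /*
     * Illustrative only: the transport builds the per-vector masks
     * from the device driver's hint and forwards them through the
     * vdpa set_vq_affinity op.
     */
    static void example_set_affinities(struct vdpa_device *vdpa,
                                       unsigned int nvqs,
                                       struct irq_affinity *desc)
    {
            const struct vdpa_config_ops *ops = vdpa->config;
            struct irq_affinity_desc *masks;
            unsigned int i;

            masks = irq_create_affinity_masks(nvqs, desc);
            if (!masks)
                    return;

            if (ops->set_vq_affinity)
                    for (i = 0; i < nvqs; i++)
                            ops->set_vq_affinity(vdpa, i, &masks[i].mask);

            kfree(masks);
    }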

Thanks,
Yongji
