Date:   Tue, 14 Feb 2017 11:17:41 -0800
From:   Benjamin Serebrin <serebrin@...gle.com>
To:     "Michael S. Tsirkin" <mst@...hat.com>
Cc:     Christian Borntraeger <borntraeger@...ibm.com>,
        netdev@...r.kernel.org, Jason Wang <jasowang@...hat.com>,
        David Miller <davem@...emloft.net>,
        Willem de Bruijn <willemb@...gle.com>,
        Venkatesh Srinivas <venkateshs@...gle.com>,
        "Jon Olson (Google Drive)" <jonolson@...gle.com>,
        willemdebruijn.kernel@...il.com, Rick Jones <rick.jones2@....com>,
        James Mattson <jmattson@...gle.com>,
        linux-s390 <linux-s390@...r.kernel.org>,
        "linux-arch@...r.kernel.org" <linux-arch@...r.kernel.org>
Subject: Re: [PATCH v2 net-next] virtio: Fix affinity for #VCPUs != #queue pairs

On Wed, Feb 8, 2017 at 11:37 AM, Michael S. Tsirkin <mst@...hat.com> wrote:

> IIRC irqbalance will bail out and avoid touching affinity
> if you set affinity from driver.  Breaking that's not nice.
> Pls correct me if I'm wrong.


I believe you're right that irqbalance will leave the affinity alone.

Irqbalance has seen changes that may or may not be in the versions bundled
with various guests, and I don't have a definitive mapping of irqbalance
versions to guest versions.  But in the existing code, the driver already
sets affinity when #VCPUs == #queues, so that's been happening anyway.
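
For context, a rough sketch of that existing behavior, loosely modeled on
the virtnet_set_affinity() logic in drivers/net/virtio_net.c (names, the
virtnet_info fields, and error handling are simplified here, so treat this
as illustration rather than the actual driver code):

#include <linux/cpumask.h>
#include <linux/netdevice.h>
#include <linux/virtio_config.h>

/* Illustrative only: when the queue pair count matches the number of
 * online CPUs, pin queue pair i's interrupts to CPU i and mirror that
 * in the XPS map; otherwise leave affinity untouched and let
 * irqbalance do its thing (the pre-patch behavior).
 */
static void example_set_affinity(struct virtnet_info *vi)
{
	int i;

	if (vi->curr_queue_pairs != num_online_cpus())
		return;

	for (i = 0; i < vi->curr_queue_pairs; i++) {
		virtqueue_set_affinity(vi->rq[i].vq, i);  /* RX interrupt */
		virtqueue_set_affinity(vi->sq[i].vq, i);  /* TX interrupt */
		netif_set_xps_queue(vi->dev, cpumask_of(i), i);
	}
}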

The (original) intention of this patch was to extend the existing behavior
to the case where we limit queue counts, to avoid the surprising discontinuity
when #VCPUs != #queues.

It's not obvious that it's wrong to have irqbalance leave these
queues alone: generally you want the interrupt to arrive on the core that
caused the work, for cache locality and to avoid lock contention.
Doing anything fancier is outside the scope of this patch.

> Doesn't look like this will handle the case of num cpus < num queues well.

I believe it's correct.  The first #VCPUs queues will each have one bit set
in their xps mask, and the remaining queues will have no bits set.  That
means each VCPU uses its own assigned TX queue (and the TX interrupt comes
back to that VCPU).
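
To make that concrete, here's a rough sketch of the masking (illustrative
names again, not the patch verbatim):

/* Illustrative only: the first num_online_cpus() queues each get a
 * single-CPU XPS mask, so VCPU i always transmits on queue i and the
 * TX completion interrupt lands back on VCPU i.  The remaining queues
 * get an empty mask, which in effect leaves them out of the XPS map,
 * so they are never chosen for TX.
 */
static void example_set_xps(struct virtnet_info *vi)
{
	int i;

	for (i = 0; i < vi->max_queue_pairs; i++) {
		const struct cpumask *mask =
			i < num_online_cpus() ? cpumask_of(i) : cpu_none_mask;

		netif_set_xps_queue(vi->dev, mask, i);
	}
}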

Thanks again for the review!
Ben
