Date:   Tue, 14 Feb 2017 23:05:29 +0200
From:   "Michael S. Tsirkin" <mst@...hat.com>
To:     Benjamin Serebrin <serebrin@...gle.com>
Cc:     Christian Borntraeger <borntraeger@...ibm.com>,
        netdev@...r.kernel.org, Jason Wang <jasowang@...hat.com>,
        David Miller <davem@...emloft.net>,
        Willem de Bruijn <willemb@...gle.com>,
        Venkatesh Srinivas <venkateshs@...gle.com>,
        "Jon Olson (Google Drive)" <jonolson@...gle.com>,
        willemdebruijn.kernel@...il.com, Rick Jones <rick.jones2@....com>,
        James Mattson <jmattson@...gle.com>,
        linux-s390 <linux-s390@...r.kernel.org>,
        "linux-arch@...r.kernel.org" <linux-arch@...r.kernel.org>
Subject: Re: [PATCH v2 net-next] virtio: Fix affinity for #VCPUs != #queue
 pairs

On Tue, Feb 14, 2017 at 11:17:41AM -0800, Benjamin Serebrin wrote:
> On Wed, Feb 8, 2017 at 11:37 AM, Michael S. Tsirkin <mst@...hat.com> wrote:
> 
> > IIRC irqbalance will bail out and avoid touching affinity
> > if you set affinity from driver.  Breaking that's not nice.
> > Pls correct me if I'm wrong.
> 
> 
> I believe you're right that irqbalance will leave the affinity alone.
> 
> Irqbalance has had changes that may or may not be in the versions bundled with
> various guests, and I don't have a definitive cross-correlation of irqbalance
> version to guest version.  But in the existing code, the driver does
> set affinity for #VCPUs==#queues, so that's been happening anyway.

Right - the idea is to load all CPUs equally so we don't
need help from irqbalance - hopefully packets will be spread
across queues in a balanced way.

When we have fewer queues the load isn't balanced, so we
definitely need something fancier that takes the overall
system load into account.
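The balanced case described above (each queue pinned to its own CPU when #queues == #VCPUs) can be sketched as a small model. This is an illustrative sketch of the mask math only, not the actual virtio_net driver code:

```python
def per_queue_cpu_mask(num_cpus: int, num_queues: int) -> list[int]:
    """Model of the existing behavior when #queues == #CPUs:
    queue q's IRQ affinity / XPS mask is a single bit for CPU q,
    so interrupt load is spread evenly across all CPUs."""
    assert num_cpus == num_queues, "model covers only the balanced case"
    return [1 << q for q in range(num_queues)]
```

With 4 VCPUs and 4 queue pairs this yields the masks 0x1, 0x2, 0x4, 0x8, one CPU per queue.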


> The (original) intention of this patch was to extend the existing behavior
> to the case where we limit queue counts, to avoid the surprising discontinuity
> when #VCPU != #queues.
> 
> It's not obvious that it's wrong to cause irqbalance to leave these
> queues alone:  Generally you want the interrupt to come to the core that
> caused the work, to have cache locality and avoid lock contention.

But why the first N cpus? That's more or less the same as assigning them
at random. It might benefit your workload but it's not clear it isn't
breaking someone else's. You are right it's not obvious it's doing the
wrong thing but it's also not obvious it's doing the right one.

> Doing fancier things is outside the scope of this patch.
>
> > Doesn't look like this will handle the case of num cpus < num queues well.
> 
> I believe it's correct.  The first #VCPUs queues will have one bit set in their
> xps mask, and the remaining queues have no bits set.  That means each VCPU uses
> its own assigned TX queue (and the TX interrupt comes back to that VCPU).
> 
> Thanks again for the review!
> Ben
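The mask layout Ben describes for num CPUs < num queues can be modeled as follows (again an illustrative sketch of the described assignment, not the driver code): the first #VCPUs queues each get a one-bit XPS mask, and the remaining queues get an empty mask.

```python
def xps_masks(num_cpus: int, num_queues: int) -> list[int]:
    """Model of the described behavior for num_cpus < num_queues:
    the first num_cpus queues get one bit set (queue q -> CPU q),
    the remaining queues get no bits set.  Each VCPU then transmits
    on its own assigned queue, and the TX interrupt returns to it."""
    return [(1 << q) if q < num_cpus else 0 for q in range(num_queues)]
```

For example, with 2 VCPUs and 4 queues, queues 0 and 1 map to CPUs 0 and 1, while queues 2 and 3 carry empty masks and so are never selected by XPS.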
