Message-ID: <CAN+hb0V5hcMkWwQhk=5G4B+vWBARyEq+g-=XVaygH5s-s7mPZw@mail.gmail.com>
Date:   Wed, 15 Feb 2017 10:27:37 -0800
From:   Benjamin Serebrin <serebrin@...gle.com>
To:     "Michael S. Tsirkin" <mst@...hat.com>
Cc:     Willem de Bruijn <willemdebruijn.kernel@...il.com>,
        Christian Borntraeger <borntraeger@...ibm.com>,
        Network Development <netdev@...r.kernel.org>,
        Jason Wang <jasowang@...hat.com>,
        David Miller <davem@...emloft.net>,
        Willem de Bruijn <willemb@...gle.com>,
        Venkatesh Srinivas <venkateshs@...gle.com>,
        "Jon Olson (Google Drive)" <jonolson@...gle.com>,
        Rick Jones <rick.jones2@....com>,
        James Mattson <jmattson@...gle.com>,
        linux-s390 <linux-s390@...r.kernel.org>,
        "linux-arch@...r.kernel.org" <linux-arch@...r.kernel.org>
Subject: Re: [PATCH v2 net-next] virtio: Fix affinity for #VCPUs != #queue pairs

On Wed, Feb 15, 2017 at 9:42 AM, Michael S. Tsirkin <mst@...hat.com> wrote:
>
>
> > For pure network load, assigning each txqueue IRQ exclusively
> > to one of the cores that generates traffic on that queue is the
> > optimal layout in terms of load spreading. Irqbalance does
> > not have the XPS information to make this optimal decision.
>
> Try to add hints for it?
>
>
> > Overall system load affects this calculation both in the case of a 1:1
> > mapping and in the case of uneven queue distribution. In both cases, irqbalance
> > is hopefully smart enough to migrate other non-pinned IRQs to
> > cpus with lower overall load.
>
> Not if everyone starts inserting hacks like this one in code.


It seems to me that the default behavior is equally "random" - why would we want
XPS striped across the cores the way it's done today?
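
(To make the comparison concrete, the striping I mean is roughly the shape
below - a paraphrase from memory of the current 1:1 assignment, not the
exact virtio_net code; struct virtnet_info and its rq/sq arrays are the
driver's private state and are shown only for shape.)

/*
 * Rough sketch of today's 1:1 striping: queue pair i is pinned to online
 * CPU i, and the XPS map mirrors that assignment.
 */
#include <linux/cpumask.h>
#include <linux/netdevice.h>       /* netif_set_xps_queue() */
#include <linux/virtio_config.h>   /* virtqueue_set_affinity() */

static void sketch_set_affinity_1to1(struct virtnet_info *vi)
{
	int cpu, i = 0;

	for_each_online_cpu(cpu) {
		if (i >= vi->curr_queue_pairs)
			break;
		virtqueue_set_affinity(vi->rq[i].vq, cpu);        /* rx irq -> cpu i */
		virtqueue_set_affinity(vi->sq[i].vq, cpu);        /* tx irq -> cpu i */
		netif_set_xps_queue(vi->dev, cpumask_of(cpu), i); /* xmit on cpu -> txq i */
		i++;
	}
}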

What we're trying to do here is avoid the surprise cliff that guests will
hit when queue count is limited to less than VCPU count.  That will happen
because we limit queue pair count to 32.  I'll happily push further
complexity to user mode.
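
(Just to illustrate the shape of the fallback I have in mind when the
counts don't match - a simplified sketch of the wrap-around idea, not the
patch text itself.)

/*
 * Simplified sketch: when #VCPUs > #queue pairs, fold CPU n onto queue
 * n % nqueues so every CPU still has an affinity/XPS entry instead of
 * falling off the cliff.
 */
static unsigned int sketch_cpu_to_queue(unsigned int cpu,
					unsigned int num_queue_pairs)
{
	/* e.g. 64 VCPUs, 32 pairs: CPUs 0 and 32 both land on queue 0 */
	return cpu % num_queue_pairs;
}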

If this won't fly, we can leave all of this behavior in user code.
Michael, would you prefer that I abandon this patch?

> That's another problem with this patch. If you care about hyperthreads
> you want an API to probe for that.


It's something of a happy accident that hyperthreads line up that way.
Keeping the topology knowledge out of the patch and in user space seems
cleaner - would you agree?
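
(For example, a user-space tool could read the sibling layout straight out
of sysfs and build the IRQ/XPS maps from that, with no new kernel API.  A
minimal sketch, error handling mostly trimmed; treat the parsing as
illustrative only.)

/*
 * User-space sketch: print the hyperthread siblings of a CPU from the
 * topology files sysfs already exports.
 */
#include <stdio.h>

static void print_thread_siblings(int cpu)
{
	char path[128], line[256];
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/devices/system/cpu/cpu%d/topology/thread_siblings_list",
		 cpu);
	f = fopen(path, "r");
	if (!f)
		return;
	if (fgets(line, sizeof(line), f))
		printf("cpu%d siblings: %s", cpu, line);  /* e.g. "0,32" or "0-1" */
	fclose(f);
}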

Thanks!
Ben
