Message-ID: <CAF=yD-Ja-qpQ1BCzdxbg2ZAw4LR62i3wCL6gkspnGyLSohC8Yg@mail.gmail.com>
Date: Wed, 1 Aug 2018 11:56:14 -0400
From: Willem de Bruijn <willemdebruijn.kernel@...il.com>
To: "Michael S. Tsirkin" <mst@...hat.com>
Cc: David Miller <davem@...emloft.net>, caleb.raitto@...il.com,
Jason Wang <jasowang@...hat.com>,
Network Development <netdev@...r.kernel.org>,
Caleb Raitto <caraitto@...gle.com>
Subject: Re: [PATCH net-next] virtio_net: force_napi_tx module param.
> > > > Just distribute across the available cpus evenly, and be done with it.
> > >
> > > Sounds good to me.
> >
> > So e.g. we could set an affinity hint to a group of CPUs that
> > might transmit to this queue.
>
> We also want to set the xps mask for all cpus in the group to this queue.
>
> Is there a benefit over explicitly choosing one cpu from the set, btw?
> I assumed striping. Something along the lines of
>
> cpumask_var_t xps_mask;  /* assume allocated with alloc_cpumask_var() */
> int stripe = max_t(int, num_online_cpus() / vi->curr_queue_pairs, 1);
> int i = 0, vq = 0, cpu;
>
> cpumask_clear(xps_mask);
>
> for_each_online_cpu(cpu) {
>         cpumask_set_cpu(cpu, xps_mask);
>
>         if ((i + 1) % stripe == 0) {
>                 virtqueue_set_affinity(vi->rq[vq].vq, cpu);
>                 virtqueue_set_affinity(vi->sq[vq].vq, cpu);
>                 netif_set_xps_queue(vi->dev, xps_mask, vq);
>                 cpumask_clear(xps_mask);
>                 vq++;
>         }
>         i++;
> }
... but handling edge cases correctly, such as #cpu not being a perfect
multiple of #vq.
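
A rough sketch of one way to handle that remainder (untested, and the
helper name virtnet_set_affinity_striped is made up): compute a floor
stripe, then let the first #cpu % #vq queues each absorb one extra cpu,
so every online cpu lands in exactly one group.

        static void virtnet_set_affinity_striped(struct virtnet_info *vi)
        {
                cpumask_var_t xps_mask;
                int num_cpus = num_online_cpus();
                int num_queues = vi->curr_queue_pairs;
                int stripe = max_t(int, num_cpus / num_queues, 1);
                int stragglers = num_cpus >= num_queues ?
                                 num_cpus % num_queues : 0;
                int cpu = cpumask_first(cpu_online_mask);
                int vq, i, group_size, last_cpu = -1;

                if (!alloc_cpumask_var(&xps_mask, GFP_KERNEL))
                        return;

                for (vq = 0; vq < num_queues; vq++) {
                        /* the first 'stragglers' queues take one extra cpu */
                        group_size = stripe + (vq < stragglers ? 1 : 0);

                        for (i = 0; i < group_size && cpu < nr_cpu_ids; i++) {
                                cpumask_set_cpu(cpu, xps_mask);
                                last_cpu = cpu;
                                cpu = cpumask_next(cpu, cpu_online_mask);
                        }

                        /* fewer cpus than queues: later queues stay unmapped */
                        if (cpumask_empty(xps_mask))
                                break;

                        virtqueue_set_affinity(vi->rq[vq].vq, last_cpu);
                        virtqueue_set_affinity(vi->sq[vq].vq, last_cpu);
                        netif_set_xps_queue(vi->dev, xps_mask, vq);
                        cpumask_clear(xps_mask);
                }

                free_cpumask_var(xps_mask);
        }

With e.g. 5 cpus and 2 queue pairs this maps cpus 0-2 to queue 0 and
cpus 3-4 to queue 1; with fewer cpus than queues it degrades to one cpu
per queue until the cpus run out.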