Message-ID: <CAF=yD-KWixbQb2=1oGuxCye9BNy9pjN6Ozku3kcRbQ77c5teJg@mail.gmail.com>
Date: Sun, 29 Jul 2018 17:09:53 -0400
From: Willem de Bruijn <willemdebruijn.kernel@...il.com>
To: "Michael S. Tsirkin" <mst@...hat.com>
Cc: David Miller <davem@...emloft.net>, caleb.raitto@...il.com,
Jason Wang <jasowang@...hat.com>,
Network Development <netdev@...r.kernel.org>,
Caleb Raitto <caraitto@...gle.com>,
Benjamin Serebrin <serebrin@...gle.com>
Subject: Re: [PATCH net-next] virtio_net: force_napi_tx module param.
On Sun, Jul 29, 2018 at 4:48 PM Michael S. Tsirkin <mst@...hat.com> wrote:
>
> On Sun, Jul 29, 2018 at 09:00:27AM -0700, David Miller wrote:
> > From: Caleb Raitto <caleb.raitto@...il.com>
> > Date: Mon, 23 Jul 2018 16:11:19 -0700
> >
> > > From: Caleb Raitto <caraitto@...gle.com>
> > >
> > > The driver disables tx napi if it's not certain that completions will
> > > be processed affine with tx service.
> > >
> > > Its heuristic doesn't account for some scenarios where it is, such as
> > > when the queue pair count matches the core but not hyperthread count.
> > >
> > > Allow userspace to override the heuristic. This is an alternative
> > > solution to that in the linked patch. That added more logic in the
> > > kernel for these cases, but the agreement was that this was better left
> > > to user control.
> > >
> > > Do not expand the existing napi_tx variable to a ternary value,
> > > because doing so can break user applications that expect
> > > boolean ('Y'/'N') instead of integer output. Add a new param instead.
> > >
> > > Link: https://patchwork.ozlabs.org/patch/725249/
> > > Acked-by: Willem de Bruijn <willemb@...gle.com>
> > > Acked-by: Jon Olson <jonolson@...gle.com>
> > > Signed-off-by: Caleb Raitto <caraitto@...gle.com>
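
[Editor's note: for readers skimming the thread, a minimal sketch of what the
commit message describes, a separate boolean knob next to the existing napi_tx
parameter so the 'Y'/'N' output of napi_tx never changes, might look like
this. The parameter name is taken from the subject line, everything else is
assumed; this is not the actual patch.]

#include <linux/module.h>
#include <linux/moduleparam.h>

/* Existing knob: userspace may already parse 'Y'/'N' here, so its type
 * must stay bool rather than growing into a ternary integer.
 */
static bool napi_tx;
module_param(napi_tx, bool, 0644);

/* New, separate override that forces tx NAPI even when the affinity
 * heuristic would have disabled it (assumed shape, not the actual patch).
 */
static bool force_napi_tx;
module_param(force_napi_tx, bool, 0644);
MODULE_PARM_DESC(force_napi_tx, "Force tx NAPI regardless of the affinity heuristic");
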
> >
> > So I looked into the history surrounding these issues.
> >
> > First of all, it always ends up turning out crummy when drivers start
> > to set affinities themselves. The worst possible case is to do it
> > _conditionally_, and that is exactly what virtio_net is doing.
> >
> > From the user's perspective, this provides a really bad experience.
> >
> > So if I have a 32-queue device and there are 32 cpus, you'll do all
> > the affinity settings, stopping Irqbalanced from doing anything
> > right?
> >
> > So if I add one more cpu, you'll say "oops, no idea what to do in
> > this situation" and not touch the affinities at all?
> >
> > That makes no sense at all.
> >
> > If the driver is going to set affinities at all, OWN that decision
> > and set it all the time to something reasonable.
> >
> > Or accept that you shouldn't be touching this stuff in the first place
> > and leave the affinities alone.
> >
> > Right now we're kinda in a situation where the driver has been setting
> > affinities in the ncpus==nqueues cases for some time, so we can't stop
> > doing it.
> >
> > Which means we have to set them in all cases to make the user
> > experience sane again.
> >
> > I looked at the linked to patch again:
> >
> > https://patchwork.ozlabs.org/patch/725249/
> >
> > And I think the strategy should be made more generic, to get rid of
> > the hyperthreading assumptions. I also agree that the "assign
> > to first N cpus" logic doesn't make much sense either.
> >
> > Just distribute across the available cpus evenly, and be done with it.
> > If you have 64 cpus and 32 queues, this assigns queues to every other
> > cpu.
> >
> > Then we don't need this weird new module parameter.
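
[Editor's note: a rough sketch of the even spreading described above, walking
the online CPU mask with a stride of num_online_cpus() / nqueues so that 32
queues on 64 CPUs land on every other CPU. It assumes the single-CPU
virtqueue_set_affinity() signature and the usual virtnet_info layout;
illustrative only, not the actual driver code.]

/* Needs <linux/cpumask.h>. */
static void virtnet_spread_affinity(struct virtnet_info *vi)
{
        unsigned int cpu = cpumask_first(cpu_online_mask);
        unsigned int stride, i, j;

        stride = max_t(unsigned int, 1,
                       num_online_cpus() / vi->curr_queue_pairs);

        for (i = 0; i < vi->curr_queue_pairs; i++) {
                virtqueue_set_affinity(vi->rq[i].vq, cpu);
                virtqueue_set_affinity(vi->sq[i].vq, cpu);

                /* Advance 'stride' online CPUs, wrapping around. */
                for (j = 0; j < stride; j++) {
                        cpu = cpumask_next(cpu, cpu_online_mask);
                        if (cpu >= nr_cpu_ids)
                                cpu = cpumask_first(cpu_online_mask);
                }
        }
}
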
>
> Can't we set affinity to a set of CPUs?
>
> The point really is that the tx irq handler needs a lock on the tx queue to
> free up skbs, so processing it on another CPU while tx is active causes
> cache line bounces. So we want affinity to the CPUs that submit to this
> queue, on the theory that they have these cache line(s) anyway.
>
> I suspect it's not a unique property of virtio.
It is a good heuristic. But as Jon pointed out, there is a trade-off
with other costs, such as increased interrupt load with additional
tx queues. This is particularly stark when multiple hyperthreads
share a cache, but have independent tx queues.
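
To make the "affinity to a set of CPUs" idea concrete: one way to hint a
tx irq toward the submitting CPU and its hyperthread siblings (which
already share the queue's cache lines) is sketched below, using generic
irq/topology helpers. This is illustrative only, not virtio_net's actual
code, and the mask must stay allocated for as long as the hint is
installed (clear it with irq_set_affinity_hint(irq, NULL) before freeing).

#include <linux/interrupt.h>
#include <linux/topology.h>

static int hint_tx_irq_to_siblings(unsigned int irq, int submit_cpu,
                                   struct cpumask *mask)
{
        /* All hyperthread siblings of the submitting CPU. */
        cpumask_copy(mask, topology_sibling_cpumask(submit_cpu));

        /* Suggest (not force) that the tx completion irq run there. */
        return irq_set_affinity_hint(irq, mask);
}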