Message-ID: <20180729231247-mutt-send-email-mst@kernel.org>
Date:   Sun, 29 Jul 2018 23:33:20 +0300
From:   "Michael S. Tsirkin" <mst@...hat.com>
To:     David Miller <davem@...emloft.net>
Cc:     caleb.raitto@...il.com, jasowang@...hat.com,
        netdev@...r.kernel.org, caraitto@...gle.com
Subject: Re: [PATCH net-next] virtio_net: force_napi_tx module param.

On Sun, Jul 29, 2018 at 09:00:27AM -0700, David Miller wrote:
> From: Caleb Raitto <caleb.raitto@...il.com>
> Date: Mon, 23 Jul 2018 16:11:19 -0700
> 
> > From: Caleb Raitto <caraitto@...gle.com>
> > 
> > The driver disables tx napi if it's not certain that completions will
> > be processed affine with tx service.
> > 
> > Its heuristic doesn't account for some scenarios where it is, such as
> > when the queue pair count matches the core but not hyperthread count.
> > 
> > Allow userspace to override the heuristic. This is an alternative
> > solution to that in the linked patch. That added more logic in the
> > kernel for these cases, but the agreement was that this was better left
> > to user control.
> > 
> > Do not expand the existing napi_tx variable to a ternary value,
> > because doing so can break user applications that expect
> > boolean ('Y'/'N') instead of integer output. Add a new param instead.
> > 
> > Link: https://patchwork.ozlabs.org/patch/725249/
> > Acked-by: Willem de Bruijn <willemb@...gle.com>
> > Acked-by: Jon Olson <jonolson@...gle.com>
> > Signed-off-by: Caleb Raitto <caraitto@...gle.com>
> 
> So I looked into the history surrounding these issues.
> 
> First of all, it always ends up turning out crummy when drivers start
> to set affinities themselves.  The worst possible case is to do it
> _conditionally_, and that is exactly what virtio_net is doing.
> 
> From the user's perspective, this provides a really bad experience.
> 
> So if I have a 32-queue device and there are 32 cpus, you'll do all
> the affinity settings, stopping irqbalance from doing anything
> right?
> 
> So if I add one more cpu, you'll say "oops, no idea what to do in
> this situation" and not touch the affinities at all?
> 
> That makes no sense at all.
> 
> If the driver is going to set affinities at all, OWN that decision
> and set it all the time to something reasonable.
> 
> Or accept that you shouldn't be touching this stuff in the first place
> and leave the affinities alone.
> 
> Right now we're kinda in a situation where the driver has been setting
> affinities in the ncpus==nqueues cases for some time, so we can't stop
> doing it.
> 
> Which means we have to set them in all cases to make the user
> experience sane again.
> 
> I looked at the linked to patch again:
> 
> 	https://patchwork.ozlabs.org/patch/725249/
> 
> And I think the strategy should be made more generic, to get rid of
> the hyperthreading assumptions.  I also agree that the "assign
> to first N cpus" logic doesn't make much sense either.
> 
> Just distribute across the available cpus evenly, and be done with it.
> If you have 64 cpus and 32 queues, this assigns queues to every other
> cpu.
> 
> Then we don't need this weird new module parameter.

Can't we set affinity to a set of CPUs?

The point really is that tx irq handler needs a lock on the tx queue to
free up skbs, so processing it on another CPU while tx is active causes
cache line bounces. So we want affinity to CPUs that submit to this
queue on the theory they have these cache line(s) anyway.

I suspect it's not a unique property of virtio.

-- 
MST
