Message-ID: <CAF=yD-+XnL=TowYwdaww7hBkPQek4_cDMSiqo02UaJhKy5PrTA@mail.gmail.com>
Date:   Sun, 29 Jul 2018 17:32:56 -0400
From:   Willem de Bruijn <willemdebruijn.kernel@...il.com>
To:     David Miller <davem@...emloft.net>
Cc:     caleb.raitto@...il.com, "Michael S. Tsirkin" <mst@...hat.com>,
        Jason Wang <jasowang@...hat.com>,
        Network Development <netdev@...r.kernel.org>,
        Caleb Raitto <caraitto@...gle.com>
Subject: Re: [PATCH net-next] virtio_net: force_napi_tx module param.

On Sun, Jul 29, 2018 at 12:01 PM David Miller <davem@...emloft.net> wrote:
>
> From: Caleb Raitto <caleb.raitto@...il.com>
> Date: Mon, 23 Jul 2018 16:11:19 -0700
>
> > From: Caleb Raitto <caraitto@...gle.com>
> >
> > The driver disables tx napi if it's not certain that completions will
> > be processed affine with tx service.
> >
> > Its heuristic doesn't account for some scenarios where it is, such as
> > when the queue pair count matches the core but not hyperthread count.
> >
> > Allow userspace to override the heuristic. This is an alternative
> > solution to that in the linked patch. That added more logic in the
> > kernel for these cases, but the agreement was that this was better left
> > to user control.
> >
> > Do not expand the existing napi_tx variable to a ternary value,
> > because doing so can break user applications that expect
> > boolean ('Y'/'N') instead of integer output. Add a new param instead.
> >
> > Link: https://patchwork.ozlabs.org/patch/725249/
> > Acked-by: Willem de Bruijn <willemb@...gle.com>
> > Acked-by: Jon Olson <jonolson@...gle.com>
> > Signed-off-by: Caleb Raitto <caraitto@...gle.com>
>
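For reference, a minimal sketch of what such a knob could look like,
assuming a plain boolean next to the existing napi_tx parameter; the
permission bits and description string below are illustrative, not
taken from the patch itself:

static bool force_napi_tx;
module_param(force_napi_tx, bool, 0644);
MODULE_PARM_DESC(force_napi_tx,
		 "Enable tx napi even when the affinity heuristic would not");
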
> So I looked into the history surrounding these issues.
>
> First of all, it always ends up turning out crummy when drivers start
> to set affinities themselves.  The worst possible case is to do it
> _conditionally_, and that is exactly what virtio_net is doing.
>
> From the user's perspective, this provides a really bad experience.
>
> So if I have a 32-queue device and there are 32 cpus, you'll do all
> the affinity settings, stopping irqbalance from doing anything,
> right?
>
> So if I add one more cpu, you'll say "oops, no idea what to do in
> this situation" and not touch the affinities at all?
>
> That makes no sense at all.
>
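For context, the conditional behavior being criticized is roughly the
following, a simplification of virtnet_set_affinity() rather than the
exact code: per-cpu affinity hints are only set when the queue pair
count matches the online cpu count; otherwise the driver does nothing
and leaves everything to irqbalance.

static void virtnet_set_affinity_sketch(struct virtnet_info *vi)
{
	int i = 0, cpu;

	/* Heuristic gives up unless nqueues exactly matches ncpus. */
	if (vi->curr_queue_pairs == 1 ||
	    vi->max_queue_pairs != num_online_cpus())
		return;

	/* Otherwise pin queue pair i to the i-th online cpu. */
	for_each_online_cpu(cpu) {
		virtqueue_set_affinity(vi->rq[i].vq, cpu);
		virtqueue_set_affinity(vi->sq[i].vq, cpu);
		i++;
	}
}
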
> If the driver is going to set affinities at all, OWN that decision
> and set it all the time to something reasonable.
>
> Or accept that you shouldn't be touching this stuff in the first place
> and leave the affinities alone.
>
> Right now we're kinda in a situation where the driver has been setting
> affinities in the ncpus==nqueues cases for some time, so we can't stop
> doing it.
>
> Which means we have to set them in all cases to make the user
> experience sane again.
>
> I looked at the linked-to patch again:
>
>         https://patchwork.ozlabs.org/patch/725249/
>
> And I think the strategy should be made more generic, to get rid of
> the hyperthreading assumptions.  I also agree that the "assign
> to first N cpus" logic doesn't make much sense either.
>
> Just distribute across the available cpus evenly, and be done with it.

Sounds good to me.

> If you have 64 cpus and 32 queues, this assigns queues to every other
> cpu.

With half as many queues as logical cores on a hyperthreaded system
(two logical cores per physical core), striping across every other cpu
can end up allocating all queues on only half of the physical cores.

But it is probably not safe to make any assumptions about the virtual
to physical core mapping anyway, which makes the simplest strategy
preferable.
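
For illustration, the even distribution could look something like the
sketch below: assign queue pair i to every stride-th online cpu and
wrap around if there are more queues than cpus. This is only a sketch
of the idea, not the actual patch.

static void virtnet_stripe_affinity_sketch(struct virtnet_info *vi)
{
	int stride = max_t(int, num_online_cpus() / vi->curr_queue_pairs, 1);
	int cpu = cpumask_first(cpu_online_mask);
	int i, j;

	for (i = 0; i < vi->curr_queue_pairs; i++) {
		/* Hint rx/tx interrupts for queue pair i toward this cpu. */
		virtqueue_set_affinity(vi->rq[i].vq, cpu);
		virtqueue_set_affinity(vi->sq[i].vq, cpu);

		/* Advance by stride, wrapping over the online cpus. */
		for (j = 0; j < stride; j++) {
			cpu = cpumask_next(cpu, cpu_online_mask);
			if (cpu >= nr_cpu_ids)
				cpu = cpumask_first(cpu_online_mask);
		}
	}
}

With 64 cpus and 32 queues this assigns every other cpu, as above; with
ncpus == nqueues it degenerates to one queue pair per cpu.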
