Message-ID: <20201102124159.hw6iry2wg4ibcggc@skbuf>
Date:   Mon, 2 Nov 2020 14:41:59 +0200
From:   Vladimir Oltean <olteanv@...il.com>
To:     Heiner Kallweit <hkallweit1@...il.com>
Cc:     Jakub Kicinski <kuba@...nel.org>,
        David Miller <davem@...emloft.net>,
        Realtek linux nic maintainers <nic_swsd@...ltek.com>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: [PATCH net-next] r8169: set IRQF_NO_THREAD if MSI(X) is enabled

On Mon, Nov 02, 2020 at 09:01:00AM +0100, Heiner Kallweit wrote:
> As mentioned by Eric, it doesn't make sense to turn the minimal hard irq
> handlers used with NAPI into threads. That contributes more to the
> problem than to the solution. The change here reflects this.

When you say that "it doesn't make sense", is there something that is
actually measurably worse when the hardirq handler gets force-threaded?
Rephrased, is it something that doesn't make sense in principle, or in
practice?

My understanding is that the hardirq handler is not where the bulk of
the NAPI processing is done anyway, so force-threading it should not
have a severe negative impact on performance in any case.
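
Just to make sure we are talking about the same code: with NAPI, the
hardirq handler of a typical PCI NIC driver boils down to a handful of
lines. A generic sketch (made-up foo_* names, not the actual r8169
code) would be:

	static irqreturn_t foo_interrupt(int irq, void *dev_id)
	{
		struct foo_priv *priv = dev_id;

		/* Mask the device interrupt and hand off to NAPI; the
		 * actual packet processing happens later in the poll
		 * callback, not here.
		 */
		foo_irq_disable(priv);
		napi_schedule(&priv->napi);

		return IRQ_HANDLED;
	}

With $SUBJECT that handler is simply never force-threaded, but either
way it does almost nothing, hence the question about measurements.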

On the other hand, moving as much code as possible out of interrupt
context (be it hardirq or softirq) is beneficial for some use cases,
because the scheduler has no control over that code's runtime unless it
runs in a thread.

> The actual discussion would be how to make the NAPI processing a
> thread (instead of a softirq).

I don't get it: you prefer the hardirq handler to consume CPU time that
is not accounted for by the scheduler, but for the NAPI poll you do
want the scheduler to account for it? Why one but not the other?

> To use napi_schedule_irqoff we most likely need something like
> if (pci_dev_msi_enabled(pdev))
> 	napi_schedule_irqoff(napi);
> else
> 	napi_schedule(napi);
> and I doubt that's worth it.

Yes, probably not, hence my question.
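
For reference, all that the _irqoff variant saves is a
local_irq_save()/local_irq_restore() pair around queueing the NAPI
instance. Roughly (paraphrasing net/core/dev.c from memory, not a
verbatim quote):

	void __napi_schedule(struct napi_struct *n)
	{
		unsigned long flags;

		local_irq_save(flags);
		____napi_schedule(this_cpu_ptr(&softnet_data), n);
		local_irq_restore(flags);
	}

	void __napi_schedule_irqoff(struct napi_struct *n)
	{
		/* caller guarantees hard irqs are already disabled */
		____napi_schedule(this_cpu_ptr(&softnet_data), n);
	}

Saving that per-interrupt save/restore hardly justifies adding a
pci_dev_msi_enabled() branch to the hot path.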
