Message-ID: <86f8f667-bda6-59c4-91b7-6ba2ef55e3db@intel.com>
Date:   Mon, 26 Oct 2020 12:21:45 -0700
From:   Jacob Keller <jacob.e.keller@...el.com>
To:     Thomas Gleixner <tglx@...utronix.de>,
        Marcelo Tosatti <mtosatti@...hat.com>
Cc:     Nitesh Narayan Lal <nitesh@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>, helgaas@...nel.org,
        linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
        linux-pci@...r.kernel.org, intel-wired-lan@...ts.osuosl.org,
        frederic@...nel.org, sassmann@...hat.com,
        jesse.brandeburg@...el.com, lihong.yang@...el.com,
        jeffrey.t.kirsher@...el.com, jlelli@...hat.com, hch@...radead.org,
        bhelgaas@...gle.com, mike.marciniszyn@...el.com,
        dennis.dalessandro@...el.com, thomas.lendacky@....com,
        jiri@...dia.com, mingo@...hat.com, juri.lelli@...hat.com,
        vincent.guittot@...aro.org, lgoncalv@...hat.com
Subject: Re: [PATCH v4 4/4] PCI: Limit pci_alloc_irq_vectors() to housekeeping
 CPUs



On 10/26/2020 12:00 PM, Thomas Gleixner wrote:
> On Mon, Oct 26 2020 at 14:30, Marcelo Tosatti wrote:
>> On Fri, Oct 23, 2020 at 11:00:52PM +0200, Thomas Gleixner wrote:
>>> So without information from the driver about the best number of
>>> interrupts for a reduced number of CPUs, this cutoff will cause more
>>> problems than it solves. Regressions guaranteed.
>>
>> One might want to move from one interrupt per isolated app core
>> to zero, or vice versa. It seems that the "best number of interrupts
>> with a reduced number of CPUs" information therefore lives in
>> userspace, not in the driver...
> 
> How does userspace know about the driver internals? Number of management
> interrupts, optimal number of interrupts per queue?
> 

I guess this is the problem partly solved by the queue management work
that would make queues something userspace is aware of.
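
For context, the driver-internal information in question is exactly what
drivers already hand to the IRQ core through struct irq_affinity when
allocating vectors. A minimal sketch, assuming a hypothetical device with
one management vector plus up to eight queue vectors (foo_setup_irqs and
the vector counts are made up, not from any real driver):

#include <linux/interrupt.h>
#include <linux/pci.h>

/* pre_vectors marks vectors that are *not* spread across CPUs (the
 * admin/mgmt interrupt here); the remaining vectors get the managed
 * affinity spreading. */
static int foo_setup_irqs(struct pci_dev *pdev)
{
	struct irq_affinity affd = {
		.pre_vectors = 1,	/* admin/management interrupt */
	};

	/* min 2 = admin + one queue, max 9 = admin + eight queues */
	return pci_alloc_irq_vectors_affinity(pdev, 2, 9,
			PCI_IRQ_MSIX | PCI_IRQ_AFFINITY, &affd);
}

None of this is visible to userspace today, which I think is Thomas's
point.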

Are there drivers which use more than one interrupt per queue? I know
drivers have multiple management interrupts, and I guess some drivers
combine a Tx/Rx pair on one interrupt. It's also plausible to have
multiple queues for one interrupt. I'm not sure how a single queue with
multiple interrupts would work, though.
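
The separate-Tx/Rx case at least is expressible today via interrupt
sets: the driver describes groups of vectors and the core spreads each
set across CPUs independently; a combined Tx/Rx pair per vector is
purely driver convention and is just one set. A rough sketch with
invented numbers (foo_calc_sets and both functions are hypothetical):

#include <linux/interrupt.h>
#include <linux/pci.h>

/* Split the available queue vectors between a Tx set and an Rx set.
 * nvecs here already excludes the pre_vectors reserved below. */
static void foo_calc_sets(struct irq_affinity *affd, unsigned int nvecs)
{
	affd->nr_sets = 2;
	affd->set_size[0] = nvecs / 2;		/* Tx */
	affd->set_size[1] = nvecs - nvecs / 2;	/* Rx */
}

static int foo_setup_queue_irqs(struct pci_dev *pdev)
{
	struct irq_affinity affd = {
		.pre_vectors = 1,	/* management interrupt */
		.calc_sets = foo_calc_sets,
	};

	return pci_alloc_irq_vectors_affinity(pdev, 3, 17,
			PCI_IRQ_MSIX | PCI_IRQ_AFFINITY, &affd);
}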

>>> Managed interrupts base their interrupt allocation and spreading on
>>> information which is handed in by the individual driver and not on crude
>>> assumptions. They are not imposing restrictions on the use case.
>>>
>>> It's perfectly fine for isolated work to save a data set to disk after
>>> computation has finished and that just works with the per-cpu I/O queue
>>> which is otherwise completely silent. 
>>
>> Userspace could change the mask of only those interrupts which are not
>> triggered by requests from the local CPU (admin, error, mgmt, etc.),
>> to avoid the vector exhaustion problem.
>>
>> However, there is no explicit way for userspace to know which
>> interrupts those are, as far as I know.
>>
>>  130:      34845          0          0          0          0          0          0          0  IR-PCI-MSI 33554433-edge      nvme0q1
>>  131:          0      27062          0          0          0          0          0          0  IR-PCI-MSI 33554434-edge      nvme0q2
>>  132:          0          0      24393          0          0          0          0          0  IR-PCI-MSI 33554435-edge      nvme0q3
>>  133:          0          0          0      24313          0          0          0          0  IR-PCI-MSI 33554436-edge      nvme0q4
>>  134:          0          0          0          0      20608          0          0          0  IR-PCI-MSI 33554437-edge      nvme0q5
>>  135:          0          0          0          0          0      22163          0          0  IR-PCI-MSI 33554438-edge      nvme0q6
>>  136:          0          0          0          0          0          0      23020          0  IR-PCI-MSI 33554439-edge      nvme0q7
>>  137:          0          0          0          0          0          0          0      24285  IR-PCI-MSI 33554440-edge      nvme0q8
>>
>> Can that be retrieved from PCI-MSI information, or do drivers
>> have to provide it?
> 
> The driver should use a different name for the admin queues.
> 
> Thanks,
> 
>         tglx
> 
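
For what it's worth, the naming suggestion is something a driver can do
today via pci_request_irq(), which takes a printf-style name for the
/proc/interrupts entry. A sketch, again assuming a hypothetical driver
(struct foo_ctx and the foo_* handlers are made up):

#include <linux/interrupt.h>
#include <linux/pci.h>

struct foo_ctx { int instance; };	/* hypothetical driver context */

static irqreturn_t foo_admin_irq(int irq, void *data) { return IRQ_HANDLED; }
static irqreturn_t foo_queue_irq(int irq, void *data) { return IRQ_HANDLED; }

static int foo_request_irqs(struct pci_dev *pdev, struct foo_ctx *ctx)
{
	int ret;

	/* vector 0: admin queue, with a name userspace can tell apart
	 * from the I/O queues in /proc/interrupts */
	ret = pci_request_irq(pdev, 0, foo_admin_irq, NULL, ctx,
			      "foo%d-admin", ctx->instance);
	if (ret)
		return ret;

	/* vector 1: first I/O queue */
	return pci_request_irq(pdev, 1, foo_queue_irq, NULL, ctx,
			      "foo%d-q1", ctx->instance);
}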

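On Marcelo's earlier point: the generic knob for moving the non-managed
vectors is /proc/irq/<n>/smp_affinity_list, and the kernel rejects the
write for managed interrupts, so the error is at least an implicit way
to tell the two kinds apart. A userspace sketch (the IRQ number and the
CPU are made up):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Try to pin IRQ 130 (made-up number) to housekeeping CPU 0. For a
 * kernel-managed interrupt this write is rejected (EIO on current
 * kernels), so only admin/error/mgmt style vectors can be moved. */
int main(void)
{
	const char *cpus = "0";
	int fd = open("/proc/irq/130/smp_affinity_list", O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (write(fd, cpus, strlen(cpus)) < 0)
		perror("write");	/* EIO for managed interrupts */
	close(fd);
	return 0;
}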