Date:   Thu, 6 Aug 2020 15:27:54 -0700
From:   "Dey, Megha" <megha.dey@...el.com>
To:     Thomas Gleixner <tglx@...utronix.de>,
        Jason Gunthorpe <jgg@...lanox.com>
CC:     Marc Zyngier <maz@...nel.org>,
        "Jiang, Dave" <dave.jiang@...el.com>,
        "vkoul@...nel.org" <vkoul@...nel.org>,
        "bhelgaas@...gle.com" <bhelgaas@...gle.com>,
        "rafael@...nel.org" <rafael@...nel.org>,
        "gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
        "hpa@...or.com" <hpa@...or.com>,
        "alex.williamson@...hat.com" <alex.williamson@...hat.com>,
        "Pan, Jacob jun" <jacob.jun.pan@...el.com>,
        "Raj, Ashok" <ashok.raj@...el.com>,
        "Liu, Yi L" <yi.l.liu@...el.com>, "Lu, Baolu" <baolu.lu@...el.com>,
        "Tian, Kevin" <kevin.tian@...el.com>,
        "Kumar, Sanjay K" <sanjay.k.kumar@...el.com>,
        "Luck, Tony" <tony.luck@...el.com>,
        "Lin, Jing" <jing.lin@...el.com>,
        "Williams, Dan J" <dan.j.williams@...el.com>,
        "kwankhede@...dia.com" <kwankhede@...dia.com>,
        "eric.auger@...hat.com" <eric.auger@...hat.com>,
        "parav@...lanox.com" <parav@...lanox.com>,
        "Hansen, Dave" <dave.hansen@...el.com>,
        "netanelg@...lanox.com" <netanelg@...lanox.com>,
        "shahafs@...lanox.com" <shahafs@...lanox.com>,
        "yan.y.zhao@...ux.intel.com" <yan.y.zhao@...ux.intel.com>,
        "pbonzini@...hat.com" <pbonzini@...hat.com>,
        "Ortiz, Samuel" <samuel.ortiz@...el.com>,
        "Hossain, Mona" <mona.hossain@...el.com>,
        "dmaengine@...r.kernel.org" <dmaengine@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "x86@...nel.org" <x86@...nel.org>,
        "linux-pci@...r.kernel.org" <linux-pci@...r.kernel.org>,
        "kvm@...r.kernel.org" <kvm@...r.kernel.org>
Subject: Re: [PATCH RFC v2 02/18] irq/dev-msi: Add support for a new DEV_MSI
 irq domain

Hi Thomas,

On 8/6/2020 1:21 PM, Thomas Gleixner wrote:
> Megha,
>
> "Dey, Megha" <megha.dey@...el.com> writes:
>> On 8/6/2020 10:10 AM, Thomas Gleixner wrote:
>>> If the DEV/MSI domain has its own per-IR-unit resource management, then
>>> you need one per IR unit.
>>>
>>> If the resource management is solely per device, then having a domain per
>>> device is the right choice.
>> The dev-msi domain can be used by other devices if they too want to
>> follow the vector -> Intel IR -> dev-msi IRQ hierarchy. I do create
>> one dev-msi IRQ domain instance per IR unit. So I guess for this case,
>> it makes most sense to have a dev-msi IRQ domain per IR unit as
>> opposed to creating one per individual driver.
> I'm not really convinced. I looked at the idxd driver and that has its
> own interrupt-related resource management for the IMS slots and provides
> the mask/unmask callbacks for the interrupt chip via this crude platform
> data indirection.
>
> So I don't see the value of a dev-msi domain per IR unit. The domain
> itself does not provide much functionality other than indirections, and
> you clearly need per-device interrupt resource management on the side
> and a customized irq chip. I rather see it as a plain layering
> violation.
>
> The point is that your IDXD driver manages the per-device IMS slots,
> which is an interrupt-related resource. The story would be different if
> the IMS slots were managed by some central or per-IR-unit entity,
> but in that case you'd need IMS-specific domain(s).
>
> So the obvious consequence of the hierarchical irq design is:
>
>     vector -> IR -> IDXD
>
> which makes the control flow of allocating an interrupt for a subdevice
> straightforward, following the irq hierarchy rules.
>
> This still wants to inherit the existing msi domain functionality, but
> the amount of code required is small, and it removes all these pointless
> indirections and integrates the slot management naturally.
>
> If you expect or know that there are other devices coming up with IMS
> integrated, then most of that code can be made a common library. But for
> this to make sense, you really want to make sure that these other
> devices do not require yet another horrible layer of indirection.
Yes Thomas, for now this may look odd since there is only one device
using this IRQ domain. But there will be other devices following suit,
hence I have added all the IRQ chip/domain bits in a separate file in
drivers/irqchip in the next version of the patches. I'll submit the
patches shortly, and it would be great if I could get more feedback on
them.
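
To give a rough idea of the direction (the idxd_ims_* names below are
placeholders, not the actual patch), the device-level domain would be
stacked directly on the IR parent along these lines, inheriting the
generic MSI domain code while the IMS slot management stays in the
driver. The mask/unmask callbacks are sketched further below:

/*
 * Rough sketch only -- placeholder names, not the posted patch.
 * A per-device IMS irq domain is stacked on the interrupt-remapping
 * (IR) parent, so an allocation walks vector -> IR -> device,
 * following the irq hierarchy rules.
 */
#include <linux/device.h>
#include <linux/irq.h>
#include <linux/irqdomain.h>
#include <linux/msi.h>

/* Provided by the driver's IMS slot code (mask/unmask sketched below;
 * the write_msi_msg body, which programs addr/data into the slot, is
 * omitted here). */
static void idxd_ims_irq_mask(struct irq_data *data);
static void idxd_ims_irq_unmask(struct irq_data *data);
static void idxd_ims_write_msg(struct irq_data *data, struct msi_msg *msg);

static struct irq_chip idxd_ims_irq_chip = {
        .name                   = "idxd-ims",
        .irq_mask               = idxd_ims_irq_mask,
        .irq_unmask             = idxd_ims_irq_unmask,
        .irq_write_msi_msg      = idxd_ims_write_msg,
        .irq_ack                = irq_chip_ack_parent,
};

static struct msi_domain_info idxd_ims_domain_info = {
        /* Default dom/chip ops fill in the generic MSI behavior. */
        .flags  = MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS,
        .chip   = &idxd_ims_irq_chip,
};

static struct irq_domain *
idxd_create_ims_domain(struct device *dev, struct irq_domain *ir_parent)
{
        /* Inherit the existing msi domain functionality; IMS slot
         * management stays in the driver that owns the slots. */
        return msi_create_irq_domain(dev->fwnode, &idxd_ims_domain_info,
                                     ir_parent);
}
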
> A side note: I just read back on the specification and stumbled over
> the following gem:
>
>   "IMS may also optionally support per-message masking and pending bit
>    status, similar to the per-vector mask and pending bit array in the
>    PCI Express MSI-X capability."
>
> Optionally? Please tell the hardware folks to make this mandatory. We
> have enough pain with non-maskable MSI interrupts already, so introducing
> yet another non-maskable interrupt trainwreck is not an option.
>
> It's been more than a decade now that I've been telling HW people not to
> repeat the non-maskable MSI failure, but obviously they still think that
> non-maskable interrupts are a brilliant idea. I know that HW folks
> believe that everything they omit can be fixed in software, but they
> have to finally understand that this particular issue _cannot_ be fixed
> at all.
Hmm, I asked the hardware folks, and they have informed me that all IMS
devices will support per-vector masking and pending bits. This will be
reflected in the next SIOV spec, which will be published soon.
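
For illustration only, here is what the per-vector mask/unmask callbacks
referenced above could look like; the slot layout (a 16-byte slot with a
vector-control word whose bit 0 is the mask, mirroring MSI-X vector
control) is an assumption until the updated spec is published:

/*
 * Illustration only: the slot layout below is assumed, not taken from
 * the published spec.
 */
#include <linux/bits.h>
#include <linux/io.h>
#include <linux/irq.h>

#define IMS_SLOT_SIZE           16
#define IMS_SLOT_CTRL           12      /* offset of the vector-control word */
#define IMS_CTRL_VECTOR_MASK    BIT(0)

struct idxd_ims_slot_ctx {
        void __iomem    *slots; /* base of the device's IMS slot array */
        unsigned int    idx;    /* slot backing this interrupt */
};

static void __iomem *idxd_ims_ctrl(struct irq_data *data)
{
        struct idxd_ims_slot_ctx *ctx = irq_data_get_irq_chip_data(data);

        return ctx->slots + ctx->idx * IMS_SLOT_SIZE + IMS_SLOT_CTRL;
}

static void idxd_ims_irq_mask(struct irq_data *data)
{
        void __iomem *ctrl = idxd_ims_ctrl(data);

        writel(readl(ctrl) | IMS_CTRL_VECTOR_MASK, ctrl);
        readl(ctrl);    /* flush the posted write so the mask takes effect */
}

static void idxd_ims_irq_unmask(struct irq_data *data)
{
        void __iomem *ctrl = idxd_ims_ctrl(data);

        writel(readl(ctrl) & ~IMS_CTRL_VECTOR_MASK, ctrl);
}
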
>
> Thanks,
>
>          tglx
