Date:   Mon, 17 Jun 2019 10:25:35 +0200 (CEST)
From:   Thomas Gleixner <tglx@...utronix.de>
To:     Ricardo Neri <ricardo.neri-calderon@...ux.intel.com>
cc:     Ingo Molnar <mingo@...nel.org>, Borislav Petkov <bp@...e.de>,
        Ashok Raj <ashok.raj@...el.com>,
        Joerg Roedel <joro@...tes.org>,
        Andi Kleen <andi.kleen@...el.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Suravee Suthikulpanit <Suravee.Suthikulpanit@....com>,
        Stephane Eranian <eranian@...gle.com>,
        "Ravi V. Shankar" <ravi.v.shankar@...el.com>,
        Randy Dunlap <rdunlap@...radead.org>, x86@...nel.org,
        linux-kernel@...r.kernel.org, iommu@...ts.linux-foundation.org,
        Ricardo Neri <ricardo.neri@...el.com>,
        Tony Luck <tony.luck@...el.com>,
        Jacob Pan <jacob.jun.pan@...el.com>,
        Juergen Gross <jgross@...e.com>,
        Bjorn Helgaas <bhelgaas@...gle.com>,
        Wincy Van <fanwenyi0529@...il.com>,
        Kate Stewart <kstewart@...uxfoundation.org>,
        Philippe Ombredanne <pombredanne@...b.com>,
        "Eric W. Biederman" <ebiederm@...ssion.com>,
        Baoquan He <bhe@...hat.com>,
        Jan Kiszka <jan.kiszka@...mens.com>,
        Lu Baolu <baolu.lu@...ux.intel.com>
Subject: Re: [RFC PATCH v4 20/21] iommu/vt-d: hpet: Reserve an interrupt
 remapping table entry for watchdog

On Sun, 16 Jun 2019, Thomas Gleixner wrote:
> On Thu, 23 May 2019, Ricardo Neri wrote:
> > When the hardlockup detector is enabled, the function
> > hld_hpet_intremap_activate_irq() activates the recently created entry
> > in the interrupt remapping table via the modify_irte() function. While
> > doing this, it specifies which CPU the interrupt must target via its APIC
> > ID. This function can be called every time the destination ID of the
> > interrupt needs to be updated; there is no need to allocate or remove
> > entries in the interrupt remapping table.
> 
> Brilliant.
> 
> > +int hld_hpet_intremap_activate_irq(struct hpet_hld_data *hdata)
> > +{
> > +	u32 destid = apic->calc_dest_apicid(hdata->handling_cpu);
> > +	struct intel_ir_data *data;
> > +
> > +	data = (struct intel_ir_data *)hdata->intremap_data;
> > +	data->irte_entry.dest_id = IRTE_DEST(destid);
> > +	return modify_irte(&data->irq_2_iommu, &data->irte_entry);
> 
> This calls modify_irte() which does at the very beginning:
> 
>    raw_spin_lock_irqsave(&irq_2_ir_lock, flags);
> 
> How is that supposed to work from NMI context? Not to mention the
> other spinlocks which are taken in the subsequent call chain.
> 
> You cannot call in any of that code from NMI context.
> 
> The only reason why this never deadlocked in your testing is that nothing
> else concurrently touched the particular IOMMU off which the HPET hangs.
> 
> But that's just pure luck and not design. 
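
To spell out the failure mode (an illustrative same-CPU trace, not
something from an actual splat):

  thread context on CPU X:
    modify_irte()
      raw_spin_lock_irqsave(&irq_2_ir_lock, flags);   <- lock held, IRQs off
      ...
  <HPET NMI fires on CPU X>                           <- NMIs are not masked
                                                         by irqsave
    hld_hpet_intremap_activate_irq()
      modify_irte()
        raw_spin_lock_irqsave(&irq_2_ir_lock, ...);   <- spins forever on the
                                                         lock held by the
                                                         context it interrupted

The NMI handler never returns and the lock holder never runs again.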

And just for the record. I warned you about that problem during the review
of an earlier version and told you to talk to IOMMU folks whether there is
a way to update the entry w/o running into that lock problem.
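
To make that concrete: modify_irte() already updates posted-format
entries with cmpxchg_double() instead of rewriting the IRTE under the
lock, so a lock-free update of the destination is not unthinkable.
Uncompiled sketch only; whether this is actually sufficient (the
qi_flush_iec() invalidation takes its own locks, and there are hardware
ordering questions) is exactly what the IOMMU folks need to answer:

	/*
	 * Sketch: atomically replace the 128-bit IRTE without taking
	 * irq_2_ir_lock, modeled on the posted-interrupt path in
	 * modify_irte(). The IR cache invalidation is deliberately
	 * left out.
	 */
	static bool irte_update_lockless(struct irte *irte,
					 struct irte *irte_new)
	{
		return cmpxchg_double(&irte->low, &irte->high,
				      irte->low, irte->high,
				      irte_new->low, irte_new->high);
	}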

Can you tell me why I am actually reviewing patches and spending time on
this when the result is ignored anyway?

I also tried to figure out why you moved away from the IPI broadcast
design. The only information I found is:

Changes vs. v1:

 * Brought back the round-robin mechanism proposed in v1 (this time not
   using the interrupt subsystem). This also requires computing
   expiration times as in v1 (Andi Kleen, Stephane Eranian).

Great that there is no trace of any mail from Andi or Stephane about this
on LKML. There is no problem with talking offlist about this stuff, but
then you should at least provide a rationale for those who were not part of
the private conversation.

Thanks,

	tglx
