Date: Fri, 26 Jan 2024 15:30:47 -0800
From: Jacob Pan <jacob.jun.pan@...ux.intel.com>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: LKML <linux-kernel@...r.kernel.org>, X86 Kernel <x86@...nel.org>,
 iommu@...ts.linux.dev, Lu Baolu <baolu.lu@...ux.intel.com>,
 kvm@...r.kernel.org, Dave Hansen <dave.hansen@...el.com>, Joerg Roedel
 <joro@...tes.org>, "H. Peter Anvin" <hpa@...or.com>, Borislav Petkov
 <bp@...en8.de>, Ingo Molnar <mingo@...hat.com>, Raj Ashok
 <ashok.raj@...el.com>, "Tian, Kevin" <kevin.tian@...el.com>,
 maz@...nel.org, peterz@...radead.org, seanjc@...gle.com, Robin Murphy
 <robin.murphy@....com>, jacob.jun.pan@...ux.intel.com
Subject: Re: [PATCH RFC 12/13] iommu/vt-d: Add a helper to retrieve PID
 address

Hi Thomas,

On Wed, 06 Dec 2023 21:19:11 +0100, Thomas Gleixner <tglx@...utronix.de>
wrote:

> On Sat, Nov 11 2023 at 20:16, Jacob Pan wrote:
> > From: Thomas Gleixner <tglx@...utronix.de>
> >
> > When programming IRTE for posted mode, we need to retrieve the
> > physical  
> 
> we need .... I surely did not write this changelog.
> 
Will delete this.

> > address of the posted interrupt descriptor (PID) that belongs to its
> > target CPU.
> >
> > This per CPU PID has already been set up during cpu_init().  
> 
> This information is useful because?
Ditto.

> > +static u64 get_pi_desc_addr(struct irq_data *irqd)
> > +{
> > +	int cpu = cpumask_first(irq_data_get_effective_affinity_mask(irqd));
> 
> The effective affinity mask is magically correct when this is called?
> 
My understanding is that remappable device MSIs have the following domain
hierarchy, e.g.:

parent:                              
    domain:  INTEL-IR-5-13            
     hwirq:   0x20000                 
     chip:    INTEL-IR-POST           
      flags:   0x0                    
     parent:                          
        domain:  VECTOR            
         hwirq:   0x3c             
         chip:    APIC         

When IRQs are allocated and activated, the parent domain's ops are always
called first. The effective affinity mask is set up by the parent domain,
i.e. VECTOR. Example call stack for alloc (innermost frame first):
	irq_data_update_effective_affinity
	apic_update_irq_cfg
	x86_vector_alloc_irqs
	intel_irq_remapping_alloc
	msi_domain_alloc
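
For reference, the mask written at the VECTOR level is visible at the IR
level because all irq_data instances in one hierarchy share the same
irq_common_data, which is where the effective affinity mask is stored. A
minimal sketch of what the IR level ends up reading (illustration only,
pi_irte_target_cpu() is not a function in this series):

/*
 * Illustration only: the effective affinity lives in the shared
 * irq_common_data, so reading it from the INTEL-IR irq_data returns
 * whatever apic_update_irq_cfg() wrote via the VECTOR irq_data.
 */
static int pi_irte_target_cpu(struct irq_data *ir_irqd)
{
	const struct cpumask *eff = irq_data_get_effective_affinity_mask(ir_irqd);

	return cpumask_first(eff);	/* >= nr_cpu_ids if not yet set up */
}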

x86_vector_activate() also updates the effective affinity mask before
intel_irq_remapping_activate() runs (parent domains are activated first),
which is where a posted interrupt is configured for its destination CPU.
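
That ordering comes from the hierarchical activation in the irqdomain core,
which recurses into the parent before invoking the child's ->activate().
Roughly paraphrased (my sketch, not verbatim kernel code):

/* Rough paraphrase of the hierarchical activation order */
static int activate_hierarchy(struct irq_data *irqd, bool reserve)
{
	int ret = 0;

	if (irqd->parent_data)				/* VECTOR first ...       */
		ret = activate_hierarchy(irqd->parent_data, reserve);
	if (!ret && irqd->domain->ops->activate)	/* ... then INTEL-IR-POST */
		ret = irqd->domain->ops->activate(irqd->domain, irqd, reserve);
	return ret;
}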

At runtime, when the IRQ affinity is changed from userspace, the Intel
interrupt remapping code also calls into the parent irq_data/chip to update
the effective affinity mask before changing the IRTE:

static int
intel_ir_set_affinity(struct irq_data *data, const struct cpumask *mask,
		      bool force)
{
	struct irq_data *parent = data->parent_data;
	int ret;

	/* the parent (APIC/vector) chip updates the effective mask first */
	ret = parent->chip->irq_set_affinity(parent, mask, force);

	..
}
Here the parent APIC chip's callback is apic_set_affinity(), which sets up
the effective affinity mask before the posted MSI affinity change is applied
to the IRTE.

Maybe I missed some cases?

I will also add a check for the case where the effective affinity mask has
not been set up yet:

static phys_addr_t get_pi_desc_addr(struct irq_data *irqd)
{
	int cpu = cpumask_first(irq_data_get_effective_affinity_mask(irqd));

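	/* an empty effective mask yields cpu >= nr_cpu_ids */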
	if (WARN_ON(cpu >= nr_cpu_ids))
		return 0;

	return __pa(per_cpu_ptr(&posted_interrupt_desc, cpu));
}
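
For completeness, a caller could then simply bail out on the zero return,
e.g. (pi_update_irte_dest() is only a hypothetical sketch, not a function
from this series):

/* Hypothetical caller sketch, not part of the series */
static int pi_update_irte_dest(struct irq_data *irqd)
{
	phys_addr_t pid_pa = get_pi_desc_addr(irqd);

	if (!pid_pa)
		return -EINVAL;

	/* ... program the posted-format IRTE with pid_pa ... */
	return 0;
}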


Thanks,

Jacob
