Message-ID: <8303e362-417a-6649-0cb3-f67d676ba163@quicinc.com>
Date:   Thu, 28 Apr 2022 09:15:34 -0600
From:   Jeffrey Hugo <quic_jhugo@...cinc.com>
To:     Wei Liu <wei.liu@...nel.org>
CC:     <kys@...rosoft.com>, <haiyangz@...rosoft.com>,
        <sthemmin@...rosoft.com>, <decui@...rosoft.com>,
        <lorenzo.pieralisi@....com>, <robh@...nel.org>, <kw@...ux.com>,
        <bhelgaas@...gle.com>, <bjorn.andersson@...aro.org>,
        <linux-hyperv@...r.kernel.org>, <linux-pci@...r.kernel.org>,
        <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] PCI: hv: Fix hv_arch_irq_unmask() for multi-MSI

On 4/28/2022 9:08 AM, Wei Liu wrote:
> On Thu, Apr 28, 2022 at 09:06:42AM -0600, Jeffrey Hugo wrote:
>> On 4/28/2022 8:58 AM, Wei Liu wrote:
>>> On Wed, Apr 27, 2022 at 08:07:33AM -0600, Jeffrey Hugo wrote:
>>>> In the multi-MSI case, hv_arch_irq_unmask() will only operate on the first
>>>> MSI of the N allocated.  This is because only the first msi_desc is cached
>>>> and it is shared by all the MSIs of the multi-MSI block.  This means that
>>>> hv_arch_irq_unmask() gets the correct address, but the wrong data (always
>>>> 0).
>>>>
>>>> This can break MSIs.
>>>>
>>>> Let's assume MSI0 is vector 34 on CPU0, and MSI1 is vector 33 on CPU0.
>>>>
>>>> hv_arch_irq_unmask() is called on MSI0.  It uses a hypercall to configure
>>>> the MSI address and data (0) to vector 34 of CPU0.  This is correct.  Then
>>>> hv_arch_irq_unmask() is called on MSI1.  It uses another hypercall to
>>>> configure the MSI address and data (0) to vector 33 of CPU0.  This is
>>>> wrong, and results in both MSI0 and MSI1 being routed to vector 33.  Linux
>>>> will observe extra instances of MSI1 and no instances of MSI0 despite the
>>>> endpoint device behaving correctly.
>>>>
>>>> For the multi-MSI case, we need unique address and data info for each MSI,
>>>> but the cached msi_desc does not provide that.  However, that information
>>>> can be obtained from the int_desc cached in the chip_data by
>>>> compose_msi_msg().  Fix the multi-MSI case to use that cached information
>>>> instead.  Since hv_set_msi_entry_from_desc() is no longer applicable,
>>>> remove it.
>>>>
>>>> Signed-off-by: Jeffrey Hugo <quic_jhugo@...cinc.com>
>>>> ---
>>>>    drivers/pci/controller/pci-hyperv.c | 12 ++++--------
>>>>    1 file changed, 4 insertions(+), 8 deletions(-)
>>>>
>>>> diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
>>>> index 5800ecf..7aea0b7 100644
>>>> --- a/drivers/pci/controller/pci-hyperv.c
>>>> +++ b/drivers/pci/controller/pci-hyperv.c
>>>> @@ -611,13 +611,6 @@ static unsigned int hv_msi_get_int_vector(struct irq_data *data)
>>>>    	return cfg->vector;
>>>>    }
>>>> -static void hv_set_msi_entry_from_desc(union hv_msi_entry *msi_entry,
>>>> -				       struct msi_desc *msi_desc)
>>>> -{
>>>> -	msi_entry->address.as_uint32 = msi_desc->msg.address_lo;
>>>> -	msi_entry->data.as_uint32 = msi_desc->msg.data;
>>>> -}
>>>> -
>>>
>>> Instead of dropping this function, can you change the second argument to
>>> take struct tran_int_desc *?
>>>
>>> This way you can use the same function in hv_compose_msi_msg.
>>
>> I do not see how this could be reused in hv_compose_msi_msg() with the
>> proposed change of the second argument.  The hv_msi_entry type is not used
>> in hv_compose_msi_msg(), nor does it look like it is applicable anywhere
>> within the function.
>>
>> What am I missing?
> 
> I mixed up two different types while going through the code --
> hv_msi_entry and Linux's own msi_entry type. Sorry for the noise.

No problem.  Thanks for picking this up.
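
Only the helper-removal hunk of the patch is quoted above; the hunk that replaces the hv_set_msi_entry_from_desc() call site is trimmed. As a stand-alone illustration of the behavior the commit message describes — one cached msi_desc shared across the whole multi-MSI block always yielding data 0, versus the unique per-MSI int_desc cached in chip_data by compose_msi_msg() — here is a minimal user-space sketch. The struct layouts and data values are simplified assumptions, not the kernel's definitions:

/*
 * Stand-alone sketch of the bug described in the commit message.
 * The struct layouts below are simplified assumptions, not the
 * kernel's definitions; only the caching/sharing behavior is modeled.
 */
#include <stdint.h>
#include <stdio.h>

struct msi_msg { uint32_t address_lo; uint32_t data; };

/* One msi_desc is cached and shared by every MSI of a multi-MSI block */
struct msi_desc { struct msi_msg msg; };

/* Per-MSI info cached in chip_data by compose_msi_msg() */
struct tran_int_desc { uint64_t address; uint32_t data; };

int main(void)
{
	/* The shared descriptor: its data field is always 0 */
	struct msi_desc shared = { .msg = { 0xfee00000u, 0 } };

	/* Unique address/data per MSI; the data values here are
	 * arbitrary stand-ins for whatever compose_msi_msg() cached */
	struct tran_int_desc per_msi[2] = {
		{ 0xfee00000u, 34 },
		{ 0xfee00000u, 33 },
	};

	for (int i = 0; i < 2; i++) {
		/* Buggy path: every unmask programs data 0 */
		printf("MSI%d: shared msi_desc data=%u (wrong)\n",
		       i, (unsigned)shared.msg.data);
		/* Fixed path: each unmask uses its own cached int_desc */
		printf("MSI%d: cached int_desc data=%u (correct)\n",
		       i, (unsigned)per_msi[i].data);
	}
	return 0;
}

In the driver itself, the fixed unmask path would read the address and data from the int_desc reachable through the irq_data's chip_data rather than from the shared msi_desc, which is what makes hv_set_msi_entry_from_desc() unnecessary.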
