Date:   Mon, 17 Jan 2022 02:10:13 -0800
From:   Tong Zhang <ztong0001@...il.com>
To:     Marc Zyngier <maz@...nel.org>
Cc:     Thomas Gleixner <tglx@...utronix.de>,
        Jason Gunthorpe <jgg@...pe.ca>,
        open list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v1] genirq/msi: fix crash when handling Multi-MSI

Hello,

ops->msi_check can point to pci_msi_domain_check_cap(), which is the
function in question,

hence we can have the following call stack:

pci_msi_domain_check_cap (used by ops->msi_check(domain, info, dev))
msi_domain_prepare_irqs
__msi_domain_alloc_irqs
msi_domain_alloc_irqs_descs_locked

What I am suggesting is that commit 0f62d941acf9 changed how this
return value is handled and created a use-after-free.
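The shape of the bug can be sketched with a small self-contained C
program (a minimal sketch only: check_cap(), alloc_irqs_buggy() and
alloc_irqs_fixed() are illustrative stand-ins for the kernel paths
above, not the real kernel API):

```c
#include <assert.h>
#include <stdlib.h>

struct msi_desc { int nvec; };

/* Stand-in for ops->msi_check()/pci_msi_domain_check_cap(): returns a
 * positive value when multi-MSI is unsupported (a retry hint for the
 * caller), 0 on success, negative on a hard error. */
static int check_cap(int nvec)
{
	return nvec > 1 ? 1 : 0;
}

/* Buggy shape: any non-zero return is treated as failure, so the
 * descriptor is freed even though a positive value is not an error.
 * (The real kernel leaves the caller holding a dangling pointer; here
 * we NULL it out so the demonstration stays well-defined.) */
static int alloc_irqs_buggy(struct msi_desc **descp, int nvec)
{
	int ret = check_cap(nvec);

	if (ret) {		/* should be: if (ret < 0) */
		free(*descp);
		*descp = NULL;
		return ret;
	}
	return 0;
}

/* Fixed shape: only negative values tear the allocation down; a
 * positive value propagates with the descriptor intact so the caller
 * can retry with fewer vectors. */
static int alloc_irqs_fixed(struct msi_desc **descp, int nvec)
{
	int ret = check_cap(nvec);

	if (ret < 0) {
		free(*descp);
		*descp = NULL;
	}
	return ret;
}
```

In the buggy variant, a caller modeled on __pci_enable_msi_range(),
which retries with fewer vectors on a positive return, would go on to
touch the freed descriptor, which is the use-after-free KASAN reports
below.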

- Tong


On Mon, Jan 17, 2022 at 2:00 AM Marc Zyngier <maz@...nel.org> wrote:
>
> On Mon, 17 Jan 2022 09:27:59 +0000,
> Tong Zhang <ztong0001@...il.com> wrote:
> >
> > pci_msi_domain_check_cap() can return 1 when the domain does not
> > support multi-MSI but the user requests it. This positive value is
> > used by __pci_enable_msi_range(). A previous refactor treated the
> > positive value as an error, which causes a kernel crash.
> >
> > [    1.197953] BUG: KASAN: use-after-free in __pci_enable_msi_range+0x234/0x320
> > [    1.198327] Freed by task 1:
> > [    1.198327]  kfree+0x8f/0x2b0
> > [    1.198327]  msi_free_msi_descs_range+0xf5/0x130
> > [    1.198327]  msi_domain_alloc_irqs_descs_locked+0x8d/0xa0
> > [    1.198327]  __pci_enable_msi_range+0x1a4/0x320
> > [    1.198327]  pci_alloc_irq_vectors_affinity+0x135/0x1a0
> > [    1.198327]  pcie_port_device_register+0x4a1/0x5c0
> > [    1.198327]  pcie_portdrv_probe+0x50/0x100
>
> I'm sorry, but you'll have to be a bit clearer in your commit message,
> because I cannot relate what you describe with the patch.
>
> The real issue seems to be that a domain_alloc_irqs callback can
> return a positive, non-zero value, and I don't think this is expected.
>
> How about this instead? If I am barking up the wrong tree, please
> provide a more accurate description of the problem you are seeing.
>
> Thanks,
>
>         M.
>
> diff --git a/kernel/irq/msi.c b/kernel/irq/msi.c
> index 2bdfce5edafd..da8bb6135627 100644
> --- a/kernel/irq/msi.c
> +++ b/kernel/irq/msi.c
> @@ -878,8 +878,10 @@ int __msi_domain_alloc_irqs(struct irq_domain *domain, struct device *dev,
>                 virq = __irq_domain_alloc_irqs(domain, -1, desc->nvec_used,
>                                                dev_to_node(dev), &arg, false,
>                                                desc->affinity);
> -               if (virq < 0)
> -                       return msi_handle_pci_fail(domain, desc, allocated);
> +               if (virq < 0) {
> +                       ret = msi_handle_pci_fail(domain, desc, allocated);
> +                       return ret < 0 ? ret : 0;
> +               }
>
>                 for (i = 0; i < desc->nvec_used; i++) {
>                         irq_set_msi_desc_off(virq, i, desc);
>
> --
> Without deviation from the norm, progress is not possible.
