Message-ID: <CAFCwf12QOxB4HJJAjJLknsEBiSAbaDgbna5L3JFhCcr36Rqc9w@mail.gmail.com>
Date:   Sun, 5 Jun 2022 13:05:33 +0300
From:   Oded Gabbay <ogabbay@...nel.org>
To:     Dan Carpenter <dan.carpenter@...cle.com>
Cc:     Ohad Sharabi <osharabi@...ana.ai>, Arnd Bergmann <arnd@...db.de>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        Yuri Nudelman <ynudelman@...ana.ai>,
        Ofir Bitton <obitton@...ana.ai>,
        farah kassabri <fkassabri@...ana.ai>,
        Tomer Tayar <ttayar@...ana.ai>,
        "Linux-Kernel@...r. Kernel. Org" <linux-kernel@...r.kernel.org>,
        kernel-janitors@...r.kernel.org
Subject: Re: [PATCH] habanalabs: fix double unlock on error in map_device_va()

On Wed, May 25, 2022 at 3:25 PM Dan Carpenter <dan.carpenter@...cle.com> wrote:
>
> If hl_mmu_prefetch_cache_range() fails then this code calls
> mutex_unlock(&ctx->mmu_lock) when it's no longer holding the mutex.
>
> Fixes: 9e495e24003e ("habanalabs: do MMU prefetch as deferred work")
> Signed-off-by: Dan Carpenter <dan.carpenter@...cle.com>
> ---
>  drivers/misc/habanalabs/common/memory.c | 6 ++----
>  1 file changed, 2 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/misc/habanalabs/common/memory.c b/drivers/misc/habanalabs/common/memory.c
> index 663dd7e589d4..d5e6500f8a1f 100644
> --- a/drivers/misc/habanalabs/common/memory.c
> +++ b/drivers/misc/habanalabs/common/memory.c
> @@ -1245,16 +1245,16 @@ static int map_device_va(struct hl_ctx *ctx, struct hl_mem_in *args, u64 *device
>         rc = map_phys_pg_pack(ctx, ret_vaddr, phys_pg_pack);
>         if (rc) {
>                 dev_err(hdev->dev, "mapping page pack failed for handle %u\n", handle);
> +               mutex_unlock(&ctx->mmu_lock);
>                 goto map_err;
>         }
>
>         rc = hl_mmu_invalidate_cache_range(hdev, false, *vm_type | MMU_OP_SKIP_LOW_CACHE_INV,
>                                 ctx->asid, ret_vaddr, phys_pg_pack->total_size);
> +       mutex_unlock(&ctx->mmu_lock);
>         if (rc)
>                 goto map_err;
>
> -       mutex_unlock(&ctx->mmu_lock);
> -
>         /*
>          * prefetch is done upon user's request. it is performed in WQ as and so can
>          * be outside the MMU lock. the operation itself is already protected by the mmu lock
> @@ -1283,8 +1283,6 @@ static int map_device_va(struct hl_ctx *ctx, struct hl_mem_in *args, u64 *device
>         return rc;
>
>  map_err:
> -       mutex_unlock(&ctx->mmu_lock);
> -
>         if (add_va_block(hdev, va_range, ret_vaddr,
>                                 ret_vaddr + phys_pg_pack->total_size - 1))
>                 dev_warn(hdev->dev,
> --
> 2.35.1
>

Reviewed-by: Oded Gabbay <ogabbay@...nel.org>
Applied to -next.
Thanks,
Oded
