Message-ID: <3cd2d3d2-6d97-3cd6-b273-6e52aa6066c9@redhat.com>
Date: Fri, 22 Oct 2021 14:22:44 +0200
From: David Hildenbrand <david@...hat.com>
To: Mike Rapoport <rppt@...nel.org>, linux-mm@...ck.org
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Catalin Marinas <catalin.marinas@....com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Mike Rapoport <rppt@...ux.ibm.com>,
Qian Cai <quic_qiancai@...cinc.com>,
Vladimir Zapolskiy <vladimir.zapolskiy@...aro.org>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 2/2] memblock: exclude MEMBLOCK_NOMAP regions from
kmemleak
On 21.10.21 09:09, Mike Rapoport wrote:
> From: Mike Rapoport <rppt@...ux.ibm.com>
>
> Vladimir Zapolskiy reports:
>
> commit a7259df76702 ("memblock: make memblock_find_in_range method private")
> triggers a kernel panic while running kmemleak on OF platforms with no-map
> regions:
>
> Unable to handle kernel paging request at virtual address fff000021e00000
> [...]
> scan_block+0x64/0x170
> scan_gray_list+0xe8/0x17c
> kmemleak_scan+0x270/0x514
> kmemleak_write+0x34c/0x4ac
>
> The memory allocated from memblock is registered with kmemleak, but if it
> is marked MEMBLOCK_NOMAP it won't have linear map entries, so an attempt
> to scan such areas will fault.
>
> Ideally, memblock_mark_nomap() would itself inform kmemleak to ignore
> MEMBLOCK_NOMAP memory, but it can be called before the kmemleak interfaces
> that operate on physical addresses are able to use the __va() conversion.
>
> Make sure that functions that mark allocated memory as MEMBLOCK_NOMAP take
> care of informing kmemleak to ignore such memory.
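
IIUC, memblock registers its allocations with kmemleak by their
linear-map (__va()) address, and the kmemleak scanner later walks every
word of each registered range looking for pointer values. The snippet
below is only an illustration (scan_range_sketch() is a made-up name,
not the mm/kmemleak.c code), but it shows why a range that was marked
MEMBLOCK_NOMAP, and therefore never got linear-map entries, faults in
scan_block():

static size_t scan_range_sketch(void *start, void *end)
{
	unsigned long *ptr;
	size_t candidates = 0;

	for (ptr = start; (void *)ptr < end; ptr++) {
		/* This load faults if the range has no linear-map entries. */
		unsigned long word = *ptr;

		/*
		 * Stand-in for kmemleak's "does this look like a pointer
		 * into a tracked object?" lookup.
		 */
		if (word)
			candidates++;
	}

	return candidates;
}
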
>
> Link: https://lore.kernel.org/all/8ade5174-b143-d621-8c8e-dc6a1898c6fb@linaro.org
> Link: https://lore.kernel.org/all/c30ff0a2-d196-c50d-22f0-bd50696b1205@quicinc.com
> Fixes: a7259df76702 ("memblock: make memblock_find_in_range method private")
> Reported-by: Vladimir Zapolskiy <vladimir.zapolskiy@...aro.org>
> Signed-off-by: Mike Rapoport <rppt@...ux.ibm.com>
> ---
> drivers/acpi/tables.c | 3 +++
> drivers/of/of_reserved_mem.c | 2 ++
> mm/memblock.c | 3 +++
> 3 files changed, 8 insertions(+)
>
> diff --git a/drivers/acpi/tables.c b/drivers/acpi/tables.c
> index f9383736fa0f..71419eb16e09 100644
> --- a/drivers/acpi/tables.c
> +++ b/drivers/acpi/tables.c
> @@ -21,6 +21,7 @@
> #include <linux/earlycpio.h>
> #include <linux/initrd.h>
> #include <linux/security.h>
> +#include <linux/kmemleak.h>
> #include "internal.h"
>
> #ifdef CONFIG_ACPI_CUSTOM_DSDT
> @@ -601,6 +602,8 @@ void __init acpi_table_upgrade(void)
> */
> arch_reserve_mem_area(acpi_tables_addr, all_tables_size);
>
> + kmemleak_ignore_phys(acpi_tables_addr);
> +
> /*
> * early_ioremap only can remap 256k one time. If we map all
> * tables one time, we will hit the limit. Need to map chunks
> diff --git a/drivers/of/of_reserved_mem.c b/drivers/of/of_reserved_mem.c
> index 59c1390cdf42..9da8835ba5a5 100644
> --- a/drivers/of/of_reserved_mem.c
> +++ b/drivers/of/of_reserved_mem.c
> @@ -21,6 +21,7 @@
> #include <linux/sort.h>
> #include <linux/slab.h>
> #include <linux/memblock.h>
> +#include <linux/kmemleak.h>
>
> #include "of_private.h"
>
> @@ -46,6 +47,7 @@ static int __init early_init_dt_alloc_reserved_memory_arch(phys_addr_t size,
> err = memblock_mark_nomap(base, size);
> if (err)
> memblock_free(base, size);
> + kmemleak_ignore_phys(base);
> }
>
> return err;
> diff --git a/mm/memblock.c b/mm/memblock.c
> index 184dcd2e5d99..dab804b09d62 100644
> --- a/mm/memblock.c
> +++ b/mm/memblock.c
> @@ -932,6 +932,9 @@ int __init_memblock memblock_mark_mirror(phys_addr_t base, phys_addr_t size)
> * covered by the memory map. The struct page representing NOMAP memory
> * frames in the memory map will be PageReserved()
> *
> + * Note: if the memory being marked %MEMBLOCK_NOMAP was allocated from
> + * memblock, the caller must inform kmemleak to ignore that memory
> + *
> * Return: 0 on success, -errno on failure.
> */
> int __init_memblock memblock_mark_nomap(phys_addr_t base, phys_addr_t size)
>
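The new kernel-doc note captures the contract well. For anyone adding
another early-boot user later, the expected sequence is roughly the one
below. This is a hand-written sketch, not code from this patch, and
example_reserve_nomap() is a made-up name; it just mirrors what the
of_reserved_mem.c hunk above does:

#include <linux/memblock.h>
#include <linux/kmemleak.h>

static int __init example_reserve_nomap(phys_addr_t size, phys_addr_t align)
{
	phys_addr_t base;
	int err;

	/* Memory allocated here gets registered with kmemleak. */
	base = memblock_phys_alloc(size, align);
	if (!base)
		return -ENOMEM;

	err = memblock_mark_nomap(base, size);
	if (err) {
		memblock_free(base, size);
		return err;
	}

	/*
	 * The range will not get linear-map entries, so keep the
	 * kmemleak scanner away from it.
	 */
	kmemleak_ignore_phys(base);
	return 0;
}
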
Reviewed-by: David Hildenbrand <david@...hat.com>
--
Thanks,
David / dhildenb