Date:   Tue, 22 Sep 2020 10:57:37 +0100
From:   Catalin Marinas <catalin.marinas@....com>
To:     Chen Jun <chenjun102@...wei.com>
Cc:     linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        akpm@...ux-foundation.org, rui.xiang@...wei.com,
        weiyongjun1@...wei.com
Subject: Re: [PATCH -next 3/5] mm/kmemleak: Add support for percpu memory
 leak detect

On Mon, Sep 21, 2020 at 02:00:05AM +0000, Chen Jun wrote:
> From: Wei Yongjun <weiyongjun1@...wei.com>
> 
> Currently, leak reporting for percpu chunks is not supported. This
> patch adds that support.
> 
> Since the __percpu pointer does not point directly to the actual chunks,
> this patch creates an object for the __percpu pointer but marks it as a
> no-scan block, only checking whether the pointer itself is referenced by
> other blocks.

OK, so you wanted NO_SCAN to not touch the block at all, not even update
the checksum. Maybe better to add a new flag, NO_ACCESS (and we could use
it to track ioremap leaks; it's been on my wishlist for years).
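
Roughly what I have in mind, as an untested sketch (the flag value and
the exact placement are illustrative, not real code):

	/* hypothetical: never dereference the block, not even for checksumming */
	#define OBJECT_NO_ACCESS	(1 << 4)

	static bool update_checksum(struct kmemleak_object *object)
	{
		u32 old_csum = object->checksum;

		if (object->flags & OBJECT_NO_ACCESS)
			return false;

		kasan_disable_current();
		object->checksum = crc32(0, (void *)object->pointer, object->size);
		kasan_enable_current();

		return object->checksum != old_csum;
	}

scan_object() would get a similar early return, so the block is never
read at all.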

> diff --git a/mm/kmemleak.c b/mm/kmemleak.c
> index c09c6b59eda6..feedb72f06f2 100644
> --- a/mm/kmemleak.c
> +++ b/mm/kmemleak.c
> @@ -283,6 +288,9 @@ static void hex_dump_object(struct seq_file *seq,
>  	const u8 *ptr = (const u8 *)object->pointer;
>  	size_t len;
>  
> +	if (object->flags & OBJECT_PERCPU)
> +		ptr = this_cpu_ptr((void __percpu *)object->pointer);

You may want to print the CPU number as well, since the information is
likely different on other CPUs. Also, I think this context is
preemptible, so it's better to wrap the access in get_cpu()/put_cpu().
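
Something along these lines, as an untested sketch (reusing the helpers
hex_dump_object() already has):

	if (object->flags & OBJECT_PERCPU) {
		int cpu = get_cpu();
		const void *p = per_cpu_ptr((void __percpu *)object->pointer, cpu);

		len = min_t(size_t, object->size, HEX_MAX_LINES * HEX_ROW_SIZE);
		warn_or_seq_printf(seq, "  hex dump (CPU %d, first %zu bytes):\n",
				   cpu, len);
		warn_or_seq_hex_dump(seq, DUMP_PREFIX_NONE, HEX_ROW_SIZE,
				     HEX_GROUP_SIZE, p, len, HEX_ASCII);
		put_cpu();
		return;
	}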

> @@ -651,6 +672,19 @@ static void create_object(unsigned long ptr, size_t size, int min_count,
>  	raw_spin_unlock_irqrestore(&kmemleak_lock, flags);
>  }
>  
> +static void create_object(unsigned long ptr, size_t size, int min_count,
> +			  gfp_t gfp)
> +{
> +	__create_object(ptr, size, min_count, 0, gfp);
> +}
> +
> +static void create_object_percpu(unsigned long ptr, size_t size, int min_count,
> +				 gfp_t gfp)
> +{
> +	__create_object(ptr, size, min_count, OBJECT_PERCPU | OBJECT_NO_SCAN,
> +			gfp);
> +}
> +
>  /*
>   * Mark the object as not allocated and schedule RCU freeing via put_object().
>   */
> @@ -912,10 +946,12 @@ void __ref kmemleak_alloc_percpu(const void __percpu *ptr, size_t size,
>  	 * Percpu allocations are only scanned and not reported as leaks
>  	 * (min_count is set to 0).
>  	 */
> -	if (kmemleak_enabled && ptr && !IS_ERR(ptr))
> +	if (kmemleak_enabled && ptr && !IS_ERR(ptr)) {
>  		for_each_possible_cpu(cpu)
>  			create_object((unsigned long)per_cpu_ptr(ptr, cpu),
>  				      size, 0, gfp);
> +		create_object_percpu((unsigned long)ptr, size, 1, gfp);
> +	}
>  }

A concern I have here is that ptr may overlap with an existing object
and the insertion in the rb tree will fail. For example, with !SMP,
ptr == per_cpu_ptr(ptr, 0), so create_object() will fail and kmemleak
gets disabled.
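
(On !SMP, per_cpu_ptr() is roughly just a cast of the pointer,
something like:

	#define per_cpu_ptr(ptr, cpu)	({ (void)(cpu); VERIFY_PERCPU_PTR(ptr); })

so the create_object_percpu() call above would try to insert the same
address that the per-CPU loop just inserted.)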

One option would be to figure out how to allow overlapping ranges in the
rb tree (or find a replacement for it if that's not possible).

Another option would be to have an additional structure to track the
__percpu pointers, since they have their own range. If size is not
relevant, maybe go for an xarray, otherwise another rb tree (do we have
any instance of pointers referring to some inner member of a __percpu
object?). The scan_object() function would then have to search both
trees.
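
i.e. the pointer lookup on the scanning path would become something
like this, with lookup_object_percpu() being a hypothetical helper for
whichever structure we pick:

	/* during scanning, under kmemleak_lock */
	object = lookup_object(pointer, 1);
	if (!object)
		object = lookup_object_percpu(pointer, 1);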

-- 
Catalin
