Message-ID: <20191114220715.1ac54ddf@carbon>
Date:   Thu, 14 Nov 2019 22:07:15 +0100
From:   Jesper Dangaard Brouer <brouer@...hat.com>
To:     Jonathan Lemon <jonathan.lemon@...il.com>
Cc:     <netdev@...r.kernel.org>, <davem@...emloft.net>,
        <kernel-team@...com>, <ilias.apalodimas@...aro.org>,
        brouer@...hat.com
Subject: Re: [net-next PATCH v2 2/2] page_pool: remove hold/release count
 from tracepoints

On Thu, 14 Nov 2019 08:37:15 -0800
Jonathan Lemon <jonathan.lemon@...il.com> wrote:

> When the last page is released from the page pool, it is possible
> that the delayed removal thread sees inflight == 0, and frees the
> pool.  While the freed pointer is only copied by the tracepoint
> and not dereferenced, it really isn't correct.  Avoid this case by
> reporting the page release before releasing the page.

I don't like this patch!

I'm actually using these counters in the current version of my bpftrace
leak detector for page_pool:

https://github.com/xdp-project/xdp-project/blob/master/areas/mem/bpftrace/page_pool_track_leaks01.bt
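For context, the script relies on the counts these tracepoints carry to
reconstruct the number of inflight pages (hold_cnt - release_cnt).  The
idea, sketched here in C rather than bpftrace (illustration only, not a
copy of the kernel code or of the script):

	/* Pages handed out minus pages returned, computed from the hold and
	 * release counts the page_pool tracepoints report.  A value that
	 * never drops back to zero after shutdown points at leaked pages.
	 * The u32 subtraction cast to s32 keeps this correct across wrap.
	 */
	static inline s32 tracked_inflight(u32 hold_cnt, u32 release_cnt)
	{
		return (s32)(hold_cnt - release_cnt);
	}

Dropping the counts from the tracepoints removes exactly the data this
kind of tooling needs.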

> This also removes a second atomic operation from the release path.
> 
> Signed-off-by: Jonathan Lemon <jonathan.lemon@...il.com>
> ---
>  include/trace/events/page_pool.h | 24 ++++++++++--------------
>  net/core/page_pool.c             |  8 +++++---
>  2 files changed, 15 insertions(+), 17 deletions(-)
[...]

> @@ -222,9 +222,11 @@ static void __page_pool_clean_page(struct page_pool *pool,
>  			     DMA_ATTR_SKIP_CPU_SYNC);
>  	page->dma_addr = 0;
>  skip_dma_unmap:
> +	trace_page_pool_page_release(pool, page);
> +	/* This may be the last page returned, releasing the pool, so
> +	 * it is not safe to reference pool afterwards.
> +	 */
>  	atomic_inc(&pool->pages_state_release_cnt);
> -	trace_page_pool_state_release(pool, page,
> -			      atomic_read(&pool->pages_state_release_cnt));
>  }

I would prefer that you do an atomic_inc_return() and pass the cnt to
the existing tracepoint.  I'm not dereferencing the pool in my
tracepoint use-case, and as Alexei wrote, this would still be 'safe'
(as in not crashing) for a tracepoint even if someone did.
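
Concretely, I mean something along these lines (just a sketch of the
idea, not a tested patch; 'release_cnt' would need a declaration at the
top of __page_pool_clean_page()):

	skip_dma_unmap:
		/* Bump the release count and hand the new value to the
		 * existing tracepoint.  'pool' may already be gone here if
		 * this was the last inflight page, but the tracepoint only
		 * records the pointer value and the cnt, it never
		 * dereferences pool.
		 */
		release_cnt = atomic_inc_return(&pool->pages_state_release_cnt);
		trace_page_pool_state_release(pool, page, release_cnt);

That keeps it to a single atomic operation on the release path, while
the tracepoint still exposes the running count.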

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
