Message-ID: <20250123104325.GK395043@kernel.org>
Date: Thu, 23 Jan 2025 10:43:25 +0000
From: Simon Horman <horms@...nel.org>
To: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
Cc: intel-wired-lan@...ts.osuosl.org, netdev@...r.kernel.org,
anthony.l.nguyen@...el.com, magnus.karlsson@...el.com,
jacob.e.keller@...el.com, xudu@...hat.com, mschmidt@...hat.com,
jmaxwell@...hat.com, poros@...hat.com, przemyslaw.kitszel@...el.com
Subject: Re: [PATCH v4 iwl-net 2/3] ice: gather page_count()'s of each frag
right before XDP prog call
On Wed, Jan 22, 2025 at 04:10:45PM +0100, Maciej Fijalkowski wrote:
> If we store the pgcnt of a few fragments while in the middle of
> gathering a whole frame and then stumble upon a descriptor with the DD
> bit not set, we terminate the NAPI Rx processing loop and come back
> later. On the next NAPI execution we then work with the previously
> stored pgcnt.
>
> Imagine that the second half of the page was being actively used by the
> networking stack, and by the time we come back the stack is no longer
> busy with this page and has decremented the refcnt. In this case the
> page reuse algorithm should be free to reuse the page, but given the
> stale refcnt it will not do so and will instead attempt to release the
> page via page_frag_cache_drain() with pagecnt_bias as an argument. This
> in turn results in a negative refcnt on struct page, which was
> initially observed by Xu Du.
>
> Therefore, move the page count storage from ice_get_rx_buf() to a
> point where we are sure that the whole frame has been collected, but
> before calling the XDP program, as the program can internally also
> change the page count of fragments belonging to the xdp_buff.
>
> Fixes: ac0753391195 ("ice: Store page count inside ice_rx_buf")
> Reported-and-tested-by: Xu Du <xudu@...hat.com>
> Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@...el.com>
> Co-developed-by: Jacob Keller <jacob.e.keller@...el.com>
> Signed-off-by: Jacob Keller <jacob.e.keller@...el.com>
> Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
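
The stale-snapshot hazard described above can be sketched in plain C.
This is a simplified userspace simulation, not the actual driver code:
`struct page`, `struct rx_buf`, and `can_reuse()` here are illustrative
stand-ins for the real kernel structures and for the reuse check in the
ice driver, under the simplifying assumption that reuse is safe exactly
when the snapshotted count equals the driver's own pagecnt_bias:

```c
#include <assert.h>

/* Illustrative stand-in for struct page: just a reference count. */
struct page {
	int refcount;
};

/* Illustrative stand-in for ice_rx_buf. */
struct rx_buf {
	struct page *page;
	int pagecnt_bias;	/* references the driver holds itself */
	int pgcnt;		/* snapshot of page->refcount */
};

/*
 * Simplified reuse check: the page may be recycled only when the
 * snapshot shows no references beyond the driver's own bias.
 */
static int can_reuse(const struct rx_buf *buf)
{
	return buf->pgcnt - buf->pagecnt_bias == 0;
}

static void demo(void)
{
	/* One reference held by the driver, one by the stack. */
	struct page pg = { .refcount = 2 };
	struct rx_buf buf = { .page = &pg, .pagecnt_bias = 1 };

	/* Snapshot taken early, while the stack still held its ref. */
	buf.pgcnt = pg.refcount;

	/* NAPI loop terminates; meanwhile the stack drops its ref. */
	pg.refcount--;

	/* Stale view: reuse is wrongly denied even though only the
	 * driver holds the page now. */
	assert(!can_reuse(&buf));

	/* With the fix, the snapshot is taken right before the XDP
	 * program call, so it reflects the current state. */
	buf.pgcnt = pg.refcount;
	assert(can_reuse(&buf));
}
```

The point of the fix is visible in the last two assertions: the same
page state yields opposite reuse decisions depending on when the
snapshot is taken.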
Reviewed-by: Simon Horman <horms@...nel.org>