Message-ID: <117FF31A-7BE0-4050-B2BB-E41F224FF72F@meta.com>
Date: Fri, 29 Sep 2023 10:46:44 +0000
From: Chris Mason <clm@...a.com>
To: Dragos Tatulea <dtatulea@...dia.com>
CC: "dw@...idwei.uk" <dw@...idwei.uk>,
"netdev@...r.kernel.org"
<netdev@...r.kernel.org>,
Chris Mason <clm@...a.com>, Saeed Mahameed
<saeedm@...dia.com>,
"kuba@...nel.org" <kuba@...nel.org>, Tariq Toukan
<tariqt@...dia.com>
Subject: Re: [PATCH RFC] net/mlx5e: avoid page pool frag counter underflow
> On Sep 29, 2023, at 5:06 AM, Dragos Tatulea <dtatulea@...dia.com> wrote:
[ … ]
>> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
>> index 3fd11b0761e0..9a7b10f0bba9 100644
>> --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
>> +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
>> @@ -298,6 +298,16 @@ static void mlx5e_page_release_fragmented(struct mlx5e_rq *rq,
>> u16 drain_count = MLX5E_PAGECNT_BIAS_MAX - frag_page->frags;
>> struct page *page = frag_page->page;
>>
>> + if (!page)
>> + return;
>> +
> Ideally we'd like to avoid this kind of broad check as it can hide other issues.
>
>> + /*
>> + * we're dropping all of our counts on this page, make sure we
>> + * don't do it again the next time we process this frag
>> + */
>> + frag_page->frags = 0;
>> + frag_page->page = NULL;
>> +
>> if (page_pool_defrag_page(page, drain_count) == 0)
>> page_pool_put_defragged_page(rq->page_pool, page, -1, true);
>> }
>
> We already have a mechanism to avoid double releases: setting the
> MLX5E_WQE_FRAG_SKIP_RELEASE bit in the mlx5e_wqe_frag_info flags field. When
> mlx5e_alloc_rx_wqes fails, we should set that bit on the remaining frag_pages.
> This covers legacy rq mode; multi-packet wqe rq mode has to be handled in a
> similar way.
>
> If I send a patch later, would you be able to test it?
I wasn’t as confident about using the SKIP_RELEASE bit, since that seems to be
set once early on and never changed again. But I definitely didn’t expect my
patch to be the final answer, and I’m happy to test the real fix.
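
For what it’s worth, here’s roughly how I was picturing the SKIP_RELEASE
approach, just so we’re looking at the same thing. This is an untested sketch:
the helper name is made up, and get_frag(), rq->wqe.info.num_frags, and the
flags / MLX5E_WQE_FRAG_SKIP_RELEASE usage are from memory, so please correct me
if the release path actually keys off something else.

/*
 * Untested sketch, not a real patch: mark every frag of a wqe that
 * mlx5e_alloc_rx_wqes gave up on, so that a later release pass sees
 * SKIP_RELEASE and leaves the never-filled frag_page alone.
 */
static void mlx5e_mark_wqe_frags_skip_release(struct mlx5e_rq *rq, u16 ix)
{
	struct mlx5e_wqe_frag_info *frag = get_frag(rq, ix);
	int i;

	for (i = 0; i < rq->wqe.info.num_frags; i++, frag++)
		frag->flags |= BIT(MLX5E_WQE_FRAG_SKIP_RELEASE);
}

My worry was the other half of that: once the bit is set, something on the
alloc path would need to clear it again when the frag is successfully refilled,
otherwise we just trade the underflow for a leak. If your patch covers that,
great.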
Thanks!
-chris