Message-ID: <bbe06f51-459a-4973-9322-56b3d27427f1@wanadoo.fr>
Date: Mon, 26 Aug 2024 19:14:55 +0200
From: Christophe JAILLET <christophe.jaillet@...adoo.fr>
To: Przemek Kitszel <przemyslaw.kitszel@...el.com>,
Dan Carpenter <dan.carpenter@...aro.org>
Cc: Tony Nguyen <anthony.l.nguyen@...el.com>,
"David S. Miller" <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
linux-kernel@...r.kernel.org, kernel-janitors@...r.kernel.org,
intel-wired-lan@...ts.osuosl.org, netdev@...r.kernel.org,
Pavan Kumar Linga <pavan.kumar.linga@...el.com>,
Alexander Lobakin <aleksander.lobakin@...el.com>
Subject: Re: [PATCH net-next] idpf: Slightly simplify memory management in
idpf_add_del_mac_filters()
On 26/08/2024 11:15, Przemek Kitszel wrote:
> On 8/23/24 11:10, Dan Carpenter wrote:
>> On Fri, Aug 23, 2024 at 08:23:29AM +0200, Christophe JAILLET wrote:
>>> In idpf_add_del_mac_filters(), filters are chunked up into multiple
>>> messages to avoid sending a control queue message buffer that is too
>>> large.
>>>
>>> Each chunk has up to IDPF_NUM_FILTERS_PER_MSG entries. So, except for
>>> the last iteration, which can be smaller, space for exactly
>>> IDPF_NUM_FILTERS_PER_MSG entries is allocated.
>>>
>>> There is no need to free and reallocate a smaller array just for the
>>> last iteration.
>>>
>>> This slightly simplifies the code and avoids an (unlikely) memory
>>> allocation failure.
>>>
>
> Thanks, that is indeed an improvement.
>
>>> Signed-off-by: Christophe JAILLET <christophe.jaillet@...adoo.fr>
>>> ---
>>> drivers/net/ethernet/intel/idpf/idpf_virtchnl.c | 7 +++++--
>>> 1 file changed, 5 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
>>> index 70986e12da28..b6f4b58e1094 100644
>>> --- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
>>> +++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
>>> @@ -3669,12 +3669,15 @@ int idpf_add_del_mac_filters(struct idpf_vport *vport,
>>> entries_size = sizeof(struct virtchnl2_mac_addr) * num_entries;
>>> buf_size = struct_size(ma_list, mac_addr_list, num_entries);
>>> - if (!ma_list || num_entries != IDPF_NUM_FILTERS_PER_MSG) {
>>> - kfree(ma_list);
>>> + if (!ma_list) {
>>> ma_list = kzalloc(buf_size, GFP_ATOMIC);
>>> if (!ma_list)
>>> return -ENOMEM;
>>> } else {
>>> + /* ma_list was allocated in the first iteration
>>> + * so IDPF_NUM_FILTERS_PER_MSG entries are
>>> + * available
>>> + */
>>> memset(ma_list, 0, buf_size);
>>> }
>>
>> It would be even nicer to move the ma_list allocation outside the loop:
>>
>> buf_size = struct_size(ma_list, mac_addr_list, IDPF_NUM_FILTERS_PER_MSG);
>> ma_list = kmalloc(buf_size, GFP_ATOMIC);
>
> good point
>
> I've opened the whole function for inspection and it asks for even more:
> as of now, we allocate an array in atomic context just to have a copy
> of some stuff from the spinlock-protected list.
>
> It would be good to do the allocation, as pointed out by Dan, prior to
> the iteration, then fill the buffer on the fly, sending whenever the next
> entry would not fit, plus once at the end. The sending procedure is safe
> to call under a spinlock.
If I understand correctly, you propose removing the initial copy into
mac_addr and holding &vport_config->mac_filter_list_lock until the end of
the function?
Is that it?
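If so, is the shape below roughly what you have in mind? (Completely
untested sketch, just to check I got the idea right: declarations are
omitted, I reuse the local names from the current function, and the
filter-list walk, the field names and the send_chunk() helper are
placeholders, not actual driver code.)

	/* One chunk allocated up front, filled on the fly under the lock,
	 * sent whenever it is full and once more at the end.
	 */
	buf_size = struct_size(ma_list, mac_addr_list, IDPF_NUM_FILTERS_PER_MSG);
	ma_list = kzalloc(buf_size, GFP_ATOMIC);
	if (!ma_list)
		return -ENOMEM;

	spin_lock_bh(&vport_config->mac_filter_list_lock);
	list_for_each_entry(f, &vport_config->user_config.mac_filter_list, list) {
		/* copy one filter into the current chunk (type handling, add/del
		 * filtering, etc. omitted)
		 */
		ether_addr_copy(ma_list->mac_addr_list[num_entries].addr, f->macaddr);

		if (++num_entries == IDPF_NUM_FILTERS_PER_MSG) {
			/* chunk is full: send it while still holding the lock */
			err = send_chunk(vport, ma_list, num_entries);	/* placeholder */
			if (err)
				break;
			num_entries = 0;
			memset(ma_list, 0, buf_size);
		}
	}
	if (!err && num_entries)
		err = send_chunk(vport, ma_list, num_entries);	/* last, partial chunk */
	spin_unlock_bh(&vport_config->mac_filter_list_lock);

	kfree(ma_list);
	return err;

If that is the idea, I have a couple of concerns about holding the lock
while sending: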
There is a wait_for_completion_timeout() in idpf_vc_xn_exec() and the
default timeout is IDPF_VC_XN_DEFAULT_TIMEOUT_MSEC (60 * 1000 ms).
So, should an issue occur and the timeout run to its end, we could hold
the 'mac_filter_list_lock' spinlock for up to 60 seconds.
Is that OK?
And in async update mode, idpf_mac_filter_async_handler() also takes
&vport_config->mac_filter_list_lock. Could we deadlock?
So, I'm not sure I understand what you propose, or the code in
idpf_add_del_mac_filters() and co.
>
> CCing author; CCing Olek to ask if there are already some refactors that
> would conflict with this.
I'll wait a few days for this feedback and then send a v2.
CJ
>
>>
>> regards,
>> dan carpenter
>>
>
>
>