Message-ID: <2896a4b2-2297-44cd-b4c7-a4d320298740@intel.com>
Date: Mon, 26 Aug 2024 11:15:47 +0200
From: Przemek Kitszel <przemyslaw.kitszel@...el.com>
To: Dan Carpenter <dan.carpenter@...aro.org>, Christophe JAILLET
<christophe.jaillet@...adoo.fr>
CC: Tony Nguyen <anthony.l.nguyen@...el.com>, "David S. Miller"
<davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski
<kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
<linux-kernel@...r.kernel.org>, <kernel-janitors@...r.kernel.org>,
<intel-wired-lan@...ts.osuosl.org>, <netdev@...r.kernel.org>, "Pavan Kumar
Linga" <pavan.kumar.linga@...el.com>, Alexander Lobakin
<aleksander.lobakin@...el.com>
Subject: Re: [PATCH net-next] idpf: Slightly simplify memory management in
idpf_add_del_mac_filters()
On 8/23/24 11:10, Dan Carpenter wrote:
> On Fri, Aug 23, 2024 at 08:23:29AM +0200, Christophe JAILLET wrote:
>> In idpf_add_del_mac_filters(), filters are chunked up into multiple
>> messages to avoid sending a control queue message buffer that is too large.
>>
>> Each chunk has up to IDPF_NUM_FILTERS_PER_MSG entries. So except for the
>> last iteration which can be smaller, space for exactly
>> IDPF_NUM_FILTERS_PER_MSG entries is allocated.
>>
>> There is no need to free and reallocate a smaller array just for the last
>> iteration.
>>
>> This slightly simplifies the code and avoids an (unlikely) memory
>> allocation failure.
>>
Thanks, that is indeed an improvement.
>> Signed-off-by: Christophe JAILLET <christophe.jaillet@...adoo.fr>
>> ---
>> drivers/net/ethernet/intel/idpf/idpf_virtchnl.c | 7 +++++--
>> 1 file changed, 5 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
>> index 70986e12da28..b6f4b58e1094 100644
>> --- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
>> +++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
>> @@ -3669,12 +3669,15 @@ int idpf_add_del_mac_filters(struct idpf_vport *vport,
>> entries_size = sizeof(struct virtchnl2_mac_addr) * num_entries;
>> buf_size = struct_size(ma_list, mac_addr_list, num_entries);
>>
>> - if (!ma_list || num_entries != IDPF_NUM_FILTERS_PER_MSG) {
>> - kfree(ma_list);
>> + if (!ma_list) {
>> ma_list = kzalloc(buf_size, GFP_ATOMIC);
>> if (!ma_list)
>> return -ENOMEM;
>> } else {
>> + /* ma_list was allocated in the first iteration
>> + * so IDPF_NUM_FILTERS_PER_MSG entries are
>> + * available
>> + */
>> memset(ma_list, 0, buf_size);
>> }
>
> It would be even nicer to move the ma_list allocation outside the loop:
>
> buf_size = struct_size(ma_list, mac_addr_list, IDPF_NUM_FILTERS_PER_MSG);
> ma_list = kmalloc(buf_size, GFP_ATOMIC);
good point
I've opened the whole function for inspection and it asks for even more:
as of now, we allocate an array in atomic context just to keep a copy of
some entries from the spinlock-protected list.
It would be good to do the allocation prior to the iteration, as Dan
pointed out, and fill the buffer on the fly, sending whenever the next
entry would not fit, plus once at the end for the remainder. The sending
procedure is safe to call under a spinlock. Rough sketch below.
CCing the author; CCing Olek to ask if there are already some refactors
that would conflict with this.
>
> regards,
> dan carpenter
>