Message-ID: <85bdedf5-930d-c47c-fee0-bb4fee480e42@intel.com>
Date: Wed, 10 May 2023 11:33:33 -0700
From: Jacob Keller <jacob.e.keller@...el.com>
To: Simon Horman <simon.horman@...igine.com>, Tony Nguyen
<anthony.l.nguyen@...el.com>
CC: <davem@...emloft.net>, <kuba@...nel.org>, <pabeni@...hat.com>,
<edumazet@...gle.com>, <netdev@...r.kernel.org>, Piotr Raczynski
<piotr.raczynski@...el.com>, Michal Swiatkowski
<michal.swiatkowski@...ux.intel.com>, Pucha Himasekhar Reddy
<himasekharx.reddy.pucha@...el.com>
Subject: Re: [PATCH net-next 6/8] ice: add individual interrupt allocation
On 5/10/2023 1:54 AM, Simon Horman wrote:
> On Tue, May 09, 2023 at 10:00:46AM -0700, Tony Nguyen wrote:
>> From: Piotr Raczynski <piotr.raczynski@...el.com>
>>
>> Currently interrupt allocations, depending on a feature are distributed
>> in batches. Also, after allocation there is a series of operations that
>> distributes per irq settings through that batch of interrupts.
>>
>> Although driver does not yet support dynamic interrupt allocation, keep
>> allocated interrupts in a pool and add allocation abstraction logic to
>> make code more flexible. Keep per interrupt information in the
>> ice_q_vector structure, which yields ice_vsi::base_vector redundant.
>> Also, as a result there are a few functions that can be removed.
>>
>> Reviewed-by: Jacob Keller <jacob.e.keller@...el.com>
>> Reviewed-by: Michal Swiatkowski <michal.swiatkowski@...ux.intel.com>
>> Reviewed-by: Simon Horman <simon.horman@...igine.com>
>> Tested-by: Pucha Himasekhar Reddy <himasekharx.reddy.pucha@...el.com> (A Contingent worker at Intel)
>> Signed-off-by: Piotr Raczynski <piotr.raczynski@...el.com>
>> Signed-off-by: Tony Nguyen <anthony.l.nguyen@...el.com>
>
> ...
>
>> diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c
>> index 1911d644dfa8..7dd7a0f32471 100644
>> --- a/drivers/net/ethernet/intel/ice/ice_base.c
>> +++ b/drivers/net/ethernet/intel/ice/ice_base.c
>> @@ -105,8 +105,7 @@ static int ice_vsi_alloc_q_vector(struct ice_vsi *vsi, u16 v_idx)
>> struct ice_q_vector *q_vector;
>>
>> /* allocate q_vector */
>> - q_vector = devm_kzalloc(ice_pf_to_dev(pf), sizeof(*q_vector),
>> - GFP_KERNEL);
>> + q_vector = kzalloc(sizeof(*q_vector), GFP_KERNEL);
Especially since we moved away from devm_kzalloc here, the allocation
won't automatically get released at driver unload. (Which is fine; I
think we're slowly moving away from devm here because we never really
committed to using it properly and ended up calling devm_kfree a lot
anyway...)
>> if (!q_vector)
>> return -ENOMEM;
>>
>> @@ -118,9 +117,31 @@ static int ice_vsi_alloc_q_vector(struct ice_vsi *vsi, u16 v_idx)
>> q_vector->rx.itr_mode = ITR_DYNAMIC;
>> q_vector->tx.type = ICE_TX_CONTAINER;
>> q_vector->rx.type = ICE_RX_CONTAINER;
>> + q_vector->irq.index = -ENOENT;
>>
>> - if (vsi->type == ICE_VSI_VF)
>> + if (vsi->type == ICE_VSI_VF) {
>> + q_vector->reg_idx = ice_calc_vf_reg_idx(vsi->vf, q_vector);
>> goto out;
>> + } else if (vsi->type == ICE_VSI_CTRL && vsi->vf) {
>> + struct ice_vsi *ctrl_vsi = ice_get_vf_ctrl_vsi(pf, vsi);
>> +
>> + if (ctrl_vsi) {
>> + if (unlikely(!ctrl_vsi->q_vectors))
>> + return -ENOENT;
>
> q_vector appears to be leaked here.
Yeah, that looks like a leak to me too. We allocate q_vector near the
start of the function and then perform per-VSI-type lookup checks here
to get the index; the intent was to obtain the irq value from the CTRL
VSI. This case is probably very rare, since it would be unlikely to
have a ctrl_vsi pointer but *not* have its q_vectors array set up yet.
Still, it would be best to refactor this a bit to use a cleanup exit
label, so that missing the kfree on an error path is harder to do.
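For illustration, the single-exit-label shape I have in mind looks
roughly like this. This is a standalone sketch with placeholder names
and errno stand-ins, not the actual ice driver code (the real function
would keep the ICE_VSI_VF / ICE_VSI_CTRL branches and ice_alloc_irq);
the point is just that every failure after the allocation funnels
through one label:

```c
#include <stdlib.h>

/* Placeholder for the per-vector state; the real ice_q_vector is larger. */
struct q_vector {
	int irq_index;
};

/* Simulated fallible lookup step standing in for the CTRL VSI check. */
static int lookup_ctrl_irq(int have_q_vectors, int *irq_index)
{
	if (!have_q_vectors)
		return -2; /* -ENOENT stand-in */
	*irq_index = 7;
	return 0;
}

/*
 * All error paths after the allocation jump to err_free, so adding a
 * new early "return" in the middle can't silently leak q_vector.
 */
int alloc_q_vector(int have_q_vectors, struct q_vector **out)
{
	struct q_vector *q_vector;
	int err;

	q_vector = calloc(1, sizeof(*q_vector));
	if (!q_vector)
		return -12; /* -ENOMEM stand-in */

	err = lookup_ctrl_irq(have_q_vectors, &q_vector->irq_index);
	if (err)
		goto err_free;

	*out = q_vector;
	return 0;

err_free:
	free(q_vector);
	return err;
}
```

In the patch, that would mean routing the `return -ENOENT` in the
CTRL-VSI branch through such a label instead of returning directly,
the same way the ice_alloc_irq() failure path already frees q_vector.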
>
>> + q_vector->irq = ctrl_vsi->q_vectors[0]->irq;
>> + goto skip_alloc;
>> + }
>> + }
>> +
>> + q_vector->irq = ice_alloc_irq(pf);
>> + if (q_vector->irq.index < 0) {
>> + kfree(q_vector);
>> + return -ENOMEM;
>> + }
>> +
>> +skip_alloc:
>> + q_vector->reg_idx = q_vector->irq.index;
>> +
>> /* only set affinity_mask if the CPU is online */
>> if (cpu_online(v_idx))
>> cpumask_set_cpu(v_idx, &q_vector->affinity_mask);
>
> ...