Message-ID: <ZFtb1Uyr2j+wEM+g@corigine.com>
Date: Wed, 10 May 2023 10:54:45 +0200
From: Simon Horman <simon.horman@...igine.com>
To: Tony Nguyen <anthony.l.nguyen@...el.com>
Cc: davem@...emloft.net, kuba@...nel.org, pabeni@...hat.com,
edumazet@...gle.com, netdev@...r.kernel.org,
Piotr Raczynski <piotr.raczynski@...el.com>,
Jacob Keller <jacob.e.keller@...el.com>,
Michal Swiatkowski <michal.swiatkowski@...ux.intel.com>,
Pucha Himasekhar Reddy <himasekharx.reddy.pucha@...el.com>
Subject: Re: [PATCH net-next 6/8] ice: add individual interrupt allocation
On Tue, May 09, 2023 at 10:00:46AM -0700, Tony Nguyen wrote:
> From: Piotr Raczynski <piotr.raczynski@...el.com>
>
> Currently, interrupt allocations are distributed in batches, depending
> on the feature. Also, after allocation there is a series of operations
> that distributes per-IRQ settings through that batch of interrupts.
>
> Although the driver does not yet support dynamic interrupt allocation,
> keep allocated interrupts in a pool and add allocation abstraction
> logic to make the code more flexible. Keep per-interrupt information
> in the ice_q_vector structure, which renders ice_vsi::base_vector
> redundant. Also, as a result, a few functions can be removed.
>
> Reviewed-by: Jacob Keller <jacob.e.keller@...el.com>
> Reviewed-by: Michal Swiatkowski <michal.swiatkowski@...ux.intel.com>
> Reviewed-by: Simon Horman <simon.horman@...igine.com>
> Tested-by: Pucha Himasekhar Reddy <himasekharx.reddy.pucha@...el.com> (A Contingent worker at Intel)
> Signed-off-by: Piotr Raczynski <piotr.raczynski@...el.com>
> Signed-off-by: Tony Nguyen <anthony.l.nguyen@...el.com>
...
> diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c
> index 1911d644dfa8..7dd7a0f32471 100644
> --- a/drivers/net/ethernet/intel/ice/ice_base.c
> +++ b/drivers/net/ethernet/intel/ice/ice_base.c
> @@ -105,8 +105,7 @@ static int ice_vsi_alloc_q_vector(struct ice_vsi *vsi, u16 v_idx)
> struct ice_q_vector *q_vector;
>
> /* allocate q_vector */
> - q_vector = devm_kzalloc(ice_pf_to_dev(pf), sizeof(*q_vector),
> - GFP_KERNEL);
> + q_vector = kzalloc(sizeof(*q_vector), GFP_KERNEL);
> if (!q_vector)
> return -ENOMEM;
>
> @@ -118,9 +117,31 @@ static int ice_vsi_alloc_q_vector(struct ice_vsi *vsi, u16 v_idx)
> q_vector->rx.itr_mode = ITR_DYNAMIC;
> q_vector->tx.type = ICE_TX_CONTAINER;
> q_vector->rx.type = ICE_RX_CONTAINER;
> + q_vector->irq.index = -ENOENT;
>
> - if (vsi->type == ICE_VSI_VF)
> + if (vsi->type == ICE_VSI_VF) {
> + q_vector->reg_idx = ice_calc_vf_reg_idx(vsi->vf, q_vector);
> goto out;
> + } else if (vsi->type == ICE_VSI_CTRL && vsi->vf) {
> + struct ice_vsi *ctrl_vsi = ice_get_vf_ctrl_vsi(pf, vsi);
> +
> + if (ctrl_vsi) {
> + if (unlikely(!ctrl_vsi->q_vectors))
> + return -ENOENT;
q_vector appears to be leaked here: this path returns -ENOENT without
freeing the q_vector allocated by kzalloc() above.
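
Perhaps something like this (untested sketch, mirroring the kfree()
error path used after ice_alloc_irq() below):

		if (unlikely(!ctrl_vsi->q_vectors)) {
			/* free the q_vector allocated above before bailing */
			kfree(q_vector);
			return -ENOENT;
		}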
> + q_vector->irq = ctrl_vsi->q_vectors[0]->irq;
> + goto skip_alloc;
> + }
> + }
> +
> + q_vector->irq = ice_alloc_irq(pf);
> + if (q_vector->irq.index < 0) {
> + kfree(q_vector);
> + return -ENOMEM;
> + }
> +
> +skip_alloc:
> + q_vector->reg_idx = q_vector->irq.index;
> +
> /* only set affinity_mask if the CPU is online */
> if (cpu_online(v_idx))
> cpumask_set_cpu(v_idx, &q_vector->affinity_mask);
...