Message-ID: <1525907973.19800.1.camel@mellanox.com>
Date: Wed, 9 May 2018 23:19:36 +0000
From: Saeed Mahameed <saeedm@...lanox.com>
To: "linux@...ck-us.net" <linux@...ck-us.net>,
"tglx@...utronix.de" <tglx@...utronix.de>
CC: "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
Max Gurtovoy <maxg@...lanox.com>,
Israel Rukshin <israelr@...lanox.com>,
"linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
Matan Barak <matanb@...lanox.com>,
"dledford@...hat.com" <dledford@...hat.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] net/mlx5: Fix mlx5_get_vector_affinity function
On Wed, 2018-05-09 at 15:19 -0700, Guenter Roeck wrote:
> On Sun, May 06, 2018 at 09:33:26AM +0200, Thomas Gleixner wrote:
> > On Sat, 5 May 2018, Guenter Roeck wrote:
> >
> > > On Thu, Apr 12, 2018 at 09:49:11AM +0000, Israel Rukshin wrote:
> > > > Adding the vector offset when calling mlx5_vector2eqn() is
> > > > wrong, because mlx5_vector2eqn() checks whether the EQ index
> > > > equals the vector number, and the internal completion vectors
> > > > that mlx5 allocates don't get an EQ index.
> > > >
> > > > The second problem is that using the effective_affinity_mask
> > > > gives the same CPU for different vectors, which leads to
> > > > unmapped queues when it is called from blk_mq_rdma_map_queues().
> > > > This doesn't happen when using the affinity_hint mask.
> > > >
> > >
> > > Except that affinity_hint is only defined if SMP is enabled.
> > > Without:
> > >
> > > include/linux/mlx5/driver.h: In function ‘mlx5_get_vector_affinity_hint’:
> > > include/linux/mlx5/driver.h:1299:13: error: ‘struct irq_desc’ has no member named ‘affinity_hint’
> > >
> > > Note that this is the only use of affinity_hint outside kernel/irq.
> > > Don't other drivers have similar problems?
> >
> > Aside from that.
> >
> > > >  static inline const struct cpumask *
> > > > -mlx5_get_vector_affinity(struct mlx5_core_dev *dev, int vector)
> > > > +mlx5_get_vector_affinity_hint(struct mlx5_core_dev *dev, int vector)
> > > >  {
> > > > -	const struct cpumask *mask;
> > > >  	struct irq_desc *desc;
> > > >  	unsigned int irq;
> > > >  	int eqn;
> > > >  	int err;
> > > >
> > > > -	err = mlx5_vector2eqn(dev, MLX5_EQ_VEC_COMP_BASE + vector, &eqn, &irq);
> > > > +	err = mlx5_vector2eqn(dev, vector, &eqn, &irq);
> > > >  	if (err)
> > > >  		return NULL;
> > > >
> > > >  	desc = irq_to_desc(irq);
> > > > -#ifdef CONFIG_GENERIC_IRQ_EFFECTIVE_AFF_MASK
> > > > -	mask = irq_data_get_effective_affinity_mask(&desc->irq_data);
> > > > -#else
> > > > -	mask = desc->irq_common_data.affinity;
> > > > -#endif
> > > > -	return mask;
> > > > +	return desc->affinity_hint;
> >
> > NAK.
> >
>
> The offending patch is upstream, breaking non-SMP test builds, and I
> have not seen any feedback from the submitter. Any suggestion on how
> to proceed?
>
> Guenter
Hi Guenter and Thomas,
Max and Israel are handling this internally to find a solution that
provides the needed functionality for rdma and addresses your comments.
Thanks,
Saeed.