Message-ID: <9c5b7ae5-8578-3008-5e78-02e77e121cda@nvidia.com>
Date: Thu, 24 Jun 2021 02:06:46 +0300
From: Max Gurtovoy <mgurtovoy@...dia.com>
To: Leon Romanovsky <leon@...nel.org>,
Doug Ledford <dledford@...hat.com>,
Jason Gunthorpe <jgg@...dia.com>
CC: Avihai Horon <avihaih@...dia.com>, <linux-kernel@...r.kernel.org>,
<linux-rdma@...r.kernel.org>, Christoph Hellwig <hch@....de>,
Bart Van Assche <bvanassche@....org>,
Tom Talpey <tom@...pey.com>,
Santosh Shilimkar <santosh.shilimkar@...cle.com>,
Chuck Lever III <chuck.lever@...cle.com>,
Keith Busch <kbusch@...nel.org>,
David Laight <David.Laight@...LAB.COM>,
Honggang LI <honli@...hat.com>
Subject: Re: [PATCH v2 rdma-next] RDMA/mlx5: Enable Relaxed Ordering by
default for kernel ULPs
On 6/9/2021 2:05 PM, Leon Romanovsky wrote:
> From: Avihai Horon <avihaih@...dia.com>
>
> Relaxed Ordering is a capability that can only benefit users that support
> it. All kernel ULPs should support Relaxed Ordering, as they are designed
> to read data only after observing the CQE and use the DMA API correctly.
>
> Hence, implicitly enable Relaxed Ordering by default for kernel ULPs.
>
> Signed-off-by: Avihai Horon <avihaih@...dia.com>
> Signed-off-by: Leon Romanovsky <leonro@...dia.com>
> ---
> Changelog:
> v2:
> * Dropped IB/core patch and set RO implicitly in mlx5 exactly like in
> eth side of mlx5 driver.
> v1: https://lore.kernel.org/lkml/cover.1621505111.git.leonro@nvidia.com
> * Enabled by default RO in IB/core instead of changing all users
> v0: https://lore.kernel.org/lkml/20210405052404.213889-1-leon@kernel.org
> ---
> drivers/infiniband/hw/mlx5/mr.c | 10 ++++++----
> drivers/infiniband/hw/mlx5/wr.c | 5 ++++-
> 2 files changed, 10 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
> index 3363cde85b14..2182e76ae734 100644
> --- a/drivers/infiniband/hw/mlx5/mr.c
> +++ b/drivers/infiniband/hw/mlx5/mr.c
> @@ -69,6 +69,7 @@ static void set_mkc_access_pd_addr_fields(void *mkc, int acc, u64 start_addr,
> struct ib_pd *pd)
> {
> struct mlx5_ib_dev *dev = to_mdev(pd->device);
> + bool ro_pci_enabled = pcie_relaxed_ordering_enabled(dev->mdev->pdev);
>
> MLX5_SET(mkc, mkc, a, !!(acc & IB_ACCESS_REMOTE_ATOMIC));
> MLX5_SET(mkc, mkc, rw, !!(acc & IB_ACCESS_REMOTE_WRITE));
> @@ -78,10 +79,10 @@ static void set_mkc_access_pd_addr_fields(void *mkc, int acc, u64 start_addr,
>
> if (MLX5_CAP_GEN(dev->mdev, relaxed_ordering_write))
> MLX5_SET(mkc, mkc, relaxed_ordering_write,
> - !!(acc & IB_ACCESS_RELAXED_ORDERING));
> + acc & IB_ACCESS_RELAXED_ORDERING && ro_pci_enabled);
> if (MLX5_CAP_GEN(dev->mdev, relaxed_ordering_read))
> MLX5_SET(mkc, mkc, relaxed_ordering_read,
> - !!(acc & IB_ACCESS_RELAXED_ORDERING));
> + acc & IB_ACCESS_RELAXED_ORDERING && ro_pci_enabled);
Jason,

If it's still possible to add a small change, it would be nice to avoid
evaluating "acc & IB_ACCESS_RELAXED_ORDERING && ro_pci_enabled" twice.
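The suggestion amounts to hoisting the repeated expression into a single
local before the two MLX5_SET() calls. A rough standalone sketch of that
refactor (not the actual mlx5 code: MLX5_SET and the capability checks are
stubbed out here, and the struct, function name, and mask value are
illustrative):

```c
#include <stdbool.h>
#include <assert.h>

/* Hypothetical stand-in for the mkc fields so the refactor can be
 * exercised outside the kernel; the real code writes them via MLX5_SET(). */
struct mkc_stub {
	bool relaxed_ordering_write;
	bool relaxed_ordering_read;
};

#define IB_ACCESS_RELAXED_ORDERING (1 << 20)	/* illustrative value */

/* Compute "acc & IB_ACCESS_RELAXED_ORDERING && ro_pci_enabled" once and
 * reuse it for both fields; cap_ro_write/cap_ro_read stand in for the
 * MLX5_CAP_GEN(dev->mdev, relaxed_ordering_*) checks. */
static void set_relaxed_ordering(struct mkc_stub *mkc, int acc,
				 bool ro_pci_enabled,
				 bool cap_ro_write, bool cap_ro_read)
{
	bool relaxed = (acc & IB_ACCESS_RELAXED_ORDERING) && ro_pci_enabled;

	if (cap_ro_write)
		mkc->relaxed_ordering_write = relaxed;
	if (cap_ro_read)
		mkc->relaxed_ordering_read = relaxed;
}
```

With this shape the compiler emits one evaluation regardless of how many
capability branches use the result.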