Message-ID: <ZPsuscPwNslStltB@ziepe.ca>
Date: Fri, 8 Sep 2023 11:24:49 -0300
From: Jason Gunthorpe <jgg@...pe.ca>
To: Daisuke Matsuda <matsuda-daisuke@...itsu.com>
Cc: linux-rdma@...r.kernel.org, leon@...nel.org, zyjzyj2000@...il.com,
linux-kernel@...r.kernel.org, rpearsonhpe@...il.com,
yangx.jy@...itsu.com, lizhijian@...itsu.com, y-goto@...itsu.com
Subject: Re: [PATCH for-next v6 5/7] RDMA/rxe: Allow registering MRs for
On-Demand Paging
On Fri, Sep 08, 2023 at 03:26:46PM +0900, Daisuke Matsuda wrote:
> diff --git a/drivers/infiniband/sw/rxe/rxe_odp.c b/drivers/infiniband/sw/rxe/rxe_odp.c
> index 834fb1a84800..713bef9161e3 100644
> --- a/drivers/infiniband/sw/rxe/rxe_odp.c
> +++ b/drivers/infiniband/sw/rxe/rxe_odp.c
> @@ -32,6 +32,31 @@ static void rxe_mr_unset_xarray(struct rxe_mr *mr, unsigned long start,
> xas_unlock(&xas);
> }
>
> +static void rxe_mr_set_xarray(struct rxe_mr *mr, unsigned long start,
> + unsigned long end, unsigned long *pfn_list)
> +{
> + unsigned long lower = rxe_mr_iova_to_index(mr, start);
> + unsigned long upper = rxe_mr_iova_to_index(mr, end - 1);
> + struct page *page;
> + void *entry;
> +
> + XA_STATE(xas, &mr->page_list, lower);
> +
> + /* ib_umem_odp_unmap_dma_pages() ensures pages are HMM_PFN_VALID */
> + xas_lock(&xas);
> + while (true) {
> + page = hmm_pfn_to_page(pfn_list[xas.xa_index]);
> + xas_store(&xas, page);
> +
> + entry = xas_next(&xas);
> + if (xas_retry(&xas, entry) || (xas.xa_index <= upper))
> + continue;
> +
> + break;
> + }
while (xas.xa_index <= upper) {
xas_store(&xas, hmm_pfn_to_page(pfn_list[xas.xa_index]));
xas_next(&xas);
}
Again, no need for retries - xas_retry() only matters for lockless
readers, and this loop holds the xa lock the whole time.
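
For reference, a minimal sketch of the helper with that simplification
applied (reconstructed from the quoted patch; the assumption that the
xarray slots were already populated at registration time, so xas_store()
never has to allocate under the lock, is mine and not confirmed in this
thread):

static void rxe_mr_set_xarray(struct rxe_mr *mr, unsigned long start,
			      unsigned long end, unsigned long *pfn_list)
{
	unsigned long lower = rxe_mr_iova_to_index(mr, start);
	unsigned long upper = rxe_mr_iova_to_index(mr, end - 1);

	XA_STATE(xas, &mr->page_list, lower);

	/* ib_umem_odp_unmap_dma_pages() ensures pages are HMM_PFN_VALID */
	xas_lock(&xas);
	/*
	 * The xa lock is held across the whole loop, so retry entries
	 * cannot be observed and xas_retry() is unnecessary. Assumes the
	 * slots were preallocated when the MR was registered, so
	 * xas_store() only overwrites existing entries.
	 */
	while (xas.xa_index <= upper) {
		xas_store(&xas, hmm_pfn_to_page(pfn_list[xas.xa_index]));
		xas_next(&xas);
	}
	xas_unlock(&xas);
}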
Jason