Message-ID:
 <OS3PR01MB986527D371D3840D1534A555E54A2@OS3PR01MB9865.jpnprd01.prod.outlook.com>
Date: Mon, 28 Oct 2024 07:25:50 +0000
From: "Daisuke Matsuda (Fujitsu)" <matsuda-daisuke@...itsu.com>
To: 'Zhu Yanjun' <yanjun.zhu@...ux.dev>, "linux-rdma@...r.kernel.org"
	<linux-rdma@...r.kernel.org>, "leon@...nel.org" <leon@...nel.org>,
	"jgg@...pe.ca" <jgg@...pe.ca>, "zyjzyj2000@...il.com" <zyjzyj2000@...il.com>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"rpearsonhpe@...il.com" <rpearsonhpe@...il.com>, "Zhijian Li (Fujitsu)"
	<lizhijian@...itsu.com>
Subject: RE: [PATCH for-next v8 3/6] RDMA/rxe: Add page invalidation support

On Sun, Oct 13, 2024 3:16 PM Zhu Yanjun wrote:
> > On 2024/10/9 9:59, Daisuke Matsuda wrote:
> > On page invalidation, an MMU notifier callback is invoked to unmap DMA
> > addresses and update the driver page table (umem_odp->dma_list). It also
> > sets the corresponding entries in the MR xarray to NULL to prevent any access.
> > The callback is registered when an ODP-enabled MR is created.
> >
> > Signed-off-by: Daisuke Matsuda <matsuda-daisuke@...itsu.com>
> > ---
> >   drivers/infiniband/sw/rxe/Makefile  |  2 +
> >   drivers/infiniband/sw/rxe/rxe_odp.c | 57 +++++++++++++++++++++++++++++
> >   2 files changed, 59 insertions(+)
> >   create mode 100644 drivers/infiniband/sw/rxe/rxe_odp.c
> >
> > diff --git a/drivers/infiniband/sw/rxe/Makefile b/drivers/infiniband/sw/rxe/Makefile
> > index 5395a581f4bb..93134f1d1d0c 100644
> > --- a/drivers/infiniband/sw/rxe/Makefile
> > +++ b/drivers/infiniband/sw/rxe/Makefile
> > @@ -23,3 +23,5 @@ rdma_rxe-y := \
> >   	rxe_task.o \
> >   	rxe_net.o \
> >   	rxe_hw_counters.o
> > +
> > +rdma_rxe-$(CONFIG_INFINIBAND_ON_DEMAND_PAGING) += rxe_odp.o
> > diff --git a/drivers/infiniband/sw/rxe/rxe_odp.c b/drivers/infiniband/sw/rxe/rxe_odp.c
> > new file mode 100644
> > index 000000000000..ea55b79be0c6
> > --- /dev/null
> > +++ b/drivers/infiniband/sw/rxe/rxe_odp.c
> > @@ -0,0 +1,57 @@
> > +// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
> > +/*
> > + * Copyright (c) 2022-2023 Fujitsu Ltd. All rights reserved.
> > + */
> > +
> > +#include <linux/hmm.h>
> > +
> > +#include <rdma/ib_umem_odp.h>
> > +
> > +#include "rxe.h"
> > +
> > +static void rxe_mr_unset_xarray(struct rxe_mr *mr, unsigned long start,
> > +				unsigned long end)
> > +{
> > +	unsigned long upper = rxe_mr_iova_to_index(mr, end - 1);
> > +	unsigned long lower = rxe_mr_iova_to_index(mr, start);
> > +	void *entry;
> > +
> > +	XA_STATE(xas, &mr->page_list, lower);
> > +
> > +	/* make elements in xarray NULL */
> > +	xas_lock(&xas);
> > +	xas_for_each(&xas, entry, upper)
> > +		xas_store(&xas, NULL);
> > +	xas_unlock(&xas);
> > +}
> > +
> > +static bool rxe_ib_invalidate_range(struct mmu_interval_notifier *mni,
> > +				    const struct mmu_notifier_range *range,
> > +				    unsigned long cur_seq)
> > +{
> > +	struct ib_umem_odp *umem_odp =
> > +		container_of(mni, struct ib_umem_odp, notifier);
> > +	struct rxe_mr *mr = umem_odp->private;
> > +	unsigned long start, end;
> > +
> > +	if (!mmu_notifier_range_blockable(range))
> > +		return false;
> > +
> > +	mutex_lock(&umem_odp->umem_mutex);
> 
> guard(mutex)(&umem_odp->umem_mutex);
> 
> It seems that the above is more popular.

Thanks for the comment.

I have no objection to your suggestion, since an increasing number of
kernel components use the "guard(mutex)" syntax these days. However, I
would rather make the change across the whole infiniband subsystem at
once, because there are multiple mutex lock/unlock pairs to be converted.
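
For illustration, the callback would then look roughly like this (a
rough sketch of the suggested conversion only, untested; guard(mutex)
comes from <linux/cleanup.h> via the mutex guard in <linux/mutex.h>):

static bool rxe_ib_invalidate_range(struct mmu_interval_notifier *mni,
				    const struct mmu_notifier_range *range,
				    unsigned long cur_seq)
{
	struct ib_umem_odp *umem_odp =
		container_of(mni, struct ib_umem_odp, notifier);
	struct rxe_mr *mr = umem_odp->private;
	unsigned long start, end;

	if (!mmu_notifier_range_blockable(range))
		return false;

	/* released automatically when the function returns */
	guard(mutex)(&umem_odp->umem_mutex);

	mmu_interval_set_seq(mni, cur_seq);

	start = max_t(u64, ib_umem_start(umem_odp), range->start);
	end = min_t(u64, ib_umem_end(umem_odp), range->end);

	rxe_mr_unset_xarray(mr, start, end);

	/* update umem_odp->dma_list */
	ib_umem_odp_unmap_dma_pages(umem_odp, start, end);

	return true;
}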

Regards,
Daisuke Matsuda

> 
> Zhu Yanjun
> > +	mmu_interval_set_seq(mni, cur_seq);
> > +
> > +	start = max_t(u64, ib_umem_start(umem_odp), range->start);
> > +	end = min_t(u64, ib_umem_end(umem_odp), range->end);
> > +
> > +	rxe_mr_unset_xarray(mr, start, end);
> > +
> > +	/* update umem_odp->dma_list */
> > +	ib_umem_odp_unmap_dma_pages(umem_odp, start, end);
> > +
> > +	mutex_unlock(&umem_odp->umem_mutex);
> > +	return true;
> > +}
> > +
> > +const struct mmu_interval_notifier_ops rxe_mn_ops = {
> > +	.invalidate = rxe_ib_invalidate_range,
> > +};
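
For completeness, the commit message notes that these ops are registered
when an ODP-enabled MR is created; that registration is not part of this
hunk. A hedged sketch of how it would typically be wired up (the function
name rxe_odp_mr_init() and the surrounding variables are illustrative,
not taken from this patch):

static int rxe_odp_mr_init(struct rxe_dev *rxe, u64 start, u64 length,
			   int access, struct rxe_mr *mr)
{
	struct ib_umem_odp *umem_odp;

	/* register rxe_mn_ops as the MMU interval notifier callbacks */
	umem_odp = ib_umem_odp_get(&rxe->ib_dev, start, length, access,
				   &rxe_mn_ops);
	if (IS_ERR(umem_odp))
		return PTR_ERR(umem_odp);

	/* let the invalidate callback find the MR from the umem */
	umem_odp->private = mr;

	return 0;
}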
