Message-Id: <4c9da9a0-55f4-6cf4-53b8-e8a69744bf98@linux.ibm.com>
Date: Thu, 9 Jan 2020 15:54:35 +0100
From: Frederic Barrat <fbarrat@...ux.ibm.com>
To: "Alastair D'Silva" <alastair@....ibm.com>, alastair@...ilva.org
Cc: Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Paul Mackerras <paulus@...ba.org>,
Michael Ellerman <mpe@...erman.id.au>,
Andrew Donnellan <ajd@...ux.ibm.com>,
Arnd Bergmann <arnd@...db.de>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Dan Williams <dan.j.williams@...el.com>,
Vishal Verma <vishal.l.verma@...el.com>,
Dave Jiang <dave.jiang@...el.com>,
Keith Busch <keith.busch@...el.com>,
Ira Weiny <ira.weiny@...el.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Mauro Carvalho Chehab <mchehab+samsung@...nel.org>,
"David S. Miller" <davem@...emloft.net>,
Rob Herring <robh@...nel.org>,
Anton Blanchard <anton@...abs.org>,
Krzysztof Kozlowski <krzk@...nel.org>,
Mahesh Salgaonkar <mahesh@...ux.vnet.ibm.com>,
Madhavan Srinivasan <maddy@...ux.vnet.ibm.com>,
Cédric Le Goater <clg@...d.org>,
Anju T Sudhakar <anju@...ux.vnet.ibm.com>,
Hari Bathini <hbathini@...ux.ibm.com>,
Thomas Gleixner <tglx@...utronix.de>,
Greg Kurz <groug@...d.org>,
Nicholas Piggin <npiggin@...il.com>,
Masahiro Yamada <yamada.masahiro@...ionext.com>,
Alexey Kardashevskiy <aik@...abs.ru>,
linux-kernel@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org,
linux-nvdimm@...ts.01.org, linux-mm@...ck.org
Subject: Re: [PATCH v2 09/27] ocxl: Free detached contexts in
ocxl_context_detach_all()

On 03/12/2019 at 04:46, Alastair D'Silva wrote:
> From: Alastair D'Silva <alastair@...ilva.org>
>
> ocxl_context_detach_all() is called from ocxl_function_close(), so
> there is no reason to leave the contexts allocated, as the caller
> can do nothing useful with them at that point.
>
> This also has the side-effect of freeing any allocated IRQs
> within the context.
>
> Signed-off-by: Alastair D'Silva <alastair@...ilva.org>
> ---

I think this is wrong and probably unneeded. In ocxl (and I would assume
most drivers), we separate pretty clearly what is set up by the driver
framework when a device is probed from what is allocated by the users
(userland or scm). Contexts are allocated by the users, so they should be
freed only by them. That separation is also why we have some reference
counting on the afu and function structs, to make sure the core data
remains valid for as long as required.
Though it's a bit asking for trouble, the problem can be seen when
unbinding a function from the driver through sysfs. That ends up calling
ocxl_function_close() and therefore ocxl_context_detach_all(). However,
it's possible for a user process to still have an open file descriptor.
The context is detached and marked CLOSED, so any interaction with it
from the user will fail, but it should still be allocated so that it
remains valid if the user process makes a system call into the driver.
The context will be freed when the file descriptor is closed.
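
That lifetime looks roughly like this from the file descriptor side
(sketch only, the real handler in file.c differs in detail):

#include <linux/fs.h>

#include "ocxl_internal.h"	/* or the public header, for struct ocxl_context & friends */

/*
 * Sketch: the context may already have been detached and marked CLOSED
 * by ocxl_context_detach_all() on unbind, but it stays allocated until
 * here, and this release path is the only place that should free it.
 */
static int example_file_release(struct inode *inode, struct file *file)
{
	struct ocxl_context *ctx = file->private_data;

	ocxl_context_detach(ctx);
	ocxl_context_free(ctx);
	return 0;
}
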
I don't think this is needed for scm either, since you've now added the
context detach and free call in free_scm().

I would just drop this patch.

Fred
> drivers/misc/ocxl/context.c | 6 +++++-
> 1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/misc/ocxl/context.c b/drivers/misc/ocxl/context.c
> index 994563a078eb..6cb36ef96e09 100644
> --- a/drivers/misc/ocxl/context.c
> +++ b/drivers/misc/ocxl/context.c
> @@ -259,10 +259,11 @@ void ocxl_context_detach_all(struct ocxl_afu *afu)
> {
> struct ocxl_context *ctx;
> int tmp;
> + int rc;
>
> mutex_lock(&afu->contexts_lock);
> idr_for_each_entry(&afu->contexts_idr, ctx, tmp) {
> - ocxl_context_detach(ctx);
> + rc = ocxl_context_detach(ctx);
> /*
> * We are force detaching - remove any active mmio
> * mappings so userspace cannot interfere with the
> @@ -274,6 +275,9 @@ void ocxl_context_detach_all(struct ocxl_afu *afu)
> if (ctx->mapping)
> unmap_mapping_range(ctx->mapping, 0, 0, 1);
> mutex_unlock(&ctx->mapping_lock);
> +
> + if (rc != -EBUSY)
> + ocxl_context_free(ctx);
> }
> mutex_unlock(&afu->contexts_lock);
> }
>