Message-ID: <20210325171445.GE2356281@nvidia.com>
Date: Thu, 25 Mar 2021 14:14:45 -0300
From: Jason Gunthorpe <jgg@...dia.com>
To: Dan Williams <dan.j.williams@...el.com>
Cc: linux-cxl@...r.kernel.org, ira.weiny@...el.com,
vishal.l.verma@...el.com, alison.schofield@...el.com,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 3/4] cxl/mem: Do not rely on device_add() side effects for dev_set_name() failures
On Wed, Mar 24, 2021 at 02:02:01PM -0700, Dan Williams wrote:
> While device_add() will happen to catch dev_set_name() failures it is a
> broken pattern to follow given that the core may try to fall back to a
> different name.
>
> Add explicit checking for dev_set_name() failures to be cleaned up by
> put_device(). Skip cdev_device_add() and proceed directly to
> put_device() if setting the name fails.
>
> Fixes: b39cb1052a5c ("cxl/mem: Register CXL memX devices")
> Reported-by: Jason Gunthorpe <jgg@...dia.com>
> Signed-off-by: Dan Williams <dan.j.williams@...el.com>
> drivers/cxl/mem.c | 6 ++++--
> 1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
> index e53d573ae4ab..d615f183520c 100644
> --- a/drivers/cxl/mem.c
> +++ b/drivers/cxl/mem.c
> @@ -1204,12 +1204,14 @@ static int cxl_mem_add_memdev(struct cxl_mem *cxlm)
> dev->bus = &cxl_bus_type;
> dev->devt = MKDEV(cxl_mem_major, cxlmd->id);
> dev->type = &cxl_memdev_type;
> - dev_set_name(dev, "mem%d", cxlmd->id);
>
> cdev = &cxlmd->cdev;
> cdev_init(cdev, &cxl_memdev_fops);
>
> - rc = cdev_device_add(cdev, dev);
> + rc = dev_set_name(dev, "mem%d", cxlmd->id);
> + if (rc == 0)
> + rc = cdev_device_add(cdev, dev);
Success-oriented flow, please.

This is much nicer if you split the allocation; then the flow is
clean and simple. cxl_alloc_memdev() is undone by cxl_memdev_release(),
and I would reorder the code so they are above/below each other: it is
easy to check and understand when logically paired functions are on
the same screen.
static struct cxl_memdev *cxl_alloc_memdev(struct cxl_mem *cxlm)
{
	struct cxl_memdev *cxlmd;
	struct device *dev;
	int rc;

	cxlmd = kzalloc(sizeof(*cxlmd), GFP_KERNEL);
	if (!cxlmd)
		return ERR_PTR(-ENOMEM);

	init_completion(&cxlmd->ops_dead);
	cxlmd->cxlm = cxlm;

	rc = ida_alloc_range(&cxl_memdev_ida, 0, CXL_MEM_MAX_DEVS, GFP_KERNEL);
	if (rc < 0)
		goto err_free;
	cxlmd->id = rc;

	/*
	 * @cxlm is deallocated when the driver unbinds so operations
	 * that are using it need to hold a live reference.
	 */
	rc = percpu_ref_init(&cxlmd->ops_active, cxlmdev_ops_active_release, 0,
			     GFP_KERNEL);
	if (rc)
		goto err_id;

	dev = &cxlmd->dev;
	dev->parent = &cxlm->pdev->dev;
	dev->bus = &cxl_bus_type;
	dev->devt = MKDEV(cxl_mem_major, cxlmd->id);
	dev->type = &cxl_memdev_type;
	device_initialize(dev);
	return cxlmd;

err_id:
	ida_free(&cxl_memdev_ida, cxlmd->id);
err_free:
	kfree(cxlmd);
	return ERR_PTR(rc);
}
static int cxl_mem_add_memdev(struct cxl_mem *cxlm)
{
	struct cxl_memdev *cxlmd;
	struct device *dev;
	struct cdev *cdev;
	int rc;

	cxlmd = cxl_alloc_memdev(cxlm);
	if (IS_ERR(cxlmd))
		return PTR_ERR(cxlmd);
	dev = &cxlmd->dev;

	rc = dev_set_name(dev, "mem%d", cxlmd->id);
	if (rc)
		goto err;

	cdev = &cxlmd->cdev;
	cdev_init(cdev, &cxl_memdev_fops);

	/* Must be last */
	rc = cdev_device_add(cdev, dev);
	if (rc)
		goto err;

	return devm_add_action_or_reset(dev->parent, cxlmdev_unregister, cxlmd);

err:
	put_device(&cxlmd->dev);
	return rc;
}