Date:   Thu, 21 Apr 2022 10:30:25 +0200
From:   Thomas Zimmermann <tzimmermann@...e.de>
To:     Cai Huoqing <cai.huoqing@...ux.dev>
Cc:     Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
        Maxime Ripard <mripard@...nel.org>,
        David Airlie <airlied@...ux.ie>,
        Daniel Vetter <daniel@...ll.ch>,
        Sumit Semwal <sumit.semwal@...aro.org>,
        Christian König <christian.koenig@....com>,
        linux-kernel@...r.kernel.org, dri-devel@...ts.freedesktop.org,
        linux-media@...r.kernel.org, linaro-mm-sig@...ts.linaro.org
Subject: Re: [PATCH 2/2] drm/nvdla: Add driver support for NVDLA

(Resending, as some MLs didn't like the size of the original mail.)

Hi,

thanks for your submission. Some general comments:

   * Some functions are prefixed with dla_, others with nvdla_. The 
choice seems arbitrary to me. Please use nvdla_ consistently throughout 
the source code.

   * For reporting errors, please use drm_err(), drm_warn(), etc. I 
suggest rearranging the error messages so that they are not located in 
the innermost functions; there's a short example after this list.

   * Could you please split this patch into smaller pieces? It 
currently hits the size limits of some mailing lists. Maybe add the 
register constants in a separate patch.
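
For the error-reporting item, an untested example: the vmap failure 
message in dla_data_write() below could become

    drm_err(nvdla_dev->drm, "Failed to vmap dma_buf for handle=%d\n",
            handles[dst].handle);

with nvdla_dev obtained from driver_context. Unlike pr_err(), drm_err() 
identifies the device in the log.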

Please find more review comments below. It's not a full review, but at 
least something to start with.

Best regards
Thomas

On 19.04.22 15:59, Cai Huoqing wrote:
> The NVIDIA Deep Learning Accelerator (NVDLA) is an open source IP
> which is integrated into NVIDIA Jetson AGX Xavier,
> so add driver support for this accelerator.
> 
> Signed-off-by: Cai Huoqing <cai.huoqing@...ux.dev>
> ---
>   drivers/gpu/drm/Kconfig                 |    2 +
>   drivers/gpu/drm/Makefile                |    1 +
>   drivers/gpu/drm/nvdla/Kconfig           |    8 +
>   drivers/gpu/drm/nvdla/Makefile          |   19 +
>   drivers/gpu/drm/nvdla/nvdla_bdma.c      |  200 +
>   drivers/gpu/drm/nvdla/nvdla_cache.c     |  215 +
>   drivers/gpu/drm/nvdla/nvdla_cdp.c       |  300 ++
>   drivers/gpu/drm/nvdla/nvdla_common.c    |  295 ++
>   drivers/gpu/drm/nvdla/nvdla_common.h    |  835 +++
>   drivers/gpu/drm/nvdla/nvdla_conv.c      |  683 +++
>   drivers/gpu/drm/nvdla/nvdla_drm.c       |  695 +++
>   drivers/gpu/drm/nvdla/nvdla_drm.h       |  127 +
>   drivers/gpu/drm/nvdla/nvdla_engine.c    |  233 +
>   drivers/gpu/drm/nvdla/nvdla_engine.h    |  272 +
>   drivers/gpu/drm/nvdla/nvdla_gem.c       |  393 ++
>   drivers/gpu/drm/nvdla/nvdla_ioctl.h     |   99 +
>   drivers/gpu/drm/nvdla/nvdla_pdp.c       |  446 ++
>   drivers/gpu/drm/nvdla/nvdla_reg.h       | 6411 +++++++++++++++++++++++
>   drivers/gpu/drm/nvdla/nvdla_rubik.c     |  217 +
>   drivers/gpu/drm/nvdla/nvdla_sched.h     |   52 +
>   drivers/gpu/drm/nvdla/nvdla_scheduler.c | 1005 ++++
>   drivers/gpu/drm/nvdla/nvdla_sdp.c       |  728 +++
>   22 files changed, 13236 insertions(+)
>   create mode 100644 drivers/gpu/drm/nvdla/Kconfig
>   create mode 100644 drivers/gpu/drm/nvdla/Makefile
>   create mode 100644 drivers/gpu/drm/nvdla/nvdla_bdma.c
>   create mode 100644 drivers/gpu/drm/nvdla/nvdla_cache.c
>   create mode 100644 drivers/gpu/drm/nvdla/nvdla_cdp.c
>   create mode 100644 drivers/gpu/drm/nvdla/nvdla_common.c
>   create mode 100644 drivers/gpu/drm/nvdla/nvdla_common.h
>   create mode 100644 drivers/gpu/drm/nvdla/nvdla_conv.c
>   create mode 100644 drivers/gpu/drm/nvdla/nvdla_drm.c
>   create mode 100644 drivers/gpu/drm/nvdla/nvdla_drm.h
>   create mode 100644 drivers/gpu/drm/nvdla/nvdla_engine.c
>   create mode 100644 drivers/gpu/drm/nvdla/nvdla_engine.h
>   create mode 100644 drivers/gpu/drm/nvdla/nvdla_gem.c
>   create mode 100644 drivers/gpu/drm/nvdla/nvdla_ioctl.h
>   create mode 100644 drivers/gpu/drm/nvdla/nvdla_pdp.c
>   create mode 100644 drivers/gpu/drm/nvdla/nvdla_reg.h
>   create mode 100644 drivers/gpu/drm/nvdla/nvdla_rubik.c
>   create mode 100644 drivers/gpu/drm/nvdla/nvdla_sched.h
>   create mode 100644 drivers/gpu/drm/nvdla/nvdla_scheduler.c
>   create mode 100644 drivers/gpu/drm/nvdla/nvdla_sdp.c
> 
> diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
> index 5133c3f028ab..a55cff374abd 100644
> --- a/drivers/gpu/drm/Kconfig
> +++ b/drivers/gpu/drm/Kconfig
> @@ -409,6 +409,8 @@ source "drivers/gpu/drm/solomon/Kconfig"
>   
>   source "drivers/gpu/drm/sprd/Kconfig"
>   
> +source "drivers/gpu/drm/nvdla/Kconfig"
> +
>   config DRM_HYPERV
>   	tristate "DRM Support for Hyper-V synthetic video device"
>   	depends on DRM && PCI && MMU && HYPERV
> diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
> index c2ef5f9fce54..8fa3537f308a 100644
> --- a/drivers/gpu/drm/Makefile
> +++ b/drivers/gpu/drm/Makefile
> @@ -134,3 +134,4 @@ obj-y			+= gud/
>   obj-$(CONFIG_DRM_HYPERV) += hyperv/
>   obj-y			+= solomon/
>   obj-$(CONFIG_DRM_SPRD) += sprd/
> +obj-$(CONFIG_DRM_NVDLA) += nvdla/
> diff --git a/drivers/gpu/drm/nvdla/Kconfig b/drivers/gpu/drm/nvdla/Kconfig
> new file mode 100644
> index 000000000000..11c04f5da877
> --- /dev/null
> +++ b/drivers/gpu/drm/nvdla/Kconfig
> @@ -0,0 +1,8 @@
> +# SPDX-License-Identifier: GPL-2.0-only
> +config DRM_NVDLA
> +	tristate "NVDLA DRM"
> +	depends on DRM
> +	select DRM_GEM_CMA_HELPER
> +	help
> +	  Choose this option for open-source NVIDIA DLA support.
> +	  If M is selected the module will be called nvdla-drm.
> diff --git a/drivers/gpu/drm/nvdla/Makefile b/drivers/gpu/drm/nvdla/Makefile
> new file mode 100644
> index 000000000000..74f37d258f8d
> --- /dev/null
> +++ b/drivers/gpu/drm/nvdla/Makefile
> @@ -0,0 +1,19 @@
> +
> +# SPDX-License-Identifier: GPL-2.0
> +nvdla-drm-y := \
> +	nvdla_drm.o \
> +	nvdla_gem.o \
> +	nvdla_scheduler.o \
> +	nvdla_engine.o \
> +	nvdla_bdma.o \
> +	nvdla_conv.o \
> +	nvdla_sdp.o \
> +	nvdla_cdp.o \
> +	nvdla_pdp.o \
> +	nvdla_rubik.o \
> +	nvdla_cache.o \
> +	nvdla_common.o \
> +	nvdla_engine_data.o \
> +	nvdla_engine_debug.o \

File names should be sorted alphabetically here.
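
I.e.:

    nvdla-drm-y := \
        nvdla_bdma.o \
        nvdla_cache.o \
        nvdla_cdp.o \
        nvdla_common.o \
        nvdla_conv.o \
        nvdla_drm.o \
        ...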

> +
> +obj-$(CONFIG_DRM_NVDLA) += nvdla-drm.o
> diff --git a/drivers/gpu/drm/nvdla/nvdla_bdma.c b/drivers/gpu/drm/nvdla/nvdla_bdma.c
> new file mode 100644
> index 000000000000..225613f27acf
> --- /dev/null
[...]


> diff --git a/drivers/gpu/drm/nvdla/nvdla_drm.c b/drivers/gpu/drm/nvdla/nvdla_drm.c
> new file mode 100644
> index 000000000000..9217eee1de3b
> --- /dev/null
> +++ b/drivers/gpu/drm/nvdla/nvdla_drm.c
> @@ -0,0 +1,695 @@
> +// SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0
> +/*
> + * Copyright (C) 2017-2018 NVIDIA CORPORATION
> + * Copyright (C) 2022 Cai Huoqing
> + */
> +
> +#include <linux/dma-buf.h>
> +#include <linux/dma-mapping.h>
> +#include <linux/fs.h>
> +#include <linux/interrupt.h>
> +#include <linux/irq.h>
> +#include <linux/irqdomain.h>
> +#include <linux/module.h>
> +#include <linux/of.h>
> +#include <linux/of_device.h>
> +#include <linux/of_irq.h>
> +#include <linux/of_platform.h>
> +#include <linux/platform_device.h>
> +#include <linux/printk.h>
> +#include <linux/slab.h>
> +#include <linux/spinlock.h>
> +#include <linux/time.h>
> +#include <linux/uaccess.h>
> +#include <linux/types.h>
> +
> +#include "nvdla_drm.h"
> +#include "nvdla_ioctl.h"
> +#include "nvdla_engine.h"
> +
> +static struct nvdla_config nvdla_config_os_initial = {
> +	.atom_size = 32,
> +	.bdma_enable = true,
> +	.rubik_enable = true,
> +	.weight_compress_support = true,
> +};
> +
> +static struct nvdla_config nvdla_config_small = {
> +	//.atom_size = 8,
> +	.atom_size = 32,  // nv_large config
> +	.bdma_enable = false,
> +	.rubik_enable = false,
> +	.weight_compress_support = false,
> +};
> +
> +int64_t dla_get_time_us(void)
> +{
> +	return ktime_get_ns() / NSEC_PER_USEC;
> +}
> +
> +void dla_reg_write(void *driver_context, uint32_t addr, uint32_t reg)
> +{
> +	struct nvdla_device *nvdla_dev =
> +			(struct nvdla_device *)driver_context;
> +
> +	if (!nvdla_dev)
> +		return;
> +
> +	writel(reg, nvdla_dev->base + addr);
> +}
> +
> +uint32_t dla_reg_read(void *driver_context, uint32_t addr)
> +{
> +	struct nvdla_device *nvdla_dev =
> +			(struct nvdla_device *)driver_context;
> +
> +	if (!nvdla_dev)
> +		return 0;
> +
> +	return readl(nvdla_dev->base + addr);
> +}
> +
> +static irqreturn_t nvdla_engine_isr(int32_t irq, void *data)
> +{
> +	unsigned long flags;
> +	uint32_t mask;
> +	uint32_t reg;
> +	struct dla_processor *processor = NULL;
> +	struct dla_processor_group *group;
> +	struct dla_engine *engine;
> +	struct nvdla_device *nvdla_dev = (struct nvdla_device *)data;
> +
> +	if (!nvdla_dev)
> +		return IRQ_NONE;
> +
> +	engine = nvdla_dev->engine_context;
> +	spin_lock_irqsave(&nvdla_dev->nvdla_lock, flags);
> +
> +	mask = glb_reg_read(engine, S_INTR_MASK);
> +	reg = glb_reg_read(engine, S_INTR_STATUS);
> +
> +	if (reg & MASK(GLB_S_INTR_STATUS_0, CACC_DONE_STATUS0)) {
> +		processor = &engine->processors[DLA_OP_CONV];
> +		group = &processor->groups[0];
> +		group->events |= (1 << DLA_EVENT_OP_COMPLETED);
> +	}
> +	if (reg & MASK(GLB_S_INTR_STATUS_0, CACC_DONE_STATUS1)) {
> +		processor = &engine->processors[DLA_OP_CONV];
> +		group = &processor->groups[1];
> +		group->events |= (1 << DLA_EVENT_OP_COMPLETED);
> +	}
> +	if (reg & MASK(GLB_S_INTR_STATUS_0, SDP_DONE_STATUS0)) {
> +		processor = &engine->processors[DLA_OP_SDP];
> +		group = &processor->groups[0];
> +		group->events |= (1 << DLA_EVENT_OP_COMPLETED);
> +	}
> +	if (reg & MASK(GLB_S_INTR_STATUS_0, SDP_DONE_STATUS1)) {
> +		processor = &engine->processors[DLA_OP_SDP];
> +		group = &processor->groups[1];
> +		group->events |= (1 << DLA_EVENT_OP_COMPLETED);
> +	}
> +	if (reg & MASK(GLB_S_INTR_STATUS_0, CDP_DONE_STATUS0)) {
> +		processor = &engine->processors[DLA_OP_CDP];
> +		group = &processor->groups[0];
> +		group->events |= (1 << DLA_EVENT_OP_COMPLETED);
> +	}
> +	if (reg & MASK(GLB_S_INTR_STATUS_0, CDP_DONE_STATUS1)) {
> +		processor = &engine->processors[DLA_OP_CDP];
> +		group = &processor->groups[1];
> +		group->events |= (1 << DLA_EVENT_OP_COMPLETED);
> +	}
> +	if (reg & MASK(GLB_S_INTR_STATUS_0, RUBIK_DONE_STATUS0)) {
> +		processor = &engine->processors[DLA_OP_RUBIK];
> +		group = &processor->groups[0];
> +		group->events |= (1 << DLA_EVENT_OP_COMPLETED);
> +	}
> +	if (reg & MASK(GLB_S_INTR_STATUS_0, RUBIK_DONE_STATUS1)) {
> +		processor = &engine->processors[DLA_OP_RUBIK];
> +		group = &processor->groups[1];
> +		group->events |= (1 << DLA_EVENT_OP_COMPLETED);
> +	}
> +	if (reg & MASK(GLB_S_INTR_STATUS_0, PDP_DONE_STATUS0)) {
> +		processor = &engine->processors[DLA_OP_PDP];
> +		group = &processor->groups[0];
> +		group->events |= (1 << DLA_EVENT_OP_COMPLETED);
> +	}
> +	if (reg & MASK(GLB_S_INTR_STATUS_0, PDP_DONE_STATUS1)) {
> +		processor = &engine->processors[DLA_OP_PDP];
> +		group = &processor->groups[1];
> +		group->events |= (1 << DLA_EVENT_OP_COMPLETED);
> +	}
> +	if (reg & MASK(GLB_S_INTR_STATUS_0, BDMA_DONE_STATUS0)) {
> +		processor = &engine->processors[DLA_OP_BDMA];
> +		group = &processor->groups[0];
> +		group->events |= (1 << DLA_EVENT_OP_COMPLETED);
> +	}
> +	if (reg & MASK(GLB_S_INTR_STATUS_0, BDMA_DONE_STATUS1)) {
> +		processor = &engine->processors[DLA_OP_BDMA];
> +		group = &processor->groups[1];
> +		group->events |= (1 << DLA_EVENT_OP_COMPLETED);
> +	}
> +	if (reg & MASK(GLB_S_INTR_STATUS_0, CDMA_DAT_DONE_STATUS0)) {
> +		processor = &engine->processors[DLA_OP_CONV];
> +		group = &processor->groups[0];
> +		group->events |= (1 << DLA_EVENT_CDMA_DT_DONE);
> +	}
> +	if (reg & MASK(GLB_S_INTR_STATUS_0, CDMA_DAT_DONE_STATUS1)) {
> +		processor = &engine->processors[DLA_OP_CONV];
> +		group = &processor->groups[1];
> +		group->events |= (1 << DLA_EVENT_CDMA_DT_DONE);
> +	}
> +	if (reg & MASK(GLB_S_INTR_STATUS_0, CDMA_WT_DONE_STATUS0)) {
> +		processor = &engine->processors[DLA_OP_CONV];
> +		group = &processor->groups[0];
> +		group->events |= (1 << DLA_EVENT_CDMA_WT_DONE);
> +	}
> +	if (reg & MASK(GLB_S_INTR_STATUS_0, CDMA_WT_DONE_STATUS1)) {
> +		processor = &engine->processors[DLA_OP_CONV];
> +		group = &processor->groups[1];
> +		group->events |= (1 << DLA_EVENT_CDMA_WT_DONE);
> +	}
> +
> +	glb_reg_write(engine, S_INTR_STATUS, reg);
> +	mask = glb_reg_read(engine, S_INTR_MASK);
> +	reg = glb_reg_read(engine, S_INTR_STATUS);
> +
> +	complete(&nvdla_dev->event_notifier);
> +	spin_unlock_irqrestore(&nvdla_dev->nvdla_lock, flags);
> +
> +	return IRQ_HANDLED;
> +}
> +
> +static int32_t dla_read_dma_address(void *driver_context, void *task_data,
> +						int16_t index, void *dst)
> +{
> +	int32_t ret = 0;
> +	struct nvdla_mem_handle *handles;
> +	dma_addr_t *phys_addr = (dma_addr_t *)(dst);
> +	struct nvdla_device *nvdla_dev =
> +			(struct nvdla_device *)driver_context;
> +	struct nvdla_task *task = (struct nvdla_task *)task_data;
> +
> +	if (index == -1 || index > task->num_addresses)
> +		return -EINVAL;
> +
> +	handles = (struct nvdla_mem_handle *)task->address_list;
> +	ret = nvdla_gem_dma_addr(nvdla_dev->drm, task->file,
> +					handles[index].handle,
> +					phys_addr);
> +
> +	/* Add offset to IOVA address */
> +	*phys_addr = *phys_addr + handles[index].offset;
> +
> +	return ret;
> +}
> +
> +static int32_t dla_read_cpu_address(void *driver_context, void *task_data,
> +						int16_t index, void *dst)
> +{
> +	uint64_t *temp = (uint64_t *)dst;
> +	struct nvdla_task *task = (struct nvdla_task *)task_data;
> +
> +	if (index == -1 || index > task->num_addresses)
> +		return -EINVAL;
> +
> +	*temp = (uint64_t)index;
> +	return 0;
> +}
> +
> +int32_t dla_get_dma_address(void *driver_context, void *task_data,
> +					int16_t index, void *dst_ptr,
> +					uint32_t destination)
> +{
> +	int32_t ret = 0;
> +
> +	if (destination == DESTINATION_PROCESSOR) {
> +		ret = dla_read_cpu_address(driver_context, task_data,
> +						index, dst_ptr);
> +	} else if (destination == DESTINATION_DMA) {
> +		ret = dla_read_dma_address(driver_context, task_data,
> +						index, dst_ptr);
> +	} else {
> +		ret = -EINVAL;
> +	}
> +
> +	return ret;
> +}
> +
> +int32_t dla_data_write(void *driver_context, void *task_data,
> +				void *src, uint64_t dst,
> +				uint32_t size, uint64_t offset)
> +{
> +	int32_t ret;
> +	void *ptr = NULL;
> +	struct dma_buf *buf;
> +	struct iosys_map map;
> +	struct nvdla_mem_handle *handles;
> +	struct nvdla_task *task = (struct nvdla_task *)task_data;
> +
> +	handles = task->address_list;
> +	buf = dma_buf_get(handles[dst].handle);
> +	if (IS_ERR(buf)) {
> +		pr_err("%s: Failed get dma_buf for handle=%d\n", __func__,
> +						handles[dst].handle);
> +		return -EFAULT;
> +	}
> +
> +	ret = dma_buf_begin_cpu_access(buf, DMA_BIDIRECTIONAL);
> +	if (ret)
> +		goto put_dma_buf;
> +
> +	ret = dma_buf_vmap(buf, &map);
> +	ptr = ret ? NULL : map.vaddr;

Never extract the mapping's raw pointer without good reason. You don't 
know whether it points to a location in I/O memory.

> +	if (!ptr) {

Simply test ret here.

> +		pr_err("%s: Failed to vmap dma_buf for handle=%d\n", __func__,
> +						handles[dst].handle);
> +		ret = -ENOMEM;

You already got an errno code. Don't override it.

> +		goto end_cpu_access;
> +	}
> +
> +
> +	memcpy((void *)((uint8_t *)ptr + offset), src, size);

Use iosys_map_memcpy_to() here. It does the right thing for both system 
and I/O memory.

> +
> +	dma_buf_vunmap(buf, ptr);

You have to pass '&map' as the second argument.
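
Putting these points together, an untested sketch (assuming 
driver_context is cast to a struct nvdla_device as in the functions 
above):

    ret = dma_buf_vmap(buf, &map);
    if (ret) {
        drm_err(nvdla_dev->drm, "Failed to vmap dma_buf for handle=%d\n",
                handles[dst].handle);
        goto end_cpu_access;   /* keep the errno from dma_buf_vmap() */
    }

    /* copies correctly for both system and I/O memory */
    iosys_map_memcpy_to(&map, offset, src, size);

    dma_buf_vunmap(buf, &map);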

> +
> +end_cpu_access:
> +	dma_buf_end_cpu_access(buf, DMA_BIDIRECTIONAL);
> +
> +put_dma_buf:
> +	dma_buf_put(buf);
> +
> +	return ret;
> +}
> +
> +int32_t dla_data_read(void *driver_context, void *task_data,
> +				uint64_t src, void *dst,
> +				uint32_t size, uint64_t offset)
> +{
> +	int32_t ret;
> +	void *ptr = NULL;
> +	struct dma_buf *buf;
> +	struct iosys_map map;
> +	struct nvdla_mem_handle *handles;
> +	struct nvdla_task *task = (struct nvdla_task *)task_data;
> +
> +	handles = task->address_list;
> +
> +	buf = dma_buf_get(handles[src].handle);
> +	if (IS_ERR(buf)) {
> +		pr_err("%s: Failed get dma_buf for handle=%d\n", __func__,
> +						handles[src].handle);
> +		return -EFAULT;
> +	}
> +
> +	ret = dma_buf_begin_cpu_access(buf, DMA_BIDIRECTIONAL);
> +	if (ret)
> +		goto put_dma_buf;
> +
> +	ret = dma_buf_vmap(buf, &map);
> +	ptr = ret ? NULL : map.vaddr;
> +	if (!ptr) {
> +		pr_err("%s: Failed to vmap dma_buf for handle=%d\n", __func__,
> +						handles[src].handle);
> +		ret = -ENOMEM;
> +		goto end_cpu_access;
> +	}

All the same problems as in dla_data_write().

> +
> +	memcpy(dst, (void *)(((uint8_t *)ptr) + offset), size);

Use iosys_map_memcpy_from() here.

> +
> +	dma_buf_vunmap(buf, ptr);

'&map' instead of 'ptr'.
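
I.e., untested:

    iosys_map_memcpy_from(dst, &map, offset, size);

    dma_buf_vunmap(buf, &map);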

> +
> +end_cpu_access:
> +	dma_buf_end_cpu_access(buf, DMA_BIDIRECTIONAL);
> +
> +put_dma_buf:
> +	dma_buf_put(buf);
> +
> +	return ret;
> +}
> +

> +
> +static int32_t nvdla_submit(struct drm_device *drm, void *arg,
> +					struct drm_file *file)
> +{
> +	int32_t err = 0;
> +	struct nvdla_task *task;
> +	struct nvdla_ioctl_submit_task local_task;
> +	struct nvdla_ioctl_submit_task __user *user_task;
> +	struct nvdla_device *nvdla_dev = dev_get_drvdata(drm->dev);
> +	struct nvdla_submit_args *args =
> +			(struct nvdla_submit_args *)arg;
> +
> +	user_task = (struct nvdla_ioctl_submit_task __user *)
> +			(uintptr_t)args->tasks;
> +	if (!user_task)
> +		return -EINVAL;
> +
> +	/* IOCTL copy descriptors */
> +	if (copy_from_user(&local_task, (void __user *)user_task,
> +			(sizeof(*user_task))))
> +		return -EFAULT;
> +
> +	task = kzalloc(sizeof(*task), GFP_KERNEL);
> +	if (task == NULL)
> +		return -EFAULT;
> +
> +	nvdla_dev->task = task;
> +	kref_init(&task->ref);
> +	task->nvdla_dev = nvdla_dev;
> +	task->file = file;
> +
> +	/* update task desc fields */
> +	err = nvdla_fill_task_desc(&local_task, task);
> +	if (err)
> +		goto free_task_desc;
> +
> +	err = nvdla_task_submit(nvdla_dev, task);
> +
> +	kfree(task->address_list);
> +
> +free_task_desc:
> +	kfree(task);
> +	return err;
> +}
> +
> +static int32_t nvdla_gem_alloc(struct nvdla_gem_object *nobj)
> +{
> +	struct drm_gem_object *dobj = &nobj->object;
> +	struct drm_device *drm = dobj->dev;
> +
> +	nobj->dma_attrs = DMA_ATTR_WRITE_COMBINE;
> +
> +	nobj->kvaddr = dma_alloc_attrs(drm->dev, dobj->size, &nobj->dma_addr,
> +						GFP_KERNEL, nobj->dma_attrs);
> +

Store a struct iosys_map in nobj and initialize it with 
iosys_map_set_vaddr(), or with iosys_map_set_vaddr_iomem() if you're 
working with I/O memory. A sketch follows below.
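
Untested, with 'map' as a new struct iosys_map field in struct 
nvdla_gem_object:

    nobj->kvaddr = dma_alloc_attrs(drm->dev, dobj->size, &nobj->dma_addr,
                                   GFP_KERNEL, nobj->dma_attrs);
    if (!nobj->kvaddr)
        return -ENOMEM;

    /* system memory here; use iosys_map_set_vaddr_iomem() for I/O memory */
    iosys_map_set_vaddr(&nobj->map, nobj->kvaddr);

    return 0;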

> +	if (!nobj->kvaddr)
> +		return -ENOMEM;
> +
> +	return 0;
> +}
> +
> +static void nvdla_gem_free(struct nvdla_gem_object *nobj)
> +{
> +	struct drm_gem_object *dobj = &nobj->object;
> +	struct drm_device *drm = dobj->dev;
> +
> +	dma_free_attrs(drm->dev, dobj->size, nobj->kvaddr, nobj->dma_addr,
> +				nobj->dma_attrs);
> +}
> +
> +static void nvdla_gem_free_object(struct drm_gem_object *dobj)
> +{
> +	struct nvdla_gem_object *nobj;
> +
> +	drm_gem_free_mmap_offset(dobj);
> +
> +	nobj = to_nvdla_obj(dobj);
> +
> +	nvdla_gem_free(nobj);
> +
> +	kfree(nobj);
> +}
> +
> +static struct nvdla_gem_object *
> +nvdla_gem_create_object(struct drm_device *drm, uint32_t size)
> +{
> +	int32_t ret;
> +	struct drm_gem_object *dobj;
> +	struct nvdla_gem_object *nobj;
> +
> +	size = round_up(size, PAGE_SIZE);
> +
> +	nobj = kzalloc(sizeof(*nobj), GFP_KERNEL);
> +	if (!nobj)
> +		return ERR_PTR(-ENOMEM);
> +
> +	dobj = &nobj->object;
> +
> +	drm_gem_private_object_init(drm, dobj, size);
> +
> +	ret = nvdla_gem_alloc(nobj);
> +	if (ret)
> +		goto free_nvdla_obj;
> +
> +	return nobj;
> +
> +free_nvdla_obj:
> +	kfree(nobj);
> +	return ERR_PTR(ret);
> +}
> +
> +static struct sg_table*
> +nvdla_drm_gem_prime_get_sg_table(struct drm_gem_object *dobj)
> +{
> +	int32_t ret;
> +	struct sg_table *sgt;
> +	struct drm_device *drm = dobj->dev;
> +	struct nvdla_gem_object *nobj = to_nvdla_obj(dobj);
> +
> +	sgt = kzalloc(sizeof(*sgt), GFP_KERNEL);
> +	if (!sgt)
> +		return ERR_PTR(-ENOMEM);
> +
> +	ret = dma_get_sgtable_attrs(drm->dev, sgt, nobj->kvaddr,
> +				    nobj->dma_addr, dobj->size,
> +				    nobj->dma_attrs);
> +	if (ret) {
> +		DRM_ERROR("failed to allocate sgt, %d\n", ret);
> +		kfree(sgt);
> +		return ERR_PTR(ret);
> +	}
> +
> +	return sgt;
> +}
> +
> +static int nvdla_drm_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map)
> +{
> +	struct nvdla_gem_object *nobj = to_nvdla_obj(obj);
> +
> +	map->vaddr = nobj->kvaddr;

Instead of kvaddr, store the pointer as struct iosys_map. Then simply 
copy it here, as in

    *map = nobj->map;

> +
> +	return 0;
> +}
> +
> +static void nvdla_drm_gem_prime_vunmap(struct drm_gem_object *obj, struct iosys_map *map)
> +{
> +	/* Nothing to do */
> +}
> +
> +static int32_t nvdla_drm_gem_object_mmap(struct drm_gem_object *dobj,
> +					struct vm_area_struct *vma)
> +{
> +	int32_t ret;
> +	struct nvdla_gem_object *nobj = to_nvdla_obj(dobj);
> +	struct drm_device *drm = dobj->dev;
> +
> +	vma->vm_flags &= ~VM_PFNMAP;
> +	vma->vm_pgoff = 0;

It's cleaner to do this as

    vma->vm_pgoff -= drm_vma_node_start(&dobj->vma_node);

> +
> +	ret = dma_mmap_attrs(drm->dev, vma, nobj->kvaddr, nobj->dma_addr,
> +			     dobj->size, nobj->dma_attrs);
> +	if (ret)
> +		drm_gem_vm_close(vma);
> +
> +	return ret;
> +}
> +
> +static const struct drm_gem_object_funcs nvdla_gem_object_funcs = {
> +	.free = nvdla_gem_free_object,
> +	.get_sg_table = nvdla_drm_gem_prime_get_sg_table,
> +	.vmap = nvdla_drm_gem_prime_vmap,
> +	.vunmap = nvdla_drm_gem_prime_vunmap,
> +	.mmap = nvdla_drm_gem_object_mmap,
> +};
> +
> +static struct nvdla_gem_object*
> +nvdla_gem_create_with_handle(struct drm_file *file_priv,
> +							 struct drm_device *drm, uint32_t size,
> +							 uint32_t *handle)
> +{
> +	int32_t ret;
> +	struct drm_gem_object *dobj;
> +	struct nvdla_gem_object *nobj;
> +
> +	nobj = nvdla_gem_create_object(drm, size);
> +	if (IS_ERR(nobj))
> +		return ERR_CAST(nobj);
> +
> +	dobj = &nobj->object;
> +	dobj->funcs = &nvdla_gem_object_funcs;
> +	ret = drm_gem_handle_create(file_priv, dobj, handle);
> +	if (ret)
> +		goto free_drm_object;
> +
> +	drm_gem_object_put(dobj);
> +
> +	return nobj;
> +
> +free_drm_object:
> +	nvdla_gem_free_object(dobj);
> +
> +	return ERR_PTR(ret);
> +}
> +
> +static int32_t nvdla_gem_create(struct drm_device *drm, void *data,
> +								struct drm_file *file)
> +{
> +	struct nvdla_gem_object *nobj;
> +	struct nvdla_gem_create_args *args = data;
> +
> +	nobj = nvdla_gem_create_with_handle(file, drm, args->size,
> +					 &args->handle);
> +	if (IS_ERR(nobj))
> +		return PTR_ERR(nobj);
> +
> +	return 0;
> +}
> +
> +static int32_t nvdla_drm_gem_mmap_buf(struct drm_gem_object *obj,
> +									  struct vm_area_struct *vma)
> +{
> +	int32_t ret;
> +
> +	ret = drm_gem_mmap_obj(obj, obj->size, vma);
> +	if (ret)
> +		return ret;
> +
> +	return nvdla_drm_gem_object_mmap(obj, vma);
> +}
> +
> +static int32_t nvdla_drm_gem_mmap(struct file *filp, struct vm_area_struct *vma)
> +{
> +	int32_t ret;
> +	struct drm_gem_object *obj;
> +
> +	ret = drm_gem_mmap(filp, vma);
> +	if (ret)
> +		return ret;
> +
> +	obj = vma->vm_private_data;
> +
> +	return nvdla_drm_gem_object_mmap(obj, vma);

I don't understand these two lines. This is part of what drm_gem_mmap() 
does. It shouldn't be necessary here.

> +}
> +
> +int32_t nvdla_gem_dma_addr(struct drm_device *dev, struct drm_file *file,
> +						   uint32_t fd, dma_addr_t *addr)
> +{
> +	int32_t ret;
> +	uint32_t handle;
> +	struct nvdla_gem_object *nobj;
> +	struct drm_gem_object *dobj;
> +
> +	ret = drm_gem_prime_fd_to_handle(dev, file, fd, &handle);
> +	if (ret)
> +		return ret;
> +
> +	dobj = drm_gem_object_lookup(file, handle);
> +	if (!dobj)
> +		return -EINVAL;
> +
> +	nobj = to_nvdla_obj(dobj);
> +
> +	*addr = nobj->dma_addr;
> +
> +	drm_gem_object_put(dobj);
> +
> +	return 0;
> +}
> +
> +static int32_t nvdla_gem_map_offset(struct drm_device *drm, void *data,
> +									struct drm_file *file)
> +{
> +	struct nvdla_gem_map_offset_args *args = data;
> +
> +	return drm_gem_dumb_map_offset(file, drm, args->handle,
> +								   &args->offset);
> +}
> +
> +static const struct file_operations nvdla_drm_fops = {
> +	.owner = THIS_MODULE,
> +	.open = drm_open,
> +	.release = drm_release,
> +	.unlocked_ioctl = drm_ioctl,
> +	.mmap = nvdla_drm_gem_mmap,

It should be fine to use drm_gem_mmap() here. Then you can define 
nvdla_drm_fops with DEFINE_DRM_GEM_FOPS(), as below.
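
I.e., the whole definition reduces to:

    DEFINE_DRM_GEM_FOPS(nvdla_drm_fops);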

> +	.poll = drm_poll,
> +	.read = drm_read,
> +#ifdef CONFIG_COMPAT
> +	.compat_ioctl = drm_compat_ioctl,
> +#endif
> +	.llseek = noop_llseek,
> +};
> +
> +static const struct drm_ioctl_desc nvdla_drm_ioctls[] = {
> +	DRM_IOCTL_DEF_DRV(NVDLA_SUBMIT, nvdla_submit, DRM_RENDER_ALLOW),
> +	DRM_IOCTL_DEF_DRV(NVDLA_GEM_CREATE, nvdla_gem_create, DRM_RENDER_ALLOW),
> +	DRM_IOCTL_DEF_DRV(NVDLA_GEM_MMAP, nvdla_gem_map_offset, DRM_RENDER_ALLOW),
> +	/* use DRM_IOCTL_MODE_DESTROY_DUMB to destroy */
> +};
> +
> +static struct drm_driver nvdla_drm_driver = {
> +	.driver_features = DRIVER_GEM | DRIVER_RENDER,
> +
> +	.ioctls = nvdla_drm_ioctls,
> +	.num_ioctls = ARRAY_SIZE(nvdla_drm_ioctls),
> +	.fops = &nvdla_drm_fops,
> +	.gem_prime_mmap		= nvdla_drm_gem_mmap_buf,

Use drm_gem_prime_mmap() here.

Some context: the situation with these mmap functions has been confusing 
and inconsistent among DRM drivers. But we cleaned it up so that you 
only have to provide a minimal implementation of struct 
drm_gem_object_funcs.mmap. All other mmap callbacks can then be filled 
with standard DRM helpers.
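
So in struct drm_driver, simply:

    .gem_prime_mmap = drm_gem_prime_mmap,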

> +
> +	.name = "nvdla",
> +	.desc = "NVDLA driver",
> +	.date = "20171017",
> +	.major = 0,
> +	.minor = 0,
> +	.patchlevel = 0,
> +};
> +
> +int32_t nvdla_drm_probe(struct nvdla_device *nvdla_dev)
> +{
> +	int32_t err;
> +	struct drm_device *drm;
> +	struct drm_driver *driver = &nvdla_drm_driver;
> +
> +	drm = drm_dev_alloc(driver, &nvdla_dev->pdev->dev);
> +	if (IS_ERR(drm))
> +		return PTR_ERR(drm);
> +
> +	nvdla_dev->drm = drm;
> +
> +	err = drm_dev_register(drm, 0);
> +	if (err < 0)
> +		goto unref;
> +
> +	return 0;
> +
> +unref:
> +	drm_dev_put(drm);
> +	return err;
> +}
> +
> +void nvdla_drm_remove(struct nvdla_device *nvdla_dev)
> +{
> +	drm_dev_unregister(nvdla_dev->drm);
> +	drm_dev_put(nvdla_dev->drm);
> +}

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Ivo Totev

