[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <YPZ8hX7sx1RFL0c5@kroah.com>
Date: Tue, 20 Jul 2021 09:34:29 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: longli@...uxonhyperv.com
Cc: linux-fs@...r.kernel.org, linux-block@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-hyperv@...r.kernel.org,
Long Li <longli@...rosoft.com>,
Jonathan Corbet <corbet@....net>,
"K. Y. Srinivasan" <kys@...rosoft.com>,
Haiyang Zhang <haiyangz@...rosoft.com>,
Stephen Hemminger <sthemmin@...rosoft.com>,
Wei Liu <wei.liu@...nel.org>, Dexuan Cui <decui@...rosoft.com>,
Bjorn Andersson <bjorn.andersson@...aro.org>,
Hans de Goede <hdegoede@...hat.com>,
Dan Williams <dan.j.williams@...el.com>,
Maximilian Luz <luzmaximilian@...il.com>,
Mike Rapoport <rppt@...nel.org>,
Ben Widawsky <ben.widawsky@...el.com>,
Jiri Slaby <jirislaby@...nel.org>,
Andra Paraschiv <andraprs@...zon.com>,
Siddharth Gupta <sidgup@...eaurora.org>,
Hannes Reinecke <hare@...e.de>, linux-doc@...r.kernel.org
Subject: Re: [Patch v4 2/3] Drivers: hv: add Azure Blob driver
On Mon, Jul 19, 2021 at 08:31:05PM -0700, longli@...uxonhyperv.com wrote:
> +struct az_blob_device {
> + struct hv_device *device;
> +
> + /* Opened files maintained by this device */
> + struct list_head file_list;
> + /* Lock for protecting file_list */
> + spinlock_t file_lock;
> +
> + /* The refcount for this device */
> + refcount_t count;
Just use a kref please if you really need this. Are you sure you do?
You already have 2 other reference counted objects being used here, why
make it 3?
> + /* Pending requests to VSP */
> + atomic_t pending;
Why does this need to be atomic?
> + wait_queue_head_t waiting_to_drain;
> +
> + bool removing;
Are you sure this actually works properly? Why is it needed vs. any
other misc device?
> +/* VSC->VSP request */
> +struct az_blob_vsp_request {
> + u32 version;
> + u32 timeout_ms;
> + u32 data_buffer_offset;
> + u32 data_buffer_length;
> + u32 data_buffer_valid;
> + u32 operation_type;
> + u32 request_buffer_offset;
> + u32 request_buffer_length;
> + u32 response_buffer_offset;
> + u32 response_buffer_length;
> + guid_t transaction_id;
> +} __packed;
Why packed? If this is going across the wire somewhere, you need to
specify the endian-ness of these values, right? If this is not going
across the wire, no need for it to be packed.
> +
> +/* VSP->VSC response */
> +struct az_blob_vsp_response {
> + u32 length;
> + u32 error;
> + u32 response_len;
> +} __packed;
Same here.
> +
> +struct az_blob_vsp_request_ctx {
> + struct list_head list;
> + struct completion wait_vsp;
> + struct az_blob_request_sync *request;
> +};
> +
> +struct az_blob_file_ctx {
> + struct list_head list;
> +
> + /* List of pending requests to VSP */
> + struct list_head vsp_pending_requests;
> + /* Lock for protecting vsp_pending_requests */
> + spinlock_t vsp_pending_lock;
> + wait_queue_head_t wait_vsp_pending;
> +
> + pid_t pid;
Why do you need a pid? What namespace is this pid in?
> +static int az_blob_probe(struct hv_device *device,
> + const struct hv_vmbus_device_id *dev_id)
> +{
> + int ret;
> + struct az_blob_device *dev;
> +
> + dev = kzalloc(sizeof(*dev), GFP_KERNEL);
> + if (!dev)
> + return -ENOMEM;
> +
> + spin_lock_init(&dev->file_lock);
> + INIT_LIST_HEAD(&dev->file_list);
> + atomic_set(&dev->pending, 0);
> + init_waitqueue_head(&dev->waiting_to_drain);
> +
> + ret = az_blob_connect_to_vsp(device, dev, AZ_BLOB_RING_SIZE);
> + if (ret)
> + goto fail;
> +
> + refcount_set(&dev->count, 1);
> + az_blob_dev = dev;
> +
> + // create user-mode client library facing device
> + ret = az_blob_create_device(dev);
> + if (ret) {
> + dev_err(AZ_DEV, "failed to create device ret=%d\n", ret);
> + az_blob_remove_vmbus(device);
> + goto fail;
> + }
> +
> + dev_info(AZ_DEV, "successfully probed device\n");
When drivers are working properly, they should be quiet.
And what is with the AZ_DEV macro mess?
And can you handle more than one device in the system at one time? I
think your debugfs logic will get really confused.
thanks,
greg k-h
Powered by blists - more mailing lists