Message-ID: <f2bbbaf7-3b4a-6dfe-6683-1c14c18b3adf@amd.com>
Date: Tue, 26 May 2020 12:06:26 +0530
From: Sanjay R Mehta <sanmehta@....com>
To: Greg KH <gregkh@...uxfoundation.org>
Cc: Vinod Koul <vkoul@...nel.org>,
Sanjay R Mehta <Sanju.Mehta@....com>,
dan.j.williams@...el.com, Thomas.Lendacky@....com,
Shyam-sundar.S-k@....com, Nehal-bakulchandra.Shah@....com,
robh@...nel.org, mchehab+samsung@...nel.org, davem@...emloft.net,
linux-kernel@...r.kernel.org, dmaengine@...r.kernel.org
Subject: Re: [PATCH v4 1/3] dmaengine: ptdma: Initial driver for the AMD PTDMA controller
On 5/26/2020 11:59 AM, Greg KH wrote:
> [CAUTION: External Email]
>
> On Tue, May 26, 2020 at 11:35:02AM +0530, Sanjay R Mehta wrote:
>> Apologies for my delayed response.
>>
>>>> +#include <linux/module.h>
>>>> +#include <linux/kernel.h>
>>>> +#include <linux/pci.h>
>>>> +#include <linux/dma-mapping.h>
>>>> +#include <linux/interrupt.h>
>>>> +
>>>> +#include "ptdma.h"
>>>> +
>>>> +static int cmd_queue_length = 32;
>>>> +module_param(cmd_queue_length, uint, 0644);
>>>> +MODULE_PARM_DESC(cmd_queue_length,
>>>> + " length of the command queue, a power of 2 (2 <= val <= 128)");
>>>
>>> Any reason for this as module param? who will configure this and how?
>>>
>> The command queue length can range from 2 to 64K commands.
>> It was therefore added as a module parameter so that the queue length can be specified at load time.
>
> Please no, this is not the 1990's. No one can use them easily, make
> this configurable on a per-device basis if you really need to be able to
> change this.
>
> But step back, why do you need to change this at all? Why do you have a
> limit and why can you not just do this dynamically? The goal here
> should not have any user-changable options at all, it should "just
> work".
>
Sure Greg, will remove this.
>>>> + * List of PTDMAs, PTDMA count, read-write access lock, and access functions
>>>> + *
>>>> + * Lock structure: get pt_unit_lock for reading whenever we need to
>>>> + * examine the PTDMA list. While holding it for reading we can acquire
>>>> + * the RR lock to update the round-robin next-PTDMA pointer. The unit lock
>>>> + * must be acquired before the RR lock.
>>>> + *
>>>> + * If the unit-lock is acquired for writing, we have total control over
>>>> + * the list, so there's no value in getting the RR lock.
>>>> + */
>>>> +static DEFINE_RWLOCK(pt_unit_lock);
>>>> +static LIST_HEAD(pt_units);
>>>> +
>>>> +static struct pt_device *pt_rr;
>>>
>>> why do we need these globals and not in driver context?
>>>
>> The AMD SoC has multiple PT controllers with the same PCI device ID, so the same driver is probed for each instance.
>> The driver stores the pt_device context of each PT controller in this global list.
>
> That's horrid and not needed at all. No driver should have a static
> list anymore, again, this isn't the 1990's :)
>
Sure, will remove this :).
>>>> +static void pt_add_device(struct pt_device *pt)
>>>> +{
>>>> + unsigned long flags;
>>>> +
>>>> + write_lock_irqsave(&pt_unit_lock, flags);
>>>> + list_add_tail(&pt->entry, &pt_units);
>>>> + if (!pt_rr)
>>>> + /*
>>>> + * We already have the list lock (we're first) so this
>>>> + * pointer can't change on us. Set its initial value.
>>>> + */
>>>> + pt_rr = pt;
>>>> + write_unlock_irqrestore(&pt_unit_lock, flags);
>>>> +}
>>>
>>> Can you please explain what do you mean by having a list of devices and
>>> why are we adding/removing dynamically?
>>>
>> Since the AMD SoC has many PT controllers with the same PCI device ID,
>> the same driver is probed to initialize each PT controller instance.
>
> That's fine, PCI drivers should all work on a per-device basis and not
> care if there are 1, or 1000 of the same device in the system.
>
>> Also, the number of PT controllers varies across AMD SoCs.
>
> Again, that's fine.
>
>> Therefore, dynamic adding/removing of each PT controller's context to/from the global device list was implemented.
>
> Such a list should never be needed, unless you are doing something
> really wrong. Please remove it and use the proper PCI device driver
> apis for your individual instances instead.
>
Sure, will incorporate the changes in the next version of patch set.
> thanks,
>
> greg k-h
>