Message-ID: <2c00f25e-9dac-7d75-8138-026ad4bcc7fa@codeaurora.org>
Date: Mon, 9 Apr 2018 19:51:51 -0400
From: Sinan Kaya <okaya@...eaurora.org>
To: Keith Busch <keith.busch@...el.com>,
Oza Pawandeep <poza@...eaurora.org>
Cc: Bjorn Helgaas <bhelgaas@...gle.com>,
Philippe Ombredanne <pombredanne@...b.com>,
Thomas Gleixner <tglx@...utronix.de>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Kate Stewart <kstewart@...uxfoundation.org>,
linux-pci@...r.kernel.org, linux-kernel@...r.kernel.org,
Dongdong Liu <liudongdong3@...wei.com>,
Wei Zhang <wzhang@...com>, Timur Tabi <timur@...eaurora.org>
Subject: Re: [PATCH v13 4/6] PCI/DPC: Unify and plumb error handling into DPC
On 4/9/2018 7:29 PM, Keith Busch wrote:
> On Mon, Apr 09, 2018 at 10:41:52AM -0400, Oza Pawandeep wrote:
>> +static int find_dpc_dev_iter(struct device *device, void *data)
>> +{
>> + struct pcie_port_service_driver *service_driver;
>> + struct device **dev;
>> +
>> + dev = (struct device **) data;
>> +
>> + if (device->bus == &pcie_port_bus_type && device->driver) {
>> + service_driver = to_service_driver(device->driver);
>> + if (service_driver->service == PCIE_PORT_SERVICE_DPC) {
>> + *dev = device;
>> + return 1;
>> + }
>> + }
>> +
>> + return 0;
>> +}
>> +
>> +static struct device *pci_find_dpc_dev(struct pci_dev *pdev)
>> +{
>> + struct device *dev = NULL;
>> +
>> + device_for_each_child(&pdev->dev, &dev, find_dpc_dev_iter);
>> +
>> + return dev;
>> +}
>
> The only caller of this doesn't seem to care to use struct device. This
> should probably just extract struct dpc_dev directly from in here.
>
Bjorn wants to kill the port service driver infrastructure, but that is a much
bigger task.
How do we obtain the DPC object directly from the parent object? Each port
service driver object is a child of the port device.
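
Would something like the following work? Just a rough sketch on my side,
assuming the DPC driver keeps storing its private data with
set_service_data() in dpc_probe(), so the device returned by the iterator
can be converted back with to_pcie_device():

static struct dpc_dev *pci_find_dpc_dev(struct pci_dev *pdev)
{
	struct device *dev = NULL;

	device_for_each_child(&pdev->dev, &dev, find_dpc_dev_iter);
	if (!dev)
		return NULL;

	/*
	 * Every match from find_dpc_dev_iter() is a port service device,
	 * i.e. a struct pcie_device, and dpc_probe() stashed the dpc_dev
	 * there via set_service_data().
	 */
	return get_service_data(to_pcie_device(dev));
}

That would let the caller drop the struct device round trip entirely.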
--
Sinan Kaya
Qualcomm Datacenter Technologies, Inc. as an affiliate of Qualcomm Technologies, Inc.
Qualcomm Technologies, Inc. is a member of the Code Aurora Forum, a Linux Foundation Collaborative Project.