Date:   Fri, 21 Feb 2020 17:23:31 +0000
From:   Parav Pandit <parav@...lanox.com>
To:     "Saleem, Shiraz" <shiraz.saleem@...el.com>,
        "Kirsher, Jeffrey T" <jeffrey.t.kirsher@...el.com>,
        "davem@...emloft.net" <davem@...emloft.net>,
        "gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>
CC:     "Ismail, Mustafa" <mustafa.ismail@...el.com>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
        "nhorman@...hat.com" <nhorman@...hat.com>,
        "sassmann@...hat.com" <sassmann@...hat.com>,
        "jgg@...pe.ca" <jgg@...pe.ca>
Subject: Re: [RFC PATCH v4 10/25] RDMA/irdma: Add driver framework definitions

On 2/21/2020 11:01 AM, Saleem, Shiraz wrote:
>> Subject: RE: [RFC PATCH v4 10/25] RDMA/irdma: Add driver framework
>> definitions
>>
> 
> [....]
> 
>>>>> +static int irdma_devlink_reload_up(struct devlink *devlink,
>>>>> +				   struct netlink_ext_ack *extack) {
>>>>> +	struct irdma_dl_priv *priv = devlink_priv(devlink);
>>>>> +	union devlink_param_value saved_value;
>>>>> +	const struct virtbus_dev_id *id = priv->vdev->matched_element;
>>>>
>>>> Like irdma_probe(), struct iidc_virtbus_object *vo is accessible for
>>>> the given priv.
>>>> Please use struct iidc_virtbus_object for any sharing between the two
>>>> drivers. Modifying matched_element inside the virtbus match() function,
>>>> and accessing a pointer to some driver data shared between the two
>>>> drivers through matched_element, is not appropriate.
>>>
>>> We can possibly avoid the matched_element and driver data lookup here.
>>> But fundamentally, at probe time (see irdma_gen_probe) the irdma
>>> driver needs to know which generation of vdev we bound to, i.e. i40e
>>> or ice, since we support both.
>>> Based on that, it extracts the driver-specific virtbus device object,
>>> i.e. i40e_virtbus_device vs. iidc_virtbus_object, and initializes
>>> that device.
>>>
>>> Accessing driver_data off the matched vdev entry in
>>> irdma_virtbus_id_table is how we learn this generation info and make
>>> the decision.
>>>
>> If there is a single irdma driver for two different virtbus device types,
>> it is better to have two instances of virtbus_register_driver() with
>> different matching strings/ids. That way, from the probe() alone it is
>> clear which virtbus device of interest got added, and the code has a
>> clean separation between the two device types.
> 
> Thanks for the feedback!
> Is it commonplace for a driver to have multiple driver_register instances
> of the same bus type to support different devices? That seems odd.
> Typically, a single driver that supports multiple device types on a given
> bus does a single bus-specific driver_register, passes in an array of
> bus-specific device IDs, and lets the bus do the matching, right? Then in
> probe(), the driver can apply device-specific quirks. Isn't that the
> purpose of device ID tables for pci, platform, usb, etc.?
> Why should multiple virtbus device types be handled any differently?
> 

If the two devices are not treated very differently, i.e. you have a lot
of common code, it makes sense to do a single virtbus_register_driver()
with two different ids.
In that case, struct virtbus_device_id should have a device-specific
field, like pci has driver_data.

It should not be set by the match() function of the virtbus core.
This field should be set up in the id table by the hw driver which
invokes virtbus_register_device().
