Message-ID: <79d0ca87-c6c7-18d5-6429-bb20041646ff@mellanox.com>
Date: Fri, 6 Dec 2019 17:33:52 +0000
From: Parav Pandit <parav@...lanox.com>
To: Zhenyu Wang <zhenyuw@...ux.intel.com>
CC: "kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"alex.williamson@...hat.com" <alex.williamson@...hat.com>,
"kwankhede@...dia.com" <kwankhede@...dia.com>,
"kevin.tian@...el.com" <kevin.tian@...el.com>,
"cohuck@...hat.com" <cohuck@...hat.com>,
Jiri Pirko <jiri@...lanox.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
Jason Wang <jasowang@...hat.com>,
"Michael S. Tsirkin" <mst@...hat.com>
Subject: Re: [PATCH 0/6] VFIO mdev aggregated resources handling
On 12/6/2019 2:03 AM, Zhenyu Wang wrote:
> On 2019.12.05 18:59:36 +0000, Parav Pandit wrote:
>>>>
>>>>> On 2019.11.07 20:37:49 +0000, Parav Pandit wrote:
>>>>>> Hi,
>>>>>>
>>>>>>> -----Original Message-----
>>>>>>> From: kvm-owner@...r.kernel.org <kvm-owner@...r.kernel.org> On
>>>>>>> Behalf Of Zhenyu Wang
>>>>>>> Sent: Thursday, October 24, 2019 12:08 AM
>>>>>>> To: kvm@...r.kernel.org
>>>>>>> Cc: alex.williamson@...hat.com; kwankhede@...dia.com;
>>>>>>> kevin.tian@...el.com; cohuck@...hat.com
>>>>>>> Subject: [PATCH 0/6] VFIO mdev aggregated resources handling
>>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> This is a refresh of a previously sent series. I had gotten the
>>>>>>> impression that some SIOV drivers would still deploy their own
>>>>>>> create and config methods, so I stopped working on this. But it
>>>>>>> seems this would still be useful for other SIOV drivers that may
>>>>>>> simply want the capability to aggregate resources. So here is the
>>>>>>> refreshed series.
>>>>>>>
>>>>>>> The current mdev device create interface depends on a fixed mdev
>>>>>>> type, which gets a uuid from the user to create an instance of
>>>>>>> the mdev device. If the user wants a customized number of
>>>>>>> resources for the mdev device, then the only option is to create new
>>>>>> Can you please give an example of a 'resource'?
>>>>>> When I grep [1], [2] and [3], I couldn't find anything related to
>>>>>> 'aggregate'.
>>>>>
>>>>> The resource is vendor-device specific; the SIOV spec has the ADI
>>>>> (Assignable Device Interface) definition, which could be e.g. a
>>>>> queue for a net device, a context for a gpu, etc. I just named this
>>>>> interface 'aggregate' for the aggregation purpose; the name is not
>>>>> used in the spec doc.
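A minimal sketch of the flow under discussion, using a mock sysfs root so it runs anywhere. On real hardware the root is the parent device's directory under /sys/class/mdev_bus/; the type name is made up, and the idea that 'aggregate' lands as a per-instance attribute is an assumption based on this series, not a settled ABI:

```shell
# Mock the mdev sysfs layout; on real hardware this lives under
# /sys/class/mdev_bus/<parent-device>/.
root=$(mktemp -d)
type_dir="$root/mdev_supported_types/vendor-type1"   # hypothetical type
mkdir -p "$type_dir" "$root/devices"

# Today's fixed-type flow: the user supplies a UUID to the 'create' node.
uuid=$(cat /proc/sys/kernel/random/uuid)
echo "$uuid" > "$type_dir/create"

# In real sysfs the kernel creates the instance directory on create;
# mock it here.
inst_dir="$root/devices/$uuid"
mkdir -p "$inst_dir"

# The series' proposal: scale one instance to N units of the type's
# base resource, instead of defining a separate mdev type per size.
echo 4 > "$inst_dir/aggregate"
cat "$inst_dir/aggregate"    # prints: 4
```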
>>>>>
>>>>
>>>> Some 'unknown/undefined' vendor-specific resource just doesn't work.
>>>> An orchestration tool doesn't know which resource to configure, or
>>>> what/how to configure it, for which vendor.
>>>> It has to be well defined.
>>>>
>>>> You can also find such discussion in the recent lgpu DRM cgroup
>>>> patch series v4.
>>>>
>>>> Exposing networking resource configuration at the PCI device level,
>>>> in mdev sysfs that is not net-namespace aware, is a no-go.
>>>> Adding per-file NET_ADMIN or other checks is not the approach we
>>>> follow in the kernel.
>>>>
>>>> devlink is a subsystem, though under net, that has a very rich
>>>> interface for syscalls, device health, resource management and much
>>>> more.
>>>> Even though it is used by net drivers today, it is written for
>>>> generic device management at the bus/device level.
>>>>
>>>> Yuval has posted patches to manage PCI sub-devices [1], and an
>>>> updated version which addresses the comments will be posted soon.
>>>>
>>>> For any device-slice resource management (mdev, sub-function, etc.),
>>>> we should be using a single kernel interface such as devlink [2], [3].
>>>>
>>>> [1] https://lore.kernel.org/netdev/1573229926-30040-1-git-send-email-yuvalav@...lanox.com/
>>>> [2] http://man7.org/linux/man-pages/man8/devlink-dev.8.html
>>>> [3] http://man7.org/linux/man-pages/man8/devlink-resource.8.html
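For reference, the devlink resource interface pointed to in [3] looks like the following. The device handle and resource path are illustrative (the /kvd path comes from the mlxsw example in the devlink-resource man page; the size value is a placeholder), and the commands are printed rather than executed since they need real hardware and root:

```shell
DEV=pci/0000:03:00.0    # illustrative device handle

# Inspect the device's resource tree, then resize one resource; a
# 'devlink dev reload' makes the new size take effect.
show_cmd="devlink resource show $DEV"
set_cmd="devlink resource set $DEV path /kvd/linear size 98304"
reload_cmd="devlink dev reload $DEV"

printf '%s\n' "$show_cmd" "$set_cmd" "$reload_cmd"
```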
>>>>
>>>> Most modern device configuration that I am aware of is done via
>>>> well-defined ioctl()s of the subsystem (vhost, virtio, vfio, rdma,
>>>> nvme and more) or via netlink commands (net, devlink, rdma and
>>>> more), not via sysfs.
>>>>
>>>
>>> The current vfio/mdev configuration is done via a documented sysfs
>>> ABI rather than those other ways. So this adheres to that way to
>>> introduce a more configurable method on the mdev device as a
>>> standard; it's optional and not actually vendor specific, e.g.
>>> vfio-ap.
>>>
>> Some unknown/undefined resource called 'aggregate' is just not an ABI.
>> It has to be well defined, e.g. 'hardware_address', 'num_netdev_sqs' or something similar appropriate to that mdev device class.
>> If users want to set a parameter for an mdev regardless of vendor, they must have a single way to do so.
>
> The idea is not specific to some device class, but applies to each mdev
> type's resource, and is optional for each vendor. If a more device-class
> specific way is preferred, then we might end up with very different ways
> for different vendors. Better to avoid that; so the point here is to
> aggregate a number of one mdev type's resources for the target instance,
> instead of defining many kinds of mdev types for those numbers of
> resources.
>
A parameter or attribute certainly can be optional.
But the way to aggregate them should not be vendor specific.
Look at some excellent existing examples across subsystems: for example,
how you create an aggregated netdev or block device does not depend on
the vendor or the underlying device type.
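To make the netdev analogy concrete: aggregating links into a bond uses the same generic iproute2 commands regardless of NIC vendor. The interface names below are placeholders, and the commands are printed rather than run since they need root and real interfaces:

```shell
# Vendor-independent link aggregation: the same commands work for any
# NIC that has a netdev, with no vendor-specific knob involved.
cmds='ip link add bond0 type bond mode 802.3ad
ip link set eth0 master bond0
ip link set eth1 master bond0
ip link set bond0 up'

printf '%s\n' "$cmds"
```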
>>
>>> I'm not sure how many devices support devlink now, or if it really
>>> makes sense to utilize devlink for devices other than net, or to take
>>> mdev resource configuration from there...
>>>
>> This is about adding new knobs, not the existing ones.
>> They have to be well defined. 'aggregate' is not a word that describes them.
>> If this is something very device specific, it should be prefixed with 'misc_', or it should be a misc_X ioctl().
>> Miscellaneous, not-so-well-defined classes of devices are usually registered using misc_register().
>> Similarly, attributes have to be well defined; otherwise they should fall under the misc category, especially when you are pointing to 3 well-defined specifications.
>>
>
> Any suggestion for naming it?
If the parameter is miscellaneous, please prefix it with misc in the
mdev ioctl() or in sysfs.
If the parameter/attribute is max_netdev_txqs for a netdev, name it that.
If it's max_dedicated_wqs of some dsa device, please name it that way.