Message-ID: <20231006042807.GA22906@linuxonhyperv3.guj3yctzbm1etfxqx2vob5hsef.xx.internal.cloudapp.net>
Date: Thu, 5 Oct 2023 21:28:07 -0700
From: Saurabh Singh Sengar <ssengar@...ux.microsoft.com>
To: Greg KH <gregkh@...uxfoundation.org>
Cc: Saurabh Singh Sengar <ssengar@...rosoft.com>,
KY Srinivasan <kys@...rosoft.com>,
Haiyang Zhang <haiyangz@...rosoft.com>,
"wei.liu@...nel.org" <wei.liu@...nel.org>,
Dexuan Cui <decui@...rosoft.com>,
"Michael Kelley (LINUX)" <mikelley@...rosoft.com>,
"corbet@....net" <corbet@....net>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-hyperv@...r.kernel.org" <linux-hyperv@...r.kernel.org>,
"linux-doc@...r.kernel.org" <linux-doc@...r.kernel.org>
Subject: Re: [EXTERNAL] Re: [PATCH v4 0/3] UIO driver for low speed Hyper-V
devices
On Tue, Sep 26, 2023 at 05:41:26AM -0700, Saurabh Singh Sengar wrote:
> On Wed, Sep 06, 2023 at 05:23:07AM -0700, Saurabh Singh Sengar wrote:
> > On Tue, Aug 22, 2023 at 01:48:03PM +0200, Greg KH wrote:
> > > On Mon, Aug 21, 2023 at 07:36:18AM +0000, Saurabh Singh Sengar wrote:
> > > >
> > > >
> > > > > -----Original Message-----
> > > > > From: Greg KH <gregkh@...uxfoundation.org>
> > > > > Sent: Saturday, August 12, 2023 4:45 PM
> > > > > To: Saurabh Sengar <ssengar@...ux.microsoft.com>
> > > > > Cc: KY Srinivasan <kys@...rosoft.com>; Haiyang Zhang
> > > > > <haiyangz@...rosoft.com>; wei.liu@...nel.org; Dexuan Cui
> > > > > <decui@...rosoft.com>; Michael Kelley (LINUX) <mikelley@...rosoft.com>;
> > > > > corbet@....net; linux-kernel@...r.kernel.org; linux-hyperv@...r.kernel.org;
> > > > > linux-doc@...r.kernel.org
> > > > > Subject: [EXTERNAL] Re: [PATCH v4 0/3] UIO driver for low speed Hyper-V
> > > > > devices
> > > > >
> > > > > On Fri, Aug 04, 2023 at 12:09:53AM -0700, Saurabh Sengar wrote:
> > > > > > Hyper-V is adding multiple low speed "speciality" synthetic devices.
> > > > > > Instead of writing a new kernel-level VMBus driver for each device,
> > > > > > make the devices accessible to user space through a UIO-based
> > > > > > hv_vmbus_client driver. Each device can then be supported by a user
> > > > > > space driver. This approach streamlines development and gives user
> > > > > > space applications the flexibility to control the key interactions
> > > > > > with the VMBus ring buffer.
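> > > > > >
> > > > > > (Illustration only, not part of this series: a minimal sketch of the
> > > > > > standard UIO user space access pattern such a driver would build on.
> > > > > > The device node name, mapping index and mapping size below are
> > > > > > assumptions made for the example.)
> > > > > >
> > > > > > 	#include <fcntl.h>
> > > > > > 	#include <stdint.h>
> > > > > > 	#include <stdio.h>
> > > > > > 	#include <sys/mman.h>
> > > > > > 	#include <unistd.h>
> > > > > >
> > > > > > 	int main(void)
> > > > > > 	{
> > > > > > 		/* Assumed device node for the VMBus UIO device. */
> > > > > > 		int fd = open("/dev/uio0", O_RDWR);
> > > > > > 		if (fd < 0)
> > > > > > 			return 1;
> > > > > >
> > > > > > 		/*
> > > > > > 		 * Map UIO mapping #0 (assumed here to be the ring buffer);
> > > > > > 		 * per the UIO ABI, mapping N is selected by an mmap offset
> > > > > > 		 * of N * page size.
> > > > > > 		 */
> > > > > > 		size_t len = 2 * 1024 * 1024;	/* assumed ring size */
> > > > > > 		void *ring = mmap(NULL, len, PROT_READ | PROT_WRITE,
> > > > > > 				  MAP_SHARED, fd, 0 * getpagesize());
> > > > > > 		if (ring == MAP_FAILED)
> > > > > > 			return 1;
> > > > > >
> > > > > > 		for (;;) {
> > > > > > 			uint32_t enable = 1;	/* re-enable interrupts */
> > > > > > 			write(fd, &enable, sizeof(enable));
> > > > > >
> > > > > > 			/* Blocks until the host signals the channel. */
> > > > > > 			uint32_t count;
> > > > > > 			if (read(fd, &count, sizeof(count)) != sizeof(count))
> > > > > > 				break;
> > > > > >
> > > > > > 			/* Process the ring buffer contents here. */
> > > > > > 			printf("interrupt %u\n", count);
> > > > > > 		}
> > > > > >
> > > > > > 		munmap(ring, len);
> > > > > > 		close(fd);
> > > > > > 		return 0;
> > > > > > 	}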
> > > > >
> > > > > Why is it faster to write userspace drivers here? Where are those new drivers,
> > > > > and why can't they be proper kernel drivers? Are all hyper-v drivers going to
> > > > > move to userspace now?
> > > >
> > > > Hi Greg,
> > > >
> > > > You are correct; it isn't faster. However, the developers working on these userspace
> > > > drivers can concentrate entirely on the business logic of these devices. The more
> > > > intricate aspects of the kernel, such as interrupt management and host communication,
> > > > can be encapsulated within the uio driver.
> > >
> > > Yes, kernel drivers are hard, we all know that.
> > >
> > > But if you do it right, it doesn't have to be; saying "it's too hard for
> > > our programmers to write good code for our platform" isn't exactly a
> > > good endorsement of either your programmers or your platform :)
> > >
> > > > The number of Hyper-V devices is substantial and consistently increasing.
> > > > Presently, all of these drivers are in a development/planning phase and
> > > > depend on the acceptance of this UIO driver as a prerequisite.
> > >
> > > Don't make my acceptance of something that you haven't submitted before
> > > a business decision that I need to make; that's disingenuous.
> > >
> > > > Not all Hyper-V drivers will move to userspace, but many new slow Hyper-V
> > > > devices will use this framework and avoid introducing a new kernel driver. We
> > > > also plan to remove some of the existing drivers like kvp/vss.
> > >
> > > Define "slow" please.
> >
> > In the Hyper-V environment, most devices, with the exception of network and storage,
> > typically do not require extensive data read/write exchanges with the host. Such
> > devices are considered to be 'slow' devices.
> >
> > >
> > > > > > The new synthetic devices are low speed devices that don't support
> > > > > > VMBus monitor bits, and so they must use vmbus_setevent() to notify
> > > > > > the host of ring buffer updates. The new driver provides this
> > > > > > functionality along with a configurable ring buffer size.
> > > > > >
> > > > > > Moreover, this patch series adds a new implementation of the fcopy
> > > > > > application that uses the new UIO driver. The older fcopy driver and
> > > > > > application will be phased out gradually. Development of other similar
> > > > > > userspace drivers is still underway.
> > > > >
> > > > > You are adding a new user API for the "ring buffer" size, which is odd for
> > > > > normal UIO drivers as that's not something that UIO was designed for.
> > > > >
> > > > > Why not just make your own generic "uiofs"-type kernel API if you really
> > > > > want to do all of this type of thing in userspace instead of in the kernel?
> > > >
> > > > Could you please elaborate on this suggestion? I couldn't understand it
> > > > completely.
> > >
> > > Why is uio the requirement here? Why not make your own framework to
> > > write hv drivers in userspace that fits in better with the overall goal?
> > > Call it "hvfs" or something like that, much like we have usbfs for
> > > writing usb drivers in userspace.
> > >
> > > Bolting HV drivers onto UIO seems very odd as that is not what this
> > > framework is supposed to be providing at all. UIO was meant to enable "pass
> > > through" memory-mapped drivers that only wanted an interrupt and access
> > > to raw memory locations in the hardware.
> > >
> > > Now you are adding ring buffer management and all sorts of other things
> > > just for your platform. So make it a real subsystem tuned exactly for
> > > what you need and NOT try to force it into the UIO interface (which
> > > should know nothing about ring buffers...)
> >
> > Thank you for elaborating on the details. I will drop the plan to introduce a
> > new UIO driver for this effort. However, I would like to know your thoughts
> > on enhancing the existing 'uio_hv_generic' driver to achieve the same. We
> > already have the 'uio_hv_generic' driver in the Linux kernel, which is used
> > for developing userspace drivers for 'fast' Hyper-V devices.
> >
> > Since these newly introduced synthetic devices operate at a lower speed,
> > they do not support monitor bits. Instead, we must use 'vmbus_setevent()'
> > to signal the host about ring buffer updates. Earlier, we made an attempt
> > to support slow devices in uio_hv_generic:
> > https://lore.kernel.org/lkml/1665685754-13971-1-git-send-email-ssengar@linux.microsoft.com/.
> > At that time, the absence of userspace code (fcopy) hindered progress
> > in this direction.
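> >
> > To make the idea concrete, here is a rough, untested sketch of the
> > direction I have in mind for the irqcontrol path in 'uio_hv_generic'.
> > It assumes vmbus_setevent() can be made callable from this driver
> > (today it is not exported), and the structure/field names follow the
> > existing driver as I read it:
> >
> > 	static int
> > 	hv_uio_irqcontrol(struct uio_info *info, s32 irq_state)
> > 	{
> > 		struct hv_uio_private_data *pdata = info->priv;
> > 		struct hv_device *dev = pdata->device;
> >
> > 		/* Mask/unmask the inbound ring interrupt, as today. */
> > 		dev->channel->inbound.ring_buffer->interrupt_mask = !irq_state;
> > 		virt_mb();
> >
> > 		/*
> > 		 * Slow devices have no monitor bits, so once userspace has
> > 		 * written to the outbound ring it must explicitly signal
> > 		 * the host instead.
> > 		 */
> > 		if (!dev->channel->offermsg.monitor_allocated && irq_state)
> > 			vmbus_setevent(dev->channel);
> >
> > 		return 0;
> > 	}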
> >
> > Acknowledging your valid concerns about introducing a new UIO driver for
> > Hyper-V, I propose enhancing the existing 'uio_hv_generic' driver to
> > accommodate slower devices. As part of this effort, I will ensure the
> > existing 'fcopy' functionality keeps working with the modified
> > 'uio_hv_generic' driver, and I will remove the current 'fcopy' kernel
> > driver and userspace daemon.
> >
> > Please let me know your thoughts. I look forward to your feedback and
> > the opportunity to discuss this proposal further.
>
> Greg,
>
> May I know if enhancing uio_hv_generic.c to support 'slow devices' is
> an acceptable approach? I'm willing to undertake this task and propose
> the necessary modifications.
>
> - Saurabh
ping
>
> >
> > - Saurabh