Message-ID:
<SN6PR02MB4157F630284939E084486AFED46FA@SN6PR02MB4157.namprd02.prod.outlook.com>
Date: Thu, 5 Jun 2025 17:38:57 +0000
From: Michael Kelley <mhklinux@...look.com>
To: Thomas Zimmermann <tzimmermann@...e.de>, Simona Vetter
<simona.vetter@...ll.ch>
CC: David Hildenbrand <david@...hat.com>, "simona@...ll.ch" <simona@...ll.ch>,
"deller@....de" <deller@....de>, "haiyangz@...rosoft.com"
<haiyangz@...rosoft.com>, "kys@...rosoft.com" <kys@...rosoft.com>,
"wei.liu@...nel.org" <wei.liu@...nel.org>, "decui@...rosoft.com"
<decui@...rosoft.com>, "akpm@...ux-foundation.org"
<akpm@...ux-foundation.org>, "weh@...rosoft.com" <weh@...rosoft.com>,
"hch@....de" <hch@....de>, "dri-devel@...ts.freedesktop.org"
<dri-devel@...ts.freedesktop.org>, "linux-fbdev@...r.kernel.org"
<linux-fbdev@...r.kernel.org>, "linux-kernel@...r.kernel.org"
<linux-kernel@...r.kernel.org>, "linux-hyperv@...r.kernel.org"
<linux-hyperv@...r.kernel.org>, "linux-mm@...ck.org" <linux-mm@...ck.org>
Subject: RE: [PATCH v3 3/4] fbdev/deferred-io: Support contiguous kernel
memory framebuffers
From: Thomas Zimmermann <tzimmermann@...e.de> Sent: Thursday, June 5, 2025 8:36 AM
>
> Hi
>
> Am 04.06.25 um 23:43 schrieb Michael Kelley:
> [...]
> > Nonetheless, there's an underlying issue. A main cause of the difference
> > is the number of messages to Hyper-V to update dirty regions. With
> > hyperv_fb using deferred I/O, the messages are limited to 20/second, so
> > the total number of messages to Hyper-V is about 480. But hyperv_drm
> > appears to send 3 messages to Hyper-V for each line of output, or a total of
> > about 3,000,000 messages (~90K/second). That's a lot of additional load
> > on the Hyper-V host, and it adds the 10 seconds of additional elapsed
> > time seen in the guest. There is also this ugly output in dmesg because the
> > ring buffer for sending messages to the Hyper-V host gets full -- Hyper-V
> > doesn't always keep up, at least not on my local laptop where I'm
> > testing:
> >
> > [12574.327615] hyperv_drm 5620e0c7-8062-4dce-aeb7-520c7ef76171: [drm] *ERROR* Unable to send packet via vmbus; error -11
> > [12574.327684] hyperv_drm 5620e0c7-8062-4dce-aeb7-520c7ef76171: [drm] *ERROR* Unable to send packet via vmbus; error -11
> > [12574.327760] hyperv_drm 5620e0c7-8062-4dce-aeb7-520c7ef76171: [drm] *ERROR* Unable to send packet via vmbus; error -11
> > [12574.327841] hyperv_drm 5620e0c7-8062-4dce-aeb7-520c7ef76171: [drm] *ERROR* Unable to send packet via vmbus; error -11
> > [12597.016128] hyperv_sendpacket: 6211 callbacks suppressed
> > [12597.016133] hyperv_drm 5620e0c7-8062-4dce-aeb7-520c7ef76171: [drm] *ERROR* Unable to send packet via vmbus; error -11
> > [12597.016172] hyperv_drm 5620e0c7-8062-4dce-aeb7-520c7ef76171: [drm] *ERROR* Unable to send packet via vmbus; error -11
> > [12597.016220] hyperv_drm 5620e0c7-8062-4dce-aeb7-520c7ef76171: [drm] *ERROR* Unable to send packet via vmbus; error -11
> > [12597.016267] hyperv_drm 5620e0c7-8062-4dce-aeb7-520c7ef76171: [drm] *ERROR* Unable to send packet via vmbus; error -11
> >
> > hyperv_drm could be fixed to not output the ugly messages, but there's
> > still the underlying issue of overrunning the ring buffer, and excessively
> > hammering on the host. If we could get hyperv_drm doing deferred I/O, I
> > would feel much better about going full-on with deprecating hyperv_fb.
>
> I've tried to address the problem with the patches at
>
> https://lore.kernel.org/dri-devel/20250605152637.98493-1-tzimmermann@suse.de/
>
> Testing and feedback are much appreciated.
>
Nice!

I ran the same test case with your patches, and everything works well.
The hyperv_drm numbers are now pretty much the same as the hyperv_fb
numbers for both elapsed time and system CPU time -- within a few
percent -- so the gap I saw earlier for hyperv_drm is gone. There are
no errors from the guest-to-host ring buffer being full, and the total
number of messages to Hyper-V for hyperv_drm is now a few hundred
instead of 3M.

The hyperv_drm message count is still a little higher than for
hyperv_fb, presumably because the simulated vblank rate in hyperv_drm
is higher than the 20 Hz rate used by hyperv_fb deferred I/O. But the
overall numbers are small enough that the difference is in the noise.
Question: what is the default value for the simulated vblank rate?
Just curious ...
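
For anyone comparing the two paths later: as I understand it, the 20 Hz
figure on the hyperv_fb side comes from the fbdev deferred I/O delay,
which is specified in jiffies. Below is a minimal sketch of that kind of
setup; the macro and callback names are illustrative, not the exact
identifiers used in hyperv_fb.c.

#include <linux/fb.h>

/*
 * Illustrative only: a deferred I/O delay of HZ / 20 jiffies caps
 * dirty-region flushes at roughly 20 per second.
 */
#define EXAMPLE_UPDATE_DELAY	(HZ / 20)

/*
 * Hypothetical flush callback: invoked once per deferred I/O interval
 * with the list of pages touched since the last flush, so all writes in
 * that window collapse into a single dirty-region update to the host.
 */
static void example_deferred_io(struct fb_info *info,
				struct list_head *pagereflist)
{
	/*
	 * Walk pagereflist, compute the bounding dirty rectangle, and
	 * send one update message to the host covering the whole batch.
	 */
}

static struct fb_deferred_io example_defio = {
	.delay		= EXAMPLE_UPDATE_DELAY,
	.deferred_io	= example_deferred_io,
};

/*
 * During probe, the driver hooks this up roughly as:
 *
 *	info->fbdefio = &example_defio;
 *	fb_deferred_io_init(info);
 */
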
Michael