Message-ID: <154aa365-0e27-458c-b801-62fd1cbfa169@suse.de>
Date: Thu, 5 Jun 2025 17:35:57 +0200
From: Thomas Zimmermann <tzimmermann@...e.de>
To: Michael Kelley <mhklinux@...look.com>,
Simona Vetter <simona.vetter@...ll.ch>
Cc: David Hildenbrand <david@...hat.com>, "simona@...ll.ch"
<simona@...ll.ch>, "deller@....de" <deller@....de>,
"haiyangz@...rosoft.com" <haiyangz@...rosoft.com>,
"kys@...rosoft.com" <kys@...rosoft.com>,
"wei.liu@...nel.org" <wei.liu@...nel.org>,
"decui@...rosoft.com" <decui@...rosoft.com>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"weh@...rosoft.com" <weh@...rosoft.com>, "hch@....de" <hch@....de>,
"dri-devel@...ts.freedesktop.org" <dri-devel@...ts.freedesktop.org>,
"linux-fbdev@...r.kernel.org" <linux-fbdev@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-hyperv@...r.kernel.org" <linux-hyperv@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>
Subject: Re: [PATCH v3 3/4] fbdev/deferred-io: Support contiguous kernel
memory framebuffers
Hi
Am 04.06.25 um 23:43 schrieb Michael Kelley:
[...]
> Nonetheless, there's an underlying issue. A main cause of the difference
> is the number of messages to Hyper-V to update dirty regions. With
> hyperv_fb using deferred I/O, the messages are limited to 20/second, so
> the total number of messages to Hyper-V is about 480. But hyperv_drm
> appears to send 3 messages to Hyper-V for each line of output, or a total of
> about 3,000,000 messages (~90K/second). That's a lot of additional load
> on the Hyper-V host, and it adds the 10 seconds of additional elapsed
> time seen in the guest. There's also this ugly output in dmesg because the
> ring buffer for sending messages to the Hyper-V host gets full -- Hyper-V
> doesn't always keep up, at least not on my local laptop where I'm
> testing:
>
> [12574.327615] hyperv_drm 5620e0c7-8062-4dce-aeb7-520c7ef76171: [drm] *ERROR* Unable to send packet via vmbus; error -11
> [12574.327684] hyperv_drm 5620e0c7-8062-4dce-aeb7-520c7ef76171: [drm] *ERROR* Unable to send packet via vmbus; error -11
> [12574.327760] hyperv_drm 5620e0c7-8062-4dce-aeb7-520c7ef76171: [drm] *ERROR* Unable to send packet via vmbus; error -11
> [12574.327841] hyperv_drm 5620e0c7-8062-4dce-aeb7-520c7ef76171: [drm] *ERROR* Unable to send packet via vmbus; error -11
> [12597.016128] hyperv_sendpacket: 6211 callbacks suppressed
> [12597.016133] hyperv_drm 5620e0c7-8062-4dce-aeb7-520c7ef76171: [drm] *ERROR* Unable to send packet via vmbus; error -11
> [12597.016172] hyperv_drm 5620e0c7-8062-4dce-aeb7-520c7ef76171: [drm] *ERROR* Unable to send packet via vmbus; error -11
> [12597.016220] hyperv_drm 5620e0c7-8062-4dce-aeb7-520c7ef76171: [drm] *ERROR* Unable to send packet via vmbus; error -11
> [12597.016267] hyperv_drm 5620e0c7-8062-4dce-aeb7-520c7ef76171: [drm] *ERROR* Unable to send packet via vmbus; error -11
>
> hyperv_drm could be fixed to not output the ugly messages, but there's
> still the underlying issue of overrunning the ring buffer, and excessively
> hammering on the host. If we could get hyperv_drm doing deferred I/O, I
> would feel much better about going full-on with deprecating hyperv_fb.
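The rate difference described above comes from deferred I/O coalescing dirty regions and flushing them on a timer, instead of sending a host message per drawing operation. A toy model of that batching (illustrative Python only, not kernel code; the `DirtyBatcher` name, the single union span, and the 20/second default are assumptions made for the sketch, with the rate taken from the figures in the mail):

```python
class DirtyBatcher:
    """Toy model of fbdev deferred I/O: accumulate dirty spans and send
    at most `hz` batched updates per second to the host. Illustrative
    only; names and structure are not the kernel's API."""

    def __init__(self, hz=20):
        self.interval = 1.0 / hz   # minimum time between flushes
        self.dirty = None          # (lo, hi) union of pending dirty spans
        self.last_flush = 0.0
        self.msgs_sent = 0

    def touch(self, off, length, now):
        # Merge the new dirty span into the pending union.
        lo, hi = off, off + length
        if self.dirty is None:
            self.dirty = (lo, hi)
        else:
            self.dirty = (min(self.dirty[0], lo), max(self.dirty[1], hi))
        # Flush only when the timer interval has elapsed.
        if now - self.last_flush >= self.interval:
            self.flush(now)

    def flush(self, now):
        if self.dirty is not None:
            self.msgs_sent += 1    # one host message covers the whole union
            self.dirty = None
        self.last_flush = now
```

Feeding this model 10,000 per-line updates over ten simulated seconds yields on the order of 200 host messages rather than 10,000 (or 30,000 at 3 messages per line), which is the same shape of reduction as the ~480 vs ~3,000,000 comparison above.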
I've tried to address this problem with the patches at
https://lore.kernel.org/dri-devel/20250605152637.98493-1-tzimmermann@suse.de/
Testing and feedback are much appreciated.
Best regards
Thomas
>
> Michael
>
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstrasse 146, 90461 Nuernberg, Germany
GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
HRB 36809 (AG Nuernberg)