Message-ID: <87k06tvop5.fsf@redhat.com>
Date: Sat, 27 Aug 2022 14:46:46 +0200
From: Vitaly Kuznetsov <vkuznets@...hat.com>
To: "Michael Kelley (LINUX)" <mikelley@...rosoft.com>,
"linux-hyperv@...r.kernel.org" <linux-hyperv@...r.kernel.org>
Cc: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"dri-devel@...ts.freedesktop.org" <dri-devel@...ts.freedesktop.org>,
"linux-pci@...r.kernel.org" <linux-pci@...r.kernel.org>,
Bjorn Helgaas <bhelgaas@...gle.com>,
Wei Liu <wei.liu@...nel.org>,
Deepak Rawat <drawat.floss@...il.com>,
KY Srinivasan <kys@...rosoft.com>,
Haiyang Zhang <haiyangz@...rosoft.com>,
Stephen Hemminger <sthemmin@...rosoft.com>,
Dexuan Cui <decui@...rosoft.com>
Subject: RE: [PATCH v2 3/3] Drivers: hv: Never allocate anything besides
framebuffer from framebuffer memory region
"Michael Kelley (LINUX)" <mikelley@...rosoft.com> writes:
> From: Vitaly Kuznetsov <vkuznets@...hat.com> Sent: Thursday, August 25, 2022 2:00 AM
>>
>> Passed-through PCI devices sometimes misbehave on Gen1 VMs when the
>> Hyper-V DRM driver is also loaded. Looking at the IOMEM assignment, we can see e.g.
>>
>> $ cat /proc/iomem
>> ...
>> f8000000-fffbffff : PCI Bus 0000:00
>> f8000000-fbffffff : 0000:00:08.0
>> f8000000-f8001fff : bb8c4f33-2ba2-4808-9f7f-02f3b4da22fe
>> ...
>> fe0000000-fffffffff : PCI Bus 0000:00
>> fe0000000-fe07fffff : bb8c4f33-2ba2-4808-9f7f-02f3b4da22fe
>> fe0000000-fe07fffff : 2ba2:00:02.0
>> fe0000000-fe07fffff : mlx4_core
>>
>> The interesting part is the 'f8000000' region, as it is actually the
>> VM's framebuffer:
>>
>> $ lspci -v
>> ...
>> 0000:00:08.0 VGA compatible controller: Microsoft Corporation Hyper-V virtual VGA
>> (prog-if 00 [VGA controller])
>> Flags: bus master, fast devsel, latency 0, IRQ 11
>> Memory at f8000000 (32-bit, non-prefetchable) [size=64M]
>> ...
>>
>> hv_vmbus: registering driver hyperv_drm
>> hyperv_drm 5620e0c7-8062-4dce-aeb7-520c7ef76171: [drm] Synthvid Version major 3, minor 5
>> hyperv_drm 0000:00:08.0: vgaarb: deactivate vga console
>> hyperv_drm 0000:00:08.0: BAR 0: can't reserve [mem 0xf8000000-0xfbffffff]
>> hyperv_drm 5620e0c7-8062-4dce-aeb7-520c7ef76171: [drm] Cannot request framebuffer, boot fb still active?
>>
>> Note: "Cannot request framebuffer" is not a fatal error in
>> hyperv_setup_gen1() as the code assumes there's some other framebuffer
>> device there, but what we actually have there is some other PCI
>> device's (mlx4 in this case) MMIO region!
>
> My apologies for not getting around to commenting on the previous
> version of this patch. The function hyperv_setup_gen1() and the
> "Cannot request framebuffer" message have gone away as of
> commit a0ab5abced55.
>
True, will fix!
>>
>> The problem appears to be that vmbus_allocate_mmio() is allowed to
>> allocate from the reserved framebuffer region when 'fb_overlap_ok' is
>> set; however, if a request to allocate MMIO comes from some other
>> device before the framebuffer region is claimed, the allocator can
>> happily hand the framebuffer region out to that device.
>
> Interesting. I had never looked at the details of vmbus_allocate_mmio().
> The semantics one might assume of a parameter named "fb_overlap_ok"
> aren't implemented because !fb_overlap_ok essentially has no effect. The
> existing semantics are really "prefer_fb_overlap". This patch implements
> the expected and needed semantics, which is to not allocate from the frame
> buffer space when !fb_overlap_ok.
>
> If that's an accurate high level summary, maybe this commit message
> could describe it that way? The other details you provide about what can
> go wrong should still be included as well.
That's actually a very good summary! Let me update the commit message;
I'll be sending out v3 shortly.
>
>> Note, Gen2 VMs are usually unaffected by the issue because the
>> framebuffer region is already claimed by the EFI fb (in case the
>> kernel supports it), but Gen1 VMs may have this region unclaimed by
>> the time the Hyper-V PCI pass-through driver tries allocating MMIO
>> space, if the Hyper-V DRM/FB drivers load after it. Devices can be
>> brought up in any sequence, so let's resolve the issue by always
>> ignoring the 'fb_mmio' region for non-FB requests, even if the region
>> is unclaimed.
>>
>> Signed-off-by: Vitaly Kuznetsov <vkuznets@...hat.com>
>> ---
>> drivers/hv/vmbus_drv.c | 10 +++++++++-
>> 1 file changed, 9 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
>> index 536f68e563c6..3c833ea60db6 100644
>> --- a/drivers/hv/vmbus_drv.c
>> +++ b/drivers/hv/vmbus_drv.c
>> @@ -2331,7 +2331,7 @@ int vmbus_allocate_mmio(struct resource **new, struct hv_device *device_obj,
>> 			bool fb_overlap_ok)
>> {
>> struct resource *iter, *shadow;
>> - resource_size_t range_min, range_max, start;
>> + resource_size_t range_min, range_max, start, end;
>> const char *dev_n = dev_name(&device_obj->device);
>> int retval;
>>
>> @@ -2366,6 +2366,14 @@ int vmbus_allocate_mmio(struct resource **new, struct hv_device *device_obj,
>> 		range_max = iter->end;
>> start = (range_min + align - 1) & ~(align - 1);
>> for (; start + size - 1 <= range_max; start += align) {
>> + end = start + size - 1;
>> +
>> + /* Skip the whole fb_mmio region if not fb_overlap_ok */
>> + if (!fb_overlap_ok && fb_mmio &&
>> + (((start >= fb_mmio->start) && (start <= fb_mmio->end)) ||
>> + ((end >= fb_mmio->start) && (end <= fb_mmio->end))))
>> + continue;
>> +
>> shadow = __request_region(iter, start, size, NULL,
>> IORESOURCE_BUSY);
>> if (!shadow)
>> --
>> 2.37.1
>
> Other than my musings on the commit message,
>
> Reviewed-by: Michael Kelley <mikelley@...rosoft.com>
>
Thanks!
--
Vitaly