Message-ID:
 <SN6PR02MB41575C18EA832E640484A02CD492A@SN6PR02MB4157.namprd02.prod.outlook.com>
Date: Sat, 17 May 2025 13:34:20 +0000
From: Michael Kelley <mhklinux@...look.com>
To: Saurabh Singh Sengar <ssengar@...rosoft.com>, KY Srinivasan
	<kys@...rosoft.com>, Haiyang Zhang <haiyangz@...rosoft.com>,
	"wei.liu@...nel.org" <wei.liu@...nel.org>, Dexuan Cui <decui@...rosoft.com>,
	"deller@....de" <deller@....de>, "javierm@...hat.com" <javierm@...hat.com>,
	"arnd@...db.de" <arnd@...db.de>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"linux-hyperv@...r.kernel.org" <linux-hyperv@...r.kernel.org>,
	"stable@...r.kernel.org" <stable@...r.kernel.org>
Subject: RE: [EXTERNAL] [PATCH 1/1] Drivers: hv: Always select CONFIG_SYSFB
 for Hyper-V guests

From: Saurabh Singh Sengar <ssengar@...rosoft.com> Sent: Friday, May 16, 2025 9:38 PM
> 
> > From: Michael Kelley <mhklinux@...look.com>
> >
> > The Hyper-V host provides guest VMs with a range of MMIO addresses that
> > guest VMBus drivers can use. The VMBus driver in Linux manages that MMIO
> > space, and allocates portions to drivers upon request. As part of managing
> > that MMIO space in a Generation 2 VM, the VMBus driver must reserve the
> > portion of the MMIO space that Hyper-V has designated for the synthetic
> > frame buffer, and not allocate this space to VMBus drivers other than graphics
> > framebuffer drivers. The synthetic frame buffer MMIO area is described by
> > the screen_info data structure that is passed to the Linux kernel at boot time,
> > so the VMBus driver must access screen_info for Generation 2 VMs. (In
> > Generation 1 VMs, the framebuffer MMIO space is communicated to the
> > guest via a PCI pseudo-device, and access to screen_info is not needed.)
> >
> > In commit a07b50d80ab6 ("hyperv: avoid dependency on screen_info") the
> > VMBus driver's access to screen_info is restricted to when CONFIG_SYSFB is
> > enabled. CONFIG_SYSFB is typically enabled in kernels built for Hyper-V by
> > virtue of having at least one of CONFIG_FB_EFI, CONFIG_FB_VESA, or
> > CONFIG_SYSFB_SIMPLEFB enabled, so the restriction doesn't usually affect
> > anything. But it's valid to have none of these enabled, in which case
> > CONFIG_SYSFB is not enabled, and the VMBus driver is unable to properly
> > reserve the framebuffer MMIO space for graphics framebuffer drivers. The
> > framebuffer MMIO space may be assigned to some other VMBus driver, with
> > undefined results. As an example, if a VM is using a PCI pass-thru NVMe
> > controller to host the OS disk, the PCI NVMe controller is probed before any
> > graphic devices, and the NVMe controller is assigned a portion of the
> > framebuffer MMIO space.
> > Hyper-V reports an error to Linux during the probe, and the OS disk fails to
> > get set up. Then Linux fails to boot in the VM.
> >
> > Fix this by having CONFIG_HYPERV always select SYSFB. Then the VMBus
> > driver in a Gen 2 VM can always reserve the MMIO space for the graphics
> > framebuffer driver, and prevent the undefined behavior.
> 
> One question: Shouldn't SYSFB be selected by the actual graphics framebuffer
> driver that is expected to use it? With this patch, the option is enabled
> regardless of whether there is any user for it, so I'm wondering whether we
> can better optimize this for such systems.
> 

That approach doesn't work. For a cloud-based server, it might make
sense to build a kernel image without either of the Hyper-V graphics
framebuffer drivers (DRM_HYPERV or HYPERV_FB) since in that case the
Linux console is the serial console. But the problem could still occur
where a PCI pass-thru NVMe controller tries to use the MMIO space
that Hyper-V intends for the framebuffer. That problem is directly tied
to CONFIG_SYSFB because it's the VMBus driver that must treat the
framebuffer MMIO space as special. The absence or presence of a
framebuffer driver isn't the key factor, though we've been (incorrectly)
relying on the presence of a framebuffer driver to set CONFIG_SYSFB.

Michael
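
[Editorial note: the reservation logic discussed above can be sketched as a
simple range-overlap check. This is an illustrative standalone sketch, not
the actual VMBus allocator code; the names `fb_range`, `synthetic_fb`, and
`mmio_range_ok`, and the example addresses, are all hypothetical. The point
it illustrates is that without CONFIG_SYSFB the kernel never sees the
boot-time screen_info, so the framebuffer range below would simply be
unknown to the allocator.]

```c
/*
 * Sketch (not actual kernel code) of the MMIO reservation described in
 * the thread: the VMBus MMIO allocator must treat the Hyper-V synthetic
 * framebuffer range, described by the boot-time screen_info, as reserved
 * for framebuffer drivers only. All names and addresses are illustrative.
 */
#include <stdbool.h>
#include <stdint.h>

struct fb_range {
	uint64_t start;
	uint64_t size;
};

/*
 * In the kernel, this range comes from screen_info, which is only
 * populated when CONFIG_SYSFB is enabled; without it the allocator
 * cannot know which range to avoid. Example values only.
 */
static const struct fb_range synthetic_fb = { 0xF8000000ULL, 8ULL << 20 };

/*
 * Allow an MMIO allocation only if the requester is a framebuffer
 * driver or the requested range does not overlap the framebuffer range.
 */
static bool mmio_range_ok(uint64_t start, uint64_t size, bool is_fb_driver)
{
	uint64_t fb_end = synthetic_fb.start + synthetic_fb.size;

	if (is_fb_driver)
		return true;
	return (start + size <= synthetic_fb.start) || (start >= fb_end);
}
```

With this check in place, a PCI pass-thru NVMe controller asking for MMIO
space would be steered away from the framebuffer range instead of being
handed a conflicting allocation, which is the failure mode the patch fixes.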
