Message-ID: <20250211142140.GA2330@linuxonhyperv3.guj3yctzbm1etfxqx2vob5hsef.xx.internal.cloudapp.net>
Date: Tue, 11 Feb 2025 06:21:40 -0800
From: Saurabh Singh Sengar <ssengar@...ux.microsoft.com>
To: Michael Kelley <mhklinux@...look.com>
Cc: "drawat.floss@...il.com" <drawat.floss@...il.com>,
	"maarten.lankhorst@...ux.intel.com" <maarten.lankhorst@...ux.intel.com>,
	"mripard@...nel.org" <mripard@...nel.org>,
	"tzimmermann@...e.de" <tzimmermann@...e.de>,
	"airlied@...il.com" <airlied@...il.com>,
	"simona@...ll.ch" <simona@...ll.ch>,
	"christophe.jaillet@...adoo.fr" <christophe.jaillet@...adoo.fr>,
	"wei.liu@...nel.org" <wei.liu@...nel.org>,
	"dri-devel@...ts.freedesktop.org" <dri-devel@...ts.freedesktop.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"linux-hyperv@...r.kernel.org" <linux-hyperv@...r.kernel.org>
Subject: Re: [PATCH 1/1] drm/hyperv: Fix address space leak when Hyper-V DRM
 device is removed

On Tue, Feb 11, 2025 at 03:46:51AM +0000, Michael Kelley wrote:
> From: Saurabh Singh Sengar <ssengar@...ux.microsoft.com> Sent: Monday, February 10, 2025 7:33 PM
> > 
> > On Mon, Feb 10, 2025 at 11:34:41AM -0800, mhkelley58@...il.com wrote:
> > > From: Michael Kelley <mhklinux@...look.com>
> > >
> > > When a Hyper-V DRM device is probed, the driver allocates MMIO space for
> > > the vram, and maps it cacheable. If the device is removed, or in the
> > > error path for device probing, the MMIO space is released but no unmap is
> > > done. Consequently, the kernel address space for the mapping is leaked.
> > >
> > > Fix this by adding iounmap() calls in the device removal path, and in the
> > > error path during device probing.
> > >
> > > Fixes: f1f63cbb705d ("drm/hyperv: Fix an error handling path in hyperv_vmbus_probe()")
> > > Fixes: a0ab5abced55 ("drm/hyperv : Removing the restruction of VRAM allocation with PCI bar size")
> > > Signed-off-by: Michael Kelley <mhklinux@...look.com>
> > > ---
> > >  drivers/gpu/drm/hyperv/hyperv_drm_drv.c | 2 ++
> > >  1 file changed, 2 insertions(+)
> > >
> > > diff --git a/drivers/gpu/drm/hyperv/hyperv_drm_drv.c b/drivers/gpu/drm/hyperv/hyperv_drm_drv.c
> > > index e0953777a206..b491827941f1 100644
> > > --- a/drivers/gpu/drm/hyperv/hyperv_drm_drv.c
> > > +++ b/drivers/gpu/drm/hyperv/hyperv_drm_drv.c
> > > @@ -156,6 +156,7 @@ static int hyperv_vmbus_probe(struct hv_device *hdev,
> > >  	return 0;
> > >
> > >  err_free_mmio:
> > > +	iounmap(hv->vram);
> > >  	vmbus_free_mmio(hv->mem->start, hv->fb_size);
> > >  err_vmbus_close:
> > >  	vmbus_close(hdev->channel);
> > > @@ -174,6 +175,7 @@ static void hyperv_vmbus_remove(struct hv_device *hdev)
> > >  	vmbus_close(hdev->channel);
> > >  	hv_set_drvdata(hdev, NULL);
> > >
> > > +	iounmap(hv->vram);
> > >  	vmbus_free_mmio(hv->mem->start, hv->fb_size);
> > >  }
> > >
> > > --
> > > 2.25.1
> > >
> > 
> > Thanks for the fix. May I know how you find such issues?
> 
> I think I was looking at the vmbus_free_mmio() call sites in the
> Hyper-V FB driver and realized that such call sites should probably
> also have an associated iounmap(). Then I looked at the same thing
> in the Hyper-V DRM driver and realized there were no calls to
> iounmap() at all!
> 
> To confirm, the contents of /proc/vmallocinfo can be filtered
> for ioremap calls with size 8 MiB (which actually show up as
> 8 MiB + 4 KiB because the address space allocator adds a guard
> page to each allocation). When doing repeated unbind/bind
> sequences on the DRM driver, those 8 MiB entries in
> /proc/vmallocinfo kept accumulating and were never freed.
> 
> Michael
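
A minimal, untested sketch of this kind of check, assuming the usual
/proc/vmallocinfo line format (address range, size in bytes, caller,
flags) and that the 8 MiB mapping plus the 4 KiB guard page is
reported as 8392704 bytes:

/*
 * Sketch: count ioremap entries in /proc/vmallocinfo whose reported
 * size is 8 MiB plus the 4 KiB guard page (8392704 bytes), the size
 * the leaked Hyper-V DRM vram mappings would show up as. Run as root.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *f = fopen("/proc/vmallocinfo", "r");
	char line[512];
	unsigned long count = 0;

	if (!f) {
		perror("/proc/vmallocinfo");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		/* each line: <addr range> <size> <caller> [flags...] */
		if (strstr(line, "ioremap") && strstr(line, " 8392704 "))
			count++;
	}
	fclose(f);
	printf("ioremap entries of 8 MiB + 4 KiB: %lu\n", count);
	return 0;
}

Running it before and after a few unbind/bind cycles of the DRM driver
would show the count growing on an unpatched kernel and staying
constant once the iounmap() calls are in place.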

Thank you!

Regards,
Saurabh
