Message-ID:
<SN6PR02MB41575726CE86AAB16FF6C365D4FD2@SN6PR02MB4157.namprd02.prod.outlook.com>
Date: Tue, 11 Feb 2025 03:46:51 +0000
From: Michael Kelley <mhklinux@...look.com>
To: Saurabh Singh Sengar <ssengar@...ux.microsoft.com>
CC: "drawat.floss@...il.com" <drawat.floss@...il.com>,
"maarten.lankhorst@...ux.intel.com" <maarten.lankhorst@...ux.intel.com>,
"mripard@...nel.org" <mripard@...nel.org>, "tzimmermann@...e.de"
<tzimmermann@...e.de>, "airlied@...il.com" <airlied@...il.com>,
"simona@...ll.ch" <simona@...ll.ch>, "christophe.jaillet@...adoo.fr"
<christophe.jaillet@...adoo.fr>, "wei.liu@...nel.org" <wei.liu@...nel.org>,
"dri-devel@...ts.freedesktop.org" <dri-devel@...ts.freedesktop.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-hyperv@...r.kernel.org" <linux-hyperv@...r.kernel.org>
Subject: RE: [PATCH 1/1] drm/hyperv: Fix address space leak when Hyper-V DRM
device is removed
From: Saurabh Singh Sengar <ssengar@...ux.microsoft.com> Sent: Monday, February 10, 2025 7:33 PM
>
> On Mon, Feb 10, 2025 at 11:34:41AM -0800, mhkelley58@...il.com wrote:
> > From: Michael Kelley <mhklinux@...look.com>
> >
> > When a Hyper-V DRM device is probed, the driver allocates MMIO space for
> > the vram, and maps it cacheable. If the device is removed, or in the
> > error path for device probing, the MMIO space is released but no unmap
> > is done. Consequently, the kernel address space for the mapping is leaked.
> >
> > Fix this by adding iounmap() calls in the device removal path, and in the
> > error path during device probing.
> >
> > Fixes: f1f63cbb705d ("drm/hyperv: Fix an error handling path in hyperv_vmbus_probe()")
> > Fixes: a0ab5abced55 ("drm/hyperv : Removing the restruction of VRAM allocation with PCI bar size")
> > Signed-off-by: Michael Kelley <mhklinux@...look.com>
> > ---
> > drivers/gpu/drm/hyperv/hyperv_drm_drv.c | 2 ++
> > 1 file changed, 2 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/hyperv/hyperv_drm_drv.c
> b/drivers/gpu/drm/hyperv/hyperv_drm_drv.c
> > index e0953777a206..b491827941f1 100644
> > --- a/drivers/gpu/drm/hyperv/hyperv_drm_drv.c
> > +++ b/drivers/gpu/drm/hyperv/hyperv_drm_drv.c
> > @@ -156,6 +156,7 @@ static int hyperv_vmbus_probe(struct hv_device *hdev,
> > return 0;
> >
> > err_free_mmio:
> > + iounmap(hv->vram);
> > vmbus_free_mmio(hv->mem->start, hv->fb_size);
> > err_vmbus_close:
> > vmbus_close(hdev->channel);
> > @@ -174,6 +175,7 @@ static void hyperv_vmbus_remove(struct hv_device *hdev)
> > vmbus_close(hdev->channel);
> > hv_set_drvdata(hdev, NULL);
> >
> > + iounmap(hv->vram);
> > vmbus_free_mmio(hv->mem->start, hv->fb_size);
> > }
> >
> > --
> > 2.25.1
> >
>
> Thanks for the fix. May I know how you find such issues?
I think it was that I was looking at the Hyper-V FB driver's
vmbus_free_mmio() call sites and realized that such call sites
should probably also have an associated iounmap(). Then I looked
at the same thing in the Hyper-V DRM driver and realized there
were no calls to iounmap() at all!
To confirm, the contents of /proc/vmallocinfo can be filtered
for ioremap calls with size 8 MiB (which actually show up as
8 MiB + 4 KiB because the address space allocator adds a guard
page to each allocation). When doing repeated unbind/bind
sequences on the DRM driver, those 8 MiB entries in
/proc/vmallocinfo kept accumulating and were never freed.
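The filtering described above can be sketched as a small script.
This is a minimal illustration, not part of the patch: the sample
/proc/vmallocinfo lines below are made up for the example, and the
size constant assumes the 8 MiB VRAM mapping plus the 4 KiB guard
page mentioned above.

```python
# Hypothetical sketch: count ioremap entries in /proc/vmallocinfo
# output whose size matches the leaked Hyper-V DRM VRAM mapping.
# The sample text is illustrative, not from a real system.

LEAK_SIZE = 8 * 1024 * 1024 + 4096  # 8 MiB VRAM + one 4 KiB guard page

def count_vram_mappings(vmallocinfo_text):
    """Return how many ioremap entries match the leaked-VRAM size."""
    count = 0
    for line in vmallocinfo_text.splitlines():
        fields = line.split()
        # Format: "0xSTART-0xEND SIZE CALLER ... ioremap"
        if len(fields) < 2 or "ioremap" not in line:
            continue
        if int(fields[1]) == LEAK_SIZE:
            count += 1
    return count

sample = """\
0xffffb000c0000000-0xffffb000c0801000 8392704 ioremap_cache+0x4c/0x60 phys=f8000000 ioremap
0xffffb000c1000000-0xffffb000c1801000 8392704 ioremap_cache+0x4c/0x60 phys=f8000000 ioremap
0xffffb000c2000000-0xffffb000c2005000   20480 some_other_caller+0x10/0x20 vmalloc
"""

print(count_vram_mappings(sample))  # -> 2
```

On a live system the same check is `grep ioremap /proc/vmallocinfo`
and watching whether the matching entries grow across unbind/bind
cycles, as described above.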
Michael
>
> Reviewed-by: Saurabh Sengar <ssengar@...ux.microsoft.com>
> Tested-by: Saurabh Sengar <ssengar@...ux.microsoft.com>
>
> - Saurabh