Message-ID: <ZbPFZVdToW389mj_@intel.com>
Date: Fri, 26 Jan 2024 16:44:53 +0200
From: Ville Syrjälä <ville.syrjala@...ux.intel.com>
To: wangxiaoming321 <xiaoming.wang@...el.com>
Cc: lucas.demarchi@...el.com, ogabbay@...nel.org,
thomas.hellstrom@...ux.intel.com, maarten.lankhorst@...ux.intel.com,
mripard@...nel.org, tzimmermann@...e.de, airlied@...il.com,
daniel@...ll.ch, intel-xe@...ts.freedesktop.org,
dri-devel@...ts.freedesktop.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] drm/xe/display: Fix memleak in display initialization
On Fri, Jan 26, 2024 at 10:34:33PM +0800, wangxiaoming321 wrote:
> intel_power_domains_init is called twice during xe_device_probe:
> xe_device_probe -> xe_display_init_nommio -> intel_power_domains_init(xe)
> xe_device_probe -> xe_display_init_noirq -> intel_display_driver_probe_noirq
> -> intel_power_domains_init(i915)
Why are you calling it twice?
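
To spell out what that does: each call allocates a fresh power_wells
array and overwrites the previous pointer, so the allocation from the
first call becomes unreachable -- which is the "unreferenced object"
kmemleak is reporting below. Roughly this pattern, in a standalone
userspace sketch (names and sizes purely illustrative, not the actual
driver structures):

/* Calling an init that allocates power_wells twice simply
 * overwrites the first pointer; nothing ever frees it. */
#include <stdio.h>
#include <stdlib.h>

struct power_domains {
	void *power_wells;	/* kmalloc'ed array in the real driver */
};

static int power_domains_init(struct power_domains *pd)
{
	/* A second call overwrites pd->power_wells; the first
	 * allocation is leaked. */
	pd->power_wells = malloc(512);
	return pd->power_wells ? 0 : -1;
}

int main(void)
{
	struct power_domains pd = { 0 };

	power_domains_init(&pd);	/* first probe path */
	power_domains_init(&pd);	/* second probe path */

	printf("only the second allocation (%p) is still reachable\n",
	       pd.power_wells);
	return 0;
}
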
>
> Add a flag to avoid allocating power_domains->power_wells twice.
>
> unreferenced object 0xffff88811150ee00 (size 512):
> comm "systemd-udevd", pid 506, jiffies 4294674198 (age 3605.560s)
> hex dump (first 32 bytes):
> 10 b4 9d a0 ff ff ff ff ff ff ff ff ff ff ff ff ................
> ff ff ff ff ff ff ff ff 00 00 00 00 00 00 00 00 ................
> backtrace:
> [<ffffffff8134b901>] __kmem_cache_alloc_node+0x1c1/0x2b0
> [<ffffffff812c98b2>] __kmalloc+0x52/0x150
> [<ffffffffa08b0033>] __set_power_wells+0xc3/0x360 [xe]
> [<ffffffffa08562fc>] xe_display_init_nommio+0x4c/0x70 [xe]
> [<ffffffffa07f0d1c>] xe_device_probe+0x3c/0x5a0 [xe]
> [<ffffffffa082e48f>] xe_pci_probe+0x33f/0x5a0 [xe]
> [<ffffffff817f2187>] local_pci_probe+0x47/0xa0
> [<ffffffff817f3db3>] pci_device_probe+0xc3/0x1f0
> [<ffffffff8192f2a2>] really_probe+0x1a2/0x410
> [<ffffffff8192f598>] __driver_probe_device+0x78/0x160
> [<ffffffff8192f6ae>] driver_probe_device+0x1e/0x90
> [<ffffffff8192f92a>] __driver_attach+0xda/0x1d0
> [<ffffffff8192c95c>] bus_for_each_dev+0x7c/0xd0
> [<ffffffff8192e159>] bus_add_driver+0x119/0x220
> [<ffffffff81930d00>] driver_register+0x60/0x120
> [<ffffffffa05e50a0>] 0xffffffffa05e50a0
>
> Signed-off-by: wangxiaoming321 <xiaoming.wang@...el.com>
> ---
> drivers/gpu/drm/i915/display/intel_display_power.c | 8 ++++++++
> 1 file changed, 8 insertions(+)
>
> diff --git a/drivers/gpu/drm/i915/display/intel_display_power.c b/drivers/gpu/drm/i915/display/intel_display_power.c
> index bf9685acf75a..3b48a1cb7c54 100644
> --- a/drivers/gpu/drm/i915/display/intel_display_power.c
> +++ b/drivers/gpu/drm/i915/display/intel_display_power.c
> @@ -36,6 +36,8 @@
> for_each_power_well_reverse(__dev_priv, __power_well) \
> for_each_if(test_bit((__domain), (__power_well)->domains.bits))
>
> +static int intel_power_domains_init_flag = 0;
> +
> const char *
> intel_display_power_domain_str(enum intel_display_power_domain domain)
> {
> @@ -1016,6 +1018,11 @@ int intel_power_domains_init(struct drm_i915_private *dev_priv)
> {
> struct i915_power_domains *power_domains = &dev_priv->display.power.domains;
>
> + if(intel_power_domains_init_flag == 1)
> + return 0;
> +
> + intel_power_domains_init_flag++;
Consider what happens when you have multiple Intel GPUs in the system...
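
With one module-global flag shared by every device instance, only the
first GPU that probes actually gets its power wells set up; the second
GPU returns early here with power_wells never allocated. A minimal
standalone sketch of that failure mode (illustrative names only, not
the real structures):

/* One module-global "already initialised" flag shared by all devices. */
#include <stdio.h>
#include <stdlib.h>

struct device_priv {
	void *power_wells;
};

static int init_flag;	/* the proposed module-global flag */

static int power_domains_init(struct device_priv *priv)
{
	if (init_flag == 1)
		return 0;	/* a second *device* also bails out here */
	init_flag++;

	priv->power_wells = malloc(512);
	return priv->power_wells ? 0 : -1;
}

int main(void)
{
	struct device_priv gpu0 = { 0 }, gpu1 = { 0 };

	power_domains_init(&gpu0);
	power_domains_init(&gpu1);	/* skipped: flag already set by gpu0 */

	printf("gpu0 power_wells: %p\n", gpu0.power_wells);
	printf("gpu1 power_wells: %p\n", gpu1.power_wells);	/* (nil) */
	return 0;
}

The state has to live per-device (or the duplicate call has to go
away), not in a file-scope static.
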
> +
> dev_priv->display.params.disable_power_well =
> sanitize_disable_power_well_option(dev_priv,
> dev_priv->display.params.disable_power_well);
> @@ -1041,6 +1048,7 @@ int intel_power_domains_init(struct drm_i915_private *dev_priv)
> */
> void intel_power_domains_cleanup(struct drm_i915_private *dev_priv)
> {
> + intel_power_domains_init_flag = 0;
> intel_display_power_map_cleanup(&dev_priv->display.power.domains);
> }
>
> --
> 2.25.1
--
Ville Syrjälä
Intel