Message-ID: <fb590417-2c27-1f46-2dc8-e90931d9c600@alu.unizg.hr>
Date: Tue, 20 Dec 2022 18:20:39 +0100
From: Mirsad Goran Todorovac <mirsad.todorovac@....unizg.hr>
To: Tvrtko Ursulin <tvrtko.ursulin@...ux.intel.com>,
srinivas pandruvada <srinivas.pandruvada@...ux.intel.com>,
LKML <linux-kernel@...r.kernel.org>, jani.nikula@...ux.intel.com,
joonas.lahtinen@...ux.intel.com,
Rodrigo Vivi <rodrigo.vivi@...el.com>
Cc: Thorsten Leemhuis <regressions@...mhuis.info>,
intel-gfx@...ts.freedesktop.org
Subject: Re: Possible regression in drm/i915 driver: memleak
On 20. 12. 2022. 16:52, Tvrtko Ursulin wrote:
> On 20/12/2022 15:22, srinivas pandruvada wrote:
>> +Added DRM mailing list and maintainers
>>
>> On Tue, 2022-12-20 at 15:33 +0100, Mirsad Todorovac wrote:
>>> Hi all,
>>>
>>> I have been unable to find the e-mail addresses of any particular
>>> Intel i915 maintainers, so my best bet is to post here, as you most
>>> assuredly already know them.
>
> For future reference you can use ${kernel_dir}/scripts/get_maintainer.pl -f ...
Thank you, this will help a great deal provided that I find any
more bugs ...
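For example, for the file touched by the patch below (my own example
invocation, run from the top of the source tree):

    $ ./scripts/get_maintainer.pl -f drivers/gpu/drm/i915/gem/i915_gem_mman.c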
>>> The problem is a kernel memory leak that is repeatedly triggered
>>> during the execution of the Chrome browser under the latest 6.1.0+
>>> kernel of this morning and AlmaLinux 8.6, on a Lenovo desktop box
>>> with an Intel(R) Core(TM) i5-8400 CPU @ 2.80GHz.
>>>
>>> The kernel is built with KMEMLEAK, KASAN and MGLRU turned on,
>>> on a vanilla mainline kernel from Mr. Torvalds' tree.
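(Concretely, assuming the standard option names for those three
features, the .config contains at least:

    CONFIG_DEBUG_KMEMLEAK=y
    CONFIG_KASAN=y
    CONFIG_LRU_GEN=y

The attached config file is authoritative, not this excerpt.)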
>>>
>>> The leaks look like this one:
>>>
>>> unreferenced object 0xffff888131754880 (size 64):
>>>   comm "chrome", pid 13058, jiffies 4298568878 (age 3708.084s)
>>>   hex dump (first 32 bytes):
>>>     01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
>>>     00 00 00 00 00 00 00 00 00 80 1e 3e 83 88 ff ff  ...........>....
>>>   backtrace:
>>>     [<ffffffff9e9b5542>] slab_post_alloc_hook+0xb2/0x340
>>>     [<ffffffff9e9bbf5f>] __kmem_cache_alloc_node+0x1bf/0x2c0
>>>     [<ffffffff9e8f767a>] kmalloc_trace+0x2a/0xb0
>>>     [<ffffffffc08dfde5>] drm_vma_node_allow+0x45/0x150 [drm]
>>>     [<ffffffffc0b33315>] __assign_mmap_offset_handle+0x615/0x820 [i915]
>>>     [<ffffffffc0b34057>] i915_gem_mmap_offset_ioctl+0x77/0x110 [i915]
>>>     [<ffffffffc08bc5e1>] drm_ioctl_kernel+0x181/0x280 [drm]
>>>     [<ffffffffc08bc9cd>] drm_ioctl+0x2dd/0x6a0 [drm]
>>>     [<ffffffff9ea54744>] __x64_sys_ioctl+0xc4/0x100
>>>     [<ffffffff9fbc0178>] do_syscall_64+0x58/0x80
>>>     [<ffffffff9fc000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
>>>
>>> The complete list of leaks is in the attachment, but they all seem
>>> similar or identical.
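(For anyone wanting to reproduce: the reports above were read through
the usual kmemleak debugfs interface, assuming debugfs is mounted at
/sys/kernel/debug:

    # echo scan > /sys/kernel/debug/kmemleak
    # cat /sys/kernel/debug/kmemleak
)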
>>>
>>> Please find attached the lshw output and the kernel build config file.
>>>
>>> I will probably check the same parameters on my laptop at home, which
>>> is also a Lenovo, but with a different hw config and Ubuntu 22.10.
>
> Could you try the below patch?
>
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
> index c3ea243d414d..0b07534c203a 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
> @@ -679,9 +679,10 @@ mmap_offset_attach(struct drm_i915_gem_object *obj,
>  insert:
>  	mmo = insert_mmo(obj, mmo);
>  	GEM_BUG_ON(lookup_mmo(obj, mmap_type) != mmo);
> -out:
> +
>  	if (file)
>  		drm_vma_node_allow(&mmo->vma_node, file);
> +out:
>  	return mmo;
>
>  err:
> Maybe it is not the best fix, but I am curious to know whether it makes the leak go away.
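If I read the diff correctly, the imbalance is that the reused-offset
path (the "goto out" after a successful lookup_mmo()) used to fall
through the old out: label straight into drm_vma_node_allow(), so every
repeated MMAP_OFFSET ioctl on the same object took a reference that the
single revoke at file-close time never returned. A minimal,
self-contained illustration of that pattern follows -- my own stand-in
code, not the driver source:

#include <stdio.h>
#include <stdlib.h>

/* Stand-in for struct drm_vma_offset_file: one tracking struct per
 * (object, file) pair, freed only when its refcount drops to zero. */
struct vma_file { int vm_count; };

static struct vma_file *node_allow(struct vma_file *f)
{
	if (!f)
		f = calloc(1, sizeof(*f));   /* first allow() allocates */
	if (f)
		f->vm_count++;               /* every allow() takes a ref */
	return f;
}

static struct vma_file *node_revoke(struct vma_file *f)
{
	if (f && --f->vm_count == 0) {       /* balanced: really freed */
		free(f);
		return NULL;
	}
	return f;                            /* unbalanced: never freed */
}

int main(void)
{
	struct vma_file *f = NULL;
	int i;

	for (i = 0; i < 3; i++)  /* three MMAP_OFFSET ioctls, same object */
		f = node_allow(f);
	f = node_revoke(f);      /* but only one revoke at close time */
	printf("vm_count after close: %d (non-zero means leaked)\n",
	       f ? f->vm_count : 0);
	return 0;
}

With the patch, drm_vma_node_allow() is only reached on the
freshly-inserted path, so the calls pair up with the revoke again.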
The patch was successfully applied to the latest Mr. Torvalds' tree (commit b6bb9676f216).
It is currently building, which can take up to 90 minutes on our system.
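(For the record, the routine here is nothing special -- the patch file
name below is simply what I saved Tvrtko's diff as:

    $ git apply tvrtko-i915-mmap-offset.patch
    $ make olddefconfig && make -j"$(nproc)"
)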
Now the test depends on whether I will be able to set up the machine at work
remotely (port 22 has recently been firewalled there).
I will keep you updated.
Thanks,
Mirsad
--
Mirsad Goran Todorovac
System engineer
Faculty of Graphic Arts | Academy of Fine Arts
University of Zagreb, Republic of Croatia
The European Union