Message-ID: <20250211134120.7e10b504@collabora.com>
Date: Tue, 11 Feb 2025 13:41:20 +0100
From: Boris Brezillon <boris.brezillon@...labora.com>
To: Tvrtko Ursulin <tursulin@...ulin.net>
Cc: Adrián Larumbe <adrian.larumbe@...labora.com>, Steven
 Price <steven.price@....com>, Liviu Dudau <liviu.dudau@....com>, Maarten
 Lankhorst <maarten.lankhorst@...ux.intel.com>, Maxime Ripard
 <mripard@...nel.org>, Thomas Zimmermann <tzimmermann@...e.de>, David Airlie
 <airlied@...il.com>, Simona Vetter <simona@...ll.ch>, kernel@...labora.com,
 dri-devel@...ts.freedesktop.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] drm/panthor: Replace sleep locks with spinlocks in
 fdinfo path

On Tue, 11 Feb 2025 11:39:49 +0000
Tvrtko Ursulin <tursulin@...ulin.net> wrote:

> On 10/02/2025 16:08, Adrián Larumbe wrote:
> > Hi Tvrtko,  
> 
> Thanks!
> 
> > [18153.770244] BUG: sleeping function called from invalid context at kernel/locking/mutex.c:562
> > [18153.771059] in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 203412, name: cat
> > [18153.771757] preempt_count: 1, expected: 0
> > [18153.772164] RCU nest depth: 0, expected: 0
> > [18153.772538] INFO: lockdep is turned off.
> > [18153.772898] CPU: 4 UID: 0 PID: 203412 Comm: cat Tainted: G        W          6.14.0-rc1-panthor-next-rk3588-fdinfo+ #1
> > [18153.772906] Tainted: [W]=WARN
> > [18153.772908] Hardware name: Radxa ROCK 5B (DT)
> > [18153.772911] Call trace:
> > [18153.772913]  show_stack+0x24/0x38 (C)
> > [18153.772927]  dump_stack_lvl+0x3c/0x98
> > [18153.772935]  dump_stack+0x18/0x24
> > [18153.772941]  __might_resched+0x298/0x2b0
> > [18153.772948]  __might_sleep+0x6c/0xb0
> > [18153.772953]  __mutex_lock_common+0x7c/0x1950
> > [18153.772962]  mutex_lock_nested+0x38/0x50
> > [18153.772969]  panthor_fdinfo_gather_group_samples+0x80/0x138 [panthor]
> > [18153.773042]  panthor_show_fdinfo+0x80/0x228 [panthor]
> > [18153.773109]  drm_show_fdinfo+0x1a4/0x1e0 [drm]
> > [18153.773397]  seq_show+0x274/0x358
> > [18153.773404]  seq_read_iter+0x1d4/0x630  
> 
> There is a mutex_lock literally in seq_read_iter.
> 
> So colour me confused. What created the atomic context between there
> and the Panthor code?! I just don't see it.

Uh, looks like we've leaked an atomic context somewhere, indeed.
Adrian, do you have a reliable reproducer for this bug?
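
To illustrate the failure mode, a minimal, hypothetical sketch (module
and lock names are invented; this is not code from the patch) of the
generic pattern that produces exactly this splat on a kernel with
CONFIG_DEBUG_ATOMIC_SLEEP: an atomic section is still open, so
preempt_count is 1 when mutex_lock() is reached and __might_sleep()
fires in __mutex_lock_common(), as in the trace above.

#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/spinlock.h>

/* Illustrative only; these names are made up, not from panthor. */
static DEFINE_SPINLOCK(leaked_lock);
static DEFINE_MUTEX(sample_lock);

static int __init atomic_leak_demo_init(void)
{
	/*
	 * On a non-PREEMPT_RT kernel, spin_lock() disables preemption,
	 * raising preempt_count to 1: we are now in atomic context.
	 */
	spin_lock(&leaked_lock);

	/*
	 * Taking a sleeping lock here makes __might_sleep() report
	 * "BUG: sleeping function called from invalid context" with
	 * in_atomic(): 1, preempt_count: 1, matching the report.
	 */
	mutex_lock(&sample_lock);
	mutex_unlock(&sample_lock);

	spin_unlock(&leaked_lock);
	return 0;
}

static void __exit atomic_leak_demo_exit(void)
{
}

module_init(atomic_leak_demo_init);
module_exit(atomic_leak_demo_exit);
MODULE_LICENSE("GPL");

In the trace above, though, no spinlock shows up between seq_read_iter()
and panthor_fdinfo_gather_group_samples(), which is what makes a leaked
atomic section (a spin_lock() or preempt_disable() taken earlier with no
matching release) the likely suspect.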
