Message-ID: <3d069c26-4971-415a-9751-a28d207feb43@redhat.com>
Date: Fri, 14 Feb 2025 09:09:37 -0500
From: Waiman Long <llong@...hat.com>
To: Marco Elver <elver@...gle.com>
Cc: Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...hat.com>,
Will Deacon <will.deacon@....com>, Boqun Feng <boqun.feng@...il.com>,
Andrey Ryabinin <ryabinin.a.a@...il.com>,
Alexander Potapenko <glider@...gle.com>,
Andrey Konovalov <andreyknvl@...il.com>, Dmitry Vyukov <dvyukov@...gle.com>,
Vincenzo Frascino <vincenzo.frascino@....com>, linux-kernel@...r.kernel.org,
kasan-dev@...glegroups.com
Subject: Re: [PATCH v4 4/4] locking/lockdep: Add kasan_check_byte() check in
lock_acquire()
On 2/14/25 5:44 AM, Marco Elver wrote:
> On Thu, 13 Feb 2025 at 21:02, Waiman Long <longman@...hat.com> wrote:
>> KASAN instrumentation of lockdep has been disabled as we don't need
>> KASAN to check the validity of lockdep's internal data structures, and
>> it would incur unnecessary performance overhead. However, the
>> lockdep_map pointer passed in externally may not be valid (e.g.
>> use-after-free) and we run the risk of using garbage data, resulting
>> in false lockdep reports. Add a kasan_check_byte() call in
>> lock_acquire() for non-kernel-core data objects to catch an invalid
>> lockdep_map and abort lockdep processing if the input data isn't valid.
>>
>> Suggested-by: Marco Elver <elver@...gle.com>
>> Signed-off-by: Waiman Long <longman@...hat.com>
> Reviewed-by: Marco Elver <elver@...gle.com>
>
> but double-check whether the below can be simplified.
>
>> ---
>>  kernel/locking/lock_events_list.h |  1 +
>>  kernel/locking/lockdep.c          | 14 ++++++++++++++
>>  2 files changed, 15 insertions(+)
>>
>> diff --git a/kernel/locking/lock_events_list.h b/kernel/locking/lock_events_list.h
>> index 9ef9850aeebe..bed59b2195c7 100644
>> --- a/kernel/locking/lock_events_list.h
>> +++ b/kernel/locking/lock_events_list.h
>> @@ -95,3 +95,4 @@ LOCK_EVENT(rtmutex_deadlock) /* # of rt_mutex_handle_deadlock()'s */
>> LOCK_EVENT(lockdep_acquire)
>> LOCK_EVENT(lockdep_lock)
>> LOCK_EVENT(lockdep_nocheck)
>> +LOCK_EVENT(lockdep_kasan_fail)
>> diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
>> index 8436f017c74d..98dd0455d4be 100644
>> --- a/kernel/locking/lockdep.c
>> +++ b/kernel/locking/lockdep.c
>> @@ -57,6 +57,7 @@
>> #include <linux/lockdep.h>
>> #include <linux/context_tracking.h>
>> #include <linux/console.h>
>> +#include <linux/kasan.h>
>>
>> #include <asm/sections.h>
>>
>> @@ -5830,6 +5831,19 @@ void lock_acquire(struct lockdep_map *lock, unsigned int subclass,
>> if (!debug_locks)
>> return;
>>
>> + /*
>> + * As KASAN instrumentation is disabled and lock_acquire() is usually
>> + * the first lockdep call when a task tries to acquire a lock, add
>> + * kasan_check_byte() here to check for use-after-free of non kernel
>> + * core lockdep_map data to avoid referencing garbage data.
>> + */
>> + if (unlikely(IS_ENABLED(CONFIG_KASAN) &&
> This is not needed - kasan_check_byte() will always return true if
> KASAN is disabled or not compiled in.
I added this check because of the is_kernel_core_data() call.
>
>> + !is_kernel_core_data((unsigned long)lock) &&
> Why use !is_kernel_core_data()? Is it to improve performance?
Not exactly. In my testing, just using kasan_check_byte() doesn't quite
work out: it seems to return false positives in some cases, causing
lockdep splats. I didn't look into exactly why this happens, so I added
the is_kernel_core_data() call to work around that.
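For reference, the combined check being discussed looks roughly like
this, pieced together from the hunks quoted above. Since the quoted
diff is trimmed in this reply, the kasan_check_byte() leg and the
bail-out path (bumping the new lockdep_kasan_fail event) are a sketch
based on the patch description rather than the exact patch code:

        /* In lock_acquire(), right after the debug_locks check */
        if (unlikely(IS_ENABLED(CONFIG_KASAN) &&
                     !is_kernel_core_data((unsigned long)lock) &&
                     !kasan_check_byte(lock))) {
                /* Assumed bail-out: count the failure, skip lockdep */
                lockevent_inc(lockdep_kasan_fail);
                return;
        }

Note that when KASAN is disabled or not built in, kasan_check_byte()
is a static inline stub that always returns true, so the
IS_ENABLED(CONFIG_KASAN) test mainly short-circuits the
is_kernel_core_data() lookup in that configuration.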
Cheers,
Longman