Message-ID: <0cdfb26d-7faa-9da0-05b9-79bb21703283@acm.org>
Date: Thu, 19 Dec 2019 18:57:13 -0800
From: Bart Van Assche <bvanassche@....org>
To: Waiman Long <longman@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Will Deacon <will.deacon@....com>
Cc: linux-kernel@...r.kernel.org
Subject: Re: [PATCH] locking/lockdep: Fix potential buffer overrun problem in
stack_trace[]
On 2019-12-19 10:28, Waiman Long wrote:
> If the lockdep code is really running out of the stack_trace entries,
> there is a possiblity that buffer overrun can happen and corrupt the
             ^^^^^^^^^^
             possibility?
> data immediately after stack_trace[].
>
> If there are fewer than LOCK_TRACE_SIZE_IN_LONGS entries left before
> the call to save_trace(), the max_entries computation will wrap around
> and leave max_entries with a very large positive value because of its
> unsigned nature. The subsequent call to stack_trace_save() will then
> corrupt the data after stack_trace[]. Fix that by changing max_entries
> to a signed integer and checking for a negative value before calling
> stack_trace_save().
>
> Fixes: 12593b7467f9 ("locking/lockdep: Reduce space occupied by stack traces")
> Signed-off-by: Waiman Long <longman@...hat.com>
> ---
> kernel/locking/lockdep.c | 7 +++----
> 1 file changed, 3 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> index 32282e7112d3..56e260a7582f 100644
> --- a/kernel/locking/lockdep.c
> +++ b/kernel/locking/lockdep.c
> @@ -482,7 +482,7 @@ static struct lock_trace *save_trace(void)
>  	struct lock_trace *trace, *t2;
>  	struct hlist_head *hash_head;
>  	u32 hash;
> -	unsigned int max_entries;
> +	int max_entries;
>
>  	BUILD_BUG_ON_NOT_POWER_OF_2(STACK_TRACE_HASH_SIZE);
>  	BUILD_BUG_ON(LOCK_TRACE_SIZE_IN_LONGS >= MAX_STACK_TRACE_ENTRIES);
> @@ -490,10 +490,8 @@ static struct lock_trace *save_trace(void)
>  	trace = (struct lock_trace *)(stack_trace + nr_stack_trace_entries);
>  	max_entries = MAX_STACK_TRACE_ENTRIES - nr_stack_trace_entries -
>  		LOCK_TRACE_SIZE_IN_LONGS;
> -	trace->nr_entries = stack_trace_save(trace->entries, max_entries, 3);
>
> -	if (nr_stack_trace_entries >= MAX_STACK_TRACE_ENTRIES -
> -	    LOCK_TRACE_SIZE_IN_LONGS - 1) {
> +	if (max_entries < 0) {
>  		if (!debug_locks_off_graph_unlock())
>  			return NULL;
>
> @@ -502,6 +500,7 @@ static struct lock_trace *save_trace(void)
>
>  		return NULL;
>  	}
> +	trace->nr_entries = stack_trace_save(trace->entries, max_entries, 3);
>
>  	hash = jhash(trace->entries, trace->nr_entries *
>  		     sizeof(trace->entries[0]), 0);
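
As an illustration of the underflow the commit message describes, here is
a minimal user-space sketch (the constants and values below are made up
and this is of course not the kernel code; it only shows why the unsigned
subtraction produces a huge bound once the buffer is nearly full):

#include <stdio.h>

/* Stand-ins for the lockdep constants; the real values are much larger. */
#define MAX_STACK_TRACE_ENTRIES		16
#define LOCK_TRACE_SIZE_IN_LONGS	4

int main(void)
{
	/* The buffer is almost full: only two longs are still free. */
	unsigned int nr_stack_trace_entries = 14;

	/*
	 * Unsigned arithmetic: 16 - 14 - 4 wraps around to a huge positive
	 * value, so passing it to stack_trace_save() as the array size
	 * would allow writes far past the end of stack_trace[].
	 */
	unsigned int max_unsigned = MAX_STACK_TRACE_ENTRIES -
			nr_stack_trace_entries - LOCK_TRACE_SIZE_IN_LONGS;

	/* Signed arithmetic: the same computation yields -2, which is easy
	 * to test for before anything is written. */
	int max_signed = MAX_STACK_TRACE_ENTRIES -
			(int)nr_stack_trace_entries - LOCK_TRACE_SIZE_IN_LONGS;

	printf("unsigned max_entries: %u\n", max_unsigned);	/* 4294967294 */
	printf("signed   max_entries: %d\n", max_signed);	/* -2 */
	return 0;
}

With the patch, the negative value is caught before stack_trace_save() is
called.
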
I'm not sure whether it is useful to call stack_trace_save() if
max_entries == 0. How about changing the "max_entries < 0" test into
"max_entries <= 0"?
Thanks,
Bart.