Message-ID: <eb2fa38c-d963-4466-8702-e7017557e718@suse.cz>
Date: Mon, 25 Aug 2025 19:28:17 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Kuan-Wei Chiu <visitorckw@...il.com>, akpm@...ux-foundation.org
Cc: cl@...two.org, rientjes@...gle.com, roman.gushchin@...ux.dev,
harry.yoo@...cle.com, glittao@...il.com, jserv@...s.ncku.edu.tw,
linux-mm@...ck.org, linux-kernel@...r.kernel.org, stable@...r.kernel.org,
Joshua Hahn <joshua.hahnjy@...il.com>
Subject: Re: [PATCH 1/2] mm/slub: Fix cmp_loc_by_count() to return 0 when
counts are equal
On 8/25/25 03:34, Kuan-Wei Chiu wrote:
> The comparison function cmp_loc_by_count() used for sorting stack trace
> locations in debugfs currently returns -1 if a->count > b->count and 1
> otherwise. This breaks the antisymmetry property required by sort(),
> because when two counts are equal, both cmp(a, b) and cmp(b, a) return
> 1.
Good catch.
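
For illustration, a minimal user-space sketch (struct and counts made up, and
the third `data` argument of the real comparator dropped) of how the current
comparator answers both orderings with 1 when the counts are equal:

#include <stdio.h>

/* Simplified stand-in for struct location; only the count field matters here. */
struct location { unsigned long count; };

/* The comparator as it exists before this patch: it can never return 0. */
static int cmp_loc_by_count_old(const void *a, const void *b)
{
	const struct location *loc1 = a;
	const struct location *loc2 = b;

	if (loc1->count > loc2->count)
		return -1;
	else
		return 1;
}

int main(void)
{
	struct location x = { .count = 5 }, y = { .count = 5 };

	/* Both directions claim "greater", violating antisymmetry. */
	printf("cmp(x, y) = %d\n", cmp_loc_by_count_old(&x, &y)); /* prints 1 */
	printf("cmp(y, x) = %d\n", cmp_loc_by_count_old(&y, &x)); /* prints 1 */
	return 0;
}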
> This can lead to undefined or incorrect ordering results. Fix it by
Wonder if it can really affect anything in practice other than needlessly
swapping some records with an equal count?
> explicitly returning 0 when the counts are equal, ensuring that the
> comparison function follows the expected mathematical properties.
Agreed with the cmp_int() suggestion for a v2.
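
A rough sketch of what such a v2 could look like, assuming the cmp_int()
helper (which evaluates to (l > r) - (l < r)) is available in this
compilation unit; the operands are swapped so the descending sort by count
is preserved:

/* Hypothetical v2 sketch, not the actual patch: one expression yields all
 * three outcomes (-1, 0, 1). Arguments are reversed because the debugfs
 * output is sorted by descending count.
 */
static int cmp_loc_by_count(const void *a, const void *b, const void *data)
{
	struct location *loc1 = (struct location *)a;
	struct location *loc2 = (struct location *)b;

	return cmp_int(loc2->count, loc1->count);
}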
> Fixes: 553c0369b3e1 ("mm/slub: sort debugfs output by frequency of stack traces")
> Cc: stable@...r.kernel.org
I don't think it can cause any serious bugs, so Cc: stable is unnecessary.
> Signed-off-by: Kuan-Wei Chiu <visitorckw@...il.com>
Thanks!
> ---
> mm/slub.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 30003763d224..c91b3744adbc 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -7718,8 +7718,9 @@ static int cmp_loc_by_count(const void *a, const void *b, const void *data)
>
>  	if (loc1->count > loc2->count)
>  		return -1;
> -	else
> +	if (loc1->count < loc2->count)
>  		return 1;
> +	return 0;
>  }
>
> static void *slab_debugfs_start(struct seq_file *seq, loff_t *ppos)