Message-ID: <ptwb6urnzbov545jsndxa4d324ezvor5vutbcev64dwauibwaj@kammuj4pbi45>
Date: Tue, 18 Mar 2025 16:25:15 +0100
From: Mateusz Guzik <mjguzik@...il.com>
To: Christoph Hellwig <hch@....de>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Al Viro <viro@...iv.linux.org.uk>, Christian Brauner <brauner@...nel.org>,
Gao Xiang <xiang@...nel.org>, Chao Yu <chao@...nel.org>,
Andreas Gruenbacher <agruenba@...hat.com>, linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-erofs@...ts.ozlabs.org, gfs2@...ts.linux.dev
Subject: Re: [PATCH 3/8] lockref: use bool for false/true returns
On Wed, Jan 15, 2025 at 10:46:39AM +0100, Christoph Hellwig wrote:
> Replace int used as bool with the actual bool type for return values that
> can only be true or false.
>
[snip]
> -int lockref_get_not_zero(struct lockref *lockref)
> +bool lockref_get_not_zero(struct lockref *lockref)
> {
> - int retval;
> + bool retval = false;
>
> CMPXCHG_LOOP(
> new.count++;
> if (old.count <= 0)
> - return 0;
> + return false;
> ,
> - return 1;
> + return true;
> );
>
> spin_lock(&lockref->lock);
> - retval = 0;
> if (lockref->count > 0) {
> lockref->count++;
> - retval = 1;
> + retval = true;
> }
> spin_unlock(&lockref->lock);
> return retval;

While this looks perfectly sane, it worsens codegen around the atomic on
x86-64, at least with gcc 13.3.0. I bisected the regression to this
commit and confirmed that the top of next-20250318 with it reverted
undoes it.

The expected state looks like this:

f0 48 0f b1 13 lock cmpxchg %rdx,(%rbx)
75 0e jne ffffffff81b33626 <lockref_get_not_dead+0x46>

However, with the above patch I see:

f0 48 0f b1 13 lock cmpxchg %rdx,(%rbx)
40 0f 94 c5 sete %bpl
40 84 ed test %bpl,%bpl
74 09 je ffffffff81b33636 <lockref_get_not_dead+0x46>

This is not the end of the world, but it also really does not need to
be there.
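
For anyone who wants to poke at this outside the kernel tree, below is a
rough standalone approximation of the fast path. To be clear, this is
not the kernel's CMPXCHG_LOOP: it uses plain gcc atomic builtins instead
of try_cmpxchg64_relaxed, and the file/function names are made up for
illustration. Compile with gcc -O2 and objdump -d the object to compare
the int- and bool-returning shapes; whether a given gcc version emits
the extra sete/test for the bool variant is not guaranteed.

/* repro.c: gcc -O2 -c repro.c && objdump -d repro.o */
#include <stdbool.h>
#include <stdint.h>

struct ref {
	int64_t count;
};

/* int-returning variant, modeled on the pre-patch shape */
int get_not_zero_int(struct ref *r)
{
	int64_t old = __atomic_load_n(&r->count, __ATOMIC_RELAXED);

	while (old > 0) {
		/* on failure, old is refreshed and the loop retries */
		if (__atomic_compare_exchange_n(&r->count, &old, old + 1,
						false, __ATOMIC_SEQ_CST,
						__ATOMIC_RELAXED))
			return 1;
	}
	return 0;
}

/* bool-returning variant, modeled on the post-patch shape */
bool get_not_zero_bool(struct ref *r)
{
	int64_t old = __atomic_load_n(&r->count, __ATOMIC_RELAXED);

	while (old > 0) {
		if (__atomic_compare_exchange_n(&r->count, &old, old + 1,
						false, __ATOMIC_SEQ_CST,
						__ATOMIC_RELAXED))
			return true;
	}
	return false;
}

The point is that the success of the cmpxchg ideally gets consumed by a
conditional branch straight off the flags; when it is instead shuffled
through a byte register first, you get the extra sete/test pair shown
above.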

Given that the patch is merely a cosmetic change, I would suggest it
gets dropped.