Message-Id: <9242c5c2-2011-45bf-8679-3f918323788e@app.fastmail.com>
Date: Wed, 28 Aug 2024 22:01:06 +0200
From: "Arnd Bergmann" <arnd@...db.de>
To: "Paul E. McKenney" <paulmck@...nel.org>
Cc: linux-kernel@...r.kernel.org, Linux-Arch <linux-arch@...r.kernel.org>
Subject: Re: 16-bit store instructions &c?
On Wed, Aug 28, 2024, at 14:22, Paul E. McKenney wrote:
> On Wed, Aug 28, 2024 at 01:48:41PM +0000, Arnd Bergmann wrote:
>
>> There is a related problem with ARM RiscPC, which
>> uses a kernel built with -march=armv3, and that
>> disallows 16-bit load/store instructions entirely,
>> similar to how alpha ev5 and earlier lacked both
>> byte and word access.
>
> And one left to go. Progress, anyway. ;-)
What I meant to say about this one is that we can probably
ignore it as well, since it is on its way out already, at the
latest when gcc-9 becomes the minimum compiler version, as
gcc-8 was the last release to support -march=armv3. We could
also ask Russell whether he is OK with dropping it earlier,
as he is almost certainly the only user.
>> Everything else that I see has native load/store
>> on 16-bit words and either has 16-bit atomics or
>> can emulate them using the 32-bit ones.
>>
>> However, the one thing that people usually
>> want 16-bit xchg() for is qspinlock, and that
>> one not only depends on it being atomic but also
>> on strict forward-progress guarantees, which
>> I think the emulated version can't provide
>> in general.
>>
>> This does not prevent architectures from doing
>> it anyway.
>
> Given that the simpler spinlock does not provide forward-progress
> guarantees, I don't see any reason that these guarantees cannot be voided
> for architectures without native 16-bit stores and atomics.
>
> After all, even without those guarantees, qspinlock provides very real
> benefits over simple spinlocks.
My understanding of this problem is that with a trivial bit
spinlock, the worst case is that one task never gets the lock
while others also want it, but a qspinlock built on a flawed
xchg() implementation may end up with none of the CPUs ever
getting the lock. That may not matter in practice, but it does
feel worse.
Arnd