Date:   Tue, 19 Sep 2023 00:37:47 +1000
From:   Greg Ungerer <gregungerer@...tnet.com.au>
To:     Matthew Wilcox <willy@...radead.org>
Cc:     linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
        linux-arch@...r.kernel.org, torvalds@...ux-foundation.org,
        Nicholas Piggin <npiggin@...il.com>
Subject: Re: [PATCH 09/17] m68k: Implement xor_unlock_is_negative_byte


On 17/9/23 00:34, Matthew Wilcox wrote:
> On Sat, Sep 16, 2023 at 11:11:32PM +1000, Greg Ungerer wrote:
>> On 16/9/23 04:36, Matthew Wilcox (Oracle) wrote:
>>> Using EOR to clear the guaranteed-to-be-set lock bit will test the
>>> negative flag just like the x86 implementation.  This should be
>>> more efficient than the generic implementation in filemap.c.  It
>>> would be better if m68k had __GCC_ASM_FLAG_OUTPUTS__.
>>>
>>> Signed-off-by: Matthew Wilcox (Oracle) <willy@...radead.org>
>>> ---
>>>    arch/m68k/include/asm/bitops.h | 14 ++++++++++++++
>>>    1 file changed, 14 insertions(+)
>>>
>>> diff --git a/arch/m68k/include/asm/bitops.h b/arch/m68k/include/asm/bitops.h
>>> index e984af71df6b..909ebe7cab5d 100644
>>> --- a/arch/m68k/include/asm/bitops.h
>>> +++ b/arch/m68k/include/asm/bitops.h
>>> @@ -319,6 +319,20 @@ arch___test_and_change_bit(unsigned long nr, volatile unsigned long *addr)
>>>    	return test_and_change_bit(nr, addr);
>>>    }
>>> +static inline bool xor_unlock_is_negative_byte(unsigned long mask,
>>> +		volatile unsigned long *p)
>>> +{
>>> +	char result;
>>> +	char *cp = (char *)p + 3;	/* m68k is big-endian */
>>> +
>>> +	__asm__ __volatile__ ("eor.b %1, %2; smi %0"
>>
>> The ColdFire members of the 68k family do not support byte size eor:
>>
>>    CC      mm/filemap.o
>> {standard input}: Assembler messages:
>> {standard input}:824: Error: invalid instruction for this architecture; needs 68000 or higher (68000 [68ec000, 68hc000, 68hc001, 68008, 68302, 68306, 68307, 68322, 68356], 68010, 68020 [68k, 68ec020], 68030 [68ec030], 68040 [68ec040], 68060 [68ec060], cpu32 [68330, 68331, 68332, 68333, 68334, 68336, 68340, 68341, 68349, 68360], fidoa [fido]) -- statement `eor.b #1,3(%a0)' ignored
> 
> Well, that sucks.  What do you suggest for Coldfire?

I am not seeing an easy way to avoid falling back to something like the MIPS
implementation for ColdFire. We could obviously hand-code it in assembler to do
better than gcc, but if it has to be atomic I think we are stuck with the irq locking.

static inline bool cf_xor_is_negative_byte(unsigned long mask,
                volatile unsigned long *addr)
{
        unsigned long flags;
        unsigned long data;

        /* no byte-size eor on ColdFire, so do the read-modify-write with irqs off */
        local_irq_save(flags);
        data = *addr;
        *addr = data ^ mask;
        local_irq_restore(flags);

        return (data & BIT(7)) != 0;
}
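
Just to illustrate how that could be wired in (a sketch only, not tested, and
assuming we key it off the existing CONFIG_COLDFIRE symbol): guard the fallback
in asm/bitops.h and keep your eor.b/smi version in the #else branch:

#ifdef CONFIG_COLDFIRE
static inline bool xor_unlock_is_negative_byte(unsigned long mask,
                volatile unsigned long *p)
{
        unsigned long flags;
        unsigned long data;

        /* no byte-size eor on ColdFire, so do the RMW with irqs off */
        local_irq_save(flags);
        data = *p;
        *p = data ^ mask;
        local_irq_restore(flags);

        /* bit 7 of the low byte, i.e. the byte at (char *)p + 3 on big-endian */
        return (data & BIT(7)) != 0;
}
#else
/* eor.b/smi version from the patch */
#endif

That keeps the classic 68k parts on the single-instruction path, and only
ColdFire pays for the irq save/restore.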

Regards
Greg


> (Shame you didn't join in on the original discussion:
> https://lore.kernel.org/linux-m68k/ZLmKq2VLjYGBVhMI@casper.infradead.org/ )
