Message-ID: <877dha6vvq.fsf@oldenburg.str.redhat.com>
Date: Wed, 28 Jul 2021 22:12:09 +0200
From: Florian Weimer <fweimer@...hat.com>
To: Nikolay Borisov <nborisov@...e.com>
Cc: linux-kernel@...r.kernel.org, ndesaulniers@...gle.com,
torvalds@...ux-foundation.org, linux-fsdevel@...r.kernel.org,
david@...morbit.com
Subject: Re: [PATCH] lib/string: Bring optimized memcmp from glibc
* Nikolay Borisov:
> +/*
> + * Compare A and B bytewise in the byte order of the machine.
> + * A and B are known to be different. This is needed only on little-endian
> + * machines.
> + */
> +static inline int memcmp_bytes(unsigned long a, unsigned long b)
> +{
> +	long srcp1 = (long) &a;
> +	long srcp2 = (long) &b;
> +	unsigned long a0, b0;
> +
> +	do {
> +		a0 = ((uint8_t *) srcp1)[0];
> +		b0 = ((uint8_t *) srcp2)[0];
> +		srcp1 += 1;
> +		srcp2 += 1;
> +	} while (a0 == b0);
> +	return a0 - b0;
> +}
Should this be something like this instead?

static inline int memcmp_bytes(unsigned long a, unsigned long b)
{
	if (sizeof(a) == 4)
		return __builtin_bswap32(a) < __builtin_bswap32(b) ? -1 : 1;
	else
		return __builtin_bswap64(a) < __builtin_bswap64(b) ? -1 : 1;
}
(Or whatever macro versions the kernel has for this.)
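For illustration only, a sketch of such a variant using the kernel's swab
helpers (assuming swab32()/swab64() from <linux/swab.h>, and that this is
only reached when the two words are already known to differ, so 0 never
has to be returned) might look like:

#include <linux/swab.h>

/*
 * Illustrative sketch, not part of the posted patch: byte-swapping both
 * little-endian words yields their big-endian images, so a plain
 * unsigned comparison then matches the byte-by-byte ordering that
 * memcmp() has to report.  The caller guarantees a != b, so this never
 * needs to return 0.
 */
static inline int memcmp_bytes(unsigned long a, unsigned long b)
{
	if (sizeof(a) == 4)
		return swab32(a) < swab32(b) ? -1 : 1;
	else
		return swab64(a) < swab64(b) ? -1 : 1;
}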
Or is the expectation that targets that don't have an assembler
implementation for memcmp also have bad bswap built-ins?
Thanks,
Florian