Message-ID: <f7497e7ee0365364e8215d8ee3436812f75096c4.camel@perches.com>
Date: Wed, 13 Mar 2019 18:50:57 -0700
From: Joe Perches <joe@...ches.com>
To: Mathieu Malaterre <malat@...ian.org>,
"Jason A. Donenfeld" <Jason@...c4.com>
Cc: "Gustavo A. R. Silva" <gustavo@...eddedor.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3] lib/siphash.c: annotate implicit fall throughs
On Wed, 2019-03-13 at 22:12 +0100, Mathieu Malaterre wrote:
> There is a plan to build the kernel with -Wimplicit-fallthrough and
> these places in the code produced warnings (W=1). Fix them up.
>
> This commit removes the following warnings:
>
> lib/siphash.c:71:12: warning: this statement may fall through [-Wimplicit-fallthrough=]
> lib/siphash.c:72:12: warning: this statement may fall through [-Wimplicit-fallthrough=]
> lib/siphash.c:73:12: warning: this statement may fall through [-Wimplicit-fallthrough=]
> lib/siphash.c:75:12: warning: this statement may fall through [-Wimplicit-fallthrough=]
> lib/siphash.c:108:12: warning: this statement may fall through [-Wimplicit-fallthrough=]
> lib/siphash.c:109:12: warning: this statement may fall through [-Wimplicit-fallthrough=]
> lib/siphash.c:110:12: warning: this statement may fall through [-Wimplicit-fallthrough=]
> lib/siphash.c:112:12: warning: this statement may fall through [-Wimplicit-fallthrough=]
> lib/siphash.c:434:12: warning: this statement may fall through [-Wimplicit-fallthrough=]
> lib/siphash.c:462:12: warning: this statement may fall through [-Wimplicit-fallthrough=]
>
> Move the break statement onto the next line to match the fall-through
> comment pattern. Also move the trailing statement onto the next line to
> pass checkpatch verification.
[]
> diff --git a/lib/siphash.c b/lib/siphash.c
[]
> @@ -68,13 +68,26 @@ u64 __siphash_aligned(const void *data, size_t len, const siphash_key_t *key)
> bytemask_from_count(left)));
> #else
> switch (left) {
> - case 7: b |= ((u64)end[6]) << 48;
> - case 6: b |= ((u64)end[5]) << 40;
> - case 5: b |= ((u64)end[4]) << 32;
It might also be worth not casting to u64 and then shifting, as that
can be moderately expensive on 32-bit systems, and instead using
((char *)&b)[<appropriate_index>].
> - case 4: b |= le32_to_cpup(data); break;
> - case 3: b |= ((u64)end[2]) << 16;
Perhaps an unnecessary cast before the shift.
> - case 2: b |= le16_to_cpup(data); break;
> - case 1: b |= end[0];
[]
> @@ -101,13 +114,26 @@ u64 __siphash_unaligned(const void *data, size_t len, const siphash_key_t *key)
> bytemask_from_count(left)));
> #else
> switch (left) {
> - case 7: b |= ((u64)end[6]) << 48;
> - case 6: b |= ((u64)end[5]) << 40;
> - case 5: b |= ((u64)end[4]) << 32;
etc...
> - case 4: b |= get_unaligned_le32(end); break;
> - case 3: b |= ((u64)end[2]) << 16;
> - case 2: b |= get_unaligned_le16(end); break;
> - case 1: b |= end[0];