Message-ID: <20171212194532.GA7062@ZenIV.linux.org.uk>
Date: Tue, 12 Dec 2017 19:45:32 +0000
From: Al Viro <viro@...IV.linux.org.uk>
To: Jakub Kicinski <kubakici@...pl>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
	netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [RFC][PATCH] new byteorder primitives - ..._{replace,get}_bits()

On Tue, Dec 12, 2017 at 06:20:02AM +0000, Al Viro wrote:
> Umm...  What's wrong with
>
> #define FIELD_FOO 0,4
> #define FIELD_BAR 6,12
> #define FIELD_BAZ 18,14
>
> A macro can bloody well expand to any sequence of tokens - le32_get_bits(v, FIELD_BAZ)
> will become le32_get_bits(v, 18, 14) just fine.  What's the problem with that?

FWIW, if you want to use the mask, __builtin_ffsll() is not the only way
to do it - you don't need the shift.  A multiplier would do just as well,
and that can be had easier.  If mask = (2*a + 1)<<n = ((2*a)<<n) ^ (1<<n),
then

	mask - 1 = ((2*a) << n) + ((1<<n) - 1) = ((2*a) << n) ^ ((1<<n) - 1)
	mask ^ (mask - 1) = (1<<n) + ((1<<n) - 1)

and

	mask & (mask ^ (mask - 1)) = 1<<n

IOW, with

static __always_inline u64 mask_to_multiplier(u64 mask)
{
	return mask & (mask ^ (mask - 1));
}

we could do

static __always_inline __le64 le64_replace_bits(__le64 old, u64 v, u64 mask)
{
	__le64 m = cpu_to_le64(mask);
	return (old & ~m) | (cpu_to_le64(v * mask_to_multiplier(mask)) & m);
}

static __always_inline u64 le64_get_bits(__le64 v, u64 mask)
{
	return (le64_to_cpu(v) & mask) / mask_to_multiplier(mask);
}

etc.  Compiler will turn those into shifts...  I can live with either
calling convention.  Comments?