Date: Tue, 25 Jun 2024 12:24:23 +0200
From: Alexander Lobakin <aleksander.lobakin@...el.com>
To: Przemek Kitszel <przemyslaw.kitszel@...el.com>
CC: Tony Nguyen <anthony.l.nguyen@...el.com>,
	<intel-wired-lan@...ts.osuosl.org>, "David S. Miller" <davem@...emloft.net>,
	Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>, "Paolo
 Abeni" <pabeni@...hat.com>, Jacob Keller <jacob.e.keller@...el.com>, "Mina
 Almasry" <almasrymina@...gle.com>,
	<nex.sw.ncis.osdt.itp.upstreaming@...el.com>, <netdev@...r.kernel.org>,
	<linux-kernel@...r.kernel.org>
Subject: Re: [PATCH iwl-next v2 01/14] cache: add
 __cacheline_group_{begin,end}_aligned() (+ couple more)

From: Przemek Kitszel <przemyslaw.kitszel@...el.com>
Date: Thu, 20 Jun 2024 17:15:53 +0200

> On 6/20/24 15:53, Alexander Lobakin wrote:
>> __cacheline_group_begin(), unfortunately, doesn't align the group in
>> any way. If alignment is wanted, you need to do something like
>>
>> __cacheline_group_begin(grp) __aligned(ALIGN)
>>
>> which is neither convenient nor compact.
>> Add the _aligned() counterparts to align the groups automatically to
>> either the specified alignment (optional) or ``SMP_CACHE_BYTES``.
>> Note that the actual struct layout will then be (on x86_64 with a
>> 64-byte cacheline):
>>
>> struct x {
>>     u32 y;                // offset 0, size 4, padding 60
>>     __cacheline_group_begin__grp;    // offset 64, size 0
>>     u32 z;                // offset 64, size 4, padding 4
>>     __cacheline_group_end__grp;    // offset 72, size 0
>>     __cacheline_group_pad__grp;    // offset 72, size 0, padding 56
>>     u32 w;                // offset 128
>> };
>>
>> The end marker is aligned to long, so that you can assert the struct
>> size more strictly, while the offset of the next field in the
>> structure is aligned to the group alignment, so that the next field
>> can't fall into a group it doesn't belong to.
>>
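Since the markers are real (zero-sized) struct members, the group
boundaries can also be checked at build time. A minimal sketch, assuming
the CACHELINE_ASSERT_GROUP_{MEMBER,SIZE}() helpers already present in
include/linux/cache.h and the struct x from the layout above
(x_struct_check() is a made-up name; the asserts expand to
BUILD_BUG_ON(), so they need function scope):

	/* called once, e.g. from the driver's init path */
	static void x_struct_check(void)
	{
		/* 'z' must lie between the begin and end markers of 'grp' */
		CACHELINE_ASSERT_GROUP_MEMBER(struct x, grp, z);
		/* the group's payload must fit into one SMP cacheline */
		CACHELINE_ASSERT_GROUP_SIZE(struct x, grp, SMP_CACHE_BYTES);
	}
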
>> Add __LARGEST_ALIGN definition and LARGEST_ALIGN() macro.
>> __LARGEST_ALIGN is the value to which the compilers align fields when
>> __aligned_largest is specified. Sometimes this value is needed
>> outside of variable definitions. LARGEST_ALIGN() is a macro which
>> simply aligns a value to __LARGEST_ALIGN.
>> Also add SMP_CACHE_ALIGN(), similar to L1_CACHE_ALIGN(), but using
>> ``SMP_CACHE_BYTES`` instead of ``L1_CACHE_BYTES``, as the former
>> also accounts for L2, which is needed in some cases.
>>
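For reference, a plausible shape of these helpers (just a sketch, not
the literal hunk from this patch; ALIGN() and __aligned_largest are
pre-existing kernel definitions):

	/* alignment the compiler picks for __aligned_largest fields */
	#define __LARGEST_ALIGN		sizeof(struct { long x; } __aligned_largest)
	#define LARGEST_ALIGN(x)	ALIGN(x, __LARGEST_ALIGN)

	/* like L1_CACHE_ALIGN(), but also covers L2 where it matters */
	#define SMP_CACHE_ALIGN(x)	ALIGN(x, SMP_CACHE_BYTES)
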
>> Signed-off-by: Alexander Lobakin <aleksander.lobakin@...el.com>
>> ---
>>   include/linux/cache.h | 59 +++++++++++++++++++++++++++++++++++++++++++
>>   1 file changed, 59 insertions(+)
>>
> 
> [...]
> 
>> +/**
>> + * __cacheline_group_begin_aligned - declare an aligned group start
>> + * @GROUP: name of the group
>> + * @...: optional group alignment
> 
> didn't know that you could document "..." :)
> 
>> + *
>> + * The following block inside a struct:
>> + *
>> + *    __cacheline_group_begin_aligned(grp);
>> + *    field a;
>> + *    field b;
>> + *    __cacheline_group_end_aligned(grp);
>> + *
>> + * will always be aligned to either the specified alignment or
>> + * ``SMP_CACHE_BYTES``.
>> + */
>> +#define __cacheline_group_begin_aligned(GROUP, ...)        \
>> +    __cacheline_group_begin(GROUP)                \
>> +    __aligned((__VA_ARGS__ + 0) ? : SMP_CACHE_BYTES)
> 
> nice trick :) +0

The usual way to handle varargs. However, this one:

	__cacheline_group_begin_aligned(grp, 63 & 31);

will trigger a compiler warning as it expands to

	__aligned(63 & 31 + 0)

Compilers don't like bitwise and arithmetic operators mixed without
parentheses ('+' binds tighter than '&', so this parses as
63 & (31 + 0)), even in such a simple case =\
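
A standalone reproducer for the record (ALIGNED_OR_DEFAULT is a made-up
userspace stand-in, not the kernel macro; build with -Wall, which
enables -Wparentheses):

	#define ALIGNED_OR_DEFAULT(...) \
		__attribute__((aligned((__VA_ARGS__ + 0) ? : 64)))

	struct s {
		int a ALIGNED_OR_DEFAULT();		/* no arg -> 64 */
		int b ALIGNED_OR_DEFAULT(32);		/* aligned to 32 */
		int c ALIGNED_OR_DEFAULT(63 & 31);	/* -Wparentheses fires here */
	};

Parenthesizing the argument at the call site, i.e. passing (63 & 31),
silences it, but that's easy to forget.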

Thanks,
Olek
