Date:   Wed, 26 May 2021 17:34:02 -0700
From:   Jakub Kicinski <kuba@...nel.org>
To:     Justin Iurman <justin.iurman@...ege.be>
Cc:     netdev@...r.kernel.org, davem@...emloft.net, tom@...bertland.com
Subject: Re: [RESEND PATCH net-next v3 4/5] ipv6: ioam: Support for IOAM
 injection with lwtunnels

On Wed, 26 May 2021 19:16:39 +0200 Justin Iurman wrote:
> Add support for IOAM inline insertion (host-to-host use case only), which is
> configured per route with lightweight tunnels. The corresponding iproute2 patch
> is ready and will be posted as soon as this patchset is merged.
> Here is an overview:
> 
> $ ip -6 ro ad fc00::1/128 encap ioam6 trace type 0x800000 ns 1 size 12 dev eth0
> 
> This example configures an IOAM Pre-allocated Trace option attached to the
> fc00::1/128 prefix. The IOAM namespace (ns) is 1, the size of the pre-allocated
> trace data block is 12 octets (size), and only the first IOAM data field (bit 0:
> hop_limit + node id) is included in the trace type (type), which is a bitfield.
> 
> The reason why the in-transit (IPv6-in-IPv6 encapsulation) use case is not
> implemented is explained in the patchset cover letter.
> 
> Signed-off-by: Justin Iurman <justin.iurman@...ege.be>

Please address the warnings from checkpatch --strict on these patches.

For all patches please make sure you don't use static inline in C
files, and let the compiler decide what to inline by itself.
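For instance (made-up helper, purely to illustrate the point):

	/* foo.c: plain static, no inline keyword; the compiler will still
	 * inline this wherever it considers it worthwhile.
	 */
	static int nodelen_words(int octets)
	{
		return octets / 4;
	}

	int main(void)
	{
		return nodelen_words(8) == 2 ? 0 : 1;
	}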

> +	if (trace->type.bit0) trace->nodelen += sizeof(__be32) / 4;
> +	if (trace->type.bit1) trace->nodelen += sizeof(__be32) / 4;
> +	if (trace->type.bit2) trace->nodelen += sizeof(__be32) / 4;
> +	if (trace->type.bit3) trace->nodelen += sizeof(__be32) / 4;
> +	if (trace->type.bit4) trace->nodelen += sizeof(__be32) / 4;
> +	if (trace->type.bit5) trace->nodelen += sizeof(__be32) / 4;
> +	if (trace->type.bit6) trace->nodelen += sizeof(__be32) / 4;
> +	if (trace->type.bit7) trace->nodelen += sizeof(__be32) / 4;
> +	if (trace->type.bit8) trace->nodelen += sizeof(__be64) / 4;
> +	if (trace->type.bit9) trace->nodelen += sizeof(__be64) / 4;
> +	if (trace->type.bit10) trace->nodelen += sizeof(__be64) / 4;
> +	if (trace->type.bit11) trace->nodelen += sizeof(__be32) / 4;

Seems simpler to do:

	nodelen += hweight16(field & MASK1) * (sizeof(__be32) / 4);
	nodelen += hweight16(field & MASK2) * (sizeof(__be64) / 4);
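
Rough user-space sketch of that idea (the mask values and bit numbering below
are only assumptions for illustration -- bit 0 taken as the MSB of a 16-bit
type field, and __builtin_popcount standing in for the kernel's hweight16()):

	#include <stdio.h>
	#include <stdint.h>

	/* Bits 0-7 and 11 are 4-octet data fields, bits 8-10 are 8-octet
	 * fields, mirroring the quoted patch.
	 */
	#define IOAM6_MASK_4OCT	0xFF10	/* bits 0-7 and 11 */
	#define IOAM6_MASK_8OCT	0x00E0	/* bits 8-10 */

	static unsigned int ioam6_nodelen(uint16_t type)
	{
		unsigned int nodelen = 0;

		nodelen += __builtin_popcount(type & IOAM6_MASK_4OCT) *
			   (unsigned int)(sizeof(uint32_t) / 4);
		nodelen += __builtin_popcount(type & IOAM6_MASK_8OCT) *
			   (unsigned int)(sizeof(uint64_t) / 4);

		return nodelen;
	}

	int main(void)
	{
		/* bit 0 (hop_limit + node id) set -> one 4-octet field -> 1 */
		printf("%u\n", ioam6_nodelen(0x8000));
		return 0;
	}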
