Date:	Fri, 8 Jan 2016 16:41:34 +0000
From:	David Laight <David.Laight@...LAB.COM>
To:	'Florian Westphal' <fw@...len.de>
CC:	'Pablo Neira Ayuso' <pablo@...filter.org>,
	"netfilter-devel@...r.kernel.org" <netfilter-devel@...r.kernel.org>,
	"davem@...emloft.net" <davem@...emloft.net>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: RE: [PATCH 16/17] netfilter: nft_byteorder: provide 64bit le/be
 conversion

From: Florian Westphal
> Sent: 08 January 2016 16:24
> To: David Laight
> David Laight <David.Laight@...LAB.COM> wrote:
> > From: Pablo Neira Ayuso
> > > Sent: 08 January 2016 14:02
> > > From: Florian Westphal <fw@...len.de>
> > >
> > > Needed to convert the (64bit) conntrack counters to BE ordering.
> > >
> > ...
> > >  	switch (priv->size) {
> > > +	case 8: {
> > > +		u64 src64;
> > > +
> > > +		switch (priv->op) {
> > > +		case NFT_BYTEORDER_NTOH:
> > > +			for (i = 0; i < priv->len / 8; i++) {
> > > +				src64 = get_unaligned_be64(&src[i]);
> > > +				src64 = be64_to_cpu((__force __be64)src64);
> > > +				put_unaligned_be64(src64, &dst[i]);
> > > +			}
> > > +			break;
> > > +		case NFT_BYTEORDER_HTON:
> > > +			for (i = 0; i < priv->len / 8; i++) {
> > > +				src64 = get_unaligned_be64(&src[i]);
> > > +				src64 = (__force u64)cpu_to_be64(src64);
> > > +				put_unaligned_be64(src64, &dst[i]);
> > > +			}
> > > +			break;
> > > +		}
> > > +		break;
> >
> > That is horrid.
> 
> Yes, sorry for this, however ...
> 
> > On a little-endian system you are byteswapping the data 3 times.
> > Imagine the code on a cpu that doesn't support misaligned transfers
> > and doesn't have a byteswap instruction.
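
(To spell out the triple swap: on a little-endian CPU, with the
generic byte-wise helpers - real architectures override these - the
NTOH path above expands to roughly the sketch below; the spellings
are illustrative, not exact per-arch definitions.)

	u64 tmp;
	memcpy(&tmp, &src[i], 8);	/* unaligned load */
	tmp = __swab64(tmp);		/* swap 1: get_unaligned_be64() */
	tmp = __swab64(tmp);		/* swap 2: be64_to_cpu() */
	tmp = __swab64(tmp);		/* swap 3: cpu_to_be64() inside
					 * put_unaligned_be64() */
	memcpy(&dst[i], &tmp, 8);	/* unaligned store */

Three swaps where one would do.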
> 
> diff --git a/net/netfilter/nft_byteorder.c b/net/netfilter/nft_byteorder.c
> --- a/net/netfilter/nft_byteorder.c
> +++ b/net/netfilter/nft_byteorder.c
> @@ -46,16 +46,16 @@ static void nft_byteorder_eval(const struct nft_expr *expr,
>  		switch (priv->op) {
>  		case NFT_BYTEORDER_NTOH:
>  			for (i = 0; i < priv->len / 8; i++) {
> -				src64 = get_unaligned_be64(&src[i]);
> +				src64 = get_unaligned((u64 *)&src[i]);
>  				src64 = be64_to_cpu((__force __be64)src64);
> -				put_unaligned_be64(src64, &dst[i]);
> +				put_unaligned(src64, (u64 *)&dst[i]);

Why not just:
				src64 = get_unaligned_be64(&src[i]);
				put_unaligned(src64, (u64 *)&dst[i]);
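
(Spelled out for both directions - the HTON half here is my
extrapolation of the same idea, one conversion per element:)

		case NFT_BYTEORDER_NTOH:
			for (i = 0; i < priv->len / 8; i++) {
				/* the helper does the (at most one) swap */
				src64 = get_unaligned_be64(&src[i]);
				put_unaligned(src64, (u64 *)&dst[i]);
			}
			break;
		case NFT_BYTEORDER_HTON:
			for (i = 0; i < priv->len / 8; i++) {
				src64 = get_unaligned((u64 *)&src[i]);
				/* the helper does the (at most one) swap */
				put_unaligned_be64(src64, &dst[i]);
			}
			break;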

...
> Results in identical object code.

How many architectures have you looked at?
Try sparc64, ppc64 and arm64, especially the LE variants.
(LE sparc would be bad - but it doesn't exist.)
Also try older compilers that lack builtin support for misaligned
memory accesses.
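
(The point being: the generic byte-shift helpers build the value in
the right order as they load it, so a strict-alignment CPU with no
byteswap instruction need never swap at all. Roughly - illustrative
name, not the kernel's spelling:)

	static inline u64 load_be64_bytewise(const u8 *p)
	{
		/* byte-wise big-endian load, no swap instruction used */
		return ((u64)p[0] << 56) | ((u64)p[1] << 48) |
		       ((u64)p[2] << 40) | ((u64)p[3] << 32) |
		       ((u64)p[4] << 24) | ((u64)p[5] << 16) |
		       ((u64)p[6] <<  8) |  (u64)p[7];
	}

whereas a plain get_unaligned() followed by an explicit swap may not
fold back into this form on every compiler/arch combination.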

Actually there is (probably) a generic problem with:
		put_unaligned(src64, (u64 *)&dst[i]);
The C language rules for pointer conversions effectively forbid
pointers to misaligned objects: converting to a more strictly
aligned pointer type is undefined behaviour (C11 6.3.2.3p7), so
the compiler is allowed to assume the result is aligned however
hard you try to persuade it otherwise.
I suspect that even means the (u64 *) cast would be allowed to
mask off the bottom address bits.
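
(A minimal illustration of the hazard - the names are mine, and the
memcpy form is what the byte-copying get/put_unaligned variants boil
down to:)

	void store64(void *p, u64 v)
	{
		/* UB if p is not 8-byte aligned: the compiler may
		 * assume alignment and emit an aligned store */
		*(u64 *)p = v;
	}

	void store64_safe(void *p, u64 v)
	{
		/* well-defined for any alignment */
		memcpy(p, &v, sizeof(v));
	}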

	David
