Message-Id: <256A84C1-DB36-454A-8730-ED7C5C6217F8@gmail.com>
Date:	Thu, 9 Jul 2015 03:36:02 +0200
From:	"H. Mijail" <hmijail@...il.com>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	Andy Shevchenko <andriy.shevchenko@...ux.intel.com>,
	David Howells <dhowells@...hat.com>,
	Vivek Goyal <vgoyal@...hat.com>, linux-kernel@...r.kernel.org,
	trivial@...nel.org, Joe Perches <joe@...ches.com>
Subject: Re: [RESEND 2][PATCH v4] hexdump: fix for non-aligned buffers


> On 09 Jul 2015, at 02:03, Andrew Morton <akpm@...ux-foundation.org> wrote:
> 
> On Thu, 9 Jul 2015 01:44:18 +0200 Horacio Mijail Antón Quiles <hmijail@...il.com> wrote:
> 
>> A hexdump with a buf not aligned to the groupsize causes
>> non-naturally-aligned memory accesses. This was causing a kernel panic on
>> the Blackfin BF527 processor when such an unaligned buffer was fed by the
>> function ubifs_scanned_corruption in fs/ubifs/scan.c.
>> 
>> To fix this, if the buffer is not aligned to groupsize in a platform which
>> does not define CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS, then change the
>> groupsize to 1, so alignment is no longer a problem.
>> This behavior is consistent with the way the function currently deals with
>> inappropriate parameter combinations, which is to fall back to safe
>> "defaults", even if that means changing the output format and the implicit
>> access patterns that could have been expected.
> 
> CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS seems inappropriate for this. 
> Having this unset means "can do unaligned accesses, but they're
> inefficient".  It doesn't mean "unaligned accesses go oops".
> 
> But I can't find the appropriate config knob.  There's a
> CONFIG_CPU_HAS_NO_UNALIGNED, but that’s an m68k-private thing.

I’m only a newbie, but I would argue that checking
CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS is the lesser evil, at least
until a new config variable is defined.

In Documentation/unaligned-memory-access.txt, an undefined
CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS is taken to mean that “the
hardware isn’t able to access memory on arbitrary boundaries”.

And I certainly can’t see any more appropriate CONFIG_* variable.

The other alternative in Documentation/unaligned-memory-access.txt is the
macro get_unaligned() from asm/unaligned.h. However, using get_unaligned()
would mean a much more intrusive patch, since every groupsize case would
have to be changed, and we would still need to check
CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS anyway to avoid penalising everyone.
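
For illustration, a rough sketch of what that would look like in one of
the groupsize branches (loosely based on lib/hexdump.c from memory; the
exact snprintf format and variable names may differ):

	if (groupsize == 4) {
		const u32 *ptr4 = buf;

		for (j = 0; j < len / 4; j++)
			lx += snprintf(linebuf + lx, linebuflen - lx,
				       "%s%8.8x", j ? " " : "",
				       get_unaligned(ptr4 + j));
	}
	/* ...and the equivalent change repeated in the u16 and u64
	 * branches, which is what makes this variant intrusive. */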

Lastly, as noted in the patch description, hex_dump_to_buffer() already
has no qualms about sanitizing the received parameters down to
conservative values, so any current callers are already used to having
the expected output and the implicit memory access patterns changed.
I’d say that changing the groupsize is not only harmless, but expected.
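
To make that concrete, the sanitizing that hex_dump_to_buffer() already
does near its top looks roughly like this (paraphrased from memory, not
a verbatim quote of lib/hexdump.c):

	if (rowsize != 16 && rowsize != 32)	/* only 16- or 32-byte rows */
		rowsize = 16;
	if (len > rowsize)			/* limit to one line at a time */
		len = rowsize;
	if (!is_power_of_2(groupsize) || groupsize > 8)
		groupsize = 1;
	if ((len % groupsize) != 0)		/* no mixed size output */
		groupsize = 1;

The proposed check follows exactly the same pattern: detect a parameter
combination that cannot be honoured and quietly fall back to a safe one.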

> 
>> Resent on 8 Jul because of no answers.
>> 
>> Resent on 29 Jun because of no answers.
> 
> During the merge window.  So I have everything sitting there in my
> patches-to-process pile.  The backlog is excessive this time (700+
> emails) so I'm thinking I'll change things so I'll henceforth be
> processing patches-for-the-next-cycle during this-cycle, while keeping
> patches-for-next-cycle out of linux-next.

No problem for me - if I should hold back the next version of this patch
for a while, please let me know.

> 
>> --- a/lib/hexdump.c
>> +++ b/lib/hexdump.c
>> @@ -124,6 +124,11 @@ int hex_dump_to_buffer(const void *buf, size_t len, int rowsize, int groupsize,
>> 	if ((len % groupsize) != 0)	/* no mixed size output */
>> 		groupsize = 1;
>> 
>> +	/* fall back to 1-byte groups if buf is not aligned to groupsize */
> 
> That isn't a great comment.  It tells us what the code does (which is
> quite obvious just from reading it) but it doesn't tell us *why* it
> does it.  Something like "if buf is not aligned to groupsize then fall
> back to 1-byte groups to avoid unaligned memory accesses on
> architectures which do not support them”?

Thanks, note taken.

> 
> But as mentioned above, that's different from "architectures which do
> not support them efficiently"!  Maybe we need a new config variable.
> 
> Or maybe blackfin should be handling the unaligned access trap and
> transparently handling it, like sparc?
> 

I’ll wait for anyone else to weigh in…



>> +	if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) &&
>> +	    !IS_ALIGNED((uintptr_t)buf, groupsize))
>> +		groupsize = 1;
>> +
>> 
>> ...
>> 
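
For completeness, here is the kind of caller this guards against; the
buffer, offset and parameters below are illustrative only, not the actual
ubifs_scanned_corruption call:

	const u8 *node_start = ...;		/* hypothetical scan buffer */
	const void *buf = node_start + 1;	/* not 4-byte aligned */

	/* groupsize 4 means 32-bit loads; without the new check this
	 * dereferences an unaligned u32 pointer on the dump path */
	print_hex_dump(KERN_DEBUG, "", DUMP_PREFIX_OFFSET, 32, 4,
		       buf, len, true);

With the fallback in place, such a call just degrades to 1-byte groups
instead of oopsing on a strict-alignment CPU.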

