Message-ID: <alpine.LRH.2.02.2204191551430.16728@file01.intranet.prod.int.rdu2.redhat.com>
Date:   Tue, 19 Apr 2022 16:11:11 -0400 (EDT)
From:   Mikulas Patocka <mpatocka@...hat.com>
To:     Linus Torvalds <torvalds@...ux-foundation.org>
cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        "H. Peter Anvin" <hpa@...or.com>,
        the arch/x86 maintainers <x86@...nel.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: [PATCH v2] x86: __memcpy_flushcache: fix wrong alignment if size > 2^32



On Tue, 19 Apr 2022, Linus Torvalds wrote:

> On Tue, Apr 19, 2022 at 6:56 AM Mikulas Patocka <mpatocka@...hat.com> wrote:
> >
> > The first "if" condition in __memcpy_flushcache is supposed to align the
> > "dest" variable to 8 bytes and copy data up to this alignment. However,
> > this condition may misbehave if "size" is greater than 4GiB.
> 
> You're not wrong, but I also don't think it would be wrong to just have a
> 
>         if (WARN_ON_ONCE(size > INT_MAX))
>                 return;
> 
> in there instead.

If you skip copying, it could in theory have security or reliability 
implications (the user may end up reading stale data that they are not 
supposed to read). So I think it's better to just add a WARN_ON_ONCE and 
proceed with the copying.

> It's not like "> 2**32" should ever really be a valid thing for any
> kind of copy in the kernel. Even if that were to be what you actually

For example, the dm-stats subsystem (drivers/md/dm-stats.c) can allocate 
extremely large blocks of memory using vmalloc (limited only to 1/4 of 
total system memory) - though it doesn't use memcpy on them.

> wanted to do (which sounds very unlikely), you'd need to split it up
> with cond_resched() just for latency reasons.
> 
>              Linus



From: Mikulas Patocka <mpatocka@...hat.com>

The first "if" condition in __memcpy_flushcache is supposed to align the
"dest" variable to 8 bytes and copy data up to this alignment. However,
this condition may misbehave if "size" is greater than 4GiB.

The statement min_t(unsigned, size, ALIGN(dest, 8) - dest) casts both
arguments to unsigned int and selects the smaller one. However, the cast
truncates the high bits of "size", which results in misbehavior.

For example:
	suppose that size == 0x100000001, dest == 0x200000002
	min_t(unsigned, size, ALIGN(dest, 8) - dest) == min_t(unsigned, 0x1, 0x6) == 0x1;
	...
	dest += 0x1;
so we copy just one byte and "dest" remains unaligned.
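
The same truncation can be reproduced outside the kernel with the
following sketch (illustration only; MIN_T and ALIGN_UP are simplified
stand-ins for the kernel's min_t and ALIGN macros). It prints the buggy
and the fixed length for the values above:

#include <stdio.h>
#include <stdint.h>

/* stand-in for the kernel's min_t(): cast both arguments to "type" */
#define MIN_T(type, a, b)  ((type)(a) < (type)(b) ? (type)(a) : (type)(b))
/* stand-in for the kernel's ALIGN(): round x up to a multiple of a */
#define ALIGN_UP(x, a)     (((x) + (a) - 1) & ~((uint64_t)(a) - 1))

int main(void)
{
	uint64_t size = 0x100000001ULL;	/* > 4GiB, high bits get truncated */
	uint64_t dest = 0x200000002ULL;	/* 6 bytes short of 8-byte alignment */

	/* buggy: both arguments become unsigned int, so size turns into 0x1 */
	unsigned buggy = MIN_T(unsigned, size, ALIGN_UP(dest, 8) - dest);

	/* fixed: size_t keeps all 64 bits, so the alignment gap (0x6) wins */
	size_t fixed = MIN_T(size_t, size, ALIGN_UP(dest, 8) - dest);

	printf("buggy len = 0x%x, fixed len = 0x%zx\n", buggy, fixed);
	return 0;
}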

This patch fixes the bug by replacing unsigned with size_t.

As the function is not supposed to be used with such large sizes, the
patch also adds a WARN_ON_ONCE there.

Signed-off-by: Mikulas Patocka <mpatocka@...hat.com>

---
 arch/x86/lib/usercopy_64.c |    4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

Index: linux-2.6/arch/x86/lib/usercopy_64.c
===================================================================
--- linux-2.6.orig/arch/x86/lib/usercopy_64.c
+++ linux-2.6/arch/x86/lib/usercopy_64.c
@@ -117,9 +117,11 @@ void __memcpy_flushcache(void *_dst, con
 	unsigned long dest = (unsigned long) _dst;
 	unsigned long source = (unsigned long) _src;
 
+	WARN_ON_ONCE(size > INT_MAX);
+
 	/* cache copy and flush to align dest */
 	if (!IS_ALIGNED(dest, 8)) {
-		unsigned len = min_t(unsigned, size, ALIGN(dest, 8) - dest);
+		size_t len = min_t(size_t, size, ALIGN(dest, 8) - dest);
 
 		memcpy((void *) dest, (void *) source, len);
 		clean_cache_range((void *) dest, len);
