Date:   Tue, 22 Dec 2020 17:50:52 -0800
From:   Randy Dunlap <rdunlap@...radead.org>
To:     Andrew Morton <akpm@...ux-foundation.org>
Cc:     linux-kernel@...r.kernel.org, Jens Axboe <axboe@...nel.dk>,
        Toralf Förster <toralf.foerster@....de>,
        linux-mm@...ck.org
Subject: Re: [RFC PATCH 2/2] mm: readahead: handle LARGE input to
 get_init_ra_size()

On 12/22/20 5:35 PM, Andrew Morton wrote:
> On Sun, 20 Dec 2020 13:10:51 -0800 Randy Dunlap <rdunlap@...radead.org> wrote:
> 
>> Add a test to detect if the input ra request size has its high order
>> bit set (is negative when tested as a signed long). This would be a
>> really Huge readahead.
>>
>> If so, WARN() with the value and a stack trace so that we can see
>> where this is happening and then make further corrections later.
>> Then adjust the size value so that it is not so Huge (although
>> this may not be needed).
> 
> What motivates this change?  Is there any reason to think this can
> happen?

Spotted in the wild:

mr-fox kernel: [ 1974.206977] UBSAN: shift-out-of-bounds in ./include/linux/log2.h:57:13
mr-fox kernel: [ 1974.206980] shift exponent 64 is too large for 64-bit type 'long unsigned int'

Original report:
https://lore.kernel.org/lkml/c6e5eb81-680f-dd5c-8a81-62041a5ce50c@gmx.de/


Willy suggested that get_init_ra_size() was being called with a size of 0
(rather than some Huge value), which would cause this, so I made a follow-up
patch that only checks for size == 0 and, if it is 0, defaults it to 32 pages.

---
 mm/readahead.c |    6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

--- linux-5.10.1.orig/mm/readahead.c
+++ linux-5.10.1/mm/readahead.c
@@ -310,7 +310,11 @@ void force_page_cache_ra(struct readahea
  */
 static unsigned long get_init_ra_size(unsigned long size, unsigned long max)
 {
-	unsigned long newsize = roundup_pow_of_two(size);
+	unsigned long newsize;
+
+	if (!size)
+		size = 32;
+	newsize = roundup_pow_of_two(size);
 
 	if (newsize <= max / 32)
 		newsize = newsize * 4;


Toralf has only seen this problem one time.


> Also, everything in there *should* be unsigned, because a negative
> readahead is semantically nonsensical.  Is our handling of this
> inherently unsigned quantity incorrect somewhere?
> 
>> --- linux-5.10.1.orig/mm/readahead.c
>> +++ linux-5.10.1/mm/readahead.c
>>
>> ...
>>
>> @@ -303,14 +304,21 @@ void force_page_cache_ra(struct readahea
>>  }
>>  
>>  /*
>> - * Set the initial window size, round to next power of 2 and square
>> + * Set the initial window size, round to next power of 2
>>   * for small size, x 4 for medium, and x 2 for large
>>   * for 128k (32 page) max ra
>>   * 1-8 page = 32k initial, > 8 page = 128k initial
>>   */
>>  static unsigned long get_init_ra_size(unsigned long size, unsigned long max)
>>  {
>> -	unsigned long newsize = roundup_pow_of_two(size);
>> +	unsigned long newsize;
>> +
>> +	if ((signed long)size < 0) { /* high bit is set: ultra-large ra req */
>> +		WARN_ONCE(1, "%s: size=0x%lx\n", __func__, size);
>> +		size = -size;	/* really only need to flip the high/sign bit */
>> +	}
>> +
>> +	newsize = roundup_pow_of_two(size);
> 
> Is there any way in which userspace can deliberately trigger warning?
> Via sys_readadhead() or procfs tuning or whatever?
> 
> I guess that permitting a user-triggerable WARN_ONCE() isn't a huuuuge
> problem - it isn't a DoS if it only triggers a single time.  It does
> permit the malicious user to disable future valid warnings, but I don't
> see what incentive there would be for this.  But still, it seems
> desirable to avoid it.

Sure. I think that we can drop RFC patches 1/2 and 2/2 and just consider the
other one above.


-- 
~Randy
