Date:   Fri, 20 Mar 2020 21:51:46 -0400
From:   Waiman Long <longman@...hat.com>
To:     Jarkko Sakkinen <jarkko.sakkinen@...ux.intel.com>
Cc:     David Howells <dhowells@...hat.com>,
        James Morris <jmorris@...ei.org>,
        "Serge E. Hallyn" <serge@...lyn.com>,
        Mimi Zohar <zohar@...ux.ibm.com>,
        "David S. Miller" <davem@...emloft.net>,
        Jakub Kicinski <kuba@...nel.org>, keyrings@...r.kernel.org,
        linux-kernel@...r.kernel.org,
        linux-security-module@...r.kernel.org,
        linux-integrity@...r.kernel.org, netdev@...r.kernel.org,
        linux-afs@...ts.infradead.org, Sumit Garg <sumit.garg@...aro.org>,
        Jerry Snitselaar <jsnitsel@...hat.com>,
        Roberto Sassu <roberto.sassu@...wei.com>,
        Eric Biggers <ebiggers@...gle.com>,
        Chris von Recklinghausen <crecklin@...hat.com>
Subject: Re: [PATCH v6 2/2] KEYS: Avoid false positive ENOMEM error on key
 read

On 3/20/20 6:19 PM, Jarkko Sakkinen wrote:
> On Fri, Mar 20, 2020 at 03:19:03PM -0400, Waiman Long wrote:
>> By allocating a kernel buffer with a user-supplied buffer length, it
>> is possible that a false positive ENOMEM error may be returned because
>> the user-supplied length is just too large even if the system does have
>> enough memory to hold the actual key data.
>>
>> Moreover, if the buffer length is larger than the maximum amount of
>> memory that can be returned by kmalloc() (2^(MAX_ORDER-1) pages), a
>> warning message will also be printed.
>>
>> To reduce this possibility, we set a threshold (page size) above which
>> we check the actual key length first before allocating a buffer of the
>> right size to hold it. The threshold is arbitrary; it is just used to
>> trigger a buffer length check. It does not limit the actual key length
>> as long as there is enough memory to satisfy the memory request.
>>
>> To further avoid a large buffer allocation failure due to page
>> fragmentation, kvmalloc() is used to allocate the buffer so that
>> vmapped pages can be used when a large enough contiguous set of pages
>> is not available.
>>
>> Signed-off-by: Waiman Long <longman@...hat.com>
>> ---
>>  security/keys/internal.h | 12 ++++++++++++
>>  security/keys/keyctl.c   | 39 +++++++++++++++++++++++++++++++--------
>>  2 files changed, 43 insertions(+), 8 deletions(-)
>>
>> diff --git a/security/keys/internal.h b/security/keys/internal.h
>> index ba3e2da14cef..6d0ca48ae9a5 100644
>> --- a/security/keys/internal.h
>> +++ b/security/keys/internal.h
>> @@ -16,6 +16,8 @@
>>  #include <linux/keyctl.h>
>>  #include <linux/refcount.h>
>>  #include <linux/compat.h>
>> +#include <linux/mm.h>
>> +#include <linux/vmalloc.h>
>>  
>>  struct iovec;
>>  
>> @@ -349,4 +351,14 @@ static inline void key_check(const struct key *key)
>>  
>>  #endif
>>  
>> +/*
>> + * Helper function to clear and free a kvmalloc'ed memory object.
>> + */
>> +static inline void __kvzfree(const void *addr, size_t len)
>> +{
>> +	if (addr) {
>> +		memset((void *)addr, 0, len);
>> +		kvfree(addr);
>> +	}
>> +}
>>  #endif /* _INTERNAL_H */
>> diff --git a/security/keys/keyctl.c b/security/keys/keyctl.c
>> index 5a0794cb8815..ded69108db0d 100644
>> --- a/security/keys/keyctl.c
>> +++ b/security/keys/keyctl.c
>> @@ -339,7 +339,7 @@ long keyctl_update_key(key_serial_t id,
>>  	payload = NULL;
>>  	if (plen) {
>>  		ret = -ENOMEM;
>> -		payload = kmalloc(plen, GFP_KERNEL);
>> +		payload = kvmalloc(plen, GFP_KERNEL);
>>  		if (!payload)
>>  			goto error;
>>  
>> @@ -360,7 +360,7 @@ long keyctl_update_key(key_serial_t id,
>>  
>>  	key_ref_put(key_ref);
>>  error2:
>> -	kzfree(payload);
>> +	__kvzfree(payload, plen);
>>  error:
>>  	return ret;
>>  }
>> @@ -877,13 +877,23 @@ long keyctl_read_key(key_serial_t keyid, char __user *buffer, size_t buflen)
>>  		 * transferring them to user buffer to avoid potential
>>  		 * deadlock involving page fault and mmap_sem.
>>  		 */
>> -		char *key_data = kmalloc(buflen, GFP_KERNEL);
>> +		char *key_data = NULL;
>> +		size_t key_data_len = buflen;
>>  
>> -		if (!key_data) {
>> -			ret = -ENOMEM;
>> -			goto error2;
>> +		/*
>> +		 * When the user-supplied key length is larger than
>> +		 * PAGE_SIZE, we get the actual key length first before
>> +		 * allocating a right-sized key data buffer.
>> +		 */
>> +		if (buflen <= PAGE_SIZE) {
>> +allocbuf:
> I would move this label before the condition instead of jumping inside
> the nested block, since it will always evaluate correctly.

Yes, you are right. That was not the case for the initial version and I
didn't recheck it.
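
For illustration, one shape that keeps the jump target outside the
nested block (my sketch; the thread does not spell the fix out) is to
hoist the label and track explicitly which passes need an allocation:

	bool need_alloc = (buflen <= PAGE_SIZE);

allocbuf:
	if (need_alloc) {
		key_data = kvmalloc(key_data_len, GFP_KERNEL);
		if (!key_data) {
			ret = -ENOMEM;
			goto error2;
		}
	}
	...
	/* on the retry path, after __kvzfree()ing the old buffer: */
	need_alloc = true;
	key_data_len = ret;
	goto allocbuf;

That way the condition evaluates correctly on every pass: small requests
allocate up front, large requests probe with a NULL buffer first, and
any retry allocates the freshly learned length.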


> As for this version, I haven't really gotten why you don't use a legit
> loop construct but instead jump from one random nested location to
> another. This construct will be somewhat nasty to maintain. It is weird
> enough that you should have a rather good explanation in the long
> description of why such a mess is needed.

I did that to avoid deep nesting. I can rewrite it to remove the
goto statement.
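
For what it's worth, here is a rough sketch of one goto-free shape (my
illustration, not the actual follow-up patch; like the v6 code it
assumes __keyctl_read_key() returns the required length when the buffer
is NULL or too small):

	char *key_data = NULL;
	size_t key_data_len = buflen;

	/* Small requests are allocated up front; large ones probe first. */
	if (buflen <= PAGE_SIZE) {
		key_data = kvmalloc(key_data_len, GFP_KERNEL);
		if (!key_data) {
			ret = -ENOMEM;
			goto error2;
		}
	}

	for (;;) {
		ret = __keyctl_read_key(key, key_data, key_data_len);
		if (ret <= 0 || ret > buflen)
			break;		/* error, or user buffer too small */
		if (key_data && ret <= key_data_len)
			break;		/* the data fit in our buffer */
		/* Probe pass, or the key grew: (re)allocate the real size. */
		__kvzfree(key_data, key_data_len);
		key_data_len = ret;
		key_data = kvmalloc(key_data_len, GFP_KERNEL);
		if (!key_data) {
			ret = -ENOMEM;
			break;
		}
	}

	if (ret > 0 && ret <= buflen && copy_to_user(buffer, key_data, ret))
		ret = -EFAULT;
	__kvzfree(key_data, key_data_len);

The loop still terminates for a key that grows between calls:
key_data_len strictly increases on each retry and any required length
above buflen ends the read, just as in the goto version.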


>
>> +			key_data = kvmalloc(key_data_len, GFP_KERNEL);
>> +			if (!key_data) {
>> +				ret = -ENOMEM;
>> +				goto error2;
>> +			}
>>  		}
>> -		ret = __keyctl_read_key(key, key_data, buflen);
>> +		ret = __keyctl_read_key(key, key_data, key_data_len);
>>  
>>  		/*
>>  		 * Read methods will just return the required length
>> @@ -891,10 +901,23 @@ long keyctl_read_key(key_serial_t keyid, char __user *buffer, size_t buflen)
>>  		 * enough.
>>  		 */
>>  		if (ret > 0 && ret <= buflen) {
>> +			/*
>> +			 * The key may change (unlikely) in between 2
>> +			 * consecutive __keyctl_read_key() calls. We will
>> +			 * need to allocate a larger buffer and redo the key
>> +			 * read when key_data_len < ret <= buflen.
>> +			 */
>> +			if (!key_data || unlikely(ret > key_data_len)) {
>> +				if (unlikely(key_data))
>> +					__kvzfree(key_data, key_data_len);
>> +				key_data_len = ret;
>> +				goto allocbuf;
>> +			}
>> +
>>  			if (copy_to_user(buffer, key_data, ret))
>>  				ret = -EFAULT;
>>  		}
>> -		kzfree(key_data);
>> +		__kvzfree(key_data, key_data_len);
>>  	}
>>  
>>  error2:
>> -- 
>> 2.18.1
>>
> Doesn't this go into an infinite loop if the actual key size is at
> least PAGE_SIZE + 1? Where is the guarantee that this cannot happen?

I think you may have the wrong impression that it caps the buffer length
to PAGE_SIZE. That is not true. key_data_len can be greater than
PAGE_SIZE. I ran tests that include one that creates a big key of almost
1 MB. So for buflen <= PAGE_SIZE, key_data_len = buflen. For buflen >
PAGE_SIZE, key_data_len is the actual key length, which can be >
PAGE_SIZE. This patch just tries to avoid allocating an arbitrarily
large buffer with a length much larger than the actual key length.
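
To make that concrete, a rough trace (my numbers, not from the thread):

	buflen = 512 (<= PAGE_SIZE):  key_data_len = 512; allocate 512
	    bytes; read the key; copy it out.
	buflen = 64K (> PAGE_SIZE):   the first __keyctl_read_key() call
	    probes with a NULL buffer; say the key is 9000 bytes, so
	    ret = 9000; key_data_len = 9000; allocate 9000 bytes; reread;
	    copy it out.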

I will try to make the code easier to read.

Cheers,
Longman
