Date:   Tue, 3 Nov 2020 12:57:33 -0800
From:   Dongli Zhang <dongli.zhang@...cle.com>
To:     Matthew Wilcox <willy@...radead.org>
Cc:     linux-mm@...ck.org, netdev@...r.kernel.org,
        linux-kernel@...r.kernel.org, akpm@...ux-foundation.org,
        davem@...emloft.net, kuba@...nel.org, aruna.ramakrishna@...cle.com,
        bert.barbe@...cle.com, rama.nichanamatlu@...cle.com,
        venkat.x.venkatsubra@...cle.com, manjunath.b.patil@...cle.com,
        joe.jin@...cle.com, srinivas.eeda@...cle.com
Subject: Re: [PATCH 1/1] mm: avoid re-using pfmemalloc page in
 page_frag_alloc()

Hi Matthew,

On 11/3/20 12:35 PM, Matthew Wilcox wrote:
> On Tue, Nov 03, 2020 at 11:32:39AM -0800, Dongli Zhang wrote:
>> The ethernet driver may allocate an skb (and skb->data) via napi_alloc_skb().
>> This ends up calling page_frag_alloc() to allocate skb->data from
>> page_frag_cache->va.
>>
>> Under memory pressure, page_frag_cache->va may be allocated as a
>> pfmemalloc page. As a result, skb->pfmemalloc is always true because
>> skb->data comes from page_frag_cache->va. The skb will be dropped if the
>> sock (receiver) does not have SOCK_MEMALLOC. This is expected behaviour
>> under memory pressure.
>>
>> However, once the kernel is no longer under memory pressure (suppose a
>> large number of memory pages have just been reclaimed), page_frag_alloc()
>> may still re-use the prior pfmemalloc page_frag_cache->va to allocate
>> skb->data. As a result, skb->pfmemalloc remains true until
>> page_frag_cache->va is re-allocated, even though the kernel is no longer
>> under memory pressure.
>>
>> Here is how the kernel runs into this issue.
>>
>> 1. The kernel is under memory pressure and the order-PAGE_FRAG_CACHE_MAX_ORDER
>> allocation in __page_frag_cache_refill() fails. Instead, a pfmemalloc page is
>> allocated for page_frag_cache->va.
>>
>> 2. Every skb->data allocated from page_frag_cache->va (pfmemalloc) will have
>> skb->pfmemalloc=true. The skb will always be dropped by a sock without
>> SOCK_MEMALLOC. This is expected behaviour.
>>
>> 3. Suppose a large number of pages are reclaimed and the kernel is no longer
>> under memory pressure. We expect the skb->pfmemalloc drops to stop.
>>
>> 4. Unfortunately, page_frag_alloc() does not proactively re-allocate
>> page_frag_cache->va and will always re-use the prior pfmemalloc page.
>> skb->pfmemalloc remains true even though the kernel is no longer under
>> memory pressure.
>>
>> Therefore, this patch always checks and tries to avoid re-using a
>> pfmemalloc page for page_frag_cache->va.
>>
>> Cc: Aruna Ramakrishna <aruna.ramakrishna@...cle.com>
>> Cc: Bert Barbe <bert.barbe@...cle.com>
>> Cc: Rama Nichanamatlu <rama.nichanamatlu@...cle.com>
>> Cc: Venkat Venkatsubra <venkat.x.venkatsubra@...cle.com>
>> Cc: Manjunath Patil <manjunath.b.patil@...cle.com>
>> Cc: Joe Jin <joe.jin@...cle.com>
>> Cc: SRINIVAS <srinivas.eeda@...cle.com>
>> Signed-off-by: Dongli Zhang <dongli.zhang@...cle.com>
>> ---
>>  mm/page_alloc.c | 10 ++++++++++
>>  1 file changed, 10 insertions(+)
>>
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index 23f5066bd4a5..291df2f9f8f3 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -5075,6 +5075,16 @@ void *page_frag_alloc(struct page_frag_cache *nc,
>>  	struct page *page;
>>  	int offset;
>>  
>> +	/*
>> +	 * Try to avoid re-using a pfmemalloc page because the kernel may
>> +	 * already have recovered from memory pressure.
>> +	 */
>> +	if (unlikely(nc->va && nc->pfmemalloc)) {
>> +		page = virt_to_page(nc->va);
>> +		__page_frag_cache_drain(page, nc->pagecnt_bias);
>> +		nc->va = NULL;
>> +	}
> 
> I think this is the wrong way to solve this problem.  Instead, we should
> use up this page, but refuse to recycle it.  How about something like this (not even compile tested):

Thank you very much for the feedback. Yes, the option is to use the same page
until it is used up (offset < 0). Instead of recycling it, the kernel frees it
and allocates a new one.

This depends on whether we can tolerate the packet drops until this page is used up.
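
For reference, the drop path in question is roughly the following (paraphrasing
net/core/skbuff.c and net/core/filter.c from memory; details vary by kernel
version):

	/* __napi_alloc_skb(): the skb inherits pfmemalloc from the frag cache */
	if (nc->page.pfmemalloc)
		skb->pfmemalloc = 1;

	/* sk_filter_trim_cap(): a receiver without SOCK_MEMALLOC drops it */
	if (skb_pfmemalloc(skb) && !sock_flag(sk, SOCK_MEMALLOC))
		return -ENOMEM;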

For virtio-net, the payload (skb->data) is 128 bytes. Padding and alignment
finally make it 512 bytes (see the rough breakdown below).

Therefore, for virtio-net, at most 4096/512 - 1 = 7 packets will be dropped
before the page is used up.
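
A rough sketch of how the 128-byte payload grows to 512 bytes, assuming x86_64
with NET_SKB_PAD = 64, NET_IP_ALIGN = 0 and SKB_DATA_ALIGN(sizeof(struct
skb_shared_info)) = 320 (exact values depend on the kernel config), following
the rounding done in __napi_alloc_skb():

	len  = 128 + NET_SKB_PAD + NET_IP_ALIGN;                /* 128 + 64 + 0 = 192 */
	len += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));  /* 192 + 320   = 512 */
	len  = SKB_DATA_ALIGN(len);                             /* still 512          */
	/*
	 * 4096 / 512 = 8 fragments per page; one is already allocated,
	 * so at most 7 further allocations (packet drops) before refill.
	 */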

Dongli Zhang

> 
> +++ b/mm/page_alloc.c
> @@ -5139,6 +5139,10 @@ void *page_frag_alloc(struct page_frag_cache *nc,
>  
>                 if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
>                         goto refill;
> +               if (nc->pfmemalloc) {
> +                       free_the_page(page);
> +                       goto refill;
> +               }
>  
>  #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
>                 /* if size can vary use size else just use PAGE_SIZE */
> 
