Date:   Thu, 06 Apr 2023 11:17:12 +0900
From:   Jaewon Kim <jaewon31.kim@...sung.com>
To:     Andrew Morton <akpm@...ux-foundation.org>
CC:     "jstultz@...gle.com" <jstultz@...gle.com>,
        "tjmercier@...gle.com" <tjmercier@...gle.com>,
        "sumit.semwal@...aro.org" <sumit.semwal@...aro.org>,
        "daniel.vetter@...ll.ch" <daniel.vetter@...ll.ch>,
        "hannes@...xchg.org" <hannes@...xchg.org>,
        "mhocko@...nel.org" <mhocko@...nel.org>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "jaewon31.kim@...il.com" <jaewon31.kim@...il.com>
Subject: RE: [PATCH v2] dma-buf/heaps: system_heap: Avoid DoS by limiting
 single allocations to half of all memory

>On Thu, 06 Apr 2023 10:44:19 +0900 Jaewon Kim <jaewon31.kim@...sung.com> wrote:
>
>> >> ...
>> >>
>> >> --- a/drivers/dma-buf/heaps/system_heap.c
>> >> +++ b/drivers/dma-buf/heaps/system_heap.c
>> >> @@ -351,6 +351,9 @@ static struct dma_buf *system_heap_allocate(struct dma_heap *heap,
>> >>  	struct page *page, *tmp_page;
>> >>  	int i, ret = -ENOMEM;
>> >>  
>> >> +	if (len / PAGE_SIZE > totalram_pages() / 2)
>> >> +		return ERR_PTR(-ENOMEM);
>> >> +
>> >
>> >This seems so random.  Why ram/2 rather than ram/3 or 17*ram/35?
>> 
>> Hello
>> 
>> Thank you for your comment.
>> 
>> I just took the check from the old ion driver code, and actually I think even
>> half of all memory is unrealistic. The length could be an unintended value,
>> e.g. a negative number that wraps to a huge unsigned size, or simply so large
>> that it causes slowness or an OOM panic.
>> 
>> >
>> >Better behavior would be to try to allocate what the caller asked
>> >for and if that doesn't work out, fail gracefully after freeing the
>> >partial allocations which have been performed thus far.  If dma_buf
>> >is changed to do this then that change is useful in many scenarios other
>> >than this crazy corner case.
>> 
>> I think you would like __GFP_RETRY_MAYFAIL. T.J. Mercier actually recommended
>> it earlier; here is what we discussed:
>> https://lore.kernel.org/linux-mm/20230331005140epcms1p1ac5241f02f645e9dbc29626309a53b24@epcms1p1/
>> 
>> I was worried about the case where we need the OOM killer in order to get more
>> memory, but let me change my mind: that case seems rare. I think it is time to
>> decide not to allow OOM kills for dma-buf system heap allocations.
>> 
>> But I still want to block huge sizes that exceed total RAM. For a size that
>> can never be satisfied we do not need to reclaim memory or kill processes,
>> and from the user's perspective we avoid a frozen screen.
>> 
>> This is ultimately what I want. Can we check totalram_pages() and apply
>> __GFP_RETRY_MAYFAIL?
>> 
>> --- a/drivers/dma-buf/heaps/system_heap.c
>> +++ b/drivers/dma-buf/heaps/system_heap.c
>> @@ -41,7 +41,7 @@ struct dma_heap_attachment {
>>         bool mapped;
>>  };
>>  
>> -#define LOW_ORDER_GFP (GFP_HIGHUSER | __GFP_ZERO | __GFP_COMP)
>> +#define LOW_ORDER_GFP (GFP_HIGHUSER | __GFP_ZERO | __GFP_COMP | __GFP_RETRY_MAYFAIL)
>>  #define MID_ORDER_GFP (LOW_ORDER_GFP | __GFP_NOWARN)
>>  #define HIGH_ORDER_GFP  (((GFP_HIGHUSER | __GFP_ZERO | __GFP_NOWARN \
>>                                 | __GFP_NORETRY) & ~__GFP_RECLAIM) \
>> @@ -351,6 +351,9 @@ static struct dma_buf *system_heap_allocate(struct dma_heap *heap,
>>         struct page *page, *tmp_page;
>>         int i, ret = -ENOMEM;
>>  
>> +       if (len / PAGE_SIZE > totalram_pages())
>> +               return ERR_PTR(-ENOMEM);
>
>We're catering for a buggy caller here, aren't we?  Are such large
>requests ever reasonable?
>
>How about we decide what's the largest reasonable size and do a
>WARN_ON(larger-than-that), so the buggy caller gets fixed?

Yes, we're catering for a buggy caller. I thought even the totalram_pages() / 2
limit in the old ion system heap was unreasonable; to avoid the arbitrary /2,
I changed the check to totalram_pages().

Because userspace can request such a size repeatedly, WARN_ON() could fire so
often that it fills the kernel log buffer.

Even if we use WARN_ON_ONCE rather than WARN_ON, the bug is not in the kernel
layer. Unlike the page fault path, the dma-buf system heap takes the size
directly from userspace and currently places no limit on it. We cannot fix
buggy userspace with a kernel warning in the log, so I think a warning alone
is not enough and we need a safeguard in the kernel layer.
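
To make the idea concrete, here is a rough, untested sketch of the safeguard I
have in mind (the ratelimited message is only illustrative and not part of the
patch; the real change would stay in the diff form above):

/* Sketch only: reject impossible sizes up front, let the rest fail gracefully. */
#define LOW_ORDER_GFP (GFP_HIGHUSER | __GFP_ZERO | __GFP_COMP | __GFP_RETRY_MAYFAIL)

static struct dma_buf *system_heap_allocate(struct dma_heap *heap,
					    unsigned long len,
					    unsigned long fd_flags,
					    unsigned long heap_flags)
{
	/*
	 * A request larger than all of RAM can never succeed, so refuse it
	 * before triggering reclaim or the OOM killer.  A ratelimited
	 * warning keeps a buggy caller visible without flooding the log.
	 */
	if (len / PAGE_SIZE > totalram_pages()) {
		pr_warn_ratelimited("system_heap: rejecting %lu byte allocation\n",
				    len);
		return ERR_PTR(-ENOMEM);
	}

	...
	/*
	 * Existing allocation loop unchanged: with __GFP_RETRY_MAYFAIL in
	 * LOW_ORDER_GFP the page allocator returns NULL instead of invoking
	 * the OOM killer, and the existing error path frees any partially
	 * allocated pages.
	 */
	...
}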
