Message-ID: <20220607011702epcms1p10100c4e86e2e0334f7fabbfafa3a0698@epcms1p1>
Date:   Tue, 07 Jun 2022 10:17:02 +0900
From:   Jaewon Kim <jaewon31.kim@...sung.com>
To:     Minchan Kim <minchan@...nel.org>,
        Andrew Morton <akpm@...ux-foundation.org>
CC:     Jaewon Kim <jaewon31.kim@...sung.com>,
        "ngupta@...are.org" <ngupta@...are.org>,
        "senozhatsky@...omium.org" <senozhatsky@...omium.org>,
        "avromanov@...rdevices.ru" <avromanov@...rdevices.ru>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        Sooyong Suk <s.suk@...sung.com>,
        YongTaek Lee <ytk.lee@...sung.com>,
        "jaewon31.kim@...il.com" <jaewon31.kim@...il.com>,
        Chulmin Kim <cmlaika.kim@...sung.com>
Subject: RE: [PATCH] zram_drv: add __GFP_NOMEMALLOC not to use
 ALLOC_NO_WATERMARKS

> 
> 
>--------- Original Message ---------
>Sender : Minchan Kim <minchan@...nel.org>
>Date : 2022-06-07 05:48 (GMT+9)
>Title : Re: [PATCH] zram_drv: add __GFP_NOMEMALLOC not to use ALLOC_NO_WATERMARKS
> 
>On Mon, Jun 06, 2022 at 12:59:39PM -0700, Andrew Morton wrote:
>> On Mon, 6 Jun 2022 12:46:38 -0700 Minchan Kim <minchan@...nel.org> wrote:
>> 
>> > On Fri, Jun 03, 2022 at 02:57:47PM +0900, Jaewon Kim wrote:
>> > > Atomic page allocation failures sometimes happen, and most of them
>> > > seem to occur during boot time.
>> > > 
>> > > <4>[   59.707645] system_server: page allocation failure: order:0, mode:0xa20(GFP_ATOMIC), nodemask=(null),cpuset=foreground-boost,mems_allowed=0
>> >
>> > ...
>> >
>> > > 
>> > > The kswapd or other reclaim contexts may not prepare enough free pages
>> > > when too many atomic allocations occur in a short time. And zram may
>> > > not help these atomic allocations even though zram is used for
>> > > reclaim.
>> > > 
>> > > To get one zs object of a specific size, zram may allocate several
>> > > pages. And this can happen for different class sizes at the same
>> > > time. It means zram may consume several pages to reclaim only one page.
>> > > This inefficiency may let a process having PF_MEMALLOC, like kswapd,
>> > > consume all free pages below the min watermark.
>> > 
>> > However, that's how zram has worked for a long time (allocating memory
>> > under memory pressure), and many folks have already raised min_free_kbytes
>> > when they use zram as swap. If we don't allow the allocation, swap out
>> > fails more easily than before, which would break existing tunings.


Hello.

Yes, correct. We may need to retune to swap out as much as we did before.

But in my experiment, quite a few zram allocations would have failed
without ALLOC_NO_WATERMARKS. I think zram allocations can easily
contribute to atomic allocation failures.

>> 
>> So is there a better way of preventing this warning?  Just suppress it
>> with __GFP_NOWARN?
> 
>For me, I usually try to remove the GFP_ATOMIC allocation, since an
>atomic allocation can fail easily (zram is not the only source of such
>failures). Otherwise, increase min_free_kbytes?
> 

I also hope driver developers handle this atomic allocation failure.
However, this selinux code, context_struct_to_string, is outside their domain.
Should I report this to the selinux community? Actually, I found several
different call paths that reach context_struct_to_string.

Yes, we may need to increase min_free_kbytes. But in my experience last
year, raising wmark_min from 4MB to 8MB did not help. Could you share
some advice about sizing?

Thank you
