Date:   Mon, 3 Jun 2019 14:49:53 +0000
From:   "Nagal, Amit               UTC CCS" <Amit.Nagal@....com>
To:     Matthew Wilcox <willy@...radead.org>
CC:     Alexander Duyck <alexander.duyck@...il.com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        "CHAWLA, RITU UTC CCS" <RITU.CHAWLA@....com>,
        "Netter, Christian M UTC CCS" <christian.Netter@...UTC.COM>
Subject: RE: [External] Re: linux kernel page allocation failure and tuning of
 page cache


From: Matthew Wilcox [mailto:willy@...radead.org] 
Sent: Monday, June 3, 2019 5:42 PM
To: Nagal, Amit UTC CCS <Amit.Nagal@....com>
On Mon, Jun 03, 2019 at 05:30:57AM +0000, Nagal, Amit UTC CCS wrote:
> > [  776.174308] Mem-Info:
> > [  776.176650] active_anon:2037 inactive_anon:23 isolated_anon:0
> > [  776.176650]  active_file:2636 inactive_file:7391 isolated_file:32
> > [  776.176650]  unevictable:0 dirty:1366 writeback:1281 unstable:0
> > [  776.176650]  slab_reclaimable:719 slab_unreclaimable:724
> > [  776.176650]  mapped:1990 shmem:26 pagetables:159 bounce:0
> > [  776.176650]  free:373 free_pcp:6 free_cma:0
> > [  776.209062] Node 0 active_anon:8148kB inactive_anon:92kB active_file:10544kB inactive_file:29564kB unevictable:0kB isolated(anon):0kB isolated(file):128kB mapped:7960kB dirty:5464kB writeback:5124kB shmem:104kB writeback_tmp:0kB unstable:0kB pages_scanned:0 all_unreclaimable? no
> > [  776.233602] Normal free:1492kB min:964kB low:1204kB high:1444kB active_anon:8148kB inactive_anon:92kB active_file:10544kB inactive_file:29564kB unevictable:0kB writepending:10588kB present:65536kB managed:59304kB mlocked:0kB slab_reclaimable:2876kB slab_unreclaimable:2896kB kernel_stack:1152kB pagetables:636kB bounce:0kB free_pcp:24kB local_pcp:24kB free_cma:0kB
> > [  776.265406] lowmem_reserve[]: 0 0
> > [  776.268761] Normal: 7*4kB (H) 5*8kB (H) 7*16kB (H) 5*32kB (H) 6*64kB (H) 2*128kB (H) 2*256kB (H) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 1492kB
> > 10071 total pagecache pages
> > [  776.284124] 0 pages in swap cache
> > [  776.287446] Swap cache stats: add 0, delete 0, find 0/0
> > [  776.292645] Free swap  = 0kB
> > [  776.295532] Total swap = 0kB
> > [  776.298421] 16384 pages RAM
> > [  776.301224] 0 pages HighMem/MovableOnly
> > [  776.305052] 1558 pages reserved
> >
> > 6) We have the following questions:
> > a) How did the kernel memory get exhausted? Under low-memory conditions, are the kernel page flusher threads, which should have written dirty pages from the page cache to the flash disk, not executing at the right time? Is the kernel page reclaim mechanism not executing at the right time?
> 
> > I suspect the pages are stuck in a state of buffering. In the case of sockets, the packets will get queued up until either they can be serviced or the maximum size of the receive buffer has been exceeded and they are dropped.
> 
> My concern here is: why has the reclaim procedure not triggered?

>It has triggered.  1281 pages are under writeback.
Thanks for the reply.
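(Indeed, the Node 0 line above shows writeback:5124kB, which at 4 kB per page is 5124 / 4 = 1281 pages under writeback.)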

Also, on the target, cat /proc/sys/vm/min_free_kbytes reports 965. As per https://www.kernel.org/doc/Documentation/sysctl/vm.txt, min_free_kbytes should not be set lower than 1024.
Is this min_free_kbytes setting creating the problem?

The target has 64 MB of memory; what value is recommended for min_free_kbytes?
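For reference, this is roughly how we would inspect and adjust the setting at runtime (only a sketch; the 2048 figure and the dirty-writeback values below are illustrative placeholders, not recommendations from this thread):

    # current watermark in kB (reports 965 on our target)
    cat /proc/sys/vm/min_free_kbytes

    # raise it at runtime; 2048 is only an illustrative figure
    echo 2048 > /proc/sys/vm/min_free_kbytes

    # optionally make background writeback start earlier so dirty
    # pages are flushed sooner (placeholder values)
    echo 5  > /proc/sys/vm/dirty_background_ratio
    echo 20 > /proc/sys/vm/dirty_ratio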

Also, is it a problem if the process receiving the socket data runs at elevated priority? (We first set it with chrt -r 20, and later changed it to renice -n -20.)
I observed that the lru-add-drain and writeback threads were executing at normal priority.
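For completeness, this is roughly how we checked the priorities (a sketch; <pid> is a placeholder for the receiver's PID, and thread names can vary between kernel versions):

    # scheduling policy and RT priority of the receiving process
    chrt -p <pid>

    # scheduling class, RT priority and nice value of kernel threads
    ps -eLo pid,tid,class,rtprio,ni,comm | grep -E 'kworker|writeback|kswapd'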
