Message-ID: <YwdIDpqNlziTn/et@dhcp22.suse.cz>
Date:   Thu, 25 Aug 2022 11:59:42 +0200
From:   Michal Hocko <mhocko@...e.com>
To:     Vlastimil Babka <vbabka@...e.cz>
Cc:     linux-mm@...ck.org, Christoph Hellwig <hch@...radead.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Mel Gorman <mgorman@...e.de>,
        Johannes Weiner <hannes@...xchg.org>,
        LKML <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH] mm: reduce noise in show_mem for lowmem allocations

On Thu 25-08-22 11:52:09, Vlastimil Babka wrote:
> On 8/23/22 11:22, Michal Hocko wrote:
> > All nodes but node0 are completely irrelevant for this allocation
> > because they do not have ZONE_DMA, yet their output swamps the log and
> > makes it harder to inspect visually.
> > 
> > Address this by providing a gfp_mask parameter to show_mem and filtering
> > the output to only those zones/nodes which are relevant for the
> > allocation, i.e. nodes which have at least one managed zone usable for
> > the allocation (zone_idx(zone) <= gfp_zone(gfp_mask)).
> > The resulting output for the same failure would become:
> 
> Looks good to me.
> 
> > [...]
> > [   14.017605][    T1] Mem-Info:
> 
> Maybe print the gfp_mask (or just the max zone) here again, to make it
> more obvious in case somebody sends a report without the top header?

I have tried not to alter the output, only to filter it. The gfp
mask is on the first line of the allocation failure report, and in my past
experience it is usually included in reports.
> 
> > [   14.017956][    T1] active_anon:0 inactive_anon:0 isolated_anon:0
> > [   14.017956][    T1]  active_file:0 inactive_file:0 isolated_file:0
> > [   14.017956][    T1]  unevictable:0 dirty:0 writeback:0
> > [   14.017956][    T1]  slab_reclaimable:876 slab_unreclaimable:30300
> > [   14.017956][    T1]  mapped:0 shmem:0 pagetables:12 bounce:0
> > [   14.017956][    T1]  free:3170151735 free_pcp:6868 free_cma:0
> > [   14.017962][    T1] Node 0 active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:0kB dirty:0kB writeback:0kB shmem:0kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB kernel_stack:7200kB pagetables:4kB all_unreclaimable? no
> > [   14.018026][    T1] Node 0 DMA free:160kB boost:0kB min:0kB low:0kB high:0kB reserved_highatomic:0KB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15360kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
> > [   14.018035][    T1] lowmem_reserve[]: 0 0 0 0 0
> > [   14.018339][    T1] Node 0 DMA: 0*4kB 0*8kB 0*16kB 1*32kB (U) 0*64kB 1*128kB (U) 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 160kB
> > [   14.018480][    T1] 0 total pagecache pages
> > [   14.018483][    T1] 0 pages in swap cache
> > [   14.018484][    T1] Swap cache stats: add 0, delete 0, find 0/0
> > [   14.018486][    T1] Free swap  = 0kB
> > [   14.018487][    T1] Total swap = 0kB
> > [   14.018488][    T1] 3221164600 pages RAM
> > [   14.018489][    T1] 0 pages HighMem/MovableOnly
> > [   14.018490][    T1] 50531051 pages reserved
> > [   14.018491][    T1] 0 pages cma reserved
> > [   14.018492][    T1] 0 pages hwpoisoned
> > 
> > Signed-off-by: Michal Hocko <mhocko@...e.com>

-- 
Michal Hocko
SUSE Labs
