Message-ID: <20171115081818.ucnp26tho4qffdwx@dhcp22.suse.cz>
Date:   Wed, 15 Nov 2017 09:18:18 +0100
From:   Michal Hocko <mhocko@...nel.org>
To:     David Rientjes <rientjes@...gle.com>
Cc:     Roman Gushchin <guro@...com>, linux-mm@...ck.org,
        Andrew Morton <akpm@...ux-foundation.org>,
        Johannes Weiner <hannes@...xchg.org>,
        Mike Kravetz <mike.kravetz@...cle.com>,
        "Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>,
        Andrea Arcangeli <aarcange@...hat.com>,
        Dave Hansen <dave.hansen@...el.com>, kernel-team@...com,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: show total hugetlb memory consumption in
 /proc/meminfo

On Tue 14-11-17 14:28:11, David Rientjes wrote:
[...]
> > /proc/meminfo is paved with mistakes throughout its history. It pretends
> > to give a good picture of memory usage, yet we have many pointless
> > entries while large consumers are not reflected at all in many cases.
> > 
> > Hugetlb data in such great detail shouldn't have been exported in the
> > first place when it reflects only one specific hugepage size. I would
> > argue that if somebody went to the trouble of configuring non-default
> > hugetlb page sizes, then the sysfs stats would be an immediate place to
> > look (a short userspace sketch follows below). Anyway, I can see that
> > the cumulative information might be helpful for those who do not own
> > the machine but merely debug an issue, which is the primary use case
> > for the file.
> > 
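(As an aside, "checking the sysfs stats" means reading the per-size
directories under /sys/kernel/mm/hugepages. Here is a minimal, untested
userspace sketch; the directory layout is the documented ABI, the rest
is purely illustrative:)

#include <stdio.h>
#include <string.h>
#include <dirent.h>

int main(void)
{
	const char *base = "/sys/kernel/mm/hugepages";
	struct dirent *de;
	DIR *dir = opendir(base);

	if (!dir)
		return 1;

	while ((de = readdir(dir))) {
		char path[512];
		unsigned long nr;
		FILE *f;

		/* per-size entries are named e.g. hugepages-2048kB */
		if (strncmp(de->d_name, "hugepages-", 10))
			continue;

		snprintf(path, sizeof(path), "%s/%s/nr_hugepages",
			 base, de->d_name);
		f = fopen(path, "r");
		if (!f)
			continue;
		if (fscanf(f, "%lu", &nr) == 1)
			printf("%s: %lu pages\n", de->d_name, nr);
		fclose(f);
	}
	closedir(dir);
	return 0;
}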
> 
> I agree in principle, but I think such mistakes are inevitable on
> projects that span decades and accumulate features that evolve over
> time.

Yes, this is acceptable in earlier stages but I believe we have reached
a mature state where we shouldn't repeat those mistakes.
[...]
> > >  	if (!hugepages_supported())
> > >  		return;
> > >  	seq_printf(m,
> > > @@ -2987,6 +2989,11 @@ void hugetlb_report_meminfo(struct seq_file *m)
> > >  			h->resv_huge_pages,
> > >  			h->surplus_huge_pages,
> > >  			1UL << (huge_page_order(h) + PAGE_SHIFT - 10));
> > > +
> > > +	for_each_hstate(h)
> > > +		total += (PAGE_SIZE << huge_page_order(h)) * h->nr_huge_pages;
> > 
> > Please keep the total calculation consistent with what we have there
> > already.
> > 
> 
> Yeah, and I'm not sure if your comment alludes to this being racy, but
> it would be better to store the default size for default_hstate during
> the same iteration that totals the size across all hstates.

I just meant to have the code consistent; I do not prefer one option
over the other.
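
For the archive, here is an (untested) sketch of what I mean by
consistent, folding in David's single-iteration idea as well. Names
follow the quoted patch and mm/hugetlb.c (for_each_hstate,
default_hstate); take it as an illustration, not a final patch:

void hugetlb_report_meminfo(struct seq_file *m)
{
	struct hstate *h;
	unsigned long total = 0;

	if (!hugepages_supported())
		return;

	for_each_hstate(h) {
		unsigned long count = h->nr_huge_pages;

		/* accumulate all hugepage sizes while iterating anyway */
		total += (PAGE_SIZE << huge_page_order(h)) * count;

		/* per-size details are reported for the default size only */
		if (h == &default_hstate)
			seq_printf(m,
				   "HugePages_Total:   %5lu\n"
				   "HugePages_Free:    %5lu\n"
				   "HugePages_Rsvd:    %5lu\n"
				   "HugePages_Surp:    %5lu\n"
				   "Hugepagesize:   %8lu kB\n",
				   count,
				   h->free_huge_pages,
				   h->resv_huge_pages,
				   h->surplus_huge_pages,
				   1UL << (huge_page_order(h) +
					   PAGE_SHIFT - 10));
	}

	/* cumulative consumption across all sizes, in kB like the rest */
	seq_printf(m, "Hugetlb:        %8lu kB\n", total / 1024);
}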
-- 
Michal Hocko
SUSE Labs
