Message-ID: <20171122091056.axzpd7tb3mxif4sg@dhcp22.suse.cz>
Date:   Wed, 22 Nov 2017 10:10:56 +0100
From:   Michal Hocko <mhocko@...nel.org>
To:     Mike Kravetz <mike.kravetz@...cle.com>
Cc:     Roman Gushchin <guro@...com>,
        Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
        Johannes Weiner <hannes@...xchg.org>,
        "Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>,
        Andrea Arcangeli <aarcange@...hat.com>,
        Dave Hansen <dave.hansen@...el.com>,
        David Rientjes <rientjes@...gle.com>, kernel-team@...com,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] mm: show total hugetlb memory consumption in
 /proc/meminfo

On Tue 21-11-17 16:27:38, Mike Kravetz wrote:
> On 11/21/2017 11:59 AM, Roman Gushchin wrote:
[...]
> > What we can do, is to rename "count" into "nr_huge_pages", like:
> > 
> > 	for_each_hstate(h) {
> > 		unsigned long nr_huge_pages = h->nr_huge_pages;
> > 
> > 		total += (PAGE_SIZE << huge_page_order(h)) * nr_huge_pages;
> > 
> > 		if (h == &default_hstate)
> > 			seq_printf(m,
> > 				   "HugePages_Total:   %5lu\n"
> > 				   "HugePages_Free:    %5lu\n"
> > 				   "HugePages_Rsvd:    %5lu\n"
> > 				   "HugePages_Surp:    %5lu\n"
> > 				   "Hugepagesize:   %8lu kB\n",
> > 				   nr_huge_pages,
> > 				   h->free_huge_pages,
> > 				   h->resv_huge_pages,
> > 				   h->surplus_huge_pages,
> > 				   (PAGE_SIZE << huge_page_order(h)) / 1024);
> > 	}
> > 
> > 	seq_printf(m, "Hugetlb:        %8lu kB\n", total / 1024);
> > 
> > But maybe taking a lock is not a bad idea, because it will also
> > guarantee consistency between other numbers (like HugePages_Free) as well,
> > which is not true right now.
> 
> You are correct in that there is no consistency guarantee for the numbers
> with the default huge page size today.  However, I am not really a fan of
> taking the lock for that guarantee.  IMO, the above code is fine.

I agree.

> This discussion reminds me that ideally there should be a per-hstate lock.
> My guess is that the global lock is a carry over from the days when only
> a single huge page size was supported.  In practice, I don't think this is
> much of an issue as people typically only use a single huge page size.  But,
> if anyone thinks it is or may be an issue I am happy to make the changes.

Well, it kind of makes sense but I am not sure it is worth bothering.

-- 
Michal Hocko
SUSE Labs
