Message-ID: <20171023161316.ajrxgd2jzo3u52eu@dhcp22.suse.cz>
Date: Mon, 23 Oct 2017 18:13:16 +0200
From: Michal Hocko <mhocko@...nel.org>
To: "C.Wehrmeyer" <c.wehrmeyer@....de>
Cc: Mike Kravetz <mike.kravetz@...cle.com>, linux-mm@...ck.org,
linux-kernel <linux-kernel@...r.kernel.org>,
Andrea Arcangeli <aarcange@...hat.com>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Vlastimil Babka <vbabka@...e.cz>
Subject: Re: PROBLEM: Remapping hugepages mappings causes kernel to return
EINVAL
On Mon 23-10-17 16:00:13, C.Wehrmeyer wrote:
[...]
> > And that is what we have THP for...
>
> Then I might have been using it incorrectly? I've been digging through
> Documentation/vm/transhuge.txt after your initial pointer, and verified
> that the kernel uses THP pretty much always, even without madvise:
>
> # cat /sys/kernel/mm/transparent_hugepage/enabled
> [always] madvise never
OK
> And just to be very sure I've added:
>
> if (madvise(buf1,ALLOC_SIZE_1,MADV_HUGEPAGE)) {
> errno_tmp = errno;
> fprintf(stderr,"madvise: %u\n",errno_tmp);
> goto out;
> }
>
> /*Make sure the mapping is actually used*/
> memset(buf1,'!',ALLOC_SIZE_1);
Is the buffer aligned to 2MB?
> /*Give me time for monitoring*/
> sleep(2000);
>
> right after the mmap call. I've also made sure that nothing is being
> optimised away by the compiler. With a 2MiB mapping being requested this
> should be a good opportunity for the kernel, and yet when I try to figure
> out how many THPs my processes uses:
>
> $ cat /proc/21986/smaps | grep 'AnonHuge'
>
> I just end up with lots of:
>
> AnonHugePages: 0 kB
>
> And cat /proc/meminfo | grep 'Huge' doesn't change significantly either.
> Am I just doing something wrong here, or shouldn't I trust the THP
> mechanisms to actually allocate huge pages for me?
If the mapping is aligned properly, then the rest is up to the system and
the availability of large physically contiguous memory blocks.
> > General purpose allocator playing with hugetlb
> > pages is rather tricky and I would be really cautious there. I would
> > rather play with THP to reduce the TLB footprint.
>
> May one ask why you'd recommend being cautious here? I understand that
> actual huge pages can slow down certain things - swapping comes to mind
> immediately, which is probably the reason why Linux (used to?) lock such
> pages in memory as well.
THP shouldn't cause any significant slowdown or other issues (these
days). The main reason for the static, pre-allocated huge page pool
(hugetlb) was to guarantee huge page availability. Such a pool is not
reclaimable, which brings obvious issues: the unreclaimable huge pages
reduce the amount of memory usable by the rest of the system, so you
really have to think about how much to reserve to avoid getting into
memory-shortage situations. That makes general purpose use of hugetlb
pages rather challenging.
THP, on the other hand, can come and go as the system is able to
create/keep them, without any userspace involvement. You can hint a
range with madvise(MADV_HUGEPAGE) and the system will try harder to give
you THP there.
--
Michal Hocko
SUSE Labs