Message-ID: <20210301125754.GA4003@linux>
Date:   Mon, 1 Mar 2021 13:57:59 +0100
From:   Oscar Salvador <osalvador@...e.de>
To:     David Hildenbrand <david@...hat.com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Mike Kravetz <mike.kravetz@...cle.com>,
        Muchun Song <songmuchun@...edance.com>,
        Michal Hocko <mhocko@...nel.org>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 0/2] Make alloc_contig_range handle Hugetlb pages

On Mon, Mar 01, 2021 at 01:43:00PM +0100, David Hildenbrand wrote:
> Same experiment with ZONE_MOVABLE:
> 
> a) Free huge pages: all memory can get unplugged again.
> 
> b) Allocated/populated but idle huge pages: all memory can get unplugged
> again.
> 
> c) Allocated/populated but all 512 huge pages are read/written in a loop:
> all memory can get unplugged again, but I get a single
> 
> [  121.192345] alloc_contig_range: [180000, 188000) PFNs busy
> 
> Most probably because it happened to try migrating a huge page while it was
> busy. As virtio-mem retries on ZONE_MOVABLE a couple of times, it can deal
> with this temporary failure.
> 
> 
> 
> Last but not least, I did something extreme:
> 
> ]# cat /proc/meminfo
> MemTotal:        5061568 kB
> MemFree:          186560 kB
> MemAvailable:     354524 kB
> ...
> HugePages_Total:    2048
> HugePages_Free:     2048
> HugePages_Rsvd:        0
> HugePages_Surp:        0
> 
> 
> Triggering unplug would require to dissolve+alloc - which now fails when
> trying to allocate an additional ~512 huge pages (1G).
> 
> 
> As expected, I can properly see memory unplug not fully succeeding. + I get
> a fairly continuous stream of
> 
> [  226.611584] alloc_contig_range: [19f400, 19f800) PFNs busy
> ...
> 
> But more importantly, the hugepage count remains stable, as configured by
> the admin (me):
> 
> HugePages_Total:    2048
> HugePages_Free:     2048
> HugePages_Rsvd:        0
> HugePages_Surp:        0
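[Editorial note: the counter stability shown above can be observed from userspace by polling /proc/meminfo during an unplug attempt. The following is an illustrative sketch only, not part of the patch; the helper name `parse_hugepages` is hypothetical.]

```python
# Illustrative sketch (not from the patch series): parse the HugePages_*
# counters out of /proc/meminfo text, so the values can be watched for
# stability while memory unplug is in progress.
def parse_hugepages(meminfo_text):
    """Return a dict of the HugePages_* counters (values are page counts,
    not kB -- these particular /proc/meminfo lines carry no unit)."""
    counters = {}
    for line in meminfo_text.splitlines():
        if line.startswith("HugePages_"):
            name, value = line.split(":", 1)
            counters[name.strip()] = int(value.split()[0])
    return counters

if __name__ == "__main__":
    # On a Linux box, print the current counters once.
    with open("/proc/meminfo") as f:
        print(parse_hugepages(f.read()))
```

Run in a loop, an admin would expect HugePages_Total and HugePages_Free to stay at the configured values (2048 above) even while alloc_contig_range reports busy PFN ranges.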

Thanks for giving it a spin, David, that is highly appreciated ;-)!

If you do not mind, I will add the above information to the next version's
changelog, so the before-and-after can be seen clearly.

I shall send v4 in the course of the next few days.

-- 
Oscar Salvador
SUSE L3
