Date:   Mon, 9 Nov 2020 19:51:38 +0100
From:   Oscar Salvador <osalvador@...e.de>
To:     Muchun Song <songmuchun@...edance.com>
Cc:     corbet@....net, mike.kravetz@...cle.com, tglx@...utronix.de,
        mingo@...hat.com, bp@...en8.de, x86@...nel.org, hpa@...or.com,
        dave.hansen@...ux.intel.com, luto@...nel.org, peterz@...radead.org,
        viro@...iv.linux.org.uk, akpm@...ux-foundation.org,
        paulmck@...nel.org, mchehab+huawei@...nel.org,
        pawan.kumar.gupta@...ux.intel.com, rdunlap@...radead.org,
        oneukum@...e.com, anshuman.khandual@....com, jroedel@...e.de,
        almasrymina@...gle.com, rientjes@...gle.com, willy@...radead.org,
        mhocko@...e.com, duanxiongchun@...edance.com,
        linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org, linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH v3 09/21] mm/hugetlb: Free the vmemmap pages associated
 with each hugetlb page

On Sun, Nov 08, 2020 at 10:11:01PM +0800, Muchun Song wrote:
> +static inline int freed_vmemmap_hpage(struct page *page)
> +{
> +	return atomic_read(&page->_mapcount) + 1;
> +}
> +
> +static inline int freed_vmemmap_hpage_inc(struct page *page)
> +{
> +	return atomic_inc_return_relaxed(&page->_mapcount) + 1;
> +}
> +
> +static inline int freed_vmemmap_hpage_dec(struct page *page)
> +{
> +	return atomic_dec_return_relaxed(&page->_mapcount) + 1;
> +}

Are these _relaxed variants any different from the normal ones on x86_64?
I got confused following the macros.
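
FWIW, my reading of the generic fallbacks (paraphrased below, so take it
with a grain of salt) is that an arch which only provides the fully
ordered op gets the _relaxed variant aliased to it, so on x86-64 they
should end up being the exact same instruction:

	/*
	 * include/linux/atomic-fallback.h, paraphrased: when the arch
	 * only provides the fully ordered op, every ordering aliases
	 * to it.
	 */
	#ifndef atomic_inc_return_relaxed
	#define atomic_inc_return_acquire	atomic_inc_return
	#define atomic_inc_return_release	atomic_inc_return
	#define atomic_inc_return_relaxed	atomic_inc_return
	#endif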

> +static void __free_huge_page_pte_vmemmap(struct page *reuse, pte_t *ptep,
> +					 unsigned long start,
> +					 unsigned int nr_free,
> +					 struct list_head *free_pages)
> +{
> +	/* Make sure the tail pages are mapped read-only. */
> +	pgprot_t pgprot = PAGE_KERNEL_RO;
> +	pte_t entry = mk_pte(reuse, pgprot);
> +	unsigned long addr;
> +	unsigned long end = start + (nr_free << PAGE_SHIFT);

See below.

> +static void __free_huge_page_pmd_vmemmap(struct hstate *h, pmd_t *pmd,
> +					 unsigned long addr,
> +					 struct list_head *free_pages)
> +{
> +	unsigned long next;
> +	unsigned long start = addr + RESERVE_VMEMMAP_NR * PAGE_SIZE;
> +	unsigned long end = addr + vmemmap_pages_size_per_hpage(h);
> +	struct page *reuse = NULL;
> +
> +	addr = start;
> +	do {
> +		unsigned int nr_pages;
> +		pte_t *ptep;
> +
> +		ptep = pte_offset_kernel(pmd, addr);
> +		if (!reuse)
> +			reuse = pte_page(ptep[-1]);

Can we define a proper name for that instead of -1?

e.g. TAIL_PAGE_REUSE or something like that.
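
Something like this (untested, name and placement up for debate):

	/*
	 * The "reuse" page sits right in front of the range we remap,
	 * hence the -1.
	 */
	#define TAIL_PAGE_REUSE		-1

		if (!reuse)
			reuse = pte_page(ptep[TAIL_PAGE_REUSE]);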

> +
> +		next = vmemmap_hpage_addr_end(addr, end);
> +		nr_pages = (next - addr) >> PAGE_SHIFT;
> +		__free_huge_page_pte_vmemmap(reuse, ptep, addr, nr_pages,
> +					     free_pages);

Why not pass next instead of nr_pages? I think it makes more sense.
As a bonus we can kill the variable.
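
Roughly, and untested, that would turn the helper into:

	static void __free_huge_page_pte_vmemmap(struct page *reuse, pte_t *ptep,
						 unsigned long start,
						 unsigned long end,
						 struct list_head *free_pages)

with the call site becoming:

		__free_huge_page_pte_vmemmap(reuse, ptep, addr, next, free_pages);

end then replaces the "start + (nr_free << PAGE_SHIFT)" computation in
the callee.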

> +static void split_vmemmap_huge_page(struct hstate *h, struct page *head,
> +				    pmd_t *pmd)
> +{
> +	pgtable_t pgtable;
> +	unsigned long start = (unsigned long)head & VMEMMAP_HPAGE_MASK;
> +	unsigned long addr = start;
> +	unsigned int nr = pgtable_pages_to_prealloc_per_hpage(h);
> +
> +	while (nr-- && (pgtable = vmemmap_pgtable_withdraw(head))) {

The same as with the previous patches: I would scrap "nr" and its use.
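
Assuming vmemmap_pgtable_withdraw() returns NULL once the preallocated
page tables run out, the loop condition alone should do (untested):

	while ((pgtable = vmemmap_pgtable_withdraw(head))) {
		...
	}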

> +		VM_BUG_ON(freed_vmemmap_hpage(pgtable));

I guess here we want to check whether we already called free_huge_page_vmemmap
on this range?
For that to have happened, the locking must have failed, right?

> +static void free_huge_page_vmemmap(struct hstate *h, struct page *head)
> +{
> +	pmd_t *pmd;
> +	spinlock_t *ptl;
> +	LIST_HEAD(free_pages);
> +
> +	if (!free_vmemmap_pages_per_hpage(h))
> +		return;
> +
> +	pmd = vmemmap_to_pmd(head);
> +	ptl = vmemmap_pmd_lock(pmd);
> +	if (vmemmap_pmd_huge(pmd)) {
> +		VM_BUG_ON(!pgtable_pages_to_prealloc_per_hpage(h));

I think that checking for free_vmemmap_pages_per_hpage is enough.
In the end, pgtable_pages_to_prealloc_per_hpage uses free_vmemmap_pages_per_hpage.
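
i.e. (untested):

	VM_BUG_ON(!free_vmemmap_pages_per_hpage(h));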


-- 
Oscar Salvador
SUSE L3
