Message-ID: <87k021bfpj.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date:   Thu, 05 Jan 2023 15:31:04 +0800
From:   "Huang, Ying" <ying.huang@...el.com>
To:     Alistair Popple <apopple@...dia.com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>, <linux-mm@...ck.org>,
        <linux-kernel@...r.kernel.org>, Zi Yan <ziy@...dia.com>,
        Yang Shi <shy828301@...il.com>,
        Baolin Wang <baolin.wang@...ux.alibaba.com>,
        "Oscar Salvador" <osalvador@...e.de>,
        Matthew Wilcox <willy@...radead.org>,
        "Bharata B Rao" <bharata@....com>, haoxin <xhao@...ux.alibaba.com>
Subject: Re: [PATCH 2/8] migrate_pages: separate hugetlb folios migration


[snip]

>
>>>> @@ -1462,30 +1549,28 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>>>  		nr_retry_pages = 0;
>>>>  
>>>>  		list_for_each_entry_safe(folio, folio2, from, lru) {
>>>> +			if (folio_test_hugetlb(folio)) {
>>>
>>> How do we hit this case? Shouldn't migrate_hugetlbs() have already moved
>>> any hugetlb folios off the from list?
>>
>> Retried hugetlb folios will be kept on the from list.
>
> Couldn't migrate_hugetlbs() remove the failing retried pages from the
> list on the final pass? That seems cleaner to me.

To do that, we would need to go through the folio list again to remove
all the hugetlb folios.  That could be time-consuming in some cases, so
I think it's better to keep this check.
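
To make the trade-off concrete, here is a rough userspace sketch (not
kernel code; "struct folio" below is a toy stand-in and both functions
are hypothetical illustrations, not the real migrate_hugetlbs() or
migrate_pages()).  Removing leftover hugetlb folios afterwards means one
more full walk of the remaining list, while the per-entry check in the
patch rides along in the loop we are walking anyway.

#include <stdbool.h>
#include <stdio.h>

struct folio {                  /* toy stand-in for struct folio */
        bool hugetlb;           /* would be folio_test_hugetlb() */
        struct folio *next;
};

/*
 * Modeled alternative: after the hugetlb passes, walk the whole
 * remaining "from" list once more and move any leftover (retried but
 * failed) hugetlb entries to "ret".  Costs an extra full traversal.
 */
static void strip_leftover_hugetlb(struct folio **from, struct folio **ret)
{
        struct folio **pp = from;

        while (*pp) {
                struct folio *f = *pp;

                if (f->hugetlb) {
                        *pp = f->next;  /* unlink from "from" */
                        f->next = *ret; /* move to "ret" */
                        *ret = f;
                } else {
                        pp = &f->next;
                }
        }
}

/*
 * Modeled approach from the patch: the non-hugetlb loop already walks
 * the list, so a per-entry check is enough and no extra pass is needed.
 */
static void main_loop(struct folio **from, struct folio **ret)
{
        struct folio **pp = from;

        while (*pp) {
                struct folio *f = *pp;

                if (f->hugetlb) {       /* leftover retried hugetlb folio */
                        *pp = f->next;
                        f->next = *ret;
                        *ret = f;
                        continue;
                }
                /* ... unmap_and_move() would run here ... */
                pp = &f->next;
        }
}

int main(void)
{
        struct folio a = { .hugetlb = true }, b = { .hugetlb = false };
        struct folio c = { .hugetlb = true };
        struct folio *from, *ret;

        /* Approach in the patch: hugetlb entries handled inside the loop. */
        a.next = &b;
        from = &a;
        ret = NULL;
        main_loop(&from, &ret);
        printf("main_loop:  ret has hugetlb folio: %s\n",
               ret == &a ? "yes" : "no");

        /* Suggested alternative: an extra pass over the leftover list. */
        from = &c;
        ret = NULL;
        strip_leftover_hugetlb(&from, &ret);
        printf("extra pass: ret has hugetlb folio: %s\n",
               ret == &c ? "yes" : "no");
        return 0;
}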

Best Regards,
Huang, Ying

>>>> +				list_move_tail(&folio->lru, &ret_folios);
>>>> +				continue;
>>>> +			}
>>>> +
>>>>  			/*
>>>>  			 * Large folio statistics is based on the source large
>>>>  			 * folio. Capture required information that might get
>>>>  			 * lost during migration.
>>>>  			 */
>>>> -			is_large = folio_test_large(folio) && !folio_test_hugetlb(folio);
>>>> +			is_large = folio_test_large(folio);
>>>>  			is_thp = is_large && folio_test_pmd_mappable(folio);
>>>>  			nr_pages = folio_nr_pages(folio);
>>>> +
>>>>  			cond_resched();
>>>>  
>>>> -			if (folio_test_hugetlb(folio))
>>>> -				rc = unmap_and_move_huge_page(get_new_page,
>>>> -						put_new_page, private,
>>>> -						&folio->page, pass > 2, mode,
>>>> -						reason,
>>>> -						&ret_folios);
>>>> -			else
>>>> -				rc = unmap_and_move(get_new_page, put_new_page,
>>>> -						private, folio, pass > 2, mode,
>>>> -						reason, &ret_folios);
>>>> +			rc = unmap_and_move(get_new_page, put_new_page,
>>>> +					    private, folio, pass > 2, mode,
>>>> +					    reason, &ret_folios);
>>>>  			/*
>>>>  			 * The rules are:
>>>> -			 *	Success: non hugetlb folio will be freed, hugetlb
>>>> -			 *		 folio will be put back
>>>> +			 *	Success: folio will be freed
>>>>  			 *	-EAGAIN: stay on the from list
>>>>  			 *	-ENOMEM: stay on the from list
>>>>  			 *	-ENOSYS: stay on the from list
>>>> @@ -1512,7 +1597,6 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>>>  						stats.nr_thp_split += is_thp;
>>>>  						break;
>>>>  					}
>>>> -				/* Hugetlb migration is unsupported */
>>>>  				} else if (!no_split_folio_counting) {
>>>>  					nr_failed++;
>>>>  				}
