Message-ID: <ceec5604-b1df-2e14-8966-933865245f1c@linux.alibaba.com>
Date:   Mon, 25 Mar 2019 12:49:21 -0700
From:   Yang Shi <yang.shi@...ux.alibaba.com>
To:     Keith Busch <kbusch@...nel.org>
Cc:     mhocko@...e.com, mgorman@...hsingularity.net, riel@...riel.com,
        hannes@...xchg.org, akpm@...ux-foundation.org,
        dave.hansen@...el.com, keith.busch@...el.com,
        dan.j.williams@...el.com, fengguang.wu@...el.com, fan.du@...el.com,
        ying.huang@...el.com, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 06/10] mm: vmscan: demote anon DRAM pages to PMEM node



On 3/24/19 3:20 PM, Keith Busch wrote:
> On Sat, Mar 23, 2019 at 12:44:31PM +0800, Yang Shi wrote:
>>   		/*
>> +		 * Demote DRAM pages regardless of the mempolicy.
>> +		 * Demote anonymous pages only for now and skip MADV_FREE
>> +		 * pages.
>> +		 */
>> +		if (PageAnon(page) && !PageSwapCache(page) &&
>> +		    (node_isset(page_to_nid(page), def_alloc_nodemask)) &&
>> +		    PageSwapBacked(page)) {
>> +
>> +			if (has_nonram_online()) {
>> +				list_add(&page->lru, &demote_pages);
>> +				unlock_page(page);
>> +				continue;
>> +			}
>> +		}
>> +
>> +		/*
>>   		 * Anonymous process memory has backing store?
>>   		 * Try to allocate it some swap space here.
>>   		 * Lazyfree page could be freed directly
>> @@ -1477,6 +1507,25 @@ static unsigned long shrink_page_list(struct list_head *page_list,
>>   		VM_BUG_ON_PAGE(PageLRU(page) || PageUnevictable(page), page);
>>   	}
>>   
>> +	/* Demote pages to PMEM */
>> +	if (!list_empty(&demote_pages)) {
>> +		int err, target_nid;
>> +		nodemask_t used_mask;
>> +
>> +		nodes_clear(used_mask);
>> +		target_nid = find_next_best_node(pgdat->node_id, &used_mask,
>> +						 true);
>> +
>> +		err = migrate_pages(&demote_pages, alloc_new_node_page, NULL,
>> +				    target_nid, MIGRATE_ASYNC, MR_DEMOTE);
>> +
>> +		if (err) {
>> +			putback_movable_pages(&demote_pages);
>> +
>> +			list_splice(&ret_pages, &demote_pages);
>> +		}
>> +	}
>> +
>>   	mem_cgroup_uncharge_list(&free_pages);
>>   	try_to_unmap_flush();
>>   	free_unref_page_list(&free_pages);
> How do these pages eventually get to swap when migration fails? Looks
> like that's skipped.

Yes, they will just be put back on the LRU. Actually, I don't expect 
migration to fail very often at this stage (though I have no test data 
to support this hypothesis), since the pages have already been isolated 
from the LRU, so other reclaim paths should not find them anymore.

If a page is locked by someone else right before migration, it has 
likely been referenced again, so putting it back on the LRU does not 
sound bad.
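
For concreteness, one possible variant (just a sketch, not what the 
patch does, and ignoring the isolated-page accounting details) would be 
to route the leftovers through shrink_page_list()'s existing exit path 
instead of calling putback_movable_pages() directly; migrate_pages() 
leaves the un-migrated pages on the passed-in list:

	/*
	 * Sketch only: splice whatever could not be demoted into
	 * ret_pages.  shrink_page_list() hands those pages back to the
	 * caller, which returns them to the LRU, so a later reclaim
	 * pass can still swap them out through the normal anon path.
	 */
	if (err)
		list_splice_init(&demote_pages, &ret_pages);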

A potential improvement would be to use synchronous migration for kswapd.
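For example, something like the below (sketch only; current_is_kswapd() 
and MIGRATE_SYNC already exist in the kernel, the rest comes from this 
series):

	/*
	 * Sketch only: kswapd can afford to block briefly, so use
	 * synchronous migration there and keep direct reclaim on the
	 * cheaper asynchronous mode.
	 */
	enum migrate_mode mode = current_is_kswapd() ?
				 MIGRATE_SYNC : MIGRATE_ASYNC;

	err = migrate_pages(&demote_pages, alloc_new_node_page, NULL,
			    target_nid, mode, MR_DEMOTE);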

>
> And page cache demotion is useful too, we shouldn't consider only
> anonymous for this feature.

Yes, definitely. I'm looking into the page cache case now. Any 
suggestions are welcome.
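
As a rough, untested sketch of where that would hook in, the check in 
the hunk above could be relaxed along these lines (page_is_file_cache() 
is the existing helper; the extra dirty/writeback handling that page 
cache would need is ignored here):

	/*
	 * Sketch only: demote both anon and page cache pages that sit
	 * on a DRAM node, still skipping MADV_FREE pages (anon but
	 * !PageSwapBacked).
	 */
	if (node_isset(page_to_nid(page), def_alloc_nodemask) &&
	    (page_is_file_cache(page) ||
	     (PageAnon(page) && !PageSwapCache(page) &&
	      PageSwapBacked(page)))) {
		if (has_nonram_online()) {
			list_add(&page->lru, &demote_pages);
			unlock_page(page);
			continue;
		}
	}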

Thanks,
Yang

