Message-ID: <20181015154112.6bj5p4zuxjtz43pd@kshutemo-mobl1>
Date:   Mon, 15 Oct 2018 18:41:12 +0300
From:   "Kirill A. Shutemov" <kirill@...temov.name>
To:     Kirill Tkhai <ktkhai@...tuozzo.com>
Cc:     akpm@...ux-foundation.org, kirill.shutemov@...ux.intel.com,
        andriy.shevchenko@...ux.intel.com, mhocko@...e.com,
        rppt@...ux.vnet.ibm.com, imbrenda@...ux.vnet.ibm.com,
        corbet@....net, ndesaulniers@...gle.com, dave.jiang@...el.com,
        jglisse@...hat.com, jia.he@...-semitech.com,
        paulmck@...ux.vnet.ibm.com, colin.king@...onical.com,
        jiang.biao2@....com.cn, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH RFC] ksm: Assist buddy allocator to assemble 1-order pages

On Thu, Oct 11, 2018 at 01:52:22PM +0300, Kirill Tkhai wrote:
> try_to_merge_two_pages() merges two pages: one of them
> is a page of the currently scanned mm, the other is a
> page with an identical hash from the unstable tree.
> Currently, we merge the page from the unstable tree
> into the first one, and then free it.
> 
> The idea of this patch is to prefer freeing the one of
> them that has a free neighbour (i.e., a neighbour with
> zero page_count()). This allows the buddy allocator to
> assemble at least a 1-order set from the freed page and
> its neighbour; this is a kind of cheap passive
> compaction.
> 
> AFAIK, a 1-order page set consists of pages with PFNs
> [2n, 2n+1] (even, odd), so the neighbour's pfn is
> calculated via XOR with 1. We check that the resulting
> pfn is valid, check its page_count(), and prefer merging
> into @tree_page if the neighbour's usage count is zero.
> 
> There is a small difference from the current behaviour
> in the error path. In case the second
> try_to_merge_with_ksm_page() fails, we return from
> try_to_merge_two_pages() with @tree_page removed from
> the unstable tree. It does not seem to matter, but if
> we do not want any change at all, it's not a problem to
> move remove_rmap_item_from_tree() from
> try_to_merge_with_ksm_page() to its callers.
> 
> Signed-off-by: Kirill Tkhai <ktkhai@...tuozzo.com>
> ---
>  mm/ksm.c |   15 +++++++++++++++
>  1 file changed, 15 insertions(+)
> 
> diff --git a/mm/ksm.c b/mm/ksm.c
> index 5b0894b45ee5..b83ca37e28f0 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -1321,6 +1321,21 @@ static struct page *try_to_merge_two_pages(struct rmap_item *rmap_item,
>  {
>  	int err;
>  
> +	if (IS_ENABLED(CONFIG_COMPACTION)) {
> +		unsigned long pfn;
> +		/*
> +		 * Find neighbour of @page containing 1-order pair
> +		 * in buddy-allocator and check whether it is free.

You cannot really check whether the page is free. There are some paths
that make the refcount zero temporarily but don't free the page.
See page_ref_freeze() for instance.

It should be fine for this use case, but the comment should state that we
speculate about page usage rather than get a definitive answer.
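
Maybe something along these lines (just a sketch, exact wording is up to you):

		/*
		 * Speculative check: a zero refcount on the order-1 buddy
		 * of @page suggests, but does not guarantee, that it is
		 * free. If it looks free, prefer keeping @tree_page and
		 * freeing @page, so the buddy allocator may assemble a
		 * 1-order set from @page and its neighbour.
		 */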

[ I don't know enough about KSM to ack the patch in general, but it looks
fine to me at first glance. ]

> +		 * If it is so, try to use @tree_page as ksm page
> +		 * and to free @page.
> +		 */
> +		pfn = (page_to_pfn(page) ^ 1);
> +		if (pfn_valid(pfn) && page_count(pfn_to_page(pfn)) == 0) {
> +			swap(rmap_item, tree_rmap_item);
> +			swap(page, tree_page);
> +		}
> +	}
> +
>  	err = try_to_merge_with_ksm_page(rmap_item, page, NULL);
>  	if (!err) {
>  		err = try_to_merge_with_ksm_page(tree_rmap_item,
> 
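
If you respin, the check might also read a bit clearer factored out into a
helper; a completely untested sketch (helper name is made up):

static bool buddy_pfn_maybe_free(unsigned long pfn)
{
	unsigned long buddy_pfn = pfn ^ 1;	/* order-1 buddy */

	if (!pfn_valid(buddy_pfn))
		return false;

	/*
	 * Speculative: the refcount can be zero transiently
	 * (see page_ref_freeze()), so treat this only as a hint.
	 */
	return page_count(pfn_to_page(buddy_pfn)) == 0;
}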

-- 
 Kirill A. Shutemov
