Date:   Mon, 22 Nov 2021 03:50:47 +0300
From:   "Kirill A. Shutemov" <kirill@...temov.name>
To:     Shakeel Butt <shakeelb@...gle.com>
Cc:     David Hildenbrand <david@...hat.com>,
        "Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
        Yang Shi <shy828301@...il.com>, Zi Yan <ziy@...dia.com>,
        Matthew Wilcox <willy@...radead.org>,
        Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: split thp synchronously on MADV_DONTNEED

On Sat, Nov 20, 2021 at 12:12:30PM -0800, Shakeel Butt wrote:
> Many applications do sophisticated management of their heap memory for
> better performance at low cost. We have a bunch of such applications
> running in production; examples include caching and data storage
> services. These applications keep their hot data on THPs for better
> performance and release the cold data through MADV_DONTNEED to keep
> the memory cost low.
> 
> The kernel defers the split and release of THPs until there is memory
> pressure. This complicates the memory management of these
> sophisticated applications, which then need to look into the low-level
> kernel handling of THPs to gauge their headroom for expansion. In
> addition, these applications are very latency sensitive and would
> prefer not to face memory reclaim, given its non-deterministic nature.
> 
> This patch lets such applications not worry about the low-level
> handling of THPs in the kernel and splits the THPs synchronously on
> MADV_DONTNEED.
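
To make sure I follow, the pattern you describe is roughly this from
userspace (a minimal sketch; the 2M length, the MADV_HUGEPAGE hint and
the error handling are illustrative, assuming x86-64 PMD-sized THPs):

#include <string.h>
#include <sys/mman.h>

#define LEN (2UL * 1024 * 1024)		/* one PMD-sized THP */

int main(void)
{
	char *buf = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		return 1;

	madvise(buf, LEN, MADV_HUGEPAGE);	/* ask for THP backing */
	memset(buf, 1, LEN);			/* fault in the hot data */

	/*
	 * The data went cold: release it.  Today the underlying THP
	 * is only split lazily under memory pressure; with this patch
	 * the split happens synchronously here.
	 */
	madvise(buf, LEN, MADV_DONTNEED);

	munmap(buf, LEN);
	return 0;
}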

Have you considered the impact on short-lived tasks, where paying the
splitting tax would hurt performance without any benefit? Maybe a
separate madvise operation is needed? I dunno.

> Signed-off-by: Shakeel Butt <shakeelb@...gle.com>
> ---
>  include/linux/mmzone.h   |  5 ++++
>  include/linux/sched.h    |  4 ++++
>  include/linux/sched/mm.h | 11 +++++++++
>  kernel/fork.c            |  3 +++
>  mm/huge_memory.c         | 50 ++++++++++++++++++++++++++++++++++++++++
>  mm/madvise.c             |  8 +++++++
>  6 files changed, 81 insertions(+)
> 
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 58e744b78c2c..7fa0035128b9 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -795,6 +795,11 @@ struct deferred_split {
>  	struct list_head split_queue;
>  	unsigned long split_queue_len;
>  };
> +void split_local_deferred_list(struct list_head *defer_list);
> +#else
> +static inline void split_local_deferred_list(struct list_head *defer_list)
> +{
> +}
>  #endif
>  
>  /*
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index 9d27fd0ce5df..a984bb6509d9 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -1412,6 +1412,10 @@ struct task_struct {
>  	struct mem_cgroup		*active_memcg;
>  #endif
>  
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> +	struct list_head		*deferred_split_list;
> +#endif
> +
>  #ifdef CONFIG_BLK_CGROUP
>  	struct request_queue		*throttle_queue;
>  #endif

It looks dirty. Do we really have no way to pass it down?

Maybe pass the list down via zap_details and call a new rmap-remove
helper if the list is present?
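
Something along these lines, maybe (rough sketch only; the field and
helper names are made up to illustrate the shape, and locking and
refcounting are elided):

struct zap_details {
	/* ... existing fields ... */
	struct list_head *deferred_split_list;	/* may be NULL */
};

/* New rmap-remove helper for the zap path. */
static void zap_page_remove_rmap(struct page *page,
				 struct zap_details *details)
{
	page_remove_rmap(page, false);
	if (details && details->deferred_split_list &&
	    PageTransCompound(page))
		list_move(page_deferred_list(compound_head(page)),
			  details->deferred_split_list);
}

That way no per-task state is needed.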

>  
> +void split_local_deferred_list(struct list_head *defer_list)
> +{
> +	struct list_head *pos, *next;
> +	struct page *page;
> +
> +	/* First iteration for split. */
> +	list_for_each_safe(pos, next, defer_list) {
> +		page = list_entry((void *)pos, struct page, deferred_list);
> +		page = compound_head(page);
> +
> +		if (!trylock_page(page))
> +			continue;
> +
> +		if (split_huge_page(page)) {
> +			unlock_page(page);
> +			continue;
> +		}
> +		/* split_huge_page() removes page from list on success */
> +		unlock_page(page);
> +
> +		/* corresponding get in deferred_split_huge_page. */
> +		put_page(page);
> +	}
> +
> +	/* Second iteration to putback failed pages. */
> +	list_for_each_safe(pos, next, defer_list) {
> +		struct deferred_split *ds_queue;
> +		unsigned long flags;
> +
> +		page = list_entry((void *)pos, struct page, deferred_list);
> +		page = compound_head(page);
> +		ds_queue = get_deferred_split_queue(page);
> +
> +		spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
> +		list_move(page_deferred_list(page), &ds_queue->split_queue);
> +		spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
> +
> +		/* corresponding get in deferred_split_huge_page. */
> +		put_page(page);
> +	}
> +}

Looks like a lot of copy-paste from deferred_split_scan(). Can we get them
consolidated?
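
Something like a single helper that both deferred_split_scan() and
split_local_deferred_list() feed a detached local list might work
(hypothetical shape; refcount handling stays with the callers, as in
the patch):

static int try_to_split_list(struct list_head *list)
{
	struct list_head *pos, *next;
	struct page *page;
	int split = 0;

	list_for_each_safe(pos, next, list) {
		page = compound_head(list_entry((void *)pos,
						struct page, deferred_list));
		if (!trylock_page(page))
			continue;
		/* split_huge_page() removes page from list on success */
		if (!split_huge_page(page))
			split++;
		unlock_page(page);
	}
	/* Pages still on the list failed to split. */
	return split;
}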

-- 
 Kirill A. Shutemov
