Message-ID: <585791f4-4b41-5e73-296e-691d5478a915@redhat.com>
Date: Tue, 22 Dec 2020 09:31:34 +0100
From: David Hildenbrand <david@...hat.com>
To: Alexander Duyck <alexander.h.duyck@...ux.intel.com>,
Mel Gorman <mgorman@...hsingularity.net>,
Andrew Morton <akpm@...ux-foundation.org>,
Andrea Arcangeli <aarcange@...hat.com>,
Dan Williams <dan.j.williams@...el.com>,
"Michael S. Tsirkin" <mst@...hat.com>,
Jason Wang <jasowang@...hat.com>,
Dave Hansen <dave.hansen@...el.com>,
Michal Hocko <mhocko@...e.com>,
Liang Li <liliangleo@...iglobal.com>,
Mike Kravetz <mike.kravetz@...cle.com>,
Liang Li <liliang324@...il.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
virtualization@...ts.linux-foundation.org, qemu-devel@...gnu.org
Subject: Re: [RFC PATCH 3/3] mm: support free hugepage pre zero out
On 22.12.20 08:49, Liang Li wrote:
> This patch adds support for pre-zeroing free hugepages; we can use
> this feature to speed up page population and page fault handling.
>
> Cc: Alexander Duyck <alexander.h.duyck@...ux.intel.com>
> Cc: Mel Gorman <mgorman@...hsingularity.net>
> Cc: Andrea Arcangeli <aarcange@...hat.com>
> Cc: Dan Williams <dan.j.williams@...el.com>
> Cc: Dave Hansen <dave.hansen@...el.com>
> Cc: David Hildenbrand <david@...hat.com>
> Cc: Michal Hocko <mhocko@...e.com>
> Cc: Andrew Morton <akpm@...ux-foundation.org>
> Cc: Alex Williamson <alex.williamson@...hat.com>
> Cc: Michael S. Tsirkin <mst@...hat.com>
> Cc: Jason Wang <jasowang@...hat.com>
> Cc: Mike Kravetz <mike.kravetz@...cle.com>
> Cc: Liang Li <liliang324@...il.com>
> Signed-off-by: Liang Li <liliangleo@...iglobal.com>
> ---
> mm/page_prezero.c | 17 +++++++++++++++++
> 1 file changed, 17 insertions(+)
>
> diff --git a/mm/page_prezero.c b/mm/page_prezero.c
> index c8ce720bfc54..dff4e0adf402 100644
> --- a/mm/page_prezero.c
> +++ b/mm/page_prezero.c
> @@ -26,6 +26,7 @@ static unsigned long delay_millisecs = 1000;
> static unsigned long zeropage_enable __read_mostly;
> static DEFINE_MUTEX(kzeropaged_mutex);
> static struct page_reporting_dev_info zero_page_dev_info;
> +static struct page_reporting_dev_info zero_hugepage_dev_info;
>
> inline void clear_zero_page_flag(struct page *page, int order)
> {
> @@ -69,9 +70,17 @@ static int start_kzeropaged(void)
> zero_page_dev_info.delay_jiffies = msecs_to_jiffies(delay_millisecs);
>
> err = page_reporting_register(&zero_page_dev_info);
> +
> + zero_hugepage_dev_info.report = zero_free_pages;
> + zero_hugepage_dev_info.mini_order = mini_page_order;
> + zero_hugepage_dev_info.batch_size = batch_size;
> + zero_hugepage_dev_info.delay_jiffies = msecs_to_jiffies(delay_millisecs);
> +
> + err |= hugepage_reporting_register(&zero_hugepage_dev_info);
> pr_info("Zero page enabled\n");
> } else {
> page_reporting_unregister(&zero_page_dev_info);
> + hugepage_reporting_unregister(&zero_hugepage_dev_info);
> pr_info("Zero page disabled\n");
> }
>
> @@ -90,7 +99,15 @@ static int restart_kzeropaged(void)
> zero_page_dev_info.batch_size = batch_size;
> zero_page_dev_info.delay_jiffies = msecs_to_jiffies(delay_millisecs);
>
> + hugepage_reporting_unregister(&zero_hugepage_dev_info);
> +
> + zero_hugepage_dev_info.report = zero_free_pages;
> + zero_hugepage_dev_info.mini_order = mini_page_order;
> + zero_hugepage_dev_info.batch_size = batch_size;
> + zero_hugepage_dev_info.delay_jiffies = msecs_to_jiffies(delay_millisecs);
> +
> err = page_reporting_register(&zero_page_dev_info);
> + err |= hugepage_reporting_register(&zero_hugepage_dev_info);
> pr_info("Zero page enabled\n");
> }
>
>
Free page reporting in virtio-balloon doesn't give you any guarantees
regarding zeroing of pages. Take a look at the QEMU implementation -
e.g., with vfio, all reports are simply ignored.
Also, I am not sure if mangling such details ("zeroing of pages") into
the page reporting infrastructure is a good idea.
--
Thanks,
David / dhildenb