Message-Id: <20170119153324.69cd6ba29704b02040412ec6@linux-foundation.org>
Date: Thu, 19 Jan 2017 15:33:24 -0800
From: Andrew Morton <akpm@...ux-foundation.org>
To: Claudio Imbrenda <imbrenda@...ux.vnet.ibm.com>
Cc: linux-mm@...ck.org, borntraeger@...ibm.com, hughd@...gle.com,
aarcange@...hat.com, chrisw@...s-sol.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 1/1] mm/ksm: improve deduplication of zero pages with
colouring

On Thu, 19 Jan 2017 19:35:53 +0100 Claudio Imbrenda <imbrenda@...ux.vnet.ibm.com> wrote:

> Some architectures have a set of zero pages (coloured zero pages)
> instead of only one zero page, in order to improve the cache
> performance. In those cases, the kernel samepage merger (KSM) would
> merge all the allocated pages that happen to be filled with zeroes to
> the same deduplicated page, thus losing all the advantages of coloured
> zero pages.
>
> This behaviour is noticeable when a process accesses large arrays of
> allocated pages containing zeroes. A test I conducted on s390 shows
> that there is a speed penalty when KSM merges such pages, compared to
> not merging them or using actual zero pages from the start without
> breaking the COW.
>
> This patch fixes this behaviour. When coloured zero pages are present,
> the checksum of a zero page is calculated during initialisation, and
> compared with the checksum of the current candidate during merging. In
> case of a match, the normal merging routine is used to merge the page
> with the correct coloured zero page, which ensures the candidate page
> is checked to be equal to the target zero page.
>
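A minimal user-space model of the scheme described above may help make
it concrete (this is not the kernel code itself; PAGE_SIZE, the toy
checksum and the helper names are illustrative assumptions): the
checksum of a zero-filled page is computed once up front, used as a
cheap filter on candidates, and a full compare is still performed
before a candidate is actually treated as a zero page.

/*
 * Illustrative user-space sketch only: the checksum function is a
 * stand-in for whatever KSM uses internally, and the helpers are
 * made-up names, not kernel interfaces.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096

static const unsigned char zero_page[PAGE_SIZE];
static uint32_t zero_checksum;	/* computed once at "initialisation" */

static uint32_t page_checksum(const unsigned char *page)
{
	uint32_t sum = 0;
	size_t i;

	for (i = 0; i < PAGE_SIZE; i++)	/* toy checksum */
		sum = sum * 31 + page[i];
	return sum;
}

static void init_zero_checksum(void)
{
	zero_checksum = page_checksum(zero_page);
}

/* Return 1 if the candidate should be backed by the zero page. */
static int candidate_is_zero_page(const unsigned char *candidate)
{
	/* cheap filter first ... */
	if (page_checksum(candidate) != zero_checksum)
		return 0;
	/* ... then verify for real, as the merge routine would */
	return memcmp(candidate, zero_page, PAGE_SIZE) == 0;
}

int main(void)
{
	unsigned char page[PAGE_SIZE] = { 0 };

	init_zero_checksum();
	printf("zero page detected: %d\n", candidate_is_zero_page(page));
	page[123] = 1;
	printf("zero page detected: %d\n", candidate_is_zero_page(page));
	return 0;
}
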
> A sysfs entry is also added to toggle this behaviour, since it can
> potentially introduce performance regressions, especially on
> architectures without coloured zero pages. The default value is
> disabled, for backwards compatibility.
>
> With this patch, the performance with KSM is the same as with actual
> zero pages whose COW has not been broken, which is also the same as
> without KSM.
>
> ...
>
> @@ -2233,6 +2267,28 @@ static ssize_t merge_across_nodes_store(struct kobject *kobj,
> KSM_ATTR(merge_across_nodes);
> #endif
>
> +static ssize_t use_zero_pages_show(struct kobject *kobj,
> + struct kobj_attribute *attr, char *buf)
> +{
> + return sprintf(buf, "%u\n", ksm_use_zero_pages);
> +}
> +static ssize_t use_zero_pages_store(struct kobject *kobj,
> + struct kobj_attribute *attr,
> + const char *buf, size_t count)
> +{
> + int err;
> + bool value;
> +
> + err = kstrtobool(buf, &value);
> + if (err)
> + return -EINVAL;
> +
> + ksm_use_zero_pages = value;
> +
> + return count;
> +}
> +KSM_ATTR(use_zero_pages);
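
Assuming the attribute ends up under the usual KSM sysfs directory, user
space would presumably toggle it by writing 0 or 1 to
/sys/kernel/mm/ksm/use_zero_pages (the path is inferred from the
KSM_ATTR() registration above, not taken from the patch).  A tiny
illustrative C snippet:

#include <stdio.h>

int main(void)
{
	/* Path assumed from the KSM_ATTR() registration above. */
	FILE *f = fopen("/sys/kernel/mm/ksm/use_zero_pages", "w");

	if (!f) {
		perror("fopen");
		return 1;
	}
	fputs("1\n", f);	/* kstrtobool() accepts "1"/"0", "y"/"n", ... */
	fclose(f);
	return 0;
}
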
Could you please send along an update for Documentation/vm/ksm.txt?
Be sure that it fully explains "since it can potentially introduce
performance regressions", so that our users are able to understand
whether or not they should use this.