Message-Id: <20240904054132.F2DF72220083@mailuser.phl.internal>
Date: Tue, 03 Sep 2024 22:31:38 -0700
From: Stefan Roesch <shr@...kernel.io>
To: xu xin <xu.xin.sc@...il.com>
Cc: david@...hat.com, linux-kernel@...r.kernel.org, akpm@...ux-foundation.org, hughd@...gle.com, xu.xin16@....com.cn
Subject: Re: [PATCH 0/3] mm/ksm: Support controlling KSM with PID
xu xin <xu.xin.sc@...il.com> writes:
> Hi,
>
> In embedded Linux, cost considerations often leave resources such as
> CPU and memory scarce, so deploying KSM globally can be a mitigation
> strategy; this is feasible for closed systems (scenarios without an
> Internet connection). However, KSM has the side effect of increasing
> copy-on-write latency, which can be unacceptable for latency-sensitive
> applications. It can therefore be combined with the QoS of the business
> tasks to dynamically disable KSM for some already-running processes in
> real time if their QoS degrades. Although this is also beneficial for
> server/cloud operating systems, the need is more urgent on embedded
> systems than on cloud or server systems with ample memory.
In general I'd expect a different approach for embedded Linux. Evaluate
which processes benefit from KSM and only enable it for these processes.
On embedded platforms CPU is generally a scarce resource.
In addition, there is already the KSM advisor, which checks whether VMAs
have benefited from KSM sharing. If they haven't benefited, they are
skipped on the next scan. Have you evaluated this?
Simply turning KSM on and off for certain processes seems a bit
questionable. How do you determine that you have waited long enough? To
see the benefits of KSM you need at least two full scans. Are you taking
that into account?
I don't see a strong use case for implementing a second technique to
achieve something similar.