Message-Id: <20230127113932.166089-12-suzuki.poulose@arm.com>
Date: Fri, 27 Jan 2023 11:39:12 +0000
From: Suzuki K Poulose <suzuki.poulose@....com>
To: kvm@...r.kernel.org, kvmarm@...ts.linux.dev
Cc: suzuki.poulose@....com,
Alexandru Elisei <alexandru.elisei@....com>,
Andrew Jones <andrew.jones@...ux.dev>,
Christoffer Dall <christoffer.dall@....com>,
Fuad Tabba <tabba@...gle.com>,
Jean-Philippe Brucker <jean-philippe@...aro.org>,
Joey Gouly <Joey.Gouly@....com>, Marc Zyngier <maz@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Oliver Upton <oliver.upton@...ux.dev>,
Paolo Bonzini <pbonzini@...hat.com>,
Quentin Perret <qperret@...gle.com>,
Steven Price <steven.price@....com>,
Thomas Huth <thuth@...hat.com>, Will Deacon <will@...nel.org>,
Zenghui Yu <yuzenghui@...wei.com>, linux-coco@...ts.linux.dev,
kvmarm@...ts.cs.columbia.edu, linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org
Subject: [RFC kvmtool 11/31] arm64: Lock realm RAM in memory
From: Alexandru Elisei <alexandru.elisei@....com>
The RMM does not yet support paging out protected memory pages, so the VMM
must keep the entire VM memory pinned.
Use mlock2() with the MLOCK_ONFAULT flag to keep the realm pages pinned in
memory once they are faulted in. MLOCK_ONFAULT avoids pre-populating the
pages and preserves some semblance of demand paging for a realm VM.
Signed-off-by: Alexandru Elisei <alexandru.elisei@....com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@....com>
---
arm/kvm.c | 44 ++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 42 insertions(+), 2 deletions(-)
diff --git a/arm/kvm.c b/arm/kvm.c
index d51cc15d..0e40b753 100644
--- a/arm/kvm.c
+++ b/arm/kvm.c
@@ -7,6 +7,8 @@
#include "arm-common/gic.h"
+#include <sys/resource.h>
+
#include <linux/kernel.h>
#include <linux/kvm.h>
#include <linux/sizes.h>
@@ -24,6 +26,25 @@ bool kvm__arch_cpu_supports_vm(void)
return true;
}
+static void try_increase_mlock_limit(struct kvm *kvm)
+{
+ u64 size = kvm->arch.ram_alloc_size;
+ struct rlimit mlock_limit, new_limit;
+
+ if (getrlimit(RLIMIT_MEMLOCK, &mlock_limit)) {
+ perror("getrlimit(RLIMIT_MEMLOCK)");
+ return;
+ }
+
+ if (mlock_limit.rlim_cur > size)
+ return;
+
+ new_limit.rlim_cur = size;
+ new_limit.rlim_max = max((rlim_t)size, mlock_limit.rlim_max);
+ /* Requires CAP_SYS_RESOURCE capability. */
+ setrlimit(RLIMIT_MEMLOCK, &new_limit);
+}
+
void kvm__init_ram(struct kvm *kvm)
{
u64 phys_start, phys_size;
@@ -49,8 +70,27 @@ void kvm__init_ram(struct kvm *kvm)
kvm->ram_start = (void *)ALIGN((unsigned long)kvm->arch.ram_alloc_start,
SZ_2M);
- madvise(kvm->arch.ram_alloc_start, kvm->arch.ram_alloc_size,
- MADV_MERGEABLE);
+	/*
+	 * Do not merge pages if this is a Realm: a page in the realm's
+	 * stage 2 cannot be replaced without an export/import operation.
+	 *
+	 * For the same reason, pin the realm memory until export/import
+	 * is supported.
+	 *
+	 * Use mlock2(..., MLOCK_ONFAULT) so that pages are locked only as
+	 * they are faulted in, lazily populating the PAR.
+	 */
+ if (kvm->cfg.arch.is_realm) {
+ int ret;
+
+ try_increase_mlock_limit(kvm);
+ ret = mlock2(kvm->arch.ram_alloc_start, kvm->arch.ram_alloc_size,
+ MLOCK_ONFAULT);
+ if (ret)
+ die_perror("mlock2");
+ } else {
+ madvise(kvm->arch.ram_alloc_start, kvm->arch.ram_alloc_size, MADV_MERGEABLE);
+ }
madvise(kvm->arch.ram_alloc_start, kvm->arch.ram_alloc_size,
MADV_HUGEPAGE);
--
2.34.1