Message-ID: <fac0bdb1cf9d19ef818a7c4a470ac000e2e62dc1.1747686021.git.lorenzo.stoakes@oracle.com>
Date: Mon, 19 May 2025 21:52:42 +0100
From: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: "Liam R . Howlett" <Liam.Howlett@...cle.com>,
David Hildenbrand <david@...hat.com>, Vlastimil Babka <vbabka@...e.cz>,
Jann Horn <jannh@...gle.com>, Arnd Bergmann <arnd@...db.de>,
Christian Brauner <brauner@...nel.org>, linux-mm@...ck.org,
linux-arch@...r.kernel.org, linux-kernel@...r.kernel.org,
SeongJae Park <sj@...nel.org>, Usama Arif <usamaarif642@...il.com>
Subject: [RFC PATCH 5/5] mm/madvise: add PMADV_ENTIRE_ADDRESS_SPACE process_madvise() flag
For convenience, add the ability to specify that the madvise() behaviour
should be applied to the entire address space.
This is best used with PMADV_SKIP_ERRORS (which implies
PMADV_NO_ERROR_ON_UNMAPPED) to skip over any VMAs to which the behaviour
does not apply.
When this flag is specified, the input vec and vlen parameters must be set
to NULL and -1 respectively, as the user is requesting to perform the
action on the entire address space, e.g.:
process_madvise(PIDFD_SELF, NULL, -1, MADV_HUGEPAGE,
PMADV_ENTIRE_ADDRESS_SPACE | PMADV_SKIP_ERRORS);
This can be used in conjunction with PMADV_SET_FORK_EXEC_DEFAULT both to
apply an madvise() behaviour to all VMAs in the process address space and
to set the relevant VMA flags by default for any new mappings, e.g.:
process_madvise(PIDFD_SELF, NULL, -1, MADV_HUGEPAGE,
PMADV_ENTIRE_ADDRESS_SPACE | PMADV_SKIP_ERRORS |
PMADV_SET_FORK_EXEC_DEFAULT);
This is useful for ensuring that the behaviour in question is consistently
applied everywhere.
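For illustration, below is a minimal userspace sketch (not part of this
patch): it assumes the whole series is applied and recent kernel headers
providing SYS_process_madvise / SYS_pidfd_open, mirrors the PMADV_* values
from mman-common.h, and uses pidfd_open() on the current process via raw
syscalls in place of the PIDFD_SELF convenience shown above, since libc
wrappers will not yet know about the new flags:

  #define _GNU_SOURCE
  #include <stdio.h>
  #include <sys/mman.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  /* Values mirror the PMADV_* definitions added by this series. */
  #ifndef PMADV_SKIP_ERRORS
  #define PMADV_SKIP_ERRORS		(1U << 0)
  #endif
  #ifndef PMADV_ENTIRE_ADDRESS_SPACE
  #define PMADV_ENTIRE_ADDRESS_SPACE	(1U << 3)
  #endif

  int main(void)
  {
  	int pidfd = syscall(SYS_pidfd_open, getpid(), 0);
  	long ret;

  	if (pidfd < 0) {
  		perror("pidfd_open");
  		return 1;
  	}

  	/* NULL vec and -1 vlen as required by PMADV_ENTIRE_ADDRESS_SPACE. */
  	ret = syscall(SYS_process_madvise, pidfd, NULL, (size_t)-1,
  		      MADV_HUGEPAGE,
  		      PMADV_ENTIRE_ADDRESS_SPACE | PMADV_SKIP_ERRORS);
  	if (ret < 0)
  		perror("process_madvise");

  	close(pidfd);
  	return ret < 0;
  }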
Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
---
include/uapi/asm-generic/mman-common.h | 1 +
mm/madvise.c | 23 +++++++++++++++++++----
2 files changed, 20 insertions(+), 4 deletions(-)
diff --git a/include/uapi/asm-generic/mman-common.h b/include/uapi/asm-generic/mman-common.h
index 6998ea0ecc6d..3d523db2f100 100644
--- a/include/uapi/asm-generic/mman-common.h
+++ b/include/uapi/asm-generic/mman-common.h
@@ -95,5 +95,6 @@
#define PMADV_SKIP_ERRORS (1U << 0) /* Skip VMAs on errors, but carry on. Implies no error on unmapped. */
#define PMADV_NO_ERROR_ON_UNMAPPED (1U << 1) /* Never report an error on unmapped ranges. */
#define PMADV_SET_FORK_EXEC_DEFAULT (1U << 2) /* Set the behavior as a default that survives fork/exec. */
+#define PMADV_ENTIRE_ADDRESS_SPACE (1U << 3) /* Ignore input iovec and apply to entire address space. */
#endif /* __ASM_GENERIC_MMAN_COMMON_H */
diff --git a/mm/madvise.c b/mm/madvise.c
index 9ea36800de3c..0fb8cd7fdc7a 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -1992,7 +1992,7 @@ static ssize_t vector_madvise(struct mm_struct *mm, struct iov_iter *iter,
static bool check_process_madvise_flags(unsigned int flags)
{
unsigned int mask = PMADV_SKIP_ERRORS | PMADV_NO_ERROR_ON_UNMAPPED |
- PMADV_SET_FORK_EXEC_DEFAULT;
+ PMADV_SET_FORK_EXEC_DEFAULT | PMADV_ENTIRE_ADDRESS_SPACE;
if (flags & ~mask)
return false;
@@ -2010,15 +2010,30 @@ SYSCALL_DEFINE5(process_madvise, int, pidfd, const struct iovec __user *, vec,
struct task_struct *task;
struct mm_struct *mm;
unsigned int f_flags;
+ bool entire_address_space = flags & PMADV_ENTIRE_ADDRESS_SPACE;
if (!check_process_madvise_flags(flags)) {
ret = -EINVAL;
goto out;
}
- ret = import_iovec(ITER_DEST, vec, vlen, ARRAY_SIZE(iovstack), &iov, &iter);
- if (ret < 0)
- goto out;
+ if (entire_address_space) {
+	/* The user must pass vec == NULL and vlen == (size_t)-1. */
+ if (vec != NULL || vlen != (size_t)-1)
+ return -EINVAL;
+
+ /*
+ * Ignore the input and simply add a single entry spanning the
+ * whole address space.
+ */
+ iovstack[0].iov_base = 0;
+ iovstack[0].iov_len = TASK_SIZE_MAX;
+	iov_iter_init(&iter, ITER_DEST, iov, 1, iovstack[0].iov_len);
+ } else {
+ ret = import_iovec(ITER_DEST, vec, vlen, ARRAY_SIZE(iovstack), &iov, &iter);
+ if (ret < 0)
+ goto out;
+ }
task = pidfd_get_task(pidfd, &f_flags);
if (IS_ERR(task)) {
--
2.49.0