Message-ID: <20241210143114.661252-4-cmllamas@google.com>
Date: Tue, 10 Dec 2024 14:30:59 +0000
From: Carlos Llamas <cmllamas@...gle.com>
To: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
"Arve Hjønnevåg" <arve@...roid.com>, Todd Kjos <tkjos@...roid.com>, Martijn Coenen <maco@...roid.com>,
Joel Fernandes <joel@...lfernandes.org>, Christian Brauner <brauner@...nel.org>,
Carlos Llamas <cmllamas@...gle.com>, Suren Baghdasaryan <surenb@...gle.com>
Cc: linux-kernel@...r.kernel.org, kernel-team@...roid.com,
Nhat Pham <nphamcs@...il.com>, Johannes Weiner <hannes@...xchg.org>
Subject: [PATCH v7 3/9] binder: select correct nid for pages in LRU

The NUMA node id for binder pages is currently derived from the lru
entry embedded in struct binder_lru_page. However, this object does not
reflect the node id of the struct page items, which are allocated
separately. Instead, select the correct node id from the page itself.
This is possible since commit 0a97c01cd20b ("list_lru: allow explicit
memcg and NUMA node selection").

Cc: Nhat Pham <nphamcs@...il.com>
Cc: Johannes Weiner <hannes@...xchg.org>
Cc: Suren Baghdasaryan <surenb@...gle.com>
Reviewed-by: Suren Baghdasaryan <surenb@...gle.com>
Signed-off-by: Carlos Llamas <cmllamas@...gle.com>
---
 drivers/android/binder_alloc.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)
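
Note: as a rough illustration of the API difference (not part of the
change itself, and assuming the post-0a97c01cd20b prototypes from
<linux/list_lru.h>), list_lru_add_obj() infers the node from the
address of the list_head, i.e. from the binder_lru_page metadata,
whereas list_lru_add() takes the node explicitly, so the node of the
page that actually backs the buffer can be passed. The helper name
below is hypothetical:

	#include <linux/list_lru.h>
	#include <linux/mm.h>
	#include "binder_alloc.h"	/* struct binder_lru_page */

	/*
	 * Illustrative sketch only: the _obj variant derives the node
	 * from the tracking object that embeds the list_head, which
	 * may live on a different node than the page it describes.
	 */
	static void freelist_add_sketch(struct list_lru *lru,
					struct binder_lru_page *page)
	{
		/* node of the metadata embedding the list_head: */
		/* list_lru_add_obj(lru, &page->lru); */

		/* node of the struct page backing the buffer: */
		list_lru_add(lru, &page->lru,
			     page_to_nid(page->page_ptr), NULL);
	}
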
diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index f26283c2c768..1f02bec78451 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -210,7 +210,10 @@ static void binder_lru_freelist_add(struct binder_alloc *alloc,
 
 		trace_binder_free_lru_start(alloc, index);
 
-		ret = list_lru_add_obj(&binder_freelist, &page->lru);
+		ret = list_lru_add(&binder_freelist,
+				   &page->lru,
+				   page_to_nid(page->page_ptr),
+				   NULL);
 		WARN_ON(!ret);
 
 		trace_binder_free_lru_end(alloc, index);
@@ -334,7 +337,10 @@ static void binder_lru_freelist_del(struct binder_alloc *alloc,
 		if (page->page_ptr) {
 			trace_binder_alloc_lru_start(alloc, index);
 
-			on_lru = list_lru_del_obj(&binder_freelist, &page->lru);
+			on_lru = list_lru_del(&binder_freelist,
+					      &page->lru,
+					      page_to_nid(page->page_ptr),
+					      NULL);
 			WARN_ON(!on_lru);
 
 			trace_binder_alloc_lru_end(alloc, index);
@@ -947,8 +953,10 @@ void binder_alloc_deferred_release(struct binder_alloc *alloc)
 		if (!alloc->pages[i].page_ptr)
 			continue;
 
-		on_lru = list_lru_del_obj(&binder_freelist,
-					  &alloc->pages[i].lru);
+		on_lru = list_lru_del(&binder_freelist,
+				      &alloc->pages[i].lru,
+				      page_to_nid(alloc->pages[i].page_ptr),
+				      NULL);
 		binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC,
 				   "%s: %d: page %d %s\n",
 				   __func__, alloc->pid, i,
--
2.47.0.338.g60cca15819-goog