Message-ID: <157019456205.3142.3369423180908482020.stgit@buzz>
Date: Fri, 04 Oct 2019 16:09:22 +0300
From: Konstantin Khlebnikov <khlebnikov@...dex-team.ru>
To: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>
Cc: Michal Hocko <mhocko@...e.com>, linux-kernel@...r.kernel.org,
Matthew Wilcox <willy@...radead.org>
Subject: [PATCH v2] mm/swap: piggyback lru_add_drain_all() calls
lru_add_drain_all() is a very slow operation. There is no reason to do it
again if somebody else has already drained all per-cpu vectors while we
waited for the lock.

Piggyback on a drain that started and finished while we waited for the lock:
all pages that were pending at the time we entered have already been drained
from the vectors.

Callers like POSIX_FADV_DONTNEED retry their operations once after draining
the per-cpu vectors when pages have unexpected references.
Signed-off-by: Konstantin Khlebnikov <khlebnikov@...dex-team.ru>
---
mm/swap.c | 16 +++++++++++++++-
1 file changed, 15 insertions(+), 1 deletion(-)
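
For reviewers, a minimal stand-alone sketch of the piggyback idea: plain
user-space C, with pthreads and GCC __atomic builtins standing in for the
kernel mutex and the latched seqcount. Every name below is made up for the
example and is not part of this patch.

/*
 * Sketch of the piggyback pattern: snapshot a generation counter before
 * sleeping on the lock, and skip the expensive drain if another caller
 * completed one after our snapshot while we waited.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t drain_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long drain_gen;	/* bumped at the start of every full drain */

static void expensive_drain(void)
{
	/* stands in for queueing and flushing the per-cpu work items */
	puts("draining all per-cpu vectors");
}

void drain_all(void)
{
	/* snapshot the generation before sleeping on the lock */
	unsigned long seen = __atomic_load_n(&drain_gen, __ATOMIC_ACQUIRE);

	pthread_mutex_lock(&drain_lock);

	/*
	 * If the generation moved on while we waited, a full drain began
	 * after our snapshot and has already finished (its owner released
	 * the lock we now hold), so every page that was pending when we
	 * entered has been flushed and there is nothing left for us to do.
	 */
	if (__atomic_load_n(&drain_gen, __ATOMIC_RELAXED) != seen)
		goto done;

	__atomic_fetch_add(&drain_gen, 1, __ATOMIC_RELEASE);
	expensive_drain();
done:
	pthread_mutex_unlock(&drain_lock);
}

The patch below has the same snapshot/lock/compare/skip shape: it snapshots
with raw_read_seqcount_latch() before mutex_lock(), checks the snapshot with
__read_seqcount_retry() once the lock is held, and bumps the counter with
raw_write_seqcount_latch() before doing the real drain.
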
diff --git a/mm/swap.c b/mm/swap.c
index 38c3fa4308e2..5ba948a9d82a 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -708,9 +708,10 @@ static void lru_add_drain_per_cpu(struct work_struct *dummy)
  */
 void lru_add_drain_all(void)
 {
+        static seqcount_t seqcount = SEQCNT_ZERO(seqcount);
         static DEFINE_MUTEX(lock);
         static struct cpumask has_work;
-        int cpu;
+        int cpu, seq;
 
         /*
          * Make sure nobody triggers this path before mm_percpu_wq is fully
@@ -719,7 +720,19 @@ void lru_add_drain_all(void)
         if (WARN_ON(!mm_percpu_wq))
                 return;
 
+        seq = raw_read_seqcount_latch(&seqcount);
+
         mutex_lock(&lock);
+
+        /*
+         * Piggyback on drain started and finished while we waited for lock:
+         * all pages pended at the time of our enter were drained from vectors.
+         */
+        if (__read_seqcount_retry(&seqcount, seq))
+                goto done;
+
+        raw_write_seqcount_latch(&seqcount);
+
         cpumask_clear(&has_work);
 
         for_each_online_cpu(cpu) {
@@ -740,6 +753,7 @@ void lru_add_drain_all(void)
         for_each_cpu(cpu, &has_work)
                 flush_work(&per_cpu(lru_add_drain_work, cpu));
 
+done:
         mutex_unlock(&lock);
 }
 #else