Message-ID: <c1617cff-847f-4cbf-d314-0382a3e9233d@yandex-team.ru>
Date: Fri, 4 Oct 2019 16:32:39 +0300
From: Konstantin Khlebnikov <khlebnikov@...dex-team.ru>
To: Michal Hocko <mhocko@...nel.org>
Cc: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org, Matthew Wilcox <willy@...radead.org>
Subject: Re: [PATCH v2] mm/swap: piggyback lru_add_drain_all() calls
On 04/10/2019 16.12, Michal Hocko wrote:
> On Fri 04-10-19 16:09:22, Konstantin Khlebnikov wrote:
>> This is a very slow operation. There is no reason to do it again if
>> somebody else has already drained all per-cpu vectors while we waited
>> for the lock.
>>
>> Piggyback on a drain that started and finished while we waited for the
>> lock: all pages pending at the time we entered have already been drained
>> from the vectors.
>>
>> Callers like POSIX_FADV_DONTNEED retry their operations once after
>> draining per-cpu vectors when pages have unexpected references.
>
> This describes why we need to wait for preexisting pages on the pvecs but
> the changelog doesn't say anything about the improvements this leads to.
> In other words, what kind of workloads benefit from it?
Right now POSIX_FADV_DONTNEED is the top user, because it has to freeze the
page reference while removing it from the cache. invalidate_bdev calls it
for the same reason. Both are triggered from userspace, so it's easy to
generate a storm of them.
mlock/mlockall no longer call lru_add_drain_all - I have seen a serious
slowdown from this on older kernels.
There are also some less obvious paths in memory migration/CMA/offlining
which shouldn't be called frequently anyway.
>
>> Signed-off-by: Konstantin Khlebnikov <khlebnikov@...dex-team.ru>
>> ---
>> mm/swap.c | 16 +++++++++++++++-
>> 1 file changed, 15 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/swap.c b/mm/swap.c
>> index 38c3fa4308e2..5ba948a9d82a 100644
>> --- a/mm/swap.c
>> +++ b/mm/swap.c
>> @@ -708,9 +708,10 @@ static void lru_add_drain_per_cpu(struct work_struct *dummy)
>> */
>> void lru_add_drain_all(void)
>> {
>> + static seqcount_t seqcount = SEQCNT_ZERO(seqcount);
>> static DEFINE_MUTEX(lock);
>> static struct cpumask has_work;
>> - int cpu;
>> + int cpu, seq;
>>
>> /*
>> * Make sure nobody triggers this path before mm_percpu_wq is fully
>> @@ -719,7 +720,19 @@ void lru_add_drain_all(void)
>> if (WARN_ON(!mm_percpu_wq))
>> return;
>>
>> + seq = raw_read_seqcount_latch(&seqcount);
>> +
>> mutex_lock(&lock);
>> +
>> + /*
>> + * Piggyback on drain started and finished while we waited for lock:
>> + * all pages pended at the time of our enter were drained from vectors.
>> + */
>> + if (__read_seqcount_retry(&seqcount, seq))
>> + goto done;
>> +
>> + raw_write_seqcount_latch(&seqcount);
>> +
>> cpumask_clear(&has_work);
>>
>> for_each_online_cpu(cpu) {
>> @@ -740,6 +753,7 @@ void lru_add_drain_all(void)
>> for_each_cpu(cpu, &has_work)
>> flush_work(&per_cpu(lru_add_drain_work, cpu));
>>
>> +done:
>> mutex_unlock(&lock);
>> }
>> #else
>>
>
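
For reference, the piggyback logic above boils down to a generation counter
sampled before taking the mutex. Below is a minimal standalone sketch of the
same idea in userspace C - illustrative names only, with plain C11 atomics
and pthreads standing in for the kernel's seqcount_t and mutex API:

#include <pthread.h>
#include <stdatomic.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static atomic_uint generation;	/* bumped under 'lock' before each full drain */

static void do_full_drain(void)
{
	/* stand-in for flushing the per-cpu vectors */
}

void drain_all(void)
{
	/* Sample the generation before waiting for the lock. */
	unsigned int seen = atomic_load(&generation);

	pthread_mutex_lock(&lock);

	/*
	 * The counter is only bumped with the lock held, and the whole
	 * drain runs under the lock.  So if the generation moved past
	 * our sample, a full drain started after we entered and has
	 * already finished - everything pending at our entry is gone.
	 */
	if (atomic_load(&generation) != seen) {
		pthread_mutex_unlock(&lock);
		return;
	}

	atomic_fetch_add(&generation, 1);
	do_full_drain();

	pthread_mutex_unlock(&lock);
}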