Message-ID: <257f6172-971b-e0bd-0a74-30a0d143d6f9@yandex-team.ru>
Date: Fri, 4 Oct 2019 15:32:01 +0300
From: Konstantin Khlebnikov <khlebnikov@...dex-team.ru>
To: Michal Hocko <mhocko@...nel.org>,
Matthew Wilcox <willy@...radead.org>
Cc: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm/swap: piggyback lru_add_drain_all() calls
On 04/10/2019 15.27, Michal Hocko wrote:
> On Fri 04-10-19 05:10:17, Matthew Wilcox wrote:
>> On Fri, Oct 04, 2019 at 01:11:06PM +0300, Konstantin Khlebnikov wrote:
>>> This is a very slow operation. There is no reason to do it again if somebody
>>> else has already drained all per-cpu vectors while we waited for the lock.
>>> + seq = raw_read_seqcount_latch(&seqcount);
>>> +
>>> mutex_lock(&lock);
>>> +
>>> + /* Piggyback on drain done by somebody else. */
>>> + if (__read_seqcount_retry(&seqcount, seq))
>>> + goto done;
>>> +
>>> + raw_write_seqcount_latch(&seqcount);
>>> +
>>
>> Do we really need the seqcount to do this? Wouldn't a mutex_trylock()
>> have the same effect?
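
One way to read that suggestion, as a very rough sketch (the per-cpu drain
body is elided and the shape here is purely illustrative, not actual code):

#include <linux/mutex.h>

static DEFINE_MUTEX(lock);

void lru_add_drain_all(void)
{
        if (!mutex_trylock(&lock)) {
                /*
                 * Somebody else is draining right now; just wait for
                 * them to finish instead of draining again ourselves.
                 */
                mutex_lock(&lock);
                mutex_unlock(&lock);
                return;
        }

        /* ... schedule and flush per-cpu drain works here ... */

        mutex_unlock(&lock);
}
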
>
> Yeah, this makes sense. From a correctness point of view it should be ok
> because no caller can expect that per-cpu pvecs are empty on return.
> This might have some runtime effects in that some paths might retry more -
> e.g. the offlining path drains per-cpu pvecs before migrating the range
> away; if there are pages still waiting for a worker to drain them then the
> migration would fail and we would retry. But this is not a correctness
> issue.
>
A caller might expect that the pages it added beforehand are drained.
Bailing out when mutex_trylock() fails will not guarantee that.
For example, POSIX_FADV_DONTNEED relies on this.
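
For reference, the seqcount scheme keeps that guarantee because the sequence
is sampled before we start waiting on the mutex, so the drain is only skipped
when a full drain both started and finished after our pages were already in
the per-cpu vectors. A minimal sketch of the idea (the per-cpu work
scheduling is elided; this only illustrates the hunks quoted above, not the
complete patch):

#include <linux/mutex.h>
#include <linux/seqlock.h>

void lru_add_drain_all(void)
{
        static seqcount_t seqcount = SEQCNT_ZERO(seqcount);
        static DEFINE_MUTEX(lock);
        int seq;

        seq = raw_read_seqcount_latch(&seqcount);

        mutex_lock(&lock);

        /*
         * Piggyback on a drain that started and finished while we were
         * waiting for the lock: it covers every page that was already
         * sitting in the per-cpu vectors when we entered.
         */
        if (__read_seqcount_retry(&seqcount, seq))
                goto done;

        raw_write_seqcount_latch(&seqcount);

        /* ... schedule and flush per-cpu drain works here ... */

done:
        mutex_unlock(&lock);
}
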