Message-ID: <20191004122727.GA10845@dhcp22.suse.cz>
Date: Fri, 4 Oct 2019 14:27:27 +0200
From: Michal Hocko <mhocko@...nel.org>
To: Matthew Wilcox <willy@...radead.org>
Cc: Konstantin Khlebnikov <khlebnikov@...dex-team.ru>,
linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm/swap: piggyback lru_add_drain_all() calls
On Fri 04-10-19 05:10:17, Matthew Wilcox wrote:
> On Fri, Oct 04, 2019 at 01:11:06PM +0300, Konstantin Khlebnikov wrote:
> > This is a very slow operation. There is no reason to do it again if somebody
> > else has already drained all per-cpu vectors after we waited for the lock.
> > +	seq = raw_read_seqcount_latch(&seqcount);
> > +
> >  	mutex_lock(&lock);
> > +
> > +	/* Piggyback on drain done by somebody else. */
> > +	if (__read_seqcount_retry(&seqcount, seq))
> > +		goto done;
> > +
> > +	raw_write_seqcount_latch(&seqcount);
> > +
>
> Do we really need the seqcount to do this? Wouldn't a mutex_trylock()
> have the same effect?
Yeah, this makes sense. From a correctness point of view it should be fine
because no caller can expect the per-cpu pvecs to be empty on return.
It might have the runtime effect that some paths retry more often -
e.g. the offlining path drains the per-cpu pvecs before migrating the range
away; if there are pages still waiting for a worker to drain them, the
migration fails and we retry. But that is not a correctness issue.
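
A minimal sketch of the mutex_trylock() variant being discussed, just to make
the idea concrete. The mutex name and the do_lru_add_drain_all_cpus() helper
are illustrative placeholders rather than the actual mm/swap.c code:

	static DEFINE_MUTEX(drain_lock);

	/*
	 * Hypothetical helper standing in for the existing body of
	 * lru_add_drain_all(): queue lru_add_drain_per_cpu() work on each
	 * CPU with pending pages and flush it.
	 */
	static void do_lru_add_drain_all_cpus(void);

	void lru_add_drain_all(void)
	{
		if (!mutex_trylock(&drain_lock)) {
			/*
			 * Somebody else is draining right now. Wait for that
			 * drain to finish and return without scheduling
			 * another round of per-cpu workers. This is fine
			 * because no caller can expect the per-cpu pvecs to
			 * be empty on return anyway.
			 */
			mutex_lock(&drain_lock);
			mutex_unlock(&drain_lock);
			return;
		}

		do_lru_add_drain_all_cpus();
		mutex_unlock(&drain_lock);
	}

The difference from the seqcount version is only in how much of the concurrent
drain we piggyback on: pages queued after the concurrent drainer took its
snapshot may still sit in the pvecs, which is exactly the "retry more" effect
for the offlining path mentioned above.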
--
Michal Hocko
SUSE Labs