Message-ID: <20200528171141.k3lc3mf7taqadv3v@ca-dmjordan1.us.oracle.com>
Date: Thu, 28 May 2020 13:11:41 -0400
From: Daniel Jordan <daniel.m.jordan@...cle.com>
To: "Huang, Ying" <ying.huang@...el.com>
Cc: Daniel Jordan <daniel.m.jordan@...cle.com>,
Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Michal Hocko <mhocko@...e.com>,
Minchan Kim <minchan@...nel.org>,
Tim Chen <tim.c.chen@...ux.intel.com>,
Hugh Dickins <hughd@...gle.com>
Subject: Re: [PATCH -V3] swap: Reduce lock contention on swap cache from swap
slots allocation
On Thu, May 28, 2020 at 01:32:40PM +0800, Huang, Ying wrote:
> Daniel Jordan <daniel.m.jordan@...cle.com> writes:
>
> > On Mon, May 25, 2020 at 08:26:48AM +0800, Huang Ying wrote:
> >> diff --git a/mm/swapfile.c b/mm/swapfile.c
> >> index 423c234aca15..0abd93d2a4fc 100644
> >> --- a/mm/swapfile.c
> >> +++ b/mm/swapfile.c
> >> @@ -615,7 +615,8 @@ static bool scan_swap_map_try_ssd_cluster(struct swap_info_struct *si,
> >> * discarding, do discard now and reclaim them
> >> */
> >> swap_do_scheduled_discard(si);
> >> - *scan_base = *offset = si->cluster_next;
> >> + *scan_base = this_cpu_read(*si->cluster_next_cpu);
> >> + *offset = *scan_base;
> >> goto new_cluster;
> >
> > Why is this done? As far as I can tell, the values always get overwritten at
> > the end of the function with tmp and tmp isn't derived from them. Seems
> > ebc2a1a69111 moved some logic that used to make sense but doesn't have any
> > effect now.
>
> If we fail to allocate from the cluster, "scan_base" and "offset" will not
> be overwritten.
Ok, so this matters if another task races to allocate the clusters the first
one just discarded.
> And "cluster_next" or "cluster_next_cpu" may be changed
> in swap_do_scheduled_discard(), because the lock is released and
> re-acquired there.
I see, changed by another task on the same CPU in the cluster_next_cpu case.
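To spell out the interleaving I have in mind (just an illustration of how I
read it, not code from the patch; only the assignments and the goto are from
the diff above):

	/*
	 * task A (this allocation path)      task B (same CPU)
	 * -----------------------------      -----------------
	 * swap_do_scheduled_discard(si)
	 *     drops si->lock
	 *                                     allocates swap slots,
	 *                                     advances *cluster_next_cpu
	 *     re-acquires si->lock
	 */
	*scan_base = this_cpu_read(*si->cluster_next_cpu); /* resume where B left off */
	*offset = *scan_base;
	goto new_cluster;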
Both races are probably unlikely, but at least this way the allocation tries to
pick up where the racing task left off.  You might tack this onto the comment:
 * discarding, do discard now and reclaim them, then reread
 * cluster_next_cpu since we dropped si->lock
 */
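i.e. with the patch applied, the hunk would read roughly like this (the comment
lines above the changed tail are elided since they aren't in the diff):

	/*
	 * ...
	 * discarding, do discard now and reclaim them, then reread
	 * cluster_next_cpu since we dropped si->lock
	 */
	swap_do_scheduled_discard(si);
	*scan_base = this_cpu_read(*si->cluster_next_cpu);
	*offset = *scan_base;
	goto new_cluster;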
> The code may not have much value.
No, it makes sense.
> > These aside, patch looks good to me.
>
> Thanks for your review! It really helps me to improve the quality of the
> patch. Can I add your "Reviewed-by" in the next version?
Sure,
Reviewed-by: Daniel Jordan <daniel.m.jordan@...cle.com>