Message-ID: <20200918210126.GA1118730@google.com>
Date: Fri, 18 Sep 2020 15:01:26 -0600
From: Yu Zhao <yuzhao@...gle.com>
To: Hugh Dickins <hughd@...gle.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Michal Hocko <mhocko@...nel.org>,
Alex Shi <alex.shi@...ux.alibaba.com>,
Steven Rostedt <rostedt@...dmis.org>,
Ingo Molnar <mingo@...hat.com>,
Johannes Weiner <hannes@...xchg.org>,
Vladimir Davydov <vdavydov.dev@...il.com>,
Roman Gushchin <guro@...com>,
Shakeel Butt <shakeelb@...gle.com>,
Chris Down <chris@...isdown.name>,
Yafang Shao <laoar.shao@...il.com>,
Vlastimil Babka <vbabka@...e.cz>,
Huang Ying <ying.huang@...el.com>,
Pankaj Gupta <pankaj.gupta.linux@...il.com>,
Matthew Wilcox <willy@...radead.org>,
Konstantin Khlebnikov <koct9i@...il.com>,
Minchan Kim <minchan@...nel.org>,
Jaewon Kim <jaewon31.kim@...sung.com>, cgroups@...r.kernel.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 00/13] mm: clean up some lru related pieces
On Fri, Sep 18, 2020 at 01:46:59PM -0700, Hugh Dickins wrote:
> On Thu, 17 Sep 2020, Yu Zhao wrote:
>
> > Hi Andrew,
> >
> > I see you have taken this:
> > mm: use add_page_to_lru_list()/page_lru()/page_off_lru()
> > Do you mind dropping it?
> >
> > Michal asked to do a bit of additional work. So I thought I probably
> > should create a series to do more cleanups I've been meaning to.
> >
> > This series contains the change in the patch above and goes a few
> > steps further. It's intended to improve readability and should
> > not have any performance impacts. There are minor behavior changes in
> > terms of debugging and error reporting, which I have all highlighted
> > in the individual patches. All patches were properly tested on 5.8
> > running Chrome OS, with various debug options turned on.
> >
> > Michal,
> >
> > Do you mind taking a look at the entire series?
> >
> > Thank you.
> >
> > Yu Zhao (13):
> > mm: use add_page_to_lru_list()
> > mm: use page_off_lru()
> > mm: move __ClearPageLRU() into page_off_lru()
> > mm: shuffle lru list addition and deletion functions
> > mm: don't pass enum lru_list to lru list addition functions
> > mm: don't pass enum lru_list to trace_mm_lru_insertion()
> > mm: don't pass enum lru_list to del_page_from_lru_list()
> > mm: rename page_off_lru() to __clear_page_lru_flags()
> > mm: inline page_lru_base_type()
> > mm: VM_BUG_ON lru page flags
> > mm: inline __update_lru_size()
> > mm: make lruvec_lru_size() static
> > mm: enlarge the int parameter of update_lru_size()
> >
> > include/linux/memcontrol.h | 14 ++--
> > include/linux/mm_inline.h | 115 ++++++++++++++-------------------
> > include/linux/mmzone.h | 2 -
> > include/linux/vmstat.h | 2 +-
> > include/trace/events/pagemap.h | 11 ++--
> > mm/compaction.c | 2 +-
> > mm/memcontrol.c | 10 +--
> > mm/mlock.c | 2 +-
> > mm/swap.c | 53 ++++++---------
> > mm/vmscan.c | 28 +++-----
> > 10 files changed, 95 insertions(+), 144 deletions(-)
> >
> > --
> > 2.28.0.681.g6f77f65b4e-goog
>
> Sorry, Yu, I may be out-of-line in sending this: but as you know,
> Alex Shi has a long per-memcg lru_lock series playing in much the
> same area (particularly conflicting in mm/swap.c and mm/vmscan.c):
> a patchset that makes useful changes, that I'm very keen to help
> into mmotm a.s.a.p (but not before I've completed diligence).
>
> We've put a lot of effort into its testing, I'm currently reviewing
> it patch by patch (my general silence indicating that I'm busy on that,
> but slow as ever): so I'm a bit discouraged to have its stability
> potentially undermined by conflicting cleanups at this stage.
>
> If there's general agreement that your cleanups are safe and welcome
> (Michal's initial reaction sheds some doubt on that), great: I hope
> that Andrew can fast-track them into mmotm, then Alex rebase on top
> of them, and I then re-test and re-review.
>
> But if that quick agreement is not forthcoming, may I ask you please
> to hold back, and resend based on top of Alex's next posting?
The per-memcg lru lock series seems a higher priority, and I have
absolutely no problem accommodating your request.
In return, may I ask you or Alex to review this series after you
have finished with per-memcg lru lock (to make sure that I resolve
all the conflicts correctly at least)?