Message-ID: <cc6a3125-779c-13c0-bec3-dc92deab19f6@linux.alibaba.com>
Date: Fri, 6 Mar 2020 21:30:24 +0800
From: Alex Shi <alex.shi@...ux.alibaba.com>
To: Hugh Dickins <hughd@...gle.com>, Qian Cai <cai@....pw>
Cc: Matthew Wilcox <willy@...radead.org>,
LKML <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>, aarcange@...hat.com,
daniel.m.jordan@...cle.com, hannes@...xchg.org,
khlebnikov@...dex-team.ru, kirill@...temov.name,
kravetz@...ibm.com, mhocko@...nel.org, mm-commits@...r.kernel.org,
tj@...nel.org, vdavydov.dev@...il.com, yang.shi@...ux.alibaba.com,
linux-mm@...ck.org
Subject: Re: [failures] mm-vmscan-remove-unnecessary-lruvec-adding.patch
removed from -mm tree
On 2020/3/6 12:17 PM, Hugh Dickins wrote:
>>>
>>> Subject: Re: [PATCH v9 00/21] per lruvec lru_lock for memcg
>>
>> I don’t see it on lore.kernel or anywhere. Private email?
>
> You're right, sorry I didn't notice, lots of ccs but
> neither lkml nor linux-mm were on that thread from the start:
My fault. I thought people would mostly give comments on the individual patches; I will take care of this from now on.
>
> And now the bad news.
>
> Andrew, please revert those six (or seven as they ended up in mmotm).
> 5.6-rc4-mm1 without them runs my tmpfs+loop+swapping+memcg+ksm kernel
> build loads fine (did four hours just now), but 5.6-rc4-mm1 itself
> crashed just after starting - seconds or minutes I didn't see,
> but it did not complete an iteration.
>
> I thought maybe those six would be harmless (though I've not looked
> at them at all); but knew already that the full series is not good yet:
> I gave it a try over 5.6-rc4 on Monday, and crashed very soon on simpler
> testing, in different ways from what hits mmotm.
>
> The first thing wrong with the full set was when I tried tmpfs+loop+
> swapping kernel builds in "mem=700M cgroup_disabled=memory", of course
> with CONFIG_DEBUG_LIST=y. That soon collapsed in a splurge of OOM kills
> and list_del corruption messages: __list_del_entry_valid < list_del <
> __page_cache_release < __put_page < put_page < __try_to_reclaim_swap <
> free_swap_and_cache < shmem_free_swap < shmem_undo_range.
I have been running kernel builds in a "mem=700M cgroup_disabled=memory" qemu-kvm
guest with a swapfile for 3 hours now, hoping to catch something while waiting for
your reproduction scripts. Thanks, Hugh!
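
Meanwhile, to be sure I understand the failure mode CONFIG_DEBUG_LIST is
reporting, here is a minimal userspace analogue (an illustration only, with
poison values mirroring the kernel's; this is not the real
__list_del_entry_valid()): deleting the same entry twice, as when a page is
freed while still linked on an LRU list, trips exactly this kind of
"list_del corruption" warning.

#include <stdio.h>

struct node { struct node *next, *prev; };

#define LIST_POISON1 ((struct node *)0x100)	/* written by list_del() */
#define LIST_POISON2 ((struct node *)0x122)

static int list_del_entry_valid(struct node *entry)
{
	if (entry->next == LIST_POISON1 || entry->prev == LIST_POISON2) {
		fprintf(stderr, "list_del corruption: entry already deleted\n");
		return 0;
	}
	if (entry->prev->next != entry || entry->next->prev != entry) {
		fprintf(stderr, "list_del corruption: neighbours don't point back\n");
		return 0;
	}
	return 1;
}

static void list_del(struct node *entry)
{
	if (!list_del_entry_valid(entry))
		return;
	entry->prev->next = entry->next;
	entry->next->prev = entry->prev;
	entry->next = LIST_POISON1;	/* poison so a second delete is caught */
	entry->prev = LIST_POISON2;
}

int main(void)
{
	struct node head = { &head, &head };	/* empty circular list */
	struct node a = { &head, &head };

	head.next = head.prev = &a;		/* link a into the list */
	list_del(&a);				/* fine */
	list_del(&a);				/* double delete: warning fires */
	return 0;
}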
>
> When I next tried with "mem=1G" and memcg enabled (but not being used),
> that managed some iterations, no OOM kills, no list_del warnings (was
> it swapping? perhaps, perhaps not, I was trying to go easy on it just
> to see if "cgroup_disabled=memory" had been the problem); but when
> rebooting after that, again list_del corruption messages and crash
> (I didn't note them down).
>
> So I didn't take much notice of what the mmotm crash backtrace showed
> (but IIRC shmem and swap were in it).
Is there somewhere I could get the mmotm crash backtrace?
>
> Alex, I'm afraid you're focusing too much on performance results,
> without doing the basic testing needed - I thought we had given you
> some hints on the challenging areas (swapping, move_charge_at_immigrate,
> page migration) when we attached a *correctly working* 5.3 version back
> on 23rd August:
>
> https://lore.kernel.org/linux-mm/alpine.LSU.2.11.1908231736001.16920@eggly.anvils/
>
> (Correctly working, except missing two patches I'd mistakenly dropped
> as unnecessary in earlier rebases: but our discussions with Johannes
> later showed to be very necessary, though their races rarely seen.)
>
Did you mean Johannes's question about the race on page->memcg in an earlier mail?
"> I don't see what prevents the lruvec from changing under compaction,
> neither in your patches nor in Hugh's. Maybe I'm missing something?"
https://lkml.org/lkml/2019/11/22/2153
From then on, I have tried two solutions to protect page->memcg: the first
used lock_page_memcg() (which was wrong), and the second, new solution takes
the PageLRU bit as the precondition for page isolation, which may work for
memcg migration, page migration in compaction, etc. (a minimal sketch of the
idea follows below). Would you give some comments on this?
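
To make the idea concrete, here is a minimal userspace analogue (my own
illustration with made-up names, not the actual patch or a real kernel API):
only the thread that atomically clears the LRU bit may isolate the page, so
a racing isolator such as compaction backs off instead of touching a page
whose lruvec/memcg binding could change under it.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct page {
	atomic_bool on_lru;	/* stands in for the PageLRU page flag */
};

/* analogue of TestClearPageLRU(): returns true only for the single
 * caller that actually flips the bit from set to clear */
static bool test_clear_page_lru(struct page *page)
{
	return atomic_exchange(&page->on_lru, false);
}

static bool isolate_page(struct page *page, const char *who)
{
	if (!test_clear_page_lru(page)) {
		printf("%s: lost the race, page already isolated\n", who);
		return false;
	}
	/* sole owner now: page->memcg is stable, so it is safe to look
	 * up the lruvec, take its lru_lock and unlink the page */
	printf("%s: isolated the page exclusively\n", who);
	return true;
}

int main(void)
{
	struct page page;

	atomic_init(&page.on_lru, true);
	isolate_page(&page, "reclaim");		/* wins the bit */
	isolate_page(&page, "compaction");	/* backs off safely */
	return 0;
}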
> I have not had the time (and do not expect to have the time) to review
> your series: maybe it's one or two small fixes away from being complete,
> or maybe it's still fundamentally flawed, I do not know. I had naively
> hoped that you would help with a patchset that worked, rather than
> cutting it down into something which does not.
>
Sorry, Hugh, I didn't know you had a per-memcg lru_lock patchset before I sent
out my first version.
> Submitting your series to routine testing is much easier for me than
> reviewing it: but then, yes, it's a pity that I don't find the time
> to report the results on intervening versions, which also crashed.
>
> What I have to do now, is set aside time today and tomorrow, to package
> up the old scripts I use, describe them and their environment, and send
> them to you (cc akpm in case I fall under a bus): so that you can
> reproduce the crashes for yourself, and get to work on them.
>
Thanks in advance for your coming test scripts; I believe they will help a lot.
BTW, I have tried my best to organize the patches so that the series reads
straightforwardly, so a senior expert like you won't need much time to go
through the whole set and give some precious comments!
I am looking forward to hearing your comments. :)
Thanks
Alex