Date:   Thu, 5 Mar 2020 20:17:46 -0800 (PST)
From:   Hugh Dickins <hughd@...gle.com>
To:     Qian Cai <cai@....pw>
cc:     Matthew Wilcox <willy@...radead.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Andrew Morton <akpm@...ux-foundation.org>, aarcange@...hat.com,
        Alex Shi <alex.shi@...ux.alibaba.com>,
        daniel.m.jordan@...cle.com, hannes@...xchg.org, hughd@...gle.com,
        khlebnikov@...dex-team.ru, kirill@...temov.name,
        kravetz@...ibm.com, mhocko@...nel.org, mm-commits@...r.kernel.org,
        tj@...nel.org, vdavydov.dev@...il.com, yang.shi@...ux.alibaba.com,
        linux-mm@...ck.org
Subject: Re: [failures] mm-vmscan-remove-unnecessary-lruvec-adding.patch
 removed from -mm tree

On Thu, 5 Mar 2020, Qian Cai wrote:
> > On Mar 5, 2020, at 10:38 PM, Matthew Wilcox <willy@...radead.org> wrote:
> > 
> > On Thu, Mar 05, 2020 at 10:32:18PM -0500, Qian Cai wrote:
> >>> On Mar 5, 2020, at 9:50 PM, akpm@...ux-foundation.org wrote:
> >>> The patch titled
> >>>    Subject: mm/vmscan: remove unnecessary lruvec adding
> >>> has been removed from the -mm tree.  Its filename was
> >>>    mm-vmscan-remove-unnecessary-lruvec-adding.patch
> >>> 
> >>> This patch was dropped because it had testing failures
> >> 
> >> Andrew, do you have more information about this failure? I hit a bug
> >> here under memory pressure, and am wondering if this is related,
> >> which might save me some time digging…

Very likely related.

> > 
> > See Hugh's message from a few minutes ago:

Thanks, Matthew.

> > 
> > Subject: Re: [PATCH v9 00/21] per lruvec lru_lock for memcg
> 
> I don’t see it on lore.kernel.org or anywhere. Private email?

You're right; sorry, I didn't notice: lots of ccs, but
neither lkml nor linux-mm was on that thread from the start:

From hughd@...gle.com Thu Mar  5 18:16:06 2020
Date: Thu, 5 Mar 2020 18:15:40 -0800 (PST)
From: Hugh Dickins <hughd@...gle.com>
To: Andrew Morton <akpm@...ux-foundation.org>, Alex Shi <alex.shi@...ux.alibaba.com>
Cc: cgroups@...r.kernel.org, mgorman@...hsingularity.net, tj@...nel.org, hughd@...gle.com, khlebnikov@...dex-team.ru, daniel.m.jordan@...cle.com, yang.shi@...ux.alibaba.com, willy@...radead.org, hannes@...xchg.org, lkp@...el.com, Fengguang Wu <fengguang.wu@...el.com>, Rong Chen <rong.a.chen@...el.com>
Subject: Re: [PATCH v9 00/21] per lruvec lru_lock for memcg

On Tue, 3 Mar 2020, Alex Shi wrote:
> On 2020/3/3 at 6:12 AM, Andrew Morton wrote:
> >> Thanks for testing support from Intel 0day and Rong Chen, Fengguang Wu,
> >> and Yun Wang.
> > I'm not seeing a lot of evidence of review and test activity yet.  But
> > I think I'll grab patches 01-06 as they look like fairly
> > straightforward improvements.
> 
> cc Fengguang and Rong Chen
> 
> I did some local functional testing and kselftest; they all look fine.
> 0day only warns me if some case fails. Is no news good news? :)

And now the bad news.

Andrew, please revert those six (or seven, as they ended up in mmotm).
5.6-rc4-mm1 without them runs my tmpfs+loop+swapping+memcg+ksm kernel
build loads fine (did four hours just now), but 5.6-rc4-mm1 itself
crashed just after starting (within seconds or minutes, I didn't see
exactly); it did not complete a single iteration.

I thought maybe those six would be harmless (though I've not looked
at them at all); but I knew already that the full series is not good yet:
I gave it a try over 5.6-rc4 on Monday, and it crashed very soon on
simpler testing, in different ways from what hits mmotm.

The first thing to go wrong with the full set came when I tried tmpfs+loop+
swapping kernel builds in "mem=700M cgroup_disabled=memory", of course
with CONFIG_DEBUG_LIST=y. That soon collapsed in a splurge of OOM kills
and list_del corruption messages: __list_del_entry_valid < list_del <
__page_cache_release < __put_page < put_page < __try_to_reclaim_swap <
free_swap_and_cache < shmem_free_swap < shmem_undo_range.
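
(Aside, for anyone chasing the same messages: CONFIG_DEBUG_LIST makes
list_del() validate an entry's neighbouring pointers before unlinking,
which is what turns silent LRU list corruption into warnings like those
quoted above.  A minimal userspace sketch of the invariants it checks
follows; this is not the kernel's actual lib/list_debug.c, and the
poison values here ignore POISON_POINTER_DELTA.)

#include <stdbool.h>
#include <stdio.h>

struct list_head {
        struct list_head *next, *prev;
};

/* values as in include/linux/poison.h, minus POISON_POINTER_DELTA */
#define LIST_POISON1 ((struct list_head *)0x100)
#define LIST_POISON2 ((struct list_head *)0x122)

/* the kind of checks CONFIG_DEBUG_LIST applies before list_del() unlinks */
static bool list_del_entry_valid(struct list_head *entry)
{
        if (entry->next == LIST_POISON1 || entry->prev == LIST_POISON2) {
                fprintf(stderr, "list_del corruption: entry already deleted\n");
                return false;
        }
        if (entry->prev->next != entry || entry->next->prev != entry) {
                fprintf(stderr, "list_del corruption: neighbour does not point back\n");
                return false;
        }
        return true;
}

int main(void)
{
        struct list_head a, b, c;

        /* build the ring a <-> b <-> c by hand */
        a.next = &b; b.prev = &a;
        b.next = &c; c.prev = &b;
        c.next = &a; a.prev = &c;

        printf("b valid: %d\n", list_del_entry_valid(&b));     /* prints 1 */
        b.next = LIST_POISON1;  /* simulate a double list_del() */
        printf("b valid: %d\n", list_del_entry_valid(&b));     /* prints 0 */
        return 0;
}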

When I next tried with "mem=1G" and memcg enabled (but not being used),
that managed some iterations: no OOM kills, no list_del warnings (was
it swapping? perhaps, perhaps not; I was trying to go easy on it, just
to see whether "cgroup_disabled=memory" had been the problem); but on
rebooting after that, again list_del corruption messages and a crash
(I didn't note them down).

So I didn't take much notice of what the mmotm crash backtrace showed
(but IIRC shmem and swap were in it).

Alex, I'm afraid you're focusing too much on performance results,
without doing the basic testing needed.  I thought we had given you
some hints on the challenging areas (swapping, move_charge_at_immigrate,
page migration) when we attached a *correctly working* 5.3 version back
on 23rd August:

https://lore.kernel.org/linux-mm/alpine.LSU.2.11.1908231736001.16920@eggly.anvils/

(Correctly working, except for missing two patches I'd mistakenly dropped
as unnecessary in earlier rebases: our discussions with Johannes later
showed them to be very necessary, though the races they close are rarely
seen.)
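
(To spell out why those are the challenging areas: the lruvec a page
belongs to is derived from its memcg, and swapin, page migration and
move_charge_at_immigrate can each change that association while another
CPU is acquiring the lruvec's lock.  So any per-lruvec lru_lock has to
be re-validated after it is taken, roughly as in this kernel-style
sketch; the helper name lock_page_lruvec() and its exact shape are
illustrative here, not the patchset's actual code.)

static struct lruvec *lock_page_lruvec(struct page *page)
{
        struct lruvec *lruvec;

again:
        /* look up the lruvec from the page's current memcg */
        lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
        spin_lock_irq(&lruvec->lru_lock);

        /* the page may have moved to another memcg while we waited */
        if (lruvec != mem_cgroup_page_lruvec(page, page_pgdat(page))) {
                spin_unlock_irq(&lruvec->lru_lock);
                goto again;
        }
        return lruvec;
}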

I have not had the time (and do not expect to have the time) to review
your series: maybe it's one or two small fixes away from being complete,
or maybe it's still fundamentally flawed; I do not know.  I had naively
hoped that you would help with a patchset that worked, rather than
cutting it down into something which does not.

Submitting your series to routine testing is much easier for me than
reviewing it: but then, yes, it's a pity that I don't find the time
to report the results on intervening versions, which also crashed.

What I have to do now, is set aside time today and tomorrow, to package
up the old scripts I use, describe them and their environment, and send
them to you (cc akpm in case I fall under a bus): so that you can
reproduce the crashes for yourself, and get to work on them.

Hugh
