Message-ID: <CAJD7tkZ01PPYMzcTyX_cwr836jGonJT=fwT3ovc4ixW44keRgg@mail.gmail.com>
Date: Thu, 29 Aug 2024 14:54:25 -0700
From: Yosry Ahmed <yosryahmed@...gle.com>
To: Piotr Oniszczuk <piotr.oniszczuk@...il.com>
Cc: Pedro Falcato <pedro.falcato@...il.com>, Nhat Pham <nphamcs@...il.com>,
Matthew Wilcox <willy@...radead.org>,
Linux regressions mailing list <regressions@...ts.linux.dev>, LKML <linux-kernel@...r.kernel.org>,
Johannes Weiner <hannes@...xchg.org>, Linux-MM <linux-mm@...ck.org>
Subject: Re: [regression] oops on heavy compilations ("kernel BUG at
mm/zswap.c:1005!" and "Oops: invalid opcode: 0000")
On Thu, Aug 29, 2024 at 8:51 AM Piotr Oniszczuk
<piotr.oniszczuk@...il.com> wrote:
>
>
>
> > > Message from Yosry Ahmed <yosryahmed@...gle.com>, written on 27.08.2024 at 20:48:
> >
> > On Sun, Aug 25, 2024 at 9:24 AM Piotr Oniszczuk
> > <piotr.oniszczuk@...il.com> wrote:
> >>
> >>
> >>
> >>> Message from Pedro Falcato <pedro.falcato@...il.com>, written on 25.08.2024 at 17:05:
> >>>
> >>> Also, could you try a memtest86 on your machine, to shake out potential hardware problems?
> >>
> >>
> >> I found a less time-consuming way to trigger the issue: a 12c24t (12-core/24-thread) cross-compile of llvm with "only 16G" of RAM - this triggers heavy swapping (peak swap usage reaches 8-9G out of the 16G swap partition)
> >>
> >> With such a setup - on 6.9.12 - the system becomes unresponsive (due to a CPU soft lockup) within just 1-3h
> >> (usually on the first or second compile iteration; I wrote a simple script that compiles in a loop and counts iterations)
> >
> > Are we sure that the soft lockup problem is related to the originally
> > reported problem? It seems like in v6.10 you hit a BUG in zswap
> > (corruption?), and in v6.9 you hit a soft lockup with a zswap lock
> > showing up in the splat. I am not sure how they are related.
>
> If so, then I'm interpreting this as:
>
> a) two different bugs
>
> or
>
> b) the 6.10 issue being a result of the 6.9 bug
>
> In that case I think we may:
>
> 1. fix 6.9 first (i.e. get it stable for, say, 30h of continuous compilation)
> 2. apply the fix to 6.10, then test stability on 6.10
>
> >
> > Is the soft lockup reproducible in v6.10 as well?
> >
> > Since you have a narrow window (6.8.2 to 6.9) and a reproducer for the
> > soft lockup problem, can you try bisecting?
> >
> > Thanks!
>
>
>
> Could you please help me reduce the amount of work here?
>
> 1. by narrowing the number of bisect iterations?
My information about the good (v6.8) and bad (v6.9) versions comes
from your report. I am not sure how I can help narrow down the number
of bisect iterations. Do you mind elaborating?
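
In case it is useful, here is a rough sketch of the bisect workflow I
have in mind, assuming v6.8 is good and v6.9 is bad per the versions
above. The number of build-and-test steps is roughly log2 of the
number of commits in between:

  git bisect start
  git bisect bad v6.9
  git bisect good v6.8
  # build and install the kernel git checks out, run the compile
  # loop, then tell git the result:
  git bisect good    # if it survived the stress test
  git bisect bad     # if it soft-locked
  # repeat until git prints the first bad commit, then:
  git bisect reset

If the pass/fail check can be fully scripted, 'git bisect run
<script>' automates the loop, although with multi-hour tests per step
that mostly just saves the manual bookkeeping.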
> On my side, each iteration is like:
> - build the arch pkg
> - install it on the builder
> - compile until the first hang (probably 2-3h for bad) or 20h (for good)
> This means days, and I'm a bit short on time, as all this is my hobby (so it competes with the rest of my life...)
>
> or
>
> 2. Ideally, it would be to have a list of 6.9 revert-candidate commits (starting from the most probable offending one);
> I'll revert and test
Looking at the zswap commits between 6.8 and 6.9, ignoring cleanups
and seemingly irrelevant patches (e.g. swapoff fixups), I think some
likely candidates could be the following, though this is not really
based on any scientific methodology:
44c7c734a5132 mm/zswap: split zswap rb-tree
c2e2ba770200b mm/zswap: only support zswap_exclusive_loads_enabled
a230c20e63efe mm/zswap: zswap entry doesn't need refcount anymore
8409a385a6b41 mm/zswap: improve with alloc_workqueue() call
0827a1fb143fa mm/zswap: invalidate zswap entry when swap entry free
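
If you want to go the revert route, a sketch of the loop, one
candidate at a time on top of a v6.9 tree (note that some of these
touch the same code paths, so a revert may not apply cleanly in
isolation and may need manual conflict resolution or skipping):

  git checkout v6.9
  git revert <commit>    # one of the candidates above
  # rebuild, reinstall, rerun the compile loop; before trying the
  # next candidate, go back to a clean tree:
  git reset --hard v6.9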
I also noticed that you are using z3fold as the zpool. Is the problem
reproducible with zsmalloc? I wouldn't be surprised if there's a
z3fold bug somewhere.
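
For reference, the zpool can be switched without a rebuild, assuming
zsmalloc is already compiled in (CONFIG_ZSMALLOC):

  echo zsmalloc > /sys/module/zswap/parameters/zpool

or persistently on the kernel command line:

  zswap.zpool=zsmalloc

Note that, as far as I recall, pages already compressed into the old
pool stay there until they are loaded or invalidated, so a fresh boot
with the new setting is the cleaner test.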
>
> I'll really appreciate any help here...
>