Message-ID: <CAOUHufawX4Xsq7UV9NQPwvUp2+3ZV95u8ZDLR-VKRTibS-Qn9w@mail.gmail.com>
Date:   Tue, 7 Jun 2022 18:20:22 -0600
From:   Yu Zhao <yuzhao@...gle.com>
To:     Michael Cree <mcree@...on.net.nz>
Cc:     Linux-MM <linux-mm@...ck.org>,
        linux-kernel <linux-kernel@...r.kernel.org>,
        Hillf Danton <hdanton@...a.com>,
        Joonsoo Kim <iamjoonsoo.kim@....com>
Subject: Re: Alpha: rare random memory corruption/segfault in user space bisected

On Mon, May 30, 2022 at 2:25 AM Michael Cree <mcree@...on.net.nz> wrote:
>
> On Mon, May 23, 2022 at 02:56:12PM -0600, Yu Zhao wrote:
> > On Wed, May 11, 2022 at 2:37 PM Michael Cree <mcree@...on.net.nz> wrote:
> > >
> > > On Sat, May 07, 2022 at 11:27:15AM -0700, Yu Zhao wrote:
> > > > On Fri, May 6, 2022 at 6:57 PM Hillf Danton <hdanton@...a.com> wrote:
> > > > >
> > > > > On Sat, 7 May 2022 09:21:25 +1200 Michael Cree wrote:
> > > > > > The Alpha kernel has been exhibiting rare, random memory
> > > > > > corruptions/segfaults in user space since the 5.9.y kernel.  First seen
> > > > > > on the Debian Ports build daemon when running a 5.10.y kernel, resulting
> > > > > > in occasional (one or two a day) build failures with gcc ICEs, either
> > > > > > due to self-detected corrupt memory structures or segfaults.  Had been
> > > > > > running a 5.8.y kernel without such problems for over six months.
> > > > > >
> > > > > > Tried bisecting last year but went off track with incorrect good/bad
> > > > > > determinations due to the rare nature of the bug.  After trying a 5.16.y
> > > > > > kernel early this year and seeing the bug still present, I retried the
> > > > > > bisection and arrived at:
> > > > > >
> > > > > > aae466b0052e1888edd1d7f473d4310d64936196 is the first bad commit
> > > > > > commit aae466b0052e1888edd1d7f473d4310d64936196
> > > > > > Author: Joonsoo Kim <iamjoonsoo.kim@....com>
> > > > > > Date:   Tue Aug 11 18:30:50 2020 -0700
> > > > > >
> > > > > >     mm/swap: implement workingset detection for anonymous LRU
> > > >
> > > > This commit seems innocent to me. While not ruling out anything, i.e.,
> > > > this commit, the compiler, qemu, userspace itself, etc., my wild guess is
> > > > that the problem is memory-barrier related. Two lock/unlock pairs, which
> > > > imply two full barriers, were removed. That is no small deal on Alpha,
> > > > whose memory model imposes almost no ordering constraints, AFAIK.
> > > >
> > > > Can you please try the attached patch on top of this commit? Thanks!
> > >
> > > Thanks, I have that running now for a day without any problem showing
> > > up, but that's not long enough to be sure it has fixed the problem. Will
> > > get back to you after another day or two of testing.
> >
> > Any luck? Thanks!
>
> Sorry for the delay in replying.  Testing has taken longer due to an
> unexpected hitch.  The patch proved to be good, but as a double check I
> retested the above commit without the patch, and it now won't fail, which
> calls into question whether aae466b0052e188 is truly the bad commit.  I
> have gone back to the prior bad commit in the bisection (25788738eb9c)
> and it failed again, confirming it is bad.  So it looks like the first
> bad commit is somewhere between aae466b0052e188 and 25788738eb9c (a
> total of five commits inclusive, four if we take aae466b0052e188 as
> good).  I am now building 471e78cc7687337abd1 and will test that.

No worries. Thanks for the update.

Were swap devices in use when the ICEs happened? If so,
1) What kind of swap devices, e.g., zram, a block device, etc.?
2) aae466b0052e188 might have made the kernel swap more frequently and
thus the problem easier to reproduce. If that is the case, setting
swappiness to 200 might help trigger the problem sooner.
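For reference, the swappiness knob can be raised like this (a config fragment; values above 100, up to 200, are accepted on recent kernels that allow preferring anonymous-page reclaim):

```shell
# Make the kernel prefer reclaiming anonymous pages, so swap is
# exercised more heavily and a swap-path bug triggers sooner.
# Requires root.
sysctl -w vm.swappiness=200
# or, equivalently:
echo 200 > /proc/sys/vm/swappiness
```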
