Message-ID: <20180919183839.2k4jw4cmyzbtgjfh@delI>
Date:   Wed, 19 Sep 2018 20:38:39 +0200
From:   Tobias Hommel <netdev-list@...oetigt.de>
To:     Steffen Klassert <steffen.klassert@...unet.com>
Cc:     Wolfgang Walter <linux@...m.de>,
        Kristian Evensen <kristian.evensen@...il.com>,
        Network Development <netdev@...r.kernel.org>,
        weiwan@...gle.com, edumazet@...gle.com
Subject: Re: kernels > v4.12 oops/crash with ipsec-traffic: bisected to
 b838d5e1c5b6e57b10ec8af2268824041e3ea911: ipv4: mark DST_NOGC and remove the
 operation of dst_free()

> After running for about 24 hours, I now encountered another panic. This time it
> is caused by an out-of-memory situation. Although the trace shows action in the
> filesystem code, I'm posting it here because I cannot isolate the error and
> maybe it is caused by our NULL pointer bug or by the new fix.
> I do not have a serial console attached, so I could only attach a screenshot of
> the panic to this mail.
> 
> I am running v4.19-rc3 from git with the above-mentioned patch applied.
> After 19 hours everything still looked fine, XfrmFwdHdrError value was at ~950.
> Overall memory usage shown by htop was at 1.2G/15.6G.
> I had htop running via ssh, so I was able to see at least some status post
> mortem. Uptime: 23:50:57
> Overall memory usage was at 10.2G/15.6G and user processes were just
> using the usual amount of memory, so it looks like the kernel was eating up at
> least 9G of RAM.
> 
> Maybe this information is not very helpful for debugging, but it is at least a
> warning that something might still be wrong.
> 
> I'll try to gather some more information and keep you updated.

It has been running stable under load for more than 5 days now, and I was not
able to reproduce that OOM situation. I'll leave it at that; the fix for the
initial bug works fine for me.
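
For anyone tracking the same counter: the XfrmFwdHdrError value mentioned above
is exported via /proc/net/xfrm_stat when the kernel is built with
CONFIG_XFRM_STATISTICS=y. A minimal sketch of pulling out a single counter,
assuming the usual one-"name value"-pair-per-line format of that file (a sample
string stands in for the live file here):

```shell
# Sketch: extract the XfrmFwdHdrError counter from xfrm statistics output.
# On a live system the input would be /proc/net/xfrm_stat
# (requires CONFIG_XFRM_STATISTICS=y); a sample string is used here.
sample='XfrmInError 0
XfrmFwdHdrError 950'
printf '%s\n' "$sample" | awk '/^XfrmFwdHdrError/ {print $2}'
```

Something like `watch -n1 'cat /proc/net/xfrm_stat'` should work for watching
the counters grow while reproducing.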
