Message-ID: <20170612122336.GA17592@lnxrabinv.se.axis.com>
Date:   Mon, 12 Jun 2017 14:23:36 +0200
From:   Rabin Vincent <rabin@....in>
To:     Marc Burkhardt <marc@...c.ngoe.de>, shli@...com, mhocko@...e.com
Cc:     linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [4.9.28] vmscan: shrink_slab: ext4_es_scan+0x0/0x150 negative
 objects to delete nr=-2147483624

On Thu, May 18, 2017 at 07:21:49AM +0200, Marc Burkhardt wrote:
> Tonight my dmesg was flooded with messages like
> 
> vmscan: shrink_slab: ext4_es_scan+0x0/0x150 negative objects to delete nr=-2147483624
> 
> Is that an integer overflow happening in ext4?
> 
> It's the first time I've seen this message. Any help on how to debug/reproduce this
> is appreciated. Please advise if you want me to investigate this.

I haven't attempted to debug or reproduce it, but what I can tell you
is that it does not have anything to do with ext4.  I've seen similar
messages with a completely different slab, on 4.9.26:

 [367594.725081] vmscan: shrink_slab: super_cache_scan+0x0/0x19c negative objects to delete nr=-2147482285
 [367595.046073] vmscan: shrink_slab: super_cache_scan+0x0/0x19c negative objects to delete nr=-2147479427
 [367595.279228] vmscan: shrink_slab: super_cache_scan+0x0/0x19c negative objects to delete nr=-2147482317
 [367595.459529] vmscan: shrink_slab: super_cache_scan+0x0/0x19c negative objects to delete nr=-2147482353
 [367595.497191] vmscan: shrink_slab: super_cache_scan+0x0/0x19c negative objects to delete nr=-2147482386
 [367595.521578] vmscan: shrink_slab: super_cache_scan+0x0/0x19c negative objects to delete nr=-2147482413
 [367595.551109] vmscan: shrink_slab: super_cache_scan+0x0/0x19c negative objects to delete nr=-2147482501
 [367598.344400] vmscan: shrink_slab: super_cache_scan+0x0/0x19c negative objects to delete nr=-2147482458
 [367598.369103] vmscan: shrink_slab: super_cache_scan+0x0/0x19c negative objects to delete nr=-2147482493
 [367598.403148] vmscan: shrink_slab: super_cache_scan+0x0/0x19c negative objects to delete nr=-2147482521
 [367598.422815] vmscan: shrink_slab: super_cache_scan+0x0/0x19c negative objects to delete nr=-2147482611
 [367598.524128] vmscan: shrink_slab: super_cache_scan+0x0/0x19c negative objects to delete nr=-2147483238
 [367601.554775] vmscan: shrink_slab: super_cache_scan+0x0/0x19c negative objects to delete nr=-2147482245
 [367601.582922] vmscan: shrink_slab: super_cache_scan+0x0/0x19c negative objects to delete nr=-2147482279
 [367601.620175] vmscan: shrink_slab: super_cache_scan+0x0/0x19c negative objects to delete nr=-2147482307
 [367602.958946] vmscan: shrink_slab: super_cache_scan+0x0/0x19c negative objects to delete nr=-2147479516
 [367603.630417] vmscan: shrink_slab: super_cache_scan+0x0/0x19c negative objects to delete nr=-2147482412
 [367603.746885] vmscan: shrink_slab: super_cache_scan+0x0/0x19c negative objects to delete nr=-2147482512
 [367603.769490] vmscan: shrink_slab: super_cache_scan+0x0/0x19c negative objects to delete nr=-2147482217
 [367604.155461] vmscan: shrink_slab: super_cache_scan+0x0/0x19c negative objects to delete nr=-2147479940
 [367604.174624] vmscan: shrink_slab: super_cache_scan+0x0/0x19c negative objects to delete nr=-2147482635
 [367604.197573] vmscan: shrink_slab: super_cache_scan+0x0/0x19c negative objects to delete nr=-2147482595
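
FWIW, the message is printed by do_shrink_slab() in mm/vmscan.c, and the
symbol in it is just the shrinker's ->scan_objects callback, so it only
looks ext4-specific because that happens to be the shrinker that hit the
check on your machine.  Roughly (a from-memory paraphrase of the 4.9-era
code, not a verbatim quote):

  /* sketch of the relevant part of do_shrink_slab(), 4.9 mm/vmscan.c */
  freeable = shrinker->count_objects(shrinker, shrinkctl);

  /* pick up scan work deferred by earlier passes */
  nr = atomic_long_xchg(&shrinker->nr_deferred[nid], 0);
  total_scan = nr;

  /* add work for this pass, proportional to pages scanned */
  delta = (4 * nr_scanned) / shrinker->seeks;
  delta *= freeable;
  do_div(delta, nr_eligible + 1);
  total_scan += delta;

  if (total_scan < 0) {
          pr_err("shrink_slab: %pF negative objects to delete nr=%ld\n",
                 shrinker->scan_objects, total_scan);
          total_scan = freeable;
  }

All of the values in both of our logs sit within a few thousand of -2^31
(-2147483624 is -2147483648 + 24), which is what you would expect if
total_scan (a long) wrapped on a 32-bit kernel after the deferred count
grew very large; I can't say whether that is actually what happened here,
though.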

I don't see any fixes/changes to mm/vmscan.c in newer 4.9-stable kernels
other than these, which were already merged in v4.9.14:

 $ git shortlog v4.9..v4.9.30 -- mm/vmscan.c
 Michal Hocko (3):
       mm, memcg: fix the active list aging for lowmem requests when memcg is enabled
       mm, vmscan: cleanup lru size claculations
       mm, vmscan: consider eligible zones in get_scan_count
 
 Shaohua Li (1):
       mm/vmscan.c: set correct defer count for shrinker

Perhaps one of the above people or someone else in linux-mm recognizes this.
