Open Source and information security mailing list archives
Date: Wed, 31 May 2017 13:52:26 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: kernel test robot <xiaolong.ye@...el.com>
Cc: Josef Bacik <josef@...icpanda.com>, Stephen Rothwell <sfr@...b.auug.org.au>,
	Josef Bacik <jbacik@...com>, Rik van Riel <riel@...hat.com>,
	Johannes Weiner <hannes@...xchg.org>, LKML <linux-kernel@...r.kernel.org>,
	lkp@...org
Subject: Re: [lkp-robot] [mm] aefd950b83: divide_error:#[##]

On Wed, 31 May 2017 14:31:16 +0800 kernel test robot <xiaolong.ye@...el.com> wrote:

> FYI, we noticed the following commit:
>
> commit: aefd950b83d2d8cf4d3c270546c8725f866da191 ("mm: make kswapd try harder to keep active pages in cache")
> https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
>
> in testcase: boot
>
> ...
>
> [ 160.541829] divide error: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC
>
> ...
>
> [ 160.587334] RIP: shrink_node+0x47f/0x5a0 RSP: ffffc900001bbd78
>
> ...

hm.  This?

--- a/mm/vmscan.c~mm-make-kswapd-try-harder-to-keep-active-pages-in-cache-fix-2
+++ a/mm/vmscan.c
@@ -2724,7 +2724,7 @@ static bool shrink_node(pg_data_t *pgdat
 	if (sc->nr_reclaimed - nr_reclaimed) {
 		reclaimable = true;
 	} else if (sc->inactive_only && !skip_slab) {
-		unsigned long percent;
+		unsigned long percent = 100;

 		/*
 		 * We didn't reclaim anything this go around, so the
@@ -2735,7 +2735,8 @@ static bool shrink_node(pg_data_t *pgdat
 		 * hoping that eventually we'll start freeing enough
 		 * objects to reclaim space.
 		 */
-		percent = (slab_reclaimed * 100 / slab_scanned);
+		if (slab_scanned)
+			percent = (slab_reclaimed * 100 / slab_scanned);
 		if (percent < 50)
 			sc->inactive_only = 0;
 		else

Or this?
--- a/mm/vmscan.c~mm-make-kswapd-try-harder-to-keep-active-pages-in-cache-fix-3
+++ a/mm/vmscan.c
@@ -2628,7 +2628,7 @@ static bool shrink_node(pg_data_t *pgdat
 	};
 	unsigned long node_lru_pages = 0;
 	unsigned long slab_reclaimed = 0;
-	unsigned long slab_scanned = 0;
+	unsigned long slab_scanned = 1;	/* Avoid div-by-zero */
 	struct mem_cgroup *memcg;

 	nr_reclaimed = sc->nr_reclaimed;
_