Date: Thu, 21 May 2020 13:06:28 -0700 (PDT)
From: Hugh Dickins <hughd@...gle.com>
To: Johannes Weiner <hannes@...xchg.org>
cc: Michal Hocko <mhocko@...nel.org>, Hugh Dickins <hughd@...gle.com>,
    Naresh Kamboju <naresh.kamboju@...aro.org>, Chris Down <chris@...isdown.name>,
    Yafang Shao <laoar.shao@...il.com>, Anders Roxell <anders.roxell@...aro.org>,
    "Linux F2FS DEV, Mailing List" <linux-f2fs-devel@...ts.sourceforge.net>,
    linux-ext4 <linux-ext4@...r.kernel.org>, linux-block <linux-block@...r.kernel.org>,
    Andrew Morton <akpm@...ux-foundation.org>, open list <linux-kernel@...r.kernel.org>,
    Linux-Next Mailing List <linux-next@...r.kernel.org>, linux-mm <linux-mm@...ck.org>,
    Arnd Bergmann <arnd@...db.de>, Andreas Dilger <adilger.kernel@...ger.ca>,
    Jaegeuk Kim <jaegeuk@...nel.org>, Theodore Ts'o <tytso@....edu>,
    Chao Yu <chao@...nel.org>, Andrea Arcangeli <aarcange@...hat.com>,
    Matthew Wilcox <willy@...radead.org>, Chao Yu <yuchao0@...wei.com>,
    lkft-triage@...ts.linaro.org, Roman Gushchin <guro@...com>,
    Cgroups <cgroups@...r.kernel.org>
Subject: Re: mm: mkfs.ext4 invoked oom-killer on i386 - pagecache_get_page

On Thu, 21 May 2020, Johannes Weiner wrote:
>
> Very much appreciate you guys tracking it down so quickly. Sorry about
> the breakage.
>
> I think mem_cgroup_disabled() checks are pretty good markers of public
> entry points to the memcg API, so I'd prefer that even if a bit more
> verbose. What do you think?

An explicit mem_cgroup_disabled() check would be fine, but I must admit,
the patch below is rather too verbose for my own taste. Your call.

>
> ---
> From cd373ec232942a9bc43ee5e7d2171352019a58fb Mon Sep 17 00:00:00 2001
> From: Hugh Dickins <hughd@...gle.com>
> Date: Thu, 21 May 2020 14:58:36 -0400
> Subject: [PATCH] mm: memcontrol: prepare swap controller setup for integration
>  fix
>
> Fix crash with cgroup_disable=memory:
>
> > > > > + mkfs -t ext4 /dev/disk/by-id/ata-TOSHIBA_MG04ACA100N_Y8NRK0BPF6XF
> > > > > mke2fs 1.43.8 (1-Jan-2018)
> > > > > Creating filesystem with 244190646 4k blocks and 61054976 inodes
> > > > > Filesystem UUID: 3bb1a285-2cb4-44b4-b6e8-62548f3ac620
> > > > > Superblock backups stored on blocks:
> > > > > 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
> > > > > 4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
> > > > > 102400000, 214990848
> > > > > Allocating group tables: 0/7453 done
> > > > > Writing inode tables: 0/7453 done
> > > > > Creating journal (262144 blocks): [ 35.502102] BUG: kernel NULL
> > > > > pointer dereference, address: 000000c8
> > > > > [ 35.508372] #PF: supervisor read access in kernel mode
> > > > > [ 35.513506] #PF: error_code(0x0000) - not-present page
> > > > > [ 35.518638] *pde = 00000000
> > > > > [ 35.521514] Oops: 0000 [#1] SMP
> > > > > [ 35.524652] CPU: 0 PID: 145 Comm: kswapd0 Not tainted
> > > > > 5.7.0-rc6-next-20200519+ #1
> > > > > [ 35.532121] Hardware name: Supermicro SYS-5019S-ML/X11SSH-F, BIOS
> > > > > 2.2 05/23/2018
> > > > > [ 35.539507] EIP: mem_cgroup_get_nr_swap_pages+0x28/0x60
>
> do_memsw_account() used to be automatically false when the cgroup
> controller was disabled. Now that it's replaced by
> cgroup_memory_noswap, for which this isn't true, make the
> mem_cgroup_disabled() checks explicit in the swap control API.
>
> [hannes@...xchg.org: use mem_cgroup_disabled() in all API functions]
> Reported-by: Naresh Kamboju <naresh.kamboju@...aro.org>
> Debugged-by: Hugh Dickins <hughd@...gle.com>
> Debugged-by: Michal Hocko <mhocko@...nel.org>
> Signed-off-by: Johannes Weiner <hannes@...xchg.org>
> ---
>  mm/memcontrol.c | 47 +++++++++++++++++++++++++++++++++++++++++------
>  1 file changed, 41 insertions(+), 6 deletions(-)

I'm certainly not against a mem_cgroup_disabled() check in the only
place that's been observed to need it, as a fixup to merge into your
original patch; but this seems rather an over-reaction - and I'm a
little surprised that setting mem_cgroup_disabled() doesn't just force
cgroup_memory_noswap, saving repetitious checks elsewhere (perhaps
there's a difficulty in that, I haven't looked).

Historically, I think we've added mem_cgroup_disabled() checks
(accessing a cacheline we'd rather avoid) where they're necessary,
rather than at every "interface".

And you seem to be in a very "goto out" mood today - we all have our
"goto out" days, alternating with our "return 0" days :)

Hugh

>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 3e000a316b59..850bca380562 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -6811,6 +6811,9 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
>  	VM_BUG_ON_PAGE(PageLRU(page), page);
>  	VM_BUG_ON_PAGE(page_count(page), page);
>
> +	if (mem_cgroup_disabled())
> +		return;
> +
>  	if (cgroup_subsys_on_dfl(memory_cgrp_subsys))
>  		return;
>
> @@ -6876,6 +6879,10 @@ int mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
>  	struct mem_cgroup *memcg;
>  	unsigned short oldid;
>
> +	if (mem_cgroup_disabled())
> +		return 0;
> +
> +	/* Only cgroup2 has swap.max */
>  	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
>  		return 0;
>
> @@ -6920,6 +6927,9 @@ void mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages)
>  	struct mem_cgroup *memcg;
>  	unsigned short id;
>
> +	if (mem_cgroup_disabled())
> +		return;
> +
>  	id = swap_cgroup_record(entry, 0, nr_pages);
>  	rcu_read_lock();
>  	memcg = mem_cgroup_from_id(id);
> @@ -6940,12 +6950,25 @@ long mem_cgroup_get_nr_swap_pages(struct mem_cgroup *memcg)
>  {
>  	long nr_swap_pages = get_nr_swap_pages();
>
> -	if (cgroup_memory_noswap || !cgroup_subsys_on_dfl(memory_cgrp_subsys))
> -		return nr_swap_pages;
> +	if (mem_cgroup_disabled())
> +		goto out;
> +
> +	/* Swap control disabled */
> +	if (cgroup_memory_noswap)
> +		goto out;
> +
> +	/*
> +	 * Only cgroup2 has swap.max, cgroup1 does mem+sw accounting,
> +	 * which does not place restrictions specifically on swap.
> +	 */
> +	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
> +		goto out;
> +
>  	for (; memcg != root_mem_cgroup; memcg = parent_mem_cgroup(memcg))
>  		nr_swap_pages = min_t(long, nr_swap_pages,
>  				      READ_ONCE(memcg->swap.max) -
>  				      page_counter_read(&memcg->swap));
> +out:
>  	return nr_swap_pages;
>  }
>
> @@ -6957,18 +6980,30 @@ bool mem_cgroup_swap_full(struct page *page)
>
>  	if (vm_swap_full())
>  		return true;
> -	if (cgroup_memory_noswap || !cgroup_subsys_on_dfl(memory_cgrp_subsys))
> -		return false;
> +
> +	if (mem_cgroup_disabled())
> +		goto out;
> +
> +	/* Swap control disabled */
> +	if (cgroup_memory_noswap)
> +		goto out;
> +
> +	/*
> +	 * Only cgroup2 has swap.max, cgroup1 does mem+sw accounting,
> +	 * which does not place restrictions specifically on swap.
> +	 */
> +	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
> +		goto out;
>
>  	memcg = page->mem_cgroup;
>  	if (!memcg)
> -		return false;
> +		goto out;
>
>  	for (; memcg != root_mem_cgroup; memcg = parent_mem_cgroup(memcg))
>  		if (page_counter_read(&memcg->swap) * 2 >=
>  		    READ_ONCE(memcg->swap.max))
>  			return true;
> -
> +out:
>  	return false;
>  }
>
> --
> 2.26.2
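
For context on Hugh's alternative above - forcing cgroup_memory_noswap when
the memory controller is disabled, rather than adding a mem_cgroup_disabled()
check to every swap API entry point - a rough sketch of how that could look
follows. This is only an illustration, not code from this thread; the init
hook name and its placement in mm/memcontrol.c are assumptions.

/*
 * Illustrative sketch only: fold the mem_cgroup_disabled() case into
 * cgroup_memory_noswap once, at swap controller init time, so the swap
 * entry points can rely on their existing cgroup_memory_noswap test.
 * Hook name and placement are assumed, not taken from the patch above.
 */
static int __init mem_cgroup_swap_init(void)
{
	/* No memory controller (cgroup_disable=memory) -> no swap control */
	if (mem_cgroup_disabled())
		cgroup_memory_noswap = true;

	if (cgroup_memory_noswap)
		return 0;

	/* ... register the swap.* / memsw.* control files as before ... */
	return 0;
}
subsys_initcall(mem_cgroup_swap_init);

With the flag forced at init, mem_cgroup_get_nr_swap_pages() would return
early on its existing cgroup_memory_noswap test and never reach the loop
that reads memcg->swap.max - the NULL dereference reported in the oops
above - at the cost of whatever difficulty Hugh allows may exist in doing so.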