Message-ID: <20200121090612.GH29276@dhcp22.suse.cz>
Date:   Tue, 21 Jan 2020 10:06:12 +0100
From:   Michal Hocko <mhocko@...nel.org>
To:     Hillf Danton <hdanton@...a.com>
Cc:     Cong Wang <xiyou.wangcong@...il.com>,
        Mel Gorman <mgorman@...hsingularity.net>,
        LKML <linux-kernel@...r.kernel.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        linux-mm <linux-mm@...ck.org>
Subject: Re: [PATCH] mm: avoid blocking lock_page() in kcompactd

On Tue 21-01-20 16:26:24, Hillf Danton wrote:
> 
> On Mon, 20 Jan 2020 14:41:50 -0800 Cong Wang wrote:
> > On Fri, Jan 10, 2020 at 1:22 AM Mel Gorman <mgorman@...hsingularity.net> wrote:
> > >
> > > On Thu, Jan 09, 2020 at 02:56:46PM -0800, Cong Wang wrote:
> > > > We observed kcompactd hung at __lock_page():
> > > >
> > > >  INFO: task kcompactd0:57 blocked for more than 120 seconds.
> > > >        Not tainted 4.19.56.x86_64 #1
> > > >  "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> > > >  kcompactd0      D    0    57      2 0x80000000
> > > >  Call Trace:
> > > >   ? __schedule+0x236/0x860
> > > >   schedule+0x28/0x80
> > > >   io_schedule+0x12/0x40
> > > >   __lock_page+0xf9/0x120
> > > >   ? page_cache_tree_insert+0xb0/0xb0
> > > >   ? update_pageblock_skip+0xb0/0xb0
> > > >   migrate_pages+0x88c/0xb90
> > > >   ? isolate_freepages_block+0x3b0/0x3b0
> > > >   compact_zone+0x5f1/0x870
> > > >   kcompactd_do_work+0x130/0x2c0
> > > >   ? __switch_to_asm+0x35/0x70
> > > >   ? __switch_to_asm+0x41/0x70
> > > >   ? kcompactd_do_work+0x2c0/0x2c0
> > > >   ? kcompactd+0x73/0x180
> > > >   kcompactd+0x73/0x180
> > > >   ? finish_wait+0x80/0x80
> > > >   kthread+0x113/0x130
> > > >   ? kthread_create_worker_on_cpu+0x50/0x50
> > > >   ret_from_fork+0x35/0x40
> > > >
> > > > which faddr2line maps to:
> > > >
> > > >   migrate_pages+0x88c/0xb90:
> > > >   lock_page at include/linux/pagemap.h:483
> > > >   (inlined by) __unmap_and_move at mm/migrate.c:1024
> > > >   (inlined by) unmap_and_move at mm/migrate.c:1189
> > > >   (inlined by) migrate_pages at mm/migrate.c:1419
> > > >
> > > > Sometimes kcompactd eventually got out of this situation, sometimes not.
> > > >
> > > > I think for memory compaction, migrating the pages is best effort,
> > > > so it doesn't have to wait for I/O to complete. It is fine to call
> > > > trylock_page() here, which is similar to what
> > > > buffer_migrate_lock_buffers() does.
> > > >
> > > > Given that MIGRATE_SYNC_LIGHT is used on the compaction path, just
> > > > relax the check for it.
> > > >
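[For context: the sleeping lock in question is the trylock_page()
fallback in __unmap_and_move() (mm/migrate.c), which in kernels of this
era falls back to a blocking lock_page() unless the mode is
MIGRATE_ASYNC or force is not set. A minimal sketch of the kind of
relaxation being proposed (illustrative only, not the submitted patch)
might look like:

	if (!trylock_page(page)) {
		/*
		 * Also bail out for MIGRATE_SYNC_LIGHT (the mode used by
		 * kcompactd) instead of sleeping in lock_page() behind a
		 * potentially long-held page lock.
		 */
		if (!force || mode == MIGRATE_ASYNC ||
		    mode == MIGRATE_SYNC_LIGHT)
			goto out;

		lock_page(page);
	}
]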
> > >
> > > Is this a single page being locked for a long time or multiple pages
> > > being locked without reaching a reschedule point?
> > 
> > Not sure whether it is a single page or multiple pages, but I
> > successfully located the process locking the page (or pages), and I
> > used perf to capture its stack trace:
> > 
> > 
> >  ffffffffa722aa06 shrink_inactive_list
> >  ffffffffa722b3d7 shrink_node_memcg
> >  ffffffffa722b85f shrink_node
> >  ffffffffa722bc89 do_try_to_free_pages
> >  ffffffffa722c179 try_to_free_mem_cgroup_pages
> >  ffffffffa7298703 try_charge
> >  ffffffffa729a886 mem_cgroup_try_charge
> >  ffffffffa720ec03 __add_to_page_cache_locked
> >  ffffffffa720ee3a add_to_page_cache_lru
> >  ffffffffa7312ddb iomap_readpages_actor
> >  ffffffffa73133f7 iomap_apply
> >  ffffffffa73135da iomap_readpages
> >  ffffffffa722062e read_pages
> >  ffffffffa7220b3f __do_page_cache_readahead
> >  ffffffffa7210554 filemap_fault
> >  ffffffffc039e41f __xfs_filemap_fault
> >  ffffffffa724f5e7 __do_fault
> >  ffffffffa724c5f2 __handle_mm_fault
> >  ffffffffa724cbc6 handle_mm_fault
> >  ffffffffa70a313e __do_page_fault
> >  ffffffffa7a00dfe page_fault
> > 
> > This process has been stuck in this situation for a long time (since I
> > sent out this patch) without making any progress. It behaves as if it
> > were stuck in an infinite loop, although the EIP still moves around
> > within mem_cgroup_try_charge().
> 
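[For context: the page being read in is locked before the memcg charge
is attempted, so any direct reclaim triggered from try_charge() runs
with that page lock held; that is the lock kcompactd is sleeping on. A
trimmed sketch of add_to_page_cache_lru() (mm/filemap.c, kernels of
this era; check the exact tree) shows where the lock is taken:

	int add_to_page_cache_lru(struct page *page,
				  struct address_space *mapping,
				  pgoff_t offset, gfp_t gfp_mask)
	{
		void *shadow = NULL;
		int ret;

		/* the new page is locked before it is inserted ... */
		__SetPageLocked(page);
		/*
		 * ... and __add_to_page_cache_locked() charges the memcg,
		 * which may enter direct reclaim while the lock is held.
		 */
		ret = __add_to_page_cache_locked(page, mapping, offset,
						 gfp_mask, &shadow);
		if (unlikely(ret))
			__ClearPageLocked(page);
		/* LRU handling trimmed */
		return ret;
	}
]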
> Make page reclaim in try_charge() async, on the assumption that sync
> reclaim is unnecessary without memory pressure and does not help much
> under heavy pressure. Skipping sync reclaim is confined to the page
> fault context to avoid adding too much latency elsewhere.
> 
> --- linux-5.5-rc3/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -2525,6 +2525,12 @@ force:
>  	if (do_memsw_account())
>  		page_counter_charge(&memcg->memsw, nr_pages);
>  	css_get_many(&memcg->css, nr_pages);
> +	/*
> +	 * reclaim high limit pages soon without holding resources like
> +	 * page lock e.g in page fault context
> +	 */
> +	if (unlikely(current->flags & PF_MEMALLOC))
> +		schedule_work(&memcg->high_work);
>  
>  	return 0;
>  
> --- linux-5.5-rc3/mm/filemap.c	
> +++ b/mm/filemap.c
> @@ -863,8 +863,14 @@ static int __add_to_page_cache_locked(st
>  	mapping_set_update(&xas, mapping);
>  
>  	if (!huge) {
> +		bool was_set = current->flags & PF_MEMALLOC;
> +		if (!was_set)
> +			current->flags |= PF_MEMALLOC;
> +
>  		error = mem_cgroup_try_charge(page, current->mm,
>  					      gfp_mask, &memcg, false);
> +		if (!was_set)
> +			current->flags &= ~PF_MEMALLOC;
>  		if (error)
>  			return error;

Not only does this not help at all, because the direct reclaim for the
hard limit has already been done by this point; the high limit is also
already handled properly when we are under the hard limit. So this
patch doesn't really make much sense to me.
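
For reference, the tail of try_charge() already defers high-limit
reclaim on its own; roughly (paraphrased from v5.5-rc, trimmed):

	do {
		if (page_counter_read(&memcg->memory) > memcg->high) {
			/* Don't bother a random interrupted task */
			if (in_interrupt()) {
				schedule_work(&memcg->high_work);
				break;
			}
			current->memcg_nr_pages_over_high += batch;
			set_notify_resume(current);
			break;
		}
	} while ((memcg = parent_mem_cgroup(memcg)));

	return 0;

i.e. over-high reclaim is already punted to high_work (from interrupt
context) or deferred to return-to-userspace via
mem_cgroup_handle_over_high().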
-- 
Michal Hocko
SUSE Labs
