Message-ID: <15e3a0af-7e02-83fb-4b72-b05f6d7ded71@contabo.de>
Date: Thu, 26 Jul 2018 12:00:44 +0200
From: Tino Lehnig <tino.lehnig@...tabo.de>
To: Minchan Kim <minchan@...nel.org>
Cc: ngupta@...are.org, linux-kernel@...r.kernel.org,
Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: Zram writeback feature unstable with heavy swap utilization - BUG: Bad page state in process...
On 07/26/2018 08:10 AM, Tino Lehnig wrote:
>> A thing I could imagine is
>> [0bcac06f27d75, skip swapcache for swapin of synchronous device]
>> It was merged into v4.15. Could you check it by bisecting?
>
> Thanks, I will check that.
I see the same behavior as in v4.15-rc1 starting with this commit. All
prior builds are fine.
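For reference, the bisection between the last known-good tag and the first bad one can be run roughly like this (the build/test step and kernel tree path are illustrative, not the exact commands used):

```shell
#!/bin/sh
# Bisect the mainline kernel tree between v4.14 (good) and v4.15-rc1 (bad).
# Assumes a clone of Linus' tree in the current directory and a working
# .config; the reproduction step (boot + heavy swap on zram) is manual.
set -e

git bisect start
git bisect bad v4.15-rc1
git bisect good v4.14

# At each step: build, install, reboot, and run the swap-heavy workload.
# Then tell git the result and repeat until it names the first bad commit:
#   git bisect good   # if the KVM guests stayed responsive
#   git bisect bad    # if tasks hung / bad page state appeared
make -j"$(nproc)" && make modules_install install

# When finished, clean up:
#   git bisect reset
```

With roughly 15000 commits between the two tags, this converges in about 14 build/boot cycles.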
I have also tested all the other 4.15 release candidates now, and the
symptoms are the same through rc8: KVM processes become unresponsive and
I see kernel messages like the one below. This happens whether or not
the writeback feature is in use. The bad page state bug appears only
rarely in these versions, and only when writeback is active.
Starting with rc9, I only get the same bad page state bug as in all
newer kernels.
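In case it helps with reproduction, the zram swap device with writeback was set up along these lines (device names and sizes are illustrative; CONFIG_ZRAM_WRITEBACK must be enabled, and backing_dev has to be written before disksize):

```shell
#!/bin/sh
# Sketch of a zram swap setup with a writeback backing device,
# as supported since 4.14 (CONFIG_ZRAM_WRITEBACK). Illustrative values.
set -e

modprobe zram num_devices=1

# The backing block device must be configured before disksize is set.
echo /dev/sdb1 > /sys/block/zram0/backing_dev

echo lz4 > /sys/block/zram0/comp_algorithm
echo 16G > /sys/block/zram0/disksize

mkswap /dev/zram0
swapon -p 100 /dev/zram0
```

The hangs then show up under sustained swap pressure from the KVM guests, e.g. when guest memory overcommit pushes the host deep into swap.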
--
[ 363.494793] INFO: task kworker/4:2:498 blocked for more than 120 seconds.
[ 363.494872] Not tainted 4.14.0-zram-pre-rc1 #17
[ 363.494943] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.
[ 363.495021] kworker/4:2 D 0 498 2 0x80000000
[ 363.495029] Workqueue: events async_pf_execute
[ 363.495030] Call Trace:
[ 363.495037] ? __schedule+0x3bc/0x830
[ 363.495039] schedule+0x32/0x80
[ 363.495042] io_schedule+0x12/0x40
[ 363.495045] __lock_page_or_retry+0x302/0x320
[ 363.495047] ? page_cache_tree_insert+0xa0/0xa0
[ 363.495051] do_swap_page+0x4ab/0x860
[ 363.495054] __handle_mm_fault+0x77b/0x10c0
[ 363.495056] handle_mm_fault+0xc6/0x1b0
[ 363.495059] __get_user_pages+0xf9/0x620
[ 363.495061] ? update_load_avg+0x5d6/0x6d0
[ 363.495064] get_user_pages_remote+0x137/0x1f0
[ 363.495067] async_pf_execute+0x62/0x180
[ 363.495071] process_one_work+0x184/0x380
[ 363.495073] worker_thread+0x4d/0x3c0
[ 363.495076] kthread+0xf5/0x130
[ 363.495078] ? process_one_work+0x380/0x380
[ 363.495080] ? kthread_create_worker_on_cpu+0x50/0x50
[ 363.495083] ? do_group_exit+0x3a/0xa0
[ 363.495086] ret_from_fork+0x1f/0x30
--
Kind regards,
Tino Lehnig