Message-ID: <20190319153737.GK5996@hirez.programming.kicks-ass.net>
Date: Tue, 19 Mar 2019 16:37:37 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Mel Gorman <mgorman@...hsingularity.net>
Cc: Ingo Molnar <mingo@...hat.com>,
Valentin Schneider <valentin.schneider@....com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] sched: Do not re-read h_load_next during hierarchical
load calculation v2
On Tue, Mar 19, 2019 at 12:36:10PM +0000, Mel Gorman wrote:
> Changelog since v1
> o Use WRITE_ONCE
> o Add Fixes:
> o Add reviewed-by for the READ_ONCE part as I considered it to still be
> ok even after the WRITE_ONCE
>
> A NULL pointer dereference bug was reported on a distribution kernel but
> the same issue should be present on the mainline kernel. It occurred on
> s390 but should not be arch-specific. A partial oops looks like:
>
> [775277.408564] Unable to handle kernel pointer dereference in virtual kernel address space
> ...
> [775277.408759] Call Trace:
> [775277.408763] ([<0002c11c56899c61>] 0x2c11c56899c61)
> [775277.408766] [<0000000000177bb4>] try_to_wake_up+0xfc/0x450
> [775277.408773] [<000003ff81ede872>] vhost_poll_wakeup+0x3a/0x50 [vhost]
> [775277.408777] [<0000000000194ae4>] __wake_up_common+0xbc/0x178
> [775277.408779] [<0000000000194f86>] __wake_up_common_lock+0x9e/0x160
> [775277.408780] [<00000000001950de>] __wake_up_sync_key+0x4e/0x60
> [775277.408785] [<00000000005d911e>] sock_def_readable+0x5e/0x98
>
> The bug hits anywhere between 1 hour and 3 days. The dereference occurs
> in update_cfs_rq_h_load when accumulating h_load. The problem is that
> cfs_rq->h_load_next is not protected by any locking and can be updated
> by parallel calls to task_h_load. Depending on the compiler, code may be
> generated that re-reads cfs_rq->h_load_next after the check for NULL and
> then oopses when reading se->avg.load_avg. The disassembly showed that it
> was possible to re-read h_load_next after the check for NULL.
>
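> For illustration, the accumulation loop in update_cfs_rq_h_load looks
> roughly like this (a simplified sketch, not the exact source; helper
> names approximate):
>
> 	while ((se = cfs_rq->h_load_next) != NULL) {
> 		load = cfs_rq->h_load;
> 		load = div64_ul(load * se->avg.load_avg,
> 				cfs_rq_load_avg(cfs_rq) + 1);
> 		cfs_rq = group_cfs_rq(se);
> 		cfs_rq->h_load = load;
> 		cfs_rq->last_h_load_update = now;
> 	}
>
> Nothing stops the compiler from reloading cfs_rq->h_load_next after the
> NULL check, so the se->avg.load_avg access can observe a pointer that a
> parallel caller has meanwhile set back to NULL.
>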
> While this does not appear to be an issue for later compilers, it is only
> by accident that the correct code is generated. Full locking in this path
> would have high overhead, so this patch uses READ_ONCE to read h_load_next
> only once and checks for NULL before dereferencing it. It was confirmed
> that there were no further oopses after 10 days of testing.
>
> As Peter pointed out, it is also necessary to use WRITE_ONCE to avoid any
> potential problems with store tearing.
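>
> The combined pattern is roughly the following (a sketch only; the actual
> change is against update_cfs_rq_h_load in kernel/sched/fair.c):
>
> 	WRITE_ONCE(cfs_rq->h_load_next, NULL);
> 	for_each_sched_entity(se) {
> 		cfs_rq = cfs_rq_of(se);
> 		WRITE_ONCE(cfs_rq->h_load_next, se);
> 		if (cfs_rq->last_h_load_update == now)
> 			break;
> 	}
>
> 	while ((se = READ_ONCE(cfs_rq->h_load_next)) != NULL) {
> 		/* se is a single snapshot of h_load_next; the body
> 		 * accumulating h_load is otherwise unchanged */
> 		...
> 	}
>
> With the pointer read exactly once per iteration, a concurrent writer can
> no longer turn se into NULL between the check and the se->avg.load_avg
> dereference, and WRITE_ONCE prevents the stores that build the path from
> being torn.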
>
> Fixes: 685207963be9 ("sched: Move h_load calculation to task_h_load()")
> [peterz@...radead.org: Use WRITE_ONCE to protect against store tearing]
> Signed-off-by: Mel Gorman <mgorman@...hsingularity.net>
> Reviewed-by: Valentin Schneider <valentin.schneider@....com>
> Cc: stable@...r.kernel.org
Thanks!