Message-ID: <2167643.HFCj9E3NaD@kreacher>
Date: Fri, 11 Oct 2019 11:49:29 +0200
From: "Rafael J. Wysocki" <rjw@...ysocki.net>
To: Andy Whitcroft <apw@...onical.com>
Cc: linux-pm@...r.kernel.org, Len Brown <len.brown@...el.com>,
Pavel Machek <pavel@....cz>,
Andrea Righi <andrea.righi@...onical.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/1] PM / hibernate: memory_bm_find_bit -- tighten node optimisation
On Wednesday, September 25, 2019 4:39:12 PM CEST Andy Whitcroft wrote:
> When looking for a bit by number, we make use of the cached result from the
> preceding lookup to speed up the operation. First we check whether the
> requested pfn is within the cached zone, and if not we look up the new zone.
> We then check whether the offset for that pfn falls within the existing
> cached node. This happens regardless of whether the node belongs to the
> zone we are now scanning. With certain memory layouts it is possible for
> this check to trigger falsely, creating a temporary alias for the pfn to a
> different bit. This leads the hibernation code to free memory that was
> never allocated to it, with the expected fallout.
>
> Ensure the zone we are scanning matches the cached zone before considering
> the cached node.
>
> Deep thanks go to Andrea for many, many, many hours of hacking and testing
> that went into cornering this bug.
>
> Reported-by: Andrea Righi <andrea.righi@...onical.com>
> Tested-by: Andrea Righi <andrea.righi@...onical.com>
> Signed-off-by: Andy Whitcroft <apw@...onical.com>
> ---
> kernel/power/snapshot.c | 9 ++++++++-
> 1 file changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
> index 83105874f255..26b9168321e7 100644
> --- a/kernel/power/snapshot.c
> +++ b/kernel/power/snapshot.c
> @@ -734,8 +734,15 @@ static int memory_bm_find_bit(struct memory_bitmap *bm, unsigned long pfn,
> * We have found the zone. Now walk the radix tree to find the leaf node
> * for our PFN.
> */
> +
> + /*
> + * If the zone we wish to scan is the current zone and the
> + * pfn falls into the current node then we do not need to walk
> + * the tree.
> + */
> node = bm->cur.node;
> - if (((pfn - zone->start_pfn) & ~BM_BLOCK_MASK) == bm->cur.node_pfn)
> + if (zone == bm->cur.zone &&
> + ((pfn - zone->start_pfn) & ~BM_BLOCK_MASK) == bm->cur.node_pfn)
> goto node_found;
>
> node = zone->rtree;
>
Applying as 5.5 material, thanks!