Message-ID: <bd30f96b-44d2-4127-a019-f02bc2689aa2@amd.com>
Date: Tue, 8 Jul 2025 10:28:21 -0400
From: Mario Limonciello <mario.limonciello@....com>
To: Samuel Zhang <guoqing.zhang@....com>, alexander.deucher@....com,
 christian.koenig@....com, rafael@...nel.org, len.brown@...el.com,
 pavel@...nel.org, gregkh@...uxfoundation.org, dakr@...nel.org,
 airlied@...il.com, simona@...ll.ch, ray.huang@....com,
 matthew.auld@...el.com, matthew.brost@...el.com,
 maarten.lankhorst@...ux.intel.com, mripard@...nel.org, tzimmermann@...e.de
Cc: lijo.lazar@....com, victor.zhao@....com, haijun.chang@....com,
 Qing.Ma@....com, Owen.Zhang2@....com, linux-pm@...r.kernel.org,
 linux-kernel@...r.kernel.org, amd-gfx@...ts.freedesktop.org,
 dri-devel@...ts.freedesktop.org
Subject: Re: [PATCH v3 3/5] PM: hibernate: shrink shmem pages after
 dev_pm_ops.prepare()

On 7/8/2025 3:42 AM, Samuel Zhang wrote:
> When hibernating with data center dGPUs, a huge amount of VRAM data is
> moved to shmem during dev_pm_ops.prepare(). These shmem pages consume so
> much system memory that there is not enough free memory left to create
> the hibernation image. This causes hibernation to fail and abort.
> 
> After dev_pm_ops.prepare(), call shrink_all_memory() to force shmem pages
> out to the swap disk and reclaim them, so that there is enough system
> memory for the hibernation image and fewer pages need to be copied into it.
> 
> This patch can only flush and free about half of the shmem pages. It
> would be better to flush and free more pages, even all of the shmem
> pages, so that fewer pages have to be copied into the hibernation image
> and the overall hibernation time is reduced.
> 
> Signed-off-by: Samuel Zhang <guoqing.zhang@....com>

AFAICT this didn't tangibly change and was just reordered in the series, 
so I think you should carry Rafael's A-b tag forward.

> ---
>   kernel/power/hibernate.c | 26 ++++++++++++++++++++++++++
>   1 file changed, 26 insertions(+)
> 
> diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
> index 10a01af63a80..7ae9d9a7aa1d 100644
> --- a/kernel/power/hibernate.c
> +++ b/kernel/power/hibernate.c
> @@ -370,6 +370,23 @@ static int create_image(int platform_mode)
>   	return error;
>   }
>   
> +static void shrink_shmem_memory(void)
> +{
> +	struct sysinfo info;
> +	unsigned long nr_shmem_pages, nr_freed_pages;
> +
> +	si_meminfo(&info);
> +	nr_shmem_pages = info.sharedram; /* current page count used for shmem */
> +	/*
> +	 * The intent is to reclaim all shmem pages. Though shrink_all_memory() can
> +	 * only reclaim about half of them, it's enough for creating the hibernation
> +	 * image.
> +	 */
> +	nr_freed_pages = shrink_all_memory(nr_shmem_pages);
> +	pr_debug("requested to reclaim %lu shmem pages, actually freed %lu pages\n",
> +			nr_shmem_pages, nr_freed_pages);
> +}
> +
>   /**
>    * hibernation_snapshot - Quiesce devices and create a hibernation image.
>    * @platform_mode: If set, use platform driver to prepare for the transition.
> @@ -411,6 +428,15 @@ int hibernation_snapshot(int platform_mode)
>   		goto Thaw;
>   	}
>   
> +	/*
> +	 * Device drivers may move lots of data to shmem in dpm_prepare(). These
> +	 * shmem pages consume so much system memory that hibernation image
> +	 * creation can fail due to insufficient free memory.
> +	 * Force-flush the shmem pages to the swap disk and reclaim the system
> +	 * memory here so that image creation can succeed.
> +	 */
> +	shrink_shmem_memory();
> +
>   	suspend_console();
>   	pm_restrict_gfp_mask();
>   

