Date: Thu, 3 Mar 2022 11:33:50 -0300
From: Marcelo Tosatti <mtosatti@...hat.com>
To: Nicolas Saenz Julienne <nsaenzju@...hat.com>
Cc: akpm@...ux-foundation.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, frederic@...nel.org, tglx@...utronix.de,
mgorman@...e.de, linux-rt-users@...r.kernel.org, vbabka@...e.cz,
cl@...ux.com, paulmck@...nel.org, willy@...radead.org
Subject: Re: [PATCH 1/2] mm/page_alloc: Access lists in 'struct
per_cpu_pages' indirectly
On Tue, Feb 08, 2022 at 11:07:49AM +0100, Nicolas Saenz Julienne wrote:
> In preparation to adding remote per-cpu page list drain support, let's
> bundle 'struct per_cpu_pages's' page lists and page count into a new
> structure: 'struct pcplists', and have all code access it indirectly
> through a pointer. It'll be used by upcoming patches in order to
> maintain multiple instances of 'pcplists' and switch the pointer
> atomically.
>
> The 'struct pcplists' instance lives inside 'struct per_cpu_pages', and
> shouldn't be accessed directly. It is setup as such since these
> structures are used during early boot when no memory allocation is
> possible and to simplify memory hotplug code paths.
>
> free_pcppages_bulk() and __rmqueue_pcplist()'s function signatures
> change a bit so as to accommodate these changes without affecting
> performance.
>
> No functional change intended.
>
> Signed-off-by: Nicolas Saenz Julienne <nsaenzju@...hat.com>
> ---
>
> Changes since RFC:
> - Add more info in commit message.
> - Removed __private attribute, in hindsight doesn't really fit what
> we're doing here.
> - Use raw_cpu_ptr() where relevant to avoid warnings.
>
> include/linux/mmzone.h | 10 +++--
> mm/page_alloc.c | 87 +++++++++++++++++++++++++-----------------
> mm/vmstat.c | 6 +--
> 3 files changed, 62 insertions(+), 41 deletions(-)
>
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 3fff6deca2c0..b4cb85d9c6e8 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -381,7 +381,6 @@ enum zone_watermarks {
>
> /* Fields and list protected by pagesets local_lock in page_alloc.c */
> struct per_cpu_pages {
> - int count; /* number of pages in the list */
> int high; /* high watermark, emptying needed */
> int batch; /* chunk size for buddy add/remove */
> short free_factor; /* batch scaling factor during free */
> @@ -389,8 +388,13 @@ struct per_cpu_pages {
> short expire; /* When 0, remote pagesets are drained */
> #endif
>
> - /* Lists of pages, one per migrate type stored on the pcp-lists */
> - struct list_head lists[NR_PCP_LISTS];
> + struct pcplists *lp;
> + struct pcplists {
> + /* Number of pages in the pcplists */
> + int count;
> + /* Lists of pages, one per migrate type stored on the pcp-lists */
> + struct list_head lists[NR_PCP_LISTS];
> + } __pcplists; /* Do not access directly */
> };
Perhaps it's useful to add something like: "should be accessed through ..."

Looks good otherwise.