Message-ID: <830691cf-cb96-443e-b6eb-2adfe2edd587@arm.com>
Date: Tue, 30 Jan 2024 11:20:56 +0530
From: Anshuman Khandual <anshuman.khandual@....com>
To: Alexandru Elisei <alexandru.elisei@....com>, catalin.marinas@....com,
 will@...nel.org, oliver.upton@...ux.dev, maz@...nel.org,
 james.morse@....com, suzuki.poulose@....com, yuzenghui@...wei.com,
 arnd@...db.de, akpm@...ux-foundation.org, mingo@...hat.com,
 peterz@...radead.org, juri.lelli@...hat.com, vincent.guittot@...aro.org,
 dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
 mgorman@...e.de, bristot@...hat.com, vschneid@...hat.com,
 mhiramat@...nel.org, rppt@...nel.org, hughd@...gle.com
Cc: pcc@...gle.com, steven.price@....com, vincenzo.frascino@....com,
 david@...hat.com, eugenis@...gle.com, kcc@...gle.com, hyesoo.yu@...sung.com,
 linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
 kvmarm@...ts.linux.dev, linux-fsdevel@...r.kernel.org,
 linux-arch@...r.kernel.org, linux-mm@...ck.org,
 linux-trace-kernel@...r.kernel.org
Subject: Re: [PATCH RFC v3 09/35] mm: cma: Introduce cma_remove_mem()



On 1/25/24 22:12, Alexandru Elisei wrote:
> Memory is added to CMA with cma_declare_contiguous_nid() and
> cma_init_reserved_mem(). This memory is then put on the MIGRATE_CMA list in
> cma_init_reserved_areas(), where the page allocator can make use of it.

cma_declare_contiguous_nid() reserves memory in memblock and marks it
for subsequent CMA usage, whereas cma_init_reserved_areas() activates
these memory areas through init_cma_reserved_pageblock(). The standard
page allocator only receives this memory via free_reserved_page(), and
only if the pageblock activation fails.
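
For context, the split is visible from how a platform typically sets this
up. A hypothetical sketch (names are made up, arguments follow the current
cma_declare_contiguous_nid() signature):

	#include <linux/cma.h>
	#include <linux/sizes.h>
	#include <linux/numa.h>

	static struct cma *my_cma;

	/* Early boot, while memblock is still the allocator: this only
	 * reserves the range and records it in cma_areas[]. */
	void __init my_platform_reserve(void)
	{
		if (cma_declare_contiguous_nid(0, SZ_64M, 0, 0, 0, false,
					       "my-cma", &my_cma,
					       NUMA_NO_NODE))
			pr_warn("my-cma: memblock reservation failed\n");
	}

	/* Activation - init_cma_reserved_pageblock() marking the
	 * pageblocks MIGRATE_CMA - happens much later, from the
	 * core_initcall() that runs cma_init_reserved_areas(). */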

> 
> If a device manages multiple CMA areas, and there's an error when one of
> the areas is added to CMA, there is no mechanism for the device to prevent

What kind of error? init_cma_reserved_pageblock() failing? But that will
not happen until cma_init_reserved_areas() runs.

> the rest of the areas, which were added before the error occured, from
> being later added to the MIGRATE_CMA list.

Why is this mechanism required? cma_init_reserved_areas() scans over all
CMA areas and tries to activate each of them sequentially. Why is that not
sufficient?
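
As far as I can see, an individual activation failure is already
self-contained - from memory, the error path in cma_activate_area() does
roughly this (paraphrased, not verbatim):

	out_error:
		/* the pages are useless for CMA, expose them to the buddy */
		for (pfn = base_pfn; pfn < base_pfn + cma->count; pfn++)
			free_reserved_page(pfn_to_page(pfn));
		totalcma_pages -= cma->count;
		cma->count = 0;
		pr_err("CMA area %s could not be activated\n", cma->name);

So a failing area drops itself without affecting the areas that follow it.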

> 
> Add cma_remove_mem() which allows a previously reserved CMA area to be
> removed and thus it cannot be used by the page allocator.

Successfully activated CMA areas do not get used by the buddy allocator.

> 
> Signed-off-by: Alexandru Elisei <alexandru.elisei@....com>
> ---
> 
> Changes since rfc v2:
> 
> * New patch.
> 
>  include/linux/cma.h |  1 +
>  mm/cma.c            | 30 +++++++++++++++++++++++++++++-
>  2 files changed, 30 insertions(+), 1 deletion(-)
> 
> diff --git a/include/linux/cma.h b/include/linux/cma.h
> index e32559da6942..787cbec1702e 100644
> --- a/include/linux/cma.h
> +++ b/include/linux/cma.h
> @@ -48,6 +48,7 @@ extern int cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
>  					unsigned int order_per_bit,
>  					const char *name,
>  					struct cma **res_cma);
> +extern void cma_remove_mem(struct cma **res_cma);
>  extern struct page *cma_alloc(struct cma *cma, unsigned long count, unsigned int align,
>  			      bool no_warn);
>  extern int cma_alloc_range(struct cma *cma, unsigned long start, unsigned long count,
> diff --git a/mm/cma.c b/mm/cma.c
> index 4a0f68b9443b..2881bab12b01 100644
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -147,8 +147,12 @@ static int __init cma_init_reserved_areas(void)
>  {
>  	int i;
>  
> -	for (i = 0; i < cma_area_count; i++)
> +	for (i = 0; i < cma_area_count; i++) {
> +		/* Region was removed. */
> +		if (!cma_areas[i].count)
> +			continue;

Skip a previously removed CMA area (whose ->count is now zeroed out)?

>  		cma_activate_area(&cma_areas[i]);
> +	}
>  
>  	return 0;
>  }

cma_init_reserved_areas() gets called via core_initcall(). So the
platform/device somehow needs to call cma_remove_mem() before
core_initcall() runs? This might be time sensitive.
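
I guess something like the following would be needed (hypothetical sketch;
my_cma and my_setup_failed are made-up names):

	static struct cma *my_cma;	/* saved from cma_init_reserved_mem() */

	static int __init my_cma_fixup(void)
	{
		/* Early initcalls run from do_pre_smp_initcalls(), i.e.
		 * before the core_initcall() that triggers
		 * cma_init_reserved_areas(). */
		if (my_setup_failed)
			cma_remove_mem(&my_cma);
		return 0;
	}
	early_initcall(my_cma_fixup);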

> @@ -216,6 +220,30 @@ int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
>  	return 0;
>  }
>  
> +/**
> + * cma_remove_mem() - remove cma area
> + * @res_cma: Pointer to the cma region.
> + *
> + * This function removes a cma region created with cma_init_reserved_mem(). The
> + * ->count is set to 0.
> + */
> +void __init cma_remove_mem(struct cma **res_cma)
> +{
> +	struct cma *cma;
> +
> +	if (WARN_ON_ONCE(!res_cma || !(*res_cma)))
> +		return;
> +
> +	cma = *res_cma;
> +	if (WARN_ON_ONCE(!cma->count))
> +		return;
> +
> +	totalcma_pages -= cma->count;
> +	cma->count = 0;
> +
> +	*res_cma = NULL;
> +}
> +
>  /**
>   * cma_declare_contiguous_nid() - reserve custom contiguous area
>   * @base: Base address of the reserved area optional, use 0 for any

But first, please do explain what errors the device or platform might see
on a previously marked CMA area, such that removing it this way - thereby
preventing its activation via cma_init_reserved_areas() - becomes
necessary.
