Date:	Sun, 17 Oct 2010 12:18:48 +0900
From:	Minchan Kim <minchan.kim@...il.com>
To:	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Cc:	"linux-mm@...ck.org" <linux-mm@...ck.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [RFC][PATCH 2/3] find a contiguous range.

Hi Kame,
Sorry for the late review.

On Wed, Oct 13, 2010 at 12:17 PM, KAMEZAWA Hiroyuki
<kamezawa.hiroyu@...fujitsu.com> wrote:
> From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
>
> Unlike memory hotplug, at an allocation of contigous memory range, address
> may not be a problem. IOW, if a requester of memory wants to allocate 100M of
> of contigous memory, placement of allocated memory may not be a problem.
> So, "finding a range of memory which seems to be MOVABLE" is required.
>
> This patch adds a functon to isolate a length of memory within [start, end).

Typo: "functon" -> "function".

> This function returns a pfn which is 1st page of isolated contigous chunk

Typo: "contigous" -> "contiguous".

> of given length within [start, end).
>
> If no_search=true is passed as argument, start address is always same to

I don't like the "no_search" argument name. It would be better for the
name to convey the intent rather than the implementation.
How about "bool strict" or "ALLOC_FIXED"?
> the specified "base" addresss.
Typo: "addresss" -> "address".
Let's also add the following description:
"Some devices want to bind memory to a specific memory bank. In that
case, no_search with a fixed base address can be helpful."

>
> After isolation, free memory within this area will never be allocated.
> But some pages will remain as "Used/LRU" pages. They should be dropped by
> page reclaim or migration.

When I first read the description above, I was confused. How about this?
"After this isolates pages in the range, part of them are free, but
others may still be in use by processes. The next patch [3/3] tries to
move or reclaim those used pages via page migration/reclaim in order to
obtain a big contiguous range."

>
>
> Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
> ---
>  mm/page_isolation.c |  130 ++++++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 130 insertions(+)
>
> Index: mmotm-1008/mm/page_isolation.c
> ===================================================================
> --- mmotm-1008.orig/mm/page_isolation.c
> +++ mmotm-1008/mm/page_isolation.c
> @@ -9,6 +9,7 @@
>  #include <linux/pageblock-flags.h>
>  #include <linux/memcontrol.h>
>  #include <linux/migrate.h>
> +#include <linux/memory_hotplug.h>
>  #include <linux/mm_inline.h>
>  #include "internal.h"
>
> @@ -254,3 +255,132 @@ out:
>        return ret;
>  }
>
> +/*
> + * Functions for getting contiguous MOVABLE pages in a zone.
> + */
> +struct page_range {
> +       unsigned long base; /* Base address of searching contigouous block */

Typo: "contigouous" -> "contiguous".
Please also specify that "base" is a pfn, not a physical address.

> +       unsigned long end;
> +       unsigned long pages;/* Length of contiguous block */
> +};
> +
> +static inline unsigned long  MAX_ORDER_ALIGN(unsigned long x)
> +{
> +       return ALIGN(x, MAX_ORDER_NR_PAGES);
> +}
> +
> +static inline unsigned long MAX_ORDER_BASE(unsigned long x)
> +{
> +       return x & ~(MAX_ORDER_NR_PAGES - 1);
> +}
> +
> +int __get_contig_block(unsigned long pfn, unsigned long nr_pages, void *arg)
> +{
> +       struct page_range *blockinfo = arg;
> +       unsigned long end;
> +
> +       end = pfn + nr_pages;
> +       pfn = MAX_ORDER_ALIGN(pfn);
> +       end = MAX_ORDER_BASE(end);
> +
> +       if (end < pfn)
> +               return 0;
> +       if (end - pfn >= blockinfo->pages) {
> +               blockinfo->base = pfn;
> +               blockinfo->end = end;
> +               return 1;
> +       }
> +       return 0;
> +}
> +
> +static void __trim_zone(struct page_range *range)

Hmm..
I don't think this function name conveys enough meaning on its own.
Let's at least move the description from the body to the head of the
function:

/*
 * In most case, each zone's [start_pfn, end_pfn) has no
 * overlap between each other. But some arch allows it and
 * we need to check it here. If it happens, range end is changed
 * to only include pfns in a zone.
 */

> +{
> +       struct zone *zone;
> +       unsigned long pfn;
> +       /*
> +        * In most case, each zone's [start_pfn, end_pfn) has no
> +        * overlap between each other. But some arch allows it and
> +        * we need to check it here.
> +        */
> +       for (pfn = range->base, zone = page_zone(pfn_to_page(pfn));
> +            pfn < range->end;
> +            pfn += MAX_ORDER_NR_PAGES) {
> +
> +               if (zone != page_zone(pfn_to_page(pfn)))
> +                       break;
> +       }
> +       range->end = min(pfn, range->end);
> +       return;

Unnecessary return.

> +}
> +
> +/*
> + * This function is for finding a contiguous memory block which has length
> + * of pages and MOVABLE. If it finds, make the range of pages as ISOLATED
> + * and return the first page's pfn.
> + * If no_search==true, this function doesn't scan the range but tries to
> + * isolate the range of memory.
> + */
> +
> +static unsigned long find_contig_block(unsigned long base,
> +               unsigned long end, unsigned long pages, bool no_search)
> +{
> +       unsigned long pfn, pos;
> +       struct page_range blockinfo;
> +       int ret;
> +
> +       pages = MAX_ORDER_ALIGN(pages);
> +retry:
> +       blockinfo.base = base;
> +       blockinfo.end = end;
> +       blockinfo.pages = pages;
> +       /*
> +        * At first, check physical page layout and skip memory holes.
> +        */
> +       ret = walk_system_ram_range(base, end - base, &blockinfo,
> +               __get_contig_block);
> +       if (!ret)
> +               return 0;
> +       /* check contiguous pages in a zone */
> +       __trim_zone(&blockinfo);
> +
> +
> +       /* Ok, we found contiguous memory chunk of size. Isolate it.*/
> +       for (pfn = blockinfo.base; pfn + pages < blockinfo.end;
> +            pfn += MAX_ORDER_NR_PAGES) {
> +               /* If no_search==true, base addess should be same to 'base' */
> +               if (no_search && pfn != base)
> +                       break;
> +               /* Better code is necessary here.. */
> +               for (pos = pfn; pos < pfn + pages; pos++) {
> +                       struct page *p;
> +
> +                       if (!pfn_valid_within(pos))
> +                               break;
> +                       p = pfn_to_page(pos);
> +                       if (PageReserved(p))
> +                               break;
> +                       /* This may hit a page on per-cpu queue. */

Couldn't we drain the per-cpu pages (e.g. drain_all_pages()) before
calling this function?

> +                       if (page_count(p) && !PageLRU(p))
> +                               break;
> +                       /* Need to skip order of pages */
> +               }
> +               if (pos != pfn + pages) {
> +                       pfn = MAX_ORDER_BASE(pos);
> +                       continue;
> +               }
> +               /*
> +                * Now, we know [base,end) of a contiguous chunk.
> +                * Don't need to take care of memory holes.
> +                */
> +               if (!start_isolate_page_range(pfn, pfn + pages))
> +                       return pfn;
> +       }
> +
> +       /* failed */
> +       if (!no_search && blockinfo.end + pages < end) {
> +               /* Move base address and find the next block of RAM. */
> +               base = blockinfo.end;
> +               goto retry;
> +       }
> +       return 0;
> +}
>
>



-- 
Kind regards,
Minchan Kim