Message-ID: <20130812031647.GB8043@kvack.org>
Date:	Sun, 11 Aug 2013 23:16:47 -0400
From:	Benjamin LaHaise <bcrl@...ck.org>
To:	Minchan Kim <minchan@...nel.org>
Cc:	Krzysztof Kozlowski <k.kozlowski@...sung.com>,
	Seth Jennings <sjenning@...ux.vnet.ibm.com>,
	linux-mm@...ck.org, linux-kernel@...r.kernel.org,
	Andrew Morton <akpm@...ux-foundation.org>,
	Mel Gorman <mgorman@...e.de>,
	Bartlomiej Zolnierkiewicz <b.zolnierkie@...sung.com>,
	Marek Szyprowski <m.szyprowski@...sung.com>,
	Kyungmin Park <kyungmin.park@...sung.com>,
	Dave Hansen <dave.hansen@...el.com>, guz.fnst@...fujitsu.com
Subject: Re: [RFC PATCH v2 0/4] mm: reclaim zbud pages on migration and compaction

Hello Minchan,

On Mon, Aug 12, 2013 at 11:25:35AM +0900, Minchan Kim wrote:
> Hello,
> 
> On Fri, Aug 09, 2013 at 12:22:16PM +0200, Krzysztof Kozlowski wrote:
> > Hi,
> > 
> > Currently zbud pages are not movable and they cannot be allocated from CMA
> > region. These patches try to address the problem by:
> 
> The zcache, zram, and GUP pages are in the same situation for
> memory hotplug and/or CMA.
> 
> > 1. Adding a new form of reclaim of zbud pages.
> > 2. Reclaiming zbud pages during migration and compaction.
> > 3. Allocating zbud pages with __GFP_RECLAIMABLE flag.
> 
> So I'd like to solve it with a general approach.
> 
> Each subsystem or GUP caller that wants to pin pages for a long time
> should create its own migration handler and register the page with the
> pin-page control subsystem, like this:
> 
> driver/foo.c
> 
> int foo_migrate(struct page *page, void *private);
> 
> static struct pin_page_owner foo_pin_owner = {
>         .migrate = foo_migrate,
> };
> 
> int foo_allocate(void)
> {
>         struct page *newpage = alloc_pages(GFP_KERNEL, 0);
> 
>         set_pinned_page(newpage, &foo_pin_owner);
>         return 0;
> }
> 
> And in compaction.c, or wherever we want to move/reclaim the page, the
> general VM can ask the owner when it finds a pinned page.
> 
> mm/compaction.c
> 
>         if (PagePinned(page)) {
>                 struct pin_page_info *info = get_page_pin_info(page);
> 
>                 info->migrate(page, info->private);
>         }
> 
> The only hurdle is that we would have to introduce a new page flag, but
> I believe that if we all agree on this approach, we can find a solution
> in the end.
> 
> What do you think?
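
If I'm reading the proposal right, the registry above boils down to
something like the sketch below: one global hash keyed by struct page,
protected by a single spinlock.  set_pinned_page(), get_page_pin_info()
and pin_page_owner are taken from your mail; everything else is just
illustrative scaffolding, not working code from any tree.

#include <linux/hashtable.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct pin_page_owner {
        int (*migrate)(struct page *page, void *private);
};

struct pin_page_info {
        struct hlist_node node;         /* hash chain */
        struct page *page;              /* key */
        int (*migrate)(struct page *page, void *private);
        void *private;
};

/* One hash table and one lock shared by every pinned page in the system. */
static DEFINE_HASHTABLE(pin_page_hash, 10);     /* 2^10 buckets */
static DEFINE_SPINLOCK(pin_page_lock);

int set_pinned_page(struct page *page, struct pin_page_owner *owner)
{
        struct pin_page_info *info = kmalloc(sizeof(*info), GFP_KERNEL);

        if (!info)
                return -ENOMEM;

        info->page = page;
        info->migrate = owner->migrate;
        info->private = NULL;           /* owner-specific cookie would go here */

        spin_lock(&pin_page_lock);
        hash_add(pin_page_hash, &info->node, (unsigned long)page);
        spin_unlock(&pin_page_lock);

        return 0;
}

struct pin_page_info *get_page_pin_info(struct page *page)
{
        struct pin_page_info *info;

        spin_lock(&pin_page_lock);
        hash_for_each_possible(pin_page_hash, info, node, (unsigned long)page) {
                if (info->page == page) {
                        spin_unlock(&pin_page_lock);
                        return info;
                }
        }
        spin_unlock(&pin_page_lock);

        return NULL;
}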

I don't like this approach.  There will be too many collisions in the
hash that's been implemented (read: I don't think you can get away with
a naive implementation for core infrastructure that has to suit all
users), you've got a global spin lock, and it doesn't take NUMA issues
into account.  The address_space migratepage method doesn't have those
issues (at least where it is usable, as in aio's use case).
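
For comparison, the migratepage route hangs the callback off the mapping
the pinned pages belong to, so there is no global table to search and no
shared lock: compaction finds the mapping through page->mapping and calls
its ->migratepage() directly.  A rough sketch, loosely modelled on what
aio does with its ring pages -- the foo_* names are illustrative, not the
actual fs/aio.c code:

#include <linux/fs.h>
#include <linux/migrate.h>

static int foo_migratepage(struct address_space *mapping,
                           struct page *newpage, struct page *page,
                           enum migrate_mode mode)
{
        int rc;

        /* Generic helper: re-points the mapping's radix tree slot at
         * newpage and copies the contents and page state across. */
        rc = migrate_page(mapping, newpage, page, mode);
        if (rc != MIGRATEPAGE_SUCCESS)
                return rc;

        /* The subsystem then fixes up its own reference(s) to the pinned
         * page -- e.g. aio's ring page array -- under its own,
         * per-instance lock rather than a global one. */

        return MIGRATEPAGE_SUCCESS;
}

static const struct address_space_operations foo_aops = {
        .migratepage    = foo_migratepage,
};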

If you're going to go down this path, you'll have to decide whether *all*
users of pinned pages must subscribe to supporting the un-pinning of pages,
and that means taking a real hard look at how O_DIRECT pins pages.  Once
you start thinking about that, you'll find that addressing the performance
concerns is an essential part of any design work in this area.
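
To make the O_DIRECT point concrete: the direct-I/O path essentially does
the following on every request -- take a reference on each page of the
user's buffer, issue the DMA, drop the references when the I/O completes.
There is no natural place to register a migration callback, and the path
is hot enough that any extra per-page bookkeeping shows up immediately.
(A simplified sketch of the pinning pattern, not the actual fs/direct-io.c
code.)

#include <linux/mm.h>
#include <linux/pagemap.h>

static int pin_user_buffer(unsigned long uaddr, int nr_pages, int write,
                           struct page **pages)
{
        int got;

        /* Grab a reference on each user page so it can be a DMA target. */
        got = get_user_pages_fast(uaddr, nr_pages, write, pages);
        if (got < nr_pages) {
                /* Release whatever we did manage to pin. */
                while (got-- > 0)
                        page_cache_release(pages[got]);
                return -EFAULT;
        }

        return got;
}

static void unpin_user_buffer(struct page **pages, int nr_pages)
{
        int i;

        /* References are only dropped once the I/O has completed, so the
         * pages stay pinned -- and unmovable -- for the whole request. */
        for (i = 0; i < nr_pages; i++)
                page_cache_release(pages[i]);
}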

		-ben
-- 
"Thought is the essence of where you are now."
