Message-ID: <20130814110556.GH10849@gmail.com>
Date: Wed, 14 Aug 2013 13:05:56 +0200
From: Ingo Molnar <mingo@...nel.org>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Mike Travis <travis@....com>, Nathan Zimmer <nzimmer@....com>,
Peter Anvin <hpa@...or.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-mm <linux-mm@...ck.org>, Robin Holt <holt@....com>,
Rob Landley <rob@...dley.net>,
Daniel J Blueman <daniel@...ascale-asia.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Yinghai Lu <yinghai@...nel.org>, Mel Gorman <mgorman@...e.de>
Subject: Re: [RFC v3 0/5] Transparent on-demand struct page initialization embedded in the buddy allocator
* Linus Torvalds <torvalds@...ux-foundation.org> wrote:
> [...]
>
> Ok, so I don't know all the issues, and in many ways I don't even really
> care. You could do it other ways, I don't think this is a big deal. The
> part I hate is the runtime hook into the core MM page allocation code,
> so I'm just throwing out any random thing that comes to my mind that
> could be used to avoid that part.
So, my hope was that it's possible to have a single, simple, zero-cost
runtime check [zero cost for already-initialized pages], because it can be
merged into the page flag mask checks already present here, which are
executed for every freshly allocated page:
static inline int check_new_page(struct page *page)
{
        if (unlikely(page_mapcount(page) |
                (page->mapping != NULL) |
                (atomic_read(&page->_count) != 0) |
                (page->flags & PAGE_FLAGS_CHECK_AT_PREP) |
                (mem_cgroup_bad_page_check(page)))) {
                bad_page(page);
                return 1;
        }
        return 0;
}
We already run this check for every newly allocated page, so the
initialization check could hide inside PAGE_FLAGS_CHECK_AT_PREP in a
zero-cost fashion.
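To make that concrete, here's a minimal sketch of the folding - not actual
patch code: it assumes a hypothetical PG_uninitialized flag that is part of
the PAGE_FLAGS_CHECK_AT_PREP mask and is the only bit left set by the
boot-time clearing pass, plus a made-up init_page_late() helper.
Already-initialized pages pay nothing beyond the flag test we do anyway:

static inline int check_new_page(struct page *page)
{
        if (unlikely(page_mapcount(page) |
                (page->mapping != NULL) |
                (atomic_read(&page->_count) != 0) |
                (page->flags & PAGE_FLAGS_CHECK_AT_PREP) |
                (mem_cgroup_bad_page_check(page)))) {

                /*
                 * Slow path, taken at most once per page: the page is
                 * merely uninitialized, not corrupted. init_page_late()
                 * sets up the struct page and clears PG_uninitialized.
                 */
                if (PageUninitialized(page)) {
                        init_page_late(page);
                        return 0;
                }
                bad_page(page);
                return 1;
        }
        return 0;
}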
I'd not do any of the ensure_page_is_initialized() or
__expand_page_initialization() complications from this patch-set: each page
head represents itself and gets handled individually when check_new_page()
runs on it.
During regular bootup we'd initialize like before, except we don't set up
the page heads but only memset() them to zero. With each page head being
32 bytes, that's 8 GB of page head memory to clear per 1 TB of RAM
(1 TB / 4 KB pages = 2^28 page heads, times 32 bytes each); with 16 TB
that's 128 GB to clear. That ought to be possible to do rather quickly,
perhaps with some smart SMP cross-call approach that makes sure that each
memset is done in a node-local fashion - see the sketch below. [*]
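For illustration only, here's one shape such a node-local clear might take.
It uses work_on_cpu() rather than a raw cross-call, since a multi-GB
memset can't run in interrupt context; clear_node_page_heads() is a made-up
helper, a contiguous per-node mem_map is assumed, and early-boot ordering
issues are ignored:

/* Hypothetical: clear one node's struct page array node-locally. */
static long clear_node_page_heads(void *arg)
{
        pg_data_t *pgdat = arg;

        /* We run on a CPU of pgdat's node, so this memset is local: */
        memset(pgdat->node_mem_map, 0,
               pgdat->node_spanned_pages * sizeof(struct page));

        return 0;
}

static void clear_all_page_heads(void)
{
        int nid;

        for_each_online_node(nid) {
                int cpu = cpumask_first(cpumask_of_node(nid));

                /* Run the memset on a CPU that is local to the node: */
                work_on_cpu(cpu, clear_node_page_heads, NODE_DATA(nid));
        }
}

A real version would presumably queue the per-node work items in parallel
and flush them at the end, rather than clearing the nodes one after another
as this sketch does.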
Such an approach should IMO be far smaller and less invasive than the
patches presented so far: it should be below 100 lines or so.
I don't know why there's such a big difference between the theory I
outlined and the invasive patch-set implemented so far in practice;
perhaps I'm missing some complication. I was trying to probe that
difference before giving up on the idea and punting back to the async
hotplug-ish approach, which would obviously work well too.
All in all, I think async init just hides the real problem: there's no
way memory init should take this long.
Thanks,
Ingo
[*] alternatively, maybe the main performance problem is that node-local
    memory is set up on a remote (boot) node? In that case I'd try to
    optimize it by migrating the memory init code from node to node via
    set_cpus_allowed(), tracking the node whose struct page array is
    being initialized.
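    A rough sketch of that migration, using set_cpus_allowed_ptr() (the
    current form of set_cpus_allowed()) and a made-up
    init_node_page_heads() helper:

static void init_all_nodes_page_heads(void)
{
        int nid;

        for_each_online_node(nid) {
                /*
                 * Migrate ourselves onto the node whose struct page
                 * array we're about to initialize, so that all the
                 * writes are node-local:
                 */
                set_cpus_allowed_ptr(current, cpumask_of_node(nid));

                init_node_page_heads(NODE_DATA(nid));   /* hypothetical */
        }
}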