Message-ID: <20200407152544.GA9557@carbon.lan>
Date: Tue, 7 Apr 2020 08:25:44 -0700
From: Roman Gushchin <guro@...com>
To: Michal Hocko <mhocko@...nel.org>
CC: Andrew Morton <akpm@...ux-foundation.org>,
Aslan Bakirov <aslan@...com>, <linux-mm@...ck.org>,
<kernel-team@...com>, <linux-kernel@...r.kernel.org>,
Rik van Riel <riel@...riel.com>,
Mike Kravetz <mike.kravetz@...cle.com>,
Andreas Schaufler <andreas.schaufler@....de>,
Randy Dunlap <rdunlap@...radead.org>,
Joonsoo Kim <js1304@...il.com>
Subject: Re: [PATCH v4 2/2] mm: hugetlb: optionally allocate gigantic hugepages using cma
On Tue, Apr 07, 2020 at 09:03:31AM +0200, Michal Hocko wrote:
> On Mon 06-04-20 18:04:31, Roman Gushchin wrote:
> [...]
> My ack still applies but I have only noticed two minor things now.
Hello, Michal!
>
> [...]
> > @@ -1281,8 +1308,14 @@ static void update_and_free_page(struct hstate *h, struct page *page)
> > set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
> > set_page_refcounted(page);
> > if (hstate_is_gigantic(h)) {
> > + /*
> > + * Temporarily drop the hugetlb_lock, because
> > + * we might block in free_gigantic_page().
> > + */
> > + spin_unlock(&hugetlb_lock);
> > destroy_compound_gigantic_page(page, huge_page_order(h));
> > free_gigantic_page(page, huge_page_order(h));
> > + spin_lock(&hugetlb_lock);
>
> This is OK with the current code because existing paths do not have to
> revalidate the state AFAICS but it is a bit subtle. I have checked the
> cma_free path and it can only sleep on the cma->lock unless I am missing
> something. This lock is only used for cma bitmap manipulation and the
> mutex sounds like an overkill there and it can be replaced by a
> spinlock.
>
> Sounds like a follow up patch material to me.
I had the same idea and even posted a patch:
https://lore.kernel.org/linux-mm/20200403174559.GC220160@carbon.lan/T/#m87be98bdacda02cea3dd6759b48a28bd23f29ff0
However, Joonsoo pointed out that in some cases the bitmap operation
might take too long to be performed under a spinlock.
Alternatively, we could implement an asynchronous delayed release on the
cma side; I just don't know if it's worth the additional code and complexity.
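To illustrate the idea, here is a minimal, completely untested sketch.
cma_release_async() and struct cma_deferred_free are made-up names,
nothing like this exists in mm/cma.c today:

struct cma_deferred_free {
	struct work_struct work;
	struct cma *cma;
	const struct page *pages;
	unsigned int count;
};

static void cma_deferred_free_work(struct work_struct *work)
{
	struct cma_deferred_free *df =
		container_of(work, struct cma_deferred_free, work);

	/* takes cma->lock, but from a context which is allowed to sleep */
	cma_release(df->cma, df->pages, df->count);
	kfree(df);
}

/* non-blocking variant, safe to call with spinlocks held */
static bool cma_release_async(struct cma *cma, const struct page *pages,
			      unsigned int count)
{
	struct cma_deferred_free *df = kmalloc(sizeof(*df), GFP_ATOMIC);

	if (!df)
		return false;

	df->cma = cma;
	df->pages = pages;
	df->count = count;
	INIT_WORK(&df->work, cma_deferred_free_work);
	schedule_work(&df->work);

	return true;
}

With something like this, update_and_free_page() wouldn't have to drop
hugetlb_lock at all. But again, I'm not sure the extra machinery is
justified.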
>
> [...]
> > + for_each_node_state(nid, N_ONLINE) {
> > + int res;
> > +
> > + size = min(per_node, hugetlb_cma_size - reserved);
> > + size = round_up(size, PAGE_SIZE << order);
> > +
> > + res = cma_declare_contiguous_nid(0, size, 0, PAGE_SIZE << order,
> > + 0, false, "hugetlb",
> > + &hugetlb_cma[nid], nid);
> > + if (res) {
> > + pr_warn("hugetlb_cma: reservation failed: err %d, node %d",
> > + res, nid);
> > + break;
>
> Do we really have to break out after a single node failure? There might
> be other nodes that can satisfy the allocation. You are not cleaning up
> previous allocations so there is a partial state and then it would make
> more sense to me to simply s@break@continue@ here.
But then we'd have to iterate over all nodes in alloc_gigantic_page()?
Currently, if hugetlb_cma[0] is NULL, it immediately falls back to the
non-CMA allocation path.
Actually, I don't know how realistic use cases with complex node
configurations are, where hugetlb_cma areas can be reserved only on some
of the nodes. I'd leave it until we have a real-world example; then
we'll probably want something more sophisticated anyway...
I have no strong opinion here, so if you really think we should s/break/continue,
I'm fine with it too.
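FWIW, the s/break/continue variant, together with the corresponding
allocation-side change, would look roughly like this (untested, variable
names taken from the patch):

	for_each_node_state(nid, N_ONLINE) {
		...
		res = cma_declare_contiguous_nid(0, size, 0, PAGE_SIZE << order,
						 0, false, "hugetlb",
						 &hugetlb_cma[nid], nid);
		if (res) {
			pr_warn("hugetlb_cma: reservation failed: err %d, node %d",
				res, nid);
			/* keep trying the remaining nodes */
			continue;
		}
		...
	}

and then alloc_gigantic_page() would have to skip empty per-node areas
instead of treating hugetlb_cma[0] == NULL as "no cma at all":

	for_each_node_mask(node, *nodemask) {
		if (!hugetlb_cma[node])
			continue;

		page = cma_alloc(hugetlb_cma[node], nr_pages,
				 huge_page_order(h), true);
		if (page)
			return page;
	}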
Thanks!