Message-ID: <e061f44c-c1f9-d02a-59db-0cd9b213df6f@nextfour.com>
Date: Fri, 10 Jul 2020 10:22:09 +0300
From: Mika Penttilä <mika.penttila@...tfour.com>
To: Alex Shi <alex.shi@...ux.alibaba.com>,
"Kirill A. Shutemov" <kirill@...temov.name>,
Matthew Wilcox <willy@...radead.org>
Cc: Johannes Weiner <hannes@...xchg.org>,
Linux-MM <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Hugh Dickins <hughd@...gle.com>
Subject: Re: a question of split_huge_page
On 10.7.2020 10.00, Alex Shi wrote:
>
> On 2020/7/10 at 1:28 PM, Mika Penttilä wrote:
>>> Thanks a lot for the quick reply!
>>> What I am confused about is the call chain: __iommu_dma_alloc_pages()
>>> to split_huge_page(). In that function, the page that gets split is
>>> page = alloc_pages_node(nid, alloc_flags, order);
>>> If those pages were added to the lru, they could be reclaimed and lost,
>>> which would be a panic bug. But in fact, this has never happened in a long time.
>>> I also put a BUG() at that line; it was never triggered by LTP or run_vmtests.
>> In __iommu_dma_alloc_pages(), after split_huge_page(), who takes a
>> reference on the tail pages? It seems the tail pages are freed, yet the
>> function erroneously returns them in the pages[] array for use?
>>
> Why do you say so? It looks like the tail pages are returned and used via
> pages = __iommu_dma_alloc_pages() in iommu_dma_alloc_remap(),
> and are still on the node's lru. Is this right?
>
> thanks!
IMHO they are new pages coming from alloc_pages_node(), so they are not
on the lru. And split_huge_page() frees the unpinned tail pages back to the
page allocator.
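
For reference, the loop under discussion in __iommu_dma_alloc_pages()
(drivers/iommu/dma-iommu.c) looked roughly like the sketch below in the
kernels of that era. This is paraphrased from memory, not a verbatim
quote, so treat the details as approximate:

```c
/* Sketch of the allocation loop in __iommu_dma_alloc_pages() --
 * approximate, not a verbatim copy of the kernel source. */
page = alloc_pages_node(nid, alloc_flags, order);  /* fresh pages, never on any lru */
if (!page)
        continue;            /* try the next (smaller) order */
if (!order)
        break;               /* order-0 page: nothing to split */
if (!PageCompound(page)) {
        split_page(page, order);   /* distributes the refcount over subpages */
        break;
} else if (!split_huge_page(page)) {
        break;               /* split succeeded; the tail-page refcount
                              * question above applies here */
}
__free_pages(page, order);   /* split failed: free and retry a smaller order */
```

Since the pages come straight from alloc_pages_node(), they were never
mapped or placed on an lru list before the split, which is the point
being made above.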
Thanks,
Mika