Message-ID: <a9dd7f8a-ef30-9eb4-4834-37801d43b96f@amd.com>
Date: Tue, 9 Feb 2021 13:57:17 +0100
From: Christian König <christian.koenig@....com>
To: John Stultz <john.stultz@...aro.org>
Cc: lkml <linux-kernel@...r.kernel.org>,
Daniel Vetter <daniel@...ll.ch>,
Sumit Semwal <sumit.semwal@...aro.org>,
Liam Mark <lmark@...eaurora.org>,
Chris Goldsworthy <cgoldswo@...eaurora.org>,
Laura Abbott <labbott@...nel.org>,
Brian Starkey <Brian.Starkey@....com>,
Hridya Valsaraju <hridya@...gle.com>,
Suren Baghdasaryan <surenb@...gle.com>,
Sandeep Patil <sspatil@...gle.com>,
Daniel Mentz <danielmentz@...gle.com>,
Ørjan Eide <orjan.eide@....com>,
Robin Murphy <robin.murphy@....com>,
Ezequiel Garcia <ezequiel@...labora.com>,
Simon Ser <contact@...rsion.fr>,
James Jones <jajones@...dia.com>,
linux-media <linux-media@...r.kernel.org>,
dri-devel <dri-devel@...ts.freedesktop.org>
Subject: Re: [RFC][PATCH v6 1/7] drm: Add a sharable drm page-pool
implementation
On 09.02.21 at 13:11, Christian König wrote:
> [SNIP]
>>>> +void drm_page_pool_add(struct drm_page_pool *pool, struct page *page)
>>>> +{
>>>> + spin_lock(&pool->lock);
>>>> + list_add_tail(&page->lru, &pool->items);
>>>> + pool->count++;
>>>> + atomic_long_add(1 << pool->order, &total_pages);
>>>> + spin_unlock(&pool->lock);
>>>> +
>>>> + mod_node_page_state(page_pgdat(page), NR_KERNEL_MISC_RECLAIMABLE,
>>>> + 1 << pool->order);
>>> Huh, what? What is that supposed to be good for?
>> This is a carryover from the ION page pool implementation:
>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/staging/android/ion/ion_page_pool.c?h=v5.10#n28
>>
>> My sense is that it helps with the vmstat/meminfo accounting, so folks can
>> see that the cached pages are shrinkable/freeable. This perhaps falls under
>> the other dmabuf accounting/stats discussions, so I'm happy to remove it
>> for now, or to let the drivers using the shared page-pool logic handle
>> the accounting themselves?
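
For reference, the ION pool linked above pairs that increment with a matching
decrement when a page leaves the pool again. A removal path mirroring the
quoted drm_page_pool_add() would look roughly like the sketch below; the
function name drm_page_pool_remove() and its exact shape are illustrative,
not the actual code from the patch:

    void drm_page_pool_remove(struct drm_page_pool *pool, struct page *page)
    {
            spin_lock(&pool->lock);
            list_del(&page->lru);
            pool->count--;
            atomic_long_sub(1 << pool->order, &total_pages);
            spin_unlock(&pool->lock);

            /* Undo the NR_KERNEL_MISC_RECLAIMABLE bump from drm_page_pool_add() */
            mod_node_page_state(page_pgdat(page), NR_KERNEL_MISC_RECLAIMABLE,
                                -(1 << pool->order));
    }

The point of the symmetry is that every page counted as NR_KERNEL_MISC_RECLAIMABLE
while it sits in the pool is uncounted again the moment it is handed back out or
freed by the shrinker.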
I intentionally split the discussion of this point out into a separate thread.

As far as I can see this is just bluntly incorrect: either the page is
reclaimable, or it is part of our pool and freeable through the shrinker,
but never ever both.

In the best case this just messes up the accounting; in the worst case it
can cause memory corruption.
Christian.
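
For context on "freeable through the shrinker" above: a pool that is drained by
a shrinker reports its pages through the shrinker's count_objects()/scan_objects()
callbacks rather than through NR_KERNEL_MISC_RECLAIMABLE. A minimal sketch of
what that looks like, assuming the drm_page_pool fields from the quoted patch;
drm_page_pool_free_pages() below is a hypothetical helper, not an existing
function:

    static unsigned long drm_page_pool_shrink_count(struct shrinker *shrinker,
                                                    struct shrink_control *sc)
    {
            /* Report how many pooled pages the shrinker could free. */
            return atomic_long_read(&total_pages);
    }

    static unsigned long drm_page_pool_shrink_scan(struct shrinker *shrinker,
                                                   struct shrink_control *sc)
    {
            /*
             * Release up to sc->nr_to_scan pages from the pool and return
             * how many were actually freed (drm_page_pool_free_pages() is
             * an illustrative helper).
             */
            return drm_page_pool_free_pages(sc->nr_to_scan);
    }

    static struct shrinker drm_page_pool_shrinker = {
            .count_objects = drm_page_pool_shrink_count,
            .scan_objects  = drm_page_pool_shrink_scan,
            .seeks         = DEFAULT_SEEKS,
    };

    /* Registered once at init time: register_shrinker(&drm_page_pool_shrinker); */

With this arrangement the pooled pages are already visible to memory reclaim via
the shrinker, which is why also adding them to NR_KERNEL_MISC_RECLAIMABLE would
count the same memory as reclaimable twice.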