Message-ID: <c7df099f-27f7-adc6-4e87-9903ac00cbea@amd.com>
Date: Tue, 9 Feb 2021 18:46:28 +0100
From: Christian König <christian.koenig@....com>
To: Suren Baghdasaryan <surenb@...gle.com>
Cc: John Stultz <john.stultz@...aro.org>,
lkml <linux-kernel@...r.kernel.org>,
Daniel Vetter <daniel@...ll.ch>,
Sumit Semwal <sumit.semwal@...aro.org>,
Liam Mark <lmark@...eaurora.org>,
Chris Goldsworthy <cgoldswo@...eaurora.org>,
Laura Abbott <labbott@...nel.org>,
Brian Starkey <Brian.Starkey@....com>,
Hridya Valsaraju <hridya@...gle.com>,
Sandeep Patil <sspatil@...gle.com>,
Daniel Mentz <danielmentz@...gle.com>,
Ørjan Eide <orjan.eide@....com>,
Robin Murphy <robin.murphy@....com>,
Ezequiel Garcia <ezequiel@...labora.com>,
Simon Ser <contact@...rsion.fr>,
James Jones <jajones@...dia.com>,
linux-media <linux-media@...r.kernel.org>,
dri-devel <dri-devel@...ts.freedesktop.org>
Subject: Re: [RFC][PATCH v6 1/7] drm: Add a sharable drm page-pool implementation
On 09.02.21 at 18:33, Suren Baghdasaryan wrote:
> On Tue, Feb 9, 2021 at 4:57 AM Christian König <christian.koenig@....com> wrote:
>> On 09.02.21 at 13:11, Christian König wrote:
>>> [SNIP]
>>>>>> +void drm_page_pool_add(struct drm_page_pool *pool, struct page *page)
>>>>>> +{
>>>>>> +	spin_lock(&pool->lock);
>>>>>> +	list_add_tail(&page->lru, &pool->items);
>>>>>> +	pool->count++;
>>>>>> +	atomic_long_add(1 << pool->order, &total_pages);
>>>>>> +	spin_unlock(&pool->lock);
>>>>>> +
>>>>>> +	mod_node_page_state(page_pgdat(page), NR_KERNEL_MISC_RECLAIMABLE,
>>>>>> +			    1 << pool->order);
>>>>> Hui what? What should that be good for?
>>>> This is a carryover from the ION page pool implementation:
>>>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/staging/android/ion/ion_page_pool.c?h=v5.10#n28
>>>>
>>>>
>>>> My sense is it helps with the vmstat/meminfo accounting so folks can
>>>> see the cached pages are shrinkable/freeable. This maybe falls under
>>>> other dmabuf accounting/stats discussions, so I'm happy to remove it
>>>> for now, or let the drivers using the shared page pool logic handle
>>>> the accounting themselves?
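For reference, a minimal sketch of the matching removal path that this
kind of accounting would imply; the names mirror the add path quoted
above, and the body is an assumption rather than the actual patch code:

    /* Hypothetical counterpart to drm_page_pool_add(): every page taken
     * back out of the pool must reverse the NR_KERNEL_MISC_RECLAIMABLE
     * bump, or the vmstat counter drifts upward forever. */
    static struct page *drm_page_pool_remove(struct drm_page_pool *pool)
    {
            struct page *page;

            spin_lock(&pool->lock);
            page = list_first_entry_or_null(&pool->items, struct page, lru);
            if (page) {
                    list_del(&page->lru);
                    pool->count--;
                    atomic_long_sub(1 << pool->order, &total_pages);
            }
            spin_unlock(&pool->lock);

            if (page)
                    mod_node_page_state(page_pgdat(page),
                                        NR_KERNEL_MISC_RECLAIMABLE,
                                        -(1 << pool->order));
            return page;
    }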
>> I intentionally separated out the discussion of that here.
>>
>> As far as I can see, this is just bluntly incorrect.
>>
>> Either the page is reclaimable or it is part of our pool and freeable
>> through the shrinker, but never ever both.
> IIRC the original motivation for counting ION pooled pages as
> reclaimable was to include them in /proc/meminfo's MemAvailable
> calculation. NR_KERNEL_MISC_RECLAIMABLE, defined as "reclaimable
> non-slab kernel pages", seems like a good place to account for them,
> but I might be wrong.
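For context, si_mem_available() in mm/page_alloc.c folds that counter
into the MemAvailable estimate roughly as follows (abridged from the
v5.10 sources):

    /* Abridged from si_mem_available(): reclaimable kernel memory,
     * including NR_KERNEL_MISC_RECLAIMABLE, counts toward MemAvailable
     * minus a low-watermark fudge factor. */
    reclaimable = global_node_page_state_pages(NR_SLAB_RECLAIMABLE_B) +
            global_node_page_state(NR_KERNEL_MISC_RECLAIMABLE);
    available += reclaimable - min(reclaimable / 2, wmark_low);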
Yeah, that's what I see here as well. But exactly that is utter nonsense.
Those pages are not "free" in the sense that get_free_page could return
them directly.
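A minimal sketch of the shrinker-based route instead, using the
v5.10-era shrinker API; drm_page_pool_shrink() is a hypothetical helper
that frees up to the requested number of pooled pages back to the
system:

    /* Sketch: expose the pool to reclaim through a shrinker instead of
     * marking pooled pages as NR_KERNEL_MISC_RECLAIMABLE. */
    static unsigned long drm_page_pool_shrink_count(struct shrinker *shrinker,
                                                    struct shrink_control *sc)
    {
            /* Report how many pages the pools currently hold. */
            return atomic_long_read(&total_pages);
    }

    static unsigned long drm_page_pool_shrink_scan(struct shrinker *shrinker,
                                                   struct shrink_control *sc)
    {
            /* Free up to nr_to_scan pages and report how many were freed. */
            return drm_page_pool_shrink(sc->nr_to_scan);
    }

    static struct shrinker drm_page_pool_shrinker = {
            .count_objects = drm_page_pool_shrink_count,
            .scan_objects = drm_page_pool_shrink_scan,
            .seeks = DEFAULT_SEEKS,
    };

    /* Registered once at module init: register_shrinker(&drm_page_pool_shrinker); */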
Regards,
Christian.
>
>> In the best case this just messes up the accounting, in the worst case
>> it can cause memory corruption.
>>
>> Christian.