Message-ID: <07ed1421-89f8-8845-b254-21730207c185@amd.com>
Date: Mon, 19 Apr 2021 18:37:13 +0200
From: Christian König <christian.koenig@....com>
To: Michal Hocko <mhocko@...e.com>
Cc: Peter.Enderborg@...y.com, linux-kernel@...r.kernel.org,
linux-fsdevel@...r.kernel.org, sumit.semwal@...aro.org,
adobriyan@...il.com, akpm@...ux-foundation.org,
songmuchun@...edance.com, guro@...com, shakeelb@...gle.com,
neilb@...e.de, samitolvanen@...gle.com, rppt@...nel.org,
linux-media@...r.kernel.org, dri-devel@...ts.freedesktop.org,
linaro-mm-sig@...ts.linaro.org, willy@...radead.org
Subject: Re: [PATCH v4] dma-buf: Add DmaBufTotal counter in meminfo
On 19.04.21 at 18:11, Michal Hocko wrote:
> On Mon 19-04-21 17:44:13, Christian König wrote:
>> On 19.04.21 at 17:19, Peter.Enderborg@...y.com wrote:
>>> On 4/19/21 5:00 PM, Michal Hocko wrote:
>>>> On Mon 19-04-21 12:41:58, Peter.Enderborg@...y.com wrote:
>>>>> On 4/19/21 2:16 PM, Michal Hocko wrote:
>>>>>> On Sat 17-04-21 12:40:32, Peter Enderborg wrote:
>>>>>>> This adds a counter for the total memory used by dma-bufs. Details
>>>>>>> can be found in debugfs, however debugfs is not for everyone
>>>>>>> and not always available. dma-bufs are allocated indirectly by
>>>>>>> userspace, so with this value we can monitor and detect
>>>>>>> userspace applications that have problems.
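For reference, the accounting such a patch boils down to is a single global
counter that the dma-buf core updates on every export and release. Below is a
minimal sketch of the idea; it is not the actual patch, and all identifiers
are made up for illustration:

#include <linux/atomic.h>
#include <linux/mm.h>

/* Sketch only: one global byte counter maintained by the dma-buf core. */
static atomic_long_t dma_buf_total_bytes = ATOMIC_LONG_INIT(0);

/* Would be called from the export path once a buffer has been created. */
static void dma_buf_account_export(size_t size)
{
	atomic_long_add(size, &dma_buf_total_bytes);
}

/* Would be called from the release path when the last reference drops. */
static void dma_buf_account_release(size_t size)
{
	atomic_long_sub(size, &dma_buf_total_bytes);
}

/* Hypothetical accessor for /proc/meminfo and friends, in pages. */
unsigned long dma_buf_total_pages(void)
{
	return atomic_long_read(&dma_buf_total_bytes) >> PAGE_SHIFT;
}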
>>>>>> The changelog would benefit from more background on why this is needed,
>>>>>> and who is the primary consumer of that value.
>>>>>>
>>>>>> I cannot really comment on the dma-buf internals but I have two remarks.
>>>>>> Documentation/filesystems/proc.rst needs an update with the counter
>>>>>> explanation and secondly is this information useful for OOM situations
>>>>>> analysis? If yes then show_mem should dump the value as well.
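Concretely, the two reporting paths mentioned here would amount to roughly the
following. This is only a sketch: show_val_kb() and show_free_areas() are
existing helpers, while dma_buf_total_pages() is the hypothetical accessor from
the sketch above.

/* fs/proc/meminfo.c: one extra line in /proc/meminfo (sketch). */
static int meminfo_proc_show(struct seq_file *m, void *v)
{
	/* ... all the existing counters ... */
	show_val_kb(m, "DmaBufTotal:    ", dma_buf_total_pages());
	return 0;
}

/* lib/show_mem.c: dump the same value in OOM/sysrq memory reports (sketch). */
void show_mem(unsigned int filter, nodemask_t *nodemask)
{
	show_free_areas(filter, nodemask);
	/* ... existing per-node and per-zone output ... */
	pr_info("%lu pages dma-buf\n", dma_buf_total_pages());
}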
>>>>>>
>>>>>> From the implementation point of view, is there any reason why this
>>>>>> hasn't used the existing global_node_page_state infrastructure?
>>>>> I will fix the doc in the next version. I'm not sure what you expect the commit message to include.
>>>> As I've said, the usual justification covers answers to the following questions:
>>>> - Why do we need it?
>>>> - Why is the existing data insufficient?
>>>> - Who is supposed to use the data and for what?
>>>>
>>>> I can see an answer to the first two questions (because this can be a
>>>> lot of memory and the existing infrastructure - debugfs - is not suitable
>>>> for production). But the changelog doesn't really explain who is going to
>>>> use the new data. Is this monitoring meant to raise an early alarm when
>>>> the value grows? Is this for debugging misbehaving drivers? How is it
>>>> valuable for those?
>>>>
>>>>> The function of meminfo is (from Documentation/filesystems/proc.rst):
>>>>>
>>>>> "Provides information about distribution and utilization of memory."
>>>> True. Yet we do not export any random counters, do we?
>>>>
>>>>> I'm not the designer of dma-buf; I think of global_node_page_state as a
>>>>> kernel internal.
>>>> It provides node-specific and optimized counters. Is this a good fit
>>>> for your new counter? Or is NUMA locality of no importance?
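For comparison, wiring the counter into that infrastructure would look roughly
like this. It is a sketch only: NR_DMABUF_PAGES is a made-up node_stat_item
that would have to be added to enum node_stat_item in include/linux/mmzone.h
and to vmstat_text[] in mm/vmstat.c.

#include <linux/mm.h>
#include <linux/vmstat.h>

/* Charge one page of a dma-buf to the NUMA node it was allocated on. */
static void dma_buf_account_page(struct page *page)
{
	mod_node_page_state(page_pgdat(page), NR_DMABUF_PAGES, 1);
}

static void dma_buf_unaccount_page(struct page *page)
{
	mod_node_page_state(page_pgdat(page), NR_DMABUF_PAGES, -1);
}

/* The global sum then comes for free, without a separate atomic counter. */
static unsigned long dma_buf_total_pages_vmstat(void)
{
	return global_node_page_state(NR_DMABUF_PAGES);
}

The catch, as the rest of the thread points out, is that a dma-buf does not
necessarily consist of system pages at all.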
>>> Sounds good to me; if Christian Koenig thinks it is good, I will use that.
>>> In drivers, only virtio uses global_node_page_state, if that matters.
>> DMA-bufs are not NUMA aware at all. On which node the pages are allocated
>> (and whether we use pages at all rather than internal device memory) is up
>> to the exporter and importer.
> The question is not whether it is NUMA aware but whether it is useful to
> know per-NUMA-node data for the purpose the counter is supposed to serve.
No, not at all. The pages of a single DMA-buf could even be from
different NUMA nodes if the exporting driver decides that this is
somehow useful.
Christian.