Date:   Mon, 19 Apr 2021 17:44:13 +0200
From:   Christian König <christian.koenig@....com>
To:     Peter.Enderborg@...y.com, mhocko@...e.com
Cc:     linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
        sumit.semwal@...aro.org, adobriyan@...il.com,
        akpm@...ux-foundation.org, songmuchun@...edance.com, guro@...com,
        shakeelb@...gle.com, neilb@...e.de, samitolvanen@...gle.com,
        rppt@...nel.org, linux-media@...r.kernel.org,
        dri-devel@...ts.freedesktop.org, linaro-mm-sig@...ts.linaro.org,
        willy@...radead.org
Subject: Re: [PATCH v4] dma-buf: Add DmaBufTotal counter in meminfo

On 19.04.21 at 17:19, Peter.Enderborg@...y.com wrote:
> On 4/19/21 5:00 PM, Michal Hocko wrote:
>> On Mon 19-04-21 12:41:58, Peter.Enderborg@...y.com wrote:
>>> On 4/19/21 2:16 PM, Michal Hocko wrote:
>>>> On Sat 17-04-21 12:40:32, Peter Enderborg wrote:
>>>>> This adds a counter for the total dma-buf memory in use. Details
>>>>> can be found in debugfs, but debugfs is not for everyone
>>>>> and not always available. dma-bufs are indirectly allocated by
>>>>> userspace, so with this value we can monitor and detect
>>>>> userspace applications that have problems.
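
For context, the mechanism under discussion amounts to a global byte counter maintained by the dma-buf core and printed from fs/proc/meminfo.c. A minimal sketch, assuming a hypothetical dma_buf_total_bytes() accessor and hook points rather than the exact code in the patch:

/* Sketch only: global accounting of exported dma-buf bytes. */
#include <linux/atomic.h>
#include <linux/types.h>

static atomic64_t dma_buf_total;	/* hypothetical counter */

/* bump on export (e.g. from dma_buf_export()) ... */
static void dma_buf_account_alloc(size_t size)
{
	atomic64_add(size, &dma_buf_total);
}

/* ... and drop when the buffer is finally released */
static void dma_buf_account_free(size_t size)
{
	atomic64_sub(size, &dma_buf_total);
}

/* hypothetical accessor for fs/proc/meminfo.c */
u64 dma_buf_total_bytes(void)
{
	return atomic64_read(&dma_buf_total);
}

/* in meminfo_proc_show():
 *	seq_printf(m, "DmaBufTotal:    %8llu kB\n",
 *		   dma_buf_total_bytes() >> 10);
 */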
>>>> The changelog would benefit from more background on why this is needed,
>>>> and who is the primary consumer of that value.
>>>>
>>>> I cannot really comment on the dma-buf internals, but I have two remarks.
>>>> First, Documentation/filesystems/proc.rst needs an update explaining the
>>>> counter. Second, is this information useful for analyzing OOM situations?
>>>> If yes, then show_mem should dump the value as well.
>>>>
>>>> From the implementation point of view, is there any reason why this
>>>> doesn't use the existing global_node_page_state infrastructure?
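
The infrastructure Michal refers to is the per-node vmstat counters. A sketch of what that route could look like, assuming a hypothetical NR_DMABUF_PAGES entry added to enum node_stat_item (no such entry exists upstream):

/* Sketch only: accounting dma-buf pages via the per-node vmstat counters.
 * NR_DMABUF_PAGES is a hypothetical new enum node_stat_item entry that
 * would have to be added in include/linux/mmzone.h. */
#include <linux/mm.h>
#include <linux/vmstat.h>

/* Exporters backed by system pages would charge them to the node that
 * actually holds the pages: */
static void dmabuf_account_pages(struct page *page, long nr_pages)
{
	mod_node_page_state(page_pgdat(page), NR_DMABUF_PAGES, nr_pages);
}

/* meminfo or show_mem could then report the system-wide sum:
 *	unsigned long pages = global_node_page_state(NR_DMABUF_PAGES);
 */

The trade-off raised below is whether this per-node granularity buys anything, given that many exporters do not use system pages at all.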
>>> I will fix the doc in the next version. I'm not sure what you expect the commit message to include.
>> As I've said, the usual justification covers answers to the following questions:
>> 	- Why do we need it?
>> 	- Why is the existing data insufficient?
>> 	- Who is supposed to use the data, and for what?
>>
>> I can see an answer to the first two questions (because this can be a
>> lot of memory, and the existing infrastructure - debugfs - is not
>> suitable for production). But the changelog doesn't really explain who
>> is going to use the new data. Is this monitoring meant to raise an
>> early alarm when the value grows? Is this for debugging misbehaving
>> drivers? How is it valuable for either of those?
>>
>>> The function of meminfo is (from Documentation/filesystems/proc.rst):
>>>
>>> "Provides information about distribution and utilization of memory."
>> True. Yet we do not export just any random counter, do we?
>>
>>> I'm not the designer of dma-buf; I think of global_node_page_state as a
>>> kernel internal.
>> It provides node-specific and optimized counters. Is this a good fit
>> for your new counter? Or is NUMA locality of no importance?
> Sounds good to me; if Christian Koenig thinks it is good, I will use that.
> Among the drivers it is only virtio that uses global_node_page_state, if
> that matters.

DMA-bufs are not NUMA aware at all. On which node the pages are allocated 
(and whether we use pages at all rather than internal device memory) is up 
to the exporter and importer.

Christian.

>
>
>>> dma-buf is a device driver that provides a function, so I might be
>>> on the outside here. However, I also see that it might be relevant for OOM:
>>> it is memory that can be freed by killing userspace processes.
>>>
>>> As for the show_mem thing: should it be a separate patch?
>> This is up to you, but if you want to expose the counter, then send it in
>> one series.
>>
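
If the counter does end up in show_mem as well, the change there would be small; a sketch, reusing the hypothetical dma_buf_total_bytes() helper from the first example:

/* Sketch only: an extra line in show_mem() (lib/show_mem.c) so the value
 * also appears in OOM reports.  dma_buf_total_bytes() is the hypothetical
 * helper from the sketch above. */
	printk("%lu pages dma-buf\n",
	       (unsigned long)(dma_buf_total_bytes() >> PAGE_SHIFT));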
