Message-ID: <ee87cc64-d602-4ee5-8545-dd19407241c3@vivo.com>
Date: Thu, 4 Jul 2024 10:49:43 +0800
From: Huan Yang <link@...o.com>
To: Shakeel Butt <shakeel.butt@...ux.dev>
Cc: Roman Gushchin <roman.gushchin@...ux.dev>,
 Johannes Weiner <hannes@...xchg.org>, Michal Hocko <mhocko@...nel.org>,
 Muchun Song <muchun.song@...ux.dev>,
 Andrew Morton <akpm@...ux-foundation.org>,
 "Matthew Wilcox (Oracle)" <willy@...radead.org>,
 David Hildenbrand <david@...hat.com>, Ryan Roberts <ryan.roberts@....com>,
 Chris Li <chrisl@...nel.org>, Dan Schatzberg <schatzberg.dan@...il.com>,
 Kairui Song <kasong@...cent.com>, cgroups@...r.kernel.org,
 linux-mm@...ck.org, linux-kernel@...r.kernel.org,
 Christian Brauner <brauner@...nel.org>, opensource.kernel@...o.com
Subject: Re: [RFC PATCH 0/4] Introduce PMC(PER-MEMCG-CACHE)


On 2024/7/4 1:27, Shakeel Butt wrote:
> On Wed, Jul 03, 2024 at 10:23:35AM GMT, Huan Yang wrote:
>> On 2024/7/3 3:27, Roman Gushchin wrote:
> [...]
>>> Hello Huan,
>>>
>>> thank you for sharing your work.
>> thanks
>>> Some high-level thoughts:
>>> 1) Naming is hard, but it took me quite a while to realize that you're talking
>> Haha, sorry for my poor English
>>> about free memory. Cache is obviously an overloaded term, but per-memcg-cache
>>> can mean absolutely anything (pagecache? cpu cache? ...), so maybe it's not
>> Currently, my idea is that all memory released by processes under a memcg
>> goes into the `cache`; the original attributes are ignored, and the pages
>> can be freely reused by processes under the same memcg (so dma-buf, page
>> cache, heap, driver memory, and so on). Maybe a name like PMP would be
>> friendlier? :)
>>
>>> the best choice.
>>> 2) Overall an idea to have a per-memcg free memory pool makes sense to me,
>>> especially if we talk 2MB or 1GB pages (or order > 0 in general).
>> I like it too :)
>>> 3) You absolutely have to integrate the reclaim mechanism with a generic
>>> memory reclaim mechanism, which is driven by the memory pressure.
>> Yes, I've thought about all of that.
>>> 4) You claim a ~50% performance win in your workload, which is a lot. It's not
>>> clear to me where it's coming from. It's hard to believe the page allocation/release
>>> paths are taking 50% of the cpu time. Please, clarify.
>> Let me describe it more specifically. In our test scenario, we have 8GB of
>> RAM, and our camera application has a complex set of algorithms with a peak
>> memory requirement of up to 3GB.
>>
>> Therefore, in a multi-application background scenario, starting the camera
>> and taking photos creates very high memory pressure. In this scenario, any
>> released memory is quickly taken by other processes (such as file pages).
>>
>> So, during the switch from camera capture to preview, DMA-BUF memory is
>> released while the memory used for the preview algorithm is requested at
>> the same time. We have to take a lot of slow-path routes to obtain enough
>> memory for the preview algorithm, and the just-released DMA-BUF memory
>> does not seem to provide much help.
>>
>> But using PMC (let's call it that for now), we are able to quickly meet
>> the memory needs of the subsequent preview process with the just-released
>> DMA-BUF memory, without having to go through the slow path, resulting in a
>> significant performance improvement.
>>
>> (Of course, breaking the migrate type may not be good.)
>>
> Please correct me if I am wrong. IIUC you have applications with
> different latency or performance requirements, running on the same
> system, but the system is memory constrained. You want applications with
> stringent performance requirements to go less into the allocation slowpath
> and want the lower priority (or no perf requirement) applications to do
> more slowpath work (reclaim/compaction) for themselves as well as for
> the high priority applications.
Yes, PMC does have the idea of priority control.
On a smartphone, the aspect most strongly perceived by users is the
foreground app. In the scenario I described, the camera application should
have absolute priority for memory, and its memory usage should be served
first to meet its needs. (Especially when we place PMC's allocation after
the buddy free.)
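
To make the ordering concrete, here is a toy user-space model of the idea
(purely illustrative; the names and structure are mine, not the actual
patches): pages freed by a group are stashed in that group's cache and
handed back to the same group before falling back to the global allocator
(the "slow path" here is just malloc):

	/*
	 * Toy user-space model of the PMC idea (illustration only, not the
	 * actual kernel code): pages freed by a group go into that group's
	 * cache and are handed back to the same group before falling back
	 * to the "slow path".
	 */
	#include <stdio.h>
	#include <stdlib.h>

	#define PAGE_SIZE 4096

	struct pmc_page {
		struct pmc_page *next;
	};

	struct pmc_group {			/* stands in for a memcg */
		struct pmc_page *cache;		/* per-group free list */
		unsigned long cached, fast, slow;
	};

	static void *pmc_alloc(struct pmc_group *g)
	{
		if (g->cache) {			/* fast path: reuse a cached page */
			struct pmc_page *p = g->cache;
			g->cache = p->next;
			g->cached--;
			g->fast++;
			return p;
		}
		g->slow++;			/* slow path: global allocator */
		return malloc(PAGE_SIZE);
	}

	static void pmc_free(struct pmc_group *g, void *page)
	{
		struct pmc_page *p = page;	/* stash instead of returning to buddy */
		p->next = g->cache;
		g->cache = p;
		g->cached++;
	}

	int main(void)
	{
		struct pmc_group camera = { 0 };
		void *bufs[8];

		for (int i = 0; i < 8; i++)	/* "capture" phase allocates */
			bufs[i] = pmc_alloc(&camera);
		for (int i = 0; i < 8; i++)	/* release DMA-BUF-like memory */
			pmc_free(&camera, bufs[i]);
		for (int i = 0; i < 8; i++)	/* "preview" phase reuses the cache */
			bufs[i] = pmc_alloc(&camera);

		printf("fast=%lu slow=%lu cached=%lu\n",
		       camera.fast, camera.slow, camera.cached);
		return 0;
	}

This prints fast=8 slow=8 cached=0: the "preview" allocations are all
served from the pages the "capture" phase just freed, without touching the
slow path again.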
>
> What about the allocations from the softirqs or non-memcg-aware kernel
> allocations?

Sorry, I can't speak to softirqs. But many kernel threads are also placed
into the root memcg.

In our scenario, we put all processes related to the camera application
into the same memcg (both user processes and kernel threads).
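
For the user-space side, grouping is just the standard cgroup interface; a
minimal sketch, assuming cgroup v2 (the cgroup path and PID below are only
examples, and kernel threads need separate kernel-side handling since they
cannot be moved out of the root cgroup this way):

	/* Minimal sketch: attach a user task to a memcg by writing its PID
	 * into that cgroup's cgroup.procs file (cgroup v2). The path and
	 * PID are hypothetical examples. */
	#include <stdio.h>
	#include <sys/types.h>

	static int move_to_memcg(const char *cg_path, pid_t pid)
	{
		char procs[256];
		FILE *f;

		snprintf(procs, sizeof(procs), "%s/cgroup.procs", cg_path);
		f = fopen(procs, "w");
		if (!f)
			return -1;
		fprintf(f, "%d\n", (int)pid);
		return fclose(f) ? -1 : 0;
	}

	int main(void)
	{
		return move_to_memcg("/sys/fs/cgroup/camera", 1234);
	}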

>
> An alternative approach would be something similar to a watermark-based
> approach: low priority applications (or kswapds) do reclaim/compaction
> at a higher, newly defined watermark, and the higher priority
> applications are protected through the usual memcg protection.

Also, please correct me if I am wrong.

My understanding is that even with a boost, watermark control cannot finely
control which applications or processes should be reclaimed against a higher
watermark; application grouping and selection would need to be re-implemented.

Through PMC, we can proactively group the processes required by the
application, enabling the cache only when the application enters the
foreground and disabling it when it goes to the background.
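
Policy-wise, this could be as simple as the activity manager toggling a
per-memcg knob on foreground/background switches. A sketch of what that
could look like from userspace (the "memory.pmc.enable" file name is made
up for illustration; it is not part of the posted patches):

	/* Illustration only: a userspace policy daemon turning the per-memcg
	 * cache on when an app becomes foreground and off when it goes
	 * background. "memory.pmc.enable" is a hypothetical knob. */
	#include <stdbool.h>
	#include <stdio.h>

	static int pmc_set_enabled(const char *memcg_path, bool enable)
	{
		char knob[256];
		FILE *f;

		snprintf(knob, sizeof(knob), "%s/memory.pmc.enable", memcg_path);
		f = fopen(knob, "w");
		if (!f)
			return -1;
		fprintf(f, "%d\n", enable ? 1 : 0);
		return fclose(f) ? -1 : 0;
	}

	int main(void)
	{
		/* Example: camera app moves to the foreground. */
		return pmc_set_enabled("/sys/fs/cgroup/camera", true);
	}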

>
> I can see another use-case for whatever solution we come up with, and
> that is a reliable userspace oom-killer.
Yes, LMKD is helpful.
Unfortunately, our product also has other dimensions of assessment,
including application persistence. This means that when the camera is
launched, we can only kill unnecessary applications to free up a small
amount of memory to meet its startup requirements. However, when it then
requests memory for taking a photo, memory allocation is still relatively
slow during the kill-and-check phase.

And one more thing: the memory released by killing applications may not
necessarily meet the instantaneous memory requirements (many pages are
compressed in zram, and decompressing them is not that fast).

Thanks,

HY

>
> Shakeel
>
