Message-ID: <CAKEwX=PLW=oj2DmsgaynXhY_SYb0VOw9i64K=RrZxhGySxdtvQ@mail.gmail.com>
Date: Thu, 14 Dec 2023 18:19:05 -0800
From: Nhat Pham <nphamcs@...il.com>
To: Chris Li <chrisl@...nel.org>
Cc: Johannes Weiner <hannes@...xchg.org>, Minchan Kim <minchan@...nel.org>,
	akpm@...ux-foundation.org, tj@...nel.org, lizefan.x@...edance.com,
	cerasuolodomenico@...il.com, yosryahmed@...gle.com, sjenning@...hat.com,
	ddstreet@...e.org, vitaly.wool@...sulko.com, mhocko@...nel.org,
	roman.gushchin@...ux.dev, shakeelb@...gle.com, muchun.song@...ux.dev,
	hughd@...gle.com, corbet@....net, konrad.wilk@...cle.com,
	senozhatsky@...omium.org, rppt@...nel.org, linux-mm@...ck.org,
	kernel-team@...a.com, linux-kernel@...r.kernel.org,
	linux-doc@...r.kernel.org, david@...t.cz, Kairui Song <kasong@...cent.com>,
	Zhongkun He <hezhongkun.hzk@...edance.com>
Subject: Re: [PATCH v6] zswap: memcontrol: implement zswap writeback disabling

On Thu, Dec 14, 2023 at 2:55 PM Chris Li <chrisl@...nel.org> wrote:
>
> On Thu, Dec 14, 2023 at 2:11 PM Johannes Weiner <hannes@...xchg.org> wrote:
> >
> > On Thu, Dec 14, 2023 at 09:34:06AM -0800, Christopher Li wrote:
> > > On Thu, Dec 14, 2023 at 9:11 AM Johannes Weiner <hannes@...xchg.org> wrote:
> > >
> > > > > Hi Johannes,
> > > > >
> > > > > I haven't been following the thread closely, but I noticed the
> > > > > discussion about potential use cases for zram with memcg.
> > > > >
> > > > > One interesting idea I have is to implement a swap controller per
> > > > > cgroup. This would allow us to tailor the zram swap behavior to
> > > > > the specific needs of different groups.
> > > > >
> > > > > For example, Group A, which is sensitive to swap latency, could
> > > > > use zram swap with a fast compression setting, even if it
> > > > > sacrifices some compression ratio. This would prioritize quick
> > > > > access to swapped data, even if it takes up more space.
> > > > >
> > > > > On the other hand, Group B, which can tolerate higher swap
> > > > > latency, could benefit from a slower compression setting that
> > > > > achieves a higher compression ratio. This would maximize memory
> > > > > efficiency at the cost of slightly slower data access.
> > > > >
> > > > > This approach could provide a more nuanced and flexible way to
> > > > > manage swap usage within different cgroups.
> > > >
> > > > That makes sense to me.
> > > >
> > > > It sounds to me like per-cgroup swapfiles would be the easiest
> > > > solution to this. Then you can create zram devices with different
> > > > configurations and assign them to individual cgroups.
> > >
> > > Ideally you need zram, then a swap file following the zram. That
> > > would be a list of swap files rather than just one swapfile per
> > > cgroup.
> > >
> > > > This would also apply to Kairui's usecase: assign zrams and hdd
> > > > backups as needed on a per-cgroup basis.
> > >
> > > Same there, Kairui's request involves ZRAM and at least one extra
> > > swap file. In other words, you really need a per-cgroup swap file
> > > list.
> >
> > Why is that a problem?
>
> It is not a problem. It is the necessary infrastructure to support the
> requirement. I am merely saying that just having one swap file is not
> enough.
>
> >
> > swapon(zram, cgroup=foo)
> > swapon(hdd, cgroup=foo)
>
> Interesting idea. I assume you want to use swapon/swapoff to turn a
> device on or off for a specific cgroup.
> That seems to imply each cgroup will have a private copy of the swap
> device list.
>
> I have considered memory.swap.tiers for the same thing, with one
> minor optimization. The list is maintained system-wide under a name.
> The per-cgroup part is just a pointer to that named list. There
> shouldn't be too many such lists of swap back end combinations on the
> system.
>
> We are getting into the weeds. The bottom line is, we need to have a
> per-cgroup swap file list. That is the necessary evil we can't get
> away from.

Highly agree. This is getting waaayyyy too deep into the weeds, and the
conversation has practically spiralled away from the original intention
of this patch - its purported problem and proposed solution. Not to say
that none of this is useful, but I sense that we first need to do the
following:

a) List out the requirements that the new interface has to support: the
tiers made available to the cgroup, hierarchical structure (i.e. do we
want a tier list to have more than one non-zswap level? Maybe we won't
need it after all, in which case the swapon solution is perhaps
sufficient).

b) Carefully evaluate the proposed candidates. It could be an altered
memory.swap.tiers, or an extended swapon/swapoff.

Perhaps we should organize a separate meeting or email thread to
discuss this in detail, and write out proposed solutions for everyone
to evaluate.

In the meantime, I think that we should merge this new knob as-is.

> > >
> > > > In addition, it would naturally solve scalability and isolation
> > > > problems when multiple containers would otherwise be hammering on
> > > > the same swap backends and locks.
> > > >
> > > > It would also only require one, relatively simple new interface,
> > > > such as a cgroup parameter to swapon().
> > > >
> > > > That's highly preferable over a complex configuration file like
> > > > memory.swap.tiers that needs to solve all sorts of visibility and
> > > > namespace issues and duplicate the full configuration interface of
> > > > every backend in some new, custom syntax.
> > >
> > > If you don't like the syntax of memory.swap.tiers, I am open to
> > > suggestions of your preferred syntax as well. The essence of
> > > swap.tiers is a per-cgroup list of the swap back ends, as the name
> > > implies. I am not married to any given syntax of how to specify the
> > > list. Its goal matches the above requirement pretty well.
> >
> > Except Minchan said that he would also like different zram parameters
> > depending on the cgroup.
>
> Minchan's requirement is new. We will need to expand the original
> "memory.swap.tiers" to support such usage.
>
> > There is no way we'll add a memory.swap.tiers with a new configuration
> > language for backend parameters.
>
> I agree that we don't want a complicated configuration language for
> "memory.swap.tiers".
>
> Those backend parameters should be configured on the back end side.
> The "memory.swap.tiers" just references the already configured object.
> Just brainstorming:
> /dev/zram0 has compression algo1 for fast speed, low compression ratio.
> /dev/zram1 has compression algo2 for slow speed, high compression ratio.
>
> "memory.swap.tiers" points to zram0 or zram1, or a custom list such as
> "zram0 + hdd".
>
> Chris
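To make Chris's brainstorm concrete, here is a minimal sketch of the
backend-side configuration he describes, using the existing zram sysfs
interface. The device sizes, swap priorities, and the lz4/zstd choices
are illustrative stand-ins for "algo1"/"algo2"; the final write is the
memory.swap.tiers interface *proposed* in this thread, not an existing
kernel knob:

    # Fast tier: lz4 for low latency at a lower compression ratio.
    modprobe zram num_devices=2
    echo lz4 > /sys/block/zram0/comp_algorithm
    echo 4G > /sys/block/zram0/disksize
    mkswap /dev/zram0
    swapon -p 10 /dev/zram0

    # Slow tier: zstd for a higher compression ratio at higher latency.
    echo zstd > /sys/block/zram1/comp_algorithm
    echo 8G > /sys/block/zram1/disksize
    mkswap /dev/zram1
    swapon -p 5 /dev/zram1

    # Hypothetical, per the proposal above (not in any kernel): point a
    # cgroup at one device or at a named tier list.
    echo "zram0 hdd" > /sys/fs/cgroup/groupA/memory.swap.tiers

The point of the split is that all zram-specific parameters stay on the
zram side; the per-cgroup file would only name already-configured
backends.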
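For reference, the knob Nhat proposes merging as-is is the per-cgroup
zswap writeback switch from the patch in the subject line. Assuming it
lands as a cgroup v2 interface file named memory.zswap.writeback
(enabled by default), usage would look like:

    # Keep this cgroup's cold pages compressed in memory: disallow
    # zswap from writing them back to the backing swap device.
    echo 0 > /sys/fs/cgroup/workload/memory.zswap.writeback

    # Restore the default: allow writeback to the backing device.
    echo 1 > /sys/fs/cgroup/workload/memory.zswap.writeback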