Message-ID: <ZowN8FvmdiEGr_rC@tiehlicka>
Date: Mon, 8 Jul 2024 18:04:00 +0200
From: Michal Hocko <mhocko@...e.com>
To: xiujianfeng <xiujianfeng@...wei.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, tj@...nel.org,
	lizefan.x@...edance.com, hannes@...xchg.org, corbet@....net,
	cgroups@...r.kernel.org, linux-doc@...r.kernel.org,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	Sidhartha Kumar <sidhartha.kumar@...cle.com>,
	Miaohe Lin <linmiaohe@...wei.com>,
	Baolin Wang <baolin.wang@...ux.alibaba.com>
Subject: Re: [PATCH -next] mm/hugetlb_cgroup: introduce peak and rsvd.peak to
 v2

On Mon 08-07-24 21:40:39, xiujianfeng wrote:
> 
> 
> On 2024/7/8 20:48, Michal Hocko wrote:
> > On Wed 03-07-24 13:38:04, Andrew Morton wrote:
> >> On Wed, 3 Jul 2024 10:45:56 +0800 xiujianfeng <xiujianfeng@...wei.com> wrote:
> >>
> >>>
> >>>
> >>> On 2024/7/3 9:58, Andrew Morton wrote:
> >>>> On Tue, 2 Jul 2024 12:57:28 +0000 Xiu Jianfeng <xiujianfeng@...wei.com> wrote:
> >>>>
> >>>>> Introduce peak and rsvd.peak to v2 to show the historical maximum
> >>>>> usage of resources, as in some scenarios it is necessary to configure
> >>>>> the value of max/rsvd.max based on the peak usage of resources.
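For context, and assuming the new knobs follow the hugetlb controller's existing per-hugepage-size naming, with 2MB huge pages and a hypothetical cgroup "app" they would appear roughly as:

  /sys/fs/cgroup/app/hugetlb.2MB.peak        # historical maximum of hugetlb.2MB.current
  /sys/fs/cgroup/app/hugetlb.2MB.rsvd.peak   # historical maximum of hugetlb.2MB.rsvd.current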
> >>>>
> >>>> "in some scenarios it is necessary" is not a strong statement.  It
> >>>> would be helpful to fully describe these scenarios so that others can
> >>>> better understand the value of this change.
> >>>>
> >>>
> >>> Hi Andrew,
> >>>
> >>> Is the following description acceptable to you?
> >>>
> >>>
> >>> Since HugeTLB doesn't support page reclaim, enforcing the limit at
> >>> page fault time implies that the application will get a SIGBUS signal
> >>> if it tries to fault in HugeTLB pages beyond its limit. Therefore the
> >>> application needs to know exactly how many HugeTLB pages it uses
> >>> beforehand, and the sysadmin needs to make sure that there are enough
> >>> hugetlb pages available on the machine for all the users to avoid
> >>> processes getting SIGBUS.
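To make that fault-time enforcement concrete, a minimal sketch assuming cgroup v2 mounted at /sys/fs/cgroup, 2MB huge pages, the hugetlb controller enabled for the hierarchy, and a hypothetical cgroup named "app":

  mkdir /sys/fs/cgroup/app
  echo 64M > /sys/fs/cgroup/app/hugetlb.2MB.max   # cap at 32 x 2MB huge pages
  echo $$ > /sys/fs/cgroup/app/cgroup.procs
  # A task in "app" that faults in hugetlb pages beyond the 32-page cap is not
  # throttled or reclaimed against; the faulting process receives SIGBUS instead.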
> > 
> > Yes, this is pretty much a definition of hugetlb.
> > 
> >>> When running some open-source software, it may not be possible to know
> >>> the exact amount of hugetlb memory it consumes, so the max value cannot
> >>> be configured correctly. If there is a peak metric, we can run the
> >>> open-source software first and then configure max based on the peak value.
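A sketch of that peak-driven configuration, under the same assumptions as above (2MB pages, hypothetical cgroup "app") and assuming the peak file is named like the other per-size knobs:

  # run the workload once with a generous hugetlb.2MB.max, then:
  cat /sys/fs/cgroup/app/hugetlb.2MB.peak          # e.g. 134217728 (64 x 2MB pages)
  echo 128M > /sys/fs/cgroup/app/hugetlb.2MB.max   # derive max from the observed peak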
> > 
> > I would push back on this. Hugetlb workloads pretty much require knowing
> > the number of hugetlb pages ahead of time, because you need to
> > preallocate them for the global hugetlb pool. What I am really missing
> > in the above justification is an explanation of how you can know how to
> > configure the global pool yet not know that for a particular
> > cgroup. How exactly do you configure the global pool then?
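For comparison, the global pool is preallocated up front, e.g. on the kernel command line or via sysctl; a sketch for 2MB pages:

  # at boot:    hugepages=512
  # at runtime (may fall short of the target if memory is fragmented):
  echo 512 > /proc/sys/vm/nr_hugepages
  # or per page size:
  echo 512 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages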
> 
> Yes, in this scenario it's indeed challenging to determine the
> appropriate size for the global pool. Therefore, a feasible approach is
> to initially configure a larger value. Once the software is running
> successfully within the container, the maximum value for the container
> and the size of the system's global pool can be determined based on the
> peak value; otherwise, increase the size of the global pool and try
> again. So I believe the peak metric is useful for this scenario.

This sounds really backwards to me. Not that I care much about the peak
value itself; it is not anything disruptive to add or maintain. But this
approach to configuring the system just feels completely wrong. You
shouldn't really be using the hugetlb cgroup controller if you do not
have a very specific idea about the expected, and therefore allowed,
hugetlb pool consumption.

-- 
Michal Hocko
SUSE Labs
