Message-ID: <CAEemH2eH0a6vHhv80hDcTBxTUYHALrOKjtvWnajCwPk_zLpJ3Q@mail.gmail.com>
Date: Sun, 21 Dec 2025 17:35:28 +0800
From: Li Wang <liwang@...hat.com>
To: "David Hildenbrand (Red Hat)" <david@...nel.org>
Cc: akpm@...ux-foundation.org, linux-kselftest@...r.kernel.org, 
	linux-kernel@...r.kernel.org, linux-mm@...ck.org, 
	Mark Brown <broonie@...nel.org>, Shuah Khan <shuah@...nel.org>, Waiman Long <longman@...hat.com>
Subject: Re: [PATCH v2 2/3] selftests/mm/charge_reserved_hugetlb.sh: add waits with timeout helper

David Hildenbrand (Red Hat) <david@...nel.org> wrote:

> On 12/21/25 09:58, Li Wang wrote:
> > The hugetlb cgroup usage wait loops in charge_reserved_hugetlb.sh were
> > unbounded and could hang forever if the expected cgroup file value never
> > appears (e.g. due to bugs, timing issues, or unexpected behavior).
>
> Did you actually hit that in practice? Just wondering.

Yes.

On an aarch64 64k-page setup with 512MB hugepages, the test failed at an
earlier step: hugetlbfs was mounted with size=256M, which is smaller than a
single 512MB hugepage, so the effective mount size was 0 and
write_to_hugetlbfs couldn't allocate the expected pages. After that, the
script's wait loops never observed the target value and spun forever.

See the logs below for details.

>
> >
> > --- Error log ---
> >    # uname -r
> >    6.12.0-xxx.el10.aarch64+64k
> >
> >    # ls /sys/kernel/mm/hugepages/hugepages-*
> >    hugepages-16777216kB/  hugepages-2048kB/  hugepages-524288kB/
> >
> >    #./charge_reserved_hugetlb.sh -cgroup-v2
> >    # -----------------------------------------
> >    ...
> >    # nr hugepages = 10
> >    # writing cgroup limit: 5368709120
> >    # writing reseravation limit: 5368709120
> >    ...
> >    # write_to_hugetlbfs: Error mapping the file: Cannot allocate memory
> >    # Waiting for hugetlb memory reservation to reach size 2684354560.
> >    # 0
> >    # Waiting for hugetlb memory reservation to reach size 2684354560.
> >    # 0
> >    # Waiting for hugetlb memory reservation to reach size 2684354560.
> >    # 0
> >    # Waiting for hugetlb memory reservation to reach size 2684354560.
> >    # 0
> >    # Waiting for hugetlb memory reservation to reach size 2684354560.
> >    # 0
> >    # Waiting for hugetlb memory reservation to reach size 2684354560.
> >    # 0
> >    ...
> >
> > Introduce a small helper, wait_for_file_value(), and use it for:
> >    - waiting for reservation usage to drop to 0,
> >    - waiting for reservation usage to reach a given size,
> >    - waiting for fault usage to reach a given size.
> >
> > This makes the waits consistent and adds a hard timeout (120 tries with
> > 0.5s sleep) so the test fails instead of stalling indefinitely.
> >
> > Signed-off-by: Li Wang <liwang@...hat.com>
> > Cc: David Hildenbrand <david@...nel.org>
> > Cc: Mark Brown <broonie@...nel.org>
> > Cc: Shuah Khan <shuah@...nel.org>
> > Cc: Waiman Long <longman@...hat.com>
> > ---
> >   .../selftests/mm/charge_reserved_hugetlb.sh   | 47 ++++++++++---------
> >   1 file changed, 26 insertions(+), 21 deletions(-)
> >
> > diff --git a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
> > index e1fe16bcbbe8..249a5776c074 100755
> > --- a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
> > +++ b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
> > @@ -100,7 +100,7 @@ function setup_cgroup() {
> >     echo writing cgroup limit: "$cgroup_limit"
> >     echo "$cgroup_limit" >$cgroup_path/$name/hugetlb.${MB}MB.$fault_limit_file
> >
> > -  echo writing reseravation limit: "$reservation_limit"
> > +  echo writing reservation limit: "$reservation_limit"
> >     echo "$reservation_limit" > \
> >       $cgroup_path/$name/hugetlb.${MB}MB.$reservation_limit_file
> >
> > @@ -112,41 +112,46 @@ function setup_cgroup() {
> >     fi
> >   }
> >
> > +function wait_for_file_value() {
> > +  local path="$1"
> > +  local expect="$2"
> > +  local max_tries="120"
> > +
> > +  local i cur
>
> I would just move "cur" into the loop; I don't see a reason to print it
> on the error path when you printed the value on the last "Waiting" line?
>
>         local cur="$(cat "$path")"

+1

>
> Also, not sure if you really need the "local i" here.
>
> What if the path does not exist, do we want to catch that earlier and
> bail out instead of letting "cat" fail here?

Yes, we can add a file check before the "cat" loop; I folded it into the
sketch at the end of this mail.

>
> > +  for ((i=1; i<=max_tries; i++)); do
> > +    cur="$(cat "$path")"
> > +    if [[ "$cur" == "$expect" ]]; then
> > +      return 0
> > +    fi
> > +    echo "Waiting for $path to become '$expect' (current: '$cur') (try $i/$max_tries)"
> > +    sleep 0.5
>
> Any reason we don't go for the more intuitive "wait 1s" - max 60s wait?

Sure, the total wait time is the same; will switch to a 1s sleep with 60
tries.
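
Roughly like this, then (an untested sketch folding in the suggestions
above; exact messages/naming may still change in v3):

function wait_for_file_value() {
  local path="$1"
  local expect="$2"
  local max_tries=60

  # Bail out early if the cgroup file is missing, instead of
  # letting cat fail repeatedly inside the loop.
  if [[ ! -f "$path" ]]; then
    echo "$path does not exist"
    return 1
  fi

  for ((i = 1; i <= max_tries; i++)); do
    # Re-read the current value on every iteration.
    local cur="$(cat "$path")"
    if [[ "$cur" == "$expect" ]]; then
      return 0
    fi
    echo "Waiting for $path to become '$expect' (current: '$cur') (try $i/$max_tries)"
    sleep 1
  done

  return 1
}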

-- 
Regards,
Li Wang

