Message-ID: <CAGudoHFvGwcQ+8JOjwR3B=KtHiVqC1=eiNgGv33z29443VJdFg@mail.gmail.com>
Date:   Wed, 23 Aug 2023 18:10:29 +0200
From:   Mateusz Guzik <mjguzik@...il.com>
To:     Jan Kara <jack@...e.cz>
Cc:     Dennis Zhou <dennis@...nel.org>, linux-kernel@...r.kernel.org,
        tj@...nel.org, cl@...ux.com, akpm@...ux-foundation.org,
        shakeelb@...gle.com, linux-mm@...ck.org
Subject: Re: [PATCH 0/2] execve scalability issues, part 1

On 8/23/23, Jan Kara <jack@...e.cz> wrote:
> I didn't express myself well. Sure, atomics are expensive compared to plain
> arithmetic operations. But I wanted to say - we had atomics for RSS
> counters before commit f1a7941243 ("mm: convert mm's rss stats into
> percpu_counter") and people seemed happy with it until there were many CPUs
> contending on the updates. So maybe RSS counters aren't used heavily enough
> for the difference to practically matter? Probably an operation like
> faulting in (or unmapping) a tmpfs file has the highest chance of showing
> the cost of rss accounting compared to the cost of the remainder of the
> operation...
>

These stats used to be decentralized by storing them in task_struct;
the commit message complains about the values deviating too much.

The values would get synced once per 64 page faults; from the diff:
-/* sync counter once per 64 page faults */
-#define TASK_RSS_EVENTS_THRESH (64)
-static void check_sync_rss_stat(struct task_struct *task)
-{
-       if (unlikely(task != current))
-               return;
-       if (unlikely(task->rss_stat.events++ > TASK_RSS_EVENTS_THRESH))
-               sync_mm_rss(task->mm);
-}
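
sync_mm_rss() itself is not in the quoted hunk; roughly - and this is
a from-memory sketch, not the verbatim kernel code - it folded the
per-task deltas into the shared mm counters and reset them:

/* Fold this task's cached per-counter deltas into the shared mm. */
void sync_mm_rss(struct mm_struct *mm)
{
	int i;

	for (i = 0; i < NR_MM_COUNTERS; i++) {
		if (current->rss_stat.count[i]) {
			add_mm_counter(mm, i, current->rss_stat.count[i]);
			current->rss_stat.count[i] = 0;
		}
	}
	current->rss_stat.events = 0;
}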

Other than that, it was a non-atomic update on a per-task counter
cached in task_struct:

-static void add_mm_counter_fast(struct mm_struct *mm, int member, int val)
-{
-       struct task_struct *task = current;
-
-       if (likely(task->mm == mm))
-               task->rss_stat.count[member] += val;
-       else
-               add_mm_counter(mm, member, val);
-}
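
The add_mm_counter() fallback it degraded to (when current was not
running on that mm) was a shared atomic - approximately, again
sketching from memory rather than quoting the tree:

/* Shared, cross-CPU atomic RMW on the mm-wide counter. */
static inline void add_mm_counter(struct mm_struct *mm, int member, long value)
{
	atomic_long_add(value, &mm->rss_stat.count[member]);
}

Post-commit, each update is a percpu_counter_add() on the mm's
rss_stat counters instead, which stays CPU-local until the per-CPU
delta crosses the batch size.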

So the question is how much this matters. My personal take is that
avoidable slowdowns (like the atomics here) only facilitate further
avoidable slowdowns, as each one can be claimed to be a minuscule
change in % against the baseline. But if the baseline is already
slow....

Anyhow, I just found that the patch failed to completely remove
SPLIT_RSS_COUNTING. I'm going to submit something about that later.

-- 
Mateusz Guzik <mjguzik gmail.com>
