Message-ID: <1a1e06d4-7f41-4f37-a9b5-d1610e8d8669@efficios.com>
Date: Wed, 4 Sep 2024 14:28:05 -0400
From: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
To: Yury Norov <yury.norov@...il.com>
Cc: Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...hat.com>,
 linux-kernel@...r.kernel.org, Valentin Schneider <vschneid@...hat.com>,
 Mel Gorman <mgorman@...e.de>, Steven Rostedt <rostedt@...dmis.org>,
 Vincent Guittot <vincent.guittot@...aro.org>,
 Dietmar Eggemann <dietmar.eggemann@....com>, Ben Segall
 <bsegall@...gle.com>, Rasmus Villemoes <linux@...musvillemoes.dk>,
 Dmitry Vyukov <dvyukov@...gle.com>, Marco Elver <elver@...gle.com>
Subject: Re: [RFC PATCH 2/2] sched: Improve cache locality of RSEQ concurrency
 IDs for intermittent workloads

On 2024-09-04 11:50, Mathieu Desnoyers wrote:
> On 2024-09-04 11:24, Yury Norov wrote:
[...]
>>
>> This all doesn't look like a hot path. And anyways, speculating around
>> performance without numbers on hands sounds cheap.
> 
> This is done whenever userspace invokes sched_setaffinity, or changes
> its cgroup cpuset. It may not be the most important fast path in the
> world, but I expect some workloads to issue sched_setaffinity whenever
> they create a thread, so it is not purely a slow path either.
> 
>> In my experience, iterators with a very lightweight payload are ~100
>> times slower compared to dedicated bitmap ops. Check this for example:
>> 3cea8d4753277.
>>
>> If you really care about performance here, I'd suggest you compare
>> your iterator approach with something like this:
>>
>>    cpumask_or(mm_allowed, mm_allowed, cpumask);
>>    atomic_set(&mm->nr_cpus_allowed, cpumask_weight(mm_allowed));

Here are the benchmark results. Each test uses two entirely filled
bitmaps as input, to mimic the common scenario where the cpus allowed
mask is updated with a subset of the process's original CPUs allowed,
and the common case where the initial cpumask is already filled.

#define BITMAP_LEN      (4096UL * 8 * 10)
(len = BITMAP_LEN)
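
For context, here is a minimal sketch of a harness for this kind of
comparison, written as a self-contained kernel module. It is a
hypothetical reconstruction, not the exact code used to collect the
numbers below: it allocates two BITMAP_LEN-bit bitmaps, fills both, and
times one approach with ktime_get_ns(). The approach 1 body is shown in
the timed region; the approach 2 and 3 bodies below can be substituted
in its place.

/*
 * Hypothetical benchmark harness sketch (not the exact code behind the
 * numbers below): allocate two fully set bitmaps of BITMAP_LEN bits and
 * time one approach with ktime_get_ns().
 */
#include <linux/module.h>
#include <linux/bitmap.h>
#include <linux/find.h>
#include <linux/ktime.h>
#include <linux/atomic.h>
#include <linux/slab.h>

#define BITMAP_LEN      (4096UL * 8 * 10)

static atomic_t total = ATOMIC_INIT(0);

static int __init bitmap_bench_init(void)
{
        unsigned long *bitmap, *bitmap2;
        unsigned long bit, len = BITMAP_LEN;
        int nr_set = 0;
        u64 t0, t1;

        bitmap = bitmap_alloc(len, GFP_KERNEL);
        bitmap2 = bitmap_alloc(len, GFP_KERNEL);
        if (!bitmap || !bitmap2)
                goto out;

        /* Both bitmaps start entirely filled, as described above. */
        bitmap_fill(bitmap, len);
        bitmap_fill(bitmap2, len);

        t0 = ktime_get_ns();
        /* Approach 1; substitute the approach 2 or 3 body here to compare. */
        for_each_andnot_bit(bit, bitmap, bitmap2, len)
                nr_set += !test_and_set_bit(bit, bitmap2);
        if (nr_set)
                atomic_add(nr_set, &total);
        t1 = ktime_get_ns();

        pr_info("approach 1: %llu ns\n", (unsigned long long)(t1 - t0));
out:
        bitmap_free(bitmap);
        bitmap_free(bitmap2);
        return 0;
}
module_init(bitmap_bench_init);

static void __exit bitmap_bench_exit(void)
{
}
module_exit(bitmap_bench_exit);

MODULE_LICENSE("GPL");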

* Approach 1:

        int nr_set = 0;
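        /* Only visit bits set in bitmap but still clear in bitmap2. */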
        for_each_andnot_bit(bit, bitmap, bitmap2, len)
                nr_set += !test_and_set_bit(bit, bitmap2);
        if (nr_set)
                atomic_add(nr_set, &total);

Time: 4680 ns

* Approach 2:

        int nr_set = 0;
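        /* Visit every set bit in bitmap, even those already set in bitmap2. */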
        for_each_set_bit(bit, bitmap, len)
                nr_set += !test_and_set_bit(bit, bitmap2);
        if (nr_set)
                atomic_add(nr_set, &total);

Time: 1791537 ns

* Approach 3:

        mutex_lock(&lock);
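        /* Merge the whole source bitmap and recompute its weight under a lock. */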
        bitmap_or(bitmap2, bitmap, bitmap2, len);
        atomic_set(&total, bitmap_weight(bitmap2, len));
        mutex_unlock(&lock);

Time: 79591 ns

So approach 1 is 382 times faster than approach 2, and 17 times
faster than approach 3. And this is only single-threaded; I expect
approaches 2 and 3 to perform even worse under contention, due to the
many fully ordered test_and_set_bit operations (approach 2) and due to
locking (approach 3).

The test hardware is an AMD EPYC 9654 96-Core Processor.

Thanks,

Mathieu

-- 
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com

