Date:   Mon, 27 Aug 2018 11:34:13 -0400
From:   Waiman Long <longman@...hat.com>
To:     Dave Chinner <david@...morbit.com>
Cc:     "Darrick J. Wong" <darrick.wong@...cle.com>,
        Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        linux-xfs@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 2/3] xfs: Prevent multiple wakeups of the same log space waiter

On 08/26/2018 08:21 PM, Dave Chinner wrote:
> On Sun, Aug 26, 2018 at 04:53:14PM -0400, Waiman Long wrote:
>> The current log space reservation code allows multiple wakeups of the
>> same sleeping waiter. This just wastes CPU time and increases spin
>> lock hold time. So a new XLOG_TIC_WAKING flag is added to track
>> whether a task is being woken up, and the wake_up_process() call is
>> skipped if the flag is set.
>>
>> Running the AIM7 fserver workload on a 2-socket 24-core 48-thread
>> Broadwell system with a small xfs filesystem on ramfs, the performance
>> increased from 91,486 jobs/min to 192,666 jobs/min with this change.
> Oh, I just noticed you are using a ramfs for this benchmark,
>
> tl; dr: Once you pass a certain point, ramdisks can be *much* slower
> than SSDs on journal intensive workloads like AIM7. Hence it would be
> useful to see if you have the same problems on, say, high
> performance nvme SSDs.
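
For reference, the wakeup-skipping idea described in the quoted patch text
can be sketched roughly as follows. This is a minimal, illustrative
userspace analogue, not the actual XFS code: the struct and function names
(struct waiter, grant_wake, grant_wait) are made up, and the pthread
condvar plus C11 atomic flag merely stand in for the kernel's
wake_up_process() and the XLOG_TIC_WAKING ticket flag.

/* Sketch only: the first waker to flip the 'waking' flag issues the
 * wakeup; later wakers that find the flag already set skip the now
 * pointless wakeup call. The waiter clears the flag once it runs.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct waiter {
	atomic_bool     waking;   /* analogous role to XLOG_TIC_WAKING */
	pthread_mutex_t lock;
	pthread_cond_t  cond;
	bool            ready;
};

/* Called by any thread that frees up "log space". */
static void grant_wake(struct waiter *w)
{
	/* Only the first caller since the last sleep does real work. */
	if (atomic_exchange(&w->waking, true))
		return;                        /* wakeup already in flight */

	pthread_mutex_lock(&w->lock);
	w->ready = true;
	pthread_cond_signal(&w->cond);         /* stand-in for wake_up_process() */
	pthread_mutex_unlock(&w->lock);
}

/* Called by the thread waiting for "log space". */
static void grant_wait(struct waiter *w)
{
	pthread_mutex_lock(&w->lock);
	while (!w->ready)
		pthread_cond_wait(&w->cond, &w->lock);
	w->ready = false;
	pthread_mutex_unlock(&w->lock);

	/* We are running again; allow the next wakeup to go through. */
	atomic_store(&w->waking, false);
}

int main(void)
{
	struct waiter w = {
		.waking = false,
		.lock   = PTHREAD_MUTEX_INITIALIZER,
		.cond   = PTHREAD_COND_INITIALIZER,
		.ready  = false,
	};

	/* Two back-to-back wake attempts: only the first one signals. */
	grant_wake(&w);
	grant_wake(&w);   /* skipped: flag already set */
	grant_wait(&w);
	printf("woken exactly once despite two wake attempts\n");
	return 0;
}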

Oh sorry, I made a mistake.

There were some problems with my test configuration. I was actually
running the test on a regular enterprise-class disk device mounted on /.

Filesystem                              1K-blocks     Used Available Use% Mounted on
/dev/mapper/rhel_hp--xl420gen9--01-root  52403200 11284408  41118792  22% /

It was neither an SSD nor a ramdisk. I reran the test on a ramdisk; the
performance of the patched kernel was 679,880 jobs/min, a bit more than
double the 285,221 jobs/min I got on a regular disk.

So the filesystem used wasn't tiny, though it is still not very large.
The test was supposed to create 16 ramdisks and distribute the test
tasks among them. Instead, the tasks were all pounding on the same
filesystem, worsening the spin lock contention problem.

Cheers,
Longman
