Message-ID: <33efde37-562f-4c6a-72ba-2277533e3781@gmail.com>
Date:   Tue, 17 Nov 2020 16:55:26 +0100
From:   Brice Goglin <brice.goglin@...il.com>
To:     Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Cc:     Ricardo Neri <ricardo.neri-calderon@...ux.intel.com>,
        x86@...nel.org, Borislav Petkov <bp@...e.de>,
        Ingo Molnar <mingo@...nel.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        "Rafael J. Wysocki" <rafael.j.wysocki@...el.com>,
        Tony Luck <tony.luck@...el.com>,
        Len Brown <len.brown@...el.com>,
        "Ravi V. Shankar" <ravi.v.shankar@...el.com>,
        linux-kernel@...r.kernel.org, Andi Kleen <ak@...ux.intel.com>,
        Dave Hansen <dave.hansen@...el.com>,
        "Gautham R. Shenoy" <ego@...ux.vnet.ibm.com>,
        Kan Liang <kan.liang@...ux.intel.com>,
        Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>
Subject: Re: [PATCH 1/4] drivers core: Introduce CPU type sysfs interface


On 12/11/2020 at 11:49, Greg Kroah-Hartman wrote:
> On Thu, Nov 12, 2020 at 10:10:57AM +0100, Brice Goglin wrote:
>> On 12/11/2020 at 07:42, Greg Kroah-Hartman wrote:
>>> On Thu, Nov 12, 2020 at 07:19:48AM +0100, Brice Goglin wrote:
>>>>
>>>> Hello
>>>>
>>>> Sorry for the late reply. As the first userspace consumer of this
>>>> interface [1], I can confirm that reading a single file to get the mask
>>>> would be better, at least for performance reasons. On large platforms,
>>>> we already have to read thousands of sysfs files to get CPU topology
>>>> and cache information, so I'd be happy not to read one more file per cpu.
>>>>
>>>> Reading these sysfs files is slow, and it does not scale well when
>>>> multiple processes read them in parallel.
>>> Really?  Where is the slowdown?  Would something like readfile() work
>>> better for you for that?
>>> 	https://lore.kernel.org/linux-api/20200704140250.423345-1-gregkh@linuxfoundation.org/
>>
>> I guess readfile would improve the sequential case by reducing the number
>> of syscalls, but it would not improve the parallel case, since syscalls
>> themselves shouldn't have any parallelism issue?
> syscalls should not have parallel issues at all.
>
>> We've been watching the status of readfile() since it was posted on LKML
>> 6 months ago, but we were actually wondering if it would end up being
>> included at some point.
> It needs a solid reason to be merged.  My "test" benchmarks are fun to
> run, but I have yet to find a real need for it anywhere as the
> open/read/close syscall overhead seems to be lost in the noise on any
> real application workload that I can find.
>
> If you have a real need, and it reduces overhead and cpu usage, I'm more
> than willing to update the patchset and resubmit it.
>
>

Hello

I updated hwloc to use readfile instead of open+read+close on all those
small sysfs/procfs files. Unfortunately the improvement is very small,
only a couple of percent. On a 40-core server, our library starts in 38ms
instead of 39ms. I can't deploy your patches on larger machines, but I
tested our code on a copy of their sysfs files saved on a local disk:
for a 256-thread KNL, we go from 15ms to 14ms; for an 896-core SGI
machine, from 73ms to 71ms.
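
For reference, here is a minimal standalone sketch (not hwloc's actual
code) of the two patterns being compared. The readfile() argument order
follows the patchset linked above; since the syscall was never merged
upstream, __NR_readfile below is only a placeholder and has to match
whatever number the patched kernel assigns:

/*
 * Minimal sketch, not hwloc's real code: compare the classic
 * open+read+close pattern with the proposed readfile() syscall.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Classic pattern: three syscalls per sysfs file. */
static ssize_t read_sysfs_classic(const char *path, char *buf, size_t len)
{
	int fd = open(path, O_RDONLY);
	ssize_t n;

	if (fd < 0)
		return -1;
	n = read(fd, buf, len - 1);
	close(fd);
	if (n >= 0)
		buf[n] = '\0';
	return n;
}

/* Proposed pattern: a single syscall per sysfs file. */
#ifndef __NR_readfile
#define __NR_readfile 443	/* placeholder only; no number was ever allocated */
#endif
static ssize_t read_sysfs_readfile(const char *path, char *buf, size_t len)
{
	ssize_t n = syscall(__NR_readfile, AT_FDCWD, path, buf, len - 1, 0);

	if (n >= 0)
		buf[n] = '\0';
	return n;
}

int main(void)
{
	char buf[64];

	if (read_sysfs_classic("/sys/devices/system/cpu/cpu0/topology/core_id",
			       buf, sizeof(buf)) >= 0)
		printf("core_id (open+read+close): %s", buf);
	if (read_sysfs_readfile("/sys/devices/system/cpu/cpu0/topology/core_id",
				buf, sizeof(buf)) >= 0)
		printf("core_id (readfile): %s", buf);
	return 0;
}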

I see a 200ns improvement for readfile (2300ns) vs open+read+close (2500ns)
on my server when reading a single cpu topology file. With several
thousand sysfs files to read in the large hwloc tests above, that is
consistent with an overall improvement on the order of 1ms.
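(Roughly: 200ns saved per file times a few thousand files comes to about
1ms, which matches the 1-2ms differences measured above; the exact file
count varies per machine.)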

So, just like you said, the overhead seems to be pretty much lost in the
noise of hwloc doing its own stuff after reading hundreds of sysfs files :/

Brice

