Message-ID: <ccbe4bd6-5550-48ce-b056-42fc47e2e468@gmail.com>
Date: Mon, 7 Aug 2023 15:51:45 +0530
From: Shreenidhi Shedi <yesshedi@...il.com>
To: Masahiro Yamada <masahiroy@...nel.org>,
Greg KH <gregkh@...uxfoundation.org>
Cc: dhowells@...hat.com, dwmw2@...radead.org,
linux-kernel@...r.kernel.org, sshedi@...are.com
Subject: Re: [PATCH v6 0/7] refactor file signing program
On 07/08/23 13:47, Shreenidhi Shedi wrote:
> On 07/08/23 07:53, Masahiro Yamada wrote:
>> On Thu, Jun 1, 2023 at 6:08 PM Greg KH <gregkh@...uxfoundation.org>
>> wrote:
>>>
>>> On Thu, Jun 01, 2023 at 02:33:23PM +0530, Shreenidhi Shedi wrote:
>>>> On Wed, 31-May-2023 22:20, Greg KH wrote:
>>>>> On Wed, May 31, 2023 at 09:01:24PM +0530, Shreenidhi Shedi wrote:
>>>>>> On Wed, 31-May-2023 20:08, Greg KH wrote:
>>>>>>> On Tue, Apr 25, 2023 at 04:14:49PM +0530, Shreenidhi Shedi wrote:
>>>>>>>> On Wed, 22-Mar-2023 01:03, Shreenidhi Shedi wrote:
>>>>>>>> Can you please review the latest patch series? I think I have
>>>>>>>> addressed your
>>>>>>>> concerns. Thanks.
>>>>>>>
>>>>>>> The big question is, "who is going to use these new features"? This
>>>>>>> tool is only used by the in-kernel build scripts, and if they do not
>>>>>>> take advantage of these new options you have added, why are they
>>>>>>> needed?
>>>>>>>
>>>>>>> thanks,
>>>>>>>
>>>>>>> greg k-h
>>>>>>
>>>>>> Hi Greg,
>>>>>>
>>>>>> Thanks for the response.
>>>>>>
>>>>>> We use it in VMware Photon OS; here is the link:
>>>>>> https://github.com/vmware/photon/blob/master/SPECS/linux/spec_install_post.inc#L4
>>>>>>
>>>>>> If this change goes in, it will give a slight push to our build
>>>>>> performance.
>>>>>
>>>>> What exactly do you mean by "slight push"?
>>>>
>>>> Instead of invoking the signing tool binary for each module, we can
>>>> pass modules in bulk, and it will reduce the build time by a couple
>>>> of seconds.
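>>>>
>>>> For illustration (the hash, key and paths here are hypothetical, and
>>>> the multi-module form is what this series adds):
>>>>
>>>> # today: one fork+exec of scripts/sign-file per module
>>>> for m in "$MODULES_PATH"/*.ko; do
>>>>     scripts/sign-file sha384 certs/signing_key.pem certs/signing_key.x509 "$m"
>>>> done
>>>>
>>>> # with this series: one invocation signs a whole batch
>>>> scripts/sign-file sha384 certs/signing_key.pem certs/signing_key.x509 "$MODULES_PATH"/*.ko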
>>>
>>> Then why not modify the in-kernel build system to also do this, allowing
>>> everyone to save time and money (i.e. energy)?
>>>
>>> Why keep the build savings to yourself?
>>>
>>> thanks,
>>>
>>> greg k-h
>>
>>
>> If I understand correctly,
>> "sign-file: add support to sign modules in bulk"
>> is the only benefit of this patchset.
>>
>> I tested the bulk option, but I did not see build savings.
>>
>> My evaluation:
>> 1. 'make allmodconfig all', then 'make modules_install'.
>> (9476 modules installed)
>>
>> 2. I ran 'perf stat' for single signing vs bulk signing
>> (5 runs for each).
>> I changed the -n option in scripts/signfile.sh
>>
>> A. single sign
>>
>> Sign one module per scripts/sign-file invocation:
>>
>> find "${MODULES_PATH}" -name '*.ko' -type f -print0 | \
>>     xargs -r -0 -P"$(nproc)" -x -n1 sh -c "..."
>>
>> Performance counter stats for './signfile-single.sh' (5 runs):
>>
>> 22.33 +- 2.26 seconds time elapsed ( +- 10.12% )
>>
>> B. bulk sign
>>
>> Sign 32 modules per scripts/sign-file invocation:
>>
>> find "${MODULES_PATH}" -name '*.ko' -type f -print0 | \
>>     xargs -r -0 -P"$(nproc)" -x -n32 sh -c "..."
>>
>>
>> Performance counter stats for './signfile-bulk.sh' (5 runs):
>>
>> 24.78 +- 3.01 seconds time elapsed ( +- 12.14% )
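>>
>> (The "..." payload above invokes scripts/sign-file; a sketch, with the
>> hash, key and cert paths assumed rather than taken from this thread:)
>>
>> find "${MODULES_PATH}" -name '*.ko' -type f -print0 | \
>>     xargs -r -0 -P"$(nproc)" -x -n32 sh -c \
>>     'scripts/sign-file sha384 certs/signing_key.pem certs/signing_key.x509 "$@"' _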
>>
>> The bulk option decreases the number of scripts/sign-file forks,
>> but I did not see even a "slight push".
>>
>
> That's some really interesting data. I'm surprised that bulk signing
> does not perform as expected in a standalone run. Can you give the full
> command you used for bulk signing? A reduced number of forks should
> eventually lead to getting more done in less time.
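>
> A quick way to see the raw fork/exec overhead in isolation (a no-op
> binary instead of sign-file, ~1000 execs vs ~32):
> ```
> time sh -c 'seq 1000 | xargs -n1  /bin/true'
> time sh -c 'seq 1000 | xargs -n32 /bin/true'
> ```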
>
> But I got a ~1.4 second boost when I did "make modules_install".
>
> I gave the data in my other response as well; copying it here because
> it has better context in this thread.
>
> ```
> root@...dev:~/linux-6.3.5 # ./test.sh orig
>
> real    0m14.699s
> user    0m55.519s
> sys     0m9.036s
>
> root@...dev:~/linux-6.3.5 # ./test.sh new
>
> real    0m13.327s
> user    0m46.885s
> sys     0m6.770s
> ```
>
> Here is my test script.
> ```
> #!/bin/bash
> # Swap in the "orig" or "new" sign-file sources, rebuild the signer,
> # and time a full modules_install.
>
> set -e
>
> if [ "$1" != "new" ] && [ "$1" != "orig" ]; then
>     echo "invalid arg, ($0 [orig|new])" >&2
>     exit 1
> fi
>
> rm -rf "$PWD/tmp"
>
> s="scripts/sign-file.c"
> m="scripts/Makefile.modinst"
> fns=("$s" "$m")
>
> for f in "${fns[@]}"; do
>     cp "$f.$1" "$f"
> done
>
> cd scripts
> gcc -o sign-file sign-file.c -lcrypto
> cd -
>
> time make modules_install INSTALL_MOD_PATH="$PWD/tmp" -s -j"$(nproc)"
> ```
>
I ran the signfile script again using perf, almost the same as the
method you followed. I have 991 modules in the target modules directory.
Here is the report:
```
root@...dev:~/linux-6.3.5 # perf stat ./signfile.sh sha384 certs/signing_key.pem 1

 Performance counter stats for './signfile.sh sha384 certs/signing_key.pem 1':

         18,498.62 msec task-clock          #    7.901 CPUs utilized
             6,211      context-switches    #  335.755 /sec
                52      cpu-migrations      #    2.811 /sec
           554,414      page-faults         #   29.971 K/sec

       2.341202651 seconds time elapsed

      14.891415000 seconds user
       3.018111000 seconds sys

root@...dev:~/linux-6.3.5 # perf stat ./signfile.sh sha384 certs/signing_key.pem 32

 Performance counter stats for './signfile.sh sha384 certs/signing_key.pem 32':

          8,397.24 msec task-clock          #    7.548 CPUs utilized
             1,237      context-switches    #  147.310 /sec
                 0      cpu-migrations      #    0.000 /sec
            22,529      page-faults         #    2.683 K/sec

       1.112510013 seconds time elapsed

       8.057543000 seconds user
       0.323572000 seconds sys
```
And now the interesting part. I tested the time saved with only
modules_sign.
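b.sh is not included in this mail; a minimal sketch of it, assuming it
mirrors test.sh above but times only the signing pass:
```
#!/bin/bash
# b.sh (sketch, not the exact script): swap in the orig/new sources,
# rebuild the signer, then time "make modules_sign" alone.
set -e
for f in scripts/sign-file.c scripts/Makefile.modinst; do
    cp "$f.$1" "$f"
done
gcc -o scripts/sign-file scripts/sign-file.c -lcrypto
time make modules_sign -s -j"$(nproc)"
```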
```
root@...dev:~/linux-6.3.5 # ./b.sh new

real    0m1.756s
user    0m8.481s
sys     0m0.553s

root@...dev:~/linux-6.3.5 # ./b.sh orig

real    0m3.078s
user    0m16.801s
sys     0m3.096s

root@...dev:~/linux-6.3.5 # ./b.sh new

real    0m1.757s
user    0m8.554s
sys     0m0.504s

root@...dev:~/linux-6.3.5 # ./b.sh orig

real    0m3.098s
user    0m16.855s
sys     0m3.073s
```
And the signfile.sh script shows the same. I tweaked it a bit to accept
the xargs -n batch size (modules per sign-file invocation) as another
arg.
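The tweak is roughly this (a sketch; MODULES_PATH and the x509 path are
assumed, not taken from the actual script):
```
#!/bin/bash
# signfile.sh (tweaked, sketch): $3 is the xargs -n batch size, i.e.
# how many modules each scripts/sign-file invocation signs.
hash="$1"; key="$2"; n="${3:-1}"
find "${MODULES_PATH:?set MODULES_PATH}" -name '*.ko' -type f -print0 | \
    xargs -r -0 -P"$(nproc)" -x -n"$n" \
    scripts/sign-file "$hash" "$key" certs/signing_key.x509
```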
```
root@...dev:~/linux-6.3.5 # time ./signfile.sh sha384 certs/signing_key.pem 1

real    0m2.343s
user    0m14.916s
sys     0m2.890s

root@...dev:~/linux-6.3.5 # time ./signfile.sh sha384 certs/signing_key.pem 32

real    0m1.120s
user    0m8.120s
sys     0m0.276s
```
So, every run saves ~2 seconds. I think something is wrong in the way
you tested; please re-check at your end.
--
Shedi