Message-ID: <CADUfDZpH5v8jxphVRGvD5o-jLXiDbTw0SsAxzTSCLGyua9erjQ@mail.gmail.com>
Date: Wed, 16 Apr 2025 15:30:15 -0700
From: Caleb Sander Mateos <csander@...estorage.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-mm@...ck.org, linux-block@...r.kernel.org, 
	linux-kernel@...r.kernel.org, Eric Biggers <ebiggers@...nel.org>
Subject: Re: [PATCH] scatterlist: inline sg_next()

On Wed, Apr 16, 2025 at 3:05 PM Andrew Morton <akpm@...ux-foundation.org> wrote:
>
> On Wed, 16 Apr 2025 10:06:13 -0600 Caleb Sander Mateos <csander@...estorage.com> wrote:
>
> > sg_next() is a short function called frequently in I/O paths. Define it
> > in the header file so it can be inlined into its callers.
>
> Does this actually make anything faster?
>
> net/ceph/messenger_v2.c has four calls to sg_next().  x86_64 defconfig:

Hmm, I count 7 calls in the source code, and that excludes functions
defined in included header files that also call sg_next(). The
functions which call sg_next() could themselves be inlined, resulting
in even more call sites. The object file does appear to have 7 calls
to sg_next():
$ readelf -r net/ceph/messenger_v2.o | grep -c sg_next
7
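
For what it's worth, one quick way to see which functions those calls
land in is to walk the disassembly and print the enclosing symbol for
each sg_next relocation (hypothetical one-liner, not something I ran):
$ objdump -dr net/ceph/messenger_v2.o | awk '/>:/ { fn = $2 } /sg_next/ { print fn }'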

>
> x1:/usr/src/25> size net/ceph/messenger_v2.o
>    text    data     bss     dec     hex filename
>   31486    2212       0   33698    83a2 net/ceph/messenger_v2.o
>
> after:
>
>   31742    2212       0   33954    84a2 net/ceph/messenger_v2.o
>
> More text means more cache misses.  Possibly the patch slows things down??

Yes, it's true that inlining doesn't necessarily improve performance.
(Though for scale: that's 31742 - 31486 = 256 bytes of added text for
7 call sites, about 37 bytes per call.)
For reference, the workload I am looking at is issuing 32 KB NVMe
reads, which results in calling sg_next() from nvme_pci_setup_prps().
About 0.5% of the CPU time is spent in sg_next() itself (not counting
the cost of calling into it).
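
For anyone who wants to reproduce that number, something along these
lines should work (the fio job below is just a stand-in for my actual
workload, which issues 32 KB reads to an NVMe device):
$ fio --name=randread --filename=/dev/nvme0n1 --rw=randread --bs=32k \
      --ioengine=io_uring --direct=1 --time_based --runtime=30 &
$ perf record -a -g sleep 30
$ perf report --stdio | grep sg_next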
Inlining the function could help save the cost of the call + return,
as well as improve branch prediction rates for the if (sg_is_last(sg))
check by creating a separate copy of the branch in each caller.
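
For reference, here is roughly what sg_next() looks like today
(paraphrased from lib/scatterlist.c; the idea of the patch is to make
this a static inline in include/linux/scatterlist.h):

static inline struct scatterlist *sg_next(struct scatterlist *sg)
{
	/* This is the branch that inlining duplicates into each caller: */
	if (sg_is_last(sg))
		return NULL;

	sg++;
	/* For chained sg lists, follow the link to the next array */
	if (unlikely(sg_is_chain(sg)))
		sg = sg_chain_ptr(sg);

	return sg;
}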
My guess is that most workloads (like mine) don't call sg_next() from
all that many places. So even though inlining would duplicate the code
into all callers, not all of those callers are hot. The number of call
sites actually loaded into the instruction cache is likely to be
small, so the increase in cached instructions wouldn't be as steep as
the text size suggests.

That's all to say: the costs and benefits are workload-dependent. And
in all likelihood, they will be pretty small either way.

Best,
Caleb
