Message-ID: <aML4FLHPvjELZR4W@visitorckw-System-Product-Name>
Date: Fri, 12 Sep 2025 00:25:56 +0800
From: Kuan-Wei Chiu <visitorckw@...il.com>
To: Caleb Sander Mateos <csander@...estorage.com>
Cc: Guan-Chun Wu <409411716@....tku.edu.tw>, akpm@...ux-foundation.org,
	axboe@...nel.dk, ceph-devel@...r.kernel.org, ebiggers@...nel.org,
	hch@....de, home7438072@...il.com, idryomov@...il.com,
	jaegeuk@...nel.org, kbusch@...nel.org,
	linux-fscrypt@...r.kernel.org, linux-kernel@...r.kernel.org,
	linux-nvme@...ts.infradead.org, sagi@...mberg.me, tytso@....edu,
	xiubli@...hat.com
Subject: Re: [PATCH v2 1/5] lib/base64: Replace strchr() for better
 performance

Hi Caleb,

On Thu, Sep 11, 2025 at 08:50:12AM -0700, Caleb Sander Mateos wrote:
> On Thu, Sep 11, 2025 at 12:33 AM Guan-Chun Wu <409411716@....tku.edu.tw> wrote:
> >
> > From: Kuan-Wei Chiu <visitorckw@...il.com>
> >
> > The base64 decoder previously relied on strchr() to locate each
> > character in the base64 table. In the worst case, this requires
> > scanning all 64 entries, and even with bitwise tricks or word-sized
> > comparisons, still needs up to 8 checks.
> >
> > Introduce a small helper function that maps input characters directly
> > to their position in the base64 table. This reduces the maximum number
> > of comparisons to 5, improving decoding efficiency while keeping the
> > logic straightforward.
> >
> > Benchmarks on x86_64 (Intel Core i7-10700 @ 2.90GHz, averaged
> > over 1000 runs, tested with KUnit):
> >
> > Decode:
> >  - 64B input: avg ~1530ns -> ~126ns (~12x faster)
> >  - 1KB input: avg ~27726ns -> ~2003ns (~14x faster)
> >
> > Signed-off-by: Kuan-Wei Chiu <visitorckw@...il.com>
> > Co-developed-by: Guan-Chun Wu <409411716@....tku.edu.tw>
> > Signed-off-by: Guan-Chun Wu <409411716@....tku.edu.tw>
> > ---
> >  lib/base64.c | 17 ++++++++++++++++-
> >  1 file changed, 16 insertions(+), 1 deletion(-)
> >
> > diff --git a/lib/base64.c b/lib/base64.c
> > index b736a7a43..9416bded2 100644
> > --- a/lib/base64.c
> > +++ b/lib/base64.c
> > @@ -18,6 +18,21 @@
> >  static const char base64_table[65] =
> >         "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
> 
> Does base64_table still need to be NUL-terminated?
> 
Right, with strchr() gone it no longer needs to be NUL-terminated.
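With the lookup handled by the helper, the declaration could shrink to
something like:

	static const char base64_table[64] =
		"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

(assuming no other user of the table relies on the terminator).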

> >
> > +static inline const char *find_chr(const char *base64_table, char ch)
> 
> Don't see a need to pass in base64_table, the function could just
> access the global variable directly.
> 
> > +{
> > +       if ('A' <= ch && ch <= 'Z')
> > +               return base64_table + ch - 'A';
> > +       if ('a' <= ch && ch <= 'z')
> > +               return base64_table + 26 + ch - 'a';
> > +       if ('0' <= ch && ch <= '9')
> > +               return base64_table + 26 * 2 + ch - '0';
> > +       if (ch == base64_table[26 * 2 + 10])
> > +               return base64_table + 26 * 2 + 10;
> > +       if (ch == base64_table[26 * 2 + 10 + 1])
> > +               return base64_table + 26 * 2 + 10 + 1;
> > +       return NULL;
> 
> This is still pretty branchy. One way to avoid the branches would be
> to define a reverse lookup table mapping base64 chars to their values
> (or a sentinel value for invalid chars). Have you benchmarked that
> approach?
> 
We've considered that approach and agree it would very likely be faster.
However, a later patch in this series adds support for users to provide
their own base64 table, so adopting a reverse lookup table would also
require each user to supply a matching reverse table. We're not sure the
extra memory in exchange for runtime speed is an acceptable trade-off for
everyone, and it could also be confusing on the API side why passing a
reverse table is mandatory.

By contrast, the simple inline helper gives us a clear performance
improvement without additional memory cost and without complicating the
API. That said, if there's consensus that a reverse lookup table is
worthwhile, we can certainly revisit the idea.
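
For reference, the rough shape we had in mind for that variant (untested,
names purely illustrative):

	/* 256-entry reverse map: 6-bit value of each base64 char, -1 if invalid */
	static s8 base64_rev_table[256];

	static void base64_build_rev_table(const char *table)
	{
		int i;

		memset(base64_rev_table, -1, sizeof(base64_rev_table));
		for (i = 0; i < 64; i++)
			base64_rev_table[(u8)table[i]] = i;
	}

and the lookup in the decode loop would become something like:

	s8 v = base64_rev_table[(u8)src[i]];

	if (v < 0) {
		/* invalid character (or '=' padding, handled as today) */
	} else {
		ac = (ac << 6) | v;
	}

Each alphabet would need its own 256-byte table (or a build step like the
one above), which is the memory/API overhead we were referring to.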

Regards,
Kuan-Wei

> 
> > +}
> > +
> >  /**
> >   * base64_encode() - base64-encode some binary data
> >   * @src: the binary data to encode
> > @@ -78,7 +93,7 @@ int base64_decode(const char *src, int srclen, u8 *dst)
> >         u8 *bp = dst;
> >
> >         for (i = 0; i < srclen; i++) {
> > -               const char *p = strchr(base64_table, src[i]);
> > +               const char *p = find_chr(base64_table, src[i]);
> >
> >                 if (src[i] == '=') {
> >                         ac = (ac << 6);
> > --
> > 2.34.1
> >
> >
