Message-ID: <d0f11655f21243ad983bd24381cdc245@AcuMS.aculab.com>
Date:   Tue, 22 Jun 2021 08:38:50 +0000
From:   David Laight <David.Laight@...LAB.COM>
To:     'Nick Kossifidis' <mick@....forth.gr>,
        Matteo Croce <mcroce@...ux.microsoft.com>
CC:     "linux-riscv@...ts.infradead.org" <linux-riscv@...ts.infradead.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "linux-arch@...r.kernel.org" <linux-arch@...r.kernel.org>,
        Paul Walmsley <paul.walmsley@...ive.com>,
        Palmer Dabbelt <palmer@...belt.com>,
        Albert Ou <aou@...s.berkeley.edu>,
        Atish Patra <atish.patra@....com>,
        "Emil Renner Berthing" <kernel@...il.dk>,
        Akira Tsukamoto <akira.tsukamoto@...il.com>,
        Drew Fustini <drew@...gleboard.org>,
        Bin Meng <bmeng.cn@...il.com>, Guo Ren <guoren@...nel.org>
Subject: RE: [PATCH v3 3/3] riscv: optimized memset

From: Nick Kossifidis
> Sent: 22 June 2021 02:08
> 
> > On 2021-06-17 18:27, Matteo Croce wrote:
> > +
> > +void *__memset(void *s, int c, size_t count)
> > +{
> > +	union types dest = { .u8 = s };
> > +
> > +	if (count >= MIN_THRESHOLD) {
> > +		const int bytes_long = BITS_PER_LONG / 8;
> 
> You could make 'const int bytes_long = BITS_PER_LONG / 8;'

What is wrong with sizeof (long) ?
...
> > +		unsigned long cu = (unsigned long)c;
> > +
> > +		/* Compose an ulong with 'c' repeated 4/8 times */
> > +		cu |= cu << 8;
> > +		cu |= cu << 16;
> > +#if BITS_PER_LONG == 64
> > +		cu |= cu << 32;
> > +#endif
> > +
> 
> You don't have to create cu here: you'll fill the dest buffer with 'c'
> anyway, so once you've written enough bytes of 'c' to be able to grab
> an aligned word full of them from dest, you can just read that word
> back and keep filling dest with it.

That will be a lot slower - especially if run on something like x86.
A write and read of the same size is optimised by the store-load forwarder,
but a byte write followed by a word read has to go via the cache.

You can just write:
	cu = (unsigned long)c * 0x0101010101010101ull;
and let the compiler sort out the best way to generate the constant.

> 
> > +#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
> > +		/* Fill the buffer one byte at time until the destination
> > +		 * is aligned on a 32/64 bit boundary.
> > +		 */
> > +		for (; count && dest.uptr % bytes_long; count--)
> 
> You could reuse & mask here instead of % bytes_long.
> 
> > +			*dest.u8++ = c;
> > +#endif
> 
> I noticed you also used CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS in your
> memcpy patch; is it worth it here? To begin with, riscv doesn't set it,
> and even if it did, we are talking about a loop that runs just a few
> times to reach the alignment boundary (worst case, it runs 7 times).
> I don't think we gain much here, even for archs that have
> efficient unaligned access.

With CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS it probably isn't worth
even checking the alignment.
While aligning the copy is quicker for an unaligned buffer, unaligned
buffers almost certainly don't happen often enough to worry about.
In any case you'd want to do a misaligned word write to the start
of the buffer - not separate byte writes.
Provided the buffer is long enough you can also do a misaligned write
to the end of the buffer before filling from the start.

I suspect you may need either barrier() or a pointer to a packed struct
to avoid the perverse 'undefined behaviour' fubar.

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)
