Message-ID: <3a71b234ec05b6ce842a3d6da552ba30@mailhost.ics.forth.gr>
Date:   Tue, 22 Jun 2021 03:46:16 +0300
From:   Nick Kossifidis <mick@....forth.gr>
To:     Matteo Croce <mcroce@...ux.microsoft.com>
Cc:     linux-riscv@...ts.infradead.org, linux-kernel@...r.kernel.org,
        linux-arch@...r.kernel.org,
        Paul Walmsley <paul.walmsley@...ive.com>,
        Palmer Dabbelt <palmer@...belt.com>,
        Albert Ou <aou@...s.berkeley.edu>,
        Atish Patra <atish.patra@....com>,
        Emil Renner Berthing <kernel@...il.dk>,
        Akira Tsukamoto <akira.tsukamoto@...il.com>,
        Drew Fustini <drew@...gleboard.org>,
        Bin Meng <bmeng.cn@...il.com>,
        David Laight <David.Laight@...lab.com>,
        Guo Ren <guoren@...nel.org>
Subject: Re: [PATCH v3 2/3] riscv: optimized memmove

On 2021-06-17 18:27, Matteo Croce wrote:
> +
> +/*
> + * Check whether the buffers overlap and call memcpy() when it is safe
> + * to do so; otherwise do a simple one-byte-at-a-time backward copy.
> + */
> +void *__memmove(void *dest, const void *src, size_t count)
> +{
> +	if (dest < src || src + count <= dest)
> +		return memcpy(dest, src, count);
> +
> +	if (dest > src) {
> +		const char *s = src + count;
> +		char *tmp = dest + count;
> +
> +		while (count--)
> +			*--tmp = *--s;
> +	}
> +	return dest;
> +}
> +EXPORT_SYMBOL(__memmove);
> +

Copying backwards byte-per-byte is suboptimal. I understand this is not
a very common scenario, but you could at least check whether both
buffers are word-aligned, e.g. (((src + len) | (dst + len)) & mask), or
misaligned by the same offset, e.g. (((src + len) ^ (dst + len)) & mask),
and still end up doing word-by-word copying. Ideally it would be great
if you re-used the same technique you used for forwards copying in your
memcpy.
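
To make that concrete, here is a rough, untested sketch of a word-wise
backward copy with such an alignment check; word_t and WORD_MASK are
made-up names for illustration, not anything from the patch:

#include <stddef.h>

/*
 * Illustrative sketch only (not the patch code): copy backwards word by
 * word when src and dest end with the same alignment, and fall back to
 * single bytes otherwise.
 */
typedef unsigned long word_t;
#define WORD_MASK (sizeof(word_t) - 1)

static void *backward_copy(void *dest, const void *src, size_t count)
{
	char *d = (char *)dest + count;
	const char *s = (const char *)src + count;

	if ((((unsigned long)d ^ (unsigned long)s) & WORD_MASK) == 0) {
		/* Align the tail byte by byte, then copy whole words. */
		while (count && ((unsigned long)d & WORD_MASK)) {
			*--d = *--s;
			count--;
		}
		while (count >= sizeof(word_t)) {
			d -= sizeof(word_t);
			s -= sizeof(word_t);
			*(word_t *)d = *(const word_t *)s;
			count -= sizeof(word_t);
		}
	}

	/* Remaining (or completely misaligned) bytes one at a time. */
	while (count--)
		*--d = *--s;

	return dest;
}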

> +void *memmove(void *dest, const void *src, size_t count) __weak
> __alias(__memmove);
> +EXPORT_SYMBOL(memmove);

As I mentioned on your memcpy patch, if you implement memmove, you can 
just alias memcpy to memmove and we won't have to worry about memcpy 
being used on overlapping regions.
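
For the record, the aliasing I'm suggesting would look roughly like the
following, mirroring the memmove alias in your patch (a sketch of the
idea, not code from the patch):

/*
 * Since a correct memmove() also handles non-overlapping buffers,
 * memcpy() can simply be an alias of the memmove() implementation.
 */
void *memcpy(void *dest, const void *src, size_t count) __weak __alias(__memmove);
EXPORT_SYMBOL(memcpy);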
