Message-Id: <D1C10635-7799-4400-B85E-746D156D714A@sifive.com>
Date:   Mon, 27 Nov 2023 10:49:05 +0800
From:   Jerry Shih <jerry.shih@...ive.com>
To:     Eric Biggers <ebiggers@...nel.org>
Cc:     Paul Walmsley <paul.walmsley@...ive.com>, palmer@...belt.com,
        Albert Ou <aou@...s.berkeley.edu>, herbert@...dor.apana.org.au,
        davem@...emloft.net, andy.chiu@...ive.com, greentime.hu@...ive.com,
        conor.dooley@...rochip.com, guoren@...nel.org, bjorn@...osinc.com,
        heiko@...ech.de, ardb@...nel.org, phoebe.chen@...ive.com,
        hongrong.hsu@...ive.com, linux-riscv@...ts.infradead.org,
        linux-kernel@...r.kernel.org, linux-crypto@...r.kernel.org
Subject: Re: [PATCH 07/12] RISC-V: crypto: add Zvkg accelerated GCM GHASH
 implementation

On Nov 22, 2023, at 09:42, Eric Biggers <ebiggers@...nel.org> wrote:
> On Thu, Oct 26, 2023 at 02:36:39AM +0800, Jerry Shih wrote:
>> +struct riscv64_ghash_context {
>> +	be128 key;
>> +};
>> +
>> +struct riscv64_ghash_desc_ctx {
>> +	be128 shash;
>> +	u8 buffer[GHASH_BLOCK_SIZE];
>> +	u32 bytes;
>> +};
> 
> I recommend calling the first struct 'riscv64_ghash_tfm_ctx', and making the
> pointers to it be named 'tctx'.  That would more clearly distinguish it from the
> desc_ctx / dctx.

Fixed.
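For reference, a minimal sketch of the renamed types (same fields, only the
names change, with tctx/dctx as the pointer names):

        /* per-tfm (key) context */
        struct riscv64_ghash_tfm_ctx {
                be128 key;
        };

        /* per-request descriptor context */
        struct riscv64_ghash_desc_ctx {
                be128 shash;
                u8 buffer[GHASH_BLOCK_SIZE];
                u32 bytes;
        };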

>> +
>> +typedef void (*ghash_func)(be128 *Xi, const be128 *H, const u8 *inp,
>> +			   size_t len);
>> +
>> +static inline void ghash_blocks(const struct riscv64_ghash_context *ctx,
>> +				struct riscv64_ghash_desc_ctx *dctx,
>> +				const u8 *src, size_t srclen, ghash_func func)
>> +{
>> +	if (crypto_simd_usable()) {
>> +		kernel_vector_begin();
>> +		func(&dctx->shash, &ctx->key, src, srclen);
>> +		kernel_vector_end();
> 
> The indirection to ghash_func is unnecessary, since the only value is
> gcm_ghash_rv64i_zvkg.
> 
> This also means that ghash_update() should be folded into ghash_update_zvkg(),
> and ghash_final() into ghash_final_zvkg().

Fixed. `ghash_update()` and `ghash_final()` are folded into `ghash_update_zvkg()`
and `ghash_final_zvkg()`, which now call `gcm_ghash_rv64i_zvkg()` directly.
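For example, the SIMD path becomes a direct call (a sketch, using the tctx
rename from above):

        kernel_vector_begin();
        gcm_ghash_rv64i_zvkg(&dctx->shash, &tctx->key, src, srclen);
        kernel_vector_end();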

>> +	} else {
>> +		while (srclen >= GHASH_BLOCK_SIZE) {
>> +			crypto_xor((u8 *)&dctx->shash, src, GHASH_BLOCK_SIZE);
>> +			gf128mul_lle(&dctx->shash, &ctx->key);
>> +			srclen -= GHASH_BLOCK_SIZE;
>> +			src += GHASH_BLOCK_SIZE;
>> +		}
>> +	}
> 
> The assembly code uses the equivalent of the following do-while loop instead:
> 
>        do {
>                srclen -= GHASH_BLOCK_SIZE;
>        } while (srclen);
> 
> I.e., it assumes the length here is nonzero and a multiple of 16, which it is.
> 
> To avoid confusion, I recommend making the C code use the same do-while loop.

Fixed.
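With that change, the fallback path becomes the following (same body as the
quoted while loop, just restructured; tctx per the rename above):

        do {
                crypto_xor((u8 *)&dctx->shash, src, GHASH_BLOCK_SIZE);
                gf128mul_lle(&dctx->shash, &tctx->key);
                srclen -= GHASH_BLOCK_SIZE;
                src += GHASH_BLOCK_SIZE;
        } while (srclen);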

>>       const struct riscv64_ghash_context *ctx =
>>              crypto_tfm_ctx(crypto_shash_tfm(desc->tfm));
> 
> crypto_tfm_ctx(crypto_shash_tfm(tfm)) should be crypto_shash_ctx(tfm)

Fixed.
But the original code does the same thing.
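With crypto_shash_ctx(), the lookup collapses to a one-liner (sketch; tctx per
the rename above):

        const struct riscv64_ghash_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm);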

>> +static int ghash_final(struct shash_desc *desc, u8 *out, ghash_func func)
>> +{
>> +	const struct riscv64_ghash_context *ctx =
>> +		crypto_tfm_ctx(crypto_shash_tfm(desc->tfm));
>> +	struct riscv64_ghash_desc_ctx *dctx = shash_desc_ctx(desc);
>> +	int i;
>> +
>> +	if (dctx->bytes) {
>> +		for (i = dctx->bytes; i < GHASH_BLOCK_SIZE; i++)
>> +			dctx->buffer[i] = 0;
>> +
>> +		ghash_blocks(ctx, dctx, dctx->buffer, GHASH_BLOCK_SIZE, func);
>> +		dctx->bytes = 0;
>> +	}
>> +
> 
> Setting dctx->bytes above is unnecessary.

Fixed.
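The tail handling in ghash_final_zvkg() then reads roughly as follows (a sketch
only; the final copy-out line is assumed since it is not part of the quote
above, and the two processing paths are inlined because ghash_blocks() is
folded away):

        if (dctx->bytes) {
                for (i = dctx->bytes; i < GHASH_BLOCK_SIZE; i++)
                        dctx->buffer[i] = 0;

                if (crypto_simd_usable()) {
                        kernel_vector_begin();
                        gcm_ghash_rv64i_zvkg(&dctx->shash, &tctx->key,
                                             dctx->buffer, GHASH_BLOCK_SIZE);
                        kernel_vector_end();
                } else {
                        crypto_xor((u8 *)&dctx->shash, dctx->buffer,
                                   GHASH_BLOCK_SIZE);
                        gf128mul_lle(&dctx->shash, &tctx->key);
                }
                /* dctx->bytes is not cleared: nothing reads it after final() */
        }
        memcpy(out, &dctx->shash, GHASH_DIGEST_SIZE);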

>> +static int ghash_init(struct shash_desc *desc)
>> +{
>> +	struct riscv64_ghash_desc_ctx *dctx = shash_desc_ctx(desc);
>> +
>> +	*dctx = (struct riscv64_ghash_desc_ctx){};
>> +
>> +	return 0;
>> +}
>> +
>> +static int ghash_update_zvkg(struct shash_desc *desc, const u8 *src,
>> +			     unsigned int srclen)
>> +{
>> +	return ghash_update(desc, src, srclen, gcm_ghash_rv64i_zvkg);
>> +}
>> +
>> +static int ghash_final_zvkg(struct shash_desc *desc, u8 *out)
>> +{
>> +	return ghash_final(desc, out, gcm_ghash_rv64i_zvkg);
>> +}
>> +
>> +static int ghash_setkey(struct crypto_shash *tfm, const u8 *key,
>> +			unsigned int keylen)
>> +{
>> +	struct riscv64_ghash_context *ctx =
>> +		crypto_tfm_ctx(crypto_shash_tfm(tfm));
>> +
>> +	if (keylen != GHASH_BLOCK_SIZE)
>> +		return -EINVAL;
>> +
>> +	memcpy(&ctx->key, key, GHASH_BLOCK_SIZE);
>> +
>> +	return 0;
>> +}
>> +
>> +static struct shash_alg riscv64_ghash_alg_zvkg = {
>> +	.digestsize = GHASH_DIGEST_SIZE,
>> +	.init = ghash_init,
>> +	.update = ghash_update_zvkg,
>> +	.final = ghash_final_zvkg,
>> +	.setkey = ghash_setkey,
> 
> IMO it's helpful to order the shash functions as follows, both in their
> definitions and their fields in struct shash_alg:
> 
>    setkey
>    init
>    update
>    final
> 
> That matches the order in which they're called.

I have a different opinion: I order the initializers in the same order as the
members are declared in struct shash_alg. That helps us check whether any
function/member has been missed.

> - Eric


-Jerry
