Message-ID: <67d9acd0-692f-95d4-2c92-4e43e1d0100c@loongson.cn>
Date:   Mon, 17 Oct 2022 12:22:00 +0800
From:   Jinyang He <hejinyang@...ngson.cn>
To:     Huacai Chen <chenhuacai@...ngson.cn>,
        Huacai Chen <chenhuacai@...nel.org>
Cc:     loongarch@...ts.linux.dev, Xuefeng Li <lixuefeng@...ngson.cn>,
        Tiezhu Yang <yangtiezhu@...ngson.cn>,
        Guo Ren <guoren@...nel.org>, Xuerui Wang <kernel@...0n.name>,
        Jiaxun Yang <jiaxun.yang@...goat.com>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH V2] LoongArch: Add unaligned access support

Hi, Huacai,


On 2022/10/17 10:23 AM, Huacai Chen wrote:
> [...]
> +	default:
> +		panic("unexpected fd '%d'", fd);
Due to GCC's optimization, the panic() is actually unused, and it leaves
the 'read/write_fpr' symbols in vmlinux. Maybe we can use unreachable()
and always_inline instead.
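
Something like this minimal sketch (untested; the switch shape and the
per-register accessors are my assumption, not copied from the patch):

static __always_inline unsigned long read_fpr(unsigned int fd)
{
	switch (fd) {
	case 0:
		return read_fpr0();	/* hypothetical per-register accessor */
	case 1:
		return read_fpr1();
	/* ... remaining cases up to 31 ... */
	default:
		unreachable();	/* fd is always in 0..31 at the call sites */
	}
}

With __always_inline the compiler has to inline every call, the dead
default arm is discarded, and no read_fpr/write_fpr symbol is left in
vmlinux.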

> [...]
> +
> +fault:
> +	/* roll back jump/branch */
> +	regs->csr_era = origpc;
> +	regs->regs[1] = origra;

I'm not sure where csr_era and regs[1] could have been clobbered...

> [...]
>
> +/*
> + * unsigned long unaligned_read(void *addr, void *value, unsigned long n, bool sign)
> + *
> + * a0: addr
> + * a1: value
> + * a2: n
> + * a3: sign
> + */
> +SYM_FUNC_START(unaligned_read)
> +	beqz	a2, 5f
> +
> +	li.w	t1, 8
IMHO we can avoid the constant reg t1 here; a single shift does the
same job (see the sketch after SYM_FUNC_END below).
> +	li.w	t2, 0
> +
> +	addi.d	t0, a2, -1
> +	mul.d	t1, t0, t1
> +	add.d 	a0, a0, t0
> +
> +	beq	a3, zero, 2f
beqz
> +1:	ld.b	t3, a0, 0
> +	b	3f
> +
> +2:	ld.bu	t3, a0, 0
> +3:	sll.d	t3, t3, t1
> +	or	t2, t2, t3
> +	addi.d	t1, t1, -8
> +	addi.d	a0, a0, -1
> +	addi.d	a2, a2, -1
> +	bgt	a2, zero, 2b
bgtz
> +4:	st.d	t2, a1, 0
> +
> +	move	a0, a2
> +	jr	ra
> +
> +5:	li.w    a0, -EFAULT
> +	jr	ra
> +
> +	fixup_ex 1, 6, 1
> +	fixup_ex 2, 6, 0
> +	fixup_ex 4, 6, 0
> +SYM_FUNC_END(unaligned_read)
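
For example, the prologue and the loop tail could become (an untested
sketch, only the changed lines shown):

	beqz	a2, 5f

	li.w	t2, 0
	addi.d	t0, a2, -1
	slli.d	t1, t0, 3	# t1 = (n - 1) * 8, no constant register needed
	add.d	a0, a0, t0

	beqz	a3, 2f		# instead of: beq a3, zero, 2f
	...
	bgtz	a2, 2b		# instead of: bgt a2, zero, 2b
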
> +
> +/*
> + * unsigned long unaligned_write(void *addr, unsigned long value, unsigned long n)
> + *
> + * a0: addr
> + * a1: value
> + * a2: n
> + */
> +SYM_FUNC_START(unaligned_write)
> +	beqz	a2, 3f
> +
> +	li.w	t0, 0
> +1:	srl.d	t1, a1, t0
> +2:	st.b	t1, a0, 0
> +	addi.d	t0, t0, 8
> +	addi.d	a2, a2, -1
> +	addi.d	a0, a0, 1
> +	bgt	a2, zero, 1b
bgtz
> +
> +	move	a0, a2
> +	jr	ra
> +
> +3:	li.w    a0, -EFAULT
> +	jr	ra
> +
> +	fixup_ex 2, 4, 1
> +SYM_FUNC_END(unaligned_write)
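
For reference, the C-level contract implied by the two header comments
would be roughly this (a sketch; the call-site context and the -EFAULT
return on the fixup path are my assumptions, not taken from the patch):

	unsigned long unaligned_read(void *addr, void *value,
				     unsigned long n, bool sign);
	unsigned long unaligned_write(void *addr, unsigned long value,
				      unsigned long n);

	/* e.g. emulating a 4-byte unsigned load from an unaligned addr */
	unsigned long val, ret;

	ret = unaligned_read(addr, &val, 4, false);
	if (ret)
		return ret;	/* faulted while reading the bytes */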

Thanks,

Jinyang
