Message-ID: <d156242a-f104-4925-9736-624a4ba8210d@rivosinc.com>
Date: Mon, 27 Nov 2023 13:57:59 +0100
From: Clément Léger <cleger@...osinc.com>
To: Ben Dooks <ben.dooks@...ethink.co.uk>,
Paul Walmsley <paul.walmsley@...ive.com>,
Palmer Dabbelt <palmer@...belt.com>,
Albert Ou <aou@...s.berkeley.edu>,
linux-riscv@...ts.infradead.org, linux-kernel@...r.kernel.org
Cc: Christoph Hellwig <hch@...radead.org>,
kernel test robot <lkp@...el.com>
Subject: Re: [PATCH v2] riscv: fix incorrect use of __user pointer
On 27/11/2023 13:46, Clément Léger wrote:
>
>
>>> @@ -369,16 +366,14 @@ static inline int get_insn(struct pt_regs *regs, ulong epc, ulong *r_insn)
>>>  			*r_insn = insn;
>>>  			return 0;
>>>  		}
>>> -		insn_addr++;
>>> -		if (__read_insn(regs, tmp, insn_addr))
>>> +		epc += sizeof(u16);
>>> +		if (__read_insn(regs, tmp, epc, u16))
>>>  			return -EFAULT;
>>>  		*r_insn = (tmp << 16) | insn;
>>>  		return 0;
>>>  	} else {
>>> -		u32 __user *insn_addr = (u32 __user *)epc;
>>> -
>>> -		if (__read_insn(regs, insn, insn_addr))
>>> +		if (__read_insn(regs, insn, epc, u32))
>>>  			return -EFAULT;
>>>  		if ((insn & __INSN_LENGTH_MASK) == __INSN_LENGTH_32) {
>>>  			*r_insn = insn;
>>> @@ -491,7 +486,7 @@ int handle_misaligned_load(struct pt_regs *regs)
>>>  	val.data_u64 = 0;
>>>  	for (i = 0; i < len; i++) {
>>> -		if (load_u8(regs, (void *)(addr + i), &val.data_bytes[i]))
>>> +		if (load_u8(regs, addr + i, &val.data_bytes[i]))
>>>  			return -1;
>>>  	}
>>> @@ -589,7 +584,7 @@ int handle_misaligned_store(struct pt_regs *regs)
>>>  		return -EOPNOTSUPP;
>>>  	for (i = 0; i < len; i++) {
>>> -		if (store_u8(regs, (void *)(addr + i), val.data_bytes[i]))
>>> +		if (store_u8(regs, addr + i, val.data_bytes[i]))
>>>
>>
>> Would it not be easier to have a switch here for memcpy or copy_to_user?
>
> Most probably yes. I'll update the V3 with these modifications. We'll
> get rid of store_u8()/load_u8() entirely.
While memcpy()/copy_from_user() will be slower than the solution David
Laight proposed (reading two 4/8-byte words at once and shifting the
content), it is more maintainable, and this code path can afford a few
lost cycles (emulating misaligned accesses is already horribly slow...).
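For the record, I have something like this untested sketch in mind (the
helper name is made up, this is not the actual v3 code):

	/* Hypothetical helper, needs <linux/string.h> and <linux/uaccess.h> */
	static int copy_from_addr(struct pt_regs *regs, void *dst,
				  unsigned long addr, size_t len)
	{
		if (user_mode(regs)) {
			/* Fault came from user space, go through uaccess */
			if (copy_from_user(dst, (void __user *)addr, len))
				return -EFAULT;
			return 0;
		}

		/* Kernel address, a plain memcpy is enough */
		memcpy(dst, (void *)addr, len);
		return 0;
	}

plus the symmetric memcpy()/copy_to_user() version for the store side.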
BTW, I need to check, but maybe we can also get rid of all the
M_MODE-specific code (__read_insn) entirely.
Clément
>
> Thanks,
>
> Clément
>
>>
>>>  			return -1;
>>>  	}
>>>
>>