Message-ID: <6098915f-c5ac-40dc-80af-026891e92d4d@rivosinc.com>
Date: Sun, 17 Sep 2023 16:10:59 +0200
From: Clément Léger <cleger@...osinc.com>
To: "Masami Hiramatsu (Google)" <mhiramat@...nel.org>
Cc: Steven Rostedt <rostedt@...dmis.org>, linux-kernel@...r.kernel.org,
linux-trace-kernel@...r.kernel.org,
Beau Belgrave <beaub@...ux.microsoft.com>
Subject: Re: [PATCH] tracing/user_events: align uaddr on unsigned long alignment
On 15/09/2023 04:54, Masami Hiramatsu (Google) wrote:
> On Thu, 14 Sep 2023 15:11:02 +0200
> Clément Léger <cleger@...osinc.com> wrote:
>
>> enabler->uaddr can be aligned on 32 or 64 bits. If aligned on 32 bits,
>> this will result in a misaligned access on 64-bit architectures since
>> set_bit()/clear_bit() expect an unsigned long (aligned) pointer.
>> On architectures that do not support misaligned accesses, this will
>> crash the kernel. Align uaddr on unsigned long size to avoid such
>> behavior. This bug was found while running kselftests on RISC-V.
>>
>> Fixes: 7235759084a4 ("tracing/user_events: Use remote writes for event enablement")
>> Signed-off-by: Clément Léger <cleger@...osinc.com>
>> ---
>> kernel/trace/trace_events_user.c | 12 +++++++++---
>> 1 file changed, 9 insertions(+), 3 deletions(-)
>>
>> diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
>> index 6f046650e527..580c0fe4b23e 100644
>> --- a/kernel/trace/trace_events_user.c
>> +++ b/kernel/trace/trace_events_user.c
>> @@ -479,7 +479,7 @@ static int user_event_enabler_write(struct user_event_mm *mm,
>> bool fixup_fault, int *attempt)
>> {
>> unsigned long uaddr = enabler->addr;
>> - unsigned long *ptr;
>> + unsigned long *ptr, bit_offset;
>> struct page *page;
>> void *kaddr;
>> int ret;
>> @@ -511,13 +511,19 @@ static int user_event_enabler_write(struct user_event_mm *mm,
>> }
>>
>> kaddr = kmap_local_page(page);
>> +
>> + bit_offset = uaddr & (sizeof(unsigned long) - 1);
>> + if (bit_offset) {
>> + bit_offset *= 8;
>> + uaddr &= ~(sizeof(unsigned long) - 1);
>> + }
>> ptr = kaddr + (uaddr & ~PAGE_MASK);
>>
>> /* Update bit atomically, user tracers must be atomic as well */
>> if (enabler->event && enabler->event->status)
>> - set_bit(ENABLE_BIT(enabler), ptr);
>> + set_bit(ENABLE_BIT(enabler) + bit_offset, ptr);
>> else
>> - clear_bit(ENABLE_BIT(enabler), ptr);
>> + clear_bit(ENABLE_BIT(enabler) + bit_offset, ptr);
>
> What we need are generic set_bit_aligned() and clear_bit_aligned(), which align
> the ptr to unsigned long. (I think it should be done in set_bit/clear_bit, for
> architectures which require aligned access...)
>
> #define LONG_ALIGN_DIFF(p)	((unsigned long)(p) & (sizeof(long) - 1))
> #define LONG_ALIGNED(p)	((unsigned long *)((unsigned long)(p) & ~(sizeof(long) - 1)))
>
> static inline void set_bit_aligned(int bit, unsigned long *ptr)
> {
> int offs = LONG_ALIGN_DIFF(ptr) * 8;
>
> #ifdef __BIG_ENDIAN
> if (bit >= offs) {
> set_bit(bit - offs, LONG_ALIGNED(ptr));
> } else {
> set_bit(bit + BITS_PER_LONG - offs, LONG_ALIGNED(ptr) + 1);
> }
> #else
> if (bit < BITS_PER_LONG - offs) {
> set_bit(bit + offs, LONG_ALIGNED(ptr));
> } else {
> set_bit(bit - BITS_PER_LONG + offs, LONG_ALIGNED(ptr) + 1);
> }
> #endif
> }
Hi Masami,

Indeed, that is a more elegant version, thanks for the snippet.
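
For the record, the matching clear_bit_aligned() would simply mirror your
set_bit_aligned() with clear_bit() swapped in, and the call site in
user_event_enabler_write() could then keep passing the original (possibly
misaligned) pointer. Untested sketch, just to make sure I understood the idea:

static inline void clear_bit_aligned(int bit, unsigned long *ptr)
{
	/* Same offset computation as in set_bit_aligned() */
	int offs = LONG_ALIGN_DIFF(ptr) * 8;

#ifdef __BIG_ENDIAN
	if (bit >= offs)
		clear_bit(bit - offs, LONG_ALIGNED(ptr));
	else
		clear_bit(bit + BITS_PER_LONG - offs, LONG_ALIGNED(ptr) + 1);
#else
	if (bit < BITS_PER_LONG - offs)
		clear_bit(bit + offs, LONG_ALIGNED(ptr));
	else
		clear_bit(bit - BITS_PER_LONG + offs, LONG_ALIGNED(ptr) + 1);
#endif
}

and in user_event_enabler_write():

	ptr = kaddr + (uaddr & ~PAGE_MASK);

	/* Update bit atomically, user tracers must be atomic as well */
	if (enabler->event && enabler->event->status)
		set_bit_aligned(ENABLE_BIT(enabler), ptr);
	else
		clear_bit_aligned(ENABLE_BIT(enabler), ptr);

If that is what you had in mind, I can respin the patch with these helpers.
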
Clément
>
> And use it.
>
> Thank you,
>
>>
>> kunmap_local(kaddr);
>> unpin_user_pages_dirty_lock(&page, 1, true);
>> --
>> 2.40.1
>>
>
>