Message-ID: <20210419121932.GA30004@willie-the-truck>
Date: Mon, 19 Apr 2021 13:19:33 +0100
From: Will Deacon <will@...nel.org>
To: Mark Rutland <mark.rutland@....com>
Cc: Catalin Marinas <catalin.marinas@....com>,
He Zhe <zhe.he@...driver.com>, oleg@...hat.com,
linux-arm-kernel@...ts.infradead.org, paul@...l-moore.com,
eparis@...hat.com, linux-audit@...hat.com,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/3] arm64: ptrace: Add is_syscall_success to handle
compat
On Fri, Apr 16, 2021 at 02:34:41PM +0100, Mark Rutland wrote:
> On Fri, Apr 16, 2021 at 01:33:22PM +0100, Catalin Marinas wrote:
> > On Fri, Apr 16, 2021 at 03:55:31PM +0800, He Zhe wrote:
> > > The general version of is_syscall_success does not handle the 32-bit
> > > compat case, which would cause a 32-bit negative return code to be
> > > recognized as a positive number later and seen as a "success".
> > >
> > > Since is_compat_thread is defined in compat.h, implementing
> > > is_syscall_success in ptrace.h would introduce build failures due to
> > > recursive inclusion of some basic headers like mutex.h. We put the
> > > implementation in ptrace.c instead.
> > >
> > > Signed-off-by: He Zhe <zhe.he@...driver.com>
> > > ---
> > > arch/arm64/include/asm/ptrace.h | 3 +++
> > > arch/arm64/kernel/ptrace.c | 10 ++++++++++
> > > 2 files changed, 13 insertions(+)
> > >
> > > diff --git a/arch/arm64/include/asm/ptrace.h b/arch/arm64/include/asm/ptrace.h
> > > index e58bca832dff..3c415e9e5d85 100644
> > > --- a/arch/arm64/include/asm/ptrace.h
> > > +++ b/arch/arm64/include/asm/ptrace.h
> > > @@ -328,6 +328,9 @@ static inline void regs_set_return_value(struct pt_regs *regs, unsigned long rc)
> > > regs->regs[0] = rc;
> > > }
> > >
> > > +extern inline int is_syscall_success(struct pt_regs *regs);
> > > +#define is_syscall_success(regs) is_syscall_success(regs)
> > > +
> > > /**
> > > * regs_get_kernel_argument() - get Nth function argument in kernel
> > > * @regs: pt_regs of that context
> > > diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
> > > index 170f42fd6101..3266201f8c60 100644
> > > --- a/arch/arm64/kernel/ptrace.c
> > > +++ b/arch/arm64/kernel/ptrace.c
> > > @@ -1909,3 +1909,13 @@ int valid_user_regs(struct user_pt_regs *regs, struct task_struct *task)
> > > else
> > > return valid_native_regs(regs);
> > > }
> > > +
> > > +inline int is_syscall_success(struct pt_regs *regs)
> > > +{
> > > + unsigned long val = regs->regs[0];
> > > +
> > > + if (is_compat_thread(task_thread_info(current)))
> > > + val = sign_extend64(val, 31);
> > > +
> > > + return !IS_ERR_VALUE(val);
> > > +}
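
For illustration, the failure mode described in the commit message can be
reproduced with a small standalone C program (a sketch, not kernel code:
MAX_ERRNO and IS_ERR_VALUE are re-implemented here to mirror what the
generic is_syscall_success() in include/linux/ptrace.h effectively checks,
and a 64-bit build is assumed so that unsigned long is 64 bits wide):

  #include <stdint.h>
  #include <stdio.h>

  /* Illustrative copy of the kernel's error-value window check. */
  #define MAX_ERRNO       4095UL
  #define IS_ERR_VALUE(x) ((unsigned long)(x) >= (unsigned long)-MAX_ERRNO)

  int main(void)
  {
          /*
           * A compat (AArch32) syscall returning -EINVAL: w0 = 0xffffffea,
           * with the upper 32 bits of x0 zeroed on the return path.
           */
          unsigned long x0 = 0x00000000ffffffeaUL;

          /*
           * Treated as a 64-bit value, 0xffffffea is nowhere near the
           * errno window, so the failed syscall is misread as a success.
           */
          printf("raw x0:        %s\n", IS_ERR_VALUE(x0) ? "error" : "success");

          /*
           * Sign-extending from bit 31, as the patch does for compat
           * threads, turns the value back into -22 and the check works.
           */
          long extended = (int32_t)x0;
          printf("sign-extended: %s\n", IS_ERR_VALUE(extended) ? "error" : "success");

          return 0;
  }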
> >
> > It's better to use compat_user_mode(regs) here instead of
> > is_compat_thread(). It saves us from worrying whether regs are for the
> > current context.
> >
> > I think we should change regs_return_value() instead. This function
> > seems to be called from several other places and it has the same
> > potential problems if called on compat pt_regs.
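
As a rough sketch of the direction suggested here (untested, and only one
possible shape for making arm64's regs_return_value() in
arch/arm64/include/asm/ptrace.h compat-aware; it assumes compat_user_mode()
and sign_extend64() from <linux/bitops.h> are usable at that point):

  static inline unsigned long regs_return_value(struct pt_regs *regs)
  {
          unsigned long val = regs->regs[0];

          /*
           * Callers such as audit apply IS_ERR_VALUE() to this value.
           * For a compat task only the low 32 bits are meaningful, so
           * sign-extend from bit 31 instead of expecting every caller
           * to know it is looking at compat pt_regs.
           */
          if (compat_user_mode(regs))
                  val = sign_extend64(val, 31);

          return val;
  }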
>
> I think this is a problem we created for ourselves back in commit:
>
> 15956689a0e60aa0 ("arm64: compat: Ensure upper 32 bits of x0 are zero on syscall return")
>
> AFAICT, the perf regs samples are the only place this matters, since for
> ptrace the compat regs are implicitly truncated to compat_ulong_t, and
> audit expects the non-truncated return value. Other architectures don't
> truncate here, so I think we're setting ourselves up for a game of
> whack-a-mole to truncate and extend wherever we need to.
>
> Given that, I suspect it'd be better to do something like the below.
>
> Will, thoughts?
I think perf is one example, but this is also visible to userspace via the
native ptrace interface, and I distinctly remember needing this for some
versions of arm64 strace to work correctly when tracing compat tasks.

So I do think that clearing the upper bits on the return path is the right
approach, but it sounds like we need some more work to handle syscall(-1)
and audit (what exactly is the problem here after these patches have been
applied?).
Will