Message-ID: <d981664a47150acbe5eab9c666c9ce6676848037.camel@kernel.crashing.org>
Date: Thu, 17 May 2018 23:44:15 +1000
From: Benjamin Herrenschmidt <benh@...nel.crashing.org>
To: Christophe LEROY <christophe.leroy@....fr>,
Mathieu Malaterre <malat@...ian.org>
Cc: Paul Mackerras <paulus@...ba.org>,
Michael Ellerman <mpe@...erman.id.au>,
linuxppc-dev <linuxppc-dev@...ts.ozlabs.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2 5/5] powerpc/lib: inline memcmp() for small constant
sizes
On Thu, 2018-05-17 at 15:21 +0200, Christophe LEROY wrote:
> > > +static inline int __memcmp8(const void *p, const void *q, int off)
> > > +{
> > > + s64 tmp = be64_to_cpu(*(u64*)(p + off)) - be64_to_cpu(*(u64*)(q + off));
> >
> > I always assumed a 64-bit unaligned access would trigger an exception.
> > Is this correct?
>
> As far as I know, an alignment exception will only occur when the operand
> of lmw, stmw, lwarx, or stwcx. is not aligned.
>
> Maybe that's different for PPC64?
It's very implementation-specific.
Recent ppc64 chips generally don't trap (unless it's cache-inhibited
space). Earlier variants might trap on page boundaries or segment
boundaries. Some embedded parts are less forgiving... some earlier
POWER chips will trap on unaligned accesses in LE mode...
I wouldn't worry too much about it though. I think if 8xx shows an
improvement then it's probably fine everywhere else :-)
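For reference, here is a minimal userspace sketch of the pattern being
discussed in the quoted hunk: compare eight bytes at a time by loading both
operands as big-endian 64-bit values, so the ordering of the result matches
byte-wise memcmp(). It is not the patch itself: be64_to_cpu() is stood in
for by the __builtin_bswap64() builtin (assuming a little-endian host), the
loads go through memcpy() so they stay well-defined even when the pointers
are unaligned, and the result is normalized to -1/0/1 instead of returning
the raw difference.

/*
 * Sketch only: compare 8 bytes at offset 'off' by interpreting both
 * sides as big-endian 64-bit integers, which preserves memcmp()'s
 * lexicographic ordering.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static inline int memcmp8(const void *p, const void *q, int off)
{
	uint64_t a, b;

	/* memcpy() keeps the loads legal even if p/q are unaligned */
	memcpy(&a, (const char *)p + off, 8);
	memcpy(&b, (const char *)q + off, 8);

	a = __builtin_bswap64(a);	/* stand-in for be64_to_cpu() */
	b = __builtin_bswap64(b);

	return a < b ? -1 : (a > b ? 1 : 0);
}

int main(void)
{
	const char x[] = "abcdefgh12345678";
	const char y[] = "abcdefgh12345679";

	/* both helpers agree on the sign of the comparison */
	printf("%d %d\n", memcmp8(x, y, 8) < 0, memcmp(x, y, 16) < 0);
	return 0;
}

On recent ppc64 parts the direct unaligned loads the kernel patch does are
handled in hardware, which is the point above; the memcpy() form here is
only to keep the sketch portable.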
Cheers,
Ben.