Message-ID: <YKJ0egfeNxw2Aoxl@smile.fi.intel.com>
Date: Mon, 17 May 2021 16:49:46 +0300
From: Andy Shevchenko <andy.shevchenko@...il.com>
To: Dan Carpenter <dan.carpenter@...cle.com>
Cc: Colin King <colin.king@...onical.com>,
Shubhrajyoti Datta <shubhrajyoti.datta@...inx.com>,
Srinivas Neeli <srinivas.neeli@...inx.com>,
Michal Simek <michal.simek@...inx.com>,
Linus Walleij <linus.walleij@...aro.org>,
Bartosz Golaszewski <bgolaszewski@...libre.com>,
"open list:GPIO SUBSYSTEM" <linux-gpio@...r.kernel.org>,
linux-arm Mailing List <linux-arm-kernel@...ts.infradead.org>,
kernel-janitors <kernel-janitors@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH][next] gpio: xilinx: Fix potential integer overflow on
shift of a u32 int
On Mon, May 17, 2021 at 04:36:43PM +0300, Dan Carpenter wrote:
> On Mon, May 17, 2021 at 10:07:20AM +0300, Andy Shevchenko wrote:
> > On Fri, May 14, 2021 at 12:26 PM Dan Carpenter <dan.carpenter@...cle.com> wrote:
> > > On Thu, May 13, 2021 at 09:52:27AM +0100, Colin King wrote:
> >
> > ...
> >
> > > > const unsigned long offset = (bit % BITS_PER_LONG) & BIT(5);
> > > >
> > > > map[index] &= ~(0xFFFFFFFFul << offset);
> > > > - map[index] |= v << offset;
> > > > + map[index] |= (unsigned long)v << offset;
> > >
> > > Doing a shift by BIT(5) is super weird.
> >
> > Not the first place in the kernel with such a trick.
> >
> > > It looks like a double shift
> > > bug and should probably trigger a static checker warning. It's like
> > > when people do BIT(BIT(5)).
> > >
> > > It would be more readable to write it as:
> > >
> > > int shift = (bit % BITS_PER_LONG) ? 32 : 0;
> >
> > Usually this code is in a kinda fast path. Have you checked if the
> > compiler generates the same or better code when you are using ternary?
>
> I wrote a little benchmark to see which was faster and they're the same
> as far as I can see.
Thanks for checking.
Besides the fact that on 32-bit the offset should always be 0, assuming the
compiler can prove that...

The test below doesn't take into account the exact trick used for the offset
(i.e. the implicit dependency between BITS_PER_LONG, the size of unsigned long,
and using the 5th bit of the value). I don't know if the compiler can properly
optimize the ternary in this case (though it looks like it should generate the
same code).

That said, I would rather see a diff of the assembly of the exact function
before and after your proposal.
> static inline __attribute__((__gnu_inline__)) unsigned long xgpio_set_value_orig(unsigned long *map, int bit, u32 v)
> {
> int shift = (bit % 64) & ((((1UL))) << (5));
> return v << shift;
> }
>
> static inline __attribute__((__gnu_inline__)) unsigned long xgpio_set_value_new(unsigned long *map, int bit, u32 v)
> {
> int shift = (bit % 64) ? 32 : 0;
> return v << shift;
> }
>
> int main(void)
> {
> int i;
>
> for (i = 0; i < INT_MAX; i++)
> xgpio_set_value_orig(NULL, i, 0);
>
> // for (i = 0; i < INT_MAX; i++)
> // xgpio_set_value_new(NULL, i, 0);
>
> return 0;
> }
>
--
With Best Regards,
Andy Shevchenko