Message-ID: <ZxYkyC00pDzarnVU@smile.fi.intel.com>
Date: Mon, 21 Oct 2024 12:54:16 +0300
From: Andy Shevchenko <andriy.shevchenko@...ux.intel.com>
To: Ilpo Järvinen <ilpo.jarvinen@...ux.intel.com>
Cc: LKML <linux-kernel@...r.kernel.org>,
platform-driver-x86@...r.kernel.org,
Mika Westerberg <mika.westerberg@...ux.intel.com>,
Hans de Goede <hdegoede@...hat.com>, Ferry Toth <fntoth@...il.com>
Subject: Re: [PATCH v2 1/3] platform/x86: intel_scu_ipc: Replace workaround by 32-bit IO
On Mon, Oct 21, 2024 at 12:49:08PM +0300, Ilpo Järvinen wrote:
> On Mon, 21 Oct 2024, Andy Shevchenko wrote:
> > On Mon, Oct 21, 2024 at 12:24:57PM +0300, Ilpo Järvinen wrote:
> > > On Mon, 21 Oct 2024, Andy Shevchenko wrote:
...
> > > > + for (nc = 0, offset = 0; nc < 4; nc++, offset += 4)
> > > > + wbuf[nc] = ipc_data_readl(scu, offset);
> > > > + memcpy(data, wbuf, count);
> > >
> > > So do we actually need to read more than
> > > DIV_ROUND_UP(min(count, 16U), sizeof(u32))? Because that's the approach
> > > used in intel_scu_ipc_dev_command_with_size() which you referred to.
> >
> > I'm not sure I follow. We do IO for the whole (16-byte) buffer, but return
> > only the requested _bytes_ to the user.
>
> So always reading 16 bytes is not part of the old workaround? Because it
> has a "let's read enough" feel.
Ah, now I got it! Yes, we may reduce the reads to just the needed ones.
The idea is that we always have to perform 32-bit reads regardless of
the amount of data we want.
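
Roughly like this (untested, just to illustrate; names as in the quoted
hunk, with the bound you suggested, modulo the usual min() type
strictness):

	/* Read only as many 32-bit words as needed to cover count bytes */
	unsigned int ndw = DIV_ROUND_UP(min(count, 16U), sizeof(u32));
	unsigned int nc, offset;

	for (nc = 0, offset = 0; nc < ndw; nc++, offset += sizeof(u32))
		wbuf[nc] = ipc_data_readl(scu, offset);
	memcpy(data, wbuf, count);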
> > > > }
> > > > mutex_unlock(&ipclock);
> > > > return err;
> > >
> > > FYI (unrelated to this patch), there seems to be some open-coded
> > > FIELD_PREP()s in pwr_reg_rdwr(), some of which is common code between
> > > those if branches too.
> >
> > This code is quite old and full of tricks that have to be tested. So, yes,
> > while it's possible to convert, I would like to do it in small (baby)
> > steps. This series is already quite intrusive from this perspective :-)
>
> Yeah, no pressure, I just noted down what I saw. :-)
Thanks, I will keep this in mind.
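
For the FIELD_PREP() note above, a rough sketch of such a conversion (the
masks below are made up for illustration only, not the actual
pwr_reg_rdwr() layout):

	#include <linux/bitfield.h>

	/* Hypothetical field masks, for illustration only */
	#define IPC_ID_MASK	GENMASK(7, 4)
	#define IPC_REG_MASK	GENMASK(3, 0)

	/* Open-coded form */
	value = (id << 4) | (reg & 0x0f);

	/* Equivalent with FIELD_PREP() */
	value = FIELD_PREP(IPC_ID_MASK, id) | FIELD_PREP(IPC_REG_MASK, reg);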
--
With Best Regards,
Andy Shevchenko