Message-ID: <20250731203124.GI8494@horms.kernel.org>
Date: Thu, 31 Jul 2025 21:31:24 +0100
From: Simon Horman <horms@...nel.org>
To: Gur Stavi <gur.stavi@...wei.com>
Cc: andrew+netdev@...n.ch, christophe.jaillet@...adoo.fr, corbet@....net,
davem@...emloft.net, edumazet@...gle.com, fuguiming@...artners.com,
gongfan1@...wei.com, guoxin09@...wei.com, helgaas@...nel.org,
jdamato@...tly.com, kuba@...nel.org, lee@...ger.us,
linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
luosifu@...wei.com, meny.yossefi@...wei.com, mpe@...erman.id.au,
netdev@...r.kernel.org, pabeni@...hat.com,
przemyslaw.kitszel@...el.com, shenchenyang1@...ilicon.com,
shijing34@...wei.com, sumang@...vell.com, vadim.fedorenko@...ux.dev,
wulike1@...wei.com, zhoushuai28@...wei.com,
zhuyikai1@...artners.com
Subject: Re: [PATCH net-next v10 1/8] hinic3: Async Event Queue interfaces
On Thu, Jul 31, 2025 at 09:34:20PM +0300, Gur Stavi wrote:
> > On Thu, Jul 31, 2025 at 03:58:39PM +0300, Gur Stavi wrote:
...
> > Thanks, I think I am closer to understanding things now.
> >
> > Let me try and express things in my own words:
> >
> > 1. On the hardware side, things are stored in a way that may be represented
> > as structures with little-endian values. The members of the structures may
> > have different sizes: 8-bit, 16-bit, 32-bit, ...
> >
> > 2. The hardware runs the equivalent of swab32_array() over this data
> > when writing it to (or reading it from) the host. So we get a
> > "byte jumble".
> >
> > 3. In this patch, hinic3_cmdq_buf_swab32() reverses this jumbling
> > by running the equivalent of swab32_array() over this data again.
> >
> > As 3 exactly reverses 2, what is left are structures exactly as in 1.
> >
>
> Yes. Your understanding matches mine.
Great. Sorry for taking a while to get there.
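For anyone following along, here is a standalone sketch of that round trip
(userspace only; swab32_buf() below merely stands in for the kernel's
swab32_array(), and the buffer contents are made up) showing that running
the 32-bit swap twice is the identity, which is why step 3 recovers the
structures from step 1:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

static void swab32_buf(uint32_t *buf, unsigned int words)
{
	while (words--) {
		uint32_t v = *buf;

		*buf++ = ((v & 0x000000ffu) << 24) |
			 ((v & 0x0000ff00u) << 8)  |
			 ((v & 0x00ff0000u) >> 8)  |
			 ((v & 0xff000000u) >> 24);
	}
}

int main(void)
{
	uint8_t orig[8] = { 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08 };
	uint32_t buf[2];

	memcpy(buf, orig, sizeof(buf));
	swab32_buf(buf, 2);	/* step 2: the hardware's "byte jumble" */
	swab32_buf(buf, 2);	/* step 3: the driver's unjumbling */

	printf("%s\n", memcmp(buf, orig, sizeof(orig)) ? "differs" : "matches");
	return 0;
}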
> > If so, I agree this makes sense and I am sorry for missing this before.
> >
> > And if so, is the intention for the cmdq "coherent structs" in the driver
> > to look something like this?
> >
> > struct {
> > u8 a;
> > u8 b;
> > __le16 c;
> > __le32 d;
> > };
> >
> > If so, this seems sensible to me.
> >
> > But I think it would be best to include some code in this patchset
> > that makes use of such structures - sorry if it is there, I couldn't find
> > it just now.
> >
> > And, although there is no intention for the driver to run on big endian
> > systems, the __le* fields should be accessed using cpu_to_le*/le*_to_cpu
> > helpers.
>
> There was a long and somewhat heated debate about this issue.
> https://lore.kernel.org/netdev/20241230192326.384fd21d@kernel.org/
> I agree that having __le in the code is better coding practice.
> But flooding the code with cpu_to_le and le_to_cpu does hurt readability.
> And there are precedents of drivers that avoid it.
>
> However, our dev team (I am mostly an advisor) decided to give it a try anyway.
> I hope they manage to survive it.
Thanks, I appreciate it.
I look forward to reviewing what they come up with.
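
To illustrate what I mean by the helpers, a minimal sketch (the struct and
field names are hypothetical, not taken from the patchset; it just mirrors
the shape of the struct quoted above):

#include <linux/types.h>
#include <asm/byteorder.h>

/* Hypothetical cmdq "coherent struct", wire format is little-endian. */
struct hinic3_cmdq_example {
	u8     a;
	u8     b;
	__le16 c;
	__le32 d;
};

static void example_fill(struct hinic3_cmdq_example *cmd, u16 c, u32 d)
{
	cmd->a = 1;
	cmd->b = 2;
	cmd->c = cpu_to_le16(c);	/* CPU order -> little-endian on store */
	cmd->d = cpu_to_le32(d);
}

static u32 example_read_d(const struct hinic3_cmdq_example *cmd)
{
	return le32_to_cpu(cmd->d);	/* little-endian -> CPU order on load */
}

On little-endian hosts these conversions compile to nothing, but the __le*
annotations let sparse flag any place where a raw value is mixed in.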