Message-ID: <8aa33911-5e34-4a03-90de-81f42648ab5d@intel.com>
Date: Wed, 12 Jun 2024 12:07:05 +0200
From: Przemek Kitszel <przemyslaw.kitszel@...el.com>
To: Jakub Kicinski <kuba@...nel.org>, Alexander Lobakin
<aleksander.lobakin@...el.com>
CC: <intel-wired-lan@...ts.osuosl.org>, Tony Nguyen
<anthony.l.nguyen@...el.com>, "David S. Miller" <davem@...emloft.net>, "Eric
Dumazet" <edumazet@...gle.com>, Paolo Abeni <pabeni@...hat.com>, Mina Almasry
<almasrymina@...gle.com>, <nex.sw.ncis.osdt.itp.upstreaming@...el.com>,
<netdev@...r.kernel.org>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH iwl-next 01/12] libeth: add cacheline / struct alignment
helpers
On 5/30/24 03:34, Jakub Kicinski wrote:
> On Tue, 28 May 2024 15:48:35 +0200 Alexander Lobakin wrote:
>> diff --git a/scripts/kernel-doc b/scripts/kernel-doc
>> index 95a59ac78f82..d0cf9a2d82de 100755
>> --- a/scripts/kernel-doc
>> +++ b/scripts/kernel-doc
>> @@ -1155,6 +1155,7 @@ sub dump_struct($$) {
>> $members =~ s/\bstruct_group_attr\s*\(([^,]*,){2}/STRUCT_GROUP(/gos;
>> $members =~ s/\bstruct_group_tagged\s*\(([^,]*),([^,]*),/struct $1 $2; STRUCT_GROUP(/gos;
>> $members =~ s/\b__struct_group\s*\(([^,]*,){3}/STRUCT_GROUP(/gos;
>> + $members =~ s/\blibeth_cacheline_group\s*\(([^,]*,)/struct { } $1; STRUCT_GROUP(/gos;
>> $members =~ s/\bSTRUCT_GROUP(\(((?:(?>[^)(]+)|(?1))*)\))[^;]*;/$2/gos;
>>
>> my $args = qr{([^,)]+)};
>
> Having per-driver grouping defines is a no-go.
[1]
> Do you need the defines in the first place?
This patch was a tough one for me too, but I see the idea as promising.
> Are you sure the assert you're adding are not going to explode
> on some weird arch? Honestly, patch 5 feels like a little too
> much for a driver..
>
Definitely some of patch 5 should be added here as doc/example, but it
would be even better to simplify it a bit.
--
I think that "mark this struct as explicitly split into cachelines" is
a really hard C problem in general, especially in the kernel context,
*but* I think this could be simplified for your use case - a split
into exactly 3 (possibly empty) sections: mostly-read, RW, cold?
That would be a generic solution (it would fix [1] above) and also be
easier to use, like:
CACHELINE_STRUCT_GROUP(idpf_q_vector,
        CACHELINE_STRUCT_GROUP_RD(/* read mostly */
                struct idpf_vport *vport;
                u16 num_rxq;
                u16 num_txq;
                u16 num_bufq;
                u16 num_complq;
                struct idpf_rx_queue **rx;
                struct idpf_tx_queue **tx;
                struct idpf_buf_queue **bufq;
                struct idpf_compl_queue **complq;
                struct idpf_intr_reg intr_reg;
        ),
        CACHELINE_STRUCT_GROUP_RW(
                struct napi_struct napi;
                u16 total_events;
                struct dim tx_dim;
                u16 tx_itr_value;
                bool tx_intr_mode;
                u32 tx_itr_idx;
                struct dim rx_dim;
                u16 rx_itr_value;
                bool rx_intr_mode;
                u32 rx_itr_idx;
        ),
        CACHELINE_STRUCT_GROUP_COLD(
                u16 v_idx;
                cpumask_var_t affinity_mask;
        )
);
Note that those three inner macros have distinct, meaningful names not
because that is needed to make this work, but to aid the human reader,
and then checkpatch/kernel-doc. Technically they could all be the same
CACHELINE_GROUP().
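
FWIW, a rough sketch of what I have in mind, assuming it can be built
on top of the existing struct_group_attr() from <linux/stddef.h> and
____cacheline_aligned from <linux/cache.h> (completely untested, all
the names below are invented by me):

/* each section becomes a named, cacheline-aligned sub-group */
#define CACHELINE_STRUCT_GROUP_RD(members...)                          \
        struct_group_attr(read_mostly, ____cacheline_aligned, members)
#define CACHELINE_STRUCT_GROUP_RW(members...)                          \
        struct_group_attr(read_write, ____cacheline_aligned, members)
#define CACHELINE_STRUCT_GROUP_COLD(members...)                        \
        struct_group_attr(cold, ____cacheline_aligned, members)

/* the outer macro then only glues the three sections together */
#define CACHELINE_STRUCT_GROUP(name, rd, rw, cold)                     \
        struct name {                                                  \
                rd;                                                    \
                rw;                                                    \
                cold;                                                  \
        }

/* plus a hypothetical build-time check, re the assert concern above:
 * "this section starts on a cacheline boundary"
 */
#define CACHELINE_STRUCT_GROUP_ASSERT(type, grp)                       \
        static_assert(offsetof(struct type, grp) % SMP_CACHE_BYTES == 0)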
I'm not sure whether (at most) 3 cacheline groups are enough for the
general case, but it would be best to have just one variant of
CACHELINE_STRUCT_GROUP(), perhaps as a vararg.
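E.g. (again just an untested sketch with names invented here) the
groups could be separated by plain semicolons in the user code, so
that any number of them lands in a single vararg:

#define CACHELINE_GROUP(grp, members...)                               \
        struct_group_attr(grp, ____cacheline_aligned, members)

#define CACHELINE_STRUCT_GROUP(name, groups...)                        \
        struct name {                                                  \
                groups                                                 \
        }

/* usage, any number of groups: */
CACHELINE_STRUCT_GROUP(idpf_q_vector,
        CACHELINE_GROUP(read_mostly,
                struct idpf_vport *vport;
                u16 num_rxq;
                /* ... */
        );
        CACHELINE_GROUP(read_write,
                struct napi_struct napi;
                /* ... */
        );
);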