Message-ID: <ZvwK1PnvREjf_wvK@LQ3V64L9R2>
Date: Tue, 1 Oct 2024 07:44:36 -0700
From: Joe Damato <jdamato@...tly.com>
To: Alexander Lobakin <aleksander.lobakin@...el.com>
Cc: Przemek Kitszel <przemyslaw.kitszel@...el.com>, netdev@...r.kernel.org,
	Tony Nguyen <anthony.l.nguyen@...el.com>,
	"David S. Miller" <davem@...emloft.net>,
	Eric Dumazet <edumazet@...gle.com>,
	Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
	"moderated list:INTEL ETHERNET DRIVERS" <intel-wired-lan@...ts.osuosl.org>,
	open list <linux-kernel@...r.kernel.org>,
	Simon Horman <horms@...nel.org>
Subject: Re: [RFC net-next 1/1] idpf: Don't hard code napi_struct size

On Tue, Oct 01, 2024 at 03:14:07PM +0200, Alexander Lobakin wrote:
> From: Joe Damato <jdamato@...tly.com>
> Date: Mon, 30 Sep 2024 15:17:46 -0700
> 
> > On Mon, Sep 30, 2024 at 03:10:41PM +0200, Przemek Kitszel wrote:
> >> On 9/30/24 14:38, Alexander Lobakin wrote:
> >>> From: Alexander Lobakin <aleksander.lobakin@...el.com>
> >>> Date: Mon, 30 Sep 2024 14:33:45 +0200
> >>>
> >>>> From: Joe Damato <jdamato@...tly.com>
> >>>> Date: Wed, 25 Sep 2024 18:00:17 +0000
> >>
> >>> struct napi_struct doesn't have any such fields and doesn't depend on
> >>> the kernel configuration; that's why it's hardcoded.
> >>> Please don't change that; just adjust the hardcoded values when needed.
> >>
> >> This is the crucial point, and I agree with Olek.
> >>
> >> If you find it more readable/future-proof, feel free to add
> >> comments like /* napi_struct */ next to the "400" part of the
> >> hardcoded values.
> >>
> >> Side note: you could just run this as part of your netdev series,
> >> provided you CC it properly.
> > 
> > I've already sent the official patch because I didn't hear back on
> > this RFC.
> > 
> > Sorry, but I respectfully disagree with you both on this; I don't
> > think it makes sense to have code that will break if fields are
> > added to napi_struct, thereby requiring anyone who works on the core
> > to update this code over and over again.
> > 
> > I understand that the sizeofs are "meaningless" because of your
> > desire to optimize cachelines, but IMHO and, again, respectfully, it
> > seems strange that any adjustments to core should require a change
> > to this code.
> 
> But if you change any core API, let's say rename a field used in several
> drivers, you need to adjust the affected drivers anyway.

Sorry, but that's a totally different argument.

There are obvious cases where touching certain parts of core would
require changes to drivers, yes. I agree that if I change an API or a
struct field name, or remove an enum value, that affects drivers, which
must then be updated.

idpf does not fall into this category, as it relies on the *size* of
the structure, not the field names. Adding a new field wouldn't break
any of your existing code that accesses fields in the struct, since I
haven't removed a field.

Adding a new field may adjust the size. According to your previous
email: idpf cares about the size because it wants the cachelines to
look a certain way in pahole. OK, but why is that the concern of
someone working on core code?

It doesn't make sense to me that if I add a new field, I now need to
look at pahole output for idpf to make sure you will approve of the
cacheline placement.
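
To make the two options concrete, here's a minimal, self-contained
sketch. The struct names, field names, and numbers below are
illustrative stand-ins, not the real napi_struct or idpf layout, and
the arithmetic assumes a 64-bit build:

#include <stddef.h>

/* Toy stand-in for struct napi_struct. */
struct fake_napi {
	unsigned long	state;
	void		*dev;
	int		weight;
	int		defer_hard_irqs;
	unsigned char	pad[40];	/* filler so the size is a round 64 */
};

/* Toy stand-in for a driver queue vector embedding the core struct. */
struct fake_q_vector {
	void		*vport;		/* 8 bytes of driver bookkeeping */
	struct fake_napi napi;		/* embedded core structure */
	unsigned long	flags;		/* first member after the napi */
};

/*
 * Style A -- hardcoded literal, annotated the way Przemek suggests:
 * the 64 has to be bumped by hand whenever the core struct grows.
 */
_Static_assert(offsetof(struct fake_q_vector, flags) ==
	       8 + 64 /* fake_napi */, "layout drifted");

/*
 * Style B -- the direction the RFC takes: derive the value with
 * sizeof(), so a new core field is absorbed without touching the
 * driver.
 */
_Static_assert(offsetof(struct fake_q_vector, flags) ==
	       8 + sizeof(struct fake_napi), "layout drifted");

Whether the extra friction of Style A is a feature (the driver
maintainer gets CCed when the literal needs bumping) or a burden is
exactly the disagreement here.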

> It's common practice that some core changes require touching drivers.
> Moreover, sizeof(struct napi_struct) doesn't change often, so I don't
> see any issue in adjusting one line in idpf by just increasing one
> value there by sizeof(your_new_field).

The problem is: what if everyone starts doing this, relying on the
specific size of some core data structure in their driver code for
cacheline placement?

Do I then need to sift through each driver with pahole and adjust
the cacheline placement of each affected structure because I added a
field to napi_struct?

The burden of optimizing cache line placement should fall on the
driver maintainers, not the person adding the field to napi_struct.
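
Side note: that kind of review doesn't even require pahole; a
throwaway userspace program along these lines, again with made-up
struct names and assuming 64-byte cachelines, prints each member's
offset and the cacheline it starts in:

#include <stdio.h>
#include <stddef.h>

#define CACHELINE	64

struct fake_napi {
	unsigned long	state;
	void		*dev;
	int		weight;
	unsigned char	pad[44];	/* filler, illustrative only */
};

struct fake_q_vector {
	void		*vport;
	struct fake_napi napi;
	unsigned long	flags;
};

/* Print a member's offset and which cacheline it starts in. */
#define SHOW(type, member)						\
	printf("%-6s offset %3zu  cacheline %zu\n", #member,		\
	       offsetof(struct type, member),				\
	       offsetof(struct type, member) / CACHELINE)

int main(void)
{
	SHOW(fake_q_vector, vport);
	SHOW(fake_q_vector, napi);
	SHOW(fake_q_vector, flags);
	printf("total size %zu\n", sizeof(struct fake_q_vector));
	return 0;
}

But either way, someone has to run that check, and it shouldn't be
whoever adds a field to napi_struct.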

It would be different if I deleted or renamed a field. In those cases,
sure, it's my issue to deal with, and I'd update the affected drivers,
just like I would if I removed an enum value.
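
To put that distinction in code, a toy example (hypothetical names,
nothing taken from the kernel): driver code that only reads existing
members keeps compiling when the core struct gains a member, but
breaks the moment a member it uses is renamed or removed.

/* Toy stand-in for a core structure, not the real struct napi_struct. */
struct fake_napi {
	unsigned long	state;
	int		weight;
	int		brand_new_field;	/* the newly added core field */
};

/*
 * Toy driver code: it only touches 'state' and 'weight', so adding
 * 'brand_new_field' above changes nothing here.  Renaming or removing
 * 'weight', on the other hand, would be a build break that the core
 * developer has to go and fix in every affected driver.
 */
int fake_driver_poll(struct fake_napi *napi)
{
	if (!(napi->state & 1UL))
		return 0;

	return napi->weight;
}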

> If you do that, then:
> + you get notified that you may affect the performance of different
>   drivers (napi_struct is usually embedded into perf-critical
>   structures in drivers);

But I don't want to think about idpf; it's not my responsibility at
all to ensure that the cacheline placement in idpf is optimal.

> + I get notified (Cced) that someone's change will affect idpf, so I'll
>   be aware of it and review the change;
> - you need to adjust one line in idpf.

Adjust one line in idpf and then go through another revision if the
maintainers don't like what the cachelines look like in pahole?

And I need to do this for something totally unrelated that idpf
won't even support (because I'm not adding support for it in the
RFC)?

> Is it just me, or do these '+'s easily outweigh that sole '-'?

I disagree even more than I disagreed yesterday.
