Message-ID: <ZM4CajvI1uNYRNf0@vergenet.net>
Date: Sat, 5 Aug 2023 10:03:54 +0200
From: Simon Horman <horms@...nel.org>
To: Przemek Kitszel <przemyslaw.kitszel@...el.com>
Cc: Simon Horman <horms@...nel.org>, intel-wired-lan@...ts.osuosl.org,
	Tony Nguyen <anthony.l.nguyen@...el.com>, netdev@...r.kernel.org,
	Jacob Keller <jacob.e.keller@...el.com>,
	Jesse Brandeburg <jesse.brandeburg@...el.com>
Subject: Re: [PATCH iwl-next v2] ice: split ice_aq_wait_for_event() func into
 two

On Fri, Aug 04, 2023 at 04:54:48PM +0200, Przemek Kitszel wrote:
> On 8/4/23 16:35, Simon Horman wrote:
> > On Thu, Aug 03, 2023 at 11:13:47AM -0400, Przemek Kitszel wrote:
> > > Mitigate race between registering on wait list and receiving
> > > AQ Response from FW.
> > > 
> > > ice_aq_prep_for_event() should be called before sending an AQ command;
> > > ice_aq_wait_for_event() should be called after sending the AQ command,
> > > to wait for the AQ Response.
> > > 
> > > struct ice_aq_task is exposed to callers, which takes the burden of
> > > memory ownership out of the AQ-wait family of functions.
> > > 
> > > Embed struct ice_rq_event_info event into struct ice_aq_task
> > > (instead of it being a ptr), to remove some more code from the callers.
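
To make sure I'm reading the above correctly, the caller-side pattern would
become something like the following. This is only a sketch based on the
description; the send call and its arguments are placeholders, not taken
from the patch:

	struct ice_aq_task task;

	/* Register on the wait list *before* sending the command, so a
	 * fast AQ Response cannot arrive while nobody is listening.
	 */
	ice_aq_prep_for_event(pf, &task, opcode);

	/* Send the AQ command. */
	err = ice_aq_send_cmd(...);

	/* Wait for the AQ Response; the event is embedded in @task, so
	 * the caller owns the memory and the wait helpers do not need
	 * to allocate or free anything.
	 */
	err = ice_aq_wait_for_event(pf, &task, timeout);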
> 
> see [1] below
> 
> > > 
> > > Additional fix: one of the checks in ice_aq_check_events() was off by one.
> > 
> > Hi Przemek,
> > 
> > This patch seems to be doing three things:
> > 
> > 1. Refactoring code, in order to allow
> > 2. Addressing a race condition
> 
> those two are hard to split; perhaps some shuffling of code prior to the
> actual 2., e.g. [1] above.

Sure, that is a reasonable point.

> > 3. Correcting an off-by-one error
> 
> That's literally a one-line fix, which would then be overwritten/touched
> by the next patch.

True. But it is also a bit hard to find in the current setup.
Anyway, I don't feel particularly strongly about this,
it was more a point for consideration.

> > All good stuff. But all complex, and 1 somewhat buries 2 and 3.
> > I'm wondering if the patch could be broken up into smaller patches
> > to aid both review now and inspection later.
> 
> Overall, I started with more patches locally when developing this, and
> following the "avoid trashing" principle concluded to squash them.
> Still, I agree that another attempt at splitting would be beneficial; I
> will post v3.
> 
> > 
> > The above notwithstanding, the code does seem fine to me.
> > 
> > > Please note that this was found by reading the code;
> > > an actual race has not yet materialized.
> > 
> > Sure. But I do wonder if a Fixes tag might be appropriate anyway.
> 
> For this off-by-one, (3. on your list) sure.
> 
> For the race (2.), I think it's not so good - ice_aq_wait_for_event() was
> introduced to handle FW updates, which take on the order of seconds, so
> the race was theoretical in that scenario. Later we started adding new
> usages of this (general, in principle) waiting "API", with more to come,
> so it is still worth "fixing".

Understood.
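
For anyone reading along later, the window in question as I understand it
(simplified flow in comment form; hypothetical, not the actual driver code):

	/*
	 * With the old single-call API:
	 *
	 *  CPU A (caller)              CPU B (AQ service task)
	 *  ice_aq_send_cmd()
	 *                              AQ Response arrives
	 *                              ice_aq_check_events()
	 *                                -> no waiter registered, dropped
	 *  ice_aq_wait_for_event()
	 *    registers on wait list
	 *    waits until timeout
	 *
	 * A multi-second FW update cannot realistically lose this race,
	 * but faster commands using the same wait "API" could.
	 */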

I think this does make me lean towards 3. being better off as a separate patch.
But it's your call.

> > > Signed-off-by: Przemek Kitszel <przemyslaw.kitszel@...el.com>
> 
> Anyway, let's see what v3 will bring :)

:)
