Message-ID: <038697aa-a11c-45ce-a270-258403cc1457@redhat.com>
Date: Thu, 20 Nov 2025 12:26:49 +0100
From: Paolo Abeni <pabeni@...hat.com>
To: Gui-Dong Han <hanguidong02@...il.com>, 3chas3@...il.com,
 horms@...nel.org, kuba@...nel.org
Cc: linux-atm-general@...ts.sourceforge.net, netdev@...r.kernel.org,
 linux-kernel@...r.kernel.org, baijiaju1990@...il.com, stable@...r.kernel.org
Subject: Re: [PATCH REPOST net v2] atm/fore200e: Fix possible data race in
 fore200e_open()

On 11/18/25 4:33 AM, Gui-Dong Han wrote:
> Protect access to fore200e->available_cell_rate with rate_mtx lock to
> prevent potential data race.
> 
> In this case, since the update depends on a prior read, a data race
> could lead to a wrong fore200e.available_cell_rate value.
> 
> The field fore200e.available_cell_rate is normally protected by the
> lock fore200e.rate_mtx. Every other read and write of this field takes
> the lock; the only unlocked accesses are this one and the ones done
> during initialization.
> 
> This potential bug was detected by our experimental static analysis tool,
> which analyzes locking APIs and paired functions to identify data races
> and atomicity violations.
> 
> Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
> Cc: stable@...r.kernel.org
> Signed-off-by: Gui-Dong Han <hanguidong02@...il.com>
> Reviewed-by: Simon Horman <horms@...nel.org>
> ---
> v2:
> * Added a description of the data race hazard in fore200e_open(), as
> suggested by Jakub Kicinski and Simon Horman.

It looks like you missed Jakub's reply on v2:

https://lore.kernel.org/netdev/20250123071201.3d38d8f6@kernel.org/

The above comment is still not sufficient: you should describe
accurately how 2 (or more) CPUs could actually race and cause the
corruption, including the relevant call paths leading to the race.

Thanks,

Paolo

