Message-ID: <20240905182521.2f9f4c1c@kernel.org>
Date: Thu, 5 Sep 2024 18:25:21 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: Paolo Abeni <pabeni@...hat.com>
Cc: netdev@...r.kernel.org, Jiri Pirko <jiri@...nulli.us>, Madhu Chittim
 <madhu.chittim@...el.com>, Sridhar Samudrala <sridhar.samudrala@...el.com>,
 Simon Horman <horms@...nel.org>, John Fastabend <john.fastabend@...il.com>,
 Sunil Kovvuri Goutham <sgoutham@...vell.com>, Jamal Hadi Salim
 <jhs@...atatu.com>, Donald Hunter <donald.hunter@...il.com>,
 anthony.l.nguyen@...el.com, przemyslaw.kitszel@...el.com,
 intel-wired-lan@...ts.osuosl.org, edumazet@...gle.com
Subject: Re: [PATCH v6 net-next 07/15] net-shapers: implement shaper cleanup
 on queue deletion

On Thu, 5 Sep 2024 20:02:38 +0200 Paolo Abeni wrote:
> > The dev->lock has to be taken here, around those three lines,
> > and then set / group must check QUEUE ids against
> > dev->real_num_tx_queues, no? Otherwise the work
> > net_shaper_set_real_num_tx_queues() does is prone to races?  
> 
> Yes, I think such a race exists, but I'm unsure that taking the lock
> around the above code will be enough.

I think "enough" will be subjective. Right now patch 7 provides no real
guarantee.

> i.e. if the relevant device has 16 channel queues, the set() races with
> a channel reconf on different CPUs:
> 
> CPU 1						CPU 2
> 
> set_channels(8)
> 
> driver_set_channel()
> // actually change the number of queues to
> // 8, dev->real_num_tx_queues is still 16
> // dev->lock is not held yet because the
> // driver still has to call
> // netif_set_real_num_tx_queues()
> 						set(QUEUE_15,...)
> 						// will pass validation
> 						// but queue 15 does not
> 						// exist anymore

That may be true - in my proposal the driver can only expect that, once
netif_set_real_num_tx_queues() returns, the core will not issue rate
limit ops on disabled queues. The driver has to make sure rate limit ops
for the old queues are accepted all the way up to the call to set_real,
and ops for the new queues are accepted immediately after.

Importantly, the core's state is always consistent, given that both the
flushing inside net_shaper_set_real_num_tx_queues() and the proposed
check would be under netdev->lock.
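
To make that ordering concrete, here is a rough driver-side sketch of a
channel reconfig path; the foo_* names are hypothetical and only
illustrate the contract described above:

static int foo_set_channels(struct net_device *dev, unsigned int new_txqs)
{
	int err;

	/* shaper ops for the old queue range must still be accepted here,
	 * even though the HW reconfig has already started
	 */
	err = foo_hw_set_queue_count(dev, new_txqs);
	if (err)
		return err;

	/* once this returns the core stops issuing shaper ops for queues
	 * >= new_txqs and may immediately issue ops for the new queues
	 */
	return netif_set_real_num_tx_queues(dev, new_txqs);
}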

For the driver -- let me flip the question around -- what do you expect
the locking scheme to be in case of a channel count change? Alternatively
we could just expect the driver to take netdev->lock around the
appropriate section of code and we'd do:

void net_shaper_set_real_num_tx_queues(struct net_device *dev, ...)
{
	...
	if (!READ_ONCE(dev->net_shaper_hierarchy))
		return;

	/* the driver is expected to hold netdev->lock across the reconfig */
	lockdep_assert_held(&dev->lock);
	...
}

I had a look at iavf, and there is no relevant locking around the queue
count check at all, so that doesn't help..
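
FWIW, the driver side of that scheme would look roughly like the sketch
below -- assuming dev->lock is the mutex added by this series, and with
foo_hw_set_queue_count() again a made-up helper:

static int foo_reconfig_queues(struct net_device *dev, unsigned int new_txqs)
{
	int err;

	/* holding netdev->lock across the queue count change satisfies the
	 * assert above and keeps set()/group() validation from racing with
	 * the reconfig
	 */
	mutex_lock(&dev->lock);
	err = foo_hw_set_queue_count(dev, new_txqs);
	if (!err)
		err = netif_set_real_num_tx_queues(dev, new_txqs);
	mutex_unlock(&dev->lock);

	return err;
}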

> Acquiring dev->lock around set_channel() will not be enough: some drivers
> change the number of channels, e.g. when enabling XDP.

Indeed, trying to lock before calling the driver would be both a huge
job and destined to fail.

> I think/fear we need to replace the dev->lock with the rtnl lock to 
> solve the race for good.

Maybe :( I think we need *an* answer for:
 - how we expect the driver to protect itself (assuming that the racy
   check in iavf_verify_handle() actually serves some purpose, which
   may not be true);
 - how we ensure consistency of core state (no shapers for queues which
   don't exist, assuming we agree having shapers for queues which
   don't exist is counterproductive).

Reverting to rtnl_lock for everything would be sad; could the scheme of
expecting the driver to take netdev->lock work? It's the model we
effectively settled on in devlink: core->driver callbacks are always
locked by the core, while for driver->core calls the driver should
explicitly take the lock (some wrappers for lock+op+unlock are
provided).
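
i.e. something along these lines -- the helper name is hypothetical,
just to show the shape of the devlink-style model:

/* core->driver callbacks are wrapped like this by the core itself;
 * driver->core entry points either assert the lock or come in a
 * lock+op+unlock wrapper flavour
 */
static int net_shaper_locked_op(struct net_device *dev,
				int (*op)(struct net_device *dev))
{
	int err;

	mutex_lock(&dev->lock);
	err = op(dev);
	mutex_unlock(&dev->lock);

	return err;
}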
