Message-ID: <Zj4rGRracFyEDZqa@hog>
Date: Fri, 10 May 2024 16:11:37 +0200
From: Sabrina Dubroca <sd@...asysnail.net>
To: Antonio Quartulli <antonio@...nvpn.net>
Cc: netdev@...r.kernel.org, Jakub Kicinski <kuba@...nel.org>,
	Sergey Ryazanov <ryazanov.s.a@...il.com>,
	Paolo Abeni <pabeni@...hat.com>, Eric Dumazet <edumazet@...gle.com>,
	Andrew Lunn <andrew@...n.ch>, Esben Haabendal <esben@...nix.com>
Subject: Re: [PATCH net-next v3 07/24] ovpn: introduce the ovpn_peer object

2024-05-10, 14:34:16 +0200, Antonio Quartulli wrote:
> On 10/05/2024 12:30, Sabrina Dubroca wrote:
> > 2024-05-09, 16:53:42 +0200, Antonio Quartulli wrote:
> > > 
> > > 
> > > On 09/05/2024 16:36, Antonio Quartulli wrote:
> > > > On 09/05/2024 16:17, Sabrina Dubroca wrote:
> > > > > 2024-05-09, 15:44:26 +0200, Antonio Quartulli wrote:
> > > > > > On 09/05/2024 15:04, Sabrina Dubroca wrote:
> > > > > > > > > > +void ovpn_peer_release(struct ovpn_peer *peer)
> > > > > > > > > > +{
> > > > > > > > > > +    call_rcu(&peer->rcu, ovpn_peer_release_rcu);
> > > > > > > > > > +}
> > > > > > > > > > +
> > > > > > > > > > +/**
> > > > > > > > > > + * ovpn_peer_delete_work - work scheduled to release peer in process context
> > > > > > > > > > + * @work: the work object
> > > > > > > > > > + */
> > > > > > > > > > +static void ovpn_peer_delete_work(struct work_struct *work)
> > > > > > > > > > +{
> > > > > > > > > > +    struct ovpn_peer *peer = container_of(work, struct ovpn_peer,
> > > > > > > > > > +                          delete_work);
> > > > > > > > > > +    ovpn_peer_release(peer);
> > > > > > > > > 
> > > > > > > > > Does call_rcu really need to run in process context?
> > > > > > > > 
> > > > > > > > The reason for switching to process context is that we have to invoke
> > > > > > > > ovpn_nl_notify_del_peer (which sends a netlink event to userspace), and
> > > > > > > > the latter requires a reference to the peer.
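
(To restate the constraint: call_rcu() itself can be invoked from any
context, including atomic; it's the RCU callback that runs from softirq
and therefore must not sleep. Roughly, as a sketch built on the names in
the patch:)

	static void ovpn_peer_release_rcu(struct rcu_head *head)
	{
		struct ovpn_peer *peer = container_of(head, struct ovpn_peer,
						      rcu);

		/* softirq (RCU callback) context: no sleeping allowed,
		 * only non-blocking teardown and kfree() belong here
		 */
		kfree(peer);
	}

	/* ...but call_rcu() itself is fine from atomic context */
	call_rcu(&peer->rcu, ovpn_peer_release_rcu);
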
> > > > > > > 
> > > > > > > I'm confused. When you say "requires a reference to the peer", do you
> > > > > > > mean accessing fields of the peer object? I don't see why this
> > > > > > > requires ovpn_nl_notify_del_peer to run from process context.
> > > > > > 
> > > > > > ovpn_nl_notify_del_peer sends a netlink message to userspace, and I was
> > > > > > under the impression that it may block/sleep, no?
> > > > > > For this reason I assumed it must be executed in process context.
> > > > > 
> > > > > With s/GFP_KERNEL/GFP_ATOMIC/, it should be ok to run from whatever
> > > > > context. Firing up a workqueue just to send a 100B netlink message
> > > > > seems a bit overkill.
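
(Concretely, something along these lines. This is only a sketch: the
genl family, command, attribute and multicast-group names here are made
up for illustration, as are peer->id and peer->ovpn->dev.)

	/* notify userspace that a peer was deleted; safe from atomic
	 * context as long as every allocation uses GFP_ATOMIC
	 */
	static int ovpn_nl_notify_del_peer(struct ovpn_peer *peer)
	{
		struct sk_buff *msg;
		void *hdr;

		msg = genlmsg_new(NLMSG_DEFAULT_SIZE, GFP_ATOMIC);
		if (!msg)
			return -ENOMEM;

		hdr = genlmsg_put(msg, 0, 0, &ovpn_nl_family, 0,
				  OVPN_CMD_DEL_PEER);
		if (!hdr) {
			nlmsg_free(msg);
			return -EMSGSIZE;
		}

		if (nla_put_u32(msg, OVPN_ATTR_PEER_ID, peer->id)) {
			genlmsg_cancel(msg, hdr);
			nlmsg_free(msg);
			return -EMSGSIZE;
		}

		genlmsg_end(msg, hdr);
		return genlmsg_multicast_netns(&ovpn_nl_family,
					       dev_net(peer->ovpn->dev), msg,
					       0, OVPN_MCGRP_PEERS,
					       GFP_ATOMIC);
	}
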
> > > > 
> > > > Oh ok, I thought the send could be a problem too.
> > > > 
> > > > Will test with GFP_ATOMIC then. Thanks for the hint.
> > > 
> > > I am back and unfortunately we also have (added by a later patch):
> > > 
> > >   294         napi_disable(&peer->napi);
> > >   295         netif_napi_del(&peer->napi);
> > 
> > Do you need the napi instance to be per peer, or can it be per
> > netdevice? If it's per netdevice you can clean it up in
> > ->priv_destructor.
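
(I.e., roughly the following. A sketch only, assuming a single
device-level NAPI instance in the netdev priv; the names are
illustrative, not from the patch:)

	static void ovpn_destruct(struct net_device *dev)
	{
		struct ovpn_struct *ovpn = netdev_priv(dev);

		/* ->priv_destructor runs from free_netdev() in process
		 * context, so the sleeping NAPI teardown is safe here
		 */
		napi_disable(&ovpn->napi);
		netif_napi_del(&ovpn->napi);
	}

	/* in the device setup path: */
	netif_napi_add(dev, &ovpn->napi, ovpn_napi_poll);
	napi_enable(&ovpn->napi);
	dev->priv_destructor = ovpn_destruct;
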
> 
> In an ideal world, at some point I could leverage multiple CPUs to handle
> traffic from multiple peers; therefore every queue in the driver is per
> peer, NAPI included.

But for that they could also be per-CPU at the device level, instead
of being created at the peer level.
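
(Sketch of what I mean, purely illustrative: one NAPI per RX queue hung
off the device instead of off each peer, with as many queues as CPUs,
e.g. num_possible_cpus(). All names are made up:)

	struct ovpn_napi_ctx {
		struct napi_struct napi;
		/* per-queue rx state would live here */
	};

	static int ovpn_napi_init(struct net_device *dev, unsigned int nqueues)
	{
		struct ovpn_struct *ovpn = netdev_priv(dev);
		unsigned int i;

		ovpn->napi_ctx = kcalloc(nqueues, sizeof(*ovpn->napi_ctx),
					 GFP_KERNEL);
		if (!ovpn->napi_ctx)
			return -ENOMEM;

		for (i = 0; i < nqueues; i++)
			netif_napi_add(dev, &ovpn->napi_ctx[i].napi,
				       ovpn_napi_poll);
		return 0;
	}
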

> Does it make sense?
> 
> Now, whether this is truly feasible from the core perspective is something I
> don't know yet.

If those packets arrive on different CPUs/queues, probably.

> For sure, for the time being I could shrink everything to one queue.
> There is only one workqueue encrypting/decrypting packets right now,
> therefore multiple NAPI queues are not truly useful at this time.
> 
> > 
> > > that need to be executed in process context.
> > > So it seems I must fire up the worker anyway..
> > 
> > I hope we can simplify all that logic. There's some complexity
> > that's unavoidable in this kind of driver, but maybe not as much as
> > you've got here.
> > 
> 
> I am all for simplification, and rest assured that the current version is
> already much simpler than what it originally was :-)

Ouch :)

-- 
Sabrina

