Message-ID: <20190325095733.GC3622@lahna.fi.intel.com>
Date: Mon, 25 Mar 2019 11:57:33 +0200
From: Mika Westerberg <mika.westerberg@...ux.intel.com>
To: Lukas Wunner <lukas@...ner.de>
Cc: linux-kernel@...r.kernel.org,
Michael Jamet <michael.jamet@...el.com>,
Yehezkel Bernat <YehezkelShB@...il.com>,
Andreas Noever <andreas.noever@...il.com>,
"David S . Miller" <davem@...emloft.net>,
Andy Shevchenko <andriy.shevchenko@...ux.intel.com>,
netdev@...r.kernel.org
Subject: Re: [PATCH v2 17/28] thunderbolt: Add support for full PCIe daisy
chains
On Sun, Mar 24, 2019 at 12:31:44PM +0100, Lukas Wunner wrote:
> On Wed, Feb 06, 2019 at 04:17:27PM +0300, Mika Westerberg wrote:
> > @@ -63,6 +71,16 @@ static void tb_discover_tunnels(struct tb_switch *sw)
> > }
> > }
> >
> > +static void tb_switch_authorize(struct work_struct *work)
> > +{
> > + struct tb_switch *sw = container_of(work, typeof(*sw), work);
> > +
> > + mutex_lock(&sw->tb->lock);
> > + if (!sw->is_unplugged)
> > + tb_domain_approve_switch(sw->tb, sw);
> > + mutex_unlock(&sw->tb->lock);
> > +}
> > +
>
> You're establishing PCI tunnels by having tb_scan_port() schedule
> tb_switch_authorize() via a work item, which in turn calls
> tb_domain_approve_switch(), which in turn calls tb_approve_switch(),
> which in turn calls tb_tunnel_pci().
>
> This seems kind of like a roundabout way of doing things, in particular
> since all switches are hardcoded to be automatically authorized.
>
> Why don't you just invoke tb_tunnel_pci() directly from tb_scan_port()?
Indeed, it does not make much sense to schedule a separate work item
just for this.
I will remove it in v3. However, instead of always creating PCIe
tunnels, I'm going to propose that we implement the "user" security
level in the software connection manager by default. While DMA
protection relies on the IOMMU, doing this allows the user to turn off
PCIe tunneling completely (or to implement their own whitelisting of
known good devices, for example).
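Roughly what I have in mind (just a sketch; the exact check against
the security level is illustrative, not the final code):

	static void tb_scan_port(struct tb_port *port)
	{
		struct tb_switch *sw = ...;	/* newly found switch */

		/*
		 * With TB_SECURITY_NONE we can tunnel PCIe right away.
		 * With TB_SECURITY_USER the tunnel is created only
		 * later, when userspace writes the authorized attribute
		 * of the switch.
		 */
		if (sw->tb->security_level == TB_SECURITY_NONE)
			tb_tunnel_pci(sw->tb, sw);
	}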
> And why is the work item needed? I'm also confused that the work item
> has been present in struct tb_switch for 2 years but is put to use only
> now.
Yes, you are right - the work item here is not needed. It is actually
a remnant of the original patch series. I'll cook up a patch removing it.
> > -static void tb_activate_pcie_devices(struct tb *tb)
> > +static int tb_tunnel_pci(struct tb *tb, struct tb_switch *sw)
> > {
> [...]
> > + /*
> > + * Look up available down port. Since we are chaining, it is
> > + * typically found right above this switch.
> > + */
> > + down = NULL;
> > + parent_sw = tb_to_switch(sw->dev.parent);
> > + while (parent_sw) {
> > + down = tb_find_unused_down_port(parent_sw);
> > + if (down)
> > + break;
> > + parent_sw = tb_to_switch(parent_sw->dev.parent);
> > + }
>
> The problem I see here is that there's no guarantee that the switch
> on which you're selecting a down port is actually itself connected
> with a PCI tunnel. E.g., allocation of a tunnel to that parent
> switch may have failed. In that case you end up establishing a
> tunnel between that parent switch and the newly connected switch
> but the tunnel is of no use.
Since this goes through tb_domain_approve_switch(), PCIe tunnel
creation is not allowed unless the parent switch has been authorized
first.
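The relevant check in tb_domain_approve_switch() looks roughly like
this (trimmed):

	int tb_domain_approve_switch(struct tb *tb, struct tb_switch *sw)
	{
		struct tb_switch *parent_sw;

		if (!tb->cm_ops->approve_switch)
			return -EPERM;

		/* The parent switch must be authorized before this one */
		parent_sw = tb_to_switch(sw->dev.parent);
		if (!parent_sw || !parent_sw->authorized)
			return -EINVAL;

		return tb->cm_ops->approve_switch(tb, sw);
	}

so a tunnel to a switch whose parent failed authorization is never
created in the first place.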
> It would seem more logical to me to walk down the chain of newly
> connected switches and try to establish a PCI tunnel to each of
> them in order. By deferring tunnel establishment to a work item,
> I think the tunnels may be established in an arbitrary order, right?
The workqueue is ordered, so AFAIK the items run in the order in which
the hotplug events happened. In any case, I'm going to remove the work
item, so this should not be an issue.
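For reference, the ordering comes from the domain workqueue being
allocated with alloc_ordered_workqueue(), which runs one item at a
time in queueing order - something like this in tb_domain_alloc():

	/* Ordered: work items execute one at a time, in queueing order */
	tb->wq = alloc_ordered_workqueue("thunderbolt%d", 0, tb->index);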