Date:   Sun, 24 Mar 2019 12:31:44 +0100
From:   Lukas Wunner <lukas@...ner.de>
To:     Mika Westerberg <mika.westerberg@...ux.intel.com>
Cc:     linux-kernel@...r.kernel.org,
        Michael Jamet <michael.jamet@...el.com>,
        Yehezkel Bernat <YehezkelShB@...il.com>,
        Andreas Noever <andreas.noever@...il.com>,
        "David S . Miller" <davem@...emloft.net>,
        Andy Shevchenko <andriy.shevchenko@...ux.intel.com>,
        netdev@...r.kernel.org
Subject: Re: [PATCH v2 17/28] thunderbolt: Add support for full PCIe daisy
 chains

On Wed, Feb 06, 2019 at 04:17:27PM +0300, Mika Westerberg wrote:
> @@ -63,6 +71,16 @@ static void tb_discover_tunnels(struct tb_switch *sw)
>  	}
>  }
>  
> +static void tb_switch_authorize(struct work_struct *work)
> +{
> +	struct tb_switch *sw = container_of(work, typeof(*sw), work);
> +
> +	mutex_lock(&sw->tb->lock);
> +	if (!sw->is_unplugged)
> +		tb_domain_approve_switch(sw->tb, sw);
> +	mutex_unlock(&sw->tb->lock);
> +}
> +

You're establishing PCI tunnels by having tb_scan_port() schedule
tb_switch_authorize() via a work item, which in turn calls
tb_domain_approve_switch(), which in turn calls tb_approve_switch(),
which in turn calls tb_tunnel_pci().

This seems kind of like a roundabout way of doing things, in particular
since all switches are hardcoded to be automatically authorized.

Why don't you just invoke tb_tunnel_pci() directly from tb_scan_port()?
And why is the work item needed?  I'm also confused that the work item
has been present in struct tb_switch for 2 years but is put to use only
now.
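The direct-call alternative being suggested could look roughly like the sketch below. All types and signatures here are simplified stand-ins for illustration (the real tb_scan_port() takes a struct tb_port *, and the real tb_tunnel_pci() does path allocation), not the actual driver API:

```c
#include <stdbool.h>

/* Simplified stand-in for struct tb_switch. */
struct tb_switch {
	bool is_unplugged;
	bool tunneled;
};

/* Stand-in for the real tb_tunnel_pci(): just record success. */
static int tb_tunnel_pci(struct tb_switch *sw)
{
	sw->tunneled = true;
	return 0;
}

/*
 * Direct call from the scan path: no work item, no
 * tb_domain_approve_switch()/tb_approve_switch() indirection.
 */
static int tb_scan_port(struct tb_switch *sw)
{
	if (sw->is_unplugged)
		return -1;
	return tb_tunnel_pci(sw);
}
```

Since every switch is automatically authorized anyway, the authorize/approve layering adds no decision point that the scan path could not make inline.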


> -static void tb_activate_pcie_devices(struct tb *tb)
> +static int tb_tunnel_pci(struct tb *tb, struct tb_switch *sw)
>  {
[...]
> +	/*
> +	 * Look up available down port. Since we are chaining, it is
> +	 * typically found right above this switch.
> +	 */
> +	down = NULL;
> +	parent_sw = tb_to_switch(sw->dev.parent);
> +	while (parent_sw) {
> +		down = tb_find_unused_down_port(parent_sw);
> +		if (down)
> +			break;
> +		parent_sw = tb_to_switch(parent_sw->dev.parent);
> +	}

The problem I see here is that there's no guarantee that the switch
on which you're selecting a down port is actually itself connected
with a PCI tunnel.  E.g., allocation of a tunnel to that parent
switch may have failed.  In that case you end up establishing a
tunnel between that parent switch and the newly connected switch
but the tunnel is of no use.

It would seem more logical to me to walk down the chain of newly
connected switches and try to establish a PCI tunnel to each of
them in order.  By deferring tunnel establishment to a work item,
I think the tunnels may be established in an arbitrary order, right?
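The ordered walk suggested above might be sketched as follows. The structures and helpers are hypothetical simplifications (the real driver tracks ports and paths, not booleans); the point is only the ordering invariant: a switch is tunneled only after its parent's tunnel has succeeded, so no useless tunnels are set up behind a failed parent:

```c
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-in: one switch per hop of the daisy chain. */
struct tb_switch {
	struct tb_switch *child;	/* next switch down the chain */
	bool has_down_port;		/* an unused down port is free */
	bool tunneled;			/* PCIe tunnel established */
};

/* Tunnel to sw only if its parent is itself reachable over PCIe. */
static int tb_tunnel_pci(struct tb_switch *parent, struct tb_switch *sw)
{
	if (parent && !parent->tunneled)
		return -1;	/* parent tunnel failed: tunnel would be useless */
	if (parent && !parent->has_down_port)
		return -1;	/* no free down port on the parent */
	sw->tunneled = true;
	return 0;
}

/* Walk the chain of newly connected switches top-down, in order. */
static void tb_tunnel_chain(struct tb_switch *root)
{
	struct tb_switch *parent = NULL;
	struct tb_switch *sw;

	for (sw = root; sw; parent = sw, sw = sw->child)
		tb_tunnel_pci(parent, sw);
}
```

With a deferred work item per switch there is no such ordering guarantee, which is exactly the concern raised here.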

Thanks,

Lukas
