Message-ID: <87618083B2453E4A8714035B62D6799250275B81@FMSMSX105.amr.corp.intel.com>
Date:	Wed, 25 Mar 2015 18:35:41 +0000
From:	"Tantilov, Emil S" <emil.s.tantilov@...el.com>
To:	Vlad Zolotarov <vladz@...udius-systems.com>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>
CC:	"Kirsher, Jeffrey T" <jeffrey.t.kirsher@...el.com>,
	"avi@...udius-systems.com" <avi@...udius-systems.com>,
	"gleb@...udius-systems.com" <gleb@...udius-systems.com>,
	"Skidmore, Donald C" <donald.c.skidmore@...el.com>
Subject: RE: [PATCH net-next v6 4/7] ixgbevf: Add a RETA query code

>-----Original Message-----
>From: Vlad Zolotarov [mailto:vladz@...udius-systems.com] 
>Sent: Wednesday, March 25, 2015 2:28 AM
>Subject: Re: [PATCH net-next v6 4/7] ixgbevf: Add a RETA query code

<snip>

>> Have you tested what happens if you run:
>>
>> while true
>> do
>> 	ethtool --show-rxfh-indir ethX
>> done
>>
>> in the background while passing traffic through the VF?
>I understand your concerns, but let's start by clarifying a few things.
>First, the VF driver is by definition not trusted. If it (or its user)
>decides to do anything malicious (like you proposed above) that would
>eventually hurt (only this) VF's performance - nobody should care.
>The right question here is: "How may the above use case hurt the
>corresponding PF's or other VFs' performance?" And since the mailbox
>operation involves quite a few MMIO writes and reads, this may slow
>the PF down quite a bit, which may be a problem that should be taken
>care of. However, it wasn't my patch series that introduced it. The
>same problem would arise if the guest changed the VF's MAC address in
>a tight loop like the one above. Namely, any VF slow-path operation
>that eventually causes a VF-PF channel transaction may be used to
>mount an attack on the PF.

There are operations that can be disruptive to the VF; I am not arguing that.
The issue introduced by these patches is mostly that the mailbox can now be
hit much more often for what is mostly static information.

With ethtool specifically, we have already had to deal with an issue caused by net-snmp:
https://sourceforge.net/p/e1000/mailman/message/32188362/

where net-snmp was polling too aggressively, even though most of the information it collected was static.

>>
>> Perhaps storing the RSS key and the table is a better option than having to invoke the mailbox on every read.

>I don't think this could work, if I understand your proposal correctly.
>The only way to cache the result that would decrease the number of mbox
>transactions would be to cache it in the VF. But how could I invalidate
>this cache if the table contents have been changed by the PF? I think
>the main source of confusion here is that you assume the PF driver is
>the Linux ixgbe driver, which doesn't support changing the indirection
>table at the moment. As I have explained above, this should not be
>assumed.

You keep mentioning other drivers - which other driver do you mean?
All the PF drivers that enable SRIOV are maintained and supported by Intel.

For HW older than X550 we can simply not allow the RSS hash to be modified while the driver is loaded in SRIOV mode.
That way the RSS info can be read once when the driver is loaded. For X550 this can all be done in the VF, so you can avoid calling the mailbox altogether.
I understand this is a bit limiting, but it is due to a HW limitation anyway (VFs do not have their own RSS config).
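
As a rough sketch of that idea (plain C, with made-up names and helpers -
this is not the actual ixgbevf code): query the table once over the mailbox
when the driver loads (or read it straight from the VF registers on X550)
and let the ethtool path serve the cached copy:

	/* Sketch only: hypothetical names, not ixgbevf code. */
	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>

	#define RETA_SIZE 64	/* VF RETA entries (illustrative) */
	#define KEY_SIZE  40	/* RSS hash key bytes (illustrative) */

	struct vf_rss_cache {
		uint8_t reta[RETA_SIZE];
		uint8_t key[KEY_SIZE];
		int valid;
	};

	/* Stand-in for one PF<->VF mailbox transaction (MMIO + PF work). */
	static int mbx_query_rss(uint8_t *reta, uint8_t *key)
	{
		for (int i = 0; i < RETA_SIZE; i++)
			reta[i] = i % 4;	/* pretend 4 RSS queues */
		memset(key, 0x6d, KEY_SIZE);
		return 0;
	}

	/* Done once at probe/open.  Only safe on pre-X550 if the PF refuses
	 * to change the RSS config while SRIOV is enabled, as proposed. */
	static int vf_rss_cache_init(struct vf_rss_cache *c)
	{
		int err = mbx_query_rss(c->reta, c->key);
		c->valid = !err;
		return err;
	}

	/* What the ethtool get_rxfh path would return - no mailbox per query. */
	static int vf_get_rxfh(const struct vf_rss_cache *c,
			       uint8_t *reta, uint8_t *key)
	{
		if (!c->valid)
			return -1;
		memcpy(reta, c->reta, RETA_SIZE);
		memcpy(key, c->key, KEY_SIZE);
		return 0;
	}

	int main(void)
	{
		struct vf_rss_cache cache = { 0 };
		uint8_t reta[RETA_SIZE], key[KEY_SIZE];

		vf_rss_cache_init(&cache);		/* one mailbox round-trip */
		for (int i = 0; i < 1000; i++)		/* the "while true; ethtool" loop */
			vf_get_rxfh(&cache, reta, key);	/* served from the cache */
		printf("reta[0..3] = %d %d %d %d\n",
		       reta[0], reta[1], reta[2], reta[3]);
		return 0;
	}

The cache only stays correct because the PF side refuses to touch the RSS
config while SRIOV is active; otherwise you are back to the invalidation
problem you describe.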

Thanks,
Emil


