Date:	Thu, 26 Mar 2015 11:35:05 +0200
From:	Vlad Zolotarov <vladz@...udius-systems.com>
To:	Alexander Duyck <alexander.h.duyck@...hat.com>,
	"Tantilov, Emil S" <emil.s.tantilov@...el.com>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>
CC:	"Kirsher, Jeffrey T" <jeffrey.t.kirsher@...el.com>,
	"avi@...udius-systems.com" <avi@...udius-systems.com>,
	"gleb@...udius-systems.com" <gleb@...udius-systems.com>,
	"Skidmore, Donald C" <donald.c.skidmore@...el.com>
Subject: Re: [PATCH net-next v6 4/7] ixgbevf: Add a RETA query code



On 03/25/15 23:04, Alexander Duyck wrote:
>
> On 03/25/2015 01:17 PM, Vlad Zolotarov wrote:
>>
>>
>> On 03/25/15 20:35, Tantilov, Emil S wrote:
>>>> -----Original Message-----
>>>> From: Vlad Zolotarov [mailto:vladz@...udius-systems.com]
>>>> Sent: Wednesday, March 25, 2015 2:28 AM
>>>> Subject: Re: [PATCH net-next v6 4/7] ixgbevf: Add a RETA query code
>>> <snip>
>>>
>>>>> Have you tested what happens if you run:
>>>>>
>>>>> while true
>>>>> do
>>>>>     ethtool --show-rxfh-indir ethX
>>>>> done
>>>>>
>>>>> in the background while passing traffic through the VF?
>>>> I understand your concerns, but let's start by clarifying a few
>>>> things.
>>>> First, the VF driver is by definition not trusted. If it (or its
>>>> user) decides to do something malicious (like what you proposed
>>>> above) that would eventually hurt only this VF's performance,
>>>> nobody should care. However, the right question here would be:
>>>> "How may the above use case hurt the corresponding PF's or other
>>>> VFs' performance?" And since a mailbox operation involves quite a
>>>> few MMIO writes and reads, this may slow the PF down quite a bit,
>>>> and that may be a problem that should be taken care of. However,
>>>> it wasn't my patch series that introduced it. The same problem
>>>> would arise if a guest changed the VF's MAC address in a tight
>>>> loop like the one above. Namely, any VF slow-path operation that
>>>> eventually causes a VF-PF channel transaction may be used to mount
>>>> an attack on the PF.
>>> There are operations that can be disruptive to the VF, I am not
>>> arguing that; the issue introduced by these patches has mostly to
>>> do with the fact that now we can hit the mailbox more often for
>>> what is mostly static information.
>>>
>>> Especially with ethtool we already had to deal with an issue caused
>>> by net-snmp:
>>> https://sourceforge.net/p/e1000/mailman/message/32188362/
>>>
>>> where net-snmp was being too aggressive when collecting
>>> information, even though most of it was static.
>>
>> Emil, I don't really understand what you are trying to protect
>> against here. If a user wants to shoot him/herself in the foot,
>> he/she would still be able to do it with the other mailbox-involving
>> operations, like a MAC change. So what's the point of adding useless
>> lines?
>>
>>>
>>>>> Perhaps storing the RSS key and the table is a better option than
>>>>> having to invoke the mailbox on every read.
>>>> I don't think this could work, if I understand your proposal
>>>> correctly. The only way to cache the result that would decrease
>>>> the number of mbox transactions would be to cache it in the VF.
>>>> But how could I invalidate this cache if the table content has
>>>> been changed by the PF? I think the main source of confusion here
>>>> is that you assume the PF driver is the Linux ixgbe driver, which
>>>> doesn't support an indirection table change at the moment. As I
>>>> have explained above, this should not be assumed.
>>> You keep mentioning other drivers - what other driver do you mean?
>>> All the PF drivers that enable SRIOV are maintained and supported by 
>>> Intel.
>>>
>>> For HW older than X550 we can simply not allow the RSS hash to be
>>> modified if the driver is loaded in SRIOV mode.
>>> This way the RSS info can be read once, when the driver is loaded.
>>> For X550 this can all be done in the VF, so you can avoid calling
>>> the mailbox altogether.
>>> I understand this is a bit limiting, but this is due to a HW
>>> limitation anyway (VFs do not have their own RSS config).
>>
>> Let me remind you that the Linux, FreeBSD, Xen and DPDK PF drivers
>> are all open source, so you can't actually go and "not allow"
>> things. ;) And although Intel developers contribute most of the
>> code, there are and will be other contributors too, so I doubt the
>> approach proposed above fits the open-source spirit well. ;)
>
> Actually these drivers already support multiple OSes just fine. The
> part where I think you are confused is that you assume they all use
> the same mailbox API, which they likely don't.  I would suggest
> taking a look at ixgbe_pfvf_api_rev in mbx.h of the VF driver.
> Different OSes have different things that can be supported, so for
> example ixgbe_mbox_api_20 is reserved for a Solaris-based PF/VF
> combination.  I would suspect that FreeBSD will likely have to
> conform to the existing APIs, or report that it only supports a
> different version of the mailbox API.

I didn't assume a common API at all. My point was that you can't just
go and "forbid" standard things like changing the indirection table in
an open source project. Therefore you shouldn't assume that, and thus
caching the indirection table doesn't seem like a future-proof solution
to me.
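
(For reference, here is roughly what that enum in mbx.h looks like. I'm
quoting it from memory and abridging, so treat the exact member list
and comments as approximate rather than authoritative:)

enum ixgbe_pfvf_api_rev {
	ixgbe_mbox_api_10,	/* API version 1.0, linux/freebsd VF driver */
	ixgbe_mbox_api_20,	/* API version 2.0, solaris PF/VF combination */
	ixgbe_mbox_api_11,	/* API version 1.1, linux/freebsd VF driver */
	/* newer revisions get appended here as they are defined */
	ixgbe_mbox_api_unknown,	/* indicates that API version is not known */
};

As I understand it, the VF requests the newest revision it knows and
falls back when the PF NACKs it, which is how PF drivers with different
capabilities coexist with the same VF driver.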


>
>>
>> The user should not actually query the indirection table and hash
>> key too often. And if he/she does, it should be his/her problem.
>> However, if, like with ixgbevf_set_num_queues(), you insist on your
>> way of doing this (on caching the indirection table and hash key),
>> then please let me know and I will add it. Because, frankly, I care
>> about the PF part of this series much more than about the VF
>> part... ;)
>
> I would say you don't need to cache it, but for 82599 and x540 there
> isn't any need to store more than 3 bits per entry, i.e. 384 bits or
> 12 DWORDs for the entire RETA of the VF, since the hardware can
> support at most 8 queues w/ SR-IOV.  Then you only need one message
> instead of 3, which will remove quite a bit of the complication with
> all of this.
>
> Also it might make more sense to start working on displaying this on 
> the PF before you start trying to do this on the VF.  As far as I know 
> ixgbe still doesn't have this functionality and it would make much 
> more sense to enable that first on ixgbe before you start trying to 
> find a way to feed the data to the VF.

Let's agree on the next steps:

 1. I'll reduce the series scope to 82599 and x540 devices.
 2. I'll add the same ethtool operations I've already added to the VF
    to the PF devices as well.
 3. I'll implement the compression that Alex so desperately wants... ;)
    (a rough sketch follows this list).
 4. I won't implement caching of the indirection table and RSS hash key
    query results at the VF level.
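
For item 3, here is a rough sketch of the packing I have in mind,
following Alex's numbers above (128 RETA entries, 3 bits per entry, 12
DWORDs, so the whole table fits into a single mailbox message). This is
illustration only, not the actual patch, and the helper names are made
up:

#include <linux/types.h>
#include <linux/string.h>

#define VF_RETA_ENTRIES	128	/* indirection table size on 82599/x540 */
#define VF_RETA_BITS	3	/* at most 8 RSS queues with SR-IOV */
#define VF_RETA_DWORDS	12	/* 128 * 3 = 384 bits = 12 32-bit words */

/* PF side: pack the 128-entry RETA into 12 DWORDs so one PF->VF
 * mailbox message carries the entire table instead of three.
 */
static void vf_reta_pack(const u8 *reta, u32 *msg)
{
	u64 chunk;
	int i, bit;

	memset(msg, 0, VF_RETA_DWORDS * sizeof(u32));
	for (i = 0; i < VF_RETA_ENTRIES; i++) {
		bit = i * VF_RETA_BITS;	/* absolute bit offset of entry i */
		chunk = (u64)(reta[i] & 0x7) << (bit % 32);
		msg[bit / 32] |= (u32)chunk;
		if ((bit % 32) + VF_RETA_BITS > 32)
			/* entry straddles a DWORD boundary */
			msg[bit / 32 + 1] |= (u32)(chunk >> 32);
	}
}

/* VF side: the inverse, run on the reply buffer. */
static void vf_reta_unpack(const u32 *msg, u8 *reta)
{
	u64 chunk;
	int i, bit;

	for (i = 0; i < VF_RETA_ENTRIES; i++) {
		bit = i * VF_RETA_BITS;
		chunk = msg[bit / 32];
		if ((bit % 32) + VF_RETA_BITS > 32)
			chunk |= (u64)msg[bit / 32 + 1] << 32;
		reta[i] = (chunk >> (bit % 32)) & 0x7;
	}
}

The exact bit layout can of course change during review; the point is
simply that a single message carries the whole table.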


Please confirm that you all (Alex, Emil, Jeff) agree to that.
Thanks,
vlad



>
> - Alex
