Date:   Mon, 5 Mar 2018 23:30:13 +0100
From:   Jiri Pirko <jiri@...nulli.us>
To:     Stephen Hemminger <stephen@...workplumber.org>
Cc:     Alexander Duyck <alexander.duyck@...il.com>,
        "Michael S. Tsirkin" <mst@...hat.com>,
        Sridhar Samudrala <sridhar.samudrala@...el.com>,
        David Miller <davem@...emloft.net>,
        Netdev <netdev@...r.kernel.org>, virtio-dev@...ts.oasis-open.org,
        "Brandeburg, Jesse" <jesse.brandeburg@...el.com>,
        "Duyck, Alexander H" <alexander.h.duyck@...el.com>,
        Jakub Kicinski <kubakici@...pl>
Subject: Re: [PATCH v4 2/2] virtio_net: Extend virtio to use VF datapath when
 available

Mon, Mar 05, 2018 at 05:11:32PM CET, stephen@...workplumber.org wrote:
>On Mon, 5 Mar 2018 10:21:18 +0100
>Jiri Pirko <jiri@...nulli.us> wrote:
>
>> Sun, Mar 04, 2018 at 10:58:34PM CET, alexander.duyck@...il.com wrote:
>> >On Sun, Mar 4, 2018 at 10:50 AM, Jiri Pirko <jiri@...nulli.us> wrote:  
>> >> Sun, Mar 04, 2018 at 07:24:12PM CET, alexander.duyck@...il.com wrote:  
>> >>>On Sat, Mar 3, 2018 at 11:13 PM, Jiri Pirko <jiri@...nulli.us> wrote:  
>> 
>> [...]
>> 
>> >  
>> >>>Currently we only have agreement from Michael on taking this code, as
>> >>>such we are working with virtio only for now. When the time comes that  
>> >>
>> >> If you do duplication of netvsc in-driver bonding in virtio_net, it will
>> >> stay there forever. So what you say is: "We will do it halfway now
>> >> and promise to fix it later". That later will never happen, I'm pretty
>> >> sure. That is why I push for in-driver bonding shared code as a part of
>> >> this patchset.  
>> >
>> >You want this new approach and a copy of netvsc moved into either core
>> >or some module of its own. I say pick an architecture. We are looking
>> >at either 2 netdevs or 3. We are not going to support both because
>> >that will ultimately lead to a terrible user experience and make
>> >things quite confusing.
>> >  
>> >> + if you would be pushing first driver to do this, I would understand.
>> >> But the first driver is already in. You are pushing second. This is the
>> >> time to do the sharing, unification of behaviour. Next time is too late.  
>> >
>> >That is great, if we want to share then let's share. But what you are
>> >essentially telling us is that we need to fork this solution and
>> >maintain two code paths, one for 2 netdevs, and another for 3. At that
>> >point what is the point in merging them together?  
>> 
>> Of course, I vote for the same behaviour for netvsc and virtio_net. That
>> is my point from the very beginning.
>> 
>> Stephen, what do you think? Could we please make virtio_net and netvsc
>> behave the same and to use a single code with well-defined checks and
>> restrictions for this feature?
>
>Eventually, yes both could share common code routines. In reality,
>the failover stuff is only a very small part of either driver so
>it is not worth stretching to try and cover too much. If you look,
>the failover code is just using routines that already exist for
>use by bonding, teaming, etc.

Yeah, my concern was also about the code that processes the netdev
notifications and does the auto-enslave and all related stuff.


>
>There will always be two drivers, the ring buffers and buffering
>are very different between vmbus and virtio. It would help to address
>some of the awkward stuff like queue selection and offload handling
>in a common way.

Agreed.


>
>Don't worry too much about backports. The backport can use the
>old code if necessary.
