Message-ID: <73a7a01d-bcf4-3309-ccba-3359eb11d0a2@redhat.com>
Date: Thu, 1 Mar 2018 14:41:07 +0800
From: Jason Wang <jasowang@...hat.com>
To: "Michael S. Tsirkin" <mst@...hat.com>
Cc: linux-kernel@...r.kernel.org,
John Fastabend <john.fastabend@...il.com>,
netdev@...r.kernel.org, David Miller <davem@...emloft.net>
Subject: Re: [RFC PATCH v2] ptr_ring: linked list fallback
On 2018-02-28 23:43, Michael S. Tsirkin wrote:
> On Wed, Feb 28, 2018 at 10:20:33PM +0800, Jason Wang wrote:
>>
>> On 2018-02-28 22:01, Michael S. Tsirkin wrote:
>>> On Wed, Feb 28, 2018 at 02:28:21PM +0800, Jason Wang wrote:
>>>> On 2018-02-28 12:09, Michael S. Tsirkin wrote:
>>>>>>> Or we can add plist to a union:
>>>>>>>
>>>>>>>
>>>>>>> struct sk_buff {
>>>>>>> 	union {
>>>>>>> 		struct {
>>>>>>> 			/* These two members must be first. */
>>>>>>> 			struct sk_buff		*next;
>>>>>>> 			struct sk_buff		*prev;
>>>>>>> 			union {
>>>>>>> 				struct net_device	*dev;
>>>>>>> 				/* Some protocols might use this space to store information,
>>>>>>> 				 * while device pointer would be NULL.
>>>>>>> 				 * UDP receive path is one user.
>>>>>>> 				 */
>>>>>>> 				unsigned long		dev_scratch;
>>>>>>> 			};
>>>>>>> 		};
>>>>>>> 		struct rb_node		rbnode; /* used in netem & tcp stack */
>>>>>>> +		struct plist		plist;	/* For use with ptr_ring */
>>>>>>> 	};
>>>>>>>
>>>>>> This looks ok.
>>>>>>
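To make the fallback idea concrete, here is a rough, self-contained sketch.
It is not the RFC code: the wrapper struct and helper names below are
assumptions for illustration only, and it reuses sk_buff_head instead of the
proposed plist member.

#include <linux/ptr_ring.h>
#include <linux/skbuff.h>

/* Hypothetical wrapper, for illustration only; the RFC extends ptr_ring
 * itself rather than wrapping it.
 */
struct ring_with_fallback {
	struct ptr_ring ring;
	struct sk_buff_head overflow;	/* spinlock-protected fallback list */
};

static int example_init(struct ring_with_fallback *r, int size, gfp_t gfp)
{
	skb_queue_head_init(&r->overflow);
	return ptr_ring_init(&r->ring, size, gfp);
}

static int example_produce(struct ring_with_fallback *r, struct sk_buff *skb)
{
	if (!ptr_ring_produce(&r->ring, skb))
		return 0;

	/* Ring full: queue on the fallback list instead of dropping.
	 * This is where overflowing producers start contending on a
	 * single list lock.  A real implementation also has to keep
	 * FIFO order between the ring and the list; that is glossed
	 * over here.
	 */
	skb_queue_tail(&r->overflow, skb);
	return 0;
}
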
>>>>>>>> For XDP, we need to embed plist in struct xdp_buff too,
>>>>>>> Right - that's pretty straightforward, isn't it?
>>>>>> Yes, it's not clear to me that this is really needed for XDP, considering
>>>>>> the lock contention it brings.
>>>>>>
>>>>>> Thanks
>>>>> The contention only happens when the ring overflows into the list, though.
>>>>>
>>>> Right, but there is usually a speed mismatch between producer and
>>>> consumer. With a fast producer, we may hit this contention very
>>>> frequently.
>>>>
>>>> Thanks
>>> This is not true in my experiments: with a ring size of 4k bytes, packet
>>> drops show up in only a single-digit percentage of cases.
>>>
>>> Do you have workloads where rings are full most of the time?
>> E.g. using xdp_redirect to redirect packets from ixgbe to tap: in my test,
>> ixgbe can produce ~8Mpps, but vhost can only consume ~3.5Mpps.
> Then you are better off just using a small ring and dropping
> packets early, right?
Yes, so I believe we won't use this for XDP.
Thanks
>>> One other nice side effect of this patch is that instead of dropping
>>> packets quickly, it slows the producer down to match the consumer's speed.
>> In some cases the producer may not want to be slowed down, e.g. devmap,
>> which can redirect packets to several different interfaces.
>>> IOW, it can go either way in theory; we will need to test and see the effect.
>>>
>> Yes.
>>
>> Thanks
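
For completeness, the matching consumer side of the same illustrative sketch
(again, not the RFC code) drains the ring first and only falls back to the
overflow list once the ring is empty, so entries queued during an overflow
are eventually delivered rather than dropped:

static struct sk_buff *example_consume(struct ring_with_fallback *r)
{
	struct sk_buff *skb;

	/* Fast path: entries that made it into the ring. */
	skb = ptr_ring_consume(&r->ring);
	if (skb)
		return skb;

	/* Slow path: entries queued while the ring was full.
	 * skb_dequeue() returns NULL when the list is empty too.
	 */
	return skb_dequeue(&r->overflow);
}

Whether this behaves as backpressure or just as extra buffering depends on
how the producer reacts to the list growing, which is exactly the drop-early
vs. slow-down trade-off discussed above.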