Message-ID: <CAHmME9qS-_H7Z5Gjw7SbZS0fO84vzpx4ZNHu0Ay=2krZpJQy3A@mail.gmail.com>
Date: Fri, 19 Mar 2021 13:03:06 -0600
From: "Jason A. Donenfeld" <Jason@...c4.com>
To: Yunsheng Lin <linyunsheng@...wei.com>
Cc: Toke Høiland-Jørgensen <toke@...hat.com>,
Cong Wang <xiyou.wangcong@...il.com>,
Jakub Kicinski <kuba@...nel.org>,
David Miller <davem@...emloft.net>,
Vladimir Oltean <olteanv@...il.com>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Andrii Nakryiko <andriin@...com>,
Eric Dumazet <edumazet@...gle.com>,
Wei Wang <weiwan@...gle.com>,
"Cong Wang ." <cong.wang@...edance.com>,
Taehee Yoo <ap420073@...il.com>,
Linux Kernel Network Developers <netdev@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>, linuxarm@...neuler.org,
Marc Kleine-Budde <mkl@...gutronix.de>,
linux-can@...r.kernel.org, Thomas Gschwantner <tharre3@...il.com>
Subject: Re: [Linuxarm] Re: [RFC v2] net: sched: implement TCQ_F_CAN_BYPASS
for lockless qdisc
On Thu, Mar 18, 2021 at 1:33 AM Yunsheng Lin <linyunsheng@...wei.com> wrote:
> > That offer definitely still stands. Generalization sounds like a lot of fun.
> >
> > Keep in mind though that it's an eventually consistent queue, not an
> > immediately consistent one, so that might not match all use cases. It
> > works with wg because we always trigger the reader thread anew when it
> > finishes, but that doesn't apply to everyone's queueing setup.
>
> Thanks for mentioning this.
>
> "multi-producer, single-consumer" seems to match the lockless qdisc's
> paradigm too, for now concurrent enqueuing/dequeuing to the pfifo_fast's
> queues() is not allowed, it is protected by producer_lock or consumer_lock.
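
For discussion's sake, the eventually-consistent queue wg uses boils
down to Dmitry Vyukov's intrusive MPSC list. Below is a rough,
untested sketch of the general shape -- made-up names, not the actual
wg code:

#include <linux/atomic.h>
#include <linux/compiler.h>

struct mpsc_node {
	struct mpsc_node *next;
};

struct mpsc_queue {
	struct mpsc_node *head;	/* consumer-owned */
	struct mpsc_node *tail;	/* producers xchg() onto this */
	struct mpsc_node stub;	/* avoids the empty-list special case */
};

static void mpsc_init(struct mpsc_queue *q)
{
	q->head = q->tail = &q->stub;
	q->stub.next = NULL;
}

/* Any number of producers, no locks: a single atomic xchg each. */
static void mpsc_enqueue(struct mpsc_queue *q, struct mpsc_node *n)
{
	struct mpsc_node *prev;

	n->next = NULL;
	prev = xchg(&q->tail, n);
	/* Between the xchg and this store the list is disconnected:
	 * this is the eventual-consistency window. */
	WRITE_ONCE(prev->next, n);
}

/* Single consumer. Returns NULL both when empty and when a producer
 * is mid-enqueue, so NULL means "retry later", not "done". */
static struct mpsc_node *mpsc_dequeue(struct mpsc_queue *q)
{
	struct mpsc_node *head = q->head, *next = READ_ONCE(head->next);

	if (head == &q->stub) {
		if (!next)
			return NULL;
		q->head = head = next;
		next = READ_ONCE(head->next);
	}
	if (next) {
		q->head = next;
		return head;
	}
	if (head != READ_ONCE(q->tail))
		return NULL;	/* producer between xchg and next-store */
	mpsc_enqueue(q, &q->stub);	/* re-park the stub behind head */
	next = READ_ONCE(head->next);
	if (next) {
		q->head = next;
		return head;
	}
	return NULL;
}

Note the NULL cases in the dequeue: the consumer can observe NULL
while a producer sits between the xchg and the next-pointer store.
That's the eventual consistency I mentioned above -- it only works
because we re-trigger the reader when it finishes, so if the qdisc
path can't guarantee that, this shape is a bad fit.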
The other thing is that if you've got memory for a ring buffer rather
than a linked-list queue, we worked on an MPMC ring structure for
WireGuard a few years ago. We didn't wind up using it in the end, but
it lives here:
https://git.zx2c4.com/wireguard-monolithic-historical/tree/src/mpmc_ptr_ring.h?h=tg/mpmc-benchmark
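
To give the general flavor without vouching for the details from
memory: a bounded MPMC pointer ring usually ends up looking like the
classic per-slot sequence counter scheme (Vyukov again). The sketch
below is illustrative and untested -- it is *not* the contents of
that file:

/* Init: size is a power of two, mask = size - 1, slots[i].seq = i,
 * enqueue_pos = dequeue_pos = 0. */
#include <linux/atomic.h>
#include <linux/cache.h>

struct mpmc_slot {
	atomic_long_t seq;
	void *ptr;
};

struct mpmc_ring {
	struct mpmc_slot *slots;
	unsigned long mask;
	atomic_long_t enqueue_pos ____cacheline_aligned;
	atomic_long_t dequeue_pos ____cacheline_aligned;
};

static bool mpmc_push(struct mpmc_ring *r, void *ptr)
{
	long pos = atomic_long_read(&r->enqueue_pos);

	for (;;) {
		struct mpmc_slot *slot = &r->slots[pos & r->mask];
		long dif = atomic_long_read_acquire(&slot->seq) - pos;

		if (dif == 0) {
			/* Slot is free for this lap; claim the position. */
			if (atomic_long_try_cmpxchg(&r->enqueue_pos, &pos, pos + 1)) {
				slot->ptr = ptr;
				/* Publish: consumers wait for seq == pos + 1. */
				atomic_long_set_release(&slot->seq, pos + 1);
				return true;
			}
			/* try_cmpxchg updated pos on failure; just retry. */
		} else if (dif < 0) {
			return false; /* ring full */
		} else {
			pos = atomic_long_read(&r->enqueue_pos);
		}
	}
}

static void *mpmc_pop(struct mpmc_ring *r)
{
	long pos = atomic_long_read(&r->dequeue_pos);

	for (;;) {
		struct mpmc_slot *slot = &r->slots[pos & r->mask];
		long dif = atomic_long_read_acquire(&slot->seq) - (pos + 1);

		if (dif == 0) {
			if (atomic_long_try_cmpxchg(&r->dequeue_pos, &pos, pos + 1)) {
				void *ptr = slot->ptr;
				/* Hand the slot back to producers, one lap later. */
				atomic_long_set_release(&slot->seq, pos + r->mask + 1);
				return ptr;
			}
		} else if (dif < 0) {
			return NULL; /* ring empty */
		} else {
			pos = atomic_long_read(&r->dequeue_pos);
		}
	}
}

The per-slot sequence number is the trick: each side does a single
try_cmpxchg on its own counter, and full/empty detection across laps
falls out of comparing seq against the claimed position.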