Message-ID: <CALW65jbaV4WCznjo4NxYe2Vs0NLLU+xV6-Z4sV9DNu91A+sUGA@mail.gmail.com>
Date: Mon, 9 Feb 2026 19:41:07 +0800
From: Qingfang Deng <dqfext@...il.com>
To: Vadim Fedorenko <vadim.fedorenko@...ux.dev>
Cc: Andrew Lunn <andrew+netdev@...n.ch>, "David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
linux-ppp@...r.kernel.org, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH net-next] ppp: don't store tx skb in the fastpath
On Mon, Feb 9, 2026 at 7:17 PM Vadim Fedorenko
<vadim.fedorenko@...ux.dev> wrote:
> On 09/02/2026 02:11, Qingfang Deng wrote:
> > Currently, ppp->xmit_pending is used in ppp_send_frame() to pass an skb
> > to ppp_push(), and holds the skb when a PPP channel cannot immediately
> > transmit it. This state is redundant because the transmit queue
> > (ppp->file.xq) can already handle the backlog. Furthermore, during
> > normal operation, an skb is queued in file.xq only to be immediately
> > dequeued, causing unnecessary overhead.
> >
> > Refactor the transmit path to avoid stashing the skb when possible:
> > - Remove ppp->xmit_pending.
> > - Rename ppp_send_frame() to ppp_prepare_tx_skb(), and don't call
> >   ppp_push() in it. It returns 1 if the skb is consumed
> >   (dropped/handled) or 0 if it can be passed to ppp_push().
> > - Update ppp_push() to accept the skb. It returns 1 if the skb is
> >   consumed, or 0 if the channel is busy.
> > - Optimize __ppp_xmit_process():
> >   - Fastpath: If the queue is empty, attempt to send the skb directly
> >     via ppp_push(). If busy, queue it.
> >   - Slowpath: If the queue is not empty, or the fastpath failed, process
> >     the backlog in file.xq. Split the dequeueing loop into a separate
> >     function ppp_xmit_flush() so ppp_channel_push() uses it directly
> >     instead of passing a NULL skb to __ppp_xmit_process().
> >
> > This simplifies the state handling and reduces locking in the fastpath.
>
> Quite an interesting optimization. Did you measure the improvements? Like
> pps over a PPP interface, or the length of the backlog at some PPP rate?
Not yet. I may test it with PPPoE when I have access to a network
traffic generator tomorrow.
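
For illustration only, below is a minimal sketch of the reworked transmit
path described in the quoted patch summary. The helper names come from that
summary, but their exact signatures (and the locking around them) are
assumptions; the real patch against drivers/net/ppp/ppp_generic.c may look
different.

/*
 * Sketch only: assumed prototypes, inferred from the description above.
 * ppp_prepare_tx_skb() returns 1 if the skb was consumed (dropped or
 * otherwise handled), 0 if it is ready to be passed to ppp_push().
 * ppp_push() returns 1 if the skb was handed to a channel, 0 if busy.
 */
static int ppp_prepare_tx_skb(struct ppp *ppp, struct sk_buff **skb);
static int ppp_push(struct ppp *ppp, struct sk_buff *skb);

/* Drain the backlog queue; stop as soon as the channel reports busy. */
static void ppp_xmit_flush(struct ppp *ppp)
{
        struct sk_buff *skb;

        while ((skb = skb_dequeue(&ppp->file.xq)) != NULL) {
                if (!ppp_push(ppp, skb)) {
                        /* Channel busy: put the skb back and retry later. */
                        skb_queue_head(&ppp->file.xq, skb);
                        break;
                }
        }
}

static void __ppp_xmit_process(struct ppp *ppp, struct sk_buff *skb)
{
        if (ppp_prepare_tx_skb(ppp, &skb))
                return;         /* skb was dropped or otherwise handled */

        /* Fastpath: no backlog, try to hand the skb to the channel now. */
        if (skb_queue_empty(&ppp->file.xq) && ppp_push(ppp, skb))
                return;

        /* Slowpath: queue the skb and drain the backlog in order. */
        skb_queue_tail(&ppp->file.xq, skb);
        ppp_xmit_flush(ppp);
}

/*
 * ppp_channel_push() would then call ppp_xmit_flush() directly instead of
 * calling __ppp_xmit_process() with a NULL skb.
 */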