Message-ID: <CAMZdPi8KdWCke5s03Bvy_4NZcDsgp+jPW5dqvCHyiU2C==tsmw@mail.gmail.com>
Date: Mon, 12 Sep 2022 14:53:07 +0200
From: Loic Poulain <loic.poulain@...aro.org>
To: Sreehari Kancharla <sreehari.kancharla@...ux.intel.com>
Cc: netdev@...r.kernel.org, kuba@...nel.org, davem@...emloft.net, johannes@...solutions.net, ryazanov.s.a@...il.com, m.chetan.kumar@...el.com, chandrashekar.devegowda@...el.com, linuxwwan@...el.com, chiranjeevi.rapolu@...ux.intel.com, haijun.liu@...iatek.com, ricardo.martinez@...ux.intel.com, andriy.shevchenko@...ux.intel.com, dinesh.sharma@...el.com, ilpo.jarvinen@...ux.intel.com, moises.veleta@...el.com, sreehari.kancharla@...el.com
Subject: Re: [PATCH net-next 2/2] net: wwan: t7xx: Add NAPI support

Hi Sreehari,

On Fri, 9 Sept 2022 at 18:40, Sreehari Kancharla
<sreehari.kancharla@...ux.intel.com> wrote:
>
> From: Haijun Liu <haijun.liu@...iatek.com>
>
> Replace the work queue based RX flow with a NAPI implementation.
> Remove rx_thread and dpmaif_rxq_work.
> Introduce a dummy network device. Its responsibilities are to:
> - Bind one NAPI object to each DL HW queue and act as
>   the agent of all those network devices.
> - Use the NAPI object to poll DL packets.
> - Help dispatch each packet to the network interface.
>
> Signed-off-by: Haijun Liu <haijun.liu@...iatek.com>
> Co-developed-by: Sreehari Kancharla <sreehari.kancharla@...ux.intel.com>
> Signed-off-by: Sreehari Kancharla <sreehari.kancharla@...ux.intel.com>
> Signed-off-by: Chandrashekar Devegowda <chandrashekar.devegowda@...el.com>
> Acked-by: Ricardo Martinez <ricardo.martinez@...ux.intel.com>
> Acked-by: M Chetan Kumar <m.chetan.kumar@...ux.intel.com>
> ---
>  drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h    |  14 +-
>  drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c | 220 +++++++--------------
>  drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.h |   1 +
>  drivers/net/wwan/t7xx/t7xx_netdev.c        |  93 ++++++++-
>  drivers/net/wwan/t7xx/t7xx_netdev.h        |   5 +
>  5 files changed, 167 insertions(+), 166 deletions(-)
>
> diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h
> index 1225ca0ed51e..0ce4505e813d 100644
> --- a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h
> +++ b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h
> @@ -20,6 +20,7 @@

[...]

> -static void t7xx_dpmaif_rxq_work(struct work_struct *work)
> +int t7xx_dpmaif_napi_rx_poll(struct napi_struct *napi, const int budget)
>  {
> -	struct dpmaif_rx_queue *rxq = container_of(work, struct dpmaif_rx_queue, dpmaif_rxq_work);
> -	struct dpmaif_ctrl *dpmaif_ctrl = rxq->dpmaif_ctrl;
> -	int ret;
> +	struct dpmaif_rx_queue *rxq = container_of(napi, struct dpmaif_rx_queue, napi);
> +	struct t7xx_pci_dev *t7xx_dev = rxq->dpmaif_ctrl->t7xx_dev;
> +	int ret, once_more = 0, work_done = 0;
>
>  	atomic_set(&rxq->rx_processing, 1);
>  	/* Ensure rx_processing is changed to 1 before actually begin RX flow */
> @@ -917,22 +840,54 @@ static void t7xx_dpmaif_rxq_work(struct work_struct *work)
>
>  	if (!rxq->que_started) {
>  		atomic_set(&rxq->rx_processing, 0);
> -		dev_err(dpmaif_ctrl->dev, "Work RXQ: %d has not been started\n", rxq->index);
> -		return;
> +		dev_err(rxq->dpmaif_ctrl->dev, "Work RXQ: %d has not been started\n", rxq->index);
> +		return work_done;
>  	}
>
> -	ret = pm_runtime_resume_and_get(dpmaif_ctrl->dev);
> -	if (ret < 0 && ret != -EACCES)
> -		return;
> +	if (!rxq->sleep_lock_pending) {
> +		ret = pm_runtime_resume_and_get(rxq->dpmaif_ctrl->dev);

AFAIK, polling is not called in a context allowing you to sleep (e.g.
performing a synced pm runtime operation).

> +		if (ret < 0 && ret != -EACCES)
> +			return work_done;
>
> -	t7xx_pci_disable_sleep(dpmaif_ctrl->t7xx_dev);
> -	if (t7xx_pci_sleep_disable_complete(dpmaif_ctrl->t7xx_dev))
> -		t7xx_dpmaif_do_rx(dpmaif_ctrl, rxq);
> +		t7xx_pci_disable_sleep(t7xx_dev);
> +	}
>
> -	t7xx_pci_enable_sleep(dpmaif_ctrl->t7xx_dev);
> -	pm_runtime_mark_last_busy(dpmaif_ctrl->dev);
> -	pm_runtime_put_autosuspend(dpmaif_ctrl->dev);
> +	ret = try_wait_for_completion(&t7xx_dev->sleep_lock_acquire);

The logic seems odd, why not simply schedule napi polling when you are
really ready to handle it, i.e. when you have the awake condition & rx
ready.

> +	if (!ret) {
> +		napi_complete_done(napi, work_done);
> +		rxq->sleep_lock_pending = true;
> +		napi_reschedule(napi);
> +		return work_done;
> +	}
> +

[...]

Regards,
Loic