Date:   Fri, 08 Dec 2017 14:19:07 -0500 (EST)
From:   David Miller <davem@...emloft.net>
To:     niklas.cassel@...s.com
Cc:     Joao.Pinto@...opsys.com, peppe.cavallaro@...com,
        alexandre.torgue@...com, niklass@...s.com, netdev@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH net-next] net: stmmac: fix broken dma_interrupt
 handling for multi-queues

From: Niklas Cassel <niklas.cassel@...s.com>
Date: Thu,  7 Dec 2017 23:56:10 +0100

> There is nothing that says that the number of TX queues == the number of
> RX queues. E.g. the ARTPEC-6 SoC has 2 TX queues and 1 RX queue.
> 
> This code is obviously wrong:
> for (chan = 0; chan < tx_channel_count; chan++) {
>     struct stmmac_rx_queue *rx_q = &priv->rx_queue[chan];
> 
> priv->rx_queue has size MTL_MAX_RX_QUEUES, so the index stays within the
> array, but only the RX queues actually in use are initialized; this will
> send an uninitialized napi_struct to __napi_schedule(), causing us to
> crash in net_rx_action(), because napi_struct->poll is zero.
 ...
> Since each DMA channel can be used for rx and tx simultaneously,
> the current code should probably be rewritten so that napi_struct is
> embedded in a new struct stmmac_channel.
> That way, stmmac_poll() can call stmmac_tx_clean() on just the tx queue
> where we got the IRQ, instead of looping through all tx queues.
> This is also how the xgbe driver does it (another driver for this IP).
> 
> Fixes: c22a3f48ef99 ("net: stmmac: adding multiple napi mechanism")
> Signed-off-by: Niklas Cassel <niklas.cassel@...s.com>
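
For reference, a rough sketch of the kind of guard the commit message calls
for: iterate over every channel that can raise an interrupt, but only touch
priv->rx_queue[chan] (and its napi_struct) when chan is a valid RX queue.
This is only an illustration, not the applied patch verbatim; helper and
field names (rx_queues_to_use, handle_rx, stmmac_disable_dma_irq) follow the
stmmac driver of this era and may differ in detail.

	u32 tx_cnt = priv->plat->tx_queues_to_use;
	u32 rx_cnt = priv->plat->rx_queues_to_use;
	u32 max_cnt = max(tx_cnt, rx_cnt);
	u32 chan;

	for (chan = 0; chan < max_cnt; chan++) {
		int status = priv->hw->dma->dma_interrupt(priv->ioaddr,
							  &priv->xstats, chan);

		/* Only the RX queues that were actually set up have a
		 * napi_struct registered, so bound the index by rx_cnt
		 * before scheduling NAPI.
		 */
		if ((status & handle_rx) && chan < rx_cnt) {
			struct stmmac_rx_queue *rx_q = &priv->rx_queue[chan];

			if (likely(napi_schedule_prep(&rx_q->napi))) {
				stmmac_disable_dma_irq(priv, chan);
				__napi_schedule(&rx_q->napi);
			}
		}

		/* TX completion and abnormal interrupt handling omitted
		 * for brevity.
		 */
	}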

Applied, but indeed a lot more fixes are needed in this area.
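
To make the suggested rework concrete, here is one hypothetical shape it
could take (a sketch, not an actual patch): embed the napi_struct in a
per-DMA-channel structure so the poll routine only services the TX/RX
queues of the channel whose IRQ fired. Names such as struct stmmac_channel
and stmmac_napi_poll are illustrative; stmmac_tx_clean(), stmmac_rx() and
the enable/disable IRQ helpers are assumed to be the driver's existing
per-queue routines of this era.

struct stmmac_channel {
	struct napi_struct napi;	/* one NAPI context per DMA channel */
	struct stmmac_priv *priv;	/* back-pointer for the poll callback */
	u32 index;			/* DMA channel number */
};

static int stmmac_napi_poll(struct napi_struct *napi, int budget)
{
	struct stmmac_channel *ch =
		container_of(napi, struct stmmac_channel, napi);
	struct stmmac_priv *priv = ch->priv;
	int work_done = 0;

	/* Clean only the TX queue belonging to this channel (if any),
	 * instead of looping over every TX queue.
	 */
	if (ch->index < priv->plat->tx_queues_to_use)
		stmmac_tx_clean(priv, ch->index);

	/* Poll the matching RX queue, if this channel has one. */
	if (ch->index < priv->plat->rx_queues_to_use)
		work_done = stmmac_rx(priv, budget, ch->index);

	if (work_done < budget && napi_complete_done(napi, work_done))
		stmmac_enable_dma_irq(priv, ch->index);

	return work_done;
}

This mirrors the per-channel NAPI layout of the xgbe driver mentioned in
the commit message.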
