Date:   Wed, 2 Jun 2021 15:28:22 +0200
From:   Maxime Ripard <maxime@...no.tech>
To:     nicolas saenz julienne <nsaenz@...nel.org>
Cc:     Florian Fainelli <f.fainelli@...il.com>,
        Doug Berger <opendmb@...il.com>,
        bcm-kernel-feedback-list@...adcom.com,
        linux-kernel@...r.kernel.org, netdev@...r.kernel.org
Subject: Re: Kernel Panic in skb_release_data using genet

On Tue, Jun 01, 2021 at 11:33:18AM +0200, nicolas saenz julienne wrote:
> On Mon, 2021-05-31 at 19:36 -0700, Florian Fainelli wrote:
> > > That is also how I boot my Pi4 at home, and I suspect you are right:
> > > if the VPU does not shut down GENET's DMA and leaves buffer addresses
> > > in the on-chip descriptors pointing into an address space that Linux
> > > manages completely differently, then we can have a serious problem
> > > and end up with memory corruption when the ring is reclaimed. I will
> > > run a few experiments to test that theory; a possible solution would
> > > be to use the SW_INIT reset controller to fully reset the controller
> > > before handing it over to the Linux driver.
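
A minimal sketch of what that could look like, assuming the block gets
a "sw_init" line described through the generic reset controller
binding and that this runs from the driver's probe path; neither is
something the driver does today:

	struct reset_control *rst;

	/* Look up an (optional) "sw_init" reset line for the device */
	rst = devm_reset_control_get_optional_exclusive(&pdev->dev,
							"sw_init");
	if (IS_ERR(rst))
		return PTR_ERR(rst);

	/*
	 * Pulse the reset so the block comes up clean regardless of
	 * what the firmware left behind, before touching the DMA rings.
	 */
	reset_control_reset(rst);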
> > 
> > Adding a WARN_ON(reg & DMA_EN) in bcmgenet_dma_disable() has not shown
> > that the TX or RX DMA were left running during the hand-over from the
> > VPU to the kernel. I checked out drm-misc-next-2021-05-17 to minimize
> > the differences between your set-up and mine, but so far I have not
> > been able to reproduce the crash by repeatedly booting from NFS; I
> > will try again.
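
For reference, the instrumentation described above would look roughly
like the hunk below. The context lines are reconstructed from memory
of bcmgenet_dma_disable() in
drivers/net/ethernet/broadcom/genet/bcmgenet.c and may not match the
tree exactly; only the added WARN_ON() lines are the point:

--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
@@ ... @@ static u32 bcmgenet_dma_disable(struct bcmgenet_priv *priv)
 	/* disable DMA */
 	dma_ctrl = 1 << (DESC_INDEX + DMA_RING_BUF_EN_SHIFT) | DMA_EN;
 	reg = bcmgenet_tdma_readl(priv, DMA_CTRL);
+	/* Flag TX DMA left enabled by the firmware across the hand-over */
+	WARN_ON(reg & DMA_EN);
 	reg &= ~dma_ctrl;
 	bcmgenet_tdma_writel(priv, reg, DMA_CTRL);
 
 	reg = bcmgenet_rdma_readl(priv, DMA_CTRL);
+	/* Same check on the RX side */
+	WARN_ON(reg & DMA_EN);
 	reg &= ~dma_ctrl;
 	bcmgenet_rdma_writel(priv, reg, DMA_CTRL);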
> 
> FWIW I can reproduce the error too. That said, it's rather hard to
> trigger: on the order of 1 failure every 20 tries.

Yeah, it looks like it only happens from a cold boot and comes in
"bursts": you get something like 5 in a row and then it's gone for a
while.

Maxime
