Message-ID: <CY4PR1001MB231125B16A35324A79270373E8E00@CY4PR1001MB2311.namprd10.prod.outlook.com>
Date:   Thu, 19 Nov 2020 05:19:56 +0000
From:   "Ramsay, Lincoln" <Lincoln.Ramsay@...i.com>
To:     Igor Russkikh <irusskikh@...vell.com>,
        "David S. Miller" <davem@...emloft.net>,
        Jakub Kicinski <kuba@...nel.org>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        Dmitry Bogdanov <dbogdanov@...vell.com>
Subject: Re: [EXT] [PATCH] aquantia: Reserve space when allocating an SKB

Hi Igor,

> > With your solution, packets larger than (AQ_CFG_RX_FRAME_MAX - AQ_SKB_PAD), up
> > to AQ_CFG_RX_FRAME_MAX, will overwrite the area of the page beyond the one they
> > are designated to. Ultimately, HW will corrupt memory in the next page.
> 
> The code in aq_get_rxpages seems to suggest that multiple frames can fit in a rxpage, so
> maybe the logic there prevents overwriting? (at the expense of not fitting as many
> frames into the page before it has to get a new one?)

I am not terribly experienced with such low-level code, but it looks to me like a page is allocated (4k) and then DMA mapped to the device. Frames are 2k, so only 2 can fit into a single mapped page. If the mapping were done with an offset of AQ_SKB_PAD, that would leave space for the SKB headroom, but it would also mean only a single frame could fit into that mapped page. Since this is the "I only have 1 fragment less than 2k" code path, maybe that's ok? I'm not sure whether the hardware side can know that it's only allowed to write 1 frame into the buffer...

I noticed on my device that aq_ring_rx_clean always hits the "fast" codepath. I guess that just means I am not pushing it hard enough?

> > I think the only acceptable solution here would be removing that optimized
> > build_skb path and keeping only napi_alloc_skb. Or, we could keep it under
> > some configuration condition (which is also not good).
> 
> I'll attempt to confirm that this works too, at least for our tests :)

FWIW: This does work. I notice that this has to copy data into an allocated skb rather than building the skb around the data. I guess avoiding that copy is why the fast path existed in the first place?

Would you like me to post this patch (removing the fast path)?

Lincoln
