Message-ID: <20250110164152.0ededf8a@kernel.org>
Date: Fri, 10 Jan 2025 16:41:52 -0800
From: Jakub Kicinski <kuba@...nel.org>
To: John Daley <johndale@...co.com>
Cc: andrew+netdev@...n.ch, benve@...co.com, davem@...emloft.net,
 edumazet@...gle.com, neescoba@...co.com, netdev@...r.kernel.org,
 pabeni@...hat.com, satishkh@...co.com
Subject: Re: [PATCH net-next v4 4/6] enic: Use the Page Pool API for RX when
 MTU is less than page size

On Fri, 10 Jan 2025 15:52:04 -0800 John Daley wrote:
> >SG, but please don't report it via ethtool. Add it in 
> >enic_get_queue_stats_rx() as alloc_fail (and enic_get_base_stats()).
> >As one of the benefits, you'll be able to use
> >tools/testing/selftests/drivers/net/hw/pp_alloc_fail.py
> >to test this stat and error handling in the driver.  
> 
> FYI, after making the suggested change I used pp_alloc_fail.py, but no
> errors were injected. I think the path from page_pool_dev_alloc()
> does not call page_pool_alloc_pages()?
> 
> Here is what I believe the call path is:
> page_pool_dev_alloc(rq->pool, &offset, &truesize)
>   page_pool_alloc(pool, offset, size, gfp)
>     netmem_to_page(page_pool_alloc_netmem(pool, offset, size, gfp));
>       page_pool_alloc_frag_netmem(pool, offset, *size, gfp);
>         page_pool_alloc_netmems(pool, gfp);
>           __page_pool_alloc_pages_slow(pool, gfp);
> 
> If I change the call from page_pool_dev_alloc() to
> page_pool_dev_alloc_pages() in the driver, I do see the errors injected.
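
To make the quoted suggestion concrete, reporting the failures through the
netdev queue stats API might look roughly like the sketch below. This is
only an illustration, not the actual enic patch; the enic->rq[idx].stats
layout and the pp_alloc_fail counter name are assumed for the example.

	#include <linux/netdevice.h>
	#include <net/netdev_queues.h>

	static void enic_get_queue_stats_rx(struct net_device *dev, int idx,
					    struct netdev_queue_stats_rx *rxs)
	{
		struct enic *enic = netdev_priv(dev);
		/* assumed per-RQ stats struct, for illustration only */
		struct enic_rq_stats *rqs = &enic->rq[idx].stats;

		rxs->bytes = rqs->bytes;
		rxs->packets = rqs->packets;
		/* assumed counter, bumped in the RX refill path whenever
		 * page_pool_dev_alloc() returns NULL
		 */
		rxs->alloc_fail = rqs->pp_alloc_fail;
	}

	static void enic_get_base_stats(struct net_device *dev,
					struct netdev_queue_stats_rx *rxs,
					struct netdev_queue_stats_tx *txs)
	{
		/* zero base stats so the per-queue values sum up cleanly */
		rxs->bytes = 0;
		rxs->packets = 0;
		rxs->alloc_fail = 0;
		txs->bytes = 0;
		txs->packets = 0;
	}

	static const struct netdev_stat_ops enic_netdev_stat_ops = {
		.get_queue_stats_rx	= enic_get_queue_stats_rx,
		.get_base_stats		= enic_get_base_stats,
	};

The ops would be hooked up with netdev->stat_ops = &enic_netdev_stat_ops at
probe time. pp_alloc_fail.py reads alloc_fail through the netdev qstats
netlink interface, which is why the suggestion is to report it there rather
than via ethtool -S.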

Ah, good point. I think the netmems conversion broke it :(
If we moved the error injection to happen on page_pool_alloc_netmems(),
it would work, right? Would I be able to convince you to test that
and send a patch? :)
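
For anyone following along, the idea is roughly this (a sketch, not a
tested patch): the hook pp_alloc_fail.py relies on is the
ALLOW_ERROR_INJECTION() annotation in net/core/page_pool.c, which sits on
the page_pool_alloc_pages() wrapper. The frag path behind
page_pool_dev_alloc() never calls that wrapper, so moving the annotation to
page_pool_alloc_netmems(), which both paths funnel through, should make the
injection reach either allocator:

	/* net/core/page_pool.c, sketch only; function bodies unchanged */

	/* both the page wrapper below and the frag path used by
	 * page_pool_dev_alloc() end up in page_pool_alloc_netmems(), so
	 * this is the common point to allow injection on
	 */
	ALLOW_ERROR_INJECTION(page_pool_alloc_netmems, NULL);

	struct page *page_pool_alloc_pages(struct page_pool *pool, gfp_t gfp)
	{
		return netmem_to_page(page_pool_alloc_netmems(pool, gfp));
	}
	EXPORT_SYMBOL(page_pool_alloc_pages);
	/* the ALLOW_ERROR_INJECTION(page_pool_alloc_pages, NULL) that used
	 * to live here can then be dropped
	 */

If the selftest names its fail_function target explicitly, it would need
updating to the new symbol as well.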
