lists.openwall.net: Open Source and information security mailing list archives
Message-ID: <a49585d2-484d-4b1d-8f6b-197cd05fa1ce@amd.com>
Date: Fri, 2 Aug 2024 10:38:04 -0700
From: "Nelson, Shannon" <shannon.nelson@....com>
To: Nick Child <nnac123@...ux.ibm.com>, netdev@...r.kernel.org
Cc: bjking1@...ux.ibm.com, haren@...ux.ibm.com, ricklind@...ibm.com
Subject: Re: [PATCH net-next 2/2] ibmveth: Recycle buffers during replenish phase

On 8/2/2024 10:36 AM, Nick Child wrote:
> 
> On 8/1/24 18:07, Nelson, Shannon wrote:
>> On 8/1/2024 2:12 PM, Nick Child wrote:
>>>
>>> When the length of a packet is under the rx_copybreak threshold, the
>>> buffer is copied into a new skb and sent up the stack. This allows the
>>> DMA-mapped memory to be recycled back to FW.
>>>
>>> Previously, the reuse of the DMA space was handled immediately.
>>> This means that further packet processing has to wait until
>>> h_add_logical_lan finishes for this packet.
>>>
>>> Therefore, when reusing a packet, offload the hcall to the replenish
>>> function. As a result, much of the shared logic between the recycle and
>>> replenish functions can be removed.
>>>
>>> This change increases TCP_RR packet rate by another 15% (370k to 430k
>>> txns). We can see the ftrace data supports this:
>>> PREV: ibmveth_poll = 8078553.0 us / 190999.0 hits = AVG 42.3 us
>>> NEW:  ibmveth_poll = 7632787.0 us / 224060.0 hits = AVG 34.07 us
>>>
>>> Signed-off-by: Nick Child <nnac123@...ux.ibm.com>
>>> ---
>>>   drivers/net/ethernet/ibm/ibmveth.c | 144 ++++++++++++-----------------
>>>   1 file changed, 60 insertions(+), 84 deletions(-)
>>>
>>> diff --git a/drivers/net/ethernet/ibm/ibmveth.c b/drivers/net/ethernet/ibm/ibmveth.c
>>> index e6eb594f0751..b619a3ec245b 100644
>>> --- a/drivers/net/ethernet/ibm/ibmveth.c
>>> +++ b/drivers/net/ethernet/ibm/ibmveth.c
>>> @@ -39,7 +39,8 @@
>>>   #include "ibmveth.h"
>>>
>>>   static irqreturn_t ibmveth_interrupt(int irq, void *dev_instance);
>>> -static void ibmveth_rxq_harvest_buffer(struct ibmveth_adapter *adapter);
>>> +static void ibmveth_rxq_harvest_buffer(struct ibmveth_adapter *adapter,
>>> +                                      bool reuse);
>>>   static unsigned long ibmveth_get_desired_dma(struct vio_dev *vdev);
>>>
>>>   static struct kobj_type ktype_veth_pool;
>>> @@ -226,6 +227,16 @@ static void ibmveth_replenish_buffer_pool(struct ibmveth_adapter *adapter,
>>>          for (i = 0; i < count; ++i) {
>>>                  union ibmveth_buf_desc desc;
>>>
>>> +               free_index = pool->consumer_index;
>>> +               index = pool->free_map[free_index];
>>> +               skb = NULL;
>>> +
>>> +               BUG_ON(index == IBM_VETH_INVALID_MAP);
>>
>> Maybe can replace with a WARN_ON with a break out of the loop?
>>
>> Otherwise this looks reasonable.
>>
>> Reviewed-by: Shannon Nelson <shannon.nelson@....com>
>>
> 
> Hi Shannon,
> Thanks for reviewing. Addressing your comment on both
> patches here, as they are related. I agree we should
> replace the BUG_ONs, but there are 6 other BUG_ONs in
> the driver that I would like to address as well. I am
> thinking I will send a separate patch (outside of this
> patchset) which removes all BUG_ONs in the driver.
> 
> Since this patchset only rearranges existing BUG_ONs, I
> will hold off on sending a v2 unless other feedback comes
> in. Thanks again.
> 

No problem - this is what I expected.
sln

