Message-ID: <8617ba73-8a05-51c4-e52b-164687cecf07@linux.ibm.com>
Date: Tue, 28 Apr 2020 10:35:09 -0500
From: Thomas Falcon <tlfalcon@...ux.ibm.com>
To: Juliet Kim <julietk@...ux.vnet.ibm.com>, netdev@...r.kernel.org
Cc: linuxppc-dev@...ts.ozlabs.org
Subject: Re: [PATCH net] ibmvnic: Fall back to 16 H_SEND_SUB_CRQ_INDIRECT
entries with old FW
On 4/27/20 12:33 PM, Juliet Kim wrote:
> The maximum number of entries for H_SEND_SUB_CRQ_INDIRECT has
> increased on some platforms from 16 to 128. If Live Partition
> Mobility is used
> to migrate a running OS image from a newer source platform to an
> older target platform, then H_SEND_SUB_CRQ_INDIRECT will fail with
> H_PARAMETER if 128 entries are queued.
>
> Fix this by falling back to 16 entries if H_PARAMETER is returned
> from the hcall().
Thanks for the submission, but I am having a hard time believing that
this is what is happening, since the driver does not currently support
sending multiple frames per hypervisor call. Even if it did, this
approach would omit frame data needed by the VF, so the second attempt
may still fail. Are there system logs available that show the driver
attempting to send transmissions with more than 16 descriptors?
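
For reference, a minimal diagnostic sketch (my own illustration, not
part of the submitted patch) would be a debug print just before the
indirect hcall in ibmvnic_xmit(), reusing the existing num_entries and
queue_num variables visible in the hunk below:

    /* Hypothetical debug aid: report any transmit that would queue
     * more than 16 indirect sub-CRQ descriptors.
     */
    if (num_entries > 16)
        netdev_dbg(netdev, "tx queue %d needs %d indirect entries\n",
                   queue_num, num_entries);

With dynamic debug enabled for ibmvnic, seeing that message would
confirm that the >16-entry case is actually being hit.
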
Thanks,
Tom
>
> Signed-off-by: Juliet Kim <julietk@...ux.vnet.ibm.com>
> ---
> drivers/net/ethernet/ibm/ibmvnic.c | 11 +++++++++++
> 1 file changed, 11 insertions(+)
>
> diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
> index 4bd33245bad6..b66c2f26a427 100644
> --- a/drivers/net/ethernet/ibm/ibmvnic.c
> +++ b/drivers/net/ethernet/ibm/ibmvnic.c
> @@ -1656,6 +1656,17 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
> lpar_rc = send_subcrq_indirect(adapter, handle_array[queue_num],
> (u64)tx_buff->indir_dma,
> (u64)num_entries);
> +
> + /* Old firmware accepts max 16 num_entries */
> + if (lpar_rc == H_PARAMETER && num_entries > 16) {
> + tx_crq.v1.n_crq_elem = 16;
> + tx_buff->num_entries = 16;
> + lpar_rc = send_subcrq_indirect(adapter,
> + handle_array[queue_num],
> + (u64)tx_buff->indir_dma,
> + 16);
> + }
> +
> dma_unmap_single(dev, tx_buff->indir_dma,
> sizeof(tx_buff->indir_arr), DMA_TO_DEVICE);
> } else {