Date:   Fri, 28 Apr 2023 14:17:53 -0700
From:   "Tantilov, Emil S" <emil.s.tantilov@...el.com>
To:     Shannon Nelson <shannon.nelson@....com>,
        <intel-wired-lan@...ts.osuosl.org>
CC:     <netdev@...r.kernel.org>, <joshua.a.hay@...el.com>,
        <sridhar.samudrala@...el.com>, <jesse.brandeburg@...el.com>,
        <anthony.l.nguyen@...el.com>, <willemb@...gle.com>,
        <decot@...gle.com>, <pabeni@...hat.com>, <kuba@...nel.org>,
        <edumazet@...gle.com>, <davem@...emloft.net>,
        <alan.brady@...el.com>, <madhu.chittim@...el.com>,
        <phani.r.burra@...el.com>, <shailendra.bhatnagar@...el.com>,
        <pavan.kumar.linga@...el.com>, <simon.horman@...igine.com>,
        <leon@...nel.org>
Subject: Re: [net-next v3 04/15] idpf: add core init and interrupt request

On 4/28/2023 12:50 PM, Shannon Nelson wrote:
> On 4/26/23 7:09 PM, Emil Tantilov wrote:
>> From: Pavan Kumar Linga <pavan.kumar.linga@...el.com>
>>
>> Now that the mailbox is set up, add the necessary send and receive
>> mailbox message framework to support the virtchnl communication
>> between the driver and the device Control Plane (CP).
>>
>> Add the core initialization. To start with, the driver confirms the
>> virtchnl version with the CP. Once that is done, it requests and gets
>> the required capabilities and resources, such as max vectors, queues,
>> etc.
>>
>> Based on the vector information received in 'VIRTCHNL2_OP_GET_CAPS',
>> request the stack to allocate the required vectors. Finally add
>> the interrupt handling mechanism for the mailbox queue and enable
>> the interrupt.
>>
>> Note: checkpatch warns that IDPF_FOREACH_VPORT_VC_STATE and
>> IDPF_GEN_STRING are complex macros that should be enclosed in
>> parentheses, but that does not apply here: they are never used as
>> statements and are only used to define the enum and array.
>>
>> Co-developed-by: Alan Brady <alan.brady@...el.com>
>> Signed-off-by: Alan Brady <alan.brady@...el.com>
>> Co-developed-by: Emil Tantilov <emil.s.tantilov@...el.com>
>> Signed-off-by: Emil Tantilov <emil.s.tantilov@...el.com>
>> Co-developed-by: Joshua Hay <joshua.a.hay@...el.com>
>> Signed-off-by: Joshua Hay <joshua.a.hay@...el.com>
>> Co-developed-by: Madhu Chittim <madhu.chittim@...el.com>
>> Signed-off-by: Madhu Chittim <madhu.chittim@...el.com>
>> Co-developed-by: Phani Burra <phani.r.burra@...el.com>
>> Signed-off-by: Phani Burra <phani.r.burra@...el.com>
>> Co-developed-by: Shailendra Bhatnagar <shailendra.bhatnagar@...el.com>
>> Signed-off-by: Shailendra Bhatnagar <shailendra.bhatnagar@...el.com>
>> Signed-off-by: Pavan Kumar Linga <pavan.kumar.linga@...el.com>
>> Reviewed-by: Sridhar Samudrala <sridhar.samudrala@...el.com>
>> Reviewed-by: Willem de Bruijn <willemb@...gle.com>
>> ---
>>   drivers/net/ethernet/intel/idpf/idpf.h        | 137 ++-
>>   drivers/net/ethernet/intel/idpf/idpf_dev.c    |  17 +
>>   .../ethernet/intel/idpf/idpf_lan_pf_regs.h    |  43 +
>>   .../ethernet/intel/idpf/idpf_lan_vf_regs.h    |  38 +
>>   drivers/net/ethernet/intel/idpf/idpf_lib.c    | 343 ++++++++
>>   drivers/net/ethernet/intel/idpf/idpf_main.c   |  16 +
>>   drivers/net/ethernet/intel/idpf/idpf_txrx.h   |  26 +
>>   drivers/net/ethernet/intel/idpf/idpf_vf_dev.c |  22 +-
>>   .../net/ethernet/intel/idpf/idpf_virtchnl.c   | 801 ++++++++++++++++++
>>   9 files changed, 1441 insertions(+), 2 deletions(-)
>>   create mode 100644 drivers/net/ethernet/intel/idpf/idpf_txrx.h

<snip>

>> +/**
>> + * idpf_intr_req - Request interrupt capabilities
>> + * @adapter: adapter to enable interrupts on
>> + *
>> + * Returns 0 on success, negative on failure
>> + */
>> +int idpf_intr_req(struct idpf_adapter *adapter)
>> +{
>> +       u16 default_vports = idpf_get_default_vports(adapter);
>> +       int num_q_vecs, total_vecs, num_vec_ids;
>> +       int min_vectors, v_actual, err = 0;
>> +       unsigned int vector;
>> +       u16 *vecids;
>> +
>> +       total_vecs = idpf_get_reserved_vecs(adapter);
>> +       num_q_vecs = total_vecs - IDPF_MBX_Q_VEC;
>> +
>> +       err = idpf_send_alloc_vectors_msg(adapter, num_q_vecs);
>> +       if (err) {
>> +               dev_err(&adapter->pdev->dev,
>> +                       "Failed to allocate vectors: %d\n", err);
> 
> To aid in debugging, it would be good to include the number of vectors 
> in these error messages.
Sure.
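Something along these lines, for example (exact message text still to be
settled when we spin v4):

        err = idpf_send_alloc_vectors_msg(adapter, num_q_vecs);
        if (err) {
                dev_err(&adapter->pdev->dev,
                        "Failed to allocate %d vectors: %d\n", num_q_vecs, err);

                return -EAGAIN;
        }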

> 
>> +
>> +               return -EAGAIN;
>> +       }
>> +
>> +       min_vectors = IDPF_MBX_Q_VEC + IDPF_MIN_Q_VEC * default_vports;
>> +       v_actual = pci_alloc_irq_vectors(adapter->pdev, min_vectors,
>> +                                        total_vecs, PCI_IRQ_MSIX);
>> +       if (v_actual < min_vectors) {
>> +               dev_err(&adapter->pdev->dev, "Failed to allocate MSIX vectors: %d\n",
>> +                       v_actual);
>> +               err = -EAGAIN;
>> +               goto send_dealloc_vecs;
>> +       }
>> +
>> +       adapter->msix_entries = kcalloc(v_actual, sizeof(struct msix_entry),
>> +                                       GFP_KERNEL);
>> +
>> +       if (!adapter->msix_entries) {
>> +               err = -ENOMEM;
>> +               goto free_irq;
>> +       }
>> +
>> +       idpf_set_mb_vec_id(adapter);
>> +
>> +       vecids = kcalloc(total_vecs, sizeof(u16), GFP_KERNEL);
>> +       if (!vecids) {
>> +               err = -ENOMEM;
>> +               goto free_msix;
>> +       }
>> +
>> +       if (adapter->req_vec_chunks) {
>> +               struct virtchnl2_vector_chunks *vchunks;
>> +               struct virtchnl2_alloc_vectors *ac;
>> +
>> +               ac = adapter->req_vec_chunks;
>> +               vchunks = &ac->vchunks;
>> +
>> +               num_vec_ids = idpf_get_vec_ids(adapter, vecids, total_vecs,
>> +                                              vchunks);
>> +               if (num_vec_ids < v_actual) {
>> +                       err = -EINVAL;
>> +                       goto free_vecids;
>> +               }
>> +       } else {
>> +               int i = 0;
> 
> zero init unnecessary
> for that matter, I've recently seen someone using
>      for (int i = 0; ...
> in simple local for-loops
> I'm not sure that's an accepted practice yet, but would make sense here
> 
OK, we'll remove the zero init.
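For reference, the loop-scoped form would look like this (assuming the tree
is built with gnu11, which allows declarations in the for-loop init; we'll
double-check before using it):

        } else {
                for (int i = 0; i < v_actual; i++)
                        vecids[i] = i;
        }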

>> +
>> +               for (i = 0; i < v_actual; i++)
>> +                       vecids[i] = i;
>> +       }
>> +
>> +       for (vector = 0; vector < v_actual; vector++) {
>> +               adapter->msix_entries[vector].entry = vecids[vector];
>> +               adapter->msix_entries[vector].vector =
>> +                       pci_irq_vector(adapter->pdev, vector);
>> +       }
>> +
>> +       adapter->num_req_msix = total_vecs;
>> +       adapter->num_msix_entries = v_actual;
>> +       /* 'num_avail_msix' is used to distribute excess vectors to the vports
>> +        * after considering the minimum vectors required per each default
>> +        * vport
>> +        */
>> +       adapter->num_avail_msix = v_actual - min_vectors;
>> +
>> +       /* Fill MSIX vector lifo stack with vector indexes */
>> +       err = idpf_init_vector_stack(adapter);
>> +       if (err)
>> +               goto free_vecids;
>> +
>> +       err = idpf_mb_intr_init(adapter);
>> +       if (err)
>> +               goto deinit_vec_stack;
>> +       idpf_mb_irq_enable(adapter);
>> +       kfree(vecids);
>> +
>> +       return err;
>> +
>> +deinit_vec_stack:
>> +       idpf_deinit_vector_stack(adapter);
>> +free_vecids:
>> +       kfree(vecids);
>> +free_msix:
>> +       kfree(adapter->msix_entries);
>> +       adapter->msix_entries = NULL;
>> +free_irq:
>> +       pci_free_irq_vectors(adapter->pdev);
>> +send_dealloc_vecs:
>> +       idpf_send_dealloc_vectors_msg(adapter);
>> +
>> +       return err;
>> +}
>> +
>> +/**
>> + * idpf_service_task - Delayed task for handling mailbox responses
>> + * @work: work_struct handle to our data
>> + *
>> + */
>> +void idpf_service_task(struct work_struct *work)
>> +{
>> +       struct idpf_adapter *adapter;
>> +
>> +       adapter = container_of(work, struct idpf_adapter, serv_task.work);
>> +
>> +       if (test_bit(__IDPF_MB_INTR_MODE, adapter->flags)) {
>> +               if (test_and_clear_bit(__IDPF_MB_INTR_TRIGGER,
>> +                                      adapter->flags)) {
>> +                       idpf_recv_mb_msg(adapter, VIRTCHNL2_OP_UNKNOWN,
>> +                                        NULL, 0);
>> +                       idpf_mb_irq_enable(adapter);
>> +               }
>> +       } else {
>> +               idpf_recv_mb_msg(adapter, VIRTCHNL2_OP_UNKNOWN, NULL, 0);
>> +       }
>> +
>> +       if (idpf_is_reset_detected(adapter) &&
>> +           !idpf_is_reset_in_prog(adapter) &&
>> +           !test_bit(__IDPF_REMOVE_IN_PROG, adapter->flags)) {
>> +               dev_info(&adapter->pdev->dev, "HW reset detected\n");
>> +               set_bit(__IDPF_HR_FUNC_RESET, adapter->flags);
> 
> Is there any chance of some other thread starting a reset in the time 
> between your is_reset_in_progress check and setting this FUNC_RESET bit?
> 
We have a mutex called reset_lock in adapter that should take care of it.
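To illustrate the intent (sketch only, not the literal driver code; in the
driver the reset_lock is taken in the reset handling paths rather than open
coded here):

        mutex_lock(&adapter->reset_lock);
        if (idpf_is_reset_detected(adapter) &&
            !idpf_is_reset_in_prog(adapter) &&
            !test_bit(__IDPF_REMOVE_IN_PROG, adapter->flags))
                set_bit(__IDPF_HR_FUNC_RESET, adapter->flags);
        mutex_unlock(&adapter->reset_lock);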

<snip>

>> +/**
>> + * idpf_send_mb_msg - Send message over mailbox
>> + * @adapter: Driver specific private structure
>> + * @op: virtchnl opcode
>> + * @msg_size: size of the payload
>> + * @msg: pointer to buffer holding the payload
>> + *
>> + * Will prepare the control queue message and initiate the send API
>> + *
>> + * Returns 0 on success, negative on failure
>> + */
>> +int idpf_send_mb_msg(struct idpf_adapter *adapter, u32 op,
>> +                    u16 msg_size, u8 *msg)
>> +{
>> +       struct idpf_ctlq_msg *ctlq_msg;
>> +       struct idpf_dma_mem *dma_mem;
>> +       int err;
>> +
>> +       /* If we are here and a reset is detected nothing much can be
>> +        * done. This thread should silently abort and is expected to
>> +        * be corrected with a new run either by user or driver
>> +        * flows after reset
>> +        */
>> +       if (idpf_is_reset_detected(adapter))
>> +               return 0;
>> +
>> +       err = idpf_mb_clean(adapter);
>> +       if (err)
>> +               return err;
>> +
>> +       ctlq_msg = kzalloc(sizeof(*ctlq_msg), GFP_ATOMIC);
>> +       if (!ctlq_msg)
>> +               return -ENOMEM;
>> +
>> +       dma_mem = kzalloc(sizeof(*dma_mem), GFP_ATOMIC);
>> +       if (!dma_mem) {
>> +               err = -ENOMEM;
>> +               goto dma_mem_error;
>> +       }
>> +
>> +       memset(ctlq_msg, 0, sizeof(struct idpf_ctlq_msg));
> 
> Unnecessary, kzalloc() did this for you already
We'll remove it.

> 
>> +       ctlq_msg->opcode = idpf_mbq_opc_send_msg_to_pf;
>> +       ctlq_msg->func_id = 0;
>> +       ctlq_msg->data_len = msg_size;
>> +       ctlq_msg->cookie.mbx.chnl_opcode = op;
>> +       ctlq_msg->cookie.mbx.chnl_retval = VIRTCHNL2_STATUS_SUCCESS;
>> +       dma_mem->size = IDPF_DFLT_MBX_BUF_SIZE;
>> +       dma_mem->va = dmam_alloc_coherent(&adapter->pdev->dev, dma_mem->size,
>> +                                         &dma_mem->pa, GFP_ATOMIC);
> 
> Were you going to replace these dmam as you did the devm?
> Or are you intentionally keeping them?
> 
We'll look into replacing the dmam calls as well.
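For this path the unmanaged variant would look roughly like the below (sketch
only; the teardown paths would also need matching explicit frees for buffers
that currently rely on devres):

        dma_mem->va = dma_alloc_coherent(&adapter->pdev->dev, dma_mem->size,
                                         &dma_mem->pa, GFP_ATOMIC);
        if (!dma_mem->va) {
                err = -ENOMEM;
                goto dma_alloc_error;
        }

        /* and on the unwind path: */
send_error:
        dma_free_coherent(&adapter->pdev->dev, dma_mem->size, dma_mem->va,
                          dma_mem->pa);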

>> +       if (!dma_mem->va) {
>> +               err = -ENOMEM;
>> +               goto dma_alloc_error;
>> +       }
>> +       memcpy(dma_mem->va, msg, msg_size);
>> +       ctlq_msg->ctx.indirect.payload = dma_mem;
>> +
>> +       err = idpf_ctlq_send(&adapter->hw, adapter->hw.asq, 1, ctlq_msg);
>> +       if (err)
>> +               goto send_error;
>> +
>> +       return 0;
>> +
>> +send_error:
>> +       dmam_free_coherent(&adapter->pdev->dev, dma_mem->size, dma_mem->va,
>> +                          dma_mem->pa);
>> +dma_alloc_error:
>> +       kfree(dma_mem);
>> +dma_mem_error:
>> +       kfree(ctlq_msg);
>> +
>> +       return err;
>> +}
>> +
>> +/**
>> + * idpf_set_msg_pending_bit - Wait for clear and set msg pending
>> + * @adapter: driver specific private structure
>> + * @vport: virtual port structure
>> + *
>> + * If clear, sets the msg pending bit; otherwise waits for it to clear before
>> + * setting it again. Returns 0 on success, negative on failure.
>> + */
>> +static int idpf_set_msg_pending_bit(struct idpf_adapter *adapter,
>> +                                   struct idpf_vport *vport)
>> +{
>> +       unsigned int retries = 100;
>> +
>> +       /* If msg pending bit already set, there's a message waiting to be
>> +        * parsed and we must wait for it to be cleared before copying a new
>> +        * message into the vc_msg buffer or else we'll stomp all over the
>> +        * previous message.
>> +        */
>> +       while (retries) {
>> +               if (!test_and_set_bit(__IDPF_VC_MSG_PENDING, adapter->flags))
>> +                       break;
>> +               msleep(20);
>> +               retries--;
>> +       }
>> +
>> +       return retries ? 0 : -ETIMEDOUT;
>> +}
>> +
>> +/**
>> + * idpf_set_msg_pending - Wait for msg pending bit and copy msg to buf
>> + * @adapter: driver specific private structure
>> + * @vport: virtual port structure
>> + * @ctlq_msg: msg to copy from
>> + * @err_enum: err bit to set on error
>> + *
>> + * Copies payload from ctlq_msg into vc_msg buf in adapter and sets msg pending
>> + * bit. Returns 0 on success, negative on failure.
>> + */
>> +static int idpf_set_msg_pending(struct idpf_adapter *adapter,
>> +                               struct idpf_vport *vport,
>> +                               struct idpf_ctlq_msg *ctlq_msg,
>> +                               enum idpf_vport_vc_state err_enum)
>> +{
>> +       if (ctlq_msg->cookie.mbx.chnl_retval) {
>> +               set_bit(err_enum, adapter->vc_state);
>> +
>> +               return -EINVAL;
>> +       }
>> +
>> +       if (idpf_set_msg_pending_bit(adapter, vport)) {
>> +               set_bit(err_enum, adapter->vc_state);
>> +               dev_err(&adapter->pdev->dev, "Timed out setting msg pending\n");
>> +
>> +               return -ETIMEDOUT;
>> +       }
>> +
>> +       memcpy(adapter->vc_msg, ctlq_msg->ctx.indirect.payload->va,
>> +              min_t(int, ctlq_msg->ctx.indirect.payload->size,
>> +                    IDPF_DFLT_MBX_BUF_SIZE));
>> +
>> +       return 0;
>> +}
>> +
>> +/**
>> + * idpf_recv_vchnl_op - helper function with common logic when handling the
>> + * reception of VIRTCHNL OPs.
>> + * @adapter: driver specific private structure
>> + * @vport: virtual port structure
>> + * @ctlq_msg: msg to copy from
>> + * @state: state bit used on timeout check
>> + * @err_state: err bit to set on error
>> + */
>> +static void idpf_recv_vchnl_op(struct idpf_adapter *adapter,
>> +                              struct idpf_vport *vport,
>> +                              struct idpf_ctlq_msg *ctlq_msg,
>> +                              enum idpf_vport_vc_state state,
>> +                              enum idpf_vport_vc_state err_state)
>> +{
>> +       wait_queue_head_t *vchnl_wq = &adapter->vchnl_wq;
>> +       int err;
>> +
>> +       err = idpf_set_msg_pending(adapter, vport, ctlq_msg, err_state);
>> +       if (wq_has_sleeper(vchnl_wq)) {
>> +               /* sleeper is present and we got the pending bit */
>> +               set_bit(state, adapter->vc_state);
>> +
>> +               wake_up(vchnl_wq);
>> +       } else {
>> +               if (!err) {
>> +                       /* We got the pending bit, but release it if we cannot
>> +                        * find a thread waiting for the message.
>> +                        */
>> +                       dev_warn(&adapter->pdev->dev, "opcode %d received without waiting thread\n",
>> +                                ctlq_msg->cookie.mbx.chnl_opcode);
>> +                       clear_bit(__IDPF_VC_MSG_PENDING, adapter->flags);
>> +               } else {
>> +                       /* Clear the errors since there is no sleeper to pass them on */
>> +                       clear_bit(err_state, adapter->vc_state);
>> +               }
>> +       }
>> +}
>> +
>> +/**
>> + * idpf_recv_mb_msg - Receive message over mailbox
>> + * @adapter: Driver specific private structure
>> + * @op: virtchannel operation code
>> + * @msg: Received message holding buffer
>> + * @msg_size: message size
>> + *
>> + * Will receive control queue message and post the receive buffer. Returns 0
>> + * on success and negative on failure.
>> + */
>> +int idpf_recv_mb_msg(struct idpf_adapter *adapter, u32 op,
>> +                    void *msg, int msg_size)
>> +{
>> +       struct idpf_ctlq_msg ctlq_msg;
>> +       struct idpf_dma_mem *dma_mem;
>> +       bool work_done = false;
>> +       int num_retry = 2000;
>> +       u16 num_q_msg;
>> +       int err;
>> +
>> +       while (1) {
>> +               int payload_size = 0;
>> +
>> +               /* Try to get one message */
>> +               num_q_msg = 1;
>> +               dma_mem = NULL;
>> +               err = idpf_ctlq_recv(adapter->hw.arq, &num_q_msg, &ctlq_msg);
>> +               /* If no message then decide if we have to retry based on
>> +                * opcode
>> +                */
>> +               if (err || !num_q_msg) {
>> +                       /* Increasing num_retry to consider the delayed
>> +                        * responses because of large number of VF's mailbox
>> +                        * messages. If the mailbox message is received from
>> +                        * the other side, we come out of the sleep cycle
>> +                        * immediately else we wait for more time.
>> +                        */
>> +                       if (!op || !num_retry--)
>> +                               break;
>> +                       if (test_bit(__IDPF_REL_RES_IN_PROG, adapter->flags)) {
>> +                               err = -EIO;
>> +                               break;
>> +                       }
>> +                       msleep(20);
>> +                       continue;
>> +               }
>> +
>> +               /* If we are here a message is received. Check if we are looking
>> +                * for a specific message based on opcode. If it is different
>> +                * ignore and post buffers
>> +                */
>> +               if (op && ctlq_msg.cookie.mbx.chnl_opcode != op)
>> +                       goto post_buffs;
>> +
>> +               if (ctlq_msg.data_len)
>> +                       payload_size = ctlq_msg.ctx.indirect.payload->size;
>> +
>> +               /* All conditions are met. Either a message requested is
>> +                * received or we received a message to be processed
>> +                */
>> +               switch (ctlq_msg.cookie.mbx.chnl_opcode) {
>> +               case VIRTCHNL2_OP_VERSION:
>> +               case VIRTCHNL2_OP_GET_CAPS:
>> +                       if (ctlq_msg.cookie.mbx.chnl_retval) {
>> +                               dev_err(&adapter->pdev->dev, "Failure initializing, vc op: %u retval: %u\n",
>> +                                       ctlq_msg.cookie.mbx.chnl_opcode,
>> +                                       ctlq_msg.cookie.mbx.chnl_retval);
>> +                               err = -EBADMSG;
>> +                       } else if (msg) {
>> +                               memcpy(msg, ctlq_msg.ctx.indirect.payload->va,
>> +                                      min_t(int, payload_size, msg_size));
>> +                       }
>> +                       work_done = true;
>> +                       break;
>> +               case VIRTCHNL2_OP_ALLOC_VECTORS:
>> +                       idpf_recv_vchnl_op(adapter, NULL, &ctlq_msg,
>> +                                          IDPF_VC_ALLOC_VECTORS,
>> +                                          IDPF_VC_ALLOC_VECTORS_ERR);
>> +                       break;
>> +               case VIRTCHNL2_OP_DEALLOC_VECTORS:
>> +                       idpf_recv_vchnl_op(adapter, NULL, &ctlq_msg,
>> +                                          IDPF_VC_DEALLOC_VECTORS,
>> +                                          IDPF_VC_DEALLOC_VECTORS_ERR);
>> +                       break;
>> +               default:
>> +                       dev_warn(&adapter->pdev->dev,
>> +                                "Unhandled virtchnl response %d\n",
>> +                                ctlq_msg.cookie.mbx.chnl_opcode);
>> +                       break;
>> +               }
>> +
>> +post_buffs:
>> +               if (ctlq_msg.data_len)
>> +                       dma_mem = ctlq_msg.ctx.indirect.payload;
>> +               else
>> +                       num_q_msg = 0;
>> +
>> +               err = idpf_ctlq_post_rx_buffs(&adapter->hw, 
>> adapter->hw.arq,
>> +                                             &num_q_msg, &dma_mem);
>> +               /* If post failed clear the only buffer we supplied */
>> +               if (err && dma_mem)
>> +                       dmam_free_coherent(&adapter->pdev->dev, dma_mem->size,
>> +                                          dma_mem->va, dma_mem->pa);
>> +
>> +               /* Applies only if we are looking for a specific opcode */
>> +               if (work_done)
>> +                       break;
>> +       }
>> +
>> +       return err;
>> +}
>> +
>> +/**
>> + * __idpf_wait_for_event - wrapper function for wait on virtchannel response
>> + * @adapter: Driver private data structure
>> + * @vport: virtual port structure
>> + * @state: check on state upon timeout
>> + * @err_check: check if this specific error bit is set
>> + * @timeout: Max time to wait
>> + *
>> + * Checks if state is set upon expiry of timeout.  Returns 0 on success,
>> + * negative on failure.
>> + */
>> +static int __idpf_wait_for_event(struct idpf_adapter *adapter,
>> +                                struct idpf_vport *vport,
>> +                                enum idpf_vport_vc_state state,
>> +                                enum idpf_vport_vc_state err_check,
>> +                                int timeout)
>> +{
>> +       int time_to_wait, num_waits;
>> +       wait_queue_head_t *vchnl_wq;
>> +       unsigned long *vc_state;
>> +
>> +       time_to_wait = ((timeout <= IDPF_MAX_WAIT) ? timeout : IDPF_MAX_WAIT);
>> +       num_waits = ((timeout <= IDPF_MAX_WAIT) ? 1 : timeout / IDPF_MAX_WAIT);
>> +
>> +       vchnl_wq = &adapter->vchnl_wq;
>> +       vc_state = adapter->vc_state;
>> +
>> +       while (num_waits) {
>> +               int event;
>> +
>> +               /* If we are here and a reset is detected do not wait but
>> +                * return. Reset timing is out of drivers control. So
>> +                * while we are cleaning resources as part of reset if the
>> +                * underlying HW mailbox is gone, wait on mailbox messages
>> +                * is not meaningful
>> +                */
>> +               if (idpf_is_reset_detected(adapter))
>> +                       return 0;
>> +
>> +               event = wait_event_timeout(*vchnl_wq,
>> +                                          test_and_clear_bit(state, vc_state),
>> +                                          msecs_to_jiffies(time_to_wait));
>> +               if (event) {
>> +                       if (test_and_clear_bit(err_check, vc_state)) {
>> +                               dev_err(&adapter->pdev->dev, "VC response error %s\n",
>> +                                       idpf_vport_vc_state_str[err_check]);
>> +
>> +                               return -EINVAL;
>> +                       }
>> +
>> +                       return 0;
>> +               }
>> +               num_waits--;
>> +       }
>> +
>> +       /* Timeout occurred */
>> +       dev_err(&adapter->pdev->dev, "VC timeout, state = %s\n",
>> +               idpf_vport_vc_state_str[state]);
>> +
>> +       return -ETIMEDOUT;
>> +}
>> +
>> +/**
>> + * idpf_min_wait_for_event - wait for virtchannel response
>> + * @adapter: Driver private data structure
>> + * @vport: virtual port structure
>> + * @state: check on state upon timeout
>> + * @err_check: check if this specific error bit is set
>> + *
>> + * Returns 0 on success, negative on failure.
>> + */
>> +static int idpf_min_wait_for_event(struct idpf_adapter *adapter,
>> +                                  struct idpf_vport *vport,
>> +                                  enum idpf_vport_vc_state state,
>> +                                  enum idpf_vport_vc_state err_check)
>> +{
>> +       return __idpf_wait_for_event(adapter, vport, state, err_check,
>> +                                    IDPF_WAIT_FOR_EVENT_TIMEO_MIN);
>> +}
>> +
>> +/**
>> + * idpf_wait_for_event - wait for virtchannel response
>> + * @adapter: Driver private data structure
>> + * @vport: virtual port structure
>> + * @state: check on state upon timeout after 500ms
>> + * @err_check: check if this specific error bit is set
>> + *
>> + * Returns 0 on success, negative on failure.
>> + */
>> +static int idpf_wait_for_event(struct idpf_adapter *adapter,
>> +                              struct idpf_vport *vport,
>> +                              enum idpf_vport_vc_state state,
>> +                              enum idpf_vport_vc_state err_check)
>> +{
>> +       /* Increasing the timeout in __IDPF_INIT_SW flow to consider large
>> +        * number of VF's mailbox message responses. When a message is received
>> +        * on mailbox, this thread is wake up by the idpf_recv_mb_msg before the
> 
> s/wake up/woken/
>
OK.

>> +        * timeout expires. Only in the error case i.e. if no message is
>> +        * received on mailbox, we wait for the complete timeout which is
>> +        * less likely to happen.
>> +        */
>> +       return __idpf_wait_for_event(adapter, vport, state, err_check,
>> +                                    IDPF_WAIT_FOR_EVENT_TIMEO);
>> +}
>> +
>> +/**
>> + * idpf_send_ver_msg - send virtchnl version message
>> + * @adapter: Driver specific private structure
>> + *
>> + * Send virtchnl version message.  Returns 0 on success, negative on failure.
>> + */
>> +static int idpf_send_ver_msg(struct idpf_adapter *adapter)
>> +{
>> +       struct virtchnl2_version_info vvi;
>> +
>> +       if (adapter->virt_ver_maj) {
>> +               vvi.major = cpu_to_le32(adapter->virt_ver_maj);
>> +               vvi.minor = cpu_to_le32(adapter->virt_ver_min);
>> +       } else {
>> +               vvi.major = cpu_to_le32(IDPF_VIRTCHNL_VERSION_MAJOR);
>> +               vvi.minor = cpu_to_le32(IDPF_VIRTCHNL_VERSION_MINOR);
>> +       }
>> +
>> +       return idpf_send_mb_msg(adapter, VIRTCHNL2_OP_VERSION, sizeof(vvi),
>> +                               (u8 *)&vvi);
>> +}
>> +
>> +/**
>> + * idpf_recv_ver_msg - Receive virtchnl version message
>> + * @adapter: Driver specific private structure
>> + *
>> + * Receive virtchnl version message. Returns 0 on success, -EAGAIN if we need
>> + * to send version message again, otherwise negative on failure.
>> + */
>> +static int idpf_recv_ver_msg(struct idpf_adapter *adapter)
>> +{
>> +       struct virtchnl2_version_info vvi;
>> +       u32 major, minor;
>> +       int err;
>> +
>> +       err = idpf_recv_mb_msg(adapter, VIRTCHNL2_OP_VERSION, &vvi, sizeof(vvi));
>> +       if (err)
>> +               return err;
>> +
>> +       major = le32_to_cpu(vvi.major);
>> +       minor = le32_to_cpu(vvi.minor);
>> +
>> +       if (major > IDPF_VIRTCHNL_VERSION_MAJOR) {
>> +               dev_warn(&adapter->pdev->dev, "Virtchnl major version greater than supported\n");
> 
> Printing the bad version value would be helpful here
>
Sure.
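For example (final wording to be decided):

        if (major > IDPF_VIRTCHNL_VERSION_MAJOR) {
                dev_warn(&adapter->pdev->dev,
                         "Virtchnl major version %u greater than supported %d\n",
                         major, IDPF_VIRTCHNL_VERSION_MAJOR);

                return -EINVAL;
        }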

>> +
>> +               return -EINVAL;
>> +       }
>> +
>> +       if (major == IDPF_VIRTCHNL_VERSION_MAJOR &&
>> +           minor > IDPF_VIRTCHNL_VERSION_MINOR)
>> +               dev_warn(&adapter->pdev->dev, "Virtchnl minor version didn't match\n");
> 
> ditto

I will go through these to make sure we have more info related to the errors.

>> +
>> +       /* If we have a mismatch, resend version to update receiver on what
>> +        * version we will use.
>> +        */
>> +       if (!adapter->virt_ver_maj &&
>> +           major != IDPF_VIRTCHNL_VERSION_MAJOR &&
>> +           minor != IDPF_VIRTCHNL_VERSION_MINOR)
>> +               err = -EAGAIN;
>> +
>> +       adapter->virt_ver_maj = major;
>> +       adapter->virt_ver_min = minor;
>> +
>> +       return err;
>> +}
>> +
>> +/**
>> + * idpf_send_get_caps_msg - Send virtchnl get capabilities message
>> + * @adapter: Driver specific private structure
>> + *
>> + * Send virtchnl get capabilities message. Returns 0 on success, negative on
>> + * failure.
>> + */
>> +static int idpf_send_get_caps_msg(struct idpf_adapter *adapter)
>> +{
>> +       struct virtchnl2_get_capabilities caps = { };
>> +
>> +       caps.csum_caps =
>> +               cpu_to_le32(VIRTCHNL2_CAP_TX_CSUM_L3_IPV4       |
>> +                           VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_TCP   |
>> +                           VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_UDP   |
>> +                           VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_SCTP  |
>> +                           VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_TCP   |
>> +                           VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_UDP   |
>> +                           VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_SCTP  |
>> +                           VIRTCHNL2_CAP_RX_CSUM_L3_IPV4       |
>> +                           VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_TCP   |
>> +                           VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_UDP   |
>> +                           VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_SCTP  |
>> +                           VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP   |
>> +                           VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP   |
>> +                           VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_SCTP  |
>> +                           VIRTCHNL2_CAP_TX_CSUM_L3_SINGLE_TUNNEL |
>> +                           VIRTCHNL2_CAP_RX_CSUM_L3_SINGLE_TUNNEL |
>> +                           VIRTCHNL2_CAP_TX_CSUM_L4_SINGLE_TUNNEL |
>> +                           VIRTCHNL2_CAP_RX_CSUM_L4_SINGLE_TUNNEL |
>> +                           VIRTCHNL2_CAP_RX_CSUM_GENERIC);
>> +
>> +       caps.seg_caps =
>> +               cpu_to_le32(VIRTCHNL2_CAP_SEG_IPV4_TCP          |
>> +                           VIRTCHNL2_CAP_SEG_IPV4_UDP          |
>> +                           VIRTCHNL2_CAP_SEG_IPV4_SCTP         |
>> +                           VIRTCHNL2_CAP_SEG_IPV6_TCP          |
>> +                           VIRTCHNL2_CAP_SEG_IPV6_UDP          |
>> +                           VIRTCHNL2_CAP_SEG_IPV6_SCTP         |
>> +                           VIRTCHNL2_CAP_SEG_TX_SINGLE_TUNNEL);
>> +
>> +       caps.rss_caps =
>> +               cpu_to_le64(VIRTCHNL2_CAP_RSS_IPV4_TCP          |
>> +                           VIRTCHNL2_CAP_RSS_IPV4_UDP          |
>> +                           VIRTCHNL2_CAP_RSS_IPV4_SCTP         |
>> +                           VIRTCHNL2_CAP_RSS_IPV4_OTHER        |
>> +                           VIRTCHNL2_CAP_RSS_IPV6_TCP          |
>> +                           VIRTCHNL2_CAP_RSS_IPV6_UDP          |
>> +                           VIRTCHNL2_CAP_RSS_IPV6_SCTP         |
>> +                           VIRTCHNL2_CAP_RSS_IPV6_OTHER);
>> +
>> +       caps.hsplit_caps =
>> +               cpu_to_le32(VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V4     |
>> +                           VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V6);
>> +
>> +       caps.rsc_caps =
>> +               cpu_to_le32(VIRTCHNL2_CAP_RSC_IPV4_TCP          |
>> +                           VIRTCHNL2_CAP_RSC_IPV6_TCP);
>> +
>> +       caps.other_caps =
>> +               cpu_to_le64(VIRTCHNL2_CAP_SRIOV                 |
> 
> I think this routine is getting called for both PF and VF - does this 
> SRIOV capability make sense in the VF case?  Or does it simply get 
> filtered out by the CP's response and it doesn't matter?
> 
CP will filter it out.

<snip>

Thanks,
Emil
