Message-ID: <20200121080029.42b6ea7d@cakuba>
Date: Tue, 21 Jan 2020 08:00:29 -0800
From: Jakub Kicinski <kuba@...nel.org>
To: sunil.kovvuri@...il.com
Cc: netdev@...r.kernel.org, davem@...emloft.net, mkubecek@...e.cz,
Sunil Goutham <sgoutham@...vell.com>,
Geetha sowjanya <gakula@...vell.com>,
Christina Jacob <cjacob@...vell.com>,
Subbaraya Sundeep <sbhatta@...vell.com>,
Aleksey Makarov <amakarov@...vell.com>
Subject: Re: [PATCH v4 02/17] octeontx2-pf: Mailbox communication with AF
On Tue, 21 Jan 2020 18:51:36 +0530, sunil.kovvuri@...il.com wrote:
> From: Sunil Goutham <sgoutham@...vell.com>
>
> In the resource virtualization unit (RVU), each PF and the AF
> (admin function) share a 64KB reserved memory region for
> communication. This patch initializes the PF <=> AF mailbox IRQs
> and registers handlers for processing these communication messages.
> It also adds support for processing these messages in both
> directions, i.e. responses to PF-initiated DOWN (PF => AF) messages
> and AF-initiated UP (AF => PF) messages.
>
> Mbox communication APIs and message formats are defined in the AF
> driver (drivers/net/ethernet/marvell/octeontx2/af); mbox.h from the
> AF driver is included here to avoid duplication.
>
> Signed-off-by: Geetha sowjanya <gakula@...vell.com>
> Signed-off-by: Christina Jacob <cjacob@...vell.com>
> Signed-off-by: Subbaraya Sundeep <sbhatta@...vell.com>
> Signed-off-by: Aleksey Makarov <amakarov@...vell.com>
> Signed-off-by: Sunil Goutham <sgoutham@...vell.com>
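
So IIUC the DOWN direction is roughly the following round trip (rough
sketch just to check my reading of the description; all the names below
are mine, not from the patch):

	/* PF side, DOWN (PF => AF) request/response, schematic only */
	msg = alloc_msg_in_shared_region(&pf->mbox); /* in the 64KB region shared with AF */
	fill_request(msg);
	ring_af_doorbell(&pf->mbox);	/* raises the mbox IRQ on the AF side */
	wait_for_af_response(&pf->mbox);	/* AF's reply lands in the same region,
						 * the PF mbox IRQ queues mbox_wrk to
						 * process it
						 */

and the UP direction is the same thing with the roles reversed?
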
> +struct mbox {
^^
> + struct otx2_mbox mbox;
> + struct work_struct mbox_wrk;
> + struct otx2_mbox mbox_up;
> + struct work_struct mbox_up_wrk;
> + struct otx2_nic *pfvf;
> + void *bbuf_base; /* Bounce buffer for mbox memory */
> + struct mutex lock; /* serialize mailbox access */
> + int num_msgs; /*mbox number of messages*/
^ ^
> + int up_num_msgs;/* mbox_up number of messages*/
^ ^
> +};
>
> struct otx2_hw {
> struct pci_dev *pdev;
> u16 rx_queues;
> u16 tx_queues;
> u16 max_queues;
> +
> + /* MSI-X*/
^
The white space here is fairly loose
> + char *irq_name;
> + cpumask_var_t *affinity_mask;
> };
>
> +static inline void otx2_sync_mbox_bbuf(struct otx2_mbox *mbox, int devid)
> +{
> + u16 msgs_offset = ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
> + void *hw_mbase = mbox->hwbase + (devid * MBOX_SIZE);
> + struct otx2_mbox_dev *mdev = &mbox->dev[devid];
> + struct mbox_hdr *hdr;
> + u64 msg_size;
> +
> + if (mdev->mbase == hw_mbase)
> + return;
> +
> + hdr = hw_mbase + mbox->rx_start;
> + msg_size = hdr->msg_size;
> +
> + if (msg_size > mbox->rx_size - msgs_offset)
> + msg_size = mbox->rx_size - msgs_offset;
> +
> + /* Copy mbox messages from mbox memory to bounce buffer */
> + memcpy(mdev->mbase + mbox->rx_start,
> + hw_mbase + mbox->rx_start, msg_size + msgs_offset);
I'm slightly concerned about the use of non-iomem helpers like memset
and memcpy on what I understand to be IOMEM, and about the lack of
memory barriers. But then again, I don't know much about ioremap_wc();
is this code definitely correct from a memory ordering perspective?
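
If this really is an ioremap()'ed region I would have expected the
__iomem-aware helpers for the bounce-buffer copy, roughly (untested
sketch, assuming hw_mbase points at device memory and mdev->mbase at
the kmalloc'ed bounce buffer):

	/* Sketch only: copy from the device-backed mbox region into the
	 * bounce buffer with the __iomem-aware helper instead of memcpy().
	 */
	memcpy_fromio(mdev->mbase + mbox->rx_start,
		      (const void __iomem *)hw_mbase + mbox->rx_start,
		      msg_size + msgs_offset);
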
(The memory barrier in otx2_mbox_msg_send() should probably be just
wmb(); syncing with HW is unrelated to SMP.)
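
I.e. something along these lines (purely illustrative, the field and
register names here are guesses based on the usual mbox pattern, not
taken from the driver):

	/* Make the queued messages visible to the device before the MMIO
	 * doorbell write that kicks the AF; this orders against the device,
	 * not against other CPUs, hence wmb() rather than smp_wmb().
	 */
	tx_hdr->num_msgs = mdev->num_msgs;
	wmb();
	writeq(1, mbox->reg_base + mbox->trigger);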