Date:   Wed, 5 May 2021 21:04:52 +0300
From:   Shai Malin <malin1024@...il.com>
To:     Hannes Reinecke <hare@...e.de>
Cc:     Shai Malin <smalin@...vell.com>, netdev@...r.kernel.org,
        linux-nvme@...ts.infradead.org, davem@...emloft.net,
        kuba@...nel.org, sagi@...mberg.me, hch@....de, axboe@...com,
        kbusch@...nel.org, Ariel Elior <aelior@...vell.com>,
        Michal Kalderon <mkalderon@...vell.com>, okulkarni@...vell.com,
        pkushwaha@...vell.com
Subject: Re: [RFC PATCH v4 25/27] qedn: Add IO level fastpath functionality

On 5/2/21 2:54 PM, Hannes Reinecke wrote:
> On 4/29/21 9:09 PM, Shai Malin wrote:
> > This patch presents the IO level functionality of the qedn
> > nvme-tcp-offload host mode. The qedn_task_ctx structure holds the
> > parameters and state of the current IO and is mapped 1:1 to the
> > fw_task_ctx, which is a HW and FW IO context.
> > A qedn_task is mapped directly to its parent connection.
> > For every new IO a qedn_task structure is assigned, and the two remain
> > linked for the entire IO's life span.
> >
> > The patch includes 2 flows:
> >    1. Send a new command to the FW:
> >       The flow is: nvme_tcp_ofld_queue_rq() invokes qedn_send_req(),
> >       which invokes qedn_queue_request(), which will:
> >       - Assign the fw_task_ctx.
> >       - Prepare the Read/Write SG buffer.
> >       - Initialize the HW and FW context.
> >       - Pass the IO to the FW.
> >
> >    2. Process the IO completion:
> >       The flow is: qedn_irq_handler() invokes qedn_fw_cq_fp_handler(),
> >       which invokes qedn_io_work_cq(), which will:
> >       - Process the FW completion.
> >       - Return the fw_task_ctx to the task pool.
> >       - Complete the nvme req.
> >
> > Acked-by: Igor Russkikh <irusskikh@...vell.com>
> > Signed-off-by: Prabhakar Kushwaha <pkushwaha@...vell.com>
> > Signed-off-by: Omkar Kulkarni <okulkarni@...vell.com>
> > Signed-off-by: Michal Kalderon <mkalderon@...vell.com>
> > Signed-off-by: Ariel Elior <aelior@...vell.com>
> > Signed-off-by: Shai Malin <smalin@...vell.com>
> > ---
> >   drivers/nvme/hw/qedn/qedn.h      |   4 +
> >   drivers/nvme/hw/qedn/qedn_conn.c |   1 +
> >   drivers/nvme/hw/qedn/qedn_task.c | 269 ++++++++++++++++++++++++++++++-
> >   3 files changed, 272 insertions(+), 2 deletions(-)
> >
> Reviewed-by: Hannes Reinecke <hare@...e.de>

Thanks.
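
For readers following the two flows in the commit message above, here is a
minimal, self-contained C sketch of the submission and completion paths. It
is not the patch code: only the function and structure names
(qedn_queue_request(), qedn_io_work_cq(), qedn_task_ctx, fw_task_ctx) come
from the commit message; the field layouts, signatures, and the stubbed FW
interaction are simplified assumptions for illustration only.

/*
 * Schematic model of the qedn fastpath flows described above.
 * All types and bodies are placeholders; only the names mirror the
 * functions mentioned in the patch description.
 */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct fw_task_ctx {            /* HW/FW IO context (placeholder) */
	int cid;
};

struct qedn_task_ctx {          /* per-IO state, mapped 1:1 to fw_task_ctx */
	struct fw_task_ctx *fw_task_ctx;
	void *nvme_req;         /* owning nvme request (placeholder) */
	bool in_flight;
};

/*
 * Flow 1: submission.
 * nvme_tcp_ofld_queue_rq() -> qedn_send_req() -> qedn_queue_request()
 */
static int qedn_queue_request(struct qedn_task_ctx *task, void *req)
{
	/* assign the fw_task_ctx (modeled here as a simple allocation) */
	task->fw_task_ctx = malloc(sizeof(*task->fw_task_ctx));
	if (!task->fw_task_ctx)
		return -1;
	task->nvme_req = req;
	/* prepare the Read/Write SG buffer and init the HW/FW context (omitted) */
	task->in_flight = true;
	printf("IO passed to the FW\n");        /* pass the IO to the FW */
	return 0;
}

/*
 * Flow 2: completion.
 * qedn_irq_handler() -> qedn_fw_cq_fp_handler() -> qedn_io_work_cq()
 */
static void qedn_io_work_cq(struct qedn_task_ctx *task)
{
	/* process the FW completion (omitted) */
	free(task->fw_task_ctx);                /* return fw_task_ctx to the task pool */
	task->fw_task_ctx = NULL;
	task->in_flight = false;
	printf("nvme request completed\n");     /* complete the nvme req */
}

int main(void)
{
	struct qedn_task_ctx task = { 0 };

	if (qedn_queue_request(&task, NULL) == 0)
		qedn_io_work_cq(&task);
	return 0;
}

The point of the sketch is only the 1:1 lifetime coupling: the fw_task_ctx
is bound to the qedn_task_ctx at submission and released back at completion.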

>
> Cheers,
>
> Hannes
> --
> Dr. Hannes Reinecke                Kernel Storage Architect
> hare@...e.de                              +49 911 74053 688
> SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
> HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer
