Date:   Wed, 3 Jun 2020 01:40:51 +0300
From:   Max Gurtovoy <maxg@...lanox.com>
To:     Jens Axboe <axboe@...nel.dk>, Jason Gunthorpe <jgg@...lanox.com>
Cc:     Stephen Rothwell <sfr@...b.auug.org.au>,
        Doug Ledford <dledford@...hat.com>,
        Linux Next Mailing List <linux-next@...r.kernel.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Yamin Friedman <yaminf@...lanox.com>,
        Israel Rukshin <israelr@...lanox.com>,
        Christoph Hellwig <hch@....de>
Subject: Re: linux-next: manual merge of the block tree with the rdma tree


On 6/3/2020 12:37 AM, Jens Axboe wrote:
> On 6/2/20 1:09 PM, Jason Gunthorpe wrote:
>> On Tue, Jun 02, 2020 at 01:02:55PM -0600, Jens Axboe wrote:
>>> On 6/2/20 1:01 PM, Jason Gunthorpe wrote:
>>>> On Tue, Jun 02, 2020 at 11:37:26AM +0300, Max Gurtovoy wrote:
>>>>> On 6/2/2020 5:56 AM, Stephen Rothwell wrote:
>>>>>> Hi all,
>>>>> Hi,
>>>>>
>>>>> This looks good to me.
>>>>>
>>>>> Can you share a pointer to the tree so we can test it in our labs?
>>>>>
>>>>> We need to re-test:
>>>>>
>>>>> 1. srq per core
>>>>>
>>>>> 2. srq per core + T10-PI
>>>>>
>>>>> And both will run with shared CQ.
>>>> Max, this is too much conflict to send to Linus between your own
>>>> patches. I am going to drop the nvme part of this from RDMA.
>>>>
>>>> Normally I don't like applying partial series, but due to this tree
>>>> split, you can send the rebased nvme part through the nvme/block tree
>>>> at rc1 in two weeks.

Yes, I'll send it in 2 weeks.

Actually, I had hoped the iSER patches for the CQ pool would be sent in 
this series, but eventually they were not.

That way we could have taken only the iSER part and the new API.

I saw the pulled version too late, since I wasn't CCed on it, and it 
was already merged before I had a chance to warn you about a possible 
conflict.

I think in general we should try to introduce new RDMA APIs together 
with their iSER/SRP consumers first, and avoid conflicting trees.
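
For context, the "new API" here is the shared CQ pool interface 
(ib_cq_pool_get()/ib_cq_pool_put()) from the series above. Below is a 
minimal sketch of how a ULP would consume it; the example_* helpers 
and the choice of IB_POLL_SOFTIRQ are illustrative, only the two pool 
calls are the real API:

#include <rdma/ib_verbs.h>

/* Sketch only: example_* names are hypothetical wrappers around the
 * shared CQ pool calls. */
static int example_setup_queue(struct ib_device *dev, int comp_vector,
			       unsigned int cq_size, struct ib_cq **cq_out)
{
	struct ib_cq *cq;

	/* Take a CQ from the per-device pool instead of creating a
	 * private one; the pool may hand back a CQ that is already
	 * shared by other queues on the same completion vector. */
	cq = ib_cq_pool_get(dev, cq_size, comp_vector, IB_POLL_SOFTIRQ);
	if (IS_ERR(cq))
		return PTR_ERR(cq);

	*cq_out = cq;
	return 0;
}

static void example_teardown_queue(struct ib_cq *cq, unsigned int cq_size)
{
	/* Return our share of the CQ's capacity to the pool. */
	ib_cq_pool_put(cq, cq_size);
}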


>>> Was going to comment that this is probably how it should have been
>>> done to begin with. If we have multiple conflicts like that between
>>> two trees, someone is doing something wrong...
>> Well, on the other hand having people add APIs in one tree and then
>> (promised) consumers in another tree later on has proven problematic
>> in the past. It is best to try to avoid that, but in this case I don't
>> think Max will have any delay to get the API consumer into nvme in two
>> weeks.
> Having conflicting trees is a problem. If there's a dependency for
> two trees for some new work, then just have a separate branch that's
> built on those two. For NVMe core work, it should include the
> pending NVMe changes.

I guess it's hard to do so during the merge window, since the block 
and rdma trees are not in sync.

I think it would have been a good idea to add Jens to the CC and to 
mention in the cover letter that we're posting code that is maintained 
by two different trees.

