Date:   Fri, 19 Jun 2020 09:44:12 -0600
From:   Jens Axboe <axboe@...nel.dk>
To:     Matias Bjørling <mb@...htnvm.io>,
        "javier.gonz@...sung.com" <javier@...igon.com>,
        Damien Le Moal <Damien.LeMoal@....com>
Cc:     Kanchan Joshi <joshi.k@...sung.com>,
        "viro@...iv.linux.org.uk" <viro@...iv.linux.org.uk>,
        "bcrl@...ck.org" <bcrl@...ck.org>,
        "linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "linux-aio@...ck.org" <linux-aio@...ck.org>,
        "io-uring@...r.kernel.org" <io-uring@...r.kernel.org>,
        "linux-block@...r.kernel.org" <linux-block@...r.kernel.org>,
        "selvakuma.s1@...sung.com" <selvakuma.s1@...sung.com>,
        "nj.shetty@...sung.com" <nj.shetty@...sung.com>
Subject: Re: [PATCH 3/3] io_uring: add support for zone-append

On 6/19/20 9:40 AM, Matias Bjørling wrote:
> On 19/06/2020 17.20, Jens Axboe wrote:
>> On 6/19/20 9:14 AM, Matias Bjørling wrote:
>>> On 19/06/2020 16.18, Jens Axboe wrote:
>>>> On 6/19/20 5:15 AM, Matias Bjørling wrote:
>>>>> On 19/06/2020 11.41, javier.gonz@...sung.com wrote:
>>>>>> Jens,
>>>>>>
>>>>>> Would you have time to answer a question below in this thread?
>>>>>>
>>>>>> On 18.06.2020 11:11, javier.gonz@...sung.com wrote:
>>>>>>> On 18.06.2020 08:47, Damien Le Moal wrote:
>>>>>>>> On 2020/06/18 17:35, javier.gonz@...sung.com wrote:
>>>>>>>>> On 18.06.2020 07:39, Damien Le Moal wrote:
>>>>>>>>>> On 2020/06/18 2:27, Kanchan Joshi wrote:
>>>>>>>>>>> From: Selvakumar S <selvakuma.s1@...sung.com>
>>>>>>>>>>>
>>>>>>>>>>> Introduce three new opcodes for zone-append -
>>>>>>>>>>>
>>>>>>>>>>>     IORING_OP_ZONE_APPEND       : non-vectored, similar to IORING_OP_WRITE
>>>>>>>>>>>     IORING_OP_ZONE_APPENDV      : vectored, similar to IORING_OP_WRITEV
>>>>>>>>>>>     IORING_OP_ZONE_APPEND_FIXED : append using fixed buffers
>>>>>>>>>>>
>>>>>>>>>>> Repurpose cqe->flags to return zone-relative offset.
>>>>>>>>>>>
>>>>>>>>>>> Signed-off-by: SelvaKumar S <selvakuma.s1@...sung.com>
>>>>>>>>>>> Signed-off-by: Kanchan Joshi <joshi.k@...sung.com>
>>>>>>>>>>> Signed-off-by: Nitesh Shetty <nj.shetty@...sung.com>
>>>>>>>>>>> Signed-off-by: Javier Gonzalez <javier.gonz@...sung.com>
>>>>>>>>>>> ---
>>>>>>>>>>> fs/io_uring.c                 | 72 +++++++++++++++++++++++++++++++++++++++++--
>>>>>>>>>>> include/uapi/linux/io_uring.h |  8 ++++-
>>>>>>>>>>> 2 files changed, 77 insertions(+), 3 deletions(-)
>>>>>>>>>>>
>>>>>>>>>>> diff --git a/fs/io_uring.c b/fs/io_uring.c
>>>>>>>>>>> index 155f3d8..c14c873 100644
>>>>>>>>>>> --- a/fs/io_uring.c
>>>>>>>>>>> +++ b/fs/io_uring.c
>>>>>>>>>>> @@ -649,6 +649,10 @@ struct io_kiocb {
>>>>>>>>>>>       unsigned long        fsize;
>>>>>>>>>>>       u64            user_data;
>>>>>>>>>>>       u32            result;
>>>>>>>>>>> +#ifdef CONFIG_BLK_DEV_ZONED
>>>>>>>>>>> +    /* zone-relative offset for append, in bytes */
>>>>>>>>>>> +    u32            append_offset;
>>>>>>>>>> This can overflow. A u64 is needed.
>>>>>>>>> We chose to do it this way to start with because struct io_uring_cqe
>>>>>>>>> only has space for u32 when we reuse the flags.
>>>>>>>>>
>>>>>>>>> We can of course create a new cqe structure, but that will come with
>>>>>>>>> larger changes to io_uring for supporting append.
>>>>>>>>>
>>>>>>>>> Do you believe this is a better approach?
>>>>>>>> The problem is that zone sizes are 32 bits in the kernel, as a
>>>>>>>> number of sectors. So any device that has a zone size smaller than
>>>>>>>> or equal to 2^31 512B sectors can be accepted. Using a zone-relative
>>>>>>>> offset in bytes for returning the zone append result is OK-ish, but
>>>>>>>> to match the kernel-supported range of possible zone sizes, you
>>>>>>>> need 31+9 bits... 32 does not cut it.
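
For reference, the arithmetic behind the "31+9 bits" remark above, as a small
standalone check (illustrative only; the constants simply restate the kernel
limits described in the quote):

#include <assert.h>
#include <stdint.h>

int main(void)
{
	/* The kernel stores zone sizes as a 32-bit number of 512B sectors,
	 * so the largest supported zone is 2^31 sectors = 2^40 bytes. */
	uint64_t max_zone_sectors = 1ULL << 31;
	uint64_t max_zone_bytes   = max_zone_sectors << 9;

	assert(max_zone_bytes == 1ULL << 40);
	/* The largest zone-relative byte offset does not fit in a u32. */
	assert(max_zone_bytes - 1 > UINT32_MAX);
	return 0;
}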
>>>>>>> Agree. Our initial assumption was that u32 would cover current zone size
>>>>>>> requirements, but if this is a no-go, we will take the longer path.
>>>>>> Converting to u64 will require a new version of io_uring_cqe, where we
>>>>>> extend it by at least 32 bits. I believe this will need a whole new
>>>>>> allocation and probably a new ioctl().
>>>>>>
>>>>>> Is this an acceptable change for you? We will of course add support for
>>>>>> liburing when we agree on the right way to do this.
>>>>> I took a quick look at the code. No expert, but why not use the existing
>>>>> userdata variable? Use the lowest 40 bits for the zone starting LBA, and
>>>>> the highest 24 bits as an index into the completion data structure?
>>>>>
>>>>> If you want to pass the memory address (same as what fio does) for the
>>>>> data structure used for completion, one may also play some tricks by
>>>>> using a memory address relative to the data structure. For example, the
>>>>> x86_64 architecture uses 48 address bits for its memory addresses. With
>>>>> 24 bits, one can allocate the completion entries in a 32MB memory range,
>>>>> and then use base_address + index to get back to the completion data
>>>>> structure specified in the sqe.
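
A minimal sketch of the user_data packing described above (illustrative only,
not part of the patch; the helper names are made up here, and the 40/24 split
is taken straight from the suggestion):

#include <stdint.h>

#define ZA_LBA_BITS	40
#define ZA_LBA_MASK	((1ULL << ZA_LBA_BITS) - 1)

/* Pack the zone start LBA into the low 40 bits and a completion-table
 * index into the high 24 bits of the 64-bit user_data value. */
static inline uint64_t za_pack_user_data(uint64_t zone_start_lba, uint32_t idx)
{
	return (zone_start_lba & ZA_LBA_MASK) | ((uint64_t)idx << ZA_LBA_BITS);
}

static inline uint64_t za_zone_start_lba(uint64_t user_data)
{
	return user_data & ZA_LBA_MASK;
}

static inline uint32_t za_completion_idx(uint64_t user_data)
{
	return (uint32_t)(user_data >> ZA_LBA_BITS);
}

As the reply below points out, the cost of this scheme is that user_data would
no longer be an opaque pass-through value for these requests.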
>>>> For any current request, sqe->user_data is just provided back as
>>>> cqe->user_data. This would make these requests behave differently
>>>> from everything else in that sense, which seems very confusing to me
>>>> if I were an application writer.
>>>>
>>>> But generally I do agree with you, there are lots of ways to make
>>>> < 64-bit work as a tag without losing anything or having to jump
>>>> through hoops to do so. The lack of consistency introduced by having
>>>> zone append work differently is ugly, though.
>>>>
>>> Yep, agree, and extending to three cachelines is a big no-go. We could add
>>> a flag that says the kernel has changed the userdata variable. That'll
>>> make it very explicit.
>> Don't like that either, as it doesn't really change the fact that you're
>> now doing something very different with the user_data field, which is
>> just supposed to be passed in/out directly. Adding a random flag to
>> signal this behavior isn't very explicit either, imho. It's still some
>> out-of-band (ish) notification of behavior that is different from any
>> other command. This is very different from having a flag that says
>> "there's extra information in this other field", which is much cleaner.
>>
> Ok. Then it's pulling in the bits from cqe->res and cqe->flags that you 
> mention in the other mail. Sounds good.

I think that's the best approach, if we need > 32 bits. Maybe we can get
by with just using ->res, if we switch to multiples of 512B for the
result, as Pavel suggested. That'd provide enough room in ->res, and
would be preferable imho. But if we do need > 32 bits, then we can use
this approach.

-- 
Jens Axboe
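
A sketch of what the "->res in 512B units" idea discussed above could look
like from userspace (hypothetical; these zone-append semantics for cqe->res
are a proposal in this thread, not upstream API):

#include <stdint.h>
#include <linux/io_uring.h>

/* Under the proposal, cqe->res would carry the zone-relative append offset
 * in 512-byte units; 32 bits of such units cover 2^(32+9) bytes, more than
 * the kernel's maximum zone size. */
static inline uint64_t zone_append_offset_bytes(const struct io_uring_cqe *cqe)
{
	/* Negative res values would still indicate errors, as usual. */
	return (uint64_t)(uint32_t)cqe->res << 9;
}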
