Message-ID: <CAOi1vP_QGYdLPyK8fp5Aoz_Pahr+0Fbu=4LWgAgjyTJtba80uA@mail.gmail.com>
Date: Mon, 9 Jun 2025 15:13:16 +0200
From: Ilya Dryomov <idryomov@...il.com>
To: Viacheslav Dubeyko <Slava.Dubeyko@....com>
Cc: "max.kellermann@...os.com" <max.kellermann@...os.com>,
"ceph-devel@...r.kernel.org" <ceph-devel@...r.kernel.org>, Xiubo Li <xiubli@...hat.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>, Alex Markuze <amarkuze@...hat.com>
Subject: Re: [PATCH] fs/ceph/io: make ceph_start_io_*() killable
On Fri, Jun 6, 2025 at 7:42 PM Viacheslav Dubeyko <Slava.Dubeyko@....com> wrote:
>
> On Fri, 2025-06-06 at 19:34 +0200, Max Kellermann wrote:
> > On Fri, Jun 6, 2025 at 7:15 PM Viacheslav Dubeyko <Slava.Dubeyko@....com> wrote:
> > > I see the point. Our last discussion ended with the statement that Max
> > > doesn't care about this patch set and that we don't need to pick it up. If
> > > he has changed his mind, then I can return to reviewing the patch. :) My
> > > understanding was that he prefers another person for the review. :) This
> > > is why I kept silent.
> >
> > I do care, always did. I answered your questions, but they were not
> > really about my patch but about whether error handling is necessary.
> > Well, yes, of course! The whole point of my patch is to add an error
> > condition that did not exist before. If locking can fail, of course
> > you have to check that and propagate the error to the caller (and
> > unlocking after a failed lock of course leads to sorrow). That is so
> > trivial, I don't even know where to start to explain this if that
> > isn't already obvious enough.
> >
> > If you keep questioning that, are you really qualified to do a code review?
> >
>
> OK. If I am not good enough, then somebody else can do the review. :)
The patch looked sensible to me, so I have picked it up into the
testing branch after some massaging as part of my own review:
https://github.com/ceph/ceph-client/commit/837b07491efc3e21cf08732f0320ce3ac52951f6
I tried to take Slava's comments into account while at it. AFAICS the
points raised were: whether __must_check is needed to begin with, whether
the new error needs to be propagated, and the comment on the
compiler_attributes.h include.
For __must_check itself, I kept it -- it makes sense because ignoring
the return value would be a straight ticket to lock imbalance. Slava's
observation may have been that there are many similar scenarios where
__must_check isn't used, but that can't serve as a justification for
not adopting __must_check IMO. It's also there in the corresponding
NFS patch.
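
To make that concrete, here is a minimal sketch of the hazard -- the
ceph_start_io_read()/ceph_end_io_read() names are from fs/ceph/io.h, the
int return type is what the patch introduces, and the caller below is made
up purely for illustration:

    int __must_check ceph_start_io_read(struct inode *inode);
    void ceph_end_io_read(struct inode *inode);

    static void hypothetical_caller(struct inode *inode)
    {
            ceph_start_io_read(inode);      /* ignored -> compiler warns */
            /*
             * If the task was fatally signalled, the lock was never
             * taken, so the unlock below runs on an rwsem we don't
             * hold -- exactly the imbalance __must_check flags early.
             */
            ceph_end_io_read(inode);
    }
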
Propagating the new error also makes sense to me -- I don't think
CephFS does anything special with EINTR and control wouldn't return to
userspace anyway because of the kill. I don't see how "we simply need
not to execute the logic" behavior is possible without returning some
kind of error to the caller of e.g. ceph_read_iter().
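
Roughly the propagation pattern I have in mind, as a hedged sketch only --
it assumes the start helpers now return 0 or -EINTR and elides the real
read path, so it is not the actual hunk from the patch:

    static ssize_t ceph_read_iter(struct kiocb *iocb, struct iov_iter *to)
    {
            struct inode *inode = file_inode(iocb->ki_filp);
            bool direct = iocb->ki_flags & IOCB_DIRECT;
            ssize_t ret;

            ret = direct ? ceph_start_io_direct(inode) :
                           ceph_start_io_read(inode);
            if (ret)
                    return ret;     /* killed -- nothing was executed */

            /* ... actual read path elided, sets ret ... */

            if (direct)
                    ceph_end_io_direct(inode);
            else
                    ceph_end_io_read(inode);
            return ret;
    }
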
For the comment, I dropped it because __must_check is very obviously
tied to compiler_attributes.h and such comments aren't common.
As I was touching the patch, I reformatted the ternary expressions to
better fit the rest of the Ceph client code (despite some inconsistencies,
none of the existing ones are formatted that way), moved the return type
onto the same line as the function name, and placed __must_check after it,
as per the coding style (Documentation/process/coding-style.rst).
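
For illustration, the prototype ordering I mean -- return type first, then
the return type attribute, then the name, all on one line -- would look
roughly like this (not a quote of the final header):

    int __must_check ceph_start_io_read(struct inode *inode);
    int __must_check ceph_start_io_write(struct inode *inode);
    int __must_check ceph_start_io_direct(struct inode *inode);
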
Thanks,
Ilya