Date:	Fri, 27 Jun 2014 08:19:58 -0700
From:	Doug Anderson <dianders@...omium.org>
To:	Lee Jones <lee.jones@...aro.org>
Cc:	Olof Johansson <olof@...om.net>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mfd: cros_ec_spi: set wakeup capability

Lee,

On Fri, Jun 27, 2014 at 1:16 AM, Lee Jones <lee.jones@...aro.org> wrote:
>> It's great to see this on the list, because as a developer I find
>> your workflow as a maintainer hard to follow.
>>
>> You applying patches but taking several days to push them out makes
>> it completely opaque: there's no way to know whether you simply
>> forgot to apply the patch after all (it happens; I've done it
>> myself).  It's pretty common to expect a "thanks, applied" patch to
>> show up in linux-next within a day or so, depending on timing.
>>
>> The fact that you had already pushed out a patch that you replied to
>> even later makes for extra confusion.  So I'm sorry, Lee, but I
>> don't think Doug was unreasonable in asking for status here.
>> Sometimes maintainers forget to push, which is why it's a good idea
>> to ping a few days later if the patch hasn't shown up in -next.
>
> I completely understand and even empathise with the predicament of a
> diligent developer.  What you see above isn't me being cantankerous,
> but rather an explanation of how things are handled in the MFD tree.

Yup, it's reasonable.  I will say that when I first read your response
it felt a little like you were lecturing me, which I didn't feel I
deserved.  ...but I also know how easy it is to misconstrue things
over email, and how hard it is to convey just the right tone when
there are so many emails and so much to do.

As I said in my earlier response, I think that perhaps changing the
wording just a little from "Applied, thanks" would have made me wait
longer before querying.
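
For what it's worth, one way to do the check Olof suggests before
pinging is to search linux-next for the patch's subject line.  A
minimal sketch, assuming you have added a remote named "linux-next"
(the remote name is an assumption) pointing at the linux-next tree:

    # One-time setup (remote name is an assumption):
    git remote add linux-next \
        git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git

    # Fetch the latest state and look for the patch by subject line:
    git fetch linux-next
    git log --oneline \
        --grep='mfd: cros_ec_spi: set wakeup capability' \
        linux-next/master

If that comes back empty a few days after an "Applied, thanks", it is
a reasonable cue to ask about status.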


> Taking into consideration my current workload and time constraints,
> the work-flow used is the best I can muster currently.  Lest we
> forget, this isn't the role I'm employed for; I do this on top of my
> daily tasks, using time accumulated by starting early and finishing
> late _every day_.
>
> Building, testing and pushing after every patch, hell, even on a
> daily basis, would be difficult to sustain without it impacting my
> _proper_ work.  What I need to do is write some more scripts to help
> relieve some of the burden (a sketch of the sort of thing appears
> below), but again, time is a notable factor here.
>
> From a personal point of view, I prefer to stay on top of the
> patches as they come in and have some lead time from when they are
> applied locally to when they find their way into -next (bearing in
> mind that this lead time is seldom more than 24-48hrs), rather than
> do what others do and leave patches hanging on the list for weeks,
> then gather enough time to review, collect, test and push all in one
> session.
>
> A quick aside: given the state of maintainership in some of the
> other subsystems, I'm surprised that we're even having this
> conversation, considering how responsive we are in MFD.
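
As an aside, the kind of helper script Lee alludes to above might
apply a patch, build-test it, and push only on success.  This is a
hedged sketch; the "mfd" remote and "for-mfd-next" branch names (and
the script name itself) are assumptions, not necessarily the real MFD
tree layout:

    #!/bin/sh
    # Sketch: apply a patch, build-test it, then publish for -next.
    # Usage: ./apply-and-push.sh patch.mbox   (hypothetical name)
    set -e
    git am "$1"                      # apply the patch from an mbox file
    make allmodconfig                # pick a broad build configuration
    make -j"$(nproc)"                # build-test the tree
    git push mfd HEAD:for-mfd-next   # publish so -next can pick it up

Automating the build-test step per patch is what would make the
per-patch cadence sustainable.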

Just a quick note that I am certainly very appreciative of your
responsiveness to patches.  It is very rare that an MFD patch sits
around stagnating on the list without a review, and that most
definitely helps with efficiency.  It also helps keep developers like
me motivated.  There's nothing more discouraging than spending a whole
bunch of time on a patch only to have it met with absolute silence.

-Doug