Date:	Fri, 27 Jun 2014 09:16:40 +0100
From:	Lee Jones <lee.jones@...aro.org>
To:	Olof Johansson <olof@...om.net>
Cc:	Doug Anderson <dianders@...omium.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mfd: cros_ec_spi: set wakeup capability

On Thu, 26 Jun 2014, Olof Johansson wrote:

> On Mon, Jun 23, 2014 at 2:26 AM, Lee Jones <lee.jones@...aro.org> wrote:
> >> Lee (-others),
> >
> > Re-CC'ing the list.
> >
> >> On Wed, Jun 18, 2014 at 2:20 AM, Lee Jones <lee.jones@...aro.org> wrote:
> >> >> From: Prathyush K <prathyush.k@...sung.com>
> >> >>
> >> >> Set the device as wakeup capable and register the wakeup source.
> >> >>
> >> >> Note: Though it makes more sense to have the SPI framework do this
> >> >> (either via device tree or by board_info), this change is as per an
> >> >> existing mail chain:
> >> >> https://lkml.org/lkml/2009/8/27/291
> >> >>
> >> >> Signed-off-by: Prathyush K <prathyush.k@...sung.com>
> >> >> Signed-off-by: Doug Anderson <dianders@...omium.org>
> >> >> ---
> >> >> Note that I don't have suspend/resume actually working upstream, but I
> >> >> see that /sys/bus/spi/drivers/cros-ec-spi/spi2.0/power/wakeup exists
> >> >> with this patch and doesn't exist without it.
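
For reference, the "wakeup capable" step described above is conventionally
done from the driver's probe path with the kernel's device_init_wakeup()
helper, which is also what makes the power/wakeup attribute appear in
sysfs.  A minimal sketch of that pattern follows; it is only an
illustration, not the actual patch, and the probe function name is
hypothetical.

#include <linux/device.h>
#include <linux/pm_wakeup.h>
#include <linux/spi/spi.h>

/*
 * Minimal sketch, assuming the standard device_init_wakeup() helper;
 * example_ec_spi_probe() is a hypothetical name, not the driver's real
 * probe function.
 */
static int example_ec_spi_probe(struct spi_device *spi)
{
	/*
	 * Flag the device as wakeup capable and enable it as a wakeup
	 * source.  This is what creates the power/wakeup attribute under
	 * the device's sysfs directory.
	 */
	device_init_wakeup(&spi->dev, true);

	return 0;
}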
> >> >
> >> > Very well.  Applied, thanks.
> >>
> >> Thanks for applying!  ...did this go in some non-standard branch?  I
> >> see another of my patches got committed to your "for-mfd-next" tree on
> >> the 19th but I don't see this one...
> >
> > Patience Grasshopper.  When I say that it's applied, it means that I
> > have done so locally only.  After I've collected a few local patches
> > I'll usually then fix them all up with my SoB and push them out to
> > the public MFD tree.
> >
> > BTW, it's always best to leave the ML in as CC, so others can see the
> > answer to these types of questions.  It may only save a few emails a
> > year, but every little helps. :)
> 
> It's great to see this on the list, because as a developer I find your
> workflow as a maintainer hard to follow.
> 
> You applying patches but taking several days to push them out makes it
> completely opaque to someone whether you just accidentally forgot to
> apply the patch after all (it happens; I've done it myself).  It's
> pretty common to expect a "thanks, applied" patch to show up in
> linux-next within a day or so, depending on timing.
> 
> The fact that you had already pushed out a patch that you had replied
> to even later makes for extra confusion. So I'm sorry Lee, but I don't
> think Doug was unreasonable in asking for status here. Sometimes
> maintainers forget to push, which is why it's a good idea to ping a
> few days later if the patch hasn't shown up in -next.

I completely understand and even empathise with the predicament of a
diligent developer.  What you see above isn't me being cantankerous,
but rather an explanation of how things are handled in the MFD tree.

Taking into consideration my current workload and time constraints,
the workflow I use is the best I can muster at the moment.  Lest we
forget, this isn't the role I'm employed for; I do this on top of my
daily tasks, using time accumulated by starting early and finishing
late _every day_.

Building, testing and pushing after every patch, hell, even on a daily
basis, would be difficult to sustain without it impacting my _proper_
work.  What I need to do is write some more scripts to help relieve
some of the burden, but again, time is a notable factor here.

From a personal point of view, I prefer to be on top of the patches
as they come in and to have some lead time between when they are applied
locally and when they find their way into -next (bearing in mind that
this lead time is seldom more than 24-48hrs), rather than do what others
do: leave patches hanging on the list for weeks, then gather enough time
to review, collect, test and push all in one session.

A quick aside: given the state of maintainership in some of the other
subsystems, I'm surprised that we're even having this conversation,
considering how responsive we are in MFD.

-- 
Lee Jones
Linaro STMicroelectronics Landing Team Lead
Linaro.org │ Open source software for ARM SoCs
Follow Linaro: Facebook | Twitter | Blog
