Message-ID: <1AE640813FDE7649BE1B193DEA596E883BB658E2@SHSMSX101.ccr.corp.intel.com>
Date:	Tue, 29 Mar 2016 05:37:14 +0000
From:	"Zheng, Lv" <lv.zheng@...el.com>
To:	Joe Perches <joe@...ches.com>,
	"Wysocki, Rafael J" <rafael.j.wysocki@...el.com>,
	"Rafael J. Wysocki" <rjw@...ysocki.net>,
	"Brown, Len" <len.brown@...el.com>
CC:	Lv Zheng <zetalog@...il.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"linux-acpi@...r.kernel.org" <linux-acpi@...r.kernel.org>
Subject: RE: [PATCH 01/30] ACPICA: Linuxize: reduce divergences for 20160212
 release

Hi,

> From: Joe Perches [mailto:joe@...ches.com]
> Subject: Re: [PATCH 01/30] ACPICA: Linuxize: reduce divergences for 20160212
> release
> 
> On Mon, 2016-03-28 at 03:02 +0000, Zheng, Lv wrote:
> > Hi,
> 
> Hello.
> 
> > > So why not fix the process script first?
> > > Maybe add something like:
> > > $ grep -E "^typedef\s+\w+\s*\*?\s*acpi_\w+" include/acpi/actypes.h | \
> > >   grep -Eoh "\bacpi_\w+"
> > >
> > > to the acpi_types variable in the lindent_single function
> > [Lv Zheng]
> > I don't think this can work given:
> > 1. we are not only dealing with typedefs, but structs, struct xxx will be
> converted into types during the release process.
> > 2. we have only upper cased type names in ACPICA upstream, but have the
> lower cased type names in Linux, and this doesn't solve that.
> > So I guess you didn't test your idea.
> 
> Good guess.
> 
> The "maybe add something like" should give you a clue.
> 
> > You need to pull ACPICA repo and do the followings to confirm if this is
> working:
> 
> No, I disagree.  _I_ don't need to.  You need to.
[Lv Zheng]
Then you don't have to provide the solution, as you are not the one executing the process.
I can fix it myself:
https://github.com/acpica/acpica/pull/129
It should be merged by ACPICA upstream in the near future.

I'll explain the difficulties of the "process" below.
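
Just to illustrate (this is only a sketch, not what the pull request actually does), the suggested one-liner would at least need to be extended along these lines, and even then the case mapping would remain:

  # typedef'ed ACPI types (the original suggestion)
  grep -E "^typedef\s+\w+\s*\*?\s*acpi_\w+" include/acpi/actypes.h | \
      grep -Eoh "\bacpi_\w+"
  # struct/union names, which the release process also converts into types
  grep -E "^(struct|union)\s+acpi_\w+" include/acpi/actypes.h | \
      grep -Eoh "\bacpi_\w+"
  # ... and a mapping from the upper cased ACPICA spellings to the lower
  # cased Linux ones would still be needed, e.g. tr '[:upper:]' '[:lower:]'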

> 
> You shouldn't have a process that generates defective patches
> and then sends them to the list.
[Lv Zheng]
You are not the one executing this process, so you don't see what is actually happening here.

The Linux repo is supposed to be synced to the state of the ACPICA repo.
The defective patch is there to sync the repo state, not to fix an indentation problem or anything else.
So even after a "process fixing commit" is merged into ACPICA upstream, you will still see this kind of defective patch before that commit, because of the state synchronization requirement.

There have already been many such indentation conflicts between Linux and ACPICA.
My current rule for the existing unsynced Linux-side code conflicts follows the "syncing repo state" purpose:
I ignore them as long as no new linuxized ACPICA commit runs into a merge conflict.
But when a new linuxized commit does conflict, I revert the Linux-side code to the __wrong__ but synced state in a separate patch.
That's why you see this commit.
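
To make that rule concrete, this is roughly the decision applied for every new linuxized commit (only a sketch; "linuxize.sh" and the patch file names below are placeholders, not the real release tooling):

  ./linuxize.sh "$acpica_commit" >new.patch    # generate the linuxized patch
  if ! git am new.patch; then                  # the new commit conflicts
      git am --abort
      # revert the Linux side to the "wrong" but synced state first, using
      # the single hand-maintained sync patch, then retry the generated one
      git am 0001-sync-repo-state.patch
      git am new.patch
  fi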

In conclusion, the defective patch exists for exactly that purpose - syncing the repo state.
Then why do I use a separate patch?

The separated defective patch is the only patch that has to be maintained manually; all the other linuxized ACPICA results need no manual maintenance.
So the iterative development/testing can be done in ACPICA upstream again and again,
and the linuxized results require no human intervention as long as they land after the defective patch.
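
In other words (again only a sketch with placeholder names), the whole series can be rebuilt mechanically at any time, with only the sync patch kept by hand:

  git am 0001-sync-repo-state.patch                       # the one manual patch
  for c in $(git -C acpica rev-list --reverse last-sync..HEAD); do
      ./linuxize.sh "$c" | git am                         # fully automatic
  done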

Here are several situations to show my work flow.
1. ACPICA release
If the defective patch is not separated, I have to merge part of it into the new linuxized ACPICA commit that generates the merge conflict.
That is the kind of "human intervention" I mean.
Then, if a bug is found after the release testing work (which may take several days) is done, I have to linuxize the whole series again after fixing the bug in ACPICA upstream.
That means redoing the "human intervention", and it may spread to all commits after the fixed one.
Furthermore, the "human intervention" can happen again and again during the iterative release testing process.

There are similar cases:
2. Fast path ACPICA commits
Some changes cannot be validated in the ACPICA development environment and need to go into the Linux repo first.
Such patch series also contain a separated defective patch of this kind.
If it were not separated, then, since the series has to be rebased again and again during development (because of bug fixes or syncing with Linux upstream),
the "human intervention" would have to be performed again and again as well.

3. The "process fixing commit"
Even once this problem is fixed, I need the Linux-side correction to happen only when that commit is about to enter the Linux repo.
Otherwise, all commits between the merge point of that commit and the current repo head need "human intervention".
Sometimes these commits need to be linuxized and posted on Bugzilla or to the community so that users can test them.
Since the reporters and developers work on different kernels, the "human intervention" has to be repeated for each of them,
and there end up being many different versions of such linuxized ACPICA patch series.
That is not convenient for anyone.

So you can see that the workload of the "human intervention" depends on two factors:
1. How bad the "unsynced state" in the current Linux repo is, which also depends on how big the linuxized "process fixing commit" will be.
2. The merge timing of the "process fixing commit", where the state gets synced, versus the merge timing of the other unsynced commits that can land before that sync point.

I cannot control the merge timing of the "process fixing commit" or of the other unsynced commits.
So I have to control the "unsynced state", otherwise my bandwidth will easily be consumed by "human intervention" because of the uncontrollable merge timings.
You can easily imagine every working minute of mine being filled with "human intervention" if I cannot control the "unsynced state".
This is a job for a machine, yet you are forcing me to be that machine by simply saying: it's my business, not yours...
Is that worth it? Shouldn't I allocate more bandwidth to real issues rather than to such a heavy load of "human intervention"?

The above is the justification for why I believed I already had the agreement, or at least the forgiveness, of Linux upstream to generate this kind of defective patch and make my life easier.

Thanks and best regards
-Lv
