Date:	Tue, 18 Jan 2011 21:42:42 -0800
From:	Linus Torvalds <torvalds@...ux-foundation.org>
To:	nobody <darwinskernel@...il.com>
Cc:	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: Linux 2.6.38-rc1

On Tue, Jan 18, 2011 at 9:10 PM, nobody <darwinskernel@...il.com> wrote:
>
> i wish i could do that for everything. i am pulling up to 500MiB every
> -rc cycle and it's mostly code for hardware i don't have.

No you're not.

The *WHOLE* Linux kernel git archive fits in one 388M pack. That's not
some -rc release, that's the whole git history going back to 2.6.12 or
whenever it was that git was started (admittedly fairly well-packed,
but still). The incremental for some -rc cycle can be megabytes, but
we're still talking single megabytes, not hundreds of megs.

So no, you're not pulling 500MiB every -rc cycle, unless you're doing
something stupid like using http (which will trigger a re-fetch, since
I end up repacking a couple of times a release) or always re-cloning.
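
For reference, a minimal sketch of the intended workflow: one full clone
up front, then cheap incremental pulls after that (the directory name and
the git:// URL here are just illustrative):

  # one-time clone over the git protocol - slow, but you only pay for it once
  git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
  cd linux-2.6

  # from then on, each -rc only costs the incremental pack
  git pull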

So the fix is not to try to fetch just a part - that will not work.
Exactly because git is so good at packing, doing silly hacks like
"--depth=1" actually fetches MORE data than just doing one clone once
and then pulling incrementally after that. And no, you can't just get
partial trees (although you can then try to save some disk space by
checking out only partial trees - it's not worth the pain, though).
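
(If you really want to try the partial working tree anyway, a rough sketch
using the sparse-checkout support in git 1.7+ looks something like the
following; the drivers/net/ path is only an example, and remember this
saves checkout space, not network bandwidth:

  # inside an existing clone
  git config core.sparseCheckout true
  echo "drivers/net/" >> .git/info/sparse-checkout
  git read-tree -mu HEAD    # re-populate the working tree from the sparse patterns
)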

So the fix is:

 - don't use http or rsync (they're fine for the initial clone, but
don't ever use them for any incrementals, and even then I'd suggest
keeping to rsync and not http; see the sketch below for repointing an
existing remote at the git protocol)

 - if you have some firewall issue that means that you can only use
http, get the firewall fixed. It's not a git issue.

No need to fix git.
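
If an existing clone is still configured with an http URL, repointing it
at the git protocol is a one-liner (assuming a git recent enough to have
"git remote set-url"; otherwise just edit the url line in .git/config,
and again the URL shown is only an example of the kernel.org location):

  git remote set-url origin git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git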

When pulling from 2.6.37 to 2.6.38-rc1, it should look something like this:

  remote: Counting objects: 84898, done.
  remote: Compressing objects: 100% (14274/14274), done.
  Receiving objects: 100% (71245/71245), 21.07 MiB | 26.53 MiB/s, done.
  remote: Total 71245 (delta 59086), reused 67779 (delta 56042)
  Resolving deltas: 100% (59086/59086), completed with 7395 local objects.

I.e. you got 21.07 MiB for the whole change between 2.6.37 and 2.6.38-rc1.

(it does expand if you do it every day because then you won't be able
to delta quite as well, and the "completed with 7395 local objects"
means that the local packs will be much bigger on disk because they
will be expanded to have all the base objects, but that 21MB should be
the approximate actual network usage)
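
If the on-disk growth from those expanded packs bothers you, an occasional
local repack gets most of the space back, e.g.:

  git gc    # or "git repack -a -d" to squash everything back into one well-packed pack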

Considering that just the _diff_ between 2.6.37 and 2.6.38-rc1 is
about 42MB in size, the fact is that git is damn good at network
bandwidth.
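
You can sanity-check that diff number yourself, assuming the v2.6.37 and
v2.6.38-rc1 tags are in your clone; crude, but good enough:

  git diff v2.6.37 v2.6.38-rc1 | wc -c    # byte count of the plain-text diff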

                   Linus
