This tip may be useful for those working on a local Git tree who need to “export” it to a remote server (for example, for building and testing).
I know that ideally one would “push” the changes to a commonly accessible remote server and pull them back from the build/testing host. But sometimes we simply don’t have Git available on that host; other times we’re just lazy.
The standard procedure then is to use the “git archive” command to export the contents of the repository to a tarball, optionally compress it, and then transfer and unpack it on the remote side:
$ git archive --format=tar --prefix=kernel_20101221a/ HEAD | bzip2 > kernel_20101221a.tar.bz2
$ scp kernel_20101221a.tar.bz2 firstname.lastname@example.org:.
kernel_20101221a.tar.bz2 100% 68MB 11.4MB/s 00:06
$ ssh firstname.lastname@example.org
Last login: Tue Dec 21 15:01:59 2010 from dyn531363.br.ibm.com
# tar xvf kernel_20101221a.tar.bz2
This has obvious problems… Even if you’re using SSH key authentication to avoid entering the password twice, you end up typing too many commands and transferring too many bytes. For a project as large as the Linux kernel, this can be a real pain.
A smarter, more direct alternative is to avoid creating a local archive at all, using SSH input/output tunneling and tar to do the heavy lifting for us:
$ git archive --format=tar --prefix=kernel_20101221b/ HEAD | ssh firstname.lastname@example.org tar xvf -
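To see why no intermediate file is needed, here is a local stand-in for the same pattern (the paths are made up for the demonstration): the first tar streams the archive to stdout, and the second consumes it from stdin, which is exactly where the ssh connection sits in the real command.

```shell
# Local stand-in for the archive-over-ssh pipe: the first tar writes the
# archive to stdout, the second reads it from stdin -- in the real command,
# ssh simply carries the stream between the two.
mkdir -p /tmp/demo_src/tree /tmp/demo_dst
echo hello > /tmp/demo_src/tree/file.txt
(cd /tmp/demo_src && tar cf - tree) | (cd /tmp/demo_dst && tar xf -)
cat /tmp/demo_dst/tree/file.txt
# prints "hello"
```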
Since you may end up doing this several times per day, an even smarter way is to transfer only what is absolutely necessary – essentially what changed between the last and the current tree. rsync is the ideal tool for that, since it will only send the differences between files. We could, in theory, use rsync to transfer the whole tree, perhaps ignoring the .git directory since it has no use for us remotely. An even better option is to transfer just what’s being tracked (and rsync will then send just what changed). Doing that requires us to use “git ls-files” to list the tracked files, and pipe that to an rsync command that reads the files to be transferred from standard input:
$ git ls-files -z | rsync -e ssh --files-from - -av0 . firstname.lastname@example.org:kernel_20101221c/
building file list ... done
The “git ls-files” command lists only the files being tracked in the current Git tree. The “-z” argument to “git ls-files”, together with its counterpart “-0” in the rsync command, tells those commands to use “\0” (the null character) as the delimiter between file names, so that it is safe to deal with file names containing spaces.
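To see the NUL delimiting in action without touching a repository, here is a tiny stand-alone demonstration (the file names are made up); “xargs -0” consumes the same kind of NUL-separated list that rsync’s “-0” (“--from0”) option does:

```shell
# Two file names, one containing spaces, separated by NUL bytes -- the
# same format "git ls-files -z" emits and "rsync --from0" expects.
# xargs -0 splits only on NUL, so the spaces survive intact.
printf 'file with spaces.txt\0other.txt\0' | xargs -0 -n1 echo
# prints:
# file with spaces.txt
# other.txt
```

With a newline- or whitespace-based pipeline, “file with spaces.txt” would have been split into three bogus path fragments.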
The final step is to create an alias that will invoke the command above:
$ git config --add alias.upload-spin '!git ls-files -z | rsync -e ssh --files-from - -av0 . firstname.lastname@example.org:kernel-dev/'
$ git upload-spin
building file list ... done
created directory kernel-dev
Note that “shell” aliases (i.e., those starting with “!”) execute their commands in the top-level directory of the Git repository you are in, so the command above should work correctly even from subdirectories.
I hope the above tips can increase your productivity when working with cross-platform development (and, more importantly, free you from a boring repetitive task so you can do more value-added coding).
Leave a comment if you like it, have corrections or would like to show us some other tips.
Sometimes you’ll need a clean upload, meaning you’d like to remove untracked files from the remote side. It turns out this is not as easy as I had hoped.
I’m using the output from “git ls-files” as an “inclusion filter”, plus an “include all dirs” filter and an “exclude everything” filter in the rsync command. The reasons why I’m using those are beyond the scope of this post (check the INCLUDE/EXCLUDE PATTERN RULES section of the rsync(1) man page), but the command below seems to do the trick:
git ls-files -z | rsync -avi0 -e ssh --include-from - --include '*/' --prune-empty-dirs \
--delete --delete-excluded --exclude '*' . firstname.lastname@example.org:kernel-dev/
I’ve created an alias called “update-pristine” that does that automatically for me. It takes more time to execute than the original version (I believe due to the recursive path descent), but again, you should only use it when you want to explicitly exclude everything that is not being tracked.
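Such an alias can be stored the same way as upload-spin above. A sketch (the host and path are the same examples as before; note that the patterns use double quotes here so they survive the outer single-quoting of the alias body):

```shell
# Sketch: store the "pristine" variant as a shell alias, mirroring the
# upload-spin alias shown earlier. Host and target path are example values.
git config --add alias.update-pristine '!git ls-files -z | rsync -avi0 -e ssh --include-from - --include "*/" --prune-empty-dirs --delete --delete-excluded --exclude "*" . firstname.lastname@example.org:kernel-dev/'
```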