Category Archives: Open-Source

Exporting GIT repositories remotely

This tip may be useful for those working on a local GIT tree who need to “export” it to a remote server (for example, for building and testing).

I know that ideally one would “push” the changes to a commonly-accessible remote server and pull them from the build/testing host. But sometimes GIT simply isn’t available on that host; other times we’re just lazy.

The standard procedure then is to use the “git archive” command to export the contents of the repository to a tarball, optionally compress it, and then transfer it and unpack on the remote side:

$ git archive --format=tar --prefix=kernel_20101221a/ HEAD | bzip2 > kernel_20101221a.tar.bz2
$ scp kernel_20101221a.tar.bz2 user@remote:
user@remote's password:
kernel_20101221a.tar.bz2                                                  100%   68MB  11.4MB/s   00:06    
$ ssh user@remote
user@remote's password:
Last login: Tue Dec 21 15:01:59 2010
# tar xvf kernel_20101221a.tar.bz2

This has obvious problems… Even if you’re using SSH key authentication to avoid entering the password twice, you end up typing too many commands and transferring too many bytes. For a project as large as the Linux kernel, this can be a real pain.

A smarter, more direct alternative is to avoid creating a local archive at all, using ssh input/output tunneling and tar to do the heavy lifting for us:

$ git archive --format=tar --prefix=kernel_20101221b/ HEAD | ssh user@remote tar xvf -
user@remote's password:

Since you may end up doing this several times per day, an even smarter way would be to transfer only what’s absolutely necessary – essentially what changed between the last and the current tree. RSYNC is the ideal tool for that, since it will only send the differences between files. We could, in theory, use rsync to transfer the whole tree, perhaps ignoring the .git directory since it has no use for us remotely. An even better option is to transfer just what’s being tracked (and rsync would transfer just what changed). Doing that requires us to use “git ls-files” to list the files being tracked, and pipe that to an rsync command that reads the files to be transferred from standard input:

$ git ls-files -z | rsync -e ssh --files-from - -av0 . user@remote:kernel-dev/
user@remote's password:
building file list ... done

The “git ls-files” command lists only the files being tracked in the current git tree. The “-z” argument to “git ls-files”, together with its counterpart “-0” in the rsync command, tells both commands to use “\0” (the null character) as the delimiter between file names, so that names containing spaces are handled safely.
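As a quick illustration of why the null delimiter matters, here is a minimal sketch (the file names are made up, and printf stands in for the output of “git ls-files -z”):

```shell
# Each name in a "-z" stream is terminated by "\0", so a name with a
# space in it, like "my notes.txt", survives as a single entry.
printf 'my notes.txt\0plain.txt\0' |
while IFS= read -r -d '' file; do
    echo "got: [$file]"
done
```

A newline-delimited pipeline would have split “my notes.txt” into two bogus entries; the null-delimited one hands each name over intact.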

The final step is to create an alias that will invoke the command above:

$ git config --add alias.upload-spin '!git ls-files -z | rsync -e ssh --files-from - -av0 . user@remote:kernel-dev/'
$ git upload-spin
user@remote's password:
building file list ... done
created directory kernel-dev

Note that “shell” aliases (i.e., those starting with “!”) execute their commands from the top-level directory of the GIT repository you are in, so the command above should work correctly even from sub-directories.
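You can verify that behaviour with a throwaway repository (the “whereami” alias name is just an example):

```shell
# Create a scratch repository with a shell alias that prints its working dir
repo=$(mktemp -d)/demo
git init -q "$repo"
git -C "$repo" config alias.whereami '!pwd'

# Even when invoked from a subdirectory, the "!" alias runs at the top level
mkdir -p "$repo/deep/sub"
(cd "$repo/deep/sub" && git whereami)   # prints .../demo, not .../demo/deep/sub
```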

I hope the above tips can increase your productivity when working with cross-platform development (and, more importantly, free you from a boring repetitive task so you can do more value-added coding).

Leave a comment if you like it, have corrections or would like to show us some other tips.



Sometimes you’ll need a clean upload, meaning you’d like to remove untracked files from the remote side. It turns out this is not as easy as I had hoped.

I’m using the output from “git ls-files” as an “inclusion filter”, plus an “include all dirs” and an “exclude everything” filter in the rsync command. The reasons why I’m using those are beyond the scope of this post (check the INCLUDE/EXCLUDE PATTERN RULES section of the rsync(1) man page), but the command below seems to do the trick:

git ls-files -z | rsync -avi0 -e ssh --include-from - --include '*/' --prune-empty-dirs \
    --delete --delete-excluded --exclude '*' . user@remote:kernel-dev/

I’ve created an alias called “update-pristine” that does that automatically for me. It takes more time to execute than the original version (I believe due to the recursive path descent), but again, you should only use it when you explicitly want to remove everything that is not being tracked.
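For reference, the alias can be recorded the same way as the earlier one – a sketch, assuming the same placeholder destination used above:

```shell
git config --add alias.update-pristine '!git ls-files -z | rsync -avi0 -e ssh --include-from - --include "*/" --prune-empty-dirs --delete --delete-excluded --exclude "*" . user@remote:kernel-dev/'
```

Being a “!” alias, it also runs from the repository top level, so it can be invoked from any sub-directory.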

My talk for LinuxCon Brazil 2010 (KVM Security)

I’m back from LinuxCon Brazil 2010. After spending two entire days off-line (interesting experience btw), I can finally upload the slide deck for my talk, “KVM Security – Where Are We At, Where Are We Going”, as promised.

I can’t spend time reporting on the event right now, so I’ll just say that it was, in my opinion, the best Linux-related event we have had down here so far, with some good talks from both local and international speakers.

The funniest part, however, was seeing Linus having his own Justin Bieber moment, with girls freaking out and everything 😉

Thanks to everyone who attended. I hope we can all meet again next year for an even better event.

PS.: I ended up canceling the Linux Professional Development BoF due to scheduling confusion and a couple of other things – sorry to everyone who planned to attend, but keep in touch (comment here or email me) – I still have the idea of at least mapping the Linux professional development industry here in Brazil. We really need to get to know each other better!


New opencryptoki release available

I just now found the time to write about the latest opencryptoki version, which was released just over two weeks ago.

Opencryptoki version 2.3.2 was released roughly 6 months after 2.3.1, and brings a series of improvements and bug fixes:

  • Improved performance when handling many sessions or many session objects. An inefficient walk through a linked list was part of the validation step for every operation involving session or object handles. While we still lack a more efficient data structure, we were able to use the pointers themselves as handles, making the look-up constant-time instead of linear. This improvement has a significant impact in scenarios where a single process holds more than 4000 sessions at once. Although we are still able to do some verification, this change may also expose buggy applications, which may crash when trying to use invalid handles, so be advised.
  • Largely rewritten build scripts. This version went through a much-needed refactor of the autoconf/automake build scripts, in the hope of now having a clearer and less error-prone build procedure.
  • New SPEC file for building RPM packages. The Opencryptoki binaries are now split into different sub-packages: the main opencryptoki package now brings only the slot daemon (pkcsslotd, initialization script) and administration utilities (pkcsconf, pkcs11_setup). The opencryptoki-libs package brings the PKCS#11 library itself. The packages opencryptoki-swtok, opencryptoki-tpmtok, opencryptoki-icatok and opencryptoki-ccatok bring token-specific plug-ins (aka STDLLs) that enable support for different kinds of crypto hardware. This way, the system administrator can now choose to install only what’s necessary for their environment.
  • A nice addition by Kent Yoder that allows pkcsconf to display mechanism names instead of only numeric identifiers.
  • Kent also provided a couple of fixes to the software token (inaccuracies in the mechanism list) and to the testcases.
  • A couple of useful additions/fixes related to init-scripts and pkcsconf by Dan Horák
  • A number of RSA fixes and improvements by Ramon de Carvalho Valle, including an endianness bug in key-pair generation for the software token and improved PKCS#1 v1.5 padding functions.

As for the next version, we’re focusing strongly on making the testsuite better. You can follow the development log here.


FISL 11 Presentation: Virtualization Security using KVM

Below is the link to the PDF of the presentation I gave at FISL 11 on “Virtualization Security using KVM”.

As a reminder, I will be covering this topic again at LinuxCon Brazil 2010, happening on August 31st and September 1st this year – keep an eye on the schedule. I’ll also take this opportunity to announce that I intend to host a “Professional Linux developers meet-up” at that same LinuxCon Brazil 2010. It should be a chance to meet colleagues from the various companies working directly on development of the Linux operating system, and to discuss the job market, education and accomplishments. Get in touch (leave a comment or email me) if you are interested in this mini-summit.

Comments, corrections and questions are always welcome!


Presentation PDF:

Reviewing patches

I have always struggled with reviewing code.

Especially when the code to be reviewed is actually a patch inlined in some e-mail… I hate monospaced fonts in my e-mail reader, and with all the context switches in my daily work, I simply can’t concentrate properly enough to follow what’s being proposed in that one patch out of many, in that long, long patch series.

In the past, I used to apply them manually, then go over the code using Source Navigator and later cscope.

I still miss the ability to jump between symbol definition and use, which cscope does best, but I have a much more streamlined way of reviewing patches today, thanks to git, meld, and claws-mail.

The first thing is git. Nowadays I use git in every coding project I work on – even if the upstream project doesn’t use git as its SCM (I simply create a local repository and import). And this is not only for making patch review easier, but for all sorts of things: fast branching and merging, easy cherry-picking, rebasing, commit amending, modern utilities, et al. It’s really the 21st-century version control system.

The second thing is meld. Meld is a good example of an intuitive interface that doesn’t get in the way. It can compare, merge and edit files (up to 3-way merges if needed). It supports all the major SCMs, such as git, hg, cvs and svn (although I can’t find a reason why anyone would still use the last two, at least locally).

Meld side-by-side diff

The third thing, and where everything actually comes together, is Claws-mail, which has the very useful (and unique?) ability to create custom actions to process messages.

Claws-Mail Actions

Guess what happens when you combine Claws-Mail’s actions with a script that uses git and Meld? A very point-and-click way of reviewing patches:

Claws-Mail, git and Meld in action

The trick is to configure an action in Claws-Mail that opens a terminal and calls a script. The script uses git-am to apply the patch contained in the selected mail message to some branch in your local git repository. After applying, it calls git-difftool to show the differences; git-difftool then invokes whatever diff tool you like (my suggestion stays with Meld).
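As a sketch of how the pieces hook together (the menu name and terminal are arbitrary choices of mine, not from the original setup):

```shell
# Hypothetical Claws-Mail action (Configuration -> Actions...):
#   Menu name: Review patch
#   Command line:
xterm -e git-review-step %F
# Claws-Mail substitutes %F with the file names of the selected messages,
# which git-review-step receives as its <patch> arguments.
```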

I’m attaching the script for reference below:

## git-review-step
## (C) Copyright 2010 Klaus Heinrich Kiwi
## Licensed under Creative Commons Attribution-ShareAlike 3.0 Unported
## for more info.

## dirname is where the git tree is located - adjust for your setup
dirname="$HOME/git/mytree"

if [ "$#" -lt 1 ]; then
  echo "Invalid number of parameters"
  echo "usage: $(basename $0) <patch1> [patch2] [patch3] [...]"
  exit 1
fi

# The mail messages (patches) to apply, as passed by the caller
messages=("$@")

cd "$dirname" || exit 1

oldbranch=$(git branch | grep -e '^* ' | cut -d " " -f 2)

# Save any uncommitted changes in the working dir or index
if git stash | grep HEAD; then
  savedchanges=1
fi

function restore() {
  echo "Reverting to original branch..."
  git checkout --force "$oldbranch"
  if [ -n "$savedchanges" ]; then
    echo "Restoring un-committed changes..."
    git stash pop
  fi
}

# Get branch to apply to
git branch
echo "Select branch to apply patches:"
echo "  Enter \"<branchname>\" to apply to an existing branch"
echo "  Enter \"<newname> [origref]\" to create a new branch from \"origref\""
echo "    reference (use current branch and HEAD if left blank)"
read -p "Apply patch(es) to branch (default is current):" -e -i "$oldbranch" newbranch origbranch

if [ -n "$newbranch" ]; then
  if git branch | grep -e "\b${newbranch}$"; then
    echo "Applying to existing branch \"$newbranch\""
    # Checkout
    if ! git checkout "$newbranch"; then
      echo "Error checking out \"$newbranch\" - Aborting"
      restore
      read -p "Press Enter to continue"
      exit 1
    fi
  else
    if [ -n "$origbranch" ]; then
      echo "Applying to new branch \"$newbranch\" created from \"$origbranch\" branch..."
    else
      origbranch=$oldbranch
      echo "Applying to new branch \"$newbranch\" created from \"$oldbranch\" branch..."
    fi
    if ! git checkout -b "$newbranch" "$origbranch"; then
      echo "Error creating \"$newbranch\" from \"$origbranch\" - Aborting"
      restore
      read -p "Press Enter to continue"
      exit 1
    fi  # if ! git checkout ...
  fi    # if git branch | grep ...
fi      # if [ -n $newbranch ...

# Apply patches to working dir using git-am
while ! git am $amparams "${messages[@]}"; do
  git am --abort
  echo "git-am failed. Retry (the whole chunk) with additional parameters?"
  read -p "git-am parameters (empty aborts):" -e -i "$amparams" amparams
  if [ -z "$amparams" ]; then
    echo "Aborting..."
    restore
    read -p "Press Enter to continue"
    exit 1
  fi
done

# Review each applied commit: stats, whitespace check and side-by-side diff
for (( i=${#messages[@]}; i > 0; i-- )); do
  PAGER='' git log --stat HEAD~${i}..HEAD~$((i-1))
  # "git diff --check" exits non-zero when it finds whitespace problems
  if ! git diff --check HEAD~${i}..HEAD~$((i-1)); then
    echo "WARNING: Commit introduces whitespace or indenting errors"
  fi
  git difftool HEAD~${i}..HEAD~$((i-1))
done

echo "Restoring working tree to original state"
restore
read -p "Press Enter to continue"

git-review-step by Klaus Heinrich Kiwi is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Guest blogging on Emily’s “Open Source Security” blog

Starting today, I am a proud contributor to Emily Ratliff’s Open Source Security blog. The blog brings information, news, discussions and opinions mainly about Linux and Open Source security in general and, besides Emily and myself, has other members of IBM’s Linux Technology Center Security Team as contributors.

My first post gives a little introduction to concepts such as authentication and authorization, and how Kerberos and LDAP can be used to perform those important roles, before introducing the “Using MIT-Kerberos for IBM Tivoli Directory Server backend” Blueprint, which I authored at the end of last year.

Please go check it out. Comments are always welcome.

Update: blueprint link fixed