My talk for LinuxCon Brazil 2010 (KVM Security)

I’m back from LinuxCon Brazil 2010. After spending two entire days off-line (interesting experience btw), I can finally upload the slide deck for my talk, “KVM Security – Where Are We At, Where Are We Going”, as promised.

I can’t spend time reporting on the event right now, so I’ll just summarize that it was, in my opinion, the best Linux-related event we’ve had down here so far, with good talks from both local and foreign speakers.

The funniest part, however, was seeing Linus having his own Justin Bieber moment, with girls freaking out and everything 😉

Thanks to everyone who attended. I hope we can all meet again next year for an even better event.

P.S.: I ended up canceling the Linux Professional Development BoF due to scheduling confusion and a couple of other things – sorry to everyone who planned to attend, but keep in touch (comment here or email me at – I still have the idea of at least mapping the Linux professional development industry here in Brazil. We really need to get to know each other better!


New opencryptoki release available

I just now found the time to write about the latest opencryptoki version, which was released just over two weeks ago.

Opencryptoki version 2.3.2 was released roughly 6 months after 2.3.1, and brings a series of improvements and bug fixes:

  • Improved performance when handling many sessions or many session objects. An inefficient walk through a linked list was part of the validation step for every operation involving session or object handles. While we still lack a more efficient data structure, we were able to use the pointers themselves as handles, making each look-up constant-time instead of linear. This improvement has a significant impact in scenarios where a single process holds more than 4000 sessions at once. Although we are still able to do some verification, this change may also expose buggy applications, which may crash when trying to use invalid handles, so be advised.
  • Largely rewritten build scripts. This version went through a much-needed refactoring of the autoconf/automake build scripts, in the hope of having a clearer and less error-prone build procedure.
  • New SPEC file for building RPM packages. The Opencryptoki binaries are now split into different sub-packages: the main opencryptoki package now brings only the slot daemon (pkcsslotd, initialization script) and administration utilities (pkcsconf, pkcs11_setup). The opencryptoki-libs package brings the PKCS#11 library itself. The packages opencryptoki-swtok, opencryptoki-tpmtok, opencryptoki-icatok and opencryptoki-ccatok bring token-specific plug-ins (aka STDLLs) that enable support for different kinds of crypto hardware. This way, system administrators can now choose to install only what’s necessary for their environment.
  • A nice addition by Kent Yoder that allows pkcsconf to display mechanism names instead of only numeric identifiers.
  • Kent also provided a couple of fixes to the software token (inaccuracies in the mechanism list) and to the testcases.
  • A couple of useful additions/fixes related to init-scripts and pkcsconf by Dan Horák
  • A number of RSA fixes and improvements by Ramon de Carvalho Valle, including an endianness bug in key-pair generation for the software token and improved PKCS#1 v1.5 padding functions.
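As an illustration of the new packaging split, a software-token-only setup plus a quick check of the new mechanism-name display could look like the following (the package manager invocation and the slot number are examples of mine, not part of the release notes):

```shell
# Install only what a software-token setup needs, per the new sub-packages
yum install opencryptoki opencryptoki-libs opencryptoki-swtok

# List the mechanisms of the token in slot 0 by name rather than by
# numeric identifier (slot 0 is just an example; check your slots first)
pkcsconf -m -c 0
```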

As for the next version, we’re putting a strong focus on making the testsuite better. You can follow the development log here.


FISL 11 presentation: Virtualization Security using KVM

Below is the link to the PDF of my FISL 11 presentation on “Virtualization Security using KVM”.

As a reminder, I should cover this topic again at LinuxCon Brasil 2010, which will take place on August 31st and September 1st this year – keep an eye on the schedule. I can also reveal in advance that I should be running a “Linux professional developers meet-up” at that same LinuxCon Brasil 2010. It should be an opportunity to meet colleagues from the various companies working directly on development of the Linux operating system, and to discuss the job market, education, and achievements. Get in touch (klaus at – or leave a comment) if you are interested in this mini-summit.

Comments, corrections and questions are always welcome!


Presentation PDF:

New Blueprint available: Securing KVM guests and the host system

IBM recently made available another Blueprint I authored: Securing KVM guests and the host system.

The text, which also comes in a PDF version, walks through a series of steps and some discussion around KVM security for Red Hat Enterprise Linux running on IBM System x with virtualization capability. These include remote management aspects, host and guest security, a few suggestions for auditing and, why not, some encryption for images at rest.
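As a small taste of the remote-management part, a libvirt connection tunneled over SSH boils down to a single virsh invocation (“user” and “kvmhost” below are placeholders of mine, not names from the Blueprint):

```shell
# List all guests on a remote KVM host, tunneling libvirt over SSH
# ("user" and "kvmhost" are placeholders for your own setup)
virsh -c qemu+ssh://user@kvmhost/system list --all
```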

The complete index follows:

  • Introduction
  • Securing KVM guests and the host system
    • Secured KVM remote management
    • Setting up secure remote management
    • Remote management using SSH tunnels
    • Remote management using SASL authentication and encryption
    • Remote management using TLS
  • Guest virtual network isolation options
    • Network port sharing with Ethernet bridges
    • Network port sharing using 802.1q VLANs
  • Auditing the KVM virtualization host and guests
    • Audit rules file
  • KVM guest image encryption
    • Using encryption in KVM guest images
    • Migrating existing guests to encrypted storage
    • Installing a new KVM guest
    • Storing encrypted guest images
  • Appendix A. Sample audit rules file
  • Appendix B. Troubleshooting

Feedback, comments, corrections and suggestions are welcome as always, and we now have a way to provide them directly in the text. Questions can be answered in the developerWorks Linux Security Community Forum.

Reviewing patches

I always struggled at reviewing code.

Especially when the code to be reviewed is actually a patch inlined in some e-mail… I hate monospaced fonts in my e-mail reader, and with all the context switches of my daily work, I simply can’t concentrate properly in order to follow what’s being proposed in that one patch out of many, in that long, long patch series.

In the past, I used to apply them manually, then go over the code using Source Navigator and later cscope.

I still miss the ability to jump between symbol definitions and uses that cscope does best, but I have a much more streamlined way of reviewing patches today, thanks to git, meld, and claws-mail.

The first thing is git. Nowadays I use git in every coding project I work on – even if the upstream project doesn’t use git as its SCM (I simply create a local repository and import). And this is not only for making patch review easier, but for all sorts of things: fast branching and merging, easy cherry-picking, rebasing, commit amending, modern utilities et al. It’s really the 21st-century version control system.

The second thing is Meld. Meld is a good example of an intuitive interface that doesn’t get in the way. It can compare, merge and edit files (up to 3-way merge if needed). It supports all the major SCMs such as git, hg, cvs and svn (although I can’t find a reason why anyone would still use the last two, at least locally).

Meld side-by-side diff

The third thing, and where everything actually comes together, is Claws-Mail, which has the very useful (and unique?) ability to create custom actions to process messages.

Claws-Mail Actions

Guess what happens when you combine Claws-Mail’s actions with a script that uses git and Meld? A very point-and-click way of reviewing patches:

Claws-Mail, git and Meld in action

The trick is in configuring an action in Claws-Mail that opens a terminal and calls a script. The script uses git-am to apply the patch contained within the selected mail message to some branch in your local git repository. After applying, it calls git-difftool to show the differences. git-difftool then calls any diff tool you might like (my suggestion stays with Meld).
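The action itself can be a one-liner along these lines (the terminal and the script path are placeholders; %F is Claws-Mail’s placeholder for the files of the selected messages):

```shell
# Claws-Mail custom action (Configuration -> Actions); xterm and the
# script path are placeholders for your own setup. %F expands to the
# files of the currently selected messages.
xterm -e ~/bin/git-review-step %F
```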

I’m attaching the script for reference below:

#!/bin/bash
## git-review-step
## (C) Copyright 2010 Klaus Heinrich Kiwi
## Licensed under Creative Commons Attribution-ShareAlike 3.0 Unported
## for more info.
## dirname is where the git tree is located - adjust it to your setup.
dirname="$HOME/git/myproject"

if [ "$#" -lt 1 ]; then
  echo "Invalid number of parameters"
  echo "usage: $(basename $0) <patch1> [patch2] [patch3] [...]"
  exit 1
fi

# The messages to review are the mail files passed as arguments
messages=("$@")

cd "$dirname" || exit 1
oldbranch=$(git branch | grep -e '^* ' | cut -d " " -f 2)

# Save any uncommitted changes in the working dir or index
savedchanges=
if git stash | grep HEAD; then
  savedchanges=1
fi

function restore() {
  echo "Reverting to original branch..."
  git checkout --force "$oldbranch"
  if [ -n "$savedchanges" ]; then
    echo "Restoring un-committed changes..."
    git stash pop
  fi
}

# Get branch to apply to
git branch
echo "Select branch to apply patches:"
echo "  Enter \"<branchname>\" to apply to an existing branch"
echo "  Enter \"<newname> [origref]\" to create a new branch from \"origref\""
echo "    reference (use current branch and HEAD if left blank)"
read -p "Apply patch(es) to branch (default is current):" -e -i $oldbranch newbranch origbranch
if [ -n "$newbranch" ]; then
  if git branch | grep -e "\b${newbranch}$"; then
    echo "Applying to existing branch \"$newbranch\""
    # Checkout
    if ! git checkout "$newbranch"; then
      echo "Error checking out \"$newbranch\" - Aborting"
      read -p "Press Enter to continue"
      restore
      exit 1
    fi
  else
    if [ -n "$origbranch" ]; then
      echo "Applying to new branch \"$newbranch\" created from \"$origbranch\" branch..."
    else
      origbranch=HEAD
      echo "Applying to new branch \"$newbranch\" created from \"$oldbranch\" branch..."
    fi
    if ! git checkout -b "$newbranch" "$origbranch"; then
      echo "Error creating \"$newbranch\" from \"$origbranch\" - Aborting"
      read -p "Press Enter to continue"
      restore
      exit 1
    fi  # if ! git checkout ...
  fi    # if `git branch | grep ...
fi      # if [ -n $newbranch ...

# Apply patches to working dir using git-am
amparams=
while ! git am $amparams "${messages[@]}"; do
  git am --abort
  echo "git-am failed. Retry (the whole chunk) with additional parameters?"
  read -p "git-am parameters (empty aborts):" -e -i "$amparams" amparams
  if [ -z "$amparams" ]; then
    echo "Aborting..."
    read -p "Press Enter to continue"
    restore
    exit 1
  fi
done

# Show each applied commit, warn about whitespace problems, then open
# the configured diff tool on it
for (( i=${#messages[@]}; i > 0; i-- )); do
  PAGER='' git log --stat HEAD~${i}..HEAD~$((i-1))
  if ! git diff --check HEAD~${i}..HEAD~$((i-1)); then
    echo "WARNING: Commit introduces whitespace or indenting errors"
  fi
  git difftool HEAD~${i}..HEAD~$((i-1))
done

echo "Restoring working tree to original state"
restore
read -p "Press Enter to continue"

Creative Commons License
git-review-step by Klaus Heinrich Kiwi is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.
Based on a work at

xcryptolinz RPMs

In case anyone is looking for the xcryptolinz RPMs to support IBM cryptographic hardware in Secure Key mode (among other things) through the CCA API, they can actually be found on IBM’s software support page for cryptocards (link)

As of this posting, the current version is 3.28-rc8, and it is supported only on the s390x architecture (System z).


IBM has released a new CCA library (ver. 4.0), supporting the newer IBM PCIe Cryptographic Coprocessor (aka CEX3C aka 4765) card. The library now supports SHA-2, AES and RSA with modulus sizes up to 4096 bits (for capable hardware), besides other Secure-Key operations such as DES, 3DES and SHA-1.

Starting from version 2.3, openCryptoki supports (and in fact requires) this library in order to use the CCA token type.

If time permits, I’ll post more here about CCA support in openCryptoki in the future.


test libraries without ‘make install’

A quick one-liner to export an LD_LIBRARY_PATH containing all the directories that hold a shared library (.so) file, so that lazy ones like myself can sometimes risk running/testing software that uses these libraries without issuing a ‘make install’.

There are probably more clever/elegant ways to do that, but whatever:

export LD_LIBRARY_PATH=$(for j in \
  $(for i in \
    $(find . -name '*.so'); \
    do dirname $i; done | sort | uniq);\
  do readlink -f $j; done |\
  awk '{ printf "%s:", $0 }')
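For the record, a shorter spelling of the same trick, assuming GNU find (for -printf) and paste, would be:

```shell
# -printf '%h\n' prints each match's directory; sort -u de-duplicates;
# paste joins the result with colons. Since find is given "$PWD", the
# paths are already absolute and the readlink step becomes unnecessary.
export LD_LIBRARY_PATH=$(find "$PWD" -name '*.so' -printf '%h\n' | sort -u | paste -sd: -)
```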

fglrx problems with Jaunty

Problems getting compiz (or any other 3D acceleration app) working with your ATI Radeon graphics card on the latest Ubuntu release, Jaunty Jackalope (9.04)?

Or better yet: you figured out that you were missing ATI’s proprietary driver, fglrx, and installed it on your own (since the Hardware Drivers wasn’t giving you the option to enable it in the first place)?

If that was the case, I think you already know that the reason why you’d get a hard freeze every time the X server was coming up is that you were wrong, and Ubuntu and the Hardware Drivers app were right!

ATI’s Catalyst drivers (also known as fglrx drivers), in the default version shipped with Jaunty, are incompatible with some features from Xserver 1.6, which was introduced with Jaunty. Catalyst drivers up to version 9.3 will not work with Xorg version 7.4 or beyond.

The good news is that AMD has already released a 9.4 version that is compatible with the new server. You can get the new version, for Linux x86 and Linux x86_64, here:

The bad news is (or rather, are):

  • Ubuntu’s packages haven’t been updated to include this version yet (as of the time of this writing) – so yes, you’d need to install them the manual way (see below for some tips)
  • Seems like the new driver still has some instabilities, notably when trying to use compiz with video overlay (try using totem to play a video while running a composited desktop – AT YOUR OWN RISK 😉)

Oh wait! I actually have another piece of ‘good news’: despite there not being official packages for Ubuntu yet, I just found out how easy it is to create those packages using ATI’s own binary:

# ./ --buildpkg Ubuntu/9.04

Yep, that’s pretty much it. Install the generated .deb files and reboot the system. You may want to run aticonfig --initial if you are not confident that Xorg will automatically detect your driver.
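The install step could look like the following (the package file name pattern is illustrative; check the actual .deb files the installer generates in the current directory):

```shell
# Install the generated driver package(s); the exact names vary with
# the driver and Ubuntu version, so adjust the glob accordingly
sudo dpkg -i xorg-driver-fglrx_*.deb

# Optionally write an initial xorg.conf stanza for the driver
sudo aticonfig --initial
```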

Having a proper package installed, instead of just files lying around, allows the system to reconfigure itself when needed (i.e., upon kernel updates), keeps your files tracked, makes upgrades smooth, etc. – generally a Good Thing™ to do.


Guest blogging on Emily’s “Open Source Security” blog

Starting from today I am a proud contributor to Emily Ratliff’s Open Source Security blog. The blog brings information, news, discussions and opinions mainly about Linux and Open Source security in general, and, besides Emily and myself, has other members from the IBM’s Linux Technology Center Security Team as contributors.

My first post gives a little introduction to concepts such as authentication and authorization, and how Kerberos and LDAP can be used to perform those important roles, before introducing the “Using MIT-Kerberos for IBM Tivoli Directory Server backend” Blueprint, which I authored at the end of last year.

Please go check it out. Comments are always welcome.

Update: blueprint link fixed


Cedilla (ç) symbol using American Keyboards in Linux

Disclaimer: this article refers to using Linux with an American keyboard to type Portuguese (pt-br) text, using the ‘us-intl’ keyboard layout.

One of the things that bothered me when upgrading between Ubuntu versions a while back (Feisty to Gutsy? Hardy? I can’t remember) is the changed behavior for inputting the cedilla ‘ç’ symbol using American keyboards.

Ok, in fact, this behavior has changed twice over time, so I can remember at least three ways of doing it:

  1. Press the single-quote (‘) key, then the letter ‘c’ (' then c). In the us-intl keyboard layout, the single quote is a dead-key commonly used to place an acute accent over vowels: á, é, í, ó, ú. Pressing the single-quote twice produces the single quote itself. A few years ago, the cedilla was inputted by combining the single-quote dead key with the letter ‘c’ itself. This behavior changed: now, ' then c produces ć – maybe to support languages that require this symbol (after all, we’re abusing the us-international layout), or simply as a matter of consistency.
  2. Combining the Right-Alt key with the comma (,) symbol (Ralt+,). After the change in the item above was introduced, the us-intl layout was tuned to allow many other types of symbols (can you say UTF-8?). The Right-Alt key came to be what is called a Level-3 modifier (that is, a key modifier that works just like Shift, Ctrl and Alt/Meta – not a dead-key). With this modifier, one can produce several other symbols commonly used in western languages other than English, e.g.: ¡, ß, æ, ñ, ø etc. The Level-3 modifier also allows us to input commonly used symbols that are outside the English alphabet: ², ³, ©, ®, ¢, § etc.
  3. Combining Right-Alt with the comma (,) symbol as a dead-key, then pressing the letter ‘c’ (Ralt+, then c). This is the behavior introduced with newer versions of the us-intl keyboard layout. Just like before, you use the Right-Alt key as a Level-3 modifier, but instead of inserting the symbol directly, this key combination acts like a dead-key, waiting for the next key-press to determine which symbol is being inputted. In fact, this behavior was introduced with the ‘alt-intl’ layout, which is said to mimic the previous behavior of us-intl (first item above) – well, at least for me it doesn’t exactly mimic the former behavior, and I personally prefer the second option.

Choosing between the second and third options above is pretty straightforward if you’re using a relatively recent Linux distribution with Gnome 2.x and Xorg. Just use gnome-keyboard-properties to choose between USA International (with dead keys) or USA Alternative International (former us_intl) for the behaviors in items 2 and 3 above, respectively. You can also have both configured and switch between layouts using the Keyboard Indicator gnome applet:

gnome-keyboard-properties screenshot
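From a terminal, the same choice can also be made at runtime with setxkbmap (note this does not persist across sessions):

```shell
# Behavior 2: Right-Alt+, produces the cedilla directly
setxkbmap -layout us -variant intl

# Behavior 3: Right-Alt+, acts as a dead-key (follow it with 'c')
setxkbmap -layout us -variant alt-intl
```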

In case you are not using Gnome and prefer to configure it directly in Xorg’s configuration, edit the InputDevice section at the server configuration file, usually /etc/X11/xorg.conf, to look like below:

  • Cedilla not as dead-key (behavior 2 above):
    Section "InputDevice"
    	Identifier	"Generic Keyboard"
    	Driver		"kbd"
    	Option		"XkbRules"	"xorg"
    	Option		"XkbModel"	"pc105"
    	Option		"XkbLayout"	"us"
    	Option		"XkbVariant"	"intl"
    #	Option		"XkbOptions"	"lv3:ralt_switch"
    EndSection
  • Cedilla as dead-key (behavior 3 above):
    Section "InputDevice"
    	Identifier	"Generic Keyboard"
    	Driver		"kbd"
    	Option		"XkbRules"	"xorg"
    	Option		"XkbModel"	"pc105"
    	Option		"XkbLayout"	"us"
    	Option		"XkbVariant"	"alt-intl"
    #	Option		"XkbOptions"	"lv3:ralt_switch"
    EndSection

The line with “XkbOptions” “lv3:ralt_switch” made no difference to (at least) the cedilla behavior, so I commented it out (I have a feeling that the ralt_switch behavior is included anyway in both variants).


If even after doing the above you can’t get the desired behavior, check the following:

  • Your window manager configuration usually takes precedence over your X server configuration – that is, if you configured your keyboard layout using Gnome or KDE tools, the settings in your xorg.conf are probably being ignored. Within Gnome, you can use gconf-editor and browse to /desktop/gnome/peripherals/keyboard/kbd to check the current effective configuration. Leave a comment if you know how to override this.
  • Your X server may have been automagically configured by HAL and friends. You can check that by opening the X server log (usually /var/log/Xorg.0.log) and looking for the evdev module. There should be a couple of messages showing which model/layout/variant was chosen:
    (**) Option "xkb_rules" "evdev"
    (**) AT Translated Set 2 keyboard: xkb_rules: "evdev"
    (**) Option "xkb_model" "pc102"
    (**) AT Translated Set 2 keyboard: xkb_model: "pc102"
    (**) Option "xkb_layout" "us"
    (**) AT Translated Set 2 keyboard: xkb_layout: "us"
    (**) Option "xkb_variant" "alt-intl"
    (**) AT Translated Set 2 keyboard: xkb_variant: "alt-intl"
    (**) Option "xkb_options" "lv3:ralt_switch"
    (**) AT Translated Set 2 keyboard: xkb_options: "lv3:ralt_switch"

    In this case, you can either:

    • Disable auto-detection on the X server side, by adding an “AutoAddDevices” “off” option to your ServerLayout section (in the xorg.conf file):
      Section "ServerLayout"
      	Identifier	"Default Layout"
      	Screen		"Default Screen"
      	InputDevice	"Synaptics Touchpad"
      	Option		"AutoAddDevices" "off"
      EndSection
    • Create a policy at the HAL side, by creating a .fdi file, e.g.: /etc/hal/fdi/policy/10-keyboard.fdi, that reads:
      <?xml version="1.0" encoding="UTF-8"?>
      <deviceinfo version="0.2">
        <device>
          <match key="info.capabilities" contains="input.keys">
            <merge key="input.x11_options.XkbVariant" type="string">intl</merge>
            <merge key="input.xkb.variant" type="string">intl</merge>
          </match>
        </device>
      </deviceinfo>

      (Change the variant from intl to alt-intl according to your choice.) Remember to restart the HAL daemon (hald) for the changes to take effect

  • More information about HAL and the evdev module can be found at this blog post.

I hope this post helped shed some light on this (somewhat common) problem among pt-br users who don’t necessarily use Brazilian keyboards or run their systems in that locale. Leave a comment if you found this post useful, or if you have questions, comments, corrections, anything 😉


Vinhedo – SP, Brazil