XMind as eclipse plugin

About XMind

Have you tried XMind? It is a very nice, open-source mind-mapping tool. If you have not tried mind mapping at all, maybe it is a good time to start, as it helps a lot to visualize any concept and then to think through complex matters even better.

XMind is built on top of Eclipse RCP and, until recently, was also distributed as an Eclipse plugin. However, this option is no longer supported. As I spend a lot of time in eclipse, it is much more convenient for me to have mind maps and code in the same tool.


I found the solution. Here is the script install-as-eclipse-plugin.sh:

#!/bin/bash -e

if [ ! "$2" ]; then
 echo "Usage: ./install-as-eclipse-plugin.sh xmind_path eclipse_path"
 exit 1
fi

XMIND_PATH=$1
ECLIPSE_PATH=$2

if [ ! -d "$XMIND_PATH/plugins" ]; then
 echo "Error - no /plugins directory in $XMIND_PATH - probably not an XMind directory"
 exit 2
fi

if [ ! -d "$ECLIPSE_PATH/plugins" ]; then
 echo "Error - no /plugins directory in $ECLIPSE_PATH - probably not an eclipse directory"
 exit 3
fi

echo "Copying XMind plugins from $XMIND_PATH to eclipse at $ECLIPSE_PATH"

# remove old xmind plugins (if they exist)
rm -fr "$ECLIPSE_PATH/dropins/xmind"
mkdir -p "$ECLIPSE_PATH/dropins/xmind/plugins"

# exclude language variants and OS-specific stuff
PLUGINS=`ls "$XMIND_PATH/plugins" | grep -v nl_ | grep -v linux | grep -v win32 | grep -v macos`
for PLUGIN in $PLUGINS; do
 # plugin file names follow the name_version convention
 PLUGIN_NAME=`echo $PLUGIN | cut -d "_" -f 1`
 FOUND=`find "${ECLIPSE_PATH}/plugins" -name "${PLUGIN_NAME}_*"`
 if [ "$FOUND" == "" ]; then
  echo "Copying: $PLUGIN"
  cp -r "$XMIND_PATH/plugins/$PLUGIN" "$ECLIPSE_PATH/dropins/xmind/plugins"
 else
  echo "Plugin already installed: $PLUGIN_NAME"
 fi
done

This script compares the XMind plugins with the plugins already available in the eclipse installation and copies only the missing ones. It also excludes all localization resources (I don't need them) and OS-specific stuff. Adjust the grep lines according to your needs.

All the plugins are copied to the dropins/xmind directory - this eases upgrades, and if you want to remove the XMind functionality from eclipse, just remove this directory.
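The duplicate check relies on the Eclipse convention that a plugin file name is the bundle's symbolic name and its version joined by an underscore. A quick sketch with a hypothetical plugin file name (note that a symbolic name containing an underscore itself would be truncated by this approach):

```shell
# hypothetical plugin file name following the name_version convention
PLUGIN="org.xmind.ui.mindmap_3.2.0.201011110515.jar"
# everything before the first underscore is the plugin name
PLUGIN_NAME=$(echo "$PLUGIN" | cut -d "_" -f 1)
echo "$PLUGIN_NAME"   # org.xmind.ui.mindmap
```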

The only problem I have encountered so far is that on a fresh eclipse installation, when a new node is added to the map, the spell checker throws an exception. Opening and closing the XMind spell-checking preferences in eclipse fixes this.

Update: I updated the script not to copy plugins that are already installed, even if their versions do not match. All the platform-dependent stuff is excluded by default. The solution is tested with eclipse helios (3.6) and XMind 3.2.0.

How to use XMind in eclipse

Thanks to some comments I realized it might not be obvious how to use XMind inside eclipse. If you already have an XMind file in one of your eclipse projects, you can just click on it and it should open inside eclipse. You can also use File > Open File... from the pull-down menu to open any .xmind file.

However, you will not see the Markers and Overview views unless you open the so-called Mind Mapping perspective. There is only one default perspective in XMind, but several perspectives in a default eclipse installation. Use Window > Open Perspective > Other... > Mind Mapping from the pull-down menu. You can also open the individual views specific to this perspective - just use Window > Show View. In order to create a new XMind file, open File > New > Other... > Mind Mapping > Mind Map Workbook.


wondershaper is a real wonder

Have you ever experienced a "slow internet" when another ongoing transfer on your wire fully saturates the bandwidth? Even if one can accept the high latency of HTTP transfers (sooner or later the page will appear in the browser), for interactive sessions like SSH it is completely unacceptable. Wondershaper is a great remedy with minimal side effects - especially if you are running your router on a Linux machine.

Wondershaper uses CBQ instead of HTB, and I am aware that there are better approaches to traffic shaping on Linux. But "traffic shaping" is usually discussed in the context of splitting the bandwidth of a single internet link among many hosts. This is too much in my case, and wondershaper does exactly what I expect.

My ADSL line is officially 1024/256 kb/s. But in the logs I can see different values:

ATM dev 0: ADSL line is up (1312 kb/s down | 320 kb/s up)

At first I tried to provide these values to wondershaper, but they were too high (latency was still high and, in addition, packets were dropped). After trying different values I finally established that the official numbers give the best trade-off between latency and link speed.

On my link the average download rate is around 126KB/s. Now, with wondershaper, it is around 120KB/s - about 95% of the original speed - this is the cost. However, SSH sessions work like a charm while updates are being downloaded, and the browser loads pages almost as fast as usual.
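The overhead is easy to compute from the two rates above; a one-liner sketch of the arithmetic:

```shell
# shaped rate as a percentage of the unshaped rate (values from above)
awk 'BEGIN { printf "%.1f%%\n", 120 / 126 * 100 }'   # 95.2%
```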

Now the relevant section of my /etc/network/interfaces (Debian specific) looks like this:

auto ppp0
iface ppp0 inet ppp
    pre-up /usr/local/sbin/firewall.sh start
    pre-up while ! grep 'Line up' /proc/net/atm/speedtch:0 &>/dev/null; do sleep 1; done
    post-up echo "1" >/proc/sys/net/ipv4/ip_forward
    post-up wondershaper ppp0 1024 250
    pre-down wondershaper clean ppp0
    pre-down echo "0" >/proc/sys/net/ipv4/ip_forward
    post-down /usr/local/sbin/firewall.sh stop


With the 1024 256 values I still had some latency when my uplink was fully saturated. The 1024 250 values seem to work OK.


smartd.conf: the ultimate settings

By "ultimate settings" I mean the options which suit my needs best ;) . I had problems finding a comprehensive example of smartd.conf options. Here is what I finally came up with:

/dev/sda         \ # The device to monitor
 -a              \ # Implies all standard testing and reporting.
 -n standby,10,q \ # Don't spin up disk if it is currently spun down
                 \ #   unless it is 10th attempt in a row. 
                 \ #   Don't report unsuccessful attempts anyway.
 -o on           \ # Automatic offline tests (usually every 4 hours).
 -S on           \ # Attribute autosave (I don't really understand
                 \ #   what it is for; if you can explain it to me,
                 \ #   please drop me a line).
 -R 194          \ # Show real temperature in the logs.
 -R 231          \ # The same as above.
 -I 194          \ # Ignore temperature attribute changes
 -W 3,50,50      \ # Notify if the temperature changed by
                 \ #   3 degrees since the last check or if
                 \ #   the temperature exceeds 50 degrees.
 -s (S/../.././02|L/../../1/22) \ # short test: every day between 2-3am
                                \ # long test every Monday between 10pm-2am
                                \ # (Long test takes a lot of time
                                \ # and it should be finished before
                                \ # daily short test starts.
                                \ # At 3am every day this disk will be
                                \ # utilized heavily as a backup storage)
 -m root         \ # To whom we should send mails.
 -M exec /usr/share/smartmontools/smartd-runner # standard debian script
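smartd matches the -s regular expression against date strings of the form T/MM/DD/d/HH (test type, month, day of month, day of week, hour), so the schedule above can be sanity-checked with grep; the sample dates below are made up:

```shell
# the schedule regex from the smartd.conf above
REGEX='(S/../.././02|L/../../1/22)'
# a short test is scheduled on any day at 2am...
echo "S/01/15/3/02" | grep -Eqx "$REGEX" && echo "short test would run"
# ...and a long test only on Mondays (day of week 1) at 10pm
echo "L/01/18/1/22" | grep -Eqx "$REGEX" && echo "long test would run"
```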


dropbear: scp does not work, here is a poor man's replacement

The dropbear project provides a very interesting implementation of an SSH server. It has the advantage of a very low memory footprint, which might be crucial on embedded devices. Personally, I use dropbear on an NSLU2 device (aka SLUG).

I was a little bit disappointed when I tried the scp command against a dropbear server (on a debian host):

$ scp .ssh/id_rsa.pub root@slug:authorized_keys
bash: scp: command not found
lost connection

Another try:

$ sftp root@slug
Connecting to slug...
bash: /usr/lib/sftp-server: No such file or directory
Connection closed

Dropbear comes without the scp and sftp-server commands. On debian you can find them in the openssh-server package. Installing it would bring a lot of unwanted dependencies, not to mention that I would have to make sure the OpenSSH server does not start and compete with my little dropbear. Here is a simple scp-like solution:

cat .ssh/id_rsa.pub | ssh root@slug 'tee authorized_keys >/dev/null'
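The trick is that tee writes its stdin to the given file, so the file content travels through the SSH pipe itself and nothing beyond a shell and tee is needed on the remote side. The idiom can be checked locally without any server:

```shell
# same idiom, local only: tee writes stdin to the file,
# >/dev/null discards the copy tee echoes back
echo "sample public key line" | tee /tmp/authorized_keys_demo >/dev/null
cat /tmp/authorized_keys_demo   # sample public key line
```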

I hope it will help someone.


Sparse image bundle for Time Machine backup

The command is:

$ hdiutil create -size 150g -fs HFS+J -nospotlight -imagekey sparse-band-size=131072 -volname "hostname-backup" /tmp/HOSTNAME_MACADDRESSWITHOUTCOLONS.sparsebundle

-size 150g
the maximal size of the bundle in gigabytes (the created image will initially be much smaller though - about 250MB)
-fs HFS+J
the filesystem - using HFS is the main reason for preparing the image anyway; TimeMachine depends on some HFS magic
-nospotlight
prevents Mac OS from indexing this image
-imagekey sparse-band-size=131072
some sources on the Internet claim this is the best choice in terms of performance

When the image is ready, it should be moved to the network share where the backup will be stored. Choosing the new location in the TimeMachine preferences is the last step.
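If Time Machine refuses to list a plain network share at all, unsupported network volumes usually have to be enabled first. This is the commonly cited defaults switch - treat it as an assumption and verify it against your Mac OS version:

```shell
# allow Time Machine to see unsupported (non-Apple) network volumes
defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1
```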


gwt-servlet-war project, no more annoying gwt-servlet in the classpath

All the GWT libraries are now available in the central maven repository. There are 4 different jars for a single platform: gwt-user, gwt-dev, gwt-dev with a platform classifier (thus there are even more jars than 4), and gwt-servlet.

If you are developing a gwt application which will be packaged as a war, these libraries should be included in pom.xml with something like this:
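A sketch of the dependencies section (the coordinates match the central repo; the version is just an example from that era, adjust it to your GWT release):

```xml
<dependency>
  <groupId>com.google.gwt</groupId>
  <artifactId>gwt-user</artifactId>
  <version>2.0.4</version>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>com.google.gwt</groupId>
  <artifactId>gwt-dev</artifactId>
  <version>2.0.4</version>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>com.google.gwt</groupId>
  <artifactId>gwt-servlet</artifactId>
  <version>2.0.4</version>
</dependency>
```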


GWT jars marked as provided will be available during development but not packaged into the war. This approach has only one drawback: the core gwt classes are present in both the gwt-user and gwt-servlet jars at the same time. When developing the project in eclipse with the m2eclipse plugin, this leads to very annoying behavior (sources not found, two candidate types with the same name, etc.).

I have created a new gwt-servlet-war project to overcome this problem. The idea is very simple. Instead of the gwt-servlet dependency from the example above, one should put:
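The replacement would look roughly like this - the groupId and version below are my guesses, so check the project page for the exact coordinates; the important part is the war type, which triggers the overlay:

```xml
<dependency>
  <!-- groupId and version are assumptions, see the project page -->
  <groupId>gwt-servlet-war</groupId>
  <artifactId>gwt-servlet-war</artifactId>
  <version>2.0.4</version>
  <type>war</type>
</dependency>
```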


After this change, war packaging will use so-called "war overlays", where wars specified as dependencies are merged into the war being built.

And guess what gwt-servlet-war contains? Only the gwt-servlet library in WEB-INF/lib. Thus the project is in fact a single pom.xml. Anyway, it has to be packaged according to the gwt versioning scheme.

If anyone has an idea how to set up a maven repository on top of code.google.com's subversion, please contribute it to the project. Maybe I should promote this jar to be put into the central maven repo, or rather suggest to the people responsible for putting gwt artifacts there that they provide something of this kind?


Maven google code upload plugin

The project is located here.

An example setup (which works well with my projects) looks like this:
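I cannot reproduce our exact configuration here, but a plugin section for a Google Code upload plugin would look roughly like this - every coordinate and parameter name below is an assumption, so consult the project page for the real ones:

```xml
<plugin>
  <!-- coordinates and parameter names are assumptions,
       check the plugin's project page for the real ones -->
  <groupId>...</groupId>
  <artifactId>maven-gcupload-plugin</artifactId>
  <version>...</version>
  <configuration>
    <projectName>differentia-javaica</projectName>
    <uploads>
      <upload>
        <file>${project.build.directory}/${project.build.finalName}.jar</file>
        <summary>${project.name} ${project.version}</summary>
      </upload>
    </uploads>
  </configuration>
</plugin>
```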


The gcupload:gcupload maven goal is added to our Hudson. We have a differentia-javaica release build which takes care of everything - deploying the jars to our Nexus as well as uploading them to Google Code.


Differentia Javaica

I have started a new project called differentia-javaica. Here is a quote from the original description:

The aim of this project is to compare two Java source files and check whether they are equal. It is not a simple diff. It uses ANTLR to construct two abstract syntax trees for the java types and then compares these trees. As a consequence, whitespace and comments do not affect the comparison. Reordering of elements in the source code is treated as a difference, though.

This kind of comparison is especially helpful when writing unit tests for Java source code generators: given the expected source code, it is possible to check whether it equals the generated source code.

We are writing some Java source code generators right now at NCDC, and this tool is quite helpful in unit testing. Thus we want to share it with the community.