Cvs Fun and Tips!
Posted 9:08:36 AM on Thu, 6th September 2012 by Dean

Problem 1 - production fix from branched code

A bug is found in release release_x on branch branch_b of CVS module module_m. File file_f needs to be modified, while all other files in release_x must remain unchanged. The module with this one change is to be tagged release_z.

Two main steps are involved. First, create an isolated sandbox area for the branch code, check out the single file from the required module at the required release, make the required changes and tag that file individually with the new release tag. Then migrate the changed file into a checkout of the current release and re-tag everything, so all files carry the same release tag as the single modified file.

A. Make a branch sandbox, update a copy of file file_f from release release_x, and commit the change.

1. cvs co -r branch_b module_m
2. cd module_m
3. cvs update -p -r release_x file_f > file_f
4. vi file_f
5. cvs commit file_f
6. cvs tag release_z file_f
You can now blast away that sandbox.

B. Check out release_x of module_m, add just your new version of file_f, and tag the lot with release_z.

1. cvs co -r release_x module_m
2. cd module_m
3. cvs update -r release_z file_f
4. cvs tag release_z

Problem 2 - Adding a binary file to CVS

Not a great idea, but if required, use cvs add -kb filename. The -kb option tells CVS it is a binary file, so it will not expand keywords or translate line endings.

Once added with the -kb option, the option is sticky, and subsequent commits and updates can be done normally without specifying the binary option again.
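If a binary file has already been committed without -kb, you don't have to delete and re-add it. The usual fix (a sketch - the filename here is just a placeholder) is to flip the keyword mode on the repository copy with cvs admin, then refresh your working copy:

```shell
cvs admin -kb logo.png    # mark the repository copy as binary
cvs update -A logo.png    # refresh the working copy with the new mode
```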

Problem 3 - Creating a Branch
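In CVS a branch is just a tag made with -b. A minimal recipe, as a sketch (the module and branch names are placeholders, reusing the names from Problem 5):

```shell
cvs rtag -b Test_Branch My_Module    # cut the branch in the repository
cvs co -r Test_Branch My_Module      # check out a sticky branch sandbox
cd My_Module
# ...edit away; commits from this sandbox now land on the branch
```

Many people also lay down a plain tag at the same point first (e.g. cvs rtag Test_Branch_root My_Module) so that later merges have a fixed reference point.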

Problem 4 - Merging Branches and HEAD Back Together
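A sketch of the usual merge, with the same placeholder names: check out HEAD, join in the branch changes with -j, resolve any conflicts, then commit.

```shell
cvs co My_Module                     # a HEAD (trunk) sandbox
cd My_Module
cvs update -j Test_Branch            # fold the branch changes into the sandbox
# fix any conflict markers, then:
cvs commit -m 'merged Test_Branch back into HEAD'
```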

Problem 5 - Merging Changes in HEAD into a Branch

Say for some module 'My_Module' you have committed changes into HEAD, and would also like to easily include those changes inside an existing branch 'Test_Branch' deployed in another environment.
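A sketch of one way to do it: check out the branch, join in HEAD with -j, and commit the result on the branch.

```shell
cvs co -r Test_Branch My_Module      # a sticky Test_Branch sandbox
cd My_Module
cvs update -j HEAD                   # pull the trunk changes into the branch
cvs commit -m 'merged HEAD into Test_Branch'
```

If you expect to do this repeatedly, tag HEAD at each merge (e.g. a hypothetical merged_to_Test_Branch_1) and next time use two -j options (-j merged_to_Test_Branch_1 -j HEAD) so already-merged changes are not applied twice.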


Problem 6 - How do I add that Id line that CVS updates automatically?

Just add this line, and CVS will update it for you on every check-in.

For perl

# $Id$
For C
// $Id$
For Python and Makefiles
Ok maybe not, but I like to make jokes about Python being a language modelled after Makefiles.

Problem 7 - Resurrecting/restoring deleted files

Just "add" the file like so. This looks strange, but it works.

cd cvs/something
cvs add File-I-Deleted.txt
cvs commit -m 'restoring file'

Problem 8 - Keep getting 'directory CVS specified in argument'

This happens when a 'CVS' directory has been added to the repository - cvs doesn't like you adding CVS to cvs. It's not a huge issue, but it is mildly annoying.

Run this (longish) one-liner at the base of your checked-out cvs repo, and it will clear all the offending CVS entries.

find . -name Entries -print0 | xargs -0 grep -l CVS | xargs perl  -pi -e's#D/CVS////##'

Problem 9 - Recursively add directories to cvs

find . -type d -print | grep -v CVS | xargs cvs add
find . -type f -print | grep -v CVS | xargs cvs add
cvs commit -m 'lots of stuff added'
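Before running those for real, it is worth previewing what will be fed to cvs add. This dry run (on a made-up /tmp/cvstree) prints the candidate paths and shows that the CVS bookkeeping directories are skipped - though note that grep -v CVS also drops any path that merely contains the string 'CVS':

```shell
# a scratch tree: one real source dir, one CVS bookkeeping dir
mkdir -p /tmp/cvstree/src/CVS /tmp/cvstree/docs
touch /tmp/cvstree/src/main.c /tmp/cvstree/src/CVS/Entries
cd /tmp/cvstree

# the same filters as above, minus the 'cvs add' on the end
find . -type d -print | grep -v CVS   # lists ., ./src, ./docs - not ./src/CVS
find . -type f -print | grep -v CVS   # lists ./src/main.c only
```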

Evil Stuff

Renaming Tags

First copy the old tag to the new name, then delete the old one:

cvs rtag -r <old tag/branch name> <new tag/branch name> <module name>

cvs rtag -d <old tag/branch name> <module name>

Moving Tags

cvs tag -r <new file revision> -F <tag> <file>

Deleting Tags

cvs rtag -d <tag/branch name> <module name>

Deleting Branches

cvs rtag -d -B <tag/branch name> <module name>

My LinkedIn strategy for success
Posted 10:00:00 PM on Sat, 7th July 2012 by Dean

Not signed up to LinkedIn? Here in Australia it's well worth your time to join. But you can't just create an account and start getting weekly phone calls. Here are some tips that work for me.

Fill out the whole profile,

but don't add details like age, gender, marital status or birth place, and cut out jobs older than perhaps 10 years. Equal opportunity employers don't need these details, so let your skills speak for themselves.

List skills,

but don't list everything you have touched. Just list the skills you want to be hired for - perl, linux, mysql, postgresql etc. Otherwise you will get calls for mono on solaris, wasting your time and the recruiter's.

Join lots of groups,

this builds associations, but not really good ones.

Add all your work colleagues past and present

as this is what will start getting you phone calls.

Ask a few for recommendations.

These are like references, pick a few good ones to list on your profile.

Start adding recruiters.

Some recruiters pay to use LinkedIn, so they can find you regardless of connections; these are the ones who contact you out of the blue. (This is how LinkedIn makes money!) Others don't pay, or pay less, and can only see a limited number of links out through their connections.

Recruiters add each other and move around a lot, so add every recruiter you come across. Then add every recruiter your friends come across and recommend your friends to recruiters.

Also, if you refer friends ask the recruiter for a piece of the finders fee. They will usually begrudgingly send you a $50 gift card. Don't be shy - after all, you earned it!

Get up over 100 connections and beyond.

You want to be over 100 connections; once you are up around 300-400 connections you will be doing very well. Don't be sheepish about adding everyone you know - this isn't Facebook.

Add sales reps for your suppliers, and add people from the community that you know. Sales reps are great contacts because they are typically well connected to your competitors.

Don't do these things...

Don't have a goofy email address - something plain is best. Don't add a photo, don't connect twitter (sure, list a link to it, just don't have it tweet into LinkedIn), and don't add the aforementioned irrelevant personal details. And again, don't list skills you don't want to be recruited based on.

Be ready to relocate.

Your state and/or city may not be very attractive to companies. That may not sit well with your philosophies on life, wealth and prosperity, but it's the reality. On the other hand your state and/or city might be flooded with your skill set, so you can go elsewhere to become a more valuable commodity.

Based on that you should get 4-6 cold emails a month.
I do! Plus about 1-2 from past recruiters. Some even call in to reception at my workplace, wanting to offer me a job interview!

Final advice...

When recruiters call, insist on their client's company name, the salary range and a full job description - anything else is a time waster. Google is especially bad at this; they will waste days of your time and then offer you a pittance, if anything at all. Lots of start-ups who think they are the next Google will do the same. Typically they will also provide free soda, install video games on cheap flat-screens and throw around a few bean bag chairs - then cut your salary offer by $10k. So avoid them, and don't be afraid to decline job offers if you don't like the feel of the company. Soda is cheap; after work go sailing or something, don't hang around at work playing video games! When the company makes an offer it will be less than their maximum, so counter-offer for 5-10% more - it's far easier to get this now than it will be to get a raise later.

Continued dbhub development
Posted 12:34:07 AM on Sat, 14th January 2012 by Dean

We use dbhub at our LANparty as our direct connect server software of choice. I am the first to admit that it has gone stale upstream, but feature-wise it's still ahead of the alternatives on Linux or Windows.
The sourceforge project page has sort of fallen to pieces, and my attempts to contact anyone involved have failed. So I have taken the latest tarball (which is up to date with CVS on sourceforge) and dumped it into git on github. From there I have applied some patches from Gentoo, imported a .spec file I found (and disclaim), and created some rudimentary debian package control files.
I welcome people forking and improving things. I have been able to compile dbhub on both i386 and amd64, and Gentoo claims to be compiling it on PPC and others. The dbhub project has in the past provided .ipk's for openwrt, which is mips architecture - so I suspect it's portable enough. Check it out at

Some thoughts on promoting Perl Weekly Newsletter
Posted 7:00:40 PM on Mon, 17th October 2011 by Dean

I don't claim to be an expert by any means, but I have read a few books on small business marketing (internet and non-internet) in my endeavours with my LANparty event. So based on that reading plus my own experiences, here are some thoughts, in no particular order, on promoting the Perl Weekly Newsletter - and really any other email newsletter. The reader is encouraged not to just use them as-is, but as a starting point for further ideas and experiments to find what works for you.

10 Perl Coding Tips
Posted 9:32:04 PM on Mon, 26th September 2011 by Dean

1. Always 'use strict' and 'use warnings' - it's been said a million times, but do it. Optionally remove 'use warnings' when moving code to production. If you can argue for or against this, then this tip isn't for you.

2. Use the 'perltidy' program to tidy up your code (you can apt-get it, yum install it, etc). It tidies up your perl code - or perhaps someone else's code - in seconds, and it's configurable if you prefer different layouts. Be warned though: running perltidy and then checking in to source repos breaks the 'blame' connection with the original author of each line.

3. Learn about the concept of 'context' as soon as possible. Don't ignore it. Context is at the very heart of perl programming - and related to tip 6.

4. Use Perl::Critic (or while you are still learning). It's a great way to improve your code and your coding. It's like having a crowd of perl experts critique your code and provide detailed feedback.

5. Learn how to use 'perldoc', and get into the habit early of writing pod yourself. Essentially you just type 'perldoc Some::Module' and you get something that looks like a manpage explaining how the module works.

6. Don't use 'length' to test variables - Perl isn't Java :) In Perl, variables can stand alone in the conditional part of a control statement. Just be aware that '0' also counts as false. Oh, and avoid warnings in log files by remembering to test that a variable has content before regexing it or sticking it in strings - use your head about this, as you don't need to get carried away.

7. Don't have two modules 'use' each other. I've seen it, and I can't imagine how 'perl' is supposed to make sense of such a scenario.

8. Start learning about how to write tests with Test::More and its friends. Use the 'prove' command to run your tests in bulk, and start getting in to the habit of writing testable code.

9. Read perl books! - Check out the O'Reilly series and various other publishers.

10. Understand that there isn't one 'proper' way of 'throwing exceptions' in Perl. There are a few ways; two common ones are the eval-die method, which is somewhat Java-ish, and the check-a-return-value-then-inspect-a-variable approach.

Send email via a pipe to a script with exim
Posted 10:33:11 PM on Wed, 17th August 2011 by Dean

Here is a short example of how to have exim4 pipe emails to a script. Basically it uses the 'pipe' transport to feed the email to the script via STDIN. I'm going to assume the reader is familiar enough with exim that I won't explain what transports and routers are, or where they belong in the exim config file. In this example I will have exim feed all emails for a single domain to the script; you can configure exim to use other criteria such as user names, regular expressions, database lookups etc. The exim4 documentation will explain these options. Anyway.

You should begin by creating a domain list such as

domainlist cmd_domains =
Then either add cmd_domains to local_domains, or look for the dnslookup driver and add the new list alongside local_domains.

Add the below to the router section (the router name, cmd_router here, is up to you)
# command router
cmd_router:
  driver = accept
  domains = +cmd_domains
  transport = cmd_transport

Add this to the transports section
cmd_transport:
  debug_print = "T: using cmd_transport"
  driver = pipe
  command = /tmp/
  delivery_date_add
  envelope_to_add

There are lots of options as to what's sent to the script; you will likely want to review them. Two such options above are 'delivery_date_add' and 'envelope_to_add', which add the date of delivery and the envelope 'to' field to the data sent to the script.

The 'command' option is where you list your command. You can add exim config variables to that command string, and when exim runs the command it will place those values in the command's arguments.

As for the script, here is a sample. The message data is piped to the script via STDIN. This script is not very useful, but it demonstrates how to read STDIN in perl.

#!/usr/bin/perl
use strict;
use warnings;

# append the raw message to a file; die loudly if the file can't be opened
open my $fh, '>>', '/tmp/output.txt' or die "open: $!";
print $fh <STDIN>;
close $fh;

And that's it! Now it's up to you to do something with the email. You can use some language other than perl, or even a compiled program - just read STDIN and examine the program arguments.

using perl in %pre in Anaconda
Posted 10:36:00 AM on Tue, 9th August 2011 by Dean

Here is how to make a perl image for anaconda (the installer for fedora, redhat, centos etc), which will allow you to use perl in the %pre stage. I've used staticperl from There are a small and a big staticperl version, which include just a few or quite a lot of perl modules respectively.

# make an ext2 image of 50 megs and format it ext2, and mount
dd if=/dev/zero of=perl5.img bs=512 count=100000
losetup /dev/loop0 ./perl5.img
mkfs -t ext2 /dev/loop0
mount -t ext2 /dev/loop0 /mnt

# copy the staticperl binary in (bigperl.bin, as used in %pre below) and unmount
cd /mnt
cp ~/bigperl.bin .
cd ~
umount /dev/loop0
losetup -d /dev/loop0

# thats it!

Now just put that somewhere accessible via http and specify it as an update image when you boot to anaconda.
linux updates=
Then use it in %pre with...
%pre --interpreter /tmp/updates/bigperl.bin

Icons for DIA
Posted 9:22:23 AM on Thu, 2nd December 2010 by Dean

It's hard to find icon sets for Dia. Overwhelmingly, searching for them finds links to forums and mail archives where people are asking for them.

Here are some links I have found.

Some Rack Server Icons

Some icons converted from Gnome

Expanding RAID using Areca
Posted 3:16:03 PM on Thu, 23rd September 2010 by Dean

It's extremely easy to expand a RAID set on Areca RAID cards. I have an ARC-1261; however, all the Areca cards have the same interface and use the same management software. Good stuff! It's just a pity there isn't a standard for hardware RAID management in linux/freebsd like there is in OpenBSD. Areca is possibly the best raid card for OpenBSD, but regardless, a note of warning: these steps may help you on Linux, FreeBSD and Windows, but not in OpenBSD.

So, this is how I have expanded my RAID 5 array in FreeBSD using the areca-cli command line management tool.

There are four steps. Three of them are done on the RAID card; the fourth is at the OS level.

  1. Plug in a new drive. Ideally it should be the same as the other drives in the candidate RAID set. Turn on your computer, assuming it boots to your OS successfully. Go to the command line, run areca-cli, then enter your admin password with set password=0000. Now type rsf info and check that your raid array is healthy, that is, that the State is Normal. Then check that the new disk is present with disk info. If your raid array isn't healthy or the new disk isn't present... well, go fix it :) The new disk should have a Usage of Free.
  2. From the two info commands in step 1 you will know the raid set number (the left-most column of the rsf info output) and the disk number (the left-most column of the disk info output). We can now expand the raid set with the following command: rsf expand raid=1 drv=13. Note that raid=1 refers to the raid set, not the raid level! So where I have raid=1 you should use the raid set number found in step 1, and similarly where I have drv=13 you should use your disk number. If you run rsf info again, you will see that the State is now Migrating. This will take some time, and you can see the progress as a percentage with vsf info. You can speed it along by increasing the background task priority with sys changept p=3; remember to set it back to 1 (Low) when you're done.
  3. Expand the volume set.
  4. Expand the partition and filesystem. This is specific to your operating system. I won't provide details here, except to say that in freebsd one uses the growfs command, and if you formatted the drive raw rather than slicing/partitioning it (yes, you can do that in FreeBSD) then there is no need to worry about enlarging the slice/partition.

It does take some time, but it will resume even if you shut down the computer, and you can continue to use the file-system while it's expanding (albeit more slowly).

Using microdc2 for headless DC++ servage
Posted 11:41:54 PM on Tue, 18th May 2010 by Dean

microdc2 is a free CLI-based Direct Connect client for unix-like operating systems. Perhaps its most attractive feature is that it has minimal external dependencies and is incredibly fast.

It's an excellent choice for running a linux/bsd DC++ server.

Firstly you will need to install it. You should be able to work that out yourself. Use yum or apt-get depending on your Linux, or the ports tree in your BSD. If this is beyond your knowledge - stop now and look at another option.

Secondly, you need to install screen. This will allow you to run the microdc2 program in a console and detach from it, then re-attach later. Install screen then read its man page :)

Third, you should run microdc2 as a non-root user who has only read access to your files. Now run microdc2 as that user for the first time. su'ing to that user is an ok idea; sudo may not set the HOME environment variable correctly.
This is important as microdc2 looks for its config in ~/.microdc2

Fourth, set some shares with the share command, i.e. share /storage/vol1. Just repeat the command with other locations; microdc2 will add them to its list and start hashing them.

Fifth, quit microdc2 with the exit command, and edit the ~/.microdc2/config file with your favourite editor. The format of this file is just commands from the microdc2 command line, one command per line. These are run when the program starts, thus configuring the program. It's worth noting that nothing you do on the command line, other than adding shares, is preserved when you exit. So everything else goes in the config file manually. Oh well.

Here is a sample config file...

set listenport 10101
set nick myservernick
set email
set description "dont ask for slots"
set downloaddir /storage/downloads
set speed LAN
set active 1
set auto_reconnect on
set slots 15

What does each line mean? Before I tell you, remember that there is a README in the microdc2 tarball, which is likely installed in /usr/share/doc/microdc2 or /usr/local/share/doc/microdc2.

set listenport 10101

The port microdc2 will listen on. For a non-root user this has to be a high port, i.e. > 1024; other than that, just pick something at random like I have. Then open it on the local firewall - iptables for linux, ipfw or a number of others on BSD.

set nick myservernick

The nickname of the server.

set email

Your contact email.

set description "dont ask for slots"

The description as shown in the user listing. Set it to whatever you like.

set downloaddir /storage/downloads

This is where files that you download will be saved. If you leave it out, it will default to whatever the current directory was when you started microdc2. Even if you are only sharing, it's not a bad idea to set it to something.

set speed LAN

The speed value is just meta-data, just set it to LAN.

set active 1

This tells microdc2 to actually do something, other than just accepting configuration.

set auto_reconnect on

With this on, microdc2 will continue to try to connect when disconnected or when it fails to connect.

set slots 15

This is the number of slots available. If you're not familiar with what slots are in Direct Connect, why are you reading this?


connect <hub>

Has microdc2 immediately attempt to connect to a hub. I like to set this, as the LAN I run has its hub at When I'm at other LANs I can stop it with disconnect on the command line, then type connect <hub>.
For reference, <hub> can be either an ip or a hostname. Most LANs will set up a dns alias for the dc hub as 'dchub', making connect dchub a potentially good general case.

For a full list of options, run microdc2 and type set.

Keep in mind that you can try any setting on the command line, then save it to the config file.

Some ideas for additional configuration:-
You can have microdc2 log to a file using set log <options> and set logfile /some/log/file.txt
Reduce the amount of noise from event notifications by using set display <options>. In both cases the options are the same, and the full list of options is provided when you type set

Sixth and finally, run screen to make a detachable session, then run microdc2. It will start up with all the options you have configured. You can detach with ctrl-a then d, list detached screens with screen -ls, and resume with screen -r.

Always put your files on disks separate from your OS!!!
Mount the archive filesystem with the 'noatime' flag. This turns off updating the 'access time' on files, something you almost certainly don't care about, in exchange for a small increase in performance (I've read numbers claiming a 5-7% increase).
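For example, in /etc/fstab (the device and mount point here are made up - use whichever line matches your OS; the first is Linux/ext3 style, the second FreeBSD/ufs style):

```shell
/dev/sdb1     /storage   ext3   defaults,noatime   0  2
/dev/ada1p1   /storage   ufs    rw,noatime         2  2
```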