Category Archives: Tooling

Compile FreeTDS 1.00 on EL6

Just a note to myself; maybe you will find this useful as well.

In order to compile freetds-1.00 you need to have the gcc, unixODBC and unixODBC-devel packages installed.

Next, download and un-tar the freetds package.

For some reason the ODBC_INC variable isn't set properly in the configure script. This leads to an 'sql.h not found' error when the --with-unixodbc switch is used. The fix for this is:

Locate the sql.h file on your system:
find / -iname sql.h -print

Edit the FreeTDS ./configure script and set the variable to the path you found. The value is specific to your system, so make sure you alter it accordingly.
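On my EL6 box this boiled down to one extra assignment near the top of ./configure; the path is whatever the find command above reported, so treat /usr/include as an example:

```shell
# Path found via: find / -iname sql.h -print  (EL6 with unixODBC-devel installed)
ODBC_INC="/usr/include"
```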

Next, configure the build:
./configure --with-tdsver=7.0 --with-unixodbc=/usr/local --includedir=/usr/include

Build and install, in sequence:
make
make install
make clean



Howto: change the notification subject and allow KB images in GLPI version 0.90.3

GLPI released their new version 0.90.3.
With each new release two questions seem to be very persistent. These questions are:

  1. How can we change the default notification prefix [GLPI] in the email subject?
  2. How do we enable images in the KB articles?

In this article you can read my personal opinion on the matter and how to change this GLPI behavior.

Why would you want to change the GLPI notification prefix?

The most obvious reason is to let your customers quickly identify your company's tickets. The rule of thumb in modern UI design is enabling users to quickly 'scan, select, act'. Changing the subject to something intuitive enables your customers to do just that.

Another point of interest is the possibility to daisy-chain multiple installations of GLPI. By configuring the notification subjects and schemes correctly you can daisy-chain multiple installations, allowing cross-organization enterprise environments to be set up. This is impossible when all installations identify themselves as 'GLPI [key]'.

How to set your custom prefix in GLPI 0.90

In order to alter the subject prefix in GLPI 0.90, simply configure your prefix under Administration > Entities > [your entity] > Notifications > 'Prefix for notifications'. Changing this configuration field correctly alters the prefix to your liking. No further code hacks are required or advised.

Why would you want images in your KB?

Well, this is -in my humble opinion- a no-brainer. One image shows more detail than I can describe in a thousand words. Images also help speed up the resolution process, especially during nightly hours, and they let the engineer intuitively compare the actual situation with the documented one. Is it all positive then? No, there are some downsides to consider as well.

An image doesn't replace the engineer's know-how, and sometimes you want to trigger that knowledge explicitly by not showing any images. Updated applications might look different, actually slowing down the resolution process. Another, more technical, downside is web server storage: all images need to be stored somewhere and might needlessly clutter the support system. My point of view is that you need to decide what's best for your situation. Sadly GLPI doesn't let you choose yet; it forces images to be removed. If you do need image support, apply the code hack below.

Be aware: this won't enable image export to PDF.

How to enable images in the KB

First we need to enable the insert function that lets us add images using the TinyMCE editor. For this, two changes need to be made.

In the inc/html.class.php file, on lines 3837 and 3871, comment out (//) the lines that read '_html = _html.replace…'.

Optionally you can enable the 'image' button by adding 'image' to the 'theme_advanced_buttons2 :' line.
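Sketched together, the two inc/html.class.php tweaks look like this; the line numbers and the button list are from my 0.90.3 install, so treat them as illustrative:

```php
// inc/html.class.php (GLPI 0.90.3; line numbers shift between releases)

// Around lines 3837 and 3871: comment out the replace calls that strip images:
// _html = _html.replace(...);

// Optional: append 'image' to the second TinyMCE toolbar row, e.g.:
// theme_advanced_buttons2 : '...,image',
```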


The next step is to make the images display. Without this change the htmLawed filter will prefix image sources with a 'denied:' tag, effectively telling the browser not to show the image. The resulting HTML, including the 'denied:' tag, is also stored in the database, so images pasted before the next code modification stay disabled; re-enabling them afterward requires a search-and-replace statement in the database. (See comments below.)

In the file /lib/htmlawed/htmLawed.php, on line 47, append ';src: data' to the end of the line.
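The htmLawed tweak amounts to allowing the data: scheme for src attributes. Abbreviated before/after, with '...' standing in for the untouched scheme list in your copy:

```php
// lib/htmlawed/htmLawed.php, line 47 -- before:
//   $C['schemes'] = 'href: aim, feed, file, ftp, ..., telnet; *:file, http, https';
// after (';src: data' appended, so inline base64 images are no longer rewritten to denied:):
//   $C['schemes'] = 'href: aim, feed, file, ftp, ..., telnet; *:file, http, https;src: data';
```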

Make sure you use a screenshot tool that places an inline HTML image on the clipboard. Greenshot is a free tool that does this out of the box.


Recover from a failed Dell PERC RAID5 logical disk

We encountered a failed logical disk on a Dell PERC SAS controller. A quick review showed that two of the four disks configured for RAID5 had failed. This event triggered the PERC controller to put the logical disk offline. Now what…

Everyone knows that with RAID5 distributed parity across 4 disks, the maximum redundancy is the loss of 1 disk. With two failed disks data loss is usually inevitable. So, if this is also the case with your machine, realize your chances of recovering are slim. This article will not magically increase those chances; the logic of the Dell PERC SAS controller actually might.

First off, I will not accept any responsibility for damage done by following this article. It is intended to offer the troubleshooting engineer a possible solution path. Key knowledge is needed to interpret your situation correctly, and with that the applicability of this article.

TIP: Save any data still available to you in a read-only state.
(If you have read only data, this article does not apply to you!)

What do you need?

Obviously you need to have two replacement disks available.
You also need an iDRAC (Dell Remote Access Card) or some other means to access the system log.
You need physical access to the machine (to replace the disks and observe the system behavior).


What to do?

Our specific setup:
 Controller  0
 -Logical volume 1, raid5
 + Disk 0:2     Online
 + Disk 0:3     Failed
 + Disk 0:4     Online
 + Disk 0:5     Failed

The chance that both problematic disks 0:3 and 0:5 failed simultaneously is close to zero. In other words, disks 0:3 and 0:5 will have failed in a specific order, which means the disk that failed first holds stale data. For a recovery attempt we need current, not historical, data. So we first need to identify the disk that failed first: that is the disk we will be replacing shortly.

Identifying the order in which the disks failed
Luckily most Dell machines ship with a Dell Remote Access Card (DRAC). HP and other vendors have similar solutions. The iDRAC keeps a system log in which it reports any system events, including the events triggered by the PERC SAS controller. Enter the iDRAC interface during boot (<Ctrl+?> on our machine) and review the event log. Use the timestamps in this log to identify the disk that failed first.

In our case, disk 0:5 failed prior to disk 0:3. Be absolutely sure that you identify the correct disk: we want the most current data to be used for the rebuild. If historical data is used instead, you will end up with corrupted data. Write the number of the disk that failed first on a piece of paper; this is the disk that needs to be replaced with a new one. This can be a stressful situation (for your manager), and a stressed manager breathing down your neck can confuse you. You do not want to mix the disks up, so keep checking your paper, and if you're not sure, don't second-guess: check again.

Exit the iDRAC interface, reboot the machine and enter the PERC controller, usually via <Ctrl+?> during boot. Note that the controller also reports the logical volume as offline. If this is the case, enter the Physical Disks (PD) page (Ctrl+N in our case). Note here as well that disks 0:3 and 0:5 are in a failed state. Select disk 0:3, force it online using the operations menu (F2 in our case) and accept the warning. DO NOT SELECT OR ALTER THE DISK WITH HISTORICAL DATA (0:5)!

Now physically replace disk 0:5 with the spare you have available. If all is well, you should notice the controller automatically starting a rebuild (LEDs flashing frantically). Review the screen and note that disk 0:5 is now in a rebuilding state. Most controllers let you review the progress; on ours it was hidden on the next page of the disk details in the Physical Disks (PD) page, reachable with the tab key. Wait for the controller to finish. (This can take quite a while: clock how long one percent takes, multiply by 100 and divide by 60 to get an estimate in hours. Get a coffee or a good night's sleep.)

Once the controller is finished it will in most cases place the replaced disk 0:5 in an OFFLINE state and the forced-online disk (0:3) back in a FAILED state. Now use the operations menu to force disk 0:5 (the rebuilt disk) online and note the logical volume becoming available in a degraded state. Reboot the machine and wait for the OS to boot.

All done?

Well, the logical volume should be available to the OS. This doesn't mean there is any readable data left on the device. Usually this becomes apparent during OS boot: most operating systems perform a quick file-system check during mount, and most errors will surface there. One of two things can happen:

1) Your disk is recovered but unclean and will be cleaned by the OS, after which the boot will be successful, or…
2) the disk is corrupted beyond the capabilities of a basic check disk.

In the latter case you might want to attempt additional repair steps and perform an OS partition recovery. If this is your scenario, in most cases the chance of successfully recovering the data is very slim.

I hope you, like me, successfully recovered your disk.
(Thanks to the imminent-failure detection and precaution functions the Dell PERC controller implements.)

Hate to say it, but PowerShell is cool!

Just to put it out there.
Some history.

There was PowerShell. At first, I didn't quite understand its potential and role in the Microsoft product suite. Then came the 'not-quite-headless' Windows server. I was like: oooh, it looks like Microsoft is changing/learning and stripping useless overhead (read: things that can potentially break, need maintenance and cost resources, and thus money). Up to this point I still didn't quite understand the PowerShell potential and didn't bother to look into it.

Then last month a team member needed to install Oracle Fail Safe on a Microsoft 2012 box. He needed to run a PowerShell script to set some things right, and the script didn't quite work. Hating the fact that I (a former Microsoft SE) was of no real help, I figured: let's spend some time and start learning PowerShell. It was time this ex-Windows NT4/2K/2K3 guy got to understand the puppy everybody is revved up about.

In my search for a good learning site I came across the Microsoft Virtual Academy and followed the course. After doing part of it my conclusion was: PowerShell (v3) is way cooler than I anticipated!


At first I thought Microsoft was creating another Linux-shell clone. But don't be fooled (like I was at first) by the Linux-looking pipe approach. In the Microsoft implementation it's not text that's being redirected, it's objects. For those not familiar with objects: instead of sending the unstructured text output of a command, it sends you the whole thing, with structure and methods and everything. This lets you do the wildest things without losing the oh-so-important overview, and it makes waaaay more sense.

The simplest way to explain this is by example. The following command gets the directory (as an object) and pipes it to a select, where we pick specific properties from the object and then manipulate ("@{…}") some of the output while we're at it, because we can. The result: a logical flow of information ending in the usable, desired form I WANT IT in. Did you ever use awk?!
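A comparable pipeline looks like this; the chosen properties and the @{…} calculated field are my own illustration:

```powershell
# Take the directory listing as objects, pick a few properties, and
# compute one on the fly -- no text parsing anywhere.
Get-ChildItem | Select-Object Name, LastWriteTime,
    @{Name = 'SizeKB'; Expression = { [math]::Round($_.Length / 1KB, 1) }}
```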

This object approach makes you wickedly flexible: you can format, use and manipulate data as you see fit.

Another cool thing is that you are not bound to some server's command-line console. You can output to the console, sure, but there are also several nice options. For example: send a Get-Help article to a window with the -ShowWindow parameter, which lets you view and search content in a nice scrollable window. Or send the table from the previous statement to a GridView.
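For example (both the -ShowWindow parameter and Out-GridView ship with PowerShell 3.0):

```powershell
# Pop the help article into a separate, searchable window:
Get-Help Get-Process -ShowWindow

# Send a listing to a sortable, filterable grid instead of the console:
Get-ChildItem | Select-Object Name, Length | Out-GridView
```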







If this isn't cool enough yet, there are tons of very cool features incorporated into PowerShell. A few of many: an updatable help system, out-of-the-box remoting to PS sessions on different machines, remoting using sessions locally, importing PS management modules from remote machines (so you don't need to install them over and over again), and a PS web application for remote (mobile) management using PowerShell. Yeah, the list goes on.

Sadly all the nice graphical perks still need that blasted explorer.exe process. I guess Microsoft still needs to develop an X alternative for that. Please Microsoft, lose the need for the explorer.exe process and you'll regain my trust 😉

Wrap-up: no tech is perfect, so my advice: definitely look into the free (yeah, IT'S FREE) Getting Started training by Microsoft.

Now let's go back to my beautifully tweaked and optimized Oracle Enterprise Linux deployment. With awk(ward) GNU text pipes that no one really understands. Without the cool management interface, but still the OS I prefer in my HAHP backend.



Extract all content to disk from a SPS2007 content DB using PHP

Today I ran into a problem. We needed to migrate a huge amount of data from an old SharePoint 2007 content database without the availability of the MOSS front-end. All I had was the database and a corrupted SharePoint install that wasn't going to help me a lot.

To overcome this problem I decided to write a little PHP application that would do the task for me. I already had WAMP set up on my desktop, so I figured this to be the quickest route. Then I figured: maybe other people face this problem as well. So here it is, the code, and some helpers to get you going.

/**
 * @name      : index.php - MSSQL content connector
 * @author    : Chris Gralike
 * @version   :
 * @copyright : WETFYWTDWI - what ever ** you want to do with it, no guarantees 🙂
 * This script ONLY READS the database tables, so dont give it more permissions 🙂
 */

// What to search for in the directory structure.
$search = '';
// Where to put the files.
$createdir = './Downloaded';
// What server to connect to.
$ServerName = 'amisnt05.amis.local';
// Database connection parameters.
$connectionInfo = array('Database' => 'MOSS_PROD_WSS_Content_WebApp02',
                        'UID' => 'php_login',
                        'PWD' => 'welcome12345678');
// This can be a very long task to complete, so disable the time limit.
set_time_limit(0);

// Create a connection.
$conn = sqlsrv_connect($ServerName, $connectionInfo)
        or die(print_r(sqlsrv_errors(), true));

// The SQL statement to query the AllDocs tables.
$tsql = "SELECT dbo.AllDocs.Id,
                dbo.AllDocStreams.Id AS StreamId,
                dbo.AllDocs.DirName,
                dbo.AllDocs.LeafName,
                dbo.AllDocStreams.Content
         FROM dbo.AllDocs
         RIGHT OUTER JOIN dbo.AllDocStreams ON dbo.AllDocs.Id = dbo.AllDocStreams.Id
         WHERE AllDocs.DirName LIKE '%{$search}%'
           AND AllDocs.SetupPath IS NULL
           AND AllDocs.Extension != ''";

// The result set.
$result = sqlsrv_query($conn, $tsql);

// Process the results.
while($row = sqlsrv_fetch_array($result, SQLSRV_FETCH_ASSOC)){
    // When $create is true, the folders are created in the foreach below.
    $create = false;
    $dirptr = $createdir;

    // Find the folders and recreate them, starting from the search string.
    $folders = explode('/', $row['DirName']);
    foreach($folders as $val){
        if($val == $search || $create == true || empty($search)){
            $create = true;
            $dirptr .= '/'.$val;
            if(!is_dir($dirptr)){
                mkdir($dirptr, 0777, true);
                echo "INFO: created $dirptr <br/>";
            } else {
                echo "WARN: skipping $dirptr already exists. <br />";
            }
        }
    }

    // Recreate the file.
    $filepath = $dirptr.'/'.$row['LeafName'];
    if($fp = fopen($filepath, 'w')){
        fwrite($fp, $row['Content']);
        fclose($fp);
        echo "INFO: file {$row['LeafName']} written. <br />";
    } else {
        echo "ERROR: file {$row['LeafName']} could not be written in $filepath. <br />";
    }
}

// Close the database connection.
sqlsrv_close($conn);
Simply configure the first vars in the script and run the file. It might take quite a while before you get any output.

TIP: Use $search to narrow down the query a bit.
It searches the DirName (i.e. Site\DocLib\Folder\SubFolder\)

The output will look like this.

INFO: created ./Downloaded/SearchCenter
INFO: created ./Downloaded/SearchCenter/Pages
INFO: file facetedsearch.aspx written.
WARN: skipping ./Downloaded/SearchCenter already exists.
WARN: skipping ./Downloaded/SearchCenter/Pages already exists.
INFO: file resultskeyword.aspx written.

Found this useful? Then please leave a comment 🙂

WARNING: you need the Microsoft SQL Server Driver for PHP (sqlsrv), not the old php_mssql extension.
Here are some tips on where to get it; my version was PHP 5.3.8.

First off, mssql isn't supported out of the box anymore. When using PHP 5.2 and up, you need to get the Microsoft Drivers for PHP for SQL Server; check the Microsoft download site for more information.

It's a bit of a hassle, I'll give you that.
Challenge: I needed 1.5 hours to find the correct lib, install and coding info and get it working.

Basically it requires you to download the native client and the drivers, and to correctly update the php.ini your WAMP instance is using.

Tip: Use <?php phpinfo() ?> to find the right version for your PHP compilation.
Search for : PHP Extension Build : API20090626,TS,VC9


Be sure to verify the SQL query inside the $tsql variable and alter it accordingly. The other part should be pretty straightforward.

Can't wait…

After reading this benchmark, I was wondering: isn't it strange that their own benchmark tells us they beat good old Nagios on all elements?

Do not get me wrong, I still believe that Centreon is a wonderful product. But I can't ignore the fact that Nagios Core / Nagios XI are on the move as well, and for some reason I get the feeling they are referring to history. Last I was informed, Nagios also employed a team of developers working on Nagios XI. And next to that, Nagios lets open-source programmers commit code.

Please, dear Centreon, don't spoil your good name and stick to the facts. You have a great product; there is no need for bashing. And don't use an unreviewed translation in your benchmark documents.

Having said that, what happened to Icinga?

It seems that Icinga is well on its way as well.

Anyway, any monitoring needs?

It seems a crossroad is nearing in which you are forced to pick a side and stick by it.

And yet somehow, I can't suppress a feeling of sadness about this development.
Goodbye years of scripting, hacking and rewriting happiness; hello 'next-next-finish' world 😉

Any insights?
Please share them with us web-reading IT folk :=)

Backup script for GLPI

If you are using the great GLPI tool, you will notice that the market value of the data inside increases rapidly. This usually also implies that it is 'wise' to back this data up.

There are many ways to do so using nice plugins, even nicer GUIs, and apps. Headstrong as I am, I wanted something very basic and functional, easy to configure, and workable in an environment that has multiple GLPI installations. The answer to my question: build something of your own.

So I scripted something for Linux that backs up the entire GLPI tree (where the uploaded files reside) and the SQL database.

Because we use deduplicated backup storage (Data Domain), I don't have to worry about duplicate data. If you need to, add something to clean the backup store; this script doesn't account for that 🙂

This is the script:

#!/bin/bash
# Wrote by Chris
# Goal is to easily back up GLPI in a multi-installation environment.

# Alter these paths to match your environment (the values are examples). #
GLPI_DIR="/var/www/glpi";
BACKUP_DIR="/backup/glpi";
LOGFILE="/var/log/backup.log";

#Dont change anything after this point, unless you know what you are doing #
#No guarantees, use this script at your own risk                           #

# Do some generic stuff here
# Add checks if you like 🙂
MYSQLDUMP=`which mysqldump`;
AWK=`which awk`;
FIND=`which find`;
DATE=`date +%d.%m.%Y`;
LOGTIME=`date +"%d-%m-%Y %H:%M"`;
GLPISIZE=`du -sh $GLPI_DIR | awk '{ print $1; }'`;
DBCONFIG=`find $GLPI_DIR -name "config_db.php"`;
DBNAME=`grep "dbdefault" $DBCONFIG | awk -F '=' '{ gsub(/\047/,""); gsub(/\;/,""); gsub(/ /,""); print $2;}'`;

# Start working....
echo -e "$LOGTIME \t## New backup started ##" >> $LOGFILE;
echo -e "$LOGTIME \tpacking: $GLPISIZE.. into $BACKUP_DIR/backup.$DATE.tar.bz2 ..." >> $LOGFILE;
tar -cjPf $BACKUP_DIR/backup.$DATE.tar.bz2 $GLPI_DIR >> $LOGFILE;
echo -e "$LOGTIME \tCreating mysqldump into $BACKUP_DIR/sqldump.$DATE.sql ..." >> $LOGFILE;
mysqldump $DBNAME > $BACKUP_DIR/sqldump.$DATE.sql;
# Go back to original working directory.
echo -e "$LOGTIME \tAll done..." >> $LOGFILE;
echo "all done! ";

exit 0;

If you want to install this script, follow these instructions:

#This is for Oracle Enterprise Linux / RedHat EL distro`s
#Your environment might be slightly different.
cd /opt
mkdir ./scripts
cd scripts
#the script name is arbitrary; glpi_backup.sh is used here as an example.
vi ./glpi_backup.sh
#insert the code above into the editor and save the lot using ':wq'
#alter the top of the script to match your environment.
chmod +x ./glpi_backup.sh
#next create a symbolic link in cron.daily, this might be different in your linux distro (see 'man cron').
ln -s /opt/scripts/glpi_backup.sh /etc/cron.daily/backup
#monitor the /var/log/backup.log for details

Happy backing up 🙂

(Don't forget to clean the backup dir on a regular basis if you don't have the luxury of deduplicating storage.)

mod_access.so missing in Apache 2.2.19? Check this!

Hi there admins,

Today I spent an hour figuring out why the "Order" directive in Apache 2.2.19 resulted in errors.

Knowing that "Order" was previously provided by "mod_access", I started my quest to figure out why that module was missing. What did I find?

mod_access was renamed/merged into "mod_authz_host.so",
as described in the Apache 2.2 upgrading documentation.

After adding the module again it worked like a charm 🙂
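If your build has the module compiled shared, the fix is a one-line LoadModule in httpd.conf; the module path below is typical but system-specific:

```apache
# Order/Allow/Deny moved from mod_access into mod_authz_host in Apache 2.2
LoadModule authz_host_module modules/mod_authz_host.so
```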

Howto compile Apache 2.2.x? Here's a hint 🙂

#--prefix: where to install?
./configure --prefix=/u01/proxy/ \
--enable-ssl=shared \
--enable-proxy=shared \
--enable-proxy-connect=shared \
--enable-proxy-ftp=shared \
--enable-proxy-http=shared \
--enable-proxy-ajp=shared \
--enable-proxy-balancer=shared \
--enable-cache=shared \
--enable-file-cache=shared \
--enable-mem-cache=shared \
--enable-disk-cache=shared \
--enable-deflate=shared \
--enable-http=shared \
--enable-dav=shared \
--enable-vhost-alias=shared \
--enable-rewrite=shared \
--enable-so=shared \
--with-ssl=/usr/bin/openssl > ./reviewlog.txt
make >> ./reviewlog.txt
make install >> ./reviewlog.txt
make clean

Certificates, what to know…

Certificates are a tough and complex world to be in.

Here are the main things to remember when renewing old certificates, or requesting new ones 🙂

  • CA is short for "Certificate Authority", usually a party that 'signs' certificates on behalf of the requester. Because someone other than the party hosting the site signed the certificate, it is assumed that a separation of interests applies.
  • CSR is short for "Certificate Signing Request" and contains the information a CA needs to create a "signed" certificate.
  • The private key is the server portion of the certificate material; it enables the server to decrypt traffic that remote clients encrypt using the provided certificate. This part should always be kept safe and should never be exchanged with any 3rd party. Whoever has the private key can assume the identity of the server/service to which the certificate applies.
  • The public key is the client portion of the certificate; a client uses it to encrypt traffic that only the holder of the private key can decrypt. Because only the server holds the private key, it is theoretically the only one in the world that can decode the traffic containing the session key exchanged at connection time.
  • The certificate's CN (Common Name) should always match the URL used by the visiting client, i.e. for Google the CN would be www.google.com.
  • The certificate's O (Organization) should match the company listed in a whois performed on the domain name, i.e. for Google it would be "Google Inc."
  • When you want to use certificates for mobile devices, a special certificate may be required; check with your CA for more information.
  • SAN is short for "Subject Alternative Name", not to be mistaken for "Storage Area Network"; it is a special certificate that allows multiple CNs (multiple sites), and it is also used in a number of Microsoft products.
  • If you have an option on this point, don't use certificates with an MD5 cryptographic hash. MD5 is considered weak and might be blocked by future browsers as insecure; its weaknesses allow attackers to create a 'valid' certificate and steal the identity of your site (a tough read, for the wiz-kids).
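Putting the key/CSR/CA pieces together: a typical request starts with generating a private key and a CSR. All the subject values below are examples; substitute your own CN and O:

```shell
# Generate a new 2048-bit RSA key and a CSR in one go (non-interactive).
# CN must match the site URL, O the whois organization (see the list above).
openssl req -new -newkey rsa:2048 -nodes \
  -keyout server.key -out server.csr \
  -subj "/C=US/O=Example Inc./CN=www.example.com"
# Review what you are about to send to the CA:
openssl req -in server.csr -noout -subject
```

Keep server.key to yourself; only server.csr goes to the CA.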

This should help you on your way 🙂

This might also be useful: a CSR checker, which also performs a few checks to make sure all the info inside the CSR adds up.

Draw dots in images using PHP, XMLHttp and MySQL

A family member asked me if it was possible to use HTML, PHP and MySQL to mark spots on an image of the human skeleton during a medical anamnesis. He wanted to easily mark his patients' complaint spots, needed to be able to remove marks when they were faulty, and wanted it all stored in a database for easy reference and backup.

Seeing the complexity of it, I accepted the challenge he laid out for me.

The most complex part was how to remove spots without deleting the actual fixed image of the skeleton. The solution I came up with uses a fixed background image in HTML with a transparent overlay that I update using PHP and XMLHttp requests.
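A minimal sketch of the overlay trick: a transparent PNG the same size as the skeleton image, with a marker dot drawn at the clicked position. The size, coordinates and color here are made up for illustration; in the real script they would come from the XMLHttp request and the database:

```php
<?php
// Example coordinates; in practice these arrive via the XMLHttp request.
$x = 120; $y = 240;

// Fully transparent canvas matching the (assumed) skeleton image size.
$overlay = imagecreatetruecolor(400, 600);
imagesavealpha($overlay, true);
$transparent = imagecolorallocatealpha($overlay, 0, 0, 0, 127);
imagefill($overlay, 0, 0, $transparent);

// Draw one red marker dot at the complaint spot.
$red = imagecolorallocate($overlay, 255, 0, 0);
imagefilledellipse($overlay, $x, $y, 10, 10, $red);

// Ship the overlay; the browser stacks it over the fixed background image.
header('Content-Type: image/png');
imagepng($overlay);
imagedestroy($overlay);
```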

The result of this approach was the following.

All that needs to be done during reporting is merging the two images, which is fairly easy using PHP.

But because I thought someone else might want to use this script as well, sharing it here with the world seemed a nice option. The applications are endless, from damage reporting to location reporting, and in its finished form it is fairly easy to understand and adjust to your needs.

I added the source code for this script to this post. You need a MySQL database, PHP with GD enabled, and a browser that supports XMLHttpRequest (all modern browsers). Simply dump the files into a PHP-enabled location, make sure you create the required SQL table (using the .sql script inside the archive) and have a go with it (you might need to alter the MySQL user/pass inside the PHP script)…


Use the "save as" option to grab the file from this site. Rename the file, unpack, and access the source files inside the rar archive; use WinRAR to unpack.

Let me know what you think, or if you found it useful 🙂

P.s. I wasn't able to verify the author of the skeleton image used. If you are or know him, let me know whether this use is allowed and how to publish the credits.