Compile FreeTDS 1.00 on EL6

Just a note to myself; maybe you will find this useful as well.

In order to compile freetds-1.00 you need to have the gcc, unixODBC and unixODBC-devel packages installed.

Next, download and un-tar the freetds package.

For some reason the ODBC_INC variable isn't set properly in the configure script. This leads to an 'sql.h not found' message when the --with-unixodbc switch is used. The fix for this is:

Locate the sql.h file on your system:
find / -iname sql.h -print

Edit the freetds ./configure script and add the variable yourself. The example given is specific to my system; make sure you alter it accordingly.
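As an illustration, the added variable could look like this; the exact path is whatever the find command above reported for your system, so treat the value below as a placeholder:

```sh
# Point ODBC_INC at the directory that contains sql.h (path is an example)
ODBC_INC="/usr/include"
```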

Next, run the configure script:
./configure --with-tdsver=7.0 --with-unixodbc=/usr/local --includedir=/usr/include

Build and install using, in sequence:
make
make install
make clean


Howto: change the notification subject and allow KB images in GLPI version 0.90.3

GLPI released their new version 0.90.3.
With each new release, two questions seem to be very persistent:

  1. How can we change the default notification prefix [GLPI ] in the email subject?
  2. How do we enable images in the KB articles?

In this article you will be able to read my personal opinion on the matter and how to change this GLPI behavior.

Why do you want to change the GLPI notification prefix?

The most obvious reason is to allow your customers to quickly identify your company's tickets. The rule of thumb in modern system view design is enabling users to quickly 'scan, select, act.' Changing the subject to something intuitive enables your customers to do so.

Another point of interest is the possibility to daisy-chain multiple installations of GLPI. By configuring the notification subjects and schemes correctly you can daisy-chain multiple installations, allowing cross-organization enterprise environments to be set up. This is impossible when all installations identify themselves as 'GLPI [key].'

How to alter the code to support your custom prefix in GLPI 0.90

In order to alter the subject prefix in GLPI 0.90, you first need to configure your prefix in Administration>Entities>[your entity]>notifications>Prefix for notifications. Changing this configuration field will correctly alter the prefix to the one of your liking. No further code hacks are required or advised.

Why do you want images in your KB?

Well, this is -in my humble opinion- a no-brainer. One image shows more detail than I can describe in a thousand words. Images also help speed up the resolution process, especially during nightly hours. They also allow the engineer to intuitively compare the actual situation with the situation documented. Is it all positive then? No, there are some downsides to consider as well.

An image doesn't replace the engineer's know-how, and sometimes you want to explicitly trigger this knowledge by not showing any images. Updated applications might look different, actually slowing down the resolution process. Another, more technical, downside is web server storage: all images need to be stored somewhere and might needlessly clutter the support system. My point of view is that you need to decide what's best for your situation. Sadly GLPI doesn't allow you to choose yet; it forces images to be removed. If you do need image support, please apply the code hack below.

Be aware: this won't enable image export to PDF.

How to enable images in the KB

First we need to enable the insert function that lets us add images using the TinyMCE editor. In order to do this, two changes need to be made.

In the inc/html.class.php file, on line 3837 and line 3871, comment out (//) the lines that read _html = _html.replace... See the screenshots for more details.

Optionally you can enable the 'image' button by adding image to the 'theme_advanced_buttons2 :' line. See the images underneath for more details.
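As an illustration, the toolbar line could end up looking something like this; the exact button list is whatever your html.class.php already contains, only the appended image entry matters:

```js
// Hypothetical toolbar row; append "image" to the buttons already listed
theme_advanced_buttons2 : "cut,copy,paste,|,bullist,numlist,|,image",
```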


The next step is to enable the images to be shown. Without this change the htmLawed plugin will add a denied: tag to the actual images, effectively telling the browser not to show the image. Additionally, the resulting HTML code including the denied: tag will be stored in the database, also disabling this specific image after the next code modification. Enabling the images afterward requires a search-and-replace statement in the database (see comments below).

In the file /lib/htmlawed/htmLawed.php, on line 47, add ';src: data' to the end of the line.
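For reference, the same edit in GLPI 0.84.8 (described further down on this blog) made the end of line 47 look like this; verify it against your own version:

```
t; *:file, http, https; src: data';
```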

Make sure you use a screenshot tool that places an inline HTML image on the clipboard. Greenshot is a free alternative that does this out of the box.


Configure unixODBC for use with PHP and MSSQL on Oracle Linux

This short article describes how to configure unixODBC 2.2.11-10.el5 in conjunction with PHP 5.3.3-26. Many forums out there contain articles that describe the absence of php_mssql drivers in the Oracle yum repo. There are various reasons for that, none of which involve Oracle. No matter what reason you like best, there is a decent alternative in unixODBC.

To help out all the people that are just looking for a solution, I wrote this article. I won't go into depth; I'll just describe the major steps with some hints and tips. Happy reading 🙂

The OS version of my virtual box image:

Linux sandboxpinguin 2.6.18- #1 SMP Mon Mar 29 18:27:00 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
  1. Make sure you have the latest version of unixODBC installed. If Yum is configured correctly the following command should do the trick.
     yum update unixODBC
  2. Download the FreeTDS driver; this is the driver unixODBC will use to connect to MSSQL. Fetch the freetds-stable.tgz tarball from the FreeTDS site, using wget if it is available on your production environment (remove it after running the command). Make sure you are in a desired location, like your home folder.
  3. Unpack the gzipped tarball by running the following command
    tar -xf  ./freetds-stable.tgz
  4. Browse into the unpacked folder and run the configure command
    ./configure --with-tdsver=7.2 --enable-msdblib
  5. If the configure ran without any issues you can compile and install the driver by running, in sequence:

     make

     and then

     make install

     and then

     make clean

     Congratulations, you are now the proud owner of the FreeTDS driver 🙂

The next part tends to get a bit fuzzy; feel free to ask questions in the comments and I'll try to answer them to the best of my ability.

There are a lot of articles available on how to configure the unixODBC DSN correctly. Be advised: the config is specific to your setup and usually needs to be tweaked. To get you going, I'll explain the concepts of unixODBC and point out some documentation, commands and stuff. Afterward there is a short tutorial of the steps I used to configure the ODBC connection.

  1. unixODBC is configured with the odbcinst tool and plain-text ini files;
  2. odbcinst writes into the system-wide /etc/odbcinst.ini (drivers) and /etc/odbc.ini (data sources) files and uses template input files to do so;
  3. The connection can be tested with osql commands from bash, but! it will use the hidden .odbc.ini file in your profile instead of /etc/odbc.ini. PHP and odbcinst use the one in /etc/odbc.ini. If the one in your profile works, then make sure it is identical to the one located in the /etc/ directory. Below is how the osql output will look if configured correctly. (charset isn't relevant in this stage)
  4. Documentation on how to use FreeTDS in conjunction with unixODBC can be found here.
  5. Documentation on how to use ODBC can be found here
    (ignore the freetds configuration there and use point 4 to figure the settings out for your setup)

Next, I'll describe my steps to make it work.

Configuring the FreeTDS driver

  1. Create the file /etc/odbcDriver.ini
  2. Insert the following in the file (check the paths)
    Description     = FreeTDS Driver with protocol v5.0
    Driver          = /usr/local/freetds/lib/
  3. Create the file /etc/odbc.ini
  4. Insert the following and tweak this to match your environment
    Description     = FreeTDS Driver with protocol v5.0
    Driver          = /usr/local/freetds/lib/
    Server          = [SERVERIP]
    Port            = [REMOTE_TSQL_PORT]
    ClientCharset   = UTF-8
    TDS_Version     = 7.1
    Database        = [DATABASENAME]
    Trusted_Connection = Yes      # Required with most MSSQL environments.
  5. Register the ODBC driver
     odbcinst -i -d -f /etc/odbcDriver.ini 
  6. Register the data source
     odbcinst -i -s -f /etc/odbc.ini
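For reference, a complete pair of files could look like the sketch below. The section names between brackets are my own assumption: the data-source name [ExampleSource] matches the one used in the PHP test snippet further down, and the Driver line normally points at the FreeTDS shared object (libtdsodbc.so) rather than at the lib directory. Verify both against your own build.

```ini
; /etc/odbcDriver.ini -- driver template, registered with odbcinst -i -d
[FreeTDS]
Description = FreeTDS Driver with protocol v5.0
Driver      = /usr/local/freetds/lib/libtdsodbc.so

; /etc/odbc.ini -- data source, registered with odbcinst -i -s
[ExampleSource]
Description        = FreeTDS Driver with protocol v5.0
Driver             = FreeTDS
Server             = [SERVERIP]
Port               = [REMOTE_TSQL_PORT]
ClientCharset      = UTF-8
TDS_Version        = 7.1
Database           = [DATABASENAME]
Trusted_Connection = Yes      # Required with most MSSQL environments.
```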

Finally test your config by using the php odbc functions.

// Simple smoke test against the registered data source: 1 + 5 should print 6
$sql = "select 1 + 5 as outcome";

$conn = odbc_connect("ExampleSource", "Username", "Password");
$result = odbc_exec($conn, $sql);
$row = odbc_fetch_array($result);
echo $row['outcome'];

Good luck querying🙂

No tuples available at this result index in […]

Recently I was creating a simple PHP webpage that required an MSSQL connection for a query. I installed the freetds-0.91-15 driver on my Oracle Linux and configured unixODBC-2.2.11-10.el5 appropriately. All went well and I soon had the first results on my page. My next task was to create a simple enough query that would ‘count’ some database rows. Simple enough, right? Well, at least up to the part where I ran into this error:

No tuples available at this result index in […]

All the posts I found referred to multiple result sets, in which case 'odbc_next_result($result);' should be the resolution. The weird part was: pasting the echoed $sql into the MSSQL query box gave me the desired effect, one result with a count of the rows. The same SQL in the PHP script gave me the 'no tuples' error. I nearly fixed it the ugly way (counting the rows in a PHP while loop, which would work) when I had an insight.

In the MSSQL output I noticed NULL fields in the table column that I was counting. So I figured: might counting NULL values somehow trigger the 'No tuples' error I'm getting?

So this is what I ended up testing:

SELECT COUNT([sdk].[Events].[FirstName]) as Counted FROM [sdk].[Events]
WHERE [sdk].[Events].[PeripheralName] like '%Search%'
AND [sdk].[Events].[EventTime] BETWEEN '1' AND '31'

Which resulted in : Warning: odbc_fetch_array(): No tuples available at this result index in /var/www/db.class.php on line 52

SELECT COUNT([sdk].[Events].[FirstName]) as Counted FROM [sdk].[Events]
WHERE [sdk].[Events].[PeripheralName] like '%Search%'
AND [sdk].[Events].[EventTime] BETWEEN '1' AND '31'
AND [sdk].[Events].[FirstName] is not null

Which resulted in a working count query.
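A side note on why the extra predicate makes no semantic difference: COUNT(column) by definition skips NULLs, while COUNT(*) counts rows regardless of NULLs. If you hit the same error, a variant worth trying (my own sketch, not tested against this exact driver combination) is:

```sql
-- COUNT(*) counts rows, not non-NULL values in a column
SELECT COUNT(*) AS Counted
FROM [sdk].[Events]
WHERE [sdk].[Events].[PeripheralName] LIKE '%Search%'
  AND [sdk].[Events].[EventTime] BETWEEN '1' AND '31'
```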

Fix (enable) GLPI 0.84.8 Inline knowledgebase images.

For some reason the inline images keep getting blocked in GLPI. I think database cluttering is the main reason for this. I still think they should make it optional. But hey, who am I to tell INDEPNET what's right 🙂


Enabling (fixing) the images has become a bit more complex compared with my previous post. So, I can't be held responsible for any issues that might affect you after following this article. In my environment it works without any issues.

  1. Again, configure htmlAwed to accept data: in the src: schema.
    1. Open the htmlAwed lib with your favorite editor. The file is located at: GLPI_ROOT/lib/htmlawed/htmlawed.php
    2. Locate line 47.
    3. make the end of the line look like this:
      t; *:file, http, https; src: data';
  2. For some reason the 'denied:' part in 'src=denied:data:image/png;base64…' is saved in the database. Because of this the 'denied:' portion still turns up in the content (not because of htmLawed). You can easily fix this by running the following SQL statement, first in your TEST database.
    update glpi_knowbaseitems set answer =REPLACE(answer, 'denied:', '');
    Query OK, 12 rows affected (0.29 sec)
    Rows matched: 284  Changed: 12  Warnings: 0

If all is well, the images should again be displayed in your knowledge base.

TinyMCE will accept the inline images (inserted from the clipboard with Greenshot) and will display them correctly…

Got any remarks or tips? Let me know 🙂

Howto: Sonicwall SSL-VPN (NetExtender) on Windows 8.1

Those familiar with the Sonicwall SSL-VPN 2000 appliance and Windows are used to connecting to the SSL-VPN using the NetExtender software. Older versions of the appliance will still offer this software when connected using the browser. There are various forums actually providing instructions on how to install this old software on Windows 8.1. Most include instructions like disabling the WHQL check (Windows driver signing), leaving your system vulnerable. Once the software is installed you will probably run into various issues, including: RRAS isn't addressed properly, being unable to connect even though authentication is working fine, or no routes being added after a successful connection is established.

Not many people seem to know that the Sonicwall mobile VPN provider is a built-in option in Windows 8.1. It is -obviously- also the preferred method to connect, because all the Windows security mechanisms are kept in place when using the readily available Sonicwall mobile provider. The instructions below will guide you through the steps required to configure a VPN profile for the SSL-VPN appliance and offer an alternative to the older NetExtender software. Additionally, consider the maintenance options you have implementing these using domain policies ;-)

  1. Type: Windows key + S;
  2. In the search field type: VPN;
  3. Select the ‘manage virtual private networks’ option;
  4. Select ‘Add a VPN Connection’;
  5. In the ‘VPN provider’ select the ‘Sonicwall Mobile Connect’ option;
  6. Type a descriptive name in the ‘Connection name’ field;
    (this name will be visible throughout windows)
  7. In the 'Server name or Address' field type the web address without the protocol portion; for example, for a portal at https://sslvpn.example.com the
    Address field would read: sslvpn.example.com
  8. Select save;
  9. Close all the windows;
  10. Type: Windows key + S;
  11. In the search field type: VPN;
  12. Now select ‘Connect to a network’;
  13. Select your created profile;
  14. In the username field use the following:
    domain\username (remember the domain portion is case sensitive!)
  15. Type your password;
  16. Connect.

If all is correct the connection should come up without any problems. If this is not the case, then please review the advanced settings. These settings are available under 'manage virtual private networks' by selecting the 'edit' option on the created profile (steps 1 to 3).

You can simply review the routes as follows:

  1. Type: Windows key + R;
  2. In the run field type: powershell;
  3. Run the command: route print | Out-GridView;

Hope this helps.

If you have already disabled driver signing in a previous attempt, then please re-enable it.
Driver rootkits are fairly common and a real risk!

Recover from failed Dell perc raid5 logical disk

We encountered a failed logical disk on a Dell Perc SAS controller. After a quick review we discovered that two disks out of the four configured for RAID5 had failed. This event triggered the Perc controller to put the logical disk offline. Now what…

Everyone knows that when using RAID5 (distributed parity) with 4 disks, the maximum redundancy is losing 1 disk. With two failed disks, data loss is usually inevitable. So, if this is also the case with your machine, please realize your chances of recovering are slim. This article will not magically increase the chances you have of recovering. The logic of the Dell Perc SAS controller actually might.

First off, I will not accept any responsibility for damage done by following this article. Its content is intended to offer the troubleshooting engineer a possible solution path. Key knowledge is needed to interpret your situation correctly, and with that the applicability of this article.

TIP: Save any data still available to you in a read-only state.
(If you have read only data, this article does not apply to you!)

What do you need?

Obviously you need to have two replacement disks available.
You also need to have an iDRAC (Dell remote access card) or some other means to access the system log.
You need to have physical access to the machine (to replace the disks and review the system behavior).


What to do?

Our specific setup:
 Controller  0
 -Logical volume 1, raid5
 + Disk 0:2     Online
 + Disk 0:3     Failed
 + Disk 0:4     Online
 + Disk 0:5     Failed

The chance that both problematic disks 0:3 and 0:5 failed simultaneously is near to zero. What I mean to say is that disks 0:3 and 0:5 will have failed in a specific order, and the disk that failed first will hold stale data. In order to make a recovery attempt we need current, not historical, data. To this end we first need to identify the disk that failed first. This will be the disk that we will be replacing shortly.

Identifying the order in which the disks failed
Luckily most Dell machines ship with a Dell Remote Access Card (DRAC). HP and other vendors have similar solutions. The iDRAC keeps a system log, in which it reports any system events. This also goes for the events triggered by the Perc SAS controller. Enter the iDRAC interface during boot <CTR+?> and review the event log. Use the timestamps in this log to identify the first disk that failed. Below is an example of the log output:

In our case, disk 0:5 failed prior to disk 0:3. Be absolutely sure that you identify the correct disk. We want the most current data to be used for the rebuild; if this is for any reason historical data, you will end up with corrupted data. Write the number of the disk that failed first on a piece of paper. This is the disk that needs to be replaced with a new one. This could be a stressful situation (for your manager), so be mindful that a stressed manager chasing you could confuse you. You do not want to mix the disks up, so keep checking your paper and do not second-guess; check if you're not sure.

Exit the iDRAC interface, reboot the machine and enter the Perc controller, usually <CTR+?> during boot. Note that the controller also reports the logical volume as being offline. If this is the case, enter the Physical Disks (PD) page (CTR+N in our case). Also note here that disks 0:3 and 0:5 are in a failed state. Select disk 0:3 and force this disk online using the operations menu (F2 in our case) and accept the warning. DO NOT SELECT or ALTER THE DISK WITH HISTORICAL DATA (0:5)!!!

Now physically replace disk 0:5 with the spare you have available. If all is well, you should notice that the controller automatically starts a rebuild (LEDs flashing frantically). Review your screen and note that disk 0:5 is now in a rebuilding state. Most controllers let you review the progress. On our controller the progress was hidden on the next page of the disk details in the Physical Disks (PD) page, which was reachable using the tab key. Wait for the controller to finish. (This can take quite a while: clock the time between two percent ticks, multiply that by 100 and divide by 60 to get an idea in hours. Get a coffee or a good night's sleep.)

Once the controller is finished it will in most cases place the replaced disk 0:5 in an OFFLINE state and the forced online disk (0:3) back in FAILED state. Now use the operations menu to force DISK 0:5 (rebuild disk) online and note the logical volume becoming available in a degraded state. Reboot the machine and wait for the OS to boot.

All done?

Well, the logical volume should be available to the OS. This doesn't mean there is any readable data left on the device. Usually this will become apparent during OS boot: most operating systems perform a quick check disk during mount, and most errors will be found there. One of two things can happen:

1) Your disk is recovered but unclean and will be cleaned by the OS after which the boot will be successful or…
2) the disk is corrupted beyond the capabilities of a basic scan disk.

In the latter case you might want to attempt additional repair steps and perform an OS partition recovery. In most cases, if this is your scenario, the chance you will successfully recover the data is very slim.

I hope you, like me, successfully recovered your disk.
(Thanks to the imminent-failure detection and precaution functions the Dell Perc controller implements.)

Apache and SSL

Yesterday someone remarked: with Apache you can't implement multiple SSL certificates behind one and the same IP address. This remark is actually not quite correct. A good opportunity to explain the basics behind SSL and why SSL implementations on servers with multiple sites can be challenging.

Understanding SSL.

SSL is an acronym for 'Secure Socket Layer' and is a method to encrypt traffic between client and host. SSL uses a key-pair that is provided by a digital certificate to encrypt the communication. To do this, SSL needs to inform the client how to decrypt the traffic prior to the actual communication. This is done by the so-called SSL handshake, in which a public key is shared with the client.

Each of the parties (client and host) now has a public and private key available. With this so-called keypair both parties are able to encrypt and decrypt the traffic that is being sent. Please view the Diffie-Hellman key exchange Wikipedia page for a clear example of this algorithm.

The Diffie-Hellman example also illustrates the risk of a client losing or sharing its private key.

What is a certificate

In most cases a certificate has multiple purposes. One is obviously to encrypt the traffic. An additional task is to identify the host to the client. The host is usually identified by its public DNS name: by means of the CN (Common Name) field of the certificate, the client is able to verify that the CN in the certificate is equal to the DNS address the client is visiting. If they are not equal, the client will generate a warning.

Why do you still get a warning when you use a valid DNS name in your certificate's CN field? Well, an additional check is necessary: an external Certificate Authority needs to back your claim. This is done by signing the certificate using a private key that is only known by the Certificate Authority. The client is now able to check (with the public key of that CA) the validity of your certificate and its claim.

You might now understand why there was such a buzz over Diginotar making its private key available to hackers using its auto-signing process.

What is a SOCKET

The second principle to understand is the 'socket.' Literally, a socket is a communication endpoint used by an application to send data. It's important to realize that a socket only contains protocol information (like TCP/UDP or ICMP) and various settings like timeout. Usually IP information isn't given in the socket definition. If a programmer wants the socket to be opened on a specific device he usually needs to 'explicitly bind' the socket to that IP.

So a socket is nothing more than a 'door' to the network that an application can programmatically use to send data over a networked device.

Apache and SSL

In the case of Apache httpd, SSL is implemented on a listening IP:port. When a client connects to Apache using the IP:port configured in httpd.conf, the first thing performed is the SSL handshake. As we noted, the SSL handshake happens prior to sending actual data. When it completes, the HTTP request header is sent (encrypted) to the Apache instance.

When Apache implements multiple sites behind one socket, called virtual hosts, it uses the HTTP request header (the Host header) to determine the right content (virtual host). Apache can only do this once it has received a valid request header, which is only available after the SSL handshake has completed.

Now here is the issue.
The HTTP request header we are talking about contains a DNS site name, for instance: Host: site-a.example.com (the hostnames here are generic examples). The certificate used also carries CN=site-a.example.com, in which case there won't be any problem. All checks out, no certificate error.

Now the second site, hosted in a different virtual host provided by the same Apache instance, is called with the request header: Host: site-b.example.com. Now all hell breaks loose, because our certificate still contains CN=site-a.example.com. The name doesn't match, and a certificate error is raised.

The reason for this is that SSL is implemented by Apache prior to the actual request being sent. There is no mechanism in place to determine the correct certificate, containing the correct CN, prior to the HTTP call.

Possible solutions?

When you are using multiple subdomains behind the same top-level domain, for instance mail.example.com, www.example.com and shop.example.com, the solution might be to use a so-called 'wildcard' certificate. This certificate's CN looks like CN=*.example.com and will match correctly against all the subdomains.

When you are using multiple top-level domain site names, like example.com, example.org and example.net, implementing SNI might be a solution. Be warned: SNI has limited backwards compatibility. The client needs to support SNI for this to work properly.

You could use multiple ports on the server and implement SSL on the various ports. This will require your visitors to add a port to the URL, like https://www.example.com/ (default 443) versus https://www.example.com:8443/ (non-default).

Alternatively, use multiple IPs to bind the SSL. This will enable you to keep the default port 443. Requesting multiple public IPs to do so might be costly, but it is the most elegant solution (next to SNI).
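For the SNI route, a minimal sketch of the Apache side could look like the config below. Hostnames, paths and certificate names are placeholders; on Apache 2.2 you additionally need NameVirtualHost *:443 and an OpenSSL build with SNI (TLS extension) support:

```apache
Listen 443

<VirtualHost *:443>
    ServerName site-a.example.com
    SSLEngine on
    SSLCertificateFile    /etc/pki/tls/certs/site-a.crt
    SSLCertificateKeyFile /etc/pki/tls/private/site-a.key
    DocumentRoot /var/www/site-a
</VirtualHost>

<VirtualHost *:443>
    ServerName site-b.example.com
    SSLEngine on
    SSLCertificateFile    /etc/pki/tls/certs/site-b.crt
    SSLCertificateKeyFile /etc/pki/tls/private/site-b.key
    DocumentRoot /var/www/site-b
</VirtualHost>
```

With SNI the client announces the hostname during the handshake itself, so Apache can pick the matching certificate before the HTTP request is ever sent.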

Any questions?
Feel free to post them below🙂





Oracle Enterprise Linux 6.x networking

Lately I got many questions regarding the network configuration of Oracle Enterprise Linux 6 (Red Hat Enterprise Linux 6).
Enough to write a little article about it.

It seems that some of the network configuration was altered in OEL6. The reason, as far as I know, is the implementation of the NetworkManager daemon. I don't know why they are using CamelCase for the daemon name, but mind that. Even though NetworkManager should make the configuration as painless as possible (at least that's what the manual page says), it seems to actually make the configuration more of a pain for some.

Below I will cover some topics in an effort to get you going and remove the pain🙂

Configuring eth0 for manual operation

  • Step 1: disable the NetworkManager daemon
    service NetworkManager stop
  • Step 2: remove the NetworkManager from Init (start-up)
    chkconfig --level 2345 NetworkManager off
  • Step 3: open the ifcfg-eth0 config file (alter the suffix ‘eth0’ to match the adapter of your choice)
    vi /etc/sysconfig/network-scripts/ifcfg-eth0
  • Step 4: Alter the following to match your environment…
    HWADDR={Your MAC address here}
    #PREFIX=24    [can be used alternatively to NETMASK=]
  • Step 5: Write/close the configuration file (:wq in vi)
  • Step 6: Restart the network service
    service network restart
  • TIP 0: Obviously match the configuration above to match your home network.
  • TIP 1: NetworkManager is not always present in which case you can obviously skip step 1 – 2.
  • TIP 2: There are reports that the NETMASK= notation is actually more stable than the PREFIX=xx notation.
    My advice: use NETMASK=, which is also better understood by non-networking guys.
  • TIP 3: Not sure about the correct NETWORK, NETMASK, BROADCAST or PREFIX settings, give ipcalc a try:
    ipcalc --netmask {IPADDR}
    ipcalc --prefix {IPADDR} {NETMASK}
    ipcalc --broadcast {IPADDR} {NETMASK}
    ipcalc --network {IPADDR} {NETMASK}
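Step 4 above only shows a couple of lines; a complete static ifcfg-eth0 on EL6 typically looks like the sketch below. All addresses are placeholders for your own environment:

```ini
DEVICE=eth0
HWADDR={Your MAC address here}
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.10.15
NETMASK=255.255.255.0
GATEWAY=192.168.10.1
#PREFIX=24    # can be used alternatively to NETMASK=
```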

Configuring DNS

DNS always seems to be a bugger and a hard one to understand. Do note that DNS is JUST AN IP PHONEBOOK. Nothing fancy there. Also, there are various ways of configuring DNS. One way is by adding the DNS configuration in the ifcfg-suffix configuration file with the DNS1=ip.ip.ip.ip and DNS2=ip.ip.ip.ip keywords. As an effect, the networking service will update the appropriate configuration files. To be frank, I find this to be confusing and do not like duplicate configurations everywhere in my -has to be clean- environment. My advice is to configure the DNS in the appropriate files directly, like this…

  • Step 1: Edit resolv.conf, where DNS is configured.
    vi /etc/resolv.conf
  • Step 2: Add or alter the following to match your environment
    search mydomain.home
  • Step 3: Test to see if name resolution works, for instance with nslookup ('set debug' inside nslookup shows the full query exchange)
    set debug
  • TIP 1: Linux actually tries to find the IP in the /etc/hosts file first. If you know the hostname and FQDN of a certain IP and it can be classified as static, consider using the hosts file instead of a centralized DNS. This will boost performance if the name is resolved often. If multiple systems use and depend on a machine reference, use centralized DNS in order to lighten the administrative tasks.
    vi /etc/hosts
  • TIP 2: Experiencing slow log-on times or slow application performance? A faulty DNS configuration might just be the cause. A quick way to test this is by temporarily disabling DNS altogether. This can be done by editing the /etc/nsswitch.conf file.
    vi /etc/nsswitch.conf
    • alter the line
      hosts:     files dns
    • to the line
      hosts: files
    • write the file and test if the performance has improved.
  • The reason for this is that DNS is often used to register user logon or session information based on the visitors IP address. Examples are the ssh daemon, ftp servers, webservers, linux logon, etc.
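The snippet in step 2 shows the search line; a complete /etc/resolv.conf also lists the DNS servers as nameserver lines. The IPs below are placeholders for your own resolvers:

```ini
search mydomain.home
nameserver 192.168.10.1
nameserver 192.168.10.2
```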


Configuring static routes

In some cases you want Linux to use alternative routes to access certain resources. The way to go in these cases is creating routes. In most cases you want these to be persistent, in which case a plain 'route add' won't suffice. In our example we will create two new routes: one describing a route to a specific host, the other describing the route to a specific network. Alter the example to match your needs.

  • STEP 1: Create a new file called static-routes in the /etc/sysconfig/ directory
    vi /etc/sysconfig/static-routes
  • STEP 2: Add the following, obviously matching your specific needs (the bracketed values are placeholders)
    any net [NETWORK] netmask [NETMASK] gw [GATEWAY] metric 1
    any host [HOSTIP] gw [GATEWAY] metric 1
  • STEP 3: Restart the network service
    service network restart
  • TIP 1: 'SIOCADDRT: No such process' means the designated gateway doesn't exist on any known interface. (typo?)
  • TIP 2: view the route information using the route command
  • TIP 3: use the ipcalc --prefix {IPADDR} {NETMASK} command to determine the right /prefix for your environment.
  • TIP 4: In older environments the ifup-routes script is used; this shell script still exists at /etc/sysconfig/network-scripts/ifup-routes

Locate my mac address

The ifcfg-eth# config allows you to configure the specific MAC address to guarantee the IP is bound to the right adapter. In virtualized environments this might save you a lot of trouble in the situation where the virtualized domain is altered. On the other hand, it might cause trouble when the statically configured MAC is migrated in virtual environments. Either way, you might want to know the MAC Linux sees as belonging to a certain adapter. You can find the MAC address in the following location:

 cat /sys/class/net/eth0/address

Obviously you need to alter eth0 in the path to match the adapter you are looking for. Not sure? Then change directory to /sys/class/net and perform a list to see all discovered and registered adapters.

IPTables (Linux firewall)

By default IPtables (which is the linux firewall) is enabled. You can view the running configuration by checking the service status like this.

 service iptables status

You can simply turn the firewall off by applying steps 1-2 of the first 'configuring eth0' instruction to the iptables service. This will reduce the security of your Linux platform significantly. My advice: add the ports you need for your services and let IPtables protect you. The easiest way is by simply editing the iptables configuration file.

 vi /etc/sysconfig/iptables 

Adding a port is as easy as copy-pasting the always-present firewall rule that allows port 22 (SSH). Copy-paste it and alter the -p (protocol) and --dport (destination port) to match your needs. For example, allowing HTTP/HTTPS:

-A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT

Afterward, restart iptables

service iptables restart

TIP: If you are experimenting with IPv6 (then you're instantly COOL!), mind that the IPv6 firewall is called ip6tables and its configuration file is named the same. The basic iptables doesn't handle IPv6 at all.

TIP: If you are using IPv6, encode your IPv4 address into the IPv6 address to ease administration. Example:

ipv6: 2001::0192:0168:0010:0001/64
Then route on the nibble of choice.

Additional questions?

Just post them below and maybe I'll respond in due time 🙂

Hate to say it, but Powershell is cool!

Just to put it out there.
Some history.

There was PowerShell. At first, I didn't quite understand its potential and role in the Microsoft product suite. Then came the ‘not-quite-headless’ Windows server. I thought: oooh, it looks like Microsoft is changing/learning and stripping useless overhead (read: things that can potentially break, need maintenance, cost resources and thus cost money). Up to this point I still didn't quite understand the PowerShell potential and didn't bother to look into it.

Then last month a team member needed to install Oracle Fail Safe on a Microsoft 2012 box. He needed to run a PowerShell script to set some things right, and the script didn't quite work. Hating the fact that (as a former Microsoft SE) I was of no real help, I figured: let's spend some time and start learning PowerShell. It's time I (ex-Windows NT4/2K/2K3 guy) get to understand this puppy everybody is revved up on.

In my search for a good learning site I came across the Microsoft Virtual Academy and followed the course. After doing some of it my conclusion was: PowerShell (v3) is way more cool than I anticipated!


At first I thought Microsoft was creating yet another Linux-shell clone. But don't let yourself be fooled (like I was at first) by the Linux-looking pipe approach. In the Microsoft implementation it's not text that's being redirected; it's objects. For those not familiar with objects: instead of sending the unstructured text output of a command, it sends you the whole thing, with structure and methods and everything. This enables you to do the wildest things without losing the oooh-so-important overview, and it makes waaaay more sense.

The simplest way to explain this is by example. Take a command that gets a directory listing (as an object), pipes this object to a selection, picks specific properties from it, and then outputs and manipulates (“@{…}”) some of that output while we are at it, because we can. The result: a logical flow of information, ending up in the usable and desired form I WANT IT in. Did you ever use awk?!
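A pipeline of that shape might look like the following sketch; the path and the calculated-property name are illustrative, not from the original post:

```powershell
# List a directory as objects, select some properties, and add a
# calculated property ("@{...}") that reshapes one of them on the fly
Get-ChildItem C:\Windows |
  Select-Object Name, LastWriteTime,
    @{Name = 'SizeKB'; Expression = { [math]::Round($_.Length / 1KB, 1) }} |
  Format-Table -AutoSize
```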

This object approach makes you wickedly flexible, as you can see: formatting, using and manipulating data as you see fit.

Another cool thing is that you are not bound to some server's command-line console. You can output to the console, sure, but there are also several nice, cool options. For example: send a Get-Help article to a window with the -ShowWindow parameter, which lets you view and search the content in a nice scrollable window. Or send the table from the previous statement to a grid with Out-GridView.

If this isn't cool enough yet, there are tons of very cool features incorporated into PowerShell. A few of many: an updateable help system, out-of-the-box remoting to PS sessions on different machines, remoting using sessions locally, importing PS management modules from remote machines (so you don't need to install them over and over again), and a PS web application for remote -mobile- management using PowerShell. Yeah, the list goes on.

Sadly, all the nice graphical perks still need that blasted explorer.exe process. I guess Microsoft still needs to develop an X alternative for that. Please Microsoft, lose the need for that explorer.exe process and you regain my trust 😉

Wrap-up: no tech is perfect, so my advice: definitely look into the free (yeah, IT'S FREE) Getting Started training by Microsoft.

Now let's go back to my beautifully tweaked and optimized Oracle Enterprise Linux deployment. With awk(ward) GNU text pipes that no one really understands. Without the cool management interface, but still the OS I prefer in my HAHP backend.