Author Archives: Chris Gralike

GLPI 9.1.2 fix inline images KB

First make the stored images visible in the GLPI output

In %glpiroot%/lib/htmlawed/htmlawed.php, add '; src: data' to the end of line 47
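A quick way to confirm the change took effect is to grep for the new token (shown here with the same %glpiroot% placeholder as above; substitute your actual GLPI path):

grep -n 'src: data' %glpiroot%/lib/htmlawed/htmlawed.php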


If there were any previous inline images stored, these should now be visible in the KB. If you have images stored but they still do not turn up, you might need to replace the 'denied' string stored in the GLPI database. Use the SQL below to achieve this.

update glpi_knowbaseitems set answer = REPLACE(answer, 'denied:', '');
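If you prefer to run this from the shell, a minimal sketch is shown below. It assumes the GLPI database is simply named glpi and that your MySQL account may modify it, and it dumps the table first so you can roll back:

# Back up the table before touching it, then strip the 'denied:' prefix.
mysqldump glpi glpi_knowbaseitems > glpi_knowbaseitems.backup.sql
mysql glpi -e "UPDATE glpi_knowbaseitems SET answer = REPLACE(answer, 'denied:', '');"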
Second, make inserting inline HTML images possible (within the WYSIWYG editor)
In the %glpiroot%/inc/html.class.php file, on line 3917, add the paste plugin and the paste option to the editor. Review and alter your code to reflect the snippet below:
$js = "tinyMCE.init({
language: '$language',
browser_spellcheck: true,
mode: 'exact',
elements: '$name',
relative_urls: false,
remove_script_host: false,
entity_encoding: 'raw',
menubar: false,
statusbar: false,
skin: 'light',
plugins: [
'table directionality searchreplace paste',
'tabfocus autoresize link image',
'code fullscreen textcolor colorpicker',
'paste'
],
toolbar: 'styleselect | bold italic | forecolor backcolor | bullist numlist outdent indent | table link image | code fullscreen | paste',
paste_data_images: true,
});
";
After this you should again be able to paste inline HTML images into GLPI KB articles.
TIP: Use the open source Greenshot screenshot tool that supports pasting inline HTML natively.

Fun watch with some information on OMG

The best quote (2:11): “They work with that subcontractor using a shared process, in case of Boeing that is called: working together”

Compile FreeTDS 1.00 on EL6

Just a note to myself; maybe you will find this useful as well.

In order to compile freetds-1.00 you need to have the gcc, unixODBC and unixODBC-devel packages installed.

Next, download and un-tar the FreeTDS package:
wget ftp://ftp.freetds.org/pub/freetds/stable/freetds-1.00.tar.gz
tar -xzf freetds-1.00.tar.gz

For some reason the 'ODBC_INC' variable isn't set properly in the configure script. This will lead to an 'sql.h not found' message when the --with-unixodbc switch is used. The fix for this is:

Locate the sql.h file on your system:
find / -iname sql.h -print

Edit the FreeTDS ./configure script and add the variable. The given example is specific to my system; make sure you alter it accordingly.
ODBC_INC="/usr/include"

Next configure the build
./configure --with-tdsver=7.0 --with-unixodbc=/usr/local --includedir=/usr/include

Make and install by running, in sequence:
make
make install
make clean
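As an optional sanity check, you can print the compile-time settings and verify that unixODBC support was built in. This assumes the FreeTDS binaries ended up in your PATH (typically under /usr/local/bin); look for a line reporting unixodbc: yes.

tsql -C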

 

Howto: change the notification subject and allow KB images in GLPI version 0.90.3

For version 9.1.2 review: https://sysengineers.wordpress.com/2017/06/27/glpi-9-1-2-fix-inline-images-kb/

 

GLPI released their new version 0.90.3.
With each new release two questions seem to be very persistent. These questions are:

  1. How can we change the default notification prefix [GLPI ] in the email subject?
  2. How do we enable images in the KB articles?

In this article you can read my personal opinion on the matter and how to change this GLPI behavior.

Why do you want to change the GLPI notification prefix?

The most obvious reason is to allow your customers to quickly identify your company's tickets. The rule of thumb in modern system view design is enabling users to quickly 'scan, select, act'. Changing the subject to something intuitive enables your customers to do so.

Another point of interest is the possibility to daisy-chain multiple installations of GLPI. By configuring the notification subjects and schemes correctly you can daisy-chain multiple installations, allowing cross-organization enterprise environments to be set up. This is impossible when all installations identify themselves as 'GLPI [key]'.

How to alter the code to support your custom prefix in GLPI 0.90

In order to alter the subject prefix in GLPI 0.90, you first need to configure your prefix in Administration > Entities > [your entity] > Notifications > Prefix for notifications. Changing this configuration field will correctly alter the prefix to your liking. No further code hacks are required or advised.

Why do you want images in your KB?

Well, this is, in my humble opinion, a no-brainer. One image shows more detail than I can describe in a thousand words. Images also help speed up the resolution process, especially during nightly hours. It also allows the engineer to intuitively compare the actual situation with the situation documented. Is it all positive then? No, there are some downsides to consider as well.

An image doesn't replace the engineer's know-how, and sometimes you want to explicitly trigger this knowledge by not showing any images. Updated applications might look different, actually slowing down the resolution process. Another, more technical, downside is web server storage: all images need to be stored somewhere and might needlessly clutter the support system. My point of view is that you need to decide what's best for your situation. Sadly GLPI doesn't let you choose yet; it forces images to be removed. If you do need image support, please apply the code hack below.

Be aware: this won't enable image export to PDF.

How to enable images in the KB

First we need to enable the insert function that allows us to add images using the TinyMCE editor. In order to do this, two changes need to be made.

In the inc/html.class.php file, on lines 3837 and 3871, comment out (//) the lines that read _html = _html.replace... See the screenshots for more details.

Optionally, you can enable the 'image' button by adding image to the 'theme_advanced_buttons2 :' line. See the images underneath for more details.

 

The next step is to enable the images to be shown. Without this change the htmLawed plugin will add a denied: tag to the actual images, effectively telling the browser not to show the image. Additionally, the resulting HTML code, including the denied: tag, will be stored in the database, which keeps this specific image disabled even after the next code modification. Enabling those images afterward requires a search-and-replace statement in the database (see the comments below).

In the file /lib/htmlawed/htmLawed.php on line 47 add ‘;src: data’ to the end of the line.

Make sure you use a screenshot tool that puts an inline HTML image on the clipboard. Greenshot is a free alternative that does this out of the box.

Enjoy!

Configure unixODBC for use with PHP and MSSQL on Oracle Linux

This short article describes how to configure unixODBC 2.2.11-10.el5 in conjunction with PHP 5.3.3-26. Many forums out there contain articles that describe the absence of php_mssql drivers in the Oracle yum repo. There are various reasons for that, none in which Oracle has a part. No matter which reason you like best, there is a decent alternative in unixODBC.

To help out all the people that are just looking for a solution, I wrote this article. I won't go into depth; I'll just describe the major steps with some hints and tips. Happy reading 🙂

The OS version of my virtual box image:

Linux sandboxpinguin 2.6.18-194.0.0.0.3.el5xen #1 SMP Mon Mar 29 18:27:00 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
  1. Make sure you have the latest version of unixODBC installed. If Yum is configured correctly the following command should do the trick.
     yum update unixODBC
  2. Download the FreeTDS driver; this is the driver unixODBC will use to connect to MSSQL. If wget is available on your production environment (remove it afterwards), run the following command:
     wget ftp://ftp.freetds.org/pub/freetds/stable/freetds-stable.tgz

     Make sure you are in a desired location, such as your home folder.

  3. Unpack the gzipped tarball by running the following command
    tar -xf  ./freetds-stable.tgz
  4. Browse into the unpacked folder and run the configure command
    ./configure --with-tdsver=7.2 --enable-msdblib
  5. If the configure ran without any issues you can link/compile the driver by running the command:
    make

    and then

    make install

    and then

    make clean

    Congratulations, you are now the proud owner of the FreeTDS driver 🙂

The next part tends to get a bit fuzzy; feel free to ask questions in the comments and I'll try to answer them to the best of my ability.

There are a lot of articles available on how to configure the unixODBC DSN correctly. Be advised: the config is specific to your setup and usually needs to be tweaked. To help you along, I'll explain the concepts of unixODBC, point out some documentation and commands, and afterwards give a short tutorial of the steps I used to configure the ODBC connection.

  1. unixODBC is configured properly by using the
    odbcinst

    command.

  2. unixODBC will write into:
    odbc.ini

    files and uses input files to do so;

  3. The connection can be tested with osql commands from bash, but (!) it will use the hidden .odbc.ini file in your profile instead of /etc/odbc.ini. PHP and odbcinst use the one in /etc/odbc.ini. If the one in your profile works, then make sure it is identical to the one located in the /etc/ directory. If configured correctly, osql will connect and return results (the character set isn't relevant at this stage).
  4. Documentation on how to use FreeTDS in conjunction with unixODBC can be found here.
    http://www.freetds.org/userguide/odbcconnattr.htm
  5. Documentation on how to use ODBC can be found here
    http://www.unixodbc.org/odbcinst.html
    (ignore the FreeTDS configuration here and use the documentation from point 4 to figure out the settings for your setup)
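At any point you can ask odbcinst what is already registered on the system; these are read-only queries and safe to run:

odbcinst -q -d    # list installed ODBC drivers
odbcinst -q -s    # list configured data sources (DSNs)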

Next, I'll describe my steps to make it work.

Configuring the FreeTDS driver

  1. Create the file /etc/odbcDriver.ini
  2. Insert the following in the file (check the paths)
    [FreeTDS]
    Description     = FreeTDS Driver with protocol v5.0
    Driver          = /usr/local/freetds/lib/libtdsodbc.so
    
  3. Create the file /etc/odbc.ini
  4. Insert the following and tweak this to match your environment
    [ExampleSource]
    Description     = FreeTDS Driver with protocol v5.0
    Driver          = /usr/local/freetds/lib/libtdsodbc.so
    Server          = [SERVERIP]
    Port            = [REMOTE_TSQL_PORT]
    ClientCharset   = UTF-8
    TDS_Version     = 7.1
    Database        = [DATABASENAME]
    Trusted_Connection = Yes      # Required with most MSSQL environments.
    
  5. Register the ODBC driver
     odbcinst -i -d -f /etc/odbcDriver.ini 
  6. Register the data source
     odbcinst -i -s -f /etc/odbc.ini
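Before moving on to PHP, you can optionally test the DSN straight from the shell with unixODBC's isql tool. Username and Password are placeholders for your own credentials, and -v prints verbose errors if the connection fails:

isql -v ExampleSource Username Password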

Finally, test your config by using the PHP ODBC functions.

<?php
// Connect to the DSN registered in /etc/odbc.ini.
$conn = odbc_connect("ExampleSource", "Username", "Password");

// Run a trivial query against the MSSQL server and print the result.
$sql = "select 1 + 5 as outcome";
$result = odbc_exec($conn, $sql);
$row = odbc_fetch_array($result);
echo $row['outcome']; // should print 6

Good luck querying 🙂

No tuples available at this result index in […]

Recently I was creating a simple PHP webpage that required an MSSQL connection for a query. I installed the freetds-0.91-15 driver on my Oracle Linux and configured unixODBC-2.2.11-10.el5 appropriately. All went well and I soon had the first results on my page. My next task was to create a simple enough query that would ‘count’ some database rows. Simple enough, right? Well, at least up to the part where I ran into this error:

No tuples available at this result index in […]

All the posts I found referred to multiple result sets, for which 'odbc_next_result($result);' should be the resolution. The weird part was that pasting the echoed $sql into the MSSQL query box gave me the desired effect: one result with a count of the rows. The same SQL in the PHP script would give me the 'no tuples' error. I had nearly fixed it the ugly way (counting the rows in a PHP while loop, which would work) when I had an insight.

In the MSSQL output I noticed NULL fields in the table column that I was counting. So I figured: might counting NULL values somehow trigger the 'No tuples' error I'm getting? (Counting a column that contains NULLs makes SQL Server emit a 'Null value is eliminated by an aggregate' warning, which can surface through the ODBC layer as an extra, empty result set.)

So this is what I ended up testing:

SELECT COUNT([sdk].[Events].[FirstName]) as Counted FROM [sdk].[Events]
WHERE [sdk].[Events].[PeripheralName] like '%Search%'
AND [sdk].[Events].[EventTime] BETWEEN '1' AND '31'

Which resulted in: Warning: odbc_fetch_array(): No tuples available at this result index in /var/www/db.class.php on line 52

SELECT COUNT([sdk].[Events].[FirstName]) as Counted FROM [sdk].[Events]
WHERE [sdk].[Events].[PeripheralName] like '%Search%'
AND [sdk].[Events].[EventTime] BETWEEN '1' AND '31'
AND [sdk].[Events].[FirstName] is not null

Which resulted in a working count query.

Fix (enable) GLPI 0.84.8 Inline knowledgebase images.

For some reason the inline images keep getting blocked in GLPI. I think database cluttering is the main reason for this. I still think they should make it optional. But hey, who am I to tell INDEPNET what's right 🙂

Anywayz…

Enabling (fixing) the images has become a bit more complex in comparison with my previous post. So, I can't be held responsible for any issues that might affect you after following this article. In my environment it works without any issues.

  1. Again, configure htmLawed to accept data: in the src: scheme.
    1. Open the htmLawed lib with your favorite editor. The file is located at: GLPI_ROOT/lib/htmlawed/htmlawed.php
    2. Locate line 47.
    3. Make the end of the line look like this:
      t; *:file, http, https; src: data';
      
  2. For some reason the ‘denied:’ part in ‘src=’denied:data:image/png;base64…’ is saved in the database. Because of this the ‘denied’ portion still turns up in the content (not because of htmLawed). You can easily fix this by running the following SQL statement in your TEST database.
    update glpi_knowbaseitems set answer =REPLACE(answer, 'denied:', '');
    
    Query OK, 12 rows affected (0.29 sec)
    Rows matched: 284  Changed: 12  Warnings: 0
    

If all is well, the images should again be displayed in your knowledge base.

TinyMCE will accept the inline images (inserted from the clipboard with Greenshot) and will display them correctly…

Got any remarks or tips, let me know 🙂

Howto: Sonicwall SSL-VPN (NetExtender) on Windows 8.1

Those familiar with the Sonicwall SSL-VPN 2000 appliance and Windows are used to connecting to the SSL-VPN using the NetExtender software. Older versions of the appliance will still offer this software when connected using the browser. There are various forums actually providing instructions on how to install this old software on Windows 8.1. Most include instructions like disabling the WHQL (Windows driver signing) check, leaving your system vulnerable. Once the software is installed you will probably run into various issues, including: RRAS isn't addressed properly, you are unable to connect even though authentication is working fine, or no routes are added after a successful connection is established.

Not many people seem to know that the Sonicwall mobile VPN provider is a built-in option in Windows 8.1. It is, obviously, also the preferred method to connect, because all the Windows security mechanisms are kept in place when using the readily available Sonicwall mobile provider. The instructions below will guide you through the steps required to configure a VPN profile for the SSL-VPN appliance and offer an alternative to the older NetExtender software. Additionally, consider the maintenance options you have when implementing these using domain policies 😉

  1. Type: Windows key + S;
  2. In the search field type: VPN;
  3. Select the ‘manage virtual private networks’ option;
  4. Select ‘Add a VPN Connection’;
  5. In the ‘VPN provider’ select the ‘Sonicwall Mobile Connect’ option;
  6. Type a descriptive name in the ‘Connection name’ field;
    (this name will be visible throughout windows)
  7. In the ‘Server name or Address’ field type the web address without the protocol portion. Example:
    NetExtender: https://vpn.company.com
    Address field: vpn.company.com
  8. Select save;
  9. Close all the windows;
  10. Type: Windows key + S;
  11. In the search field type: VPN;
  12. Now select ‘Connect to a network’;
  13. Select your created profile;
  14. In the username field use the following:
    domain\username (remember the domain portion is case sensitive!)
  15. Type your password;
  16. Connect.

If all is correct the connection should come up without any problems. If this is not the case, then please review the advanced settings. These settings are available under ‘manage virtual private networks’ by selecting the ‘edit’ option on the created profile (steps 1-3).

You can simply review the routes as follows:

  1. Type: Windows key + R;
  2. In the run field type: powershell;
  3. Run the command: route print | Out-GridView;

Hope this helps.

p.s.
If you have already disabled driver signing in a previous attempt, then please re-enable it.
Driver rootkits are fairly common and a real risk!

Recover from failed Dell perc raid5 logical disk

We encountered a failed logical disk on a Dell Perc SAS controller. After a quick review we discovered that two disks out of the four configured for RAID5 had failed. This event triggered the Perc controller to put the logical disk offline. Now what…

Everyone knows that when using RAID5 distributed parity with 4 disks, the maximum redundancy is losing 1 disk. With two failed disks, data loss is usually inevitable. So, if this is also the case with your machine, please realize your chances of recovering are slim. This article will not magically increase your chances of recovering; the logic of the Dell Perc SAS controller actually might.

First off, I will not accept any responsibility for damage done by following this article. Its content is intended to offer the troubleshooting engineer a possible solution path. Key knowledge is needed to interpret your situation correctly, and with that the applicability of this article.

TIP: Save any data still available to you in a read-only state.
(If you have read only data, this article does not apply to you!)

What do you need?

Obviously you need to have two replacement disks available.
You also need to have an iDRAC (Dell remote access card) or some other means to access the system log.
You need to have physical access to the machine (to replace the disks and review the system behavior).

 

What to do?

Our specific setup:
 Controller  0
 -Logical volume 1, raid5
 + Disk 0:2     Online
 + Disk 0:3     Failed
 + Disk 0:4     Online
 + Disk 0:5     Failed

The chance that both problematic disks 0:3 and 0:5 failed simultaneously is close to zero. What I mean to say by this is that disks 0:3 and 0:5 will have failed in a specific order. This means that the disk that failed first will have stale data on it. In order to make a recovery attempt we need current, not historical, data. To this end we first need to identify the disk that failed first. This is the disk that we will be replacing shortly.

Identifying the order in which the disks failed
Luckily most Dell machines ship with a Dell Remote Access Card (DRAC). HP and other vendors have similar solutions. The iDRAC keeps a system log in which it reports any system events. This also goes for the events triggered by the Perc SAS controller. Enter the iDRAC interface during boot <CTRL+?> and review the event log. Use the timestamps in this log to identify the first disk that failed.

In our case, disk 0:5 failed prior to disk 0:3. Be absolutely sure that you identify the correct disk. We want the most current data to be used for the rebuild; if it is for any reason historical data, you will end up with corrupted data. Write the disk number of the disk that failed first on a piece of paper. This is the disk that needs to be replaced with a new one. This can be a stressful situation (for your manager), so be mindful that a stressed manager chasing you could confuse you. You do not want to mix the disks up, so keep checking your paper and do not second-guess; check again if you are not sure.

Exit the iDRAC interface, reboot the machine and enter the Perc controller, usually <CTRL+?> during boot. Note that the controller also reports the logical volume being offline. If this is the case, enter the Physical Disks (PD) page (CTRL+N in our case). Also note here that disks 0:3 and 0:5 are in a failed state. Select disk 0:3 and force this disk online using the operations menu (F2 in our case) and accept the warning. DO NOT SELECT or ALTER THE DISK WITH HISTORICAL DATA (0:5)!!!

Now physically replace disk 0:5 with the spare you have available. If all is well, you should notice that the controller automatically starts a rebuild (LEDs flashing frantically). Review your LCD screen and note that disk 0:5 is now in a rebuilding state. Most controllers let you review the progress. On our controller the progress was hidden on the next page of the disk details in the Physical Disks (PD) page, which was reachable using the tab key. Wait for the controller to finish. (This can take quite a while: clock the minutes it takes to advance one percent, multiply by 100 for the total minutes, then divide by 60 for hours. For example, 3 minutes per percent works out to roughly 5 hours. Get a coffee or a good night's sleep.)

Once the controller is finished it will, in most cases, place the replaced disk 0:5 in an OFFLINE state and the forced-online disk (0:3) back in a FAILED state. Now use the operations menu to force disk 0:5 (the rebuilt disk) online and note the logical volume becoming available in a degraded state. Reboot the machine and wait for the OS to boot.

All done?

Well, the logical volume should be available to the OS. This doesn't mean there is any readable data left on the device. Usually this will become apparent during OS boot: most operating systems perform a quick disk check during mount, and most errors will be found there. One of two things can happen:

1) Your disk is recovered but unclean and will be cleaned by the OS, after which the boot will be successful, or…
2) The disk is corrupted beyond the capabilities of a basic disk check.

In the latter case you might want to attempt additional repair steps and perform an OS partition recovery. In most cases, if this is your scenario, the chance you will successfully recover the data is very slim.

I hope you, like me, successfully recovered your disk.
(Thanks to the imminent-failure detection and precaution functions the Dell Perc controller implements.)

Apache and SSL

Yesterday someone remarked: "With Apache you can't implement multiple SSL certificates behind one and the same IP address." This remark is actually not quite correct. A good opportunity to explain the basics behind SSL and why SSL implementations on servers with multiple sites can be challenging.

Understanding SSL.

SSL is an acronym for 'Secure Sockets Layer' and is a method to encrypt traffic between client and host. SSL uses a key pair that is provided by a digital certificate to encrypt the communication. To do this, SSL needs to inform the client how to decrypt the traffic prior to the actual communication. This is done by the so-called SSL handshake, in which a public key is shared with the client.

Each of the parties (client and host) now has a public and private key available. With this so-called key pair, both parties are able to encrypt and decrypt the traffic that is being sent. Please see the Diffie-Hellman key exchange Wikipedia article for a clear example of this algorithm.

The Diffie-Hellman example also illustrates the risk of a client losing or sharing its private key.

What is a certificate

In most cases a certificate has multiple purposes. One is obviously to encrypt the traffic. An additional task is to identify the host to the client. The host is usually identified by its public DNS name. By means of the CN (Common Name) field of the certificate, the client is able to verify that the CN in the certificate is equal to the DNS address the client is visiting. If they are not equal, the client will generate a warning.

Why do you still get a warning when you use a valid DNS name in your certificate's CN field? Well, an additional check is necessary: an external Certificate Authority needs to back your claim. This is done by signing the certificate using a private key that is only known by the Certificate Authority. The client is now able to check (with the public key of that CA) the validity of your certificate and its claim.

You might now understand why there was such a buzz over DigiNotar making its private key available to hackers through its auto-signing process.

What is a SOCKET

The second principle to understand is the 'socket'. Literally, a socket is a communication endpoint used by an application to send data. It's important to realize that a socket only contains protocol information (like TCP/UDP or ICMP) and various settings like timeouts. Usually IP information isn't given in the socket definition. If a programmer wants the socket to be opened on a specific device, he usually needs to explicitly bind the socket to that IP.

So a socket is nothing more than a 'door' to the network that an application can programmatically use to send data over a networked device.

Apache and SSL

In the case of Apache httpd, SSL is implemented on a listening IP:port. When a client connects to Apache using the IP:port configured in httpd.conf, the first thing that is performed is the SSL handshake. As we noted, the SSL handshake is performed prior to sending actual data. When this is completed, the HTTP request header is sent (encrypted) to the Apache instance.

When Apache serves multiple sites behind one socket, called virtual hosts, it uses the HTTP request header (specifically the host name in it) to determine the right content (virtual host). Apache can only do this when it has received a valid request header, which can only be offered after the SSL handshake has completed.

Now here is the issue.
The HTTP request header we are talking about actually contains a DNS site name, for instance: www.google.com, GET /. Now the certificate used also has 'CN=www.google.com', in which case there won't be any problem. All checks out, no certificate error.

Now the second site, hosted in a different virtual host provided by the same Apache instance, is called using the HTTP request header: logon.google.com, GET /. Now all hell breaks loose, because our certificate still contains 'CN=www.google.com'. The name doesn't match, and a certificate error is raised.

The reason for this is that SSL is actually implemented by Apache prior to the actual request being sent. There is no mechanism in place to determine the correct certificate, containing the correct CN, prior to the HTTP call.

Possible solutions?

When you are using multiple subdomains behind the same top-level domain, for instance www.mysite.com, service.mysite.com and mail.mysite.com, the solution might be to use a so-called 'wildcard' certificate. This certificate's CN looks like CN=*.mysite.com and will match correctly against all the subdomains.

When you are using multiple top-level domain site names like www.google.com and www.mysite.com, implementing SNI might be a solution. Be warned: SNI has limited backwards compatibility; the client needs to support SNI for this to work properly.
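If you consider SNI, a quick way to see which certificate a server hands out for a given name is openssl's s_client; www.mysite.com is just a placeholder here:

openssl s_client -connect www.mysite.com:443 -servername www.mysite.com </dev/null 2>/dev/null | openssl x509 -noout -subject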

You could use multiple ports on the server and serve SSL on the various ports. This will require your visitors to add a port to the URL, like https://www.google.com (default 443) versus https://www.mysite.com:445 (non-default).

Alternatively, use multiple IPs to bind SSL to. This will enable you to keep the default port 443. Requesting multiple public IPs to do so might be costly, but it is the most elegant solution (next to SNI).

Any questions?
Feel free to post them below 🙂