
Dedicated Routing Tables in Linux

Linux Routing

Linux is a very powerful platform. It is the framework for thousands of applications and software suites. Its flexibility when it comes to networking is second to none, especially for power users. Other platforms like Windows support manual routing through multiple gateways, but they do not support policy-based routing. This is where Linux has the upper hand.

Note: The steps below have been completed and tested on Ubuntu 12.04.2.

Multiple Interfaces

We recently built and installed a new server which has 4 network interfaces. Two of them are copper/RJ-45 ports and two of them are optical Gigabit Ethernet ports. Running lspci on this box will show us a hardware profile of the server. Here is a list of our 4 interfaces.

:~$ lspci -nn | grep Ethernet
03:03.0 Ethernet controller [0200]: Broadcom Corporation NetXtreme II BCM5706 Gigabit Ethernet [14e4:164a] (rev 02)
07:01.0 Ethernet controller [0200]: Broadcom Corporation NetXtreme II BCM5706S Gigabit Ethernet [14e4:16aa] (rev 02)
07:02.0 Ethernet controller [0200]: Broadcom Corporation NetXtreme II BCM5706S Gigabit Ethernet [14e4:16aa] (rev 02)
07:03.0 Ethernet controller [0200]: Broadcom Corporation NetXtreme II BCM5706 Gigabit Ethernet [14e4:164a] (rev 02)

You can pair up the interfaces by their [vendor:device] ID combos.

To figure out the name of each interface, you can take its PCI location and grep dmesg:

dmesg | grep "07:03.0"

The response would look like:

[    3.852851] bnx2 0000:07:03.0: eth3: Broadcom NetXtreme II BCM5706 1000Base-T (A2) PCI-X 64-bit 100MHz found at mem f6000000, IRQ 19, node addr 00:0a:ba:di:d3:a0

The Gigabit Ethernet looks like this:

[    3.371963] bnx2 0000:07:02.0: eth2: HP NC370F Multifunction Gigabit Server Adapter (A2) PCI-X 64-bit 100MHz found at mem f8000000, IRQ 18, node addr 00:0a:ba:di:d3:a1

Right after its location is the interface name. For the two examples above, they are eth3 and eth2 respectively. We will need to know these interface names so we can configure them and manage the IP rules. The next step is to configure your /etc/network/interfaces file or your /etc/sysconfig/network-scripts/ directory, depending on your distribution.

Tip! You can always look at the contents of /etc/udev/rules.d/70-persistent-net.rules, which will show you the name of each interface. You can also rename your interfaces if you want. For example, above, eth0 and eth3 are alike copper cards, separated only from the optical ports. We can rename them by changing the interface NAME.
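
As a sketch, renaming an interface means editing the NAME= value in that rules file; the MAC address below is a placeholder, and the change takes effect on reboot (or after reloading udev):

```shell
# /etc/udev/rules.d/70-persistent-net.rules (excerpt; MAC address is hypothetical)
# Rename the first copper port from eth0 to something meaningful:
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:0a:ba:00:00:01", NAME="mgmt"
```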

Managing the Routing Table

To aid in explanation, here is our interface configuration:

$ sudo ifconfig -s
Iface     MTU Met   RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
eth0       1500 0      2694      0      0 0             6      0      0      0 BMRU
fab0       1500 0      5517      0      0 0          3224      0      0      0 BMRU
fab1       1500 0      2682      0      0 0             6      0      0      0 BMRU
lo        16436 0       256      0      0 0           256      0      0      0 LRU
mgmt       1500 0      1109      0      0 0            75      0      0      0 BMRU

You can see we named our 4 interfaces.

  • mgmt will be our general management interface. This is the first RJ-45 port and is going to be on a 10.1.32.x network. (This was originally eth0)

  • eth0 is the second RJ-45 port and is going to be on the 10.1.101.x network. (This was originally eth3)

  • fab0 and fab1 are our optical Gigabit Ethernet ports and will be on the 10.1.101.x network as well. (These were originally eth1 and eth2)

Normally, if traffic is received on an interface, the reply will be routed back out whichever interface holds the default route. This can break IP flows, especially if you are traversing a firewall or using NAT. Many systems nowadays that use Wireless LAN and Ethernet at the same time have individual default routes, which is why you do not see this issue there.

To resolve this, you would create an independent routing table for each interface.

Step 1) Create a new routing table:

echo "61 eth0" | sudo tee -a /etc/iproute2/rt_tables

Step 2) Add Default Routes for each Interface:

sudo ip route add default dev eth0 table eth0

Step 3) Add Routing Policy for each Interface:

$ sudo ip rule add from <eth0-address> table eth0

And that’s it! Now, traffic sourced from eth0’s address will be routed by the eth0 interface instead of the mgmt interface! Repeat the above steps for every interface you have.
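
Put together, the per-interface recipe looks like this (a sketch; the table number, interface name and address are placeholders you must adapt, and the commands need root):

```shell
# 1) Register a routing table (append -- do not overwrite rt_tables)
echo "61 eth0" | sudo tee -a /etc/iproute2/rt_tables

# 2) Give that table its own default route out of the interface
sudo ip route add default dev eth0 table eth0

# 3) Send replies sourced from eth0's address through its own table
sudo ip rule add from <eth0-address> table eth0
```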

If you run ip route you should see your updated routing table:

~$ sudo ip route
default dev mgmt  scope link
<mgmt-network> dev mgmt  proto kernel  scope link  src <mgmt-address>
<eth0-network> dev eth0  proto kernel  scope link  src <eth0-address>
<fab0-network> dev fab0  proto kernel  scope link  src <fab0-address>
<fab1-network> dev fab1  proto kernel  scope link  src <fab1-address>


The above steps work great, but there can be an issue if you restart the machine: the routes and rules are not persistent, and the default route will be assigned to whichever interface comes up first, which can round-robin. We can prevent this from happening by editing /etc/network/interfaces.

Add the following lines to the appropriate iface stanzas in your /etc/network/interfaces:

# Adds default route for box
up route add default dev mgmt

# Adds default route for eth0 interface
up ip route add default dev eth0 table eth0
up ip rule add from <eth0-address> table eth0

Now when you restart your machine, the above commands will be executed and restore your interface routes and rules.
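
After a reboot you can sanity-check that the routes and rules came back; these commands are read-only, so they are safe to run at any time:

```shell
# List policy rules -- you should see your "from <address> lookup eth0" entry
ip rule show

# Show the dedicated table's default route
ip route show table eth0
```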

Worth Mentioning

There are a few gotchas that may leave you scratching your head. Here are some tips and tricks to solve them:

Problem 1: You receive an “interface not configured” error when using ifdown:

ifdown: interface ethx not configured

The fix would be to use the ip link command. You can change the status of the interface by issuing sudo ip link set ethx down.

Problem 2: Gigabit Fiber interface does not come up

sudo ethtool -r ethx

This uses the utility known as ethtool, a very handy CLI networking tool. The -r option tells the interface to restart auto-negotiation. Sometimes on system start, the interface does not complete auto-negotiation and needs a little shove.

Problem 3: Multiple interfaces within the same subnet on a Virtual Machine guest

This is a little trickier. Because of the way the interfaces interact with the host machine at layer 2, you will need to apply ARP blocking on the non-desired interfaces. One utility for this is arptables. It works much like iptables, but at Layer 2.

For example, if you do not want interface fab0 responding to ARP requests for fab1’s address, and vice versa, you can do:

sudo arptables -A INPUT -j DROP -i fab0 ! -d <fab0-address>
sudo arptables -A INPUT -j DROP -i fab1 ! -d <fab1-address>

For this to take effect, you must enable ARP filtering at the kernel level and reload the sysctl settings:

echo "net.ipv4.conf.all.arp_filter = 1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p

To view your arptables rules, execute:

~$ sudo arptables -vnL
Chain INPUT (policy ACCEPT 988 packets, 27664 bytes)
-j DROP -i fab0 -o * ! -d <fab0-address> , pcnt=43899 -- bcnt=1229K
-j DROP -i fab1 -o * ! -d <fab1-address> , pcnt=44655 -- bcnt=1250K

Chain OUTPUT (policy ACCEPT 988 packets, 27664 bytes)

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)

This will prevent the upstream switch from storing the wrong MAC address in its ARP table, so traffic is subsequently sent to the correct interface.

Fixing SOAP Exception: “looks like we got no XML document”

Working with SOAP

Personally, I hate SOAP. It is a bloated, slow and painful API service, but it allows for cross-platform support out of the box. It is XML driven and allows you to transport large amounts of data to and from a server.

A SOAP server holds what is called a WSDL file, short for Web Services Description Language. Within this file, XML namespaces, methods, types, parameters and more are all defined, allowing a SOAP client to learn how to form requests and interpret responses.

A SOAP client will download the WSDL file and, using this file, perform the desired calls. The definitions within this file allow the response to be translated into an object, with type and object assignments. This is great if you are trying to interface with a platform with a lot of logical data that you would like to export.

PHP has a built-in extension called PHP: SoapClient. To create a connection to a SOAP server, you could call: $client = new SoapClient($url, array("trace" => 1, "exceptions" => 0));, where $url is your WSDL file.

Error Handling

One of the errors you will experience when working with XML is “Looks like we got no XML document”. If you sniff on the interface or print the response using print_r($client->__getLastResponse());, you will most likely see what appears to be well-formed XML data.

The problem many times ends up being what is called the BOM.

The Byte-Order-Mark (or BOM) is a special marker added at the very beginning of a Unicode file encoded in UTF-8, UTF-16 or UTF-32. For UTF-16 and UTF-32 it indicates whether the file uses the big-endian or little-endian byte order; for UTF-8 it is optional and merely signals the encoding.

This is essentially an extra character, usually invisible to the eye, that disrupts XML parsers; one of them being the SOAP XML parser.
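
You can see the invisible character for yourself from a shell. This sketch writes a UTF-8 BOM in front of an XML snippet, shows the three raw bytes, then strips them with sed (GNU sed's \xHH escapes are assumed):

```shell
# Write a response with a UTF-8 BOM (EF BB BF) in front of the XML
printf '\xEF\xBB\xBF<?xml version="1.0"?><doc/>' > /tmp/response.xml

# The first three bytes are the BOM, hidden by most editors
head -c 3 /tmp/response.xml | od -An -tx1    # ef bb bf

# Strip a leading BOM so XML parsers accept the document again
sed -i '1s/^\xEF\xBB\xBF//' /tmp/response.xml
head -c 5 /tmp/response.xml                  # <?xml
```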

Unfortunately, there is no easy way to store the SOAP server’s response in a variable after the API call and remove this character: the BOM will cause an exception before you even get that far. There is one simple step you can take to correct the issue.

The Fix

Instead of using the SoapClient class directly, extend your own. We are going to override the __doRequest() method and apply our BOM removal there.

Example class:

class SoapClientNG extends \SoapClient{

    public function __doRequest($req, $location, $action, $version = SOAP_1_1, $one_way = 0){

        $response = parent::__doRequest($req, $location, $action, $version, $one_way);

        // If HTTP headers leaked into the response, keep only the body
        $parts = explode("\r\n\r\n", $response, 2);
        $body = end($parts);

        // Strip any leading BOM (UTF-32, UTF-16 or UTF-8 forms)
        return preg_replace('/^(\x00\x00\xFE\xFF|\xFF\xFE\x00\x00|\xFE\xFF|\xFF\xFE|\xEF\xBB\xBF)/', '', $body);
    }
}

This method performs the request, keeps only the XML body if HTTP headers leaked into the response, and then removes any leading BOM characters. The result is returned back to SOAP’s inner workings and a response is provided to your SOAP call.

Hopefully this helps a few other people manage and intercept the response.

HTTP Proxy Authentication From CLI

HTTP Proxies

We deploy a firewall between our internet zones and working zones. This helps segregate possibly flawed traffic from testing and keeps possibly harmful traffic from affecting the internet. To access the network from our workstations, we have to authenticate through a firewall proxy for ports 80 and 443. This box will keep track of our source IP and its authentication status. We use timeout rules to prevent wrongfully accessed machines from getting out using any protocol until authenticated.

Servers Don’t Have Browsers?!

Normally, servers deployed in a data center style infrastructure run headless. This means there is no desktop, X-Server, Gnome, or KDE running. Everything you do is via CLI.

The normal process to authenticate to the proxy is to open a browser and access a page that crosses this firewall. The firewall will ask you to log in using WWW-Authenticate. If you don’t have a browser, how could you do this?

Using WGET

Wget is a utility, very similar to cURL, which performs HTTP GETs by default. Luckily, it has many, many configurable options. Two options that will solve our problem are --user and --password.

These two options will pass a username and password in response to the server’s WWW-Authenticate challenge. This is exactly what we need.
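
Rather than typing credentials on every invocation, they can also live in a config fragment in your home directory; the values below are obviously placeholders:

```shell
# ~/.wgetrc -- picked up automatically by wget
user = fwuser
password = fwpassword
```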

wget http://google.com --user=fwuser --password=fwpassword

This will display the following output:

--2013-05-15 13:42:51--  http://google.com/
Resolving google.com (google.com)... …
Connecting to google.com (google.com)|…|:80... connected.
HTTP request sent, awaiting response... 401 Unauthorized
Failed writing HTTP request: Bad file descriptor.

--2013-05-15 13:42:52--  (try: 2)  http://google.com/
Connecting to google.com (google.com)|…|:80... connected.
HTTP request sent, awaiting response... 302 Object Moved
Location: http://google.com/ [following]
--2013-05-15 13:42:52--  http://google.com/
Connecting to google.com (google.com)|…|:80... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: http://www.google.com/ [following]
--2013-05-15 13:42:52--  http://www.google.com/
Resolving www.google.com (www.google.com)... …
Connecting to www.google.com (www.google.com)|…|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: `index.html'

    [ <=> ] 10,696      --.-K/s   in 0s

2013-05-15 13:42:52 (134 MB/s) - `index.html' saved [10696]

Once completed, you are authenticated and can access the internet from your server without the need for a GUI.

Adding Social Programming Activity to Your Website

Social Programming

Within the past few years, social networks have grown the internet. This includes MySpace, Facebook and Twitter, which I guarantee everyone has heard of. Many of these were great for people to express themselves, but not for asking questions, getting answers, and sharing hobbies in more detail. Then came StackExchange and GitHub, and these issues were soon put to sleep. The next question became: how can you integrate and share this data?

On just about every website you visit, you will see a Twitter feed at the bottom. This was great for a while, but when everyone joined the trend, the number of HTTP requests going to Twitter congested the pipes of the internet. Why not give it a break and show your GitHub and StackExchange activity instead?

Your GitHub Timeline

If you don’t know what GitHub is, visit their website and check them out. In short, they are a repository hosting service for Git. And if you don’t know what Git is, here is an excerpt from their page:

Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency.

Git is easy to learn and has a tiny footprint with lightning fast performance. It outclasses SCM tools like Subversion, CVS, Perforce, and ClearCase with features like cheap local branching, convenient staging areas, and multiple workflows.

On GitHub’s website, every user has a public profile which shows recent activity, including comments, issues, commits, pushes and repository creation. Anyone can view your public repositories, which store source code for projects you have worked on.

Your StackExchange Timeline

StackExchange is similar to GitHub in the sense that people share code, but it is focused as a question-and-answer style site. StackExchange contains around 101 different focused websites, ranging from Ubuntu, Programming, Literature and Mathematics to Skeptics. You can answer questions asked by other people, comment on questions and answers, or ask your own questions. All of these are recorded in your timeline feed.

Feeding your Site

There are two scripts loaded up on CodeCanyon, GitHub Activity Timeline and StackExchange Activity Timeline Widgets. These solve the problems above, and let you show your timeline feeds on your own websites. Check out each of the packages and see some examples of the scripts in action!

Unable to SCP to Juniper JUNOS Devices with WinSCP

Copying Files with SCP

When you are working within a network, there is always a need to copy files: patches, upgrades, scripts and logs all need to be transferred. For *nix platforms, the most common transfer protocol is SCP. SCP is short for secure copy, which uses SSH as the transport mechanism to compress and encrypt data as it travels across a network.

Depending on the platform you are copying from, you can either use a CLI version of scp or, for Windows users, you can utilize WinSCP. WinSCP gives you an advanced configuration wizard to help you connect and transfer using a graphical user interface.

Using WinSCP

One of the downfalls to using WinSCP is that Juniper Networks’ JUNOS CLI is not interpreted correctly. When you access a JUNOS device you have an option of two shells, one called shell and one called cli. The cli mode, accessed by typing start cli, is the JUNOS configuration and management shell. The standard csh, accessed by start shell, is the BSD back-end version of a standard terminal shell where *nix commands can be executed.

If you try to access a JUNOS platform as the user root, everything will work fine and dandy. This is because the default shell for root is /bin/csh. But if you are like every other security-driven user out there, you will have disabled the root user as a safeguard. This is where things get messy with WinSCP.

Note: Trying to log in using SFTP as the protocol will also work, but there are times when users need SCP by default.

Trying to log in as a non-root user via SCP will result in the following errors:

Host is not communicating for more than 15 seconds. Still waiting...
Note: If the problem repeats, try turning off 'Optimize connection buffer size'.


Error skipping startup message. Your shell is probably incompatible with the application (BASH is recommended).

Fixing the Issue

The fix is fairly simple. If you are familiar with PuTTY and its advanced configuration, WinSCP is very similar.

The setting we need to change is hidden beneath the Advanced Options menu:

  1. On the WinSCP Login Screen, check Advanced Options
  2. Under the Environment tree, choose SCP/Shell
  3. Look for the option Shell, the default option is Default
  4. Change this to start shell

When you reconnect, WinSCP will pass start shell to the CLI and copy the files correctly.


Remotely Executing Commands on JUNOS

Those who are familiar with Juniper and their software known as JUNOS may not know about the extensibility of the platform using SLAX. You can read about SLAX on Juniper’s website.

SLAX is a programming language based on XSLT which allows you to access the API on the JUNOS platform; its reach ranges from configuration to executing commands via the CLI and conditionally controlling the output. This SinatraNetworks post has a great security-focused SLAX script called op srx-monitor and references official documentation from Juniper Networks.

To extend SLAX and JUNOS even further, Juniper’s Phil Shafer released Juise.

JUISE takes the abilities provided by the scripting facility of JUNOS and moves it into the open source world, where a script can run on a remote box, accessing JUNOS resources over the NETCONF (or JUNOScript) API. Initially this will be an excellent environment for creating and debugging scripts, but for many users, it may become their “normal” scripting environment.

The following documents how to configure and install juise and its dependencies.

Install lib-ssh2 Dependencies

Honestly, you can do without this one, but it does add some more features. Libssh2 is an open-source SSH2 library with an amazingly easy-to-use API. For the sake of consistency for the rest of the post, we will configure this in a general location so other packages can depend on it.

You can install it by doing the following:

wget http://www.libssh2.org/snapshots/libssh2-1.4.4-20130507.tar.gz
tar xvzf libssh2-1.4.4-20130507.tar.gz
cd libssh2-1.4.4-20130507/
./configure --prefix=/usr/local/ssh2 --with-openssl
sudo make install

Install libslax Dependency

libslax is the library surrounding Juniper’s SLAX language. It allows you to execute, debug and validate the syntax of SLAX scripts. To install it, complete the following:

wget https://libslax.googlecode.com/files/libslax-0.14.7.tar.gz
tar xvzf libslax-0.14.7.tar.gz
cd libslax-0.14.7/
./configure --with-libcurl-prefix=/usr --with-libxslt-prefix=/usr --enable-readline --prefix=/usr/local/slax
sudo make install

Install Juise

Follow these steps to install Juise. It depends on the libraries we built and installed above:

wget https://juise.googlecode.com/files/juise-0.3.21.tar.gz
tar xvzf juise-0.3.21.tar.gz
cd juise-0.3.21/
./configure --prefix=/usr/local/juise --with-pcre --with-libxml-prefix=/usr --with-libslax-prefix=/usr/local/slax --with-libxslt-prefix=/usr --with-libssh2-prefix=/usr/local/ssh2
sudo make install

To add Juise to your $PATH, run: export PATH=$PATH:/usr/local/juise/bin. You can also add this to .bashrc or /etc/profile so Juise is available across shell sessions.

Putting it all together

To execute a script with Juise, you can run juise test.slax. Here is the contents of test.slax:


version 1.2; 

/* Example SLAX Script */ 
/* Written by Mike Mackintosh */ 
/* mike@highonphp.com */
/* Credit To Mike Stone */
/* mstone@juniper.com */
ns junos = "http://xml.juniper.net/junos/*/junos"; 
ns xnm = "http://xml.juniper.net/xnm/1.1/xnm"; 
ns jcs = "http://xml.juniper.net/junos/commit-scripts/1.0"; 
ns ext = "http://xmlsoft.org/XSLT/namespace"; 
ns str = "http://exslt.org/strings"; 
ns exsl extension = "http://exslt.org/common"; 
ns exsl = "http://exslt.org/common"; 
ns func extension = "http://exslt.org/functions"; 
ns date = "http://exslt.org/dates-and-times"; 

import "junos.xsl";

param $host-file; 
param $env;

mvar $result; 
mvar $connection; 
mvar $hostname; 
mvar $password;

match / {

        /* grab remote connection information */
        if (not($host-file)) {
                expr jcs:output("\n\nUsage: juise example.slax [required parameter] [options]\n");
                expr jcs:output("\t[Required Parameter, specify only one]");
                expr jcs:output("\t-------------------------------------");
                expr jcs:output("\tremote-host <ip address> :: Check a single device");
                expr jcs:output("\thost-file <filename> :: Check all hosts provided in <filename>\n\n");
                <xsl:message terminate='yes'> "\n\n\n\n";
        }

        /* if not comparing files, then process the host-file parameter */
        var $remote-user = jcs:get-input("Username: ");

        var $pwd-prompt = "Password: ";
        var $password = jcs:get-secret($pwd-prompt);

        if ($host-file) {
                var $hosts = document($host-file);
                var $host-list = jcs:split("\n", $hosts);

                for-each ($host-list) {
                        if (. != "") {
                                set $connection = jcs:open(., $remote-user, $password);

                                if (not(jcs:empty($connection))) {
                                        /* valid connection */
                                        var $config = jcs:execute($connection, "get-configuration");
                                        expr jcs:output($config);

                                        expr jcs:close($connection);
                                }
                                else {
                                        expr jcs:output(concat("Could not connect to ", .));
                                } /* inner else */
                        }
                } /* for-each host */
        }

        expr jcs:output("\n\nDone processing checks.\nExiting.");
} /* match */



Once you create the script above and a devices-list.txt file listing your devices, you can execute them with this command:

juise test.slax host-file devices-list.txt
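
The host file is just a plain list, one device per line. A sketch (the hostnames and address are hypothetical):

```shell
# Create a devices-list.txt with one hostname or IP address per line
cat > devices-list.txt <<'EOF'
edge-fw-1.example.net
core-sw-1.example.net
10.0.0.1
EOF
wc -l < devices-list.txt    # 3
```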

This script will connect to each device, grab the configuration, and return it to the CLI. Since SLAX is based on XSLT, which operates on XML, your tags will not be displayed in the output, only the content.

This really opens up the door to some great possibilities.

Building PHP: Kerberos libraries not found

Adding IMAP Support to PHP

The other day I was working on a project which required PHP to be compiled with IMAP and IMAP-SSL. Normally this is part of my standard PHP build, but on this platform I had compiled PHP to be as lightweight as possible. Before I rebuilt PHP, I made sure to install the courier-imap and courier-imap-ssl packages as well as libkrb5-dev. Kerberos is required for PHP’s IMAP extension.

Note: This is an Ubuntu 12.04 64-bit server

When I ran ./configure on PHP, it kept coming back with the nonsense of “Kerberos libraries not found”. This was annoying, since I knew my Kerberos library files were in /usr/lib/x86_64-linux-gnu/mit-krb5/. I made sure to pass this path to --with-kerberos, but it still kept failing.

Since I was on a strict timeline to deliver my milestones, I used some standard knowledge of Linux and simply read the output. The output referenced 3 directories which configure will look for the libraries in, as well as a krb5-config program which stores many details about Kerberos. I took these ideas and applied them, as documented below.

Finding Your Kerberos Libraries

If this is the issue you are encountering, you will see the following when you add --with-kerberos to your ./configure for PHP:

checking for krb5-config... /usr/bin/krb5-config
configure: error: Kerberos libraries not found.

      Check the path given to --with-kerberos (if no path is given, searches in /usr/kerberos, /usr/local and /usr )

The problem is that libkrb5.a or libkrb5.so is not found in the path you specified, if you specified one. If you did not supply a path, it will look in the defaults of /usr/kerberos, /usr/local and /usr.

Looking at the configure script’s source code, it appears there is a hiccup in this section: it doesn’t actually check the directory you provide, but instead looks for an include/ or lib/ directory within your user-defined path.

If you pay attention to the output, PHP says it found a script called /usr/bin/krb5-config. Those familiar with MySQL may be reminded of mysql_config, and rightfully so; these commands are simply cousins.

If you execute krb5-config without any options, it will provide a list of options and arguments that it will accept:

splug@vm:~/php-5.4.14$ krb5-config 
Usage: /usr/bin/krb5-config [OPTIONS] [LIBRARIES]
        [--help]          Help
        [--all]           Display version, vendor, and various values
        [--version]       Version information
        [--vendor]        Vendor information
        [--prefix]        Kerberos installed prefix
        [--exec-prefix]   Kerberos installed exec_prefix
        [--cflags]        Compile time CFLAGS
        [--deps]          Include dependent libraries
        [--libs]          List libraries required to link [LIBRARIES]
        krb5              Kerberos 5 application
        gssapi            GSSAPI application with Kerberos 5 bindings
        kadm-client       Kadmin client
        kadm-server       Kadmin server
        kdb               Application that accesses the kerberos database

Make a note of the --libs option; it reports where your Kerberos library files are stored on your system.

We will now run the same command but pass this new option:

splug@vm:~/php-5.4.14$ krb5-config  --libs
-L/usr/lib/x86_64-linux-gnu -Wl,-Bsymbolic-functions -Wl,-z,relro -lkrb5 -lk5crypto -lcom_err

If you look at the response, you will see that the kerberos library files are located in /usr/lib/x86_64-linux-gnu.
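
If you want to script around this, the -L path can be pulled out of that output. A sketch using a captured string (the string below is the sample output above, hard-coded so the snippet runs without krb5-config installed):

```shell
# Parse the library directory out of a `krb5-config --libs` style string
LIBS='-L/usr/lib/x86_64-linux-gnu -Wl,-Bsymbolic-functions -Wl,-z,relro -lkrb5 -lk5crypto -lcom_err'
LIBDIR=$(printf '%s\n' $LIBS | sed -n 's/^-L//p' | head -n1)
echo "$LIBDIR"    # /usr/lib/x86_64-linux-gnu
```

On a live system you would feed it `$(krb5-config --libs)` instead of the hard-coded string.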

Creating a Fix

A simple fix is to link your Kerberos files into one of the predetermined paths. We will use our newly discovered path in a symbolic-link command:

sudo mkdir /usr/kerberos
sudo ln -s /usr/lib/x86_64-linux-gnu/mit-krb5/* /usr/kerberos

Now, since PHP looks in /usr/kerberos for your library files by default, when you run configure it will find them with haste.

The next time you run configure, it will not complain about missing Kerberos library files:

./configure --with-kerberos


It’s unfortunate that people don’t take the time to read output and perform some command-line-foo. Doing so really does give you the opportunity to learn, understand and troubleshoot more effectively.

PHP 5.4.15 Released!

New PHP Version Available

The PHP development team announces the immediate availability of PHP 5.4.15 and PHP 5.3.25. These releases fix about 10 bugs. All users of PHP are encouraged to upgrade to PHP 5.4. PHP 5.3.25 is recommended for those wishing to remain on the 5.3 series.



Some of the changes include:

  • Fixed bug #64578 (debug_backtrace in set_error_handler corrupts zend heap: segfault).
  • Fixed bug #64458 (dns_get_record result with string of length -1).
  • Fixed bug #64433 (follow_location parameter of context is ignored for most response codes).
  • Fixed bug #47675 (fd leak on Solaris).
  • Fixed bug #64577 (fd leak on Solaris).
  • Upgraded libmagic to 5.14.
  • Fixed Windows x64 version of stream_socket_pair() and improved error handling.
  • Fixed bug #64342 (ZipArchive::addFile() has to check for file existence).


You can get the new version from the PHP Downloads page.

Adding Video Support to WordPress Media Gallery

WordPress Galleries

WordPress comes out-of-the-box with hundreds of really well implemented features. It’s one of the reasons it is the leading blogging software used today. One of the features we will focus on is the built in WordPress Gallery.

To add a gallery to a post, you can follow the below steps:

Step 1: Click Add Media

Click Add Media

Step 2: Click Create Gallery and select your photos

Create Gallery

Step 3: Edit and Insert

Insert Into Post

As you may have noticed, though, you can only add images. If you have uploaded videos to your Media library, you cannot add them to a gallery.

The Fix:

The change we need to make is located within the function wp_ajax_query_attachments(), which lives in the file wp-admin/includes/ajax-actions.php. You can do a search for this function, or look at or around line 1835.

Note: The only way to easily support videos within a gallery through WordPress’s default media gallery is to edit a core file. That means that after an upgrade, you may lose this change.

Specifically, change line 1841:

's', 'order', 'orderby', 'posts_per_page', 'paged', 'post_mime_type',

to:

's', 'order', 'orderby', 'posts_per_page', 'paged',

Removing the 'post_mime_type' array index from this query removes the constraint that limits your media library to image/% MIME types.
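
If you script your deploys, the one-line change can be applied with sed; re-run it after every core upgrade, and note the path assumes a standard WordPress install layout:

```shell
# Drop 'post_mime_type' from the allowed query keys in ajax-actions.php
sed -i "s/'paged', 'post_mime_type',/'paged',/" wp-admin/includes/ajax-actions.php
```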


If you go back to a post, you can add or edit a gallery, and videos will now display. By default, thumbnails are not created for the videos you upload. I would suggest the Video Embed and Thumbnail Generator plugin to handle this.

Disabling the Dashboard in OSX

The OSX Dashboard

One of the features added to OSX many years back is the Dashboard. The Dashboard can be accessed either by pressing F12 or by setting up a finger gesture. Personally, I never found a use for the Dashboard, as it required an extra step to access the data, effectively destroying any workflow.

To Disable

Run the following commands in your Terminal to disable the Dashboard:

defaults write com.apple.dashboard mcx-disabled -boolean YES
killall Dock

To Enable

To enable the Dashboard, execute the following commands:

defaults write com.apple.dashboard mcx-disabled -boolean NO
killall Dock
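
If you flip this often, the two commands can be wrapped in a small shell function for your ~/.bash_profile (the function name is my own choice; it uses the same defaults key as above):

```shell
# Toggle the Dashboard: `dashboard YES` disables it, `dashboard NO` re-enables it
dashboard() {
    defaults write com.apple.dashboard mcx-disabled -boolean "$1"
    killall Dock
}
```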