cjhaas.com

Using WebPageTest with a custom IP address

Posted in Uncategorized by Chris Haas on November 20th, 2014

I recently needed to speed test a site that was almost ready to launch. Since the site required an SSL cert, we’d been doing our final tests with local hosts files. However, we wanted an external network’s view of our site, and for that we always use the wonderful http://www.webpagetest.org/. The obvious problem we ran into was that their test agents wouldn’t have access to our local DNS overrides.

Luckily the answer was simple: they have a great scripting interface along with very comprehensive documentation.

All we had to do was enter this into the Script tab:

setDns www.example.com 203.0.113.45
navigate https://www.example.com

WP 4.1 – Something to watch

Posted in WordPress by Chris Haas on November 18th, 2014

First, a very old bug/feature:
https://core.trac.wordpress.org/ticket/5809

Currently, if you have WP Category for fruits called “Apple” and a WP Tag for pies called “Apple” and a custom taxonomy such as “Computer Manufacturers” with an item called “Apple”, that “Apple” term is shared between everything. This isn’t much of a problem until you decide to rename the computer manufacturer to “Apple Computers”, at which point you’ve also renamed the fruit and pie. For the past seven years the only solution to this has been to create a new term instead and manually re-tag everything.
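To make the sharing concrete: in the database, all three taxonomies can point at a single row in wp_terms. A sketch of the relevant query (the table and column names are WordPress’s real schema; the term_id of 48 and the custom taxonomy slug are invented for illustration):

```sql
-- All three taxonomies reference the same wp_terms row, so renaming
-- the name in wp_terms renames it everywhere at once.
SELECT t.term_id, t.name, tt.taxonomy
FROM wp_terms t
JOIN wp_term_taxonomy tt ON tt.term_id = t.term_id
WHERE t.name = 'Apple';

-- term_id | name  | taxonomy
--      48 | Apple | category                (the fruit)
--      48 | Apple | post_tag                (the pie)
--      48 | Apple | computer_manufacturer   (the custom taxonomy)
```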

Last year one of the core developers, Andrew Nacin, outlined a roadmap for fixing this, with the expected final change to land in 4.x sometime in 2015:
https://make.wordpress.org/core/2013/07/28/potential-roadmap-for-taxonomy-meta-and-post-relationships/

One part of this roadmap, slated for 4.1, was that when the system detected a shared term during an edit, it would automatically break the share and create unique, unrelated terms. So if you edited the Apple computer term in any way (or even just hit save on it), WP would detect that it was used in multiple contexts and split the computer term off into a dedicated term of its own.

Sounds like a good plan, except it turns out a lot of people have been storing/caching the underlying ID of the shared term instead of the term itself. So developers have been storing “term 48” instead of just “apple”, since developers like numbers.

This is a potentially big breaking change, but luckily it was rolled back a couple of days ago:
https://core.trac.wordpress.org/changeset/30336

This was re-introduced, and in theory fixed, four days ago; however, I can still think of many edge cases that could break:
https://core.trac.wordpress.org/changeset/30344

Fun links of the day

Posted in Fun links of the day by Chris Haas on November 17th, 2014

Fun links of the day

Posted in Fun links of the day by Chris Haas on November 13th, 2014

Finding abandoned/no longer maintained plugins

Posted in Plugins,WordPress by Chris Haas on October 30th, 2014

After Matt’s State of the Word 2014 presentation I was talking to my host (QTH) about how Matt would like to encourage/pressure hosts into upgrading their PHP installs. My host told me that he’s tried this but it often results in broken sites because users don’t have their plugins updated in the first place and many are using plugins that are no longer maintained by anyone.

WordPress does a pretty good job of alerting you that a plugin is no longer maintained if you actually visit the WordPress plugin website, but once you’re in the backend you (shudder) have to actually read! And worse, once you have a plugin installed, you have no idea when the author last updated it unless you (double shudder) click through and read each individual plugin’s page. Who has the time?

Seriously, I actually do click and read, but most people don’t.

This led me to build a quick plugin that queries the WordPress API servers and asks when each plugin was last updated. You can download the Vendi Abandoned Plugin Check here. Version 1 will query the servers daily and update your plugin listing screen with the number of days since the last SVN check-in.

Unfortunately it needs WordPress 3.4 or greater because of improvements made to the built-in HTTP request methods. I spent a while trying to get it working in 3.3 but eventually had to give up.

If I get a chance, version 2 will add support for alerting on the plugin install screen itself for abandoned plugins. Version 3 will add a management screen where you can configure a couple of settings, like how old counts as abandoned (currently 365 days, half of WordPress’s own two-year threshold), and possibly exclude plugins from the check.
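The core of the check is simple: fetch the plugin’s info from the wordpress.org plugin API and compute the age of its last_updated date. A rough sketch of the idea in shell, with the API response stubbed out so the example is self-contained (the real plugin fetches this JSON over HTTP via WordPress’s built-in request functions):

```shell
# Stubbed plugin-info response; the real data comes from the wordpress.org
# plugin API via WordPress's HTTP functions.
response='{"slug":"example-plugin","last_updated":"2014-10-01"}'

# Pull the last_updated date out of the JSON (crude, but fine for a sketch).
last_updated=$(printf '%s' "$response" | grep -oP '"last_updated":"\K[0-9-]+')

# Days between the last update and "today" (pinned here so the math is
# stable); anything over 365 would count as abandoned under the default.
today='2014-11-20'
age_days=$(( ( $(date -u -d "$today" +%s) - $(date -u -d "$last_updated" +%s) ) / 86400 ))
echo "$age_days days since last update"
```

The real plugin does this per installed plugin on a daily schedule and caches the result, so the listing screen doesn’t hammer the API.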

WordPress VIP Scanner – Undefined index: slug

Posted in VIP,WordPress by Chris Haas on October 3rd, 2014

We just started playing with WordPress’s VIP scanning plugin but ran into an issue when trying to scan with the WordPress coding standards.

PHP Notice:  Undefined index: slug in /var/www/wp-content/plugins/vip-scanner-master/vip-scanner/checks/WordPressCodingStandardsCheck.php on line 173

After a little bit of digging we found that there’s a problem with the way the scanning tool parses the results of PHP_CodeSniffer (or there’s a problem with how PHP_CodeSniffer writes results, or with WordPress’s implementation of PHP_CodeSniffer rulesets; I guess it’s a matter of perspective). Since no one else seems to be mentioning this, I’m guessing the problem might be specific to my install, but I decided to document it here just in case.

Anyway, results are supposed to look like:

Line is indented with space not tab (WordPress.WhiteSpace.PhpIndent.Incorrect)
No space after opening parenthesis of function definition prohibited (WordPress.Functions.FunctionCallSignature)
No space before closing parenthesis of function definition prohibited (WordPress.Functions.FunctionCallSignature)

Or more generically:

WORDS(WORD PERIOD WORD PERIOD WORD PERIOD)

The regex that handles this is on line 41 of /vip-scanner/checks/WordPressCodingStandardsCheck.php:

private $sniffer_slug_regex = '/\((?P<slug>(\w+\.)+\w+)\)/';

For whatever reason, some rules are coming through ending in a period (or missing the last word, perspective again) and looking like this:

Detected access of super global var $_POST, probably need manual inspection. (WordPress.VIP.SuperGlobalInputUsage.)

Note the period inside of the parentheses.

In the regex above, (\w+\.)+ means “find one or more words that each end in a period”. This is then followed by \w+, which means “plus one more word”. Combined, this means “find a bunch of words separated by periods, not ending in a period”, which is why the rule above breaks.

The solution is to just switch to a simpler capture that uses a character class of [\w\.]+ which means “find a bunch of things that are words or periods”. Technically this matches A..B and .A.B as well as our desired A.B. but at least it doesn’t break anymore.

So the final solution is to change line 41 to:

private $sniffer_slug_regex = '/\((?P<slug>[\w\.]+)\)/';
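The behavior of the two patterns is easy to check from a shell. This sketch uses GNU grep’s -P (PCRE) mode, which follows the same regex rules as PHP’s preg_match:

```shell
msg='Detected access of super global var $_POST, probably need manual inspection. (WordPress.VIP.SuperGlobalInputUsage.)'

# Original pattern: the final \w+ insists the slug end in a word,
# so the trailing period prevents any match at all.
printf '%s\n' "$msg" | grep -oP '\((\w+\.)+\w+\)' || echo 'no match'

# Character-class pattern: words and periods in any order, so the
# trailing period is fine and the whole slug is captured.
printf '%s\n' "$msg" | grep -oP '\([\w.]+\)'
```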

Varnish as a frontend for a remote WordPress install – Part 2

Posted in nginx,PHP,Varnish,WordPress by Chris Haas on September 27th, 2014

Part 1 of this setup covered Nginx, MySQL, PHP and WordPress. This part covers configuring Varnish 4.0 on Ubuntu 14.04.

On varnish.chris.example.com

Install Varnish 4.0

  1. Add Varnish’s GPG key
    curl https://repo.varnish-cache.org/ubuntu/GPG-key.txt | sudo apt-key add -
  2. Add the source location
    echo "deb https://repo.varnish-cache.org/ubuntu/ trusty varnish-4.0" | sudo tee /etc/apt/sources.list.d/varnish-cache.list
  3. Update local cache
    sudo apt-get update
  4. Perform a version check to make sure that Varnish 4.0 is listed:
    sudo apt-get install -s varnish
  5. Install
    sudo apt-get install varnish

Configure Varnish

  1. Edit the main config file for Varnish
    sudo vi /etc/default/varnish
  2. Change the 6081 to 80 in this block.
    Before

    DAEMON_OPTS="-a :6081 \
                 -T localhost:6082 \
                 -f /etc/varnish/default.vcl \
                 -S /etc/varnish/secret \
                 -s malloc,256m"
    

    After

    DAEMON_OPTS="-a :80 \
                 -T localhost:6082 \
                 -f /etc/varnish/default.vcl \
                 -S /etc/varnish/secret \
                 -s malloc,256m"
    
  3. Restart Varnish
    sudo service varnish restart
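One more piece is worth calling out: the stock /etc/varnish/default.vcl still points its backend at localhost, so for this two-server setup Varnish also needs to be told where the WordPress box from Part 1 lives. A minimal sketch, assuming web.chris.example.com (this tutorial’s placeholder hostname) serves plain HTTP on port 80:

```vcl
vcl 4.0;

# Send cache misses to the WordPress/Nginx server set up in Part 1.
backend default {
    .host = "web.chris.example.com";
    .port = "80";
}
```

After editing the file, another sudo service varnish restart picks up the change.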

Varnish as a frontend for a remote WordPress install – Part 1

Posted in nginx,WordPress,WP-CLI by Chris Haas on September 27th, 2014

This tutorial assumes two sites, varnish.chris.example.com and web.chris.example.com. Both of my test sites will be created on DigitalOcean but on separate instances. Both will be running basic Ubuntu 14.04.

On web.chris.example.com

Install Nginx

  1. Add the dev repo so that we can get to 1.7.x:
    sudo apt-add-repository ppa:nginx/development
    sudo apt-get update
  2. Verify that we’ll get 1.7.x (check the version number returned; 1.7.5 as of this posting. If you see 1.4.6, you probably didn’t run apt-get update above)
    sudo apt-get -s install nginx
    
  3. Install Nginx
    sudo apt-get install nginx
    
  4. Verify that it was installed
    nginx -v

Install and configure PHP-FPM

  1. Install PHP-FPM
    sudo apt-get install php5-fpm
  2. Tweak PHP-FPM for security:
    sudo vi /etc/php5/fpm/php.ini
  3. Look for ;cgi.fix_pathinfo=1 and change it to cgi.fix_pathinfo=0
  4. Restart PHP-FPM
    sudo service php5-fpm restart
  5. Create a directory for our web files:
    sudo mkdir /var/www

Configure Nginx

  1. Edit the default site for Nginx (or whatever site you want to work on)
    sudo vi /etc/nginx/sites-available/default
  2. Erase and change to
    server {
        listen 80 default_server;
    
        root /var/www;
        index index.php;
    
        location / {
            try_files $uri $uri/ =404;
        }
    
        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
            fastcgi_index index.php;
            include fastcgi_params;
        }
    
    }
  3. Test Nginx:
    sudo nginx -t
  4. Restart Nginx
    sudo service nginx restart
  5. Create a quick test file
    echo "<?php phpinfo();" | sudo tee /var/www/index.php
    
  6. Use a web browser to confirm that PHP is running
  7. Remove the test file
    sudo rm /var/www/index.php

Install and configure MySQL

  1. Install
    sudo apt-get install mysql-server php5-mysql
  2. Configure
    sudo mysql_install_db
    sudo mysql_secure_installation
    

Install exim for email (optional)

  1. Install and configure
    sudo apt-get install exim4
    sudo dpkg-reconfigure exim4-config

Install WP-CLI

  1. Install PHP CLI
    sudo apt-get install php5-cli
  2. Download
    cd ~
    curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar
  3. Test
    php wp-cli.phar --info
  4. Make it executable
    chmod +x wp-cli.phar
  5. Make it available everywhere as just the command wp
    sudo mv wp-cli.phar /usr/local/bin/wp
  6. Confirm it works
    wp --info

Install WordPress via WP-CLI (optional, you can do it any way you want)

  1. Download WordPress
    cd /var/www
    wp core download
  2. Create a MySql database and user, replace the testXYZ parts as needed
    mysql -uroot -p -e "CREATE DATABASE testdb; GRANT ALL PRIVILEGES ON testdb.* TO testuser@localhost IDENTIFIED BY 'testpassword'; FLUSH PRIVILEGES;"
  3. Create a config file (replace the variables below as needed)
    wp core config --dbname=testdb --dbuser=testuser --dbpass=testpassword

Generate some sample content

  1. Generate posts
    wp post generate --count=10000


Notes

  1. Nginx runs as user and group www-data by default. For most of my installs I usually add my non-root account to the www-data group and also make that my default group
    sudo usermod -a -G www-data myaccounthere
    sudo usermod -g www-data myaccounthere

Nginx as a frontend for Elasticsearch on Ubuntu 14.04

Posted in Elasticsearch,nginx by Chris Haas on September 20th, 2014

Many of these steps come from this wonderful blog post here.

  1. Follow steps one through six here for installing Nginx.
  2. Install Java, either OpenJDK (option 1) or Oracle Java (option 2)
    1. sudo apt-get install openjdk-6-jre
    2. sudo add-apt-repository ppa:webupd8team/java
      sudo apt-get update
      sudo apt-get install oracle-java7-installer
  3. Test that Java is installed:
    java -version
  4. Download the deb file for Elasticsearch (below is 1.3.2, make sure you get the most recent one from the official location)
    wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.3.2.deb
  5. Install the deb file
    sudo dpkg -i elasticsearch-1.3.2.deb
  6. Set Elasticsearch to start on boot
    sudo update-rc.d elasticsearch defaults 95 10
  7. Start Elasticsearch
    sudo service elasticsearch start
  8. Test to see if it is running. NOTE: Elasticsearch sometimes takes a couple of seconds to spin up. If you get a message saying Failed to connect to localhost port 9200: Connection refused wait a couple of seconds and try again.
    curl localhost:9200
  9. Secure Elasticsearch to only allow local connections
    sudo vi /etc/elasticsearch/elasticsearch.yml
    1. Comment out (if they aren’t already) these two lines:
      #network.bind_host: #some_value
      #network.publish_host: #some_other_value 
    2. Uncomment and set this line:
      network.host: localhost
  10. Restart elasticsearch
    sudo service elasticsearch restart
  11. For Nginx we’re just going to modify the default site
    sudo vi /etc/nginx/sites-enabled/default
  12. Replace everything with the below, changing example.com with your domain
    server {
        listen 80;
        server_name example.com;
        location / {
            rewrite ^/(.*) /$1 break;
            proxy_ignore_client_abort on;
            proxy_pass http://localhost:9200;
            proxy_redirect http://localhost:9200 http://example.com/;
            proxy_set_header  X-Real-IP  $remote_addr;
            proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header  Host $http_host;
        }
    }

SHA-1 SSL Certs

Posted in Uncategorized by Chris Haas on September 19th, 2014

Google announced recently that they will start showing minor, and soon nasty, warnings for 100% valid certs that use the SHA-1 hash and expire in 2016. This is great from a security perspective but could be a headache for server administrators, especially ones that decided to purchase a long-expiring cert in order to save money.
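To check whether a cert you already have is affected, OpenSSL can print its signature algorithm; sha1WithRSAEncryption in the output means Google’s warnings will apply. As a self-contained sketch, this generates a throwaway self-signed cert first, purely so the inspection command has something to look at (the /tmp paths and subject are placeholders):

```shell
# Throwaway self-signed cert, only so the check below has input.
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=example.com' \
    -keyout /tmp/sha1-test.key -out /tmp/sha1-test.crt -days 1 2>/dev/null

# The actual check: print the certificate's signature/hash algorithm.
openssl x509 -noout -text -in /tmp/sha1-test.crt | grep -m1 'Signature Algorithm'
```

For a live site, the same check works against the served cert by piping openssl s_client -connect yourdomain:443 into openssl x509 -noout -text.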