Simple script to check SSL certificate expiration date

I came across this problem over the weekend. I needed to quickly find the expiry date for our new SHA-2 certificate. After some googling I patched together this script.

#!/bin/bash
OPENSSL="/usr/bin/openssl"
HOST=$1
PORT=$2

if [ "$HOST" == "" ]; then
 echo 'Usage: check.sh hostname.com [port]'
 exit 1
fi

if [ "$PORT" == "" ]; then
 PORT="443"
fi

CMD=`echo "" | $OPENSSL s_client -connect $HOST:$PORT 2>/dev/null | $OPENSSL x509 -enddate -noout 2>/dev/null | sed 's/notAfter=//'`

if [ "$CMD" != "" ]; then
 echo "$CMD"
else
 echo "Not an SSL secured site"
fi
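If you want to see what the pipeline is doing, the final sed step can be tried in isolation. openssl's -enddate output is a line of the form notAfter=<date>; the date below is made up for illustration:

```shell
# openssl x509 -enddate prints "notAfter=<date>"; the sed in the
# script strips the prefix, leaving just the date.
echo "notAfter=Dec 31 23:59:59 2025 GMT" | sed 's/notAfter=//'
# -> Dec 31 23:59:59 2025 GMT
```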
Configuring RSYNC for backups to AWS

Backups are important; we all understand this. Backups are also offered by most major services like Linode, which is a great failsafe. The problem when you need to use that backup to restore your system is that you are relying on Linode’s latest backup, whenever that was (usually within 24 hours). For some, a ~24-hour-old restore is not an issue, but what if you want something a little more recent and stored offsite on a server of your choosing? This is where RSYNC and AWS come into play.

If you have been anywhere near a server or a Linux command line, then you have probably heard of RSYNC. While it may have the stigma of being something only uber geeks use, it is very user-friendly and versatile. Let’s set up the scenario.


We have a custom build of code living in /srv/portal on Server A. We want to back up this code to our AWS server (Server B) every hour. First we need to verify that Server A can run RSYNC and connect to Server B without requiring a password. Thankfully, AWS requires that you set up an SSH key (.pem) file in order to connect. For our AWS instance we created an Ubuntu server, so our default user is ubuntu. Connecting via SSH looks something like this:

ssh -i ~/.ssh/AWS_key.pem ubuntu@123.123.123.123

So here is the breakdown:

ssh -i tells ssh that we want to point to a key file for our credentials. Our file is located in our home dir under .ssh/. Next we give our username (ubuntu) and the IP address of the server, pretty standard. If we connect successfully then we are good to go. Let’s move on to RSYNC!


Basic usage for RSYNC is:

rsync [OPTION...] [SRC] [DEST]

So, in our situation, we want to take everything in /srv/portal on Server A and put it in /srv/portal on Server B through ssh. Ok, deep breath, here we go.


rsync -avz -e "ssh -i /root/.ssh/AWS_key.pem" /srv/portal/ ubuntu@123.123.123.123:/home/srv/portal

Breakdown:

-a = archive mode. This is actually a shortcut for -rlptgoD which means:

  1. recurse into directories
  2. copy symlinks as symlinks
  3. preserve permissions
  4. preserve modification times
  5. preserve group
  6. preserve owner
  7. preserve device files, preserve special files

I don’t know about you, but I would much rather just type -a.
-v = increase verbosity

-z = compress file data during the transfer. This is great if you do not want to tarball everything first.

-e = specifies the remote shell to use, in our case we are telling RSYNC to use SSH. It is important to have the SSH command in quotes and also do not rely on shortcuts like “~/” for your home directory. The paths must be absolute.

Note the ending slash on /srv/portal/. This is important so that we are taking the files within the directory and not the directory as well. On Server B we place these files within the ubuntu user’s home directory, which is fine, but you could also create a symlink and place the files anywhere.


Test this on the command line and verify that everything is copying. Note that the first time you run this, EVERY file will be copied over. We are using RSYNC because after the first run it only copies the diffs of the files that have changed. This is much preferable to setting up an FTP script that copies all of the files every time.


Now that you have a working RSYNC command, it is time to put it into a cron job. On the majority of systems you will run crontab -e to edit your cron jobs. A lot of people get wary when trying to set up a cron job, but as long as you have Google, cron is not a big deal.

Taken straight from debian-administration.org

The format of these files is fairly simple to understand. Each line is a collection of six fields separated by spaces.

The fields are:

  1. The number of minutes after the hour (0 to 59)
  2. The hour in military time (24 hour) format (0 to 23)
  3. The day of the month (1 to 31)
  4. The month (1 to 12)
  5. The day of the week (0 or 7 is Sun, or use name)
  6. The command to run

More graphically they would look like this:

*     *     *     *     *  Command to be executed
-     -     -     -     -
|     |     |     |     |
|     |     |     |     +----- Day of week (0-7)
|     |     |     +------- Month (1 - 12)
|     |     +--------- Day of month (1 - 31)
|     +----------- Hour (0 - 23)
+------------- Min (0 - 59)

To run a script every hour on the hour, it would look like this:

# Run the `something` command every hour on the hour
0   *   *   *   * /sbin/something

So let’s consolidate our command into a script called backupToAws.sh.

#!/bin/bash     
rsync -avz -e "ssh -i /root/.ssh/AWS_key.pem" /srv/portal/ ubuntu@123.123.123.123:/home/srv/portal

Place the script in /usr/local/bin, make it executable with chmod +x, and then modify your cron job to point to it. Voila! Now every hour you will have an up-to-date version of /srv/portal on your AWS server.
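Putting the pieces together, the crontab entry (added via crontab -e) might look like this, assuming the script path above:

```
# Sync /srv/portal to AWS every hour, on the hour
0   *   *   *   *   /usr/local/bin/backupToAws.sh
```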

How to configure a RESTful Backbone.js and PHP API

Backbone.js is one of the many JavaScript frameworks out there. That’s fine, we’re not here to talk about that ;). When creating a RESTful PHP API, what we care about is how we receive the data and what our endpoints should look like. Typical API endpoints look like this:

HTTP methods

Collection, such as http://api.example.com/resources/

  GET: List the URIs and perhaps other details of the collection’s members.
  PUT: Replace the entire collection with another collection.
  POST: Create a new entry in the collection. The new entry’s URI is assigned automatically and is usually returned by the operation.
  DELETE: Delete the entire collection.

Element, such as http://api.example.com/resources/item17

  GET: Retrieve a representation of the addressed member of the collection, expressed in an appropriate Internet media type.
  PUT: Replace the addressed member of the collection, or if it does not exist, create it.
  POST: Not generally used. Treat the addressed member as a collection in its own right and create a new entry within it.
  DELETE: Delete the addressed member of the collection.

(Wikipedia)

Basically, these are “clean” URLs that our API knows what to do with and what is expected when they are hit. HTTP has many built-in methods, but there are five that we need to concern ourselves with and tell PHP what to do with once NGINX is configured.

NGINX Clean Endpoint Configuration

This is relatively straightforward but can be intimidating. Here is the configuration for my API.

location /api/v1 {
    index index.php;
    root /var/www/lifeboat/current;
    rewrite (.*) /api/v1/index.php?$query_string;

    location ~\.php$ {
        fastcgi_index index.php;
        try_files $uri =404;
        include /etc/nginx/fastcgi.conf;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
        include fastcgi_params;
    }
}

Let’s break this down.

  1. We are using DeployBot on this server, so when we push to GitHub the code deploys automatically. DeployBot places the latest code in a symlinked dir called “current”. Your setup will vary and will probably just be “/var/www/your-project”. The location “/api/v1” tells NGINX that anything requested from this URI should follow the specific rules below.
  2. The rewrite is where the magic happens. We are telling NGINX to take anything requested with that URI and use index.php as the index, taking any $query_string (api/v1/function/data/password) and passing it into index.php. Once we have the data passed in, we can take it and do what we want.
  3. The last location block is standard for using PHP. I am using PHP 7 on this server, and the fastcgi_pass might look different if you are using anything else.

HTTP Request Methods

For our API we need PHP to look at GET, PUT, POST, and DELETE. These are not the same as $_GET and $_POST; these are the special HTTP methods and need to be handled through $_SERVER['REQUEST_METHOD']. For example:

switch ($_SERVER['REQUEST_METHOD']) {
    case "POST":
    case "PUT":
        //POST or PUT something
        break;
    case "GET":
        //GET something
        break;
    case "DELETE":
        //DELETE something
        break;
    case "OPTIONS":
        header("HTTP/1.0 200");
        break;
}//Switch

This is pretty straightforward as well, but you will never get to this point unless you set your headers correctly.

HTTP Headers

Backbone requires that these HTTP methods be available on your server. Normally you would not want JavaScript to call functions on your server; that sounds like a bad idea, unless you want it to happen (you can take care of security elsewhere). In PHP you must set these headers before you send anything back.

    header('Access-Control-Allow-Origin: *');
    header("Access-Control-Allow-Credentials: true");
    header("Access-Control-Allow-Headers: Origin, X-Requested-With, Content-Type, Accept");
    header('Access-Control-Allow-Methods: POST, GET, OPTIONS, DELETE, PUT, PATCH, x-http-method-override');
    header('content-type: application/json; charset=utf-8');
    header('Access-Control-Max-Age: 86400');

After this, you should be able to simply return any data (JSON encoded), and if your Backbone.js front-end knows what to do with JSON then you are good to go.

Simple PHP SOAP Client

Recently I was asked to check email addresses against a client’s SOAP (Simple Object Access Protocol) server to verify data. I have worked with and created RESTful APIs but haven’t had much experience with creating a PHP SOAP client. Here is what I came up with.

What is SOAP?

SOAP is a messaging protocol that allows programs that run on disparate operating systems (such as Windows and Linux) to communicate using Hypertext Transfer Protocol (HTTP) and its Extensible Markup Language (XML).

Basically, SOAP is a protocol you use by sending an XML-formatted string and receiving an XML response. SOAP 1.2 was released in 2007, and since then we have largely moved on to RESTful JSON APIs. However, some legacy systems still require communicating over SOAP.

What information do I need?

We were given a document of example requests and responses. The key things I wanted to find were the endpoint and the request we need to send. The client has already given us the server endpoint (http://www.example.org/getInfo.asmx). Here is the example SOAP request.

<?xml version="1.0"?>

<soap:Envelope
xmlns:soap="http://www.w3.org/2003/05/soap-envelope/"
soap:encodingStyle="http://www.w3.org/2003/05/soap-encoding">

<soap:Body xmlns:m="http://www.example.org/stock">
  <m:GetStockPrice>
    <m:StockName>IBM</m:StockName>
  </m:GetStockPrice>
</soap:Body>

</soap:Envelope>

In this XML we are sending a GetStockPrice request, which requires a value for the StockName element.

PHP Code

To start out, I want to organize this so the XML lives in its own file as a variable. In a file called xmldata.php, place your XML request.

<?php
$xmlData = '<?xml version="1.0"?>

<soap:Envelope
xmlns:soap="http://www.w3.org/2003/05/soap-envelope/"
soap:encodingStyle="http://www.w3.org/2003/05/soap-encoding">

<soap:Body xmlns:m="http://www.example.org/stock">
  <m:GetStockPrice>
    <m:StockName>IBM</m:StockName>
  </m:GetStockPrice>
</soap:Body>

</soap:Envelope>';
?>

Note that there is no space at the beginning of the XML. The server will bark at you if you put a space in the beginning.

Next, create your main PHP file; I called mine soaprequest.php.

<?php
include ('xmldata.php');
$url = 'http://www.example.org/getInfo.asmx?WSDL';

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: text/xml'));
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, "$xmlData");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$output = curl_exec($ch);
curl_close($ch);

$response = preg_replace("/(<\/?)(\w+):([^>]*>)/", "$1$2$3", $output);
$xml = new SimpleXMLElement($response);
$body = $xml->xpath('//soapBody')[0];
$array = json_decode(json_encode((array)$body), TRUE); 

print_r($array);
?>


So, let’s break this down line by line.

include ('xmldata.php');
$url = 'http://www.example.org/getInfo.asmx?WSDL';

The include is letting us reference $xmlData that is set in xmldata.php. The next line is setting $url to the End Point.

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: text/xml'));
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, "$xmlData");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$output = curl_exec($ch);
curl_close($ch);

This is the heart of the procedure. We are using cURL to make this request. In line order, this is what is happening:

  1. curl_init creates a new cURL resource for us to use.
  2. SOAP header is set to Content-Type: text/xml.
  3. CURLOPT_POST is set to true, therefore we are calling for a standard HTTP POST.
  4. CURLOPT_POSTFIELDS is the full data to send in the HTTP POST. In this case, our formatted XML.
  5. Setting CURLOPT_RETURNTRANSFER to 1 forces cURL not to print out the results of its query; instead, curl_exec() returns the results as a string rather than the usual true/false.
  6. The variable $output is set to the response of the cURL execution.
  7. Close the cURL resource and free up system resources.

$response = preg_replace("/(<\/?)(\w+):([^>]*>)/", "$1$2$3", $output);
$xml = new SimpleXMLElement($response);
$body = $xml->xpath('//soapBody')[0];
$array = json_decode(json_encode((array)$body), TRUE); 
  1. We remove the “:” that separates the XML namespace prefixes from the element names.
  2. $xml is created as a new SimpleXMLElement from $response.
  3. The XML structure is navigated and captured at “soapBody” (which was “soap:Body” before we removed the “:”) and set to $body.
  4. $array is set to an array version of the XML. The json_decode(json_encode((array)$body), TRUE) trick converts the data from an object to an associative array.

Finally, we have an array with our response in it, which you can print_r or var_dump to see your results.
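The namespace-stripping step is easy to sanity-check outside PHP. Here is the same idea expressed with sed (the pattern is adapted to sed's extended regex flavor, and the envelope is shortened to one line for illustration):

```shell
# Drop the "prefix:" from every opening and closing tag, mirroring the
# preg_replace above: <soap:Body> becomes <soapBody>, <m:StockName>
# becomes <mStockName>, and so on.
echo '<soap:Body><m:StockName>IBM</m:StockName></soap:Body>' \
  | sed -E 's|(</?)([[:alnum:]]+):|\1\2|g'
# -> <soapBody><mStockName>IBM</mStockName></soapBody>
```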


Now, this isn’t very dynamic; we would have to hard-code the stock name every time. How about we take in a variable and replace it in the XML instead? That would look something like this:

soaprequest.php

<?php
include ('xmldata.php');
$url = 'http://www.example.org/getInfo.asmx?WSDL';
$stockName = 'IBM';
$replace = '###STOCKNAME###';
$xmlData = str_replace($replace, $stockName, $xmlData);

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: text/xml'));
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, "$xmlData");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$output = curl_exec($ch);
curl_close($ch);

$response = preg_replace("/(<\/?)(\w+):([^>]*>)/", "$1$2$3", $output);
$xml = new SimpleXMLElement($response);
$body = $xml->xpath('//soapBody')[0];
$array = json_decode(json_encode((array)$body), TRUE);

print_r($array);
?>


xmldata.php

<?php
$xmlData = '<?xml version="1.0"?>

<soap:Envelope
xmlns:soap="http://www.w3.org/2003/05/soap-envelope/"
soap:encodingStyle="http://www.w3.org/2003/05/soap-encoding">

<soap:Body xmlns:m="http://www.example.org/stock">
  <m:GetStockPrice>
    <m:StockName>###STOCKNAME###</m:StockName>
  </m:GetStockPrice>
</soap:Body>

</soap:Envelope>';
?>

Here a variable $stockName replaces the placeholder “###STOCKNAME###” in the XML. This value can come from a DB, $_GET, $_POST, or wherever.
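The same placeholder swap works outside PHP too, which is handy for testing the request by hand. A quick sketch with sed, using the ###STOCKNAME### convention defined above:

```shell
# Substitute the placeholder the same way str_replace() does
# in soaprequest.php.
XML='<m:StockName>###STOCKNAME###</m:StockName>'
echo "$XML" | sed 's/###STOCKNAME###/IBM/'
# -> <m:StockName>IBM</m:StockName>
```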

Tips:

Postman: I used Postman to test my request many times before I even started coding PHP. I wanted to make sure that my connection, request, and returned data all came back as expected. This just made debugging the code easier.

Command Line: This is just a simple command-line script for ease of testing. It would be super simple to put this on a web server and check $_POST or $_GET for the input.

Distributed Denial of Service Attacks: Four Best Practices for Prevention and Response

Good read on new DDoS methods and response from Carnegie Mellon.

“We have recently seen more sophisticated attacks, such as the recent Dyn attack. As IEEE Spectrum recently reported, “Attacking a DNS or a content delivery provider such as Dyn or Akamai in this manner gives hackers the ability to interrupt many more companies than they could by directly attacking corporate servers, because several companies shared Dyn’s network.”


Generally speaking, organizations should start planning for DDoS attacks in advance. It is much harder to respond after an attack is already under way. While DDoS attacks can’t be prevented, steps can be taken to make it harder for an attacker to render a network unresponsive.”