Monday, 14 December 2015

Continuous Deployment of a platform and its variants using githook and cron

Architecture

We have a prototypical webapp platform which has four variants: commodities, energy, minerals and wood. We use the Maven war overlay feature. Don't blame me, it was before my time.

This architecture, with a core platform and four variants, means that one commit to platform can result in five staging sites needing to be redeployed.

Continuous Integration

We have a fairly mature Continuous Integration setup, using Jenkins to build five projects on commit. The team is small enough that we also build each developer's fork. Broken builds on trunk are not common.

NB This setup does deploy broken builds. Use a pipeline if broken staging builds are a problem in themselves.
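
If that matters but a full pipeline feels heavy, a cheaper guard is a wrapper that asks Jenkins whether the last completed build was green before running the deployment script (redo, described below). A minimal sketch only, assuming a job named commodities on a Jenkins reachable at http://jenkins/ (both are placeholders, not our real setup):

#!/bin/bash
# Only redeploy if the last completed Jenkins build of this project succeeded.
JOB_URL="http://jenkins/job/commodities/lastCompletedBuild/api/json"
if curl -sf "$JOB_URL" | grep -q '"result":"SUCCESS"'; then
  ./redo
else
  echo "Last Jenkins build not green; skipping deployment" >&2
fi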

Of Martin Fowler's Continuous Integration checklist we score in every category but one:

  • Maintain a Single Source Repository
    Bitbucket.org
  • Automate the Build
    We build using Maven.
  • Make Your Build Self-Testing
    Maven runs our Spring tests and unit tests (coverage could be higher).
  • Everyone Commits To the Mainline Every Day
    I do; some with better memories keep longer-running branches.
  • Every Commit Should Build the Mainline on an Integration Machine
    Jenkins as a service from Cloudbees.
  • Fix Broken Builds Immediately
    We have a prominently displayed build wall. The approach described here deploys broken builds to staging, so we rely upon there being none.
  • Keep the Build Fast
    We have reduced the build to eight minutes locally and twenty-five minutes or so on Jenkins. This is not acceptable and does cause problems, such as tests not being run locally, but increasing the coverage and reducing the run time will not be easy.
  • Test in a Clone of the Production Environment
    There is no difference in kind between the production and development environments. Developers use the same operating system and deployment mechanism as is used on the servers.
  • Make it Easy for Anyone to Get the Latest Executable
    Jenkins deploys to a Maven snapshot repository.
  • Everyone can see what's happening
    Our Jenkins build wall is on display in the coding room.
  • Automate Deployment
    The missing piece, covered in this post.

Continuous, Unchecked, Deployment

Each project has an executable build file, redo:

mvn clean install -DskipTests=true
sudo ./reload

which calls the tomcat deployment script, reload:

service tomcat7 stop
rm -rf /var/lib/tomcat7/webapps/ROOT
cp target/ROOT.war /var/lib/tomcat7/webapps/
service tomcat7 start

We can use a githook to call redo whenever the project is updated:

cd .git/hooks
ln -s ../../redo post-merge

Add a line to the crontab using crontab -e:

*/5 * * * * cd checkout/commodities && git pull -q origin master

This polls for changes to the project code every five minutes; if a change is pulled, the post-merge hook fires and runs redo.

However we also need to redeploy each variant if a change is made to the prototype project, platform.

We can achieve this with a script, continuousDeployment:

#!/bin/bash

# Assumed to be invoked from derivative project which contains script redo

pwd=`pwd`
cd ../platform
git fetch > change_log.txt 2>&1
if [ -s change_log.txt ]
then
  # Installs by firing the githook post-merge
  git pull origin master
  cd $pwd
  ./redo
fi
rm change_log.txt
which is also invoked by a crontab:
*/5 * * * * cd checkout/commodities && ../platform/continuousDeployment
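
The two cron entries can also be staggered so that a variant's own pull and the platform check never fire in the same minute; a sketch of the combined crontab for one variant (the minute lists are arbitrary):

# Pull the variant itself on the tens...
0,10,20,30,40,50 * * * * cd checkout/commodities && git pull -q origin master
# ...and check the platform on the fives
5,15,25,35,45,55 * * * * cd checkout/commodities && ../platform/continuousDeployment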

Friday, 27 November 2015

Using multiple SSH keys with git

The problem I have is that I have multiple accounts with git hosting suppliers (GitHub and Bitbucket), but they both want to keep a one-to-one relationship between users and SSH keys.

For both accounts I am separating work and personal repositories.


In the past I have authorised all my identities on all my repositories; this has resulted in multiple identities being used within one repository, which makes the statistics look a mess.

The Solution

Generate an SSH key for your identity and store it in a named file, for example ~/.ssh/id_rsa_timp.
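
For example (the file name just matches the identity; whether to set a passphrase is up to you):

ssh-keygen -t rsa -f ~/.ssh/id_rsa_timp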

Add the public key to your GitHub or Bitbucket account.

Use an SSH config file, ~/.ssh/config:


Host bitbucket.timp
    HostName bitbucket.org
    User git
    IdentityFile ~/.ssh/id_rsa_timp
    IdentitiesOnly yes

You should now be good to go:

git clone git@bitbucket.timp:timp/project.git
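
To check that the right key is offered before doing any real work, use ssh's test login against the alias (both GitHub and Bitbucket accept this):

ssh -T git@bitbucket.timp
# or, to see which identity file is actually offered:
ssh -vT git@bitbucket.timp 2>&1 | grep -i offering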

Update .git/config

[remote "bitbucket"]
        url = git@bitbucket.timp:timp/wiki.git
        fetch = +refs/heads/*:refs/remotes/bitbucket/*
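
Or, rather than editing .git/config by hand, add the remote from the command line:

git remote add bitbucket git@bitbucket.timp:timp/wiki.git
git fetch bitbucket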

Tuesday, 15 September 2015

Read Maven Surefire Test Result files using Perl

When you want something quick and dirty it doesn't get dirtier, or quicker, than Perl.

We have four thousand tests and they are taking way too long. To discover why we need to sort the tests by how long they take to run and see if a pattern emerges. The test runtimes are written to the target/surefire-reports directory. Each file is named for the class of the test file and contains information in the following format:

-------------------------------------------------------------------------------
Test set: com.mycorp.MyTest
-------------------------------------------------------------------------------
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.03 sec


#! /usr/bin/perl -wall

my %tests;
open(RESULTS, "grep 'Tests run' target/surefire-reports/*.txt|"); 
while (<RESULTS>) { 
  s/\.txt//;
  s/target\/surefire-reports\///;
  s/Tests run:.+Time elapsed://;
  s/ sec//;
  s/,//;
  /^(.+):(.+)$/;
  $tests{$1} = $2; 
}
close(RESULTS);

my $cumulative = 0.0;
print("cumulative\ttime\tcumulative_secs\ttime_secs\ttest");
foreach my $key (sort {$tests{$a} <=> $tests{$b}} keys %tests) {
  $cumulative += $tests{$key};
  printf("%2d:%02d\t%2d:%02d\t%5d\t%5d\t%s\n", 
    ($cumulative/60)%60, $cumulative%60, 
    ($tests{$key}/60)%60, $tests{$key}%60,
    $cumulative, 
    $tests{$key}, 
    $key);
};
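
To run it, assuming the script is saved as surefire_times.pl in the module root (it expects target/surefire-reports to have been populated by a full mvn test run):

chmod +x surefire_times.pl
./surefire_times.pl > test_times.tsv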


The resultant tab-separated output can be viewed using a Google chart.

Tuesday, 8 September 2015

Tomcat7 User Config

Wouldn't it be nice if tomcat came with the following, commented out, in /etc/tomcat7/tomcat-users.xml?
<?xml version='1.0' encoding='utf-8'?>
<tomcat-users>
<role rolename="manager-gui" />
<role rolename="manager-status" />
<role rolename="manager-script" />
<role rolename="manager-jmx" />

<role rolename="admin-gui" />
<role rolename="admin-script" />

<user 
  username="admin" 
  password="admin" 
  roles="manager-gui, manager-status, manager-script, manager-jmx, admin-gui, admin-script"/>

</tomcat-users>
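
Having edited the file, tomcat needs a restart before the manager application (normally at http://localhost:8080/manager/html) will accept the new user:

sudo service tomcat7 restart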

Friday, 12 June 2015

Debian Release Code Names - Aide Mémoire

The name series for Debian releases is taken from characters in the Pixar/Disney film Toy Story.

Sid

The unstable release is always called Sid as the character in the film took delight in breaking his toys.

A backronym: Still In Development.

Stretch

The pending release is always called testing, though it will already have been christened. At the time of writing the testing release is Stretch.

Jessie

Release 8.0
2015/04/26

Jessie is the current stable release.

After a considerable while a release will migrate from testing to stable; it then becomes the Current Stable release and the previous version joins the head of the list of Obsolete Stable releases.

Wheezy

Release 7.0
2013/05/04

The current head of the list of Obsolete Stable releases.

Squeeze

Release 6.0
2011/02/06

Obsolete Stable release.

Lenny

Release 5.0
2009/02/14

Obsolete Stable release.

Etch

Release 4.0
2007/04/08

Obsolete Stable release.

Sarge

Release 3.1
2005/06/06

Obsolete Stable release.

Woody

Release 3.0
2002/07/19

Obsolete Stable release.

Potato

Release 2.2
2000/08/15

Obsolete Stable release.

Slink

Release 2.1
1999/03/09

Obsolete Stable release.

Hamm

Release 2.0
1998/07/24

Obsolete Stable release.

Bo

Release 1.3
1997/06/05

Obsolete Stable release.

Rex

Release 1.2
1996/12/12

Obsolete Stable release.

Buzz

Release 1.1
1996/06/17

Obsolete Stable release.

Earlier releases did not have code names.

Wednesday, 15 April 2015

Groovy: Baby Steps

Posting a form in groovy, baby steps. Derived from http://coderberry.me/blog/2012/05/07/stupid-simple-post-slash-get-with-groovy-httpbuilder/

@Grab(group='org.codehaus.groovy.modules.http-builder', module='http-builder', version='0.7' )
@Grab(group='org.codehaus.groovyfx', module='groovyfx', version='0.3.1')
import groovyx.net.http.HTTPBuilder
import groovyx.net.http.ContentType
import groovyx.net.http.Method
import groovyx.net.http.RESTClient

public class Post {

  public static void main(String[] args) {

    def baseUrl = "http://www.paneris.org/pe2/org.paneris.user.controller.LoginUser"

    def ret = null
    def http = new HTTPBuilder(baseUrl)

    http.request(Method.POST, ContentType.TEXT) {
      //uri.path = path
      uri.query = [db:"paneris", 
                  loginid:"timp", 
                  password:"password"]
      headers.'User-Agent' = 'Mozilla/5.0 Ubuntu/8.10 Firefox/3.0.4'

      response.success = { resp, reader ->
          println "response status: ${resp.statusLine}"
          println 'Headers: -----------'
          resp.headers.each { h ->
            println " ${h.name} : ${h.value}"
          }

          ret = reader.getText()

          println '--------------------'
          println ret
          println '--------------------'
      }
    }
  }
}
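
Save it as Post.groovy and run it with the groovy command; Grape fetches the @Grab dependencies on first run:

groovy Post.groovy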

Monday, 30 March 2015

Delete Jenkins job workspaces left after renaming

When a job is renamed, or moved from one slave to another, Jenkins copies the workspace and does not delete the original. This script can fix that.


#!/usr/bin/env python
import urllib, json, os, sys
from shutil import rmtree
 
url = 'http://jenkins/api/python?pretty=true'
 
data =  eval(urllib.urlopen(url).read())
jobnames = []
for job in data['jobs']:
    jobnames.append(job['name'])
 
def clean(path):
  builds = os.listdir(path)
  for build in builds:
    if build not in jobnames:
      build_path = os.path.join(path, build)
      print "removing dir: %s " % build_path
      rmtree(build_path)
 
clean(sys.argv[1])
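
It is Python 2 (urllib.urlopen and print statements) and the Jenkins URL is hard coded. Assuming it is saved as clean_workspaces.py, run it with the workspace root as the only argument; the path here is just an example:

python clean_workspaces.py /var/lib/jenkins/workspace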


Monday, 23 March 2015

Merging two sets of Jacoco Integration Test Coverage Reports for Sonar using Gradle

In a Jenkins build:


./gradlew cukeTest
./gradlew integTest
./gradlew sonarrunner

This leaves three .exec files behind, but SonarQube can only be given two coverage reports (unit and integration), so we merge the two integration test results into one.


task integCukeMerge(type: JacocoMerge) {
  description = 'Merge test code coverage results from feign and cucumber'
  // This assumes cuketests have already been run in a separate gradle session

  doFirst {
    delete destinationFile
    // Wait until integration tests have actually finished
    println start.process != null ? start.process.waitFor() : "In integCukeMerge tomcat is null"
  }

  executionData fileTree("${buildDir}/jacoco/it/")

}


sonarRunner {
  tasks.sonarRunner.dependsOn integCukeMerge
  sonarProperties {
    property "sonar.projectDescription", "A legacy codebase."
    property "sonar.exclusions", "**/fakes/**/*.java, **/domain/**.java, **/*Response.java"

    properties["sonar.tests"] += sourceSets.integTest.allSource.srcDirs

    property "sonar.jdbc.url", "jdbc:postgresql://localhost/sonar"
    property "sonar.jdbc.driverClassName", "org.postgresql.Driver"
    property "sonar.jdbc.username", "sonar"
    property "sonar.host.url", "http://sonar.we7.local:9000"


    def jenkinsBranchName = System.getenv("GIT_BRANCH")
    if (jenkinsBranchName != null) {
      jenkinsBranchName = jenkinsBranchName.substring(jenkinsBranchName.lastIndexOf('/') + 1)
    }
    def branch = jenkinsBranchName ?: ('git rev-parse --abbrev-ref HEAD'.execute().text ?: 'unknown').trim()

    def buildName = System.getenv("JOB_NAME")
    if (buildName == null) {
      property "sonar.projectKey", "${name}"
      def username = System.getProperty('user.name')
      property "sonar.projectName", "~${name.capitalize()} (${username})"
      property "sonar.branch", "developer"
    } else {
      property "sonar.projectKey", "$buildName"
      property "sonar.projectName", name.capitalize()
      property "sonar.branch", "${branch}"
      property "sonar.links.ci", "http://jenkins/job/${buildName}/"
    }

    property "sonar.projectVersion", "git describe --abbrev=0".execute().text.trim()

    property "sonar.java.coveragePlugin", "jacoco"
    property "sonar.jacoco.reportPath", "$project.buildDir/jacoco/test.exec"
    // feign results
    //property "sonar.jacoco.itReportPath", "$project.buildDir/jacoco/it/integTest.exec"
    // Cucumber results
    //property "sonar.jacoco.itReportPath", "$project.buildDir/jacoco/it/cukeTest.exec"
    // Merged results
    property "sonar.jacoco.itReportPath", "$project.buildDir/jacoco/integCukeMerge.exec"

    property "sonar.links.homepage", "https://github.com/${org}/${name}"
  }
}

Remember to use Overall Coverage in any Sonar Quality Gates!

Wednesday, 25 February 2015

An SSL Truster

Should you wish (when testing, say) to trust all SSL certificates:


import java.net.URL;

import javax.net.ssl.*;

import com.mediagraft.shared.utils.UtilsHandbag;

/**
 * Modify the JVM wide SSL trusting so that locally signed https urls, and others, are
 * no longer rejected.
 *
 * Use with care as the JVM should not be used in production after activation.
 *
 */
public class JvmSslTruster {

  private static boolean activated_;

  private static X509TrustManager allTrustingManager_ = new X509TrustManager() {

    public java.security.cert.X509Certificate[] getAcceptedIssuers() {
      return null;
    }

    public void checkClientTrusted(java.security.cert.X509Certificate[] certs, String authType) {
    }

    public void checkServerTrusted(java.security.cert.X509Certificate[] certs, String authType) {
    }
  };

  public static SSLSocketFactory trustingSSLFactory() {
    SSLContext sc = null;
    try {
      sc = SSLContext.getInstance("SSL");
      sc.init(null, new TrustManager[]{allTrustingManager_}, new java.security.SecureRandom());
      new URL(UtilsHandbag.getSecureApplicationURL()); //Force loading of installed trust manager
    }
    catch (Exception e) {
      throw new RuntimeException("Unhandled exception", e);
    }
    return sc.getSocketFactory();
  }

  public static void startTrusting() {
    if (!activated_) {
      HttpsURLConnection.setDefaultSSLSocketFactory(trustingSSLFactory());
      com.sun.net.ssl.HttpsURLConnection.setDefaultSSLSocketFactory(trustingSSLFactory());
      activated_ = true;
    }
  }

  private JvmSslTruster() {
  }
}

Tuesday, 17 February 2015

Restoring user groups once no longer in sudoers

Ubuntu thinks it is neat not to have a password on root. Hmm.

It is easy to remove yourself from all groups in Linux; I did it like this:

$ useradd -G docker timp

I thought that might add timp to the group docker, which indeed it does, but it also removes the user from adm, cdrom, lpadmin, sudo, dip, plugdev, video and audio, which they were previously in.

As you are no longer in sudoers you cannot add yourself back.

Getting a root shell using rEFIt

What we need now is a root shell. Googling did not help.

I have Ubuntu 14.04 LTS installed as a dual boot on my MacBookPro (2011). I installed it using rEFIt.

My normal boot is: power up, select rEFIt, select the most recent Ubuntu, press Enter.

To change the grub boot command, press F2 (or +) instead of Enter. You now have three options, one of which is single user: this hangs for me. Move to 'Standard Boot' and again press F2 rather than Enter. This lets you edit the kernel string. I tried adding Single, single, s, init=/bin/bash, init=/bin/sh, 5, 3 and finally 1.

By adding 1 to the grub kernel line you are telling the machine to boot up only to run level one.

This will boot to a root prompt! Now to add yourself back to your groups:

$ usermod -G docker,adm,cdrom,lpadmin,sudo,dip,plugdev,video,audio timp

Whilst we are here:

$ passwd root

Hope this helps, I hope I don't need it again!

Never use useradd, always use adduser.
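
For the record, the non-destructive ways to add an existing user to a single group, using the names from this post:

# Debian/Ubuntu adduser appends to the group, leaving other memberships alone
sudo adduser timp docker

# with usermod the -a (append) flag is the crucial difference
sudo usermod -aG docker timp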