Sunday, 14 November 2021

An Exception wrapper suitable for a RESTful API

User Story

As a third line support engineer

I want to be able to go to the server class that throws an exception reported by a client

So that I do not need to look for the stack trace in the server logs

Example

Client code

if (responseCode != 200) {
    throw new TaskException(
        "Error occurred while processing the scan response: " + 
        "response code: " + responseCode + 
        " response body: " + responseContent.getResponseBody());
} 

Server code

if (null != header && header.startsWith(BEARER)) {
    String token = header.substring(BEARER.length()).trim();
    try {
        final Jws<Claims> jws = Jwts.parser()
                .setSigningKeyResolver(jwtPublicKeyResolver)
                .setAllowedClockSkewSeconds(3)
                .parseClaimsJws(token);
    } catch (JwtException ex) {
        String errorMessage = "Invalid JWT token. ";
        setError(httpServletResponse, errorMessage + ex.getMessage());
        return;
    }
}

This results in the following being reported by the second line support agent monitoring the client logs:

[Error occurred while processing the scan response: response code: 401 response body: Invalid JWT token. Error accessing publickey Api]

What we, as Third Line Support, want is to know which server class throws the exception, ideally without grepping the code base or opening the server logs.

A better Exception message would be:

[Problem with scan response: status code: 401, body: com.corp.server.validation.JwtValidator.validate() line 72: JWT token Exception: Error accessing Public Key API]

This is the motivation for the StackAwareException, a wrapper exception which prefixes the message with the class, method and line number of the first element of the wrapped exception's stack trace.

See https://github.com/timp/StackAwareException
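
A minimal sketch of the idea, as an illustration only (the repository above holds the real implementation, which may differ):

public class StackAwareException extends Exception {

    public StackAwareException(String message, Throwable cause) {
        super(locate(cause) + message, cause);
    }

    // Render the top frame of the cause's stack trace as, for example,
    // "com.corp.server.validation.JwtValidator.validate() line 72: "
    private static String locate(Throwable cause) {
        StackTraceElement[] trace = cause.getStackTrace();
        if (trace.length == 0) {
            return "";
        }
        StackTraceElement top = trace[0];
        return top.getClassName() + "." + top.getMethodName()
                + "() line " + top.getLineNumber() + ": ";
    }
}

The server's catch block could then report new StackAwareException("JWT token Exception: " + ex.getMessage(), ex).getMessage() rather than the bare ex.getMessage(), yielding a body like the improved message above.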

Tuesday, 12 October 2021

Twenty Year Exit from the Oracle Ecosystem

The Oracle Ecosystem instance that I inherited was designed in 2000 and went live in 2004. Its centrepiece, Oracle Workflow, went end of life that same year, though new versions continued through to 2007. In its own way it was the pinnacle of a certain view of software, the vendor as a one-stop shop, in the same way as DEC and IBM had previously sold their systems.
You bought a whole set of components which were guaranteed to work together, with technical support. I probably would have made the same choice, even as late as 2000. For a while this was the largest system (by disk storage used) in Europe, housed in a purpose-built computer room. Then came AWS S3 cloud storage.
The 'replacement' system migrated eighty percent of the files out of the database, seemingly unaware of the eighty-twenty rule, and duplicated the data and the dataflows. It was a 'microservice architecture' joined by a single datastore. The users now had two systems to use and were still stuck with a 2000-vintage user interface. The next step was to stop using Oracle Forms and introduce a modern three-tier web application, choosing the new, shiny AngularJS as the front end and Hibernate on Java as the middleware.
This was understandably a big job: the UI was now on AWS but the data was still in a data centre (a commercial one by now). The need to quit the data centre motivated moving CMSDK (Content Management Software Development Kit) out of the database into separate, external email and SFTP handling systems, to reduce database size and enable security in the cloud.
The remaining steps are to finish the migration of files (who knew this would be the difficult bit) and to migrate from Oracle AQ (Advanced Queues (software naming error 101: avoid hubristic adjectives)) to AWS SQS (Simple Queue Service (software naming error 102: avoid indexical adjectives)). Note that we have to upgrade AngularJS to Angular just to stay still, and pop things into Kubernetes, just because.
This is clean, recognisable, manageable and stable. We could rest here for a while; but all that pain motivates completion:
The elephant is still in the room: the final step is to replace Oracle Workflow with Camunda.

Sunday, 1 November 2020

Letter to Anneliese Dodds MP: Support for the Labour Left

Dear Anneliese Dodds MP, 

I have voted for you in the last two elections, as a proxy for my support for Jeremy Corbyn, though your attendance and performance at the hustings arranged by the Stop the War Coalition was a reason for my support for you personally. 

For me to vote for you again I would need to see your support for the left of the Labour party, and the mass movement built by Jeremy Corbyn; should you instead side with the faction which has suspended him you will lose my support. 

best wishes 
Tim Pizey

Friday, 8 May 2020

Victory in Europe (VE) Day in Churchill's Toyshop

My grandfather, Norman Angier, worked at Churchill’s Toyshop (M.D.1) as the head civilian engineer during WWII.

On VE day “Norman Angier felt it was an occasion for fireworks. He therefore acquired a large batch of quite big rockets and proceeded to poop them off, selecting as his firing site a point at the summit of a concrete road which led down to the ranges and the CMP’s camp. Unlike the Guy Fawkes day rockets, these were not provided with sticks for poking into the ground to keep the bodies upright; but Norman had fixed up some sort of stand for doing this. All went well for a while and the show was most spectacular. Then Norman got careless. A rocket he had just initiated was not properly secured. It fell over, and instead of going up vertically proceeded at speed in the near horizontal plane. A weary CMP was walking along this road on his way back to the camp. The rocket struck him right on target. Luckily, he was not seriously hurt and we soon whipped him off to hospital. The trouble was to make him believe that the attack was not intentional. He had been the victim of a 1000 to 1 chance.” — The Firs

Friday, 22 February 2019

Clean git blame history

Place the following in a command file and run it within your repo:
#!/bin/sh
 
git filter-branch --env-filter '
 
an="$GIT_AUTHOR_NAME"
am="$GIT_AUTHOR_EMAIL"
cn="$GIT_COMMITTER_NAME"
cm="$GIT_COMMITTER_EMAIL"
 
if [ "$GIT_AUTHOR_EMAIL" = "timp@paneris.org" ]
then
    an="Tim Pizey"
    am="timp21337@paneris.org"
fi
 
if [ "$GIT_COMMITTER_EMAIL" = "timp@paneris.org" ]
then
    cn="Tim Pizey"
    cm="timp21337@paneris.org"
fi

export GIT_AUTHOR_NAME="$an"
export GIT_AUTHOR_EMAIL="$am"
export GIT_COMMITTER_NAME="$cn"
export GIT_COMMITTER_EMAIL="$cm"
'
Then run git push -f origin master. Note that this process is very slow, so do not repeat it for every change: gather all the changes you want to make and perform them in one pass.

Friday, 5 October 2018

Disaster Recovery: A Dynamic Redundancy Approach

The problem with disaster planning is that it is rarely rehearsed. The moment you need to retrieve a file from backup is the moment you discover that your backup has been broken for three months.

Modern cloud systems, based upon software defined infrastructure and redundant, auto-scaling fleets of micro-services, come with disaster recovery built in. They are designed to be resilient against DDoS attacks, unexpected peaks in usage and continent-wide unavailability.

Some systems have yet to migrate to outsourced infrastructure; some never will. For these systems we need a Disaster Recovery strategy which can be implemented at reasonable cost and ideally does not suffer from the 'fails when needed' feature of many backup systems. One answer is to hold regular fire drills. No one would dispute the importance of fire drills in saving lives and ensuring that people know what to do in a real fire; however, we all know there is a big difference between a rehearsal and the real thing.

The key insight of modern cloud architectures is that, because the infrastructure is defined in software, every instance of a system is identical (at a particular time).

We can reduce this to a minimal redundant system: a pair of identical systems with one designated Primary and the other Secondary, with a standard data mirroring link from Primary to Secondary.

To ensure that both elements of the pair really can function as the Primary you could rehearse a cutover one weekend.

But if the two systems really are identical then there is no reason to reverse the cutover at the end of the rehearsal. The old Secondary is the new Primary, the old Primary is the new Secondary. The Primary can be swapped at a periodicity the business is comfortable with, say twice a year.
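
As a sketch only (all names here are invented, not taken from any particular product), a cutover is a symmetric exchange of roles plus a reversal of the mirroring direction:

interface Node {
    void serveTraffic();      // act as Primary
    void standBy();           // act as Secondary
    void mirrorTo(Node peer); // replicate data towards the peer
}

public class RedundantPair {

    private Node primary;
    private Node secondary;

    public RedundantPair(Node primary, Node secondary) {
        this.primary = primary;
        this.secondary = secondary;
        primary.mirrorTo(secondary);
    }

    // The scheduled cutover: the old Secondary is the new Primary,
    // the old Primary is the new Secondary.
    public synchronized void swap() {
        Node newPrimary = secondary;
        secondary = primary;
        primary = newPrimary;
        primary.serveTraffic();
        secondary.standBy();
        primary.mirrorTo(secondary); // the mirroring link now runs the other way
    }
}

Because swap() is the same operation every time, the twice-yearly rehearsal and a real disaster recovery exercise exactly the same path.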

This Dynamic Redundancy strategy ensures that your Disaster Recovery works when you need it to and can be adjusted according to the business' appetite for risk.

Thursday, 13 September 2018

Direct access to SonarQube Postgresql Database

I want to change the name of a SonarQube project. This cannot be done through the UI without performing another analysis, but you can do it directly in SQL (see https://stackoverflow.com/questions/30511849/how-to-rename-a-project-in-sonarqube-5-1) if you can log in to the database. PostgreSQL is very secure by default. A quick fix is to edit /var/lib/pgsql/pg_hba.conf and change local connections from ident to trust:

# TYPE  DATABASE    USER        CIDR-ADDRESS          METHOD

# "local" is for Unix domain socket connections only
local   all         all                               trust
# IPv4 local connections:
host    all         all         127.0.0.1/32          trust
# IPv6 local connections:
host    all         all         ::1/128               trust

Now reload PostgreSQL so the change takes effect (e.g. pg_ctl reload, or restart the service), then connect:

psql -U sonarqube -W sonar
and finally:

UPDATE projects
SET name = 'NEW_PROJECT_NAME',
    long_name = 'NEW_PROJECT_NAME'
WHERE kee = 'PROJECT_KEY';

Remember to revert pg_hba.conf to ident, and reload again, when you are done.