Friday, April 6, 2018

Debian: OpenVPN Client Config

Getting your OpenVPN client up and running on Debian is easy.

Prerequisites: The openvpn.conf file from your VPN provider.

  1. apt-get install openvpn
  2. apt-get install resolvconf
  3. Make sure the following settings are enabled in your openvpn.conf (see the excerpt after this list):
    • script-security 2
    • up /etc/openvpn/update-resolv-conf
    • down /etc/openvpn/update-resolv-conf
  4. openvpn --config /path/to/openvpn.conf
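
With the settings from step 3 in place, the tail of the client config could look roughly like this (the actual remote, certificate and cipher lines come from your provider's file and are omitted here):

# excerpt from openvpn.conf - provider-specific directives omitted
script-security 2
up /etc/openvpn/update-resolv-conf
down /etc/openvpn/update-resolv-conf

The update-resolv-conf script ships with Debian's openvpn package and relies on the resolvconf package from step 2 to update your DNS settings while the tunnel is up.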

Saturday, December 30, 2017

Gradle Recipe: Building a Fat JAR

So far, I haven't seen a build tool that is easy to use, whether it's called Make, Maven or SBT. They're all very complex and far from easy and intuitive.

Since I've been struggling a lot with Gradle lately, this is the first post showing how I solved a (common) problem: building a fat JAR.

I don't know if I see things differently, but my first expectation of a build tool is that it builds my software with all of the configured dependencies. When using Gradle, you can choose between three dependency types:

  • compile
  • runtime
  • testCompile
As soon as you add a new dependency, you would probably declare it as a compile dependency, because you need it to compile your code. If you are adding TestNG or JUnit, you declare it as testCompile, because you only need it for your tests. The runtime configuration extends the compile configuration. My understanding of runtime is that I need this dependency in order to execute my code (for example in a production environment). The good thing is that it means exactly that. The bad thing is that it does not automatically package all your dependencies together so that you can ship your executable.

Let's say you use Apache Log4j2:

dependencies {
   runtime 'org.apache.logging.log4j:log4j-core:2.10.0'
}

From a user perspective, I would assume that this is everything I need to do - but it's not. If you try to execute the JAR that is being built, you'll get a ClassNotFoundException.

The solution is the following build.gradle file:

apply plugin: 'java'

test {
   useTestNG()
}

repositories {
   mavenCentral()
}

dependencies {
   compile 'org.apache.logging.log4j:log4j-api:2.10.0'
   compile 'org.apache.logging.log4j:log4j-core:2.10.0'
   testCompile group: 'org.testng', name: 'testng', version: '6.13.1'
}

task fatJar(type: Jar) {
   baseName = project.name
   from { configurations.compile.collect { it.isDirectory() ? it : zipTree(it) } }
   with jar
   manifest {
      attributes(
         'Main-Class': 'org.tobster.foo.GradleExample'
      )
   }
}

assemble.finalizedBy fatJar
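
For completeness, building and running could then look roughly like this (the JAR name under build/libs depends on your project name and version, so the path below is only a placeholder):

gradle clean assemble
java -jar build/libs/<project-name>.jar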

Since this is a very common problem, you'll find a lot of solutions, but this was the only one that worked for me. The most interesting part for me is that you have to write a custom Gradle task that assembles the fat JAR. Why do I have to do that? Wouldn't it be possible to have some kind of flag, e.g. fatJAR=true? Maybe that would be too easy.

Wednesday, September 13, 2017

Data Visualization with Grafana and Elasticsearch

If you want to store and visualize data, you have a lot of technologies to choose from. Two such technologies are Elasticsearch (ES) and Grafana. Something that I really like about ES is that it's very easy to use and needs no complicated setup or configuration. It just works out-of-the-box (production environments may require more sophisticated configuration). Even though Logstash and Kibana are probably the best-known companions for ES, especially when it comes to visualizing logging data, Grafana is also quite good for time-series data and keeps your technology stack simple. In the following, I will describe how to store data in Elasticsearch 5.6.0 through its HTTP API and visualize it with Grafana 4.4.3.

Elasticsearch

  1. Download ES 5.6.0 from the official site and unpack
  2. bin/elasticsearch
  3. The HTTP API is now available at http://localhost:9200
Elasticsearch is a document database that stores documents as JSON. Documents can be stored and retrieved via its HTTP API (there are also client libraries for the most popular programming languages). Furthermore, ES organizes documents into what are called indices and types. An index can be compared to a database in a relational DBMS, and a type can be compared to a table.
For example: you could have an index called customers with a type invoices and another type addresses. You can think of a type as something that logically belongs to an index (a customer has an address and one or more invoices).
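
Sticking with that example, inserting a document through the HTTP API could look roughly like this (the index, type, ID and field names are made up for illustration; the insert_time field is the one Grafana will use later):

~$ curl -X PUT 'http://localhost:9200/customers/invoices/1' -H 'Content-Type: application/json' -d '{"amount": 42.5, "insert_time": "2017-07-20T14:00:00"}'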

Grafana

  1. Download Grafana from the official site and unpack
  2. bin/grafana-server
  3. Web-Frontend can be found at http://localhost:3000
Configure a Data Source:
  1. Go to http://localhost:3000/datasources and click on "add data source"
  2. Enter a name for the data source and select "Elasticsearch" from the Type drop-down
  3. As long as you haven't changed the default ES configuration, enter "http://localhost:9200" in the Url field.
  4. Access must be set to "Proxy"
  5. Enter the name of your index (for this example, we need no pattern)
  6. Every JSON document you insert into ES must have an ISO date field, e.g. 2017-07-20T14:00:00. This is important because, as mentioned before, Grafana is designed to work with time-series data. Enter the name of the field that contains the ISO date into the "time field name" field (without a leading @). I called this field "insert_time".
  7. Since we are using ES 5.6.0, select version 5.x from the Version drop-down
Configure a Dashboard:
  1. Go to http://localhost:3000/?orgId=1 and click on "Create your first dashboard"
  2. Click on "Graph" to create a new graph dashboard
  3. Click on "Panel Title", then click on "Edit" (after that, the Graph configuration appears down below)
  4. In the "General" tab, you can give the dashboard a name
  5. In the top-right corner, select a time range (e.g. the last 90 days) for which you know there is some data
  6. If you wonder why you don't see anything, go to the "Display" tab and
    • Draw Modes: select "Lines"
    • Stacking & Null value: select "connected" for Null value
  7. If your JSON documents have a field with a numeric value that you want to visualize, you can also use the built-in aggregate functions: go to the "Metrics" tab, select for example "Average" as the metric, and under "Group by" select "Date Histogram" with your timestamp/date field.
That's it! Thank you for reading.

Sunday, April 24, 2016

Recording temperature values with CouchDB

After you have successfully installed CouchDB as described here, you can simply start it like this:

~$ couchdb start

This short example is built upon the official tutorial for the core API. Let's create a new database called temperatures.

~$ curl -X PUT http://127.0.0.1:5984/temperatures

Inserting new documents into the temperatures database is as simple as creating a database:

~$ curl -X PUT http://127.0.0.1:5984/temperatures/e7e56c6c-822f-442d-a707-973e20fc8d87 -d '{"celsius": "23.73", "timestamp": 1461326744.1631064}'

I used the Python3 uuid module to create the UUID e7e56c6c-822f-442d-a707-973e20fc8d87 which is used as the document ID. I also created a view called "by-timestamp" inside the design document with the help of Futon and wrote the following map function:

function(doc) {
  var celsius, timestamp;
  if (doc.celsius && doc.timestamp) {
    celsius = doc.celsius;
    timestamp = doc.timestamp;
    emit(timestamp, {temperature: celsius});
  }
}


Since the view can be parameterized with sort and filter options, it's possible to make a GET request that yields the newest document:

http://127.0.0.1:5984/temperatures/_design/foo/_view/by-timestamp?descending=true&limit=1
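
For reference, here is a small Python sketch that does the same insert and view query programmatically (it assumes the third-party requests library, which is not part of the original setup):

import time
import uuid

import requests  # assumption: third-party HTTP client, not used elsewhere in this post

BASE = "http://127.0.0.1:5984/temperatures"

# insert a reading under a freshly generated UUID, just like the curl command above
doc_id = str(uuid.uuid4())
requests.put(BASE + "/" + doc_id, json={"celsius": "23.73", "timestamp": time.time()})

# fetch the newest reading through the by-timestamp view
resp = requests.get(
    BASE + "/_design/foo/_view/by-timestamp",
    params={"descending": "true", "limit": 1},
)
print(resp.json())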

Furthermore, I created a show called "celsius" in the design document, which adds some HTML around the document content. To insert a show into the design document, the key "shows" must be added with the following value:

{
  "celsius": "function (doc, req) { return '<p>Temperature: ' + doc.celsius + '°C</p>'; }"
}

The following URL can be used to query the celsius show with a document ID:

http://127.0.0.1:5984/temperatures/_design/foo/_show/celsius/1644712d-e5ab-48c3-aaa7-ac6eb75b750b

That's it! An easy way to persist your data, although the concepts behind views and shows are a bit tricky.

Thursday, April 7, 2016

Handling null in Scala

When you are using Java APIs, you might find yourself in the situation that you have to deal with null. In plain Scala, you usually wouldn't use null, but an Option instead.

The apply method of the Option singleton object has a nice way to convert null to None (see Option.scala):
def apply[A](x: A): Option[A] = if (x == null) None else Some(x)

Hence, you can do Option(null) and you'll get None.

A simple example that handles null in a safe way:

val maybeNull = someJavaMethodThatMightReturnNull(42)
Option(maybeNull).foreach(println)

On the other hand, if you do Some(null) you'll get back Some(null).

Wednesday, March 30, 2016

Measuring water temperature in food-safe applications with Raspberry PI + PT100 sensor + Tinkerforge PTC Bricklet

I was looking for a temperature sensor (thermometer) that can be used with a Raspberry PI for measuring water temperature in food-safe (lebensmittelecht) applications.

The sensor should satisfy the following requirements:
  • food-safe
  • waterproof
  • heat resistant up to at least 100°C
Since I couldn't find any out-of-the-box solution for the Raspberry PI, I decided to look for a separate temperature sensor and then try to connect it to the Raspberry PI. I found the PT100 temperature sensor, which satisfies pretty much all of the given requirements.

The problem is that a PT100 sensor cannot just be connected to a Raspberry PI, because you need an analog-to-digital converter. Lucky as I am, the company Tinkerforge offers some very nice modules for exactly this situation. To connect a PT100 temperature sensor, you can use the Tinkerforge PTC Bricklet, which gives you the current temperature in Celsius as an integer number. It gets even better: Tinkerforge offers an easy-to-use API for the "most popular" programming languages - including Python.

To connect the Tinkerforge PTC Bricklet with the Raspberry PI, you also need the Tinkerforge Master Brick, which is capable of connecting up to four Bricklets and has a mini USB connector to connect it with a computer.

The first thing to do is to install brickd. Once brickd is up and running, I recommend changing the following two lines in /etc/brickd.conf:
  •  listen.address = 127.0.0.1
  •  authentication.secret = topsecret
First, only allow local connections, and second, specify a password for authentication (I was having problems when I disabled authentication). After brickd is configured properly, you need to find out the device identifiers (UIDs) of your Tinkerforge bricks in order to use them through the API. Tinkerforge already provides a sample Python script that lists all connected devices. Because we previously configured authentication, the sample script needs to be modified so that it authenticates against brickd. To do so, add the following method call after the line where the connect method is called:

ipcon.authenticate('topsecret')

Replace 'topsecret' with your authentication password. Before you can make use of the Tinkerforge API, you have to install the bindings for the language that you want to use. In our case, the easiest way to do that is to download the ZIP file with the Python bindings and copy the directory source/tinkerforge to your preferred location on your hard disk. To be able to import the bindings from a Python script, just copy the script to the same location as the tinkerforge directory. Once this is done, you can run the script and it will output all connected devices and their corresponding identifiers.
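
To give an idea of how the PTC Bricklet is used afterwards, here is a minimal sketch based on the Python bindings (the UID "XYZ" is a placeholder for the identifier printed by the enumerate script, and the secret must match the one from /etc/brickd.conf):

from tinkerforge.ip_connection import IPConnection
from tinkerforge.bricklet_ptc import BrickletPTC

HOST = "localhost"
PORT = 4223   # default brickd port
UID = "XYZ"   # placeholder: replace with the UID reported by the enumerate script

ipcon = IPConnection()
ipcon.connect(HOST, PORT)
ipcon.authenticate("topsecret")   # same secret as in /etc/brickd.conf

ptc = BrickletPTC(UID, ipcon)
# the bricklet reports the temperature as an integer in 1/100 °C steps
print(ptc.get_temperature() / 100.0)

ipcon.disconnect()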

Monday, September 14, 2015

Debian 8.2 "Jessie" - cryptsetup: lvm is not available

After rebooting my laptop, Debian 8.2 didn't want to boot and instead told me:

cryptsetup: lvm is not available

The Hitchhiker's Guide to the Galaxy says: "Don't Panic". After about 3-5 minutes, the built-in shell (initramfs) is loaded. You can leave the shell right away by typing exit, and the system will boot normally. I don't know the cause of the problem yet, but it also says:

modprobe: module ehci-orion not found in modules.dep