Tumblelog by Soup.io

May 17 2018

vanitasvitae's blog » englisch: Summer of Code: Bug found!


The mystery has been solved! I finally found out why the OpenPGP keys I generated for my project had a broken format. Turns out there was a bug in BouncyCastle.
Big thanks to Heiko Stamer, who quickly identified the issue in the bug report I created for pgpdump, as well as Kazu Yamamoto and David Hook, who helped identify and confirm the issue.

The bug was that BouncyCastle, when exporting a secret key without a password, appended 20 bytes of SHA-1 hash after the secret key material. That is only supposed to happen when the key is in fact password protected; in the case of unprotected keys, BouncyCastle is supposed to append a two-byte checksum instead. BouncyCastle's wrong behaviour caused pgpdump to interpret random bytes as packet tags, which resulted in a wrong key ID being printed out.

The relevant part of RFC-4880 is found in section 5.5.3:

      - If the string-to-key usage octet is zero or 255, then a two-octet
       checksum of the plaintext of the algorithm-specific portion (sum
       of all octets, mod 65536).  If the string-to-key usage octet was
       254, then a 20-octet SHA-1 hash of the plaintext of the
       algorithm-specific portion.
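This rule is small enough to sketch in code. The following helper (my own illustration, not BouncyCastle code) computes the trailer that belongs after the secret key material:

```python
import hashlib

def secret_key_trailer(material: bytes, s2k_usage: int) -> bytes:
    """Trailer after the algorithm-specific portion of a secret key,
    per RFC 4880 section 5.5.3 (function name is mine, for illustration)."""
    if s2k_usage == 254:
        # protected key: 20-octet SHA-1 hash of the plaintext key material
        return hashlib.sha1(material).digest()
    # usage octet 0 or 255: two-octet checksum (sum of all octets, mod 65536)
    return (sum(material) % 65536).to_bytes(2, "big")
```

BouncyCastle's bug was, in effect, taking the SHA-1 branch even for unprotected keys.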

Shortly after I filed a bug report for BouncyCastle, Vincent Breitmoser, one of the authors of XEP-0373 and XEP-0374, submitted a fix for the bug. This is a nice little example of how free software projects can work together to improve each other. Big thanks for that :)

Working OX Test Client!

I spent last night creating a command line chat client that can “speak” OX. Everything is a little rough around the edges, but the core functionality works.
The user has to perform actions like publishing and fetching keys by hand, but encrypted, signed messages can be exchanged. Now that I have working code, I can start to formulate a general API which will enable multiple OpenPGP back-ends. I will spend some more time polishing the client and eventually publish it in a separate git repository.


I totally forgot to talk about EFAIL in my last blog posts. It was a little shock to wake up on Monday, the first day of the coding phase, only to read sentences like “Are you okay?” or “Is the GSoC project in danger?” :D
I'm sure you all have read about the EFAIL attack somewhere in the media, so I'm not going into too much detail here (the EFF already did a great job *cough cough*). The EFAIL website describes the attack as follows:

“In a nutshell, EFAIL abuses active content of HTML emails, for example externally loaded images or styles, to exfiltrate plaintext through requested URLs.”

Is EFAIL applicable to XMPP?
Probably not to the XEPs I'm implementing. With email, it is relatively easy to prepend an image tag to the message. XEP-0373, however, specifies that the transported extension elements (e.g. the body of the message) are wrapped inside an additional extension element, which is then encrypted. Additionally, this element (e.g. <signcrypt/>) carries a padding element of random length and random content, so it is nearly impossible for an attacker to guess where the actual body starts, and in turn where they would have to insert an “extraction channel” (e.g. an image tag) into the message.
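For illustration, a signcrypt envelope as specified in XEP-0373 looks roughly like this before it is encrypted (the values here are made up; the rpad element carries the random-length, random-content padding):

```xml
<signcrypt xmlns='urn:xmpp:openpgp:0'>
  <to jid='juliet@example.org'/>
  <time stamp='2018-05-17T12:00:00Z'/>
  <rpad>c2hvcnQtcmFuZG9tLXBhZGRpbmc</rpad>
  <payload>
    <body xmlns='jabber:client'>This is a secret message.</body>
  </payload>
</signcrypt>
```

Since the whole element, padding included, is encrypted as one blob, an attacker splicing ciphertext cannot tell where the body element begins.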

In legacy OpenPGP for XMPP (XEP-0027) it is theoretically possible to execute at least the first part of the EFAIL attack: an attacker could prepend an image tag to turn the message into a link. However, external images are usually shared using XEP-0066 (Out of Band Data), which adds an x element in the oob namespace containing the image URL to the message. Note that this element is added outside the body, though, so we should be fine; the attack would only work if the user tried to open the linkified message in a browser :)

Another option for an attacker would be to target XHTML-IM (XEP-0071) messages, but I think those do not support legacy OpenPGP in the first place. Also, XHTML-IM was deprecated recently *phew*.

In the end, I'm by no means a security expert, so please do not quote me on my wild thoughts here :)
Still, it is important to learn from this example and not repeat the mistakes some email clients made.

Happy Hacking!

May 16 2018

vanitasvitae's blog » englisch: Summer of Code: Quick Update

I noticed that my blog posting frequency is substantially higher than last year's. For that reason I'll try to keep this post short.

Yesterday I implemented my first prototype code to encrypt and decrypt XEP-0374 messages! It can process incoming PubkeyElements (the published OpenPGP keys of other users) and create SigncryptElements which contain a signed and encrypted payload. On the receiving side it can also decrypt those messages and verify the signature.

I’m still puzzled about why I’m unable to dump the keys I generate using pgpdump. David Hook from Bouncycastle used my code to generate a key and it worked flawlessly on his machine, so I’m stumped for an answer…

I created a bug report about the issue on the pgpdump repository. I hope that we will get to the cause of the issue soon.

Changes to the schedule

In my original proposal I sketched out a timeline which now (given that I'm already making huge steps) looks a little underwhelming. The initial plan was to work on Smack's PubSub API within the first two weeks.
Florian suggested that I instead create a working prototype of my implementation as soon as possible, so I'm going to modify my schedule to meet the new criteria:

My new plan is to have a fully working prototype implementation by the time of the first evaluation (June 15th).
That prototype (implemented within a small command line test client) will be capable of the following things:

  • Storing keys in a rudimentary form on disk
  • Automatically creating keys if needed
  • Publishing public keys via PubSub
  • Fetching contacts' keys when needed
  • Encrypting and signing messages
  • Decrypting, verifying, and displaying incoming messages
The final goal is still to create a sane, modular implementation. I’m just slightly modifying the path that will take me there :) Happy Hacking!

May 14 2018

vanitasvitae's blog » englisch: Summer of Code: The Plan. Act 1: OpenPGP, Part Two

The Coding Phase has begun! Unfortunately my first (official) day of coding was a bad start, as I barely added any new code. Instead I got stuck on a very annoying bug. On the bright side, I'm now writing a lengthy blog post for you all, whining about my issue in all depth. But first, let's continue where we left off in the previous post.

In the last part of my OpenPGP TL;DR, I took a look at the different packet types an OpenPGP message may consist of. In this post, I'll examine the structure of OpenPGP keys.

Just like an OpenPGP message, a key pair is made up of a variety of sub packets. I will now list some of them.

Key Material

First of all, there are four different types of key material:

  • Public Key
  • Public Sub Key
  • Secret Key
  • Secret Sub Key

It should be clear what Public and Secret keys are.
OpenPGP defines a way to create key hierarchies in which sub keys belong to master keys. A typical use case for this is a user with multiple devices who doesn't want to risk losing their main key pair in case a device gets stolen. In such a case, they would create one master key (pair) with a bunch of sub keys, one for every device. The master key, which represents the identity of the user, is used to sign all sub keys. It then gets locked away and only the sub keys are used. The advantage of this model is that all sub keys clearly belong to the user, as all of them are signed by the master key. If one device gets stolen, the user can simply revoke that single sub key without losing all their reputation, since the master key is still valid.

I still have to determine, whether my implementation should support sub keys, and if so, how that would work.


Signatures

This model brings us directly to another, very important type of sub packet: the Signature Packet. A signature can be seen as a statement; how that statement is to be interpreted is defined by the type of the signature.
Three important types of signatures, all with different meanings, are:

  • Certification Signature
    Signatures are often used as attestations. If I sign your PGP key, for example, I attest to other users that I have, to some degree, verified that you are the person you claim to be and that the key belongs to you.
  • Sub Key Binding Signature
    If a key carries such a signature, the key is a sub key of the key that made the signature. In other words: the signature is a statement that the key which made the signature owns the signed key.
  • Direct-Key Signature
    This type of signature is mostly used to bind additional information, in the form of Signature Sub Packets, to a key. We will see later what kind of information that may be.
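For reference, RFC 4880 (section 5.2.1) assigns numeric codes to signature types. A few of them, summarized by hand (this mapping is my own shorthand, not taken from any library):

```python
# A few OpenPGP signature type codes from RFC 4880, section 5.2.1
SIGNATURE_TYPES = {
    0x10: "Generic certification of a User ID and Public-Key packet",
    0x13: "Positive certification of a User ID and Public-Key packet",
    0x18: "Subkey Binding Signature",
    0x1F: "Signature directly on a key",
}
```

Tools like pgpdump print these names together with the code, e.g. a positive certification shows up as type 0x13.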

Issuing signatures should not be taken too lightly. In the end, a signature is a statement that will be interpreted. Creating trust signatures on keys without verifying their authenticity, for example, may seriously harm ecosystems like the Web of Trust.

Signature Sub Packets

What’s the purpose of Signature Sub Packets?
Signature Sub Packets are used to bind information to a key. Examples are:

  • Creation and expiration dates
    It might be useful to know how long a key should be in use. For that purpose the owner of the key can set an expiration date, after which the key should no longer be used.
    It is also possible to let signatures expire.
  • Preferred Algorithms
    The user can state which algorithms (hashing, compression and symmetric encryption) they want their interlocutors to use when creating a message.
  • Revocation status
    It might be necessary to revoke a key that has been compromised. That can be done by placing a signature on it stating that the key should no longer be used.
  • Trust Signature
    If a signature contains a trust signature packet, the signature is to be interpreted as an attestation of trust in that key.
  • Primary User ID
    A user can specify the main user ID of a key.

User ID

User IDs state which identity of a user a key belongs to. That might be a real name, a company's name, an email address, or in our case a Jabber ID.
Sub keys can have different user IDs; that way a user can differentiate between different roles.


Trust

Trust can be expressed not only by a Trust Signature (mentioned before), but also by a Trust packet. The difference is that the signature is cryptographically backed, while a trust packet is merely an indicator.
Trust packets are mostly used by a user to keep track of which keys of their contacts they trust themselves.

There is currently no XEP specifying how trust decisions of a user are synchronized across multiple devices in the context of XMPP. #FutureWork? :)

Bouncycastle and (the not so painless) PGPainless

As I mentioned in my last post, I was able to generate an OpenPGP key pair using GnuPG (with the “--allow-freeform-uid” flag to allow the UID format used by XMPP). The next step was trying to generate keys on my own using Bouncycastle. Bouncy-gpg (the library I forked into PGPainless) does not offer convenient methods for creating keys, so that's one feature I'll add to PGPainless and hopefully upstream to Bouncy-gpg. I already created some basic builder structure for creating OpenPGP key pairs using RSA. In order to generate a key pair, the user would do this:

PGPSecretKeyRing secRing = BouncyGPG.createKeyPair()

Pretty easy, right? Behind the scenes, PGPainless generates the key pair using the following code:

KeyPairGenerator pbkcGenerator = KeyPairGenerator.getInstance(
        BuildPGPKeyGeneratorAPI.this.keyType, PROVIDER);

// Underlying public-key-cryptography key pair
KeyPair pbkcKeyPair = pbkcGenerator.generateKeyPair();

// Hash calculator
PGPDigestCalculator calculator = new JcaPGPDigestCalculatorProviderBuilder()
        .build().get(HashAlgorithmTags.SHA1);

// Form PGP key pair //TODO: Generalize "PGPPublicKey.RSA_GENERAL" to allow other crypto
PGPKeyPair pgpPair = new JcaPGPKeyPair(PGPPublicKey.RSA_GENERAL, pbkcKeyPair, new Date());

// Signer for creating self-signature
PGPContentSignerBuilder signer = new JcaPGPContentSignerBuilder(
        pgpPair.getPublicKey().getAlgorithm(), HashAlgorithmTags.SHA256);

// Encryptor for encrypting the secret key
PBESecretKeyEncryptor encryptor = passPhrase == null ?
        null : // unencrypted key pair, otherwise AES-256 encrypted
        new JcePBESecretKeyEncryptorBuilder(PGPEncryptedData.AES_256, calculator)
                .build(passPhrase);

// Mimic GnuPG's signature sub packets
PGPSignatureSubpacketGenerator hashedSubPackets = new PGPSignatureSubpacketGenerator();

// Key flags
hashedSubPackets.setKeyFlags(false, KeyFlags.CERTIFY_OTHER
                | KeyFlags.SIGN_DATA
                | KeyFlags.ENCRYPT_COMMS
                | KeyFlags.ENCRYPT_STORAGE
                | KeyFlags.AUTHENTICATION);

// Encryption Algorithms
hashedSubPackets.setPreferredSymmetricAlgorithms(false, new int[]{
        SymmetricKeyAlgorithmTags.AES_256,
        SymmetricKeyAlgorithmTags.AES_192,
        SymmetricKeyAlgorithmTags.AES_128,
        SymmetricKeyAlgorithmTags.TRIPLE_DES});

// Hash Algorithms
hashedSubPackets.setPreferredHashAlgorithms(false, new int[] {
        HashAlgorithmTags.SHA512,
        HashAlgorithmTags.SHA384,
        HashAlgorithmTags.SHA256,
        HashAlgorithmTags.SHA224,
        HashAlgorithmTags.SHA1});

// Compression Algorithms
hashedSubPackets.setPreferredCompressionAlgorithms(false, new int[] {
        CompressionAlgorithmTags.ZLIB,
        CompressionAlgorithmTags.BZIP2,
        CompressionAlgorithmTags.ZIP});

// Modification Detection
hashedSubPackets.setFeature(false, Features.FEATURE_MODIFICATION_DETECTION);

// Generator which the user can get the key pair from
PGPKeyRingGenerator ringGenerator = new PGPKeyRingGenerator(
        PGPSignature.POSITIVE_CERTIFICATION, pgpPair,
        BuildPGPKeyGeneratorAPI.this.identity, calculator,
        hashedSubPackets.generate(), null, signer, encryptor);

return ringGenerator;

Using the above code, I'm trying to create a key pair that is constructed exactly like a key generated by GnuPG. I do this mainly to make sure that I don't have any errors in my code. Also, GnuPG is an implementation of OpenPGP with a lot of reputation. If I do what they do, chances are that I might do it right ;D

Unfortunately I'm not quite sure whether I was successful with this method or not. To explain my uncertainty, let me show you the output of pgpdump, a tool used to analyse OpenPGP keys:

$pgpdump gnupg.sec
Old: Secret Key Packet(tag 5)(920 bytes)
    Ver 4 - new
    Public key creation time - Tue May  8 15:15:42 CEST 2018
    Pub alg - RSA Encrypt or Sign(pub 1)
    RSA n(2048 bits) - ...
    RSA e(17 bits) - ...
    RSA d(2046 bits) - ...
    RSA p(1024 bits) - ...
    RSA q(1024 bits) - ...
    RSA u(1024 bits) - ...
    Checksum - 3b 8c
Old: User ID Packet(tag 13)(23 bytes)
    User ID - xmpp:juliet@capulet.lit
Old: Signature Packet(tag 2)(334 bytes)
    Ver 4 - new
    Sig type - Positive certification of a User ID and Public Key packet(0x13).
    Pub alg - RSA Encrypt or Sign(pub 1)
    Hash alg - SHA256(hash 8)
    Hashed Sub: issuer fingerprint(sub 33)(21 bytes)
     v4 -    Fingerprint - 1d 01 8c 77 2d f8 c5 ef 86 a1 dc c9 b4 b5 09 cb 59 36 e0 3e
    Hashed Sub: signature creation time(sub 2)(4 bytes)
        Time - Tue May  8 15:15:42 CEST 2018
    Hashed Sub: key flags(sub 27)(1 bytes)
        Flag - This key may be used to certify other keys
        Flag - This key may be used to sign data
        Flag - This key may be used to encrypt communications
        Flag - This key may be used to encrypt storage
        Flag - This key may be used for authentication
    Hashed Sub: preferred symmetric algorithms(sub 11)(4 bytes)
        Sym alg - AES with 256-bit key(sym 9)
        Sym alg - AES with 192-bit key(sym 8)
        Sym alg - AES with 128-bit key(sym 7)
        Sym alg - Triple-DES(sym 2)
    Hashed Sub: preferred hash algorithms(sub 21)(5 bytes)
        Hash alg - SHA512(hash 10)
        Hash alg - SHA384(hash 9)
        Hash alg - SHA256(hash 8)
        Hash alg - SHA224(hash 11)
        Hash alg - SHA1(hash 2)
    Hashed Sub: preferred compression algorithms(sub 22)(3 bytes)
        Comp alg - ZLIB <RFC1950>(comp 2)
        Comp alg - BZip2(comp 3)
        Comp alg - ZIP <RFC1951>(comp 1)
    Hashed Sub: features(sub 30)(1 bytes)
        Flag - Modification detection (packets 18 and 19)
    Hashed Sub: key server preferences(sub 23)(1 bytes)
        Flag - No-modify
    Sub: issuer key ID(sub 16)(8 bytes)
        Key ID - 0xB4B509CB5936E03E
    Hash left 2 bytes - 87 ec
    RSA m^d mod n(2048 bits) - ...
        -> PKCS-1

Above you can see the structure of an OpenPGP RSA key generated by GnuPG. You can see its preferred algorithms, the XMPP UID of Juliet, and so on. Now let's analyse a key generated using PGPainless.

$pgpdump pgpainless.sec
Old: Secret Key Packet(tag 5)(950 bytes)
    Ver 4 - new
    Public key creation time - Mon May 14 15:56:21 CEST 2018
    Pub alg - RSA Encrypt or Sign(pub 1)
    RSA n(2048 bits) - ...
    RSA e(17 bits) - ...
    RSA d(2046 bits) - ...
    RSA p(1024 bits) - ...
    RSA q(1024 bits) - ...
    RSA u(1023 bits) - ...
    Checksum - 6d 18
New: unknown(tag 48)(173 bytes)
Old: Signature Packet(tag 2)(until eof)
    Ver 213 - unknown

Unfortunately the output indicates an unknown packet tag, and it looks like something is broken. I'm not sure what's going on, but I suspect either an error in my implementation or a bug in Bouncycastle. I noticed that the output of pgpdump changes drastically if I change the first boolean value in any of the hashedSubPackets setter calls from false to true (that boolean states whether the set value is “critical”, meaning that a receiving implementation should throw an error in case the property is unknown to it). If I do set it to true, the output looks even more disturbing and broken, with strange unicode symbols appearing, indicating a bug. Unfortunately my mail to the Bouncycastle mailing list is still unanswered, although I must add that I wrote it only seven hours ago.
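To see where nonsense like “unknown(tag 48)” comes from: pgpdump treats whatever octet it lands on as the start of a packet header. A sketch of the header decoding from RFC 4880, section 4.2 (my own helper, not pgpdump code):

```python
def packet_tag(first_octet: int):
    """Decode packet format and tag from the first octet of an
    OpenPGP packet header (RFC 4880, section 4.2)."""
    assert first_octet & 0x80, "bit 7 is always set in a packet header"
    if first_octet & 0x40:
        return "new", first_octet & 0x3F      # new format: tag in bits 5-0
    return "old", (first_octet >> 2) & 0x0F   # old format: tag in bits 5-2

print(packet_tag(0x95))  # ('old', 5): a Secret Key Packet, as in the dumps
print(packet_tag(0xF0))  # ('new', 48): the bogus tag pgpdump reported
```

So a single stray byte of the misplaced SHA-1 hash with the right high bits set is enough to derail the parser.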

It is a real pity that it is so hard to find working example code that is not outdated :( If you can point me in the right direction, please let me know! You can find contact details on my GitHub page.

My next step in debugging this will be to test whether an exported key can successfully be imported back into PGPainless, as well as into GnuPG. Apart from that, I will spend more time thinking about an API that allows different OpenPGP back-ends.

Happy Hacking!

DanielPocock.com - fsfe: A closer look at power and PowerPole

The crowdfunding campaign has so far raised enough money to buy a small lead-acid battery but hopefully with another four days to go before OSCAL we can reach the target of an AGM battery. In the interest of transparency, I will shortly publish a summary of the donations.

The campaign has been a great opportunity to publish some information that will hopefully help other people too. In particular, a lot of what I've written about power sources isn't just applicable for ham radio, it can be used for any demo or exhibit involving electronics or electrical parts like motors.

People have also asked various questions and so I've prepared some more details about PowerPoles today to help answer them.

OSCAL organizer urgently looking for an Apple MacBook PSU

In an unfortunate twist of fate while I've been blogging about power sources, one of the OSCAL organizers has a MacBook and the Apple-patented PSU conveniently failed just a few days before OSCAL. It is the 85W MagSafe 2 PSU and it is not easily found in Albania. If anybody can get one to me while I'm in Berlin at Kamailio World then I can take it to Tirana on Wednesday night. If you live near one of the other OSCAL speakers you could also send it with them.

If only Apple used PowerPole...

Why batteries?

The first question many people asked is why use batteries and not a power supply. There are two answers: portability and availability. Many hams like to operate their radios away from home. At an event, you don't always know in advance whether you will be close to a mains power socket; taking a battery eliminates that worry. Batteries also provide better availability in times of crisis: whenever there is a natural disaster, ham radio is often the first mode of communication to be re-established. Radio hams can operate their stations independently of the power grid.

Note that while the battery looks a lot like a car battery, it is actually a deep cycle battery, sometimes referred to as a leisure battery. This type of battery is often promoted for use in caravans and boats.

Why PowerPole?

Many amateur radio groups have already standardized on the use of PowerPole in recent years. The reason for having a standard is that people can share power sources or swap equipment around easily, especially in emergencies. The same logic applies when setting up a demo at an event where multiple volunteers might mix and match equipment at a booth.

WICEN, ARES / RACES and RAYNET-UK are some of the well known groups in the world of emergency communications and they all recommend PowerPole.

Sites like eBay and Amazon have many bulk packs of PowerPoles. Some are genuine, some are copies. In the UK, I've previously purchased PowerPole packs and accessories from sites like Torberry and Sotabeams.

The pen is mightier than the sword, but what about the crimper?

The PowerPole plugs for 15A, 30A and 45A are all interchangeable and they can all be crimped with a single tool. The official tool is quite expensive but there are many after-market alternatives like this one. It takes less than a minute to insert the terminal, insert the wire, crimp and make a secure connection.

Here are some packets of PowerPoles in every size:

Example cables

It is easy to make your own cables or to take any existing cables, cut the plugs off one end and put PowerPoles on them.

Here is a cable with banana plugs on one end and PowerPole on the other end. You can buy cables like this or if you already have cables with banana plugs on both ends, you can cut them in half and put PowerPoles on them. This can be a useful patch cable for connecting a desktop power supply to a PowerPole PDU:

Here is the Yaesu E-DC-20 cable used to power many mobile radios. It is designed for about 25A. The exposed copper section simply needs to be trimmed and then inserted into a PowerPole 30:

Many small devices have these round 2.1mm coaxial power sockets. It is easy to find a packet of the pigtails on eBay and attach PowerPoles to them (tip: buy the pack that includes both male and female connections for more versatility). It is essential to check that the devices are all rated for the same voltage: if your battery is 12V and you connect a 5V device, the device will probably be destroyed.

Distributing power between multiple devices

There are a wide range of power distribution units (PDUs) for PowerPole users. Notice that PowerPoles are interchangeable and in some of these devices you can insert power through any of the inputs. Most of these devices have a fuse on every connection for extra security and isolation. Some of the more interesting devices also have a USB charging outlet. The West Mountain Radio RigRunner range includes many permutations. You can find a variety of PDUs from different vendors through an Amazon search or eBay.

In the photo from last week's blog, I have the Fuser-6 distributed by Sotabeams in the UK (below, right). I bought it pre-assembled but you can also make it yourself. I also have a Windcamp 8-port PDU purchased from Amazon (left):

Despite all those fuses on the PDU, it is also highly recommended to insert a fuse in the section of wire coming off the battery terminals or PSU. It is easy to find maxi blade fuse holders on eBay and in some electrical retailers:

Need help crimping your cables?

If you don't want to buy a crimper or you would like somebody to help you, you can bring some of your cables to a hackerspace or ask if anybody from the Debian hams team will bring one to an event to help you.

I'm bringing my own crimper and some PowerPoles to OSCAL this weekend, if you would like to help us power up the demo there please consider contributing to the crowdfunding campaign.

May 13 2018

Evaggelos Balaskas - System Engineer: USBGuard



One of the most common security concerns (especially when traveling) is the attachment of unknown USB devices to our systems.

There are a few ways to protect your system.


Hardware Protection


Cloud Storage

More and more companies are now moving from local storage to cloud storage as a way to reduce the attack surface on systems:

A few days ago, IBM banned portable storage devices.


Hot Glue on USB Ports

We must also not forget the old but powerful advice from security researchers & hackers:

insert glue or use a hot glue gun to physically disable the USB ports of a system.

Problem solved!



I was reading the Red Hat 7.5 release notes and came across usbguard:



The USBGuard software framework helps to protect your computer against rogue USB devices (a.k.a. BadUSB) by implementing basic whitelisting / blacklisting capabilities based on device attributes.


USB protection framework

So the main idea is that you run a daemon on your system that watches the udev monitor. The idea is similar to a USB kill switch, but in a more controlled manner: you can dynamically whitelist and/or blacklist devices and change the policy on such devices more easily. You can also do all of that via a graphical interface, although I will not cover it here.


Archlinux Notes

For Archlinux users: you can find usbguard in the AUR (Arch User Repository)

AUR : usbguard

or you can try my custom PKGBUILDs files


How to use usbguard

Generate Policy

The very first thing is to generate a policy with the current attached USB devices.

sudo usbguard generate-policy

Below is an example of the output, showing my USB mouse & USB keyboard:

allow id 17ef:6019 serial "" name "Lenovo USB Optical Mouse" hash "WXaMPh5VWHf9avzB+Jpua45j3EZK6KeLRdPcoEwlWp4=" parent-hash "jEP/6WzviqdJ5VSeTUY8PatCNBKeaREvo2OqdplND/o=" via-port "3-4" with-interface 03:01:02

allow id 045e:00db serial "" name "Natural® Ergonomic Keyboard 4000" hash "lwGc9o+VaG/2QGXpZ06/2yHMw+HL46K8Vij7Q65Qs80=" parent-hash "kv3v2+rnq9QvYI3/HbJ1EV9vdujZ0aVCQ/CGBYIkEB0=" via-port "1-1.5" with-interface { 03:01:01 03:00:00 }

The default policy for already attached USB devices is allow.


We can create our rules configuration file by:

sudo usbguard generate-policy > /etc/usbguard/rules.conf



starting and enabling usbguard service via systemd:

systemctl start usbguard.service

systemctl enable usbguard.service


List of Devices

You can view the list of attached USB devices and their current policy:

sudo usbguard list-devices


Allow Device

Attaching a new USB device (in my case, my mobile phone):

$ sudo usbguard list-devices | grep -v allow

we will see that the default policy is to block it:

17: block id 12d1:107e serial "7BQDU17308005969" name "BLN-L21" hash "qq1bdaK0ETC/thKW9WXAwawhXlBAWUIowpMeOQNGQiM=" parent-hash "kv3v2+rnq9QvYI3/HbJ1EV9vdujZ0aVCQ/CGBYIkEB0=" via-port "2-1.5" with-interface { ff:ff:00 08:06:50 }

So we can allow it by:

sudo usbguard allow-device 17


We can verify that it is okay:

sudo usbguard list-devices | grep BLN-L21

17: allow id 12d1:107e serial "7BQDU17308005969" name "BLN-L21" hash "qq1bdaK0ETC/thKW9WXAwawhXlBAWUIowpMeOQNGQiM=" parent-hash "kv3v2+rnq9QvYI3/HbJ1EV9vdujZ0aVCQ/CGBYIkEB0=" via-port "2-1.5" with-interface { ff:ff:00 08:06:50 }


Block USB on screen lock

The default policy when you (or someone else) insert a new USB device can be queried with:

sudo usbguard get-parameter InsertedDevicePolicy

and is to apply the default policy we have. There is a way to block or reject any new USB device while your screen locker is active, as a device inserted at that moment may be a potential attack on your system. In theory, you insert USB devices while you are working on your system, not while your screen is locked.

I use slock as my primary screen locker, invoked via a keyboard shortcut. So the easiest way to dynamically change the default usbguard policy is via a shell wrapper:

vim /usr/local/bin/slock

#!/bin/sh
# ebal, Sun, 13 May 2018 10:07:53 +0300

# policy to apply while locked / after unlocking
POLICY_LOCKED="reject"
POLICY_UNLOCKED="apply-policy"

# function to revert the policy
revert() {
  usbguard set-parameter InsertedDevicePolicy ${POLICY_UNLOCKED}
}

# reject any new USB device while the screen is locked
usbguard set-parameter InsertedDevicePolicy ${POLICY_LOCKED}

# run the actual screen locker
/usr/bin/slock

# revert the policy after unlocking
revert

(You can find a similar example in Red Hat's blog post.)

Tag(s): usbguard, archlinux, redhat, usb

May 11 2018

Evaggelos Balaskas - System Engineer: CentOS Dist Upgrade

Upgrading CentOS 6.x to CentOS 7.x


Disclaimer: Create a recent backup of the system. This is an unofficial, unsupported procedure!


CentOS 6

CentOS release 6.9 (Final)
Kernel 2.6.32-696.16.1.el6.x86_64 on an x86_64

centos69 login: root
Last login: Tue May  8 19:45:45 on tty1

[root@centos69 ~]# cat /etc/redhat-release
CentOS release 6.9 (Final)


Pre Tasks

There are some tasks you can do to prevent unwanted results.

  • Disable selinux
  • Remove unnecessary repositories
  • Take a recent backup!


CentOS Upgrade Repository

Create a new centos repository:

cat > /etc/yum.repos.d/centos-upgrade.repo <<EOF
[centos-upgrade]
name=centos-upgrade
# unofficial upgrade packages, same tree as the openscap package below
baseurl=https://buildlogs.centos.org/centos/6/upg/x86_64/
enabled=1
gpgcheck=0
EOF

Install Pre-Upgrade Tool

First install the openscap version from buildlogs.centos.org:

# yum -y install https://buildlogs.centos.org/centos/6/upg/x86_64/Packages/openscap-1.0.8-1.0.1.el6.centos.x86_64.rpm

then install the redhat upgrade tool:

# yum -y install redhat-upgrade-tool preupgrade-assistant-*


Import CentOS 7 PGP Key

# rpm --import http://ftp.otenet.gr/linux/centos/RPM-GPG-KEY-CentOS-7 



to bypass errors like:

Downloading failed: invalid data in .treeinfo: No section: ‘checksums’

create mirrorlist files pointing to CentOS Vault:

 mkdir -pv /var/tmp/system-upgrade/base/ /var/tmp/system-upgrade/extras/  /var/tmp/system-upgrade/updates/

 echo http://vault.centos.org/7.0.1406/os/x86_64/       >  /var/tmp/system-upgrade/base/mirrorlist.txt
 echo http://vault.centos.org/7.0.1406/extras/x86_64/   >  /var/tmp/system-upgrade/extras/mirrorlist.txt
 echo http://vault.centos.org/7.0.1406/updates/x86_64/  >  /var/tmp/system-upgrade/updates/mirrorlist.txt 

These are enough to upgrade to 7.0.1406. You can add the mirrors below to upgrade to 7.5.1804:

More Mirrors

 echo http://ftp.otenet.gr/linux/centos/7.5.1804/os/x86_64/  >>  /var/tmp/system-upgrade/base/mirrorlist.txt
 echo http://mirror.centos.org/centos/7/os/x86_64/           >>  /var/tmp/system-upgrade/base/mirrorlist.txt 

 echo http://ftp.otenet.gr/linux/centos/7.5.1804/extras/x86_64/ >>  /var/tmp/system-upgrade/extras/mirrorlist.txt
 echo http://mirror.centos.org/centos/7/extras/x86_64/          >>  /var/tmp/system-upgrade/extras/mirrorlist.txt 

 echo http://ftp.otenet.gr/linux/centos/7.5.1804/updates/x86_64/  >>  /var/tmp/system-upgrade/updates/mirrorlist.txt
 echo http://mirror.centos.org/centos/7/updates/x86_64/           >>  /var/tmp/system-upgrade/updates/mirrorlist.txt 



preupg is actually a python script!

# yes | preupg -v 
Preupg tool doesn't do the actual upgrade.
Please ensure you have backed up your system and/or data in the event of a failed upgrade
 that would require a full re-install of the system from installation media.
Do you want to continue? y/n
Gathering logs used by preupgrade assistant:
All installed packages : 01/11 ...finished (time 00:00s)
All changed files      : 02/11 ...finished (time 00:18s)
Changed config files   : 03/11 ...finished (time 00:00s)
All users              : 04/11 ...finished (time 00:00s)
All groups             : 05/11 ...finished (time 00:00s)
Service statuses       : 06/11 ...finished (time 00:00s)
All installed files    : 07/11 ...finished (time 00:01s)
All local files        : 08/11 ...finished (time 00:01s)
All executable files   : 09/11 ...finished (time 00:01s)
RedHat signed packages : 10/11 ...finished (time 00:00s)
CentOS signed packages : 11/11 ...finished (time 00:00s)
Assessment of the system, running checks / SCE scripts:
001/096 ...done    (Configuration Files to Review)
002/096 ...done    (File Lists for Manual Migration)
003/096 ...done    (Bacula Backup Software)
/bin/tar: .: file changed as we read it
Tarball with results is stored here /root/preupgrade-results/preupg_results-180508202952.tar.gz .
The latest assessment is stored in directory /root/preupgrade .
Summary information:
We found some potential in-place upgrade risks.
Read the file /root/preupgrade/result.html for more details.
Upload results to UI by command:
e.g. preupg -u -r /root/preupgrade-results/preupg_results-*.tar.gz .

This must finish without any errors.


CentOS Upgrade Tool

First we need to find out what problems might occur during the upgrade:

# centos-upgrade-tool-cli --network=7


Then we can force the upgrade to the latest version:

# centos-upgrade-tool-cli --force --network=7



setting up repos...
base                                                          | 3.6 kB     00:00
base/primary_db                                               | 4.9 MB     00:04
centos-upgrade                                                | 1.9 kB     00:00
centos-upgrade/primary_db                                     |  14 kB     00:00
cmdline-instrepo                                              | 3.6 kB     00:00
cmdline-instrepo/primary_db                                   | 4.9 MB     00:03
epel/metalink                                                 |  14 kB     00:00
epel                                                          | 4.7 kB     00:00
epel                                                          | 4.7 kB     00:00
epel/primary_db                                               | 6.0 MB     00:04
extras                                                        | 3.6 kB     00:00
extras/primary_db                                             | 4.9 MB     00:04
mariadb                                                       | 2.9 kB     00:00
mariadb/primary_db                                            |  33 kB     00:00
remi-php56                                                    | 2.9 kB     00:00
remi-php56/primary_db                                         | 229 kB     00:00
remi-safe                                                     | 2.9 kB     00:00
remi-safe/primary_db                                          | 950 kB     00:00
updates                                                       | 3.6 kB     00:00
updates/primary_db                                            | 4.9 MB     00:04
.treeinfo                                                     | 1.1 kB     00:00
getting boot images...
vmlinuz-redhat-upgrade-tool                                   | 4.7 MB     00:03
initramfs-redhat-upgrade-tool.img                             |  32 MB     00:24
setting up update...
finding updates 100% [=========================================================]
(1/323): MariaDB-10.2.14-centos6-x86_64-client.rpm            |  48 MB     00:38
(2/323): MariaDB-10.2.14-centos6-x86_64-common.rpm            | 154 kB     00:00
(3/323): MariaDB-10.2.14-centos6-x86_64-compat.rpm            | 4.0 MB     00:03
(4/323): MariaDB-10.2.14-centos6-x86_64-server.rpm            | 109 MB     01:26
(5/323): acl-2.2.51-12.el7.x86_64.rpm                         |  81 kB     00:00
(6/323): apr-1.4.8-3.el7.x86_64.rpm                           | 103 kB     00:00
(7/323): apr-util-1.5.2-6.el7.x86_64.rpm                      |  92 kB     00:00
(8/323): apr-util-ldap-1.5.2-6.el7.x86_64.rpm                 |  19 kB     00:00
(9/323): attr-2.4.46-12.el7.x86_64.rpm                        |  66 kB     00:00
(320/323): yum-plugin-fastestmirror-1.1.31-24.el7.noarch.rpm  |  28 kB     00:00
(321/323): yum-utils-1.1.31-24.el7.noarch.rpm                 | 111 kB     00:00
(322/323): zlib-1.2.7-13.el7.x86_64.rpm                       |  89 kB     00:00
(323/323): zlib-devel-1.2.7-13.el7.x86_64.rpm                 |  49 kB     00:00
testing upgrade transaction
rpm transaction 100% [=========================================================]
rpm install 100% [=============================================================]
setting up system for upgrade
Finished. Reboot to start upgrade.



The upgrade procedure will download all RPM packages to a directory and create a new GRUB entry. On reboot, the system will try to upgrade the distribution release to its latest version.

# reboot 






CentOS 7

CentOS Linux 7 (Core)
Kernel 3.10.0-123.20.1.el7.x86_64 on an x86_64

centos69 login: root
Last login: Fri May 11 15:42:30 on ttyS0

[root@centos69 ~]# cat /etc/redhat-release
CentOS Linux release 7.0.1406 (Core)


Tag(s): centos, centos7

May 09 2018

vanitasvitae's blog » englisch: Summer of Code: Small steps

Yesterday I got my first results encrypting and decrypting OpenPGP messages using PGPainless, my fork of bouncy-gpg. There were some interesting hurdles that I want to discuss though.


As a first step towards working encryption and decryption, I obviously needed to create some PGP keys for testing purposes. As a regular user of OpenPGP I knew how to create keys using the command line tool GnuPG, so I started the key creation by typing “gpg --generate-key”. I chose RSA with a length of 2048 bits, as those are also the defaults recommended by GnuPG itself. When it came to entering user id information though, things got a little more complicated. GnuPG asks for the name of the user, their email address and a comment. XEP-0373 states that the user id packet of a PGP key MUST be of the format “xmpp:juliet@capulet.lit”. The first thing to figure out was whether I should enter that string as the name, the email address or the comment. I first tried the name, upon which GnuPG complained that neither the name nor the comment is allowed to contain an email address. Logically my next step was to enter the string as the user's email address. Again GnuPG complained, this time stating that “xmpp:juliet@capulet.lit” was not a valid email address. So I got stuck.

Luckily I knew that Philipp Hörist was working on an experimental OX plugin for Gajim. He pointed me to the process of “Unattended Key Generation“, which reads input values from a script file. This input is not validated by GnuPG as strictly as input to the wizard, so I was able to successfully create my testing keys. Big thanks for the help :)

Update: Apparently GnuPG provides an additional flag “--allow-freeform-uid”, which does exactly that: it allows uids of any form. Using that flag allows easy generation and editing of keys with freely chosen uids. Thanks to Wiktor from the Conversations.im chat room :)
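
For reference, an unattended generation run reads its parameters from a file passed to “gpg --batch --generate-key <file>”. A minimal parameter file might look like this (the exact values here are an assumption for illustration, not necessarily the project's actual settings; %no-protection creates the key without a passphrase):

```
%no-protection
Key-Type: RSA
Key-Length: 2048
Name-Real: xmpp:juliet@capulet.lit
Expire-Date: 0
%commit
```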


As a next step, I wrote a little JUnit test which signs and encrypts a small piece of text, then decrypts it and validates the signature. Here I came across my next problem.

Bouncy-gpg provides a class called Rfc4880KeySelectionStrategy, which is used to select keys from the user's keyring following a certain strategy. In my testing code, I created two keyrings for Romeo and Juliet and added their respective public and private keys like you would in a real life scenario. The issue I then encountered was that when I tried to encrypt my message from Juliet to Romeo's public key, I got the error that “no suitable key was found”. How could that be? I did some more debugging and was able to verify that the keys were in fact added to the keyrings just as I intended.

To explain the cause of this issue, I have to explain in a little more depth how the user id field is formatted in OpenPGP.
RFC4880 states the following:

A User ID packet consists of UTF-8 text that is intended to represent
the name and email address of the key holder.  By convention, it
includes an RFC 2822 [RFC2822] mail name-addr, but there are no
restrictions on its content.

A “mail name-addr” follows this format: “Juliet Capulet (The Juliet from Shakespeare's play) <juliet@capulet.lit>”.
First comes the name of the key owner, followed by a comment in parentheses, followed by the email address in angle brackets. The use of brackets makes it unambiguous which value is which, so all values are optional.
“<juliet@capulet.lit>” would still be a valid user id, for example.

So what was the problem?
The user id of my testing key looked like this: “xmpp:juliet@capulet.lit”. Note that there are no angle brackets or parentheses around the text, so the string is interpreted as a name.
Bouncy-gpg's Rfc4880KeySelectionStrategy however contained code which checks whether the query the user entered to search for a key is enclosed in angle brackets, following the email address format. In case it isn't, the code adds the angle brackets before executing the search. So instead of searching for “xmpp:juliet@capulet.lit”, the selection strategy would look for keys with the user id “<xmpp:juliet@capulet.lit>”.
My solution to the problem was to create my own KeySelectionStrategy, which leaves the query as is, so that it matches my key's user id. Figuring that out took me quite a while :D
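
The mismatch can be re-enacted in a few lines of Python (a simplified illustration of the behaviour, not bouncy-gpg's actual code):

```python
def rfc4880_query(query: str) -> str:
    # Simplified re-enactment of the selection strategy's behaviour:
    # wrap the query in angle brackets unless it already is wrapped.
    if not (query.startswith("<") and query.endswith(">")):
        query = "<" + query + ">"
    return query

def find_keys(user_ids, query):
    # Toy stand-in for searching a keyring by exact user-id match.
    return [uid for uid in user_ids if uid == query]

keyring = ["xmpp:juliet@capulet.lit"]

# The rewritten query no longer matches the key's user id:
assert rfc4880_query("xmpp:juliet@capulet.lit") == "<xmpp:juliet@capulet.lit>"
assert find_keys(keyring, rfc4880_query("xmpp:juliet@capulet.lit")) == []

# A strategy that leaves the query untouched finds the key again:
assert find_keys(keyring, "xmpp:juliet@capulet.lit") == keyring
```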


So what conclusions can I draw from my experiences?
First of all, I'm not sure if it is a good idea to give the user the ability to import their own PGP keys. GnuPG's behaviour of forbidding user ids which don't follow the mail name-addr format will make it very hard for a user to create a key with a valid user id. (Update: Users can use the flag “--allow-freeform-uid” to generate new keys and edit existing ones with unconventional uids.) Philipp Hörist suggested that implementations of XEP-0373 should instead create a key for the user on first use, and I think I agree with him. As a logical next step I have to figure out how to create PGP keys using Bouncycastle :D

I hope you liked my little update post, which grew longer than I expected :D

Happy Hacking!

May 08 2018

vanitasvitae's blog » englisch: Summer of Code: The plan. Act 1: OpenPGP


OpenPGP (specified in RFC4880) defines a format for encrypted and signed data, as well as encryption keys and signatures.

My main problem with the specification is that it is very noisy. The document is 90 pages long and describes every aspect an implementer needs to know about, from how big numbers are stored, through which magic bits and bytes mark special regions in a packet, to recommendations about algorithms. Since I'm not going to write a crypto library from scratch, the first step I have to take is to identify which parts are important for me as a user of a – let's call it mid-level API – and which parts I can ignore. You can see this posting as a hopefully somewhat entertaining piece of jotting paper which I use to note down important parts of the spec while I go through the document.

Let's start with a short TL;DR of the OpenPGP specification.
The basic process of creating an encrypted message is as follows:

  • The sender provides a plaintext message
  • That message gets encrypted with a randomly generated symmetric key (called session key)
  • The session key then gets encrypted for each recipients public key and the resulting block of data gets prepended to the previously encrypted message
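
The three steps above can be sketched in a toy Python snippet (the "cipher" here is a stand-in built from SHA-256, NOT a real OpenPGP algorithm, and public-key wrapping is stubbed with symmetric keys, purely to illustrate the structure):

```python
import os, hashlib

def toy_cipher(key: bytes, data: bytes) -> bytes:
    # XOR with a SHA-256 based keystream -- illustration only.
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# 1. The sender provides a plaintext message.
plaintext = b"O Romeo, Romeo, wherefore art thou Romeo?"

# 2. The message gets encrypted with a randomly generated session key.
session_key = os.urandom(32)
ciphertext = toy_cipher(session_key, plaintext)

# 3. The session key gets "encrypted" once per recipient (stubbed here;
#    OpenPGP would use each recipient's public key, e.g. RSA).
recipient_keys = {"romeo": os.urandom(32), "nurse": os.urandom(32)}
encrypted_session_keys = {
    name: toy_cipher(key, session_key) for name, key in recipient_keys.items()
}

# Any single recipient can recover the session key and then the message.
recovered = toy_cipher(recipient_keys["romeo"], encrypted_session_keys["romeo"])
assert toy_cipher(recovered, ciphertext) == plaintext
```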

As you can see, an OpenPGP message consists of multiple parts, called packets. There is a pretty respectable number of packet types specified in the RFC. Many of them are not very interesting, so let's identify the few which are relevant for our project.

  • Public-Key Encrypted Session Key Packets
    Those packets represent a session key encrypted with the public key of a recipient.
  • Signature Packets
    Digital signatures are used to provide authenticity. If a piece of data is signed using the secret key of the sender, the recipient is able to verify its origin and authenticity. There is a whole load of different signature sub-packets, so for now we just acknowledge their existence without going into too much detail.
  • Compressed Data Packets
    OpenPGP provides the feature of compressing plaintext data prior to encrypting it. This might come in handy, since encrypting files or messages adds quite a bit of overhead. Compressing the original data can compensate that effect a little bit.
  • Symmetrically Encrypted Data Packets
    This packet type represents data which has been encrypted using a symmetric key (in our case the session key).
  • Literal Data Packets
    The original message we want to encrypt is referred to as literal data. The literal data packet consists of metadata like encoding of the original message, or filename in case we want to encrypt a file, as well as – of course – the data itself.
  • ASCII Armor (not really a Packet)
    Encrypted data is represented in binary form. Since one big use case of OpenPGP encryption is email messaging though, it is necessary to bring the data into a form which can be transported safely. The ASCII Armor is an additional layer which encodes the binary data using Base64. It also makes the data identifiable for humans by adding a readable header and footer. XEP-0373 forbids the use of ASCII Armor though, so let's focus on other things instead :D

Those packet types can be nested, as well as concatenated in many different ways. For example, a common constellation would consist of a Literal Data Packet of our original message, which is, along with a Signature Packet, contained inside of a Compressed Data Packet to save some space. The Compressed Data Packet is nested inside of a Symmetrically Encrypted Data Packet, which lives inside of an OpenPGP message along with one or more Public-Key Encrypted Session Key Packets.

Each packet carries additional information, for example which compression algorithm is used in the Compressed Data Packet. We will not focus on those details, as we assume that the libraries we use will already handle those specifics for us.

OpenPGP also specifies a way to store and exchange keys. In order to be able to receive encrypted messages, a user must distribute their keys to other users. A key can carry a lot of additional information, like identities and signatures of other keys. Signatures are used to create trust networks like the web of trust, but we will most likely not dive deeper into that.

Signatures on keys can also be used to create key hierarchies like super keys and sub-keys. It still has to be determined, if and how those patterns will be reflected in my code. I can imagine it would be useful to only have sub-keys on mobile devices, while the main super key is hidden away from the orcs in a bunker somewhere, but I also think that it would be a rather complicated task to add support for sub-keys to my project. We will see ;)

That’s it for Part 1 of my sighting of the OpenPGP RFC.

Happy Hacking!

May 07 2018

Losca: Converting an existing installation to LUKS using luksipc - 2018 notes

Time for a laptop upgrade. Encryption was still not the default for the new Dell XPS 13 Developer Edition (9370) that shipped with Ubuntu 16.04 LTS, so I followed my own notes from 3 years ago together with the official documentation to convert the unencrypted OEM Ubuntu installation to LUKS during the weekend. This only took under 1h altogether.

On this new laptop model, EFI boot was already in use, Secure Boot was enabled and the SSD had GPT from the beginning. The only thing I wanted to change thus was the / to be encrypted.

Some notes for 2018 to clarify what is needed and what is not needed:
  • Before luksipc, remember to resize existing partitions to have 10 MB of free space at the end of the / partition, and also create a new partition of e.g. 1 GB for /boot.
  • To get the code and compile luksipc on Ubuntu 16.04.4 LTS live USB, just apt install git build-essential is needed. cryptsetup package is already installed.
  • After luksipc finishes and you've added your own passphrase and removed the initial key (slot 0), it's useful to cryptsetup luksOpen it and mount it still under the live session - however, when using ext4, the mounting fails due to a size mismatch in ext4 metadata! This is simple to correct: sudo resize2fs /dev/mapper/root. Nothing else is needed.
  • I mounted both the newly encrypted volume (to /mnt) and the new /boot volume (to /mnt2 which I created), and moved /boot/* from the former to the latter.
  • I edited /etc/fstab of the encrypted volume to add the /boot partition
  • Mounted as following in /mnt:
    • mount -o bind /dev dev
    • mount -o bind /sys sys
    • mount -t proc proc proc
  • Then:
    • chroot /mnt
    • mount -a # (to mount /boot and /boot/efi)
    • Edited files /etc/crypttab (added one line: root UUID none luks) and /etc/default/grub (I copied over my overkill configuration that specifies all of cryptopts and cryptdevice some of which may be obsolete, but at least one of them and root=/dev/mapper/root is probably needed).
    • Ran grub-install ; update-grub ; mkinitramfs -k all -c (notably no other parameters were needed)
    • Rebooted.
  • What I did not need to do:
    • Modify anything in /etc/initramfs-tools.
If the passphrase input shows on your next boot, but your correct passphrase isn't accepted, it's likely that the initramfs wasn't properly updated yet. I first forgot to run the mkinitramfs command and faced this.

DanielPocock.com - fsfe: Powering a ham radio transmitter

Last week I announced the crowdfunding campaign to help run a ham radio station at OSCAL. Thanks to all those people who already donated or expressed interest in volunteering.

Modern electronics are very compact and most of what I need to run the station can be transported in my hand luggage. The two big challenges are power supplies and antenna masts. In this blog post there are more details about the former.

Here is a picture of all the equipment I hope to use:

The laptop is able to detect incoming signals using the RTL-SDR dongle and up-converter. After finding a signal, we can then put the frequency into the radio transmitter (in the middle of the table), switch the antenna from the SDR to the radio and talk to the other station.

The RTL-SDR and up-converter run on USB power and a phone charger. The transmitter, however, needs about 22A at 12V DC. This typically means getting a large linear power supply or a large battery.

In the photo, I've got a Varta LA60 AGM battery, here is a close up:

There are many ways to connect to a large battery. For example, it is possible to use terminals like these with holes in them for the 8 awg wire or to crimp ring terminals onto a wire and screw the ring onto any regular battery terminal. The type of terminal with these extra holes in it is typically sold for car audio purposes. In the photo, the wire is 10 awg superflex. There is a blade fuse along the wire and the other end has a PowerPole 45 plug. You can easily make cables like this yourself with a PowerPole crimping tool, everything can be purchased online from sites like eBay.

The wire from the battery goes into a fused power distributor with six PowerPole outlets for connecting the transmitter and other small devices, for example, the lamp in the ATU or charging the handheld:

The AGM battery in the photo weighs about 18kg and is unlikely to be accepted in my luggage, hence the crowdfunding campaign to help buy one for the local community. For many of the young people and students in the Balkans, the price of one of the larger AGM batteries is equivalent to about one month of their income so nobody there is going to buy one on their own. Please consider making a small donation if you would like to help as it won't be possible to run demonstrations like this without power.

May 04 2018

DanielPocock.com - fsfe: GoFundMe: errors and bait-and-switch

Yesterday I set up a crowdfunding campaign to purchase some equipment for the ham radio demo at OSCAL.

It was the first time I tried crowdfunding and the financial goal didn't seem very big (a good quality AGM battery might only need EUR 250) so I only spent a little time looking at some of the common crowdfunding sites and decided to try GoFundMe.

While the campaign setup process initially appeared quite easy, it quickly ran into trouble after the first donation came in. As I started setting up bank account details to receive the money, errors started appearing:

I tried to contact support and filled in the form, typing a message about the problem. Instead of sending my message to support, it started trying to show me long lists of useless documents. Finally, after clicking through several screens of unrelated nonsense, another contact form appeared and the message I had originally typed had been lost in their broken help system and I had to type another one. It makes you wonder, if you can't even rely on a message you type in the contact form being transmitted accurately, how can you rely on them to forward the money accurately?

When I finally got a reply from their support department, it smelled more like a phishing attack, asking me to give them more personal information and email them a high resolution image of my passport.

If that was really necessary, why didn't they ask for it before the campaign went live? I felt like they were sucking people in to get money from their friends and then, after the campaign gains momentum, holding those beneficiaries to ransom and expecting them to grovel for the money.

When a business plays bait-and-switch like this and when their web site appears to be broken in more ways than one (both the errors and the broken contact form), I want nothing to do with them. I removed the GoFundMe links from my blog post and replaced them with direct links to Paypal. Not only does this mean I avoid the absurdity of emailing copies of my passport, but it also cuts out the five percent fee charged by GoFundMe, so more money reaches the intended purpose.

Another observation about this experience is the way GoFundMe encourages people to share the link to their own page about the campaign and not the link to the blog post. Fortunately in most communication I had with people about the campaign I gave them a direct link to my blog post and this makes it easier for me to change the provider handling the money by simply removing links from my blog to GoFundMe.

While the funding goal hasn't been reached yet, my other goal, learning a little bit about the workings of crowdfunding sites, has been helped along by this experience. Before trying to run something like this again I'll look a little harder for a self-hosted solution that I can fully run through my blog.

I've told GoFundMe to immediately refund all money collected through their site so donors can send money directly through the Paypal donate link on my blog. If you would like to see the ham radio station go ahead at OSCAL, please donate, I can't take my own batteries with me by air.

May 03 2018

DanielPocock.com - fsfe: Turning a dictator's pyramid into a ham radio station

(Update: due to concerns about GoFundMe, I changed the links in this blog post so people can donate directly through PayPal. Anybody who tried to donate through GoFundMe should be receiving a refund.)

I've launched a crowdfunding campaign to help get more equipment for a bigger and better ham radio demo at OSCAL (19-20 May, Tirana). Please donate if you would like to see this go ahead. Just EUR 250 would help buy a nice AGM battery - if 25 people donate EUR 10 each, we can buy one of those.

You can help turn the pyramid of Albania's former communist dictator into a ham radio station for OSCAL 2018 on 19-20 May 2018. This will be a prominent demonstration of ham radio in the city center of Tirana, Albania.

Under the rule of Enver Hoxha, Albanians were isolated from the outside world and used secret antennas to receive banned television transmissions from Italy. Now we have the opportunity to run a ham station and communicate with the whole world from the very pyramid where Hoxha intended to be buried after his death.

Donations will help buy ham and SDR equipment for communities in Albania and Kosovo and assist hams from neighbouring countries to visit the conference. We would like to purchase deep-cycle batteries, 3-stage chargers, 50 ohm coaxial cable, QSL cards, PowerPole connectors, RTL-SDR dongles, up-convertors (Ham-it-up), baluns, egg insulators and portable masts for mounting antennas at OSCAL and future events.

The station is co-ordinated by Daniel Pocock VK3TQR from the Debian Project's ham radio team.

Donations of equipment and volunteers are also very welcome. Please contact Daniel directly if you would like to participate.

Any donations in excess of requirements will be transferred to one or more of the hackerspaces, radio clubs and non-profit organizations supporting education and leadership opportunities for young people in the Balkans. Any equipment purchased will also remain in the region for community use.

Please click here to donate if you would like to help this project go ahead. Without your contribution we are not sure that we will have essential items like the deep-cycle batteries we need to run ham radio transmitters.

vanitasvitae's blog » englisch: Summer of Code: Preparations

During preparations for my GSoC project, I'm finding first traces left by developers who dealt with OpenPGP before. It seems that Florian was right when he noted that there is a lack of usable higher level Java libraries, as I found out when I stumbled across this piece of code. On the other hand I also found a project which strives to simplify OpenPGP encryption as much as possible. bouncy-gpg – while apparently laying its focus on file encryption and GnuPG compatibility – looks like a promising candidate for Smack's OX module. Unfortunately its code contained some very recent Java features like stream semantics, but I managed to modify its source code to make it compatible down to Android's API level 9, the version Smack is currently targeting. My changes will eventually be upstreamed.

While my next target is now to create a very basic prototype of OX encryption, I'm also reading into OpenKeychain's OpenPGP API. It would be very nice to create a universal interface that allows for OX encryption using multiple backends – BouncyCastle on pure Java systems and SpongyCastle / OpenKeychain on Android.

During my work on OX providers for Smack, I stumbled across an interesting issue. When putting a body element as a child into a signcrypt element, the body did not include its namespace, as it is normally only used as a child of the message element. Putting it into a signcrypt element created a special edge case: when a provider tries to parse the body element, it would falsely interpret the missing namespace as the one of the parent element. Florian provided the solution to this problem by modifying the “toXML()” method of all elements to require an enclosing namespace. Now the body is able to include its namespace in the XML in case the enclosing namespace is different from “jabber:client”.
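
For illustration, a signcrypt content element along the lines of XEP-0373's examples (the address and timestamp are placeholders); note the nested body carrying its explicit namespace:

```xml
<signcrypt xmlns='urn:xmpp:openpgp:0'>
  <to jid='juliet@example.org'/>
  <time stamp='2018-05-04T15:21:47.5Z'/>
  <payload>
    <body xmlns='jabber:client'>This is a secret message.</body>
  </payload>
</signcrypt>
```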

Happy Hacking!

April 29 2018

Evaggelos Balaskas - System Engineer: DNS RPZ with PowerDNS

Domain Name Service Response Policy Zones

from PowerDNS Recursor documentation :

Response Policy Zone is an open standard developed by Paul Vixie (ISC and Farsight) and Vernon Schryver (Rhyolite), to modify DNS responses based on a policy loaded via a zonefile.

Sometimes it is called: DNS Firewall

Reading Material

aka useful links:


An example scheme to get a better understanding of the concept behind RPZ.



The main purposes of implementing DNS RPZ in your DNS infrastructure are to dynamically DNS-sinkhole:

  • Malicious domains,
  • Implement government regulations,
  • Prevent users from visiting domains that are blocked for legal reasons.

by maintaining a single RPZ zone (or many) or even getting a subscription from another cloud provider.

Although for SOHO environments I suggest reading this blog post: Removing Ads with your PowerDNS Resolver, and customizing it to your needs.

RPZ Policies

These are the RPZ Policies we can use with PowerDNS.

  • Policy.Custom (default policy)
  • Policy.Drop
  • Policy.NXDOMAIN
  • Policy.NODATA
  • Policy.Truncate
  • Policy.NoAction


Policy.Custom will return a NoError, CNAME answer with the value specified with
defcontent; when looking up the result of this CNAME, RPZ is not taken into account.

Use Case

Modify the DNS responses for a list of domains, pointing them to a specific sinkhole DNS record.


  thisismytestdomain.com.org ---> sinkhole.example.net.
*.thisismytestdomain.com.org ---> sinkhole.example.net.
  example.org                ---> sinkhole.example.net.
*.example.org                ---> sinkhole.example.net.
  example.net                ---> sinkhole.example.net.
*.example.net                ---> sinkhole.example.net.

DNS sinkhole record

Create an explicit record outside of the DNS RPZ scheme.

A type A Resource Record in a domain zone is okay, or use an explicit hosts file that the resolver can read. In the PowerDNS Recursor, the configuration for this consists of these two lines:


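The two recursor.conf lines appear to have been lost from this post; based on the PowerDNS Recursor's documented settings they are presumably the following (the file path is an assumption matching the echo command below):

```
export-etc-hosts=on
etc-hosts-file=/etc/pdns-recursor/hosts.blocked
```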

$ echo " sinkhole.example.net" >> /etc/pdns-recursor/hosts.blocked

and reload the service.


RPZ functionality is set up by reading a BIND DNS zone file, so create a simple file:


; Time To Live
$TTL 86400

; Start Of Authority
@       IN  SOA authns.localhost. hostmaster. 2018042901 14400 7200 1209600 86400

; Declare Name Server
@                    IN  NS      authns.localhost.


RPZ support configuration is done via our Lua configuration mechanism

In the pdns-recursor configuration file: /etc/pdns-recursor/recursor.conf we need to declare a lua configuration file:

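The actual setting seems to be missing from the post; it is presumably the recursor's lua-config-file option (the path is an assumption):

```
lua-config-file=/etc/pdns-recursor/rpz.lua
```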

Lua-RPZ Configuration file

that points to the rpz.zone file. In this example, we will use Policy.Custom to send every DNS query to our default content: sinkhole.example.net


rpzFile("/etc/pdns-recursor/rpz.zone", {defpol=Policy.Custom, defcontent="sinkhole.example.net."})

Restart PowerDNS Recursor

Now restart the PowerDNS Recursor:

# systemctl restart pdns-recursor

or
# service pdns-recursor restart

and watch for any error log.

Domains to sinkhole

Append to rpz.zone all the domains you need to sinkhole. With defcontent="sinkhole.example.net." the record content in the zone is ignored, but the records must still be valid, or else pdns-recursor will not read the RPZ zone file.

; Time To Live
$TTL 86400

; Start Of Authority
@   IN  SOA authns.localhost. hostmaster. 2018042901 14400 7200 1209600 86400

; Declare Name Server
@                    IN  NS      authns.localhost.

; Domains to sinkhole
thisisatestdomain.org.  IN  CNAME    sinkhole.example.net.
*.thisisatestdomain.org.  IN  CNAME    sinkhole.example.net.
example.org.            IN  CNAME    sinkhole.example.net.
*.example.org.          IN  CNAME    sinkhole.example.net.
example.net.            IN  CNAME    sinkhole.example.net.
*.example.net.          IN  CNAME    sinkhole.example.net.

When finished, you can reload the lua configuration file that reads the rpz.zone file, without restarting the PowerDNS Recursor.

# rec_control reload-lua-config

Verify with dig

Testing the DNS results with dig:

$ dig example.net.

;example.net.           IN  A

example.net.        86400   IN  CNAME   sinkhole.example.net.
sinkhole.example.net.   86261   IN  A

$ dig thisisatestdomain.org

;thisisatestdomain.org.     IN  A

thisisatestdomain.org.  86400   IN  CNAME   sinkhole.example.net.
sinkhole.example.net.   86229   IN  A


test the wildcard record in rpz.zone:

$ dig example.example.net.

;example.example.net.       IN  A

example.example.net.    86400   IN  CNAME   sinkhole.example.net.
sinkhole.example.net.   86400   IN  A

Tag(s): dns, rpz, PowerDNS

April 23 2018

vanitasvitae's blog » englisch: Another Summer of Code with Smack

I’m very happy to announce that once again I will participate in the Summer of Code. Last year I worked on OMEMO encrypted Jingle Filetransfer for the XMPP client library Smack. This year, I will once again contribute to the Smack project. A big thanks goes out to Daniel Gultsch and Conversations.im, who act as an umbrella organization.

My new project will be an implementation of XEP-0373 and XEP-0374 (OpenPGP for XMPP + Instant Messaging). The project's description can be found here.

Now I will enjoy my last few free days before the hard work begins. I will keep you updated :)

Happy Hacking!

April 19 2018

Inductive Bias: DataworksSummit Berlin - Wednesday morning

Data strategy - cloud strategy - business strategy: Aligning the three was one of the main themes (initially put forward by Scott Gnau, CTO of Hortonworks, in his opening keynote) throughout this week's Dataworks Summit Berlin, kindly organised and hosted by Hortonworks. The event was attended by over 1000 people joining from 51 countries.

The inspiration that Scott put forward in the first keynote was to take a closer look at the data lifecycle - including the fact that a lot of data is being created (and made available) outside the control of those using it: Smart farming users combine weather data with information on soil conditions gathered through sensors out in the field in order to inform daily decisions. Manufacturing is moving towards closer monitoring of production lines to spot inefficiencies. Cities are starting to deploy systems that allow for better integration of public services. UX is being optimized through extensive automation.

When it comes to moving data to the cloud, the speaker gave a nice comparison: To him, explaining the difficulties of moving to the cloud is similar to explaining the challenges of moving "stuff" to external storage in the garage: It opens questions of "Where did I put this thing?", but also of access control and security. Much the same way, cloud and on-prem integration means that questions like encryption, authorization, user tracking and data governance need to be answered - along with questions of findability, discoverability and integration for analysis purposes.

The second keynote was given by Mandy Chessell from IBM introducing Apache Atlas for metadata integration and governance.

In the third keynote, Bernard Marr talked about the five promises of big data:

  • Informing decisions based on data: The goal here should be to move towards self service platforms to remove the "we need a data scientist for that" bottleneck. That in turn needs quite some training and hand-holding for those interested in the self-service platforms.
  • Understanding customers and customer trends better: The example given was a butcher shop that installed a mobile phone tracker in its shop window in order to see which advertisement would make more people stop by and look closer. As a side effect the butcher noticed an increase in people on the street in the middle of the night (coming from pubs nearby). A decision was made to open at that time and offer what people were searching for at that hour according to Google Trends - by now that one hour in the night makes up a sizeable portion of the shop's income. The second example given was Disney, already watching all its park visitors through wrist bands, automating line management at popular attractions - but also deploying facial recognition to watch audiences watch shows, to figure out how well those shows are received.
  • Improve the customer value proposition: The example given was the Royal Bank of Scotland moving closer to its clients, informing them through automated means when interest rates are dropping, or when they are double insured - thus building trust and transparency. The other example given was that of a lift company building sensors into lifts in order to be able to predict failures and repair lifts when they are least used.
  • Automate business processes: Here the example was that of a car insurer offering dynamic rates to people who let themselves be monitored while driving. Those adhering to speed limits and avoiding risky routes and times would get lower rates. Another example was that of automating the creation of sports reports, e.g. for tennis matches, based on sensors deployed, or that of automating Forbes analyst reports, some of which get published without the involvement of a journalist.
  • Last but not least the speaker mentioned the obvious business case of selling data assets - e.g. selling aggregated and refined data gathered through sensors in the field back to farmers. Another example was the automatic detection of events based on sounds detected - e.g. gun shots close to public squares and selling that back to the police.

After the keynotes were over, breakout sessions started - including my talk about the Apache Way. It was good to see people show up to learn how all the open source big data projects work behind the scenes - and how they themselves can get involved in contributing to and shaping these projects. I'm looking forward to receiving pictures of feather shaped cookies.

During lunch there was time to listen in on how Santander operations is using data analytics to drive incident detection, as well as load prediction for capacity planning.

After lunch I had time for two more talks: The first explained how to integrate Apache MxNet with Apache NiFi to bring machine learning to the edge. The second one introduced Apache Beam - an abstraction layer above Apache Flink, Spark and Google's platform.

Both, scary and funny: Walking up to the Apache Beam speaker after his talk (having learnt at DataworksSummit that he is PMC Chair of Apache Beam) - only to be greeted with "I know who *you* are" before even getting to introduce oneself...

April 17 2018

Paul Boddie's Free Software-related blog » English: Extending L4Re/Fiasco.OC to the Letux 400 Notebook Computer

In my summary of the port of L4Re and Fiasco.OC to the Ben NanoNote, I remarked that progress had been made on supporting other products and hardware peripherals. In fact, such progress occurred more rapidly than I had thought possible, and I have been able to extend the work to support the Letux 400 notebook computer. It is perhaps worth describing the Letux 400 in a bit more detail because it has an interesting place in the history of netbook computers.

Some History

Back in the early 21st century, laptop computers were becoming increasingly popular at the expense of desktop computers, but as laptops began to take the place of desktops in homes and workplaces, this gradually led each successive generation of laptops to sacrifice portability and affordability in favour of larger, faster, higher-resolution screens and general hardware specifications more competitive with the desktop offerings they sought to replace. Laptops were becoming popular but also bigger, heavier and more expensive.

Things took an interesting turn in 2006 with the introduction of the XO-1 from the One Laptop per Child (OLPC) initiative. With rather different goals to those of the mainstream laptop vendors, the focus was to deliver a relatively-inexpensive yet robust portable computer for use by schoolchildren, many of whom might be living in places with limited infrastructure where increasingly power-hungry mainstream laptops would have been unsuitable, even unusable.

One unexpected consequence of the introduction of the XO-1 was the revival in interest in modestly-performing portable computing hardware. People were actually interested in a computer that did the things they needed, rather than having to buy something designed for gamers, software developers, or corporate “power users” (of both the pretend and genuine kinds). Rather than having to haul increasingly big and heavy laptops and all the usual accessories in a big dedicated bag, they liked the idea of slipping a smaller, lighter device into their everyday bag, as had probably been the idea with subnotebooks while they were still a thing.

Thus, the Asus Eee PC came about, regarded as the first widely-available netbook of recent times (acknowledging the earlier Psion netBook, of course), bringing with it the attention of large-volume manufacturers and economies of scale. For “lightweight tasks”, netbooks were enough for many people: a phenomenon that found itself repeating with tablets, particularly as recreational usage of technology became more important to buyers and evolved in certain ways.

Now, one thing that had been a big part of the OLPC initiative’s objectives was a $100 price point. At first, despite fairly radical techniques being used to reduce cost, and despite the involvement of a major original equipment manufacturer in the production of the XO-1, that price point of $100 was out of reach. Even the Eee PC retailed for a few hundred dollars.

This is where a product known as the Skytone Alpha 400 enters the picture. Some vendors, rebranding this product, offered it as possibly the first $100 laptop – or netbook – to be made available for sale. One of the vendors offers it as the Letux 400, and it has been available for as little as €125 during its retail lifespan. Noting that it has rather similar hardware to the Ben NanoNote, but has a more conventional physical profile and four times as much RAM, my brother bought one to investigate a few years ago. That is how I eventually ended up embarking on this experiment.

Extending Recent Work

There are many similarities between the JZ4720 system-on-a-chip (SoC) used in the Ben and the JZ4730 used in the Letux 400. However, it can be said that the JZ4720 is much better understood. The JZ4740 and closely-related devices like the JZ4720 have appeared in a number of different devices, documentation has surfaced for these products, and vendor source code has always been available, typically using or implicitly documenting most of the hardware.

In contrast, limited documentation is known to exist for the JZ4730, and the available vendor source code has not always described every detail of the hardware, even though the essential operations and register details appear to be present. Having looked at the Linux kernel sources that support the JZ4730, together with U-Boot source code, the similarities and differences between the JZ4720 and JZ4730 began to take shape in my mind.

I took an optimistic approach that mostly paid off. The Fiasco.OC kernel needs augmenting with the details of the JZ4730, but these are similar in some ways to the JZ4720 and familiar otherwise. For instance, the JZ4730 has a 32-bit “operating system timer” (OST) that curiously does not appear in the JZ4740 but does appear in more recent products such as the JZ4780. Bearing such things in mind, the timer and interrupt support was easily enough added.

One very different thing about the JZ4730 is that it does not seem to support the “set” and “clear” register locations that are probably common to most modern SoCs. Typically, one might want to update a hardware-related register to change a peripheral’s configuration, and it must have become apparent to hardware designers that such updates mostly want to either set or clear bits. Normally in a program, to achieve such things involves reading a value, performing a logical operation that combines the value with a description of the bits to be set or cleared, and then the value is written back to where it came from. For example:

define bits to set
load value from location (exposing a hardware register, perhaps)
logical-or value with bits
store result in location

Encapsulating this in a single instruction avoids potential issues with different things competing to update the location at the same time, if the hardware permits this, and just offers something that is more efficient and convenient, anyway. Separate locations are provided for “set” and “clear” operations, and the original location is provided to read and to overwrite the hardware register’s value. Sometimes, such registers might only support read-only access, in fact. But the JZ4730 does not support such additional locations, and so we have to do things the hard way when updating registers and doing things like clearing and setting bits.
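The "hard way" just described can be sketched in C. This is a minimal illustration of the read-modify-write sequence the JZ4730 forces on us; the register is passed as an ordinary pointer here for demonstration, not an actual JZ4730 address:

```c
#include <stdint.h>

/* Read-modify-write, as required on the JZ4730, which lacks the
   separate "set" and "clear" register locations of newer SoCs.
   Note that the load and the store are two separate accesses, so
   this sequence is not atomic with respect to other updaters. */
static void set_bits(volatile uint32_t *reg, uint32_t bits)
{
    uint32_t value = *reg;  /* load value from location */
    value |= bits;          /* logical-or value with bits */
    *reg = value;           /* store result in location */
}

static void clear_bits(volatile uint32_t *reg, uint32_t bits)
{
    uint32_t value = *reg;  /* load value from location */
    value &= ~bits;         /* logical-and with the inverted bits */
    *reg = value;           /* store result in location */
}
```

On an SoC that does provide the dedicated locations, each of these collapses into a single store to the "set" or "clear" address, which is what removes the race between the load and the store.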

One odd thing that caught me out was a strange result from the special “exception base” (EBASE) register that does not seem to return zero for the CPU identifier, something that the bootstrap code in L4Re expects. I suppressed this test and made the kernel always return zero when asked for this identifier. To debug such things, I could not use the screen as I had done with the Ben, since the bootloader does not configure it on the Letux. Fortunately, unlike the Ben, the Letux provides a few LEDs to indicate things like keyboard and network status, and these can be configured and activated to communicate simple status information.

Otherwise, the exercise mostly involved me reworking some existing code I had (itself borrowing somewhat from existing driver code) that provides driver support for the Letux hardware peripherals. The clock and power management (CPM) arrangement is familiar but different from the JZ4720; the LCD driver can actually be used as is; the general-purpose input/output (GPIO) arrangement is different from the JZ4720 and, curiously enough once again, perhaps more similar to the JZ4780 in some ways. To support the LCD panel’s backlight, a pulse-width modulation (PWM) driver needed to be added, but this involves very little code.

I also had to deal with the mistakes I made myself when not concentrating hard enough. Lots of testing and re-testing occurred. But in the space of a weekend or so, I had something to show for all the previous effort plus this round’s additional effort.

The Letux 400 and Ben NanoNote running the "spectrum" example

Here, you can see what kind of devices we are dealing with! The Letux 400 is less than half the width of a normal-size keyboard (with numeric keypad), and the Ben NanoNote is less than half the width of the Letux 400. Both of them were inexpensive computing devices when they were introduced, and although they may not be capable of running “modern” desktop environments or Web browsers, they offer computing facilities that were, once upon a time, “workstation class” in various respects. And they did, after all, run GNU/Linux when they were introduced.

And that is why it is attractive to consider running other “proper” operating system technologies on them now. Maybe we can revisit the compromises that led to the subnotebook and the netbook, perhaps even the tablet, where devices that are not the most powerful still have a place in fulfilling our computing needs.

Inductive Bias: Apache Breakfast

In case you missed it but are living in Berlin - or are visiting Berlin/Germany this week: A handful of Apache people (committers/members) are meeting over breakfast on Friday morning this week. If you are interested in joining, please let me know (or check the archives of the party@apache.org mailing list yourself).
