
June 20 2018

TSDgeos' blog: Qt Contributor Summit 2018

About two weeks ago I attended Qt Contributor Summit 2018. I did so wearing my KDAB hat, but given that KDE software is based heavily on Qt, I think I'll give a quick summary of the most important topic that was handled at the Summit: Qt 6

  • Qt 6 is planned for a November 2020 release
  • Qt 5 releases will continue at the current cadence for now, with 5.15 being the last release (and also an LTS)
  • The work branch for Qt 6 will be branched soon after Qt 5.12
  • Qt 6 has to be easy to migrate from Qt 5
  • Qt 6 will use C++17
  • Everything to be removed in Qt 6 should be marked as deprecated in 5.15 (ideally sooner)
  • What can be done in Qt 5 should be done in Qt 5
  • Qt 6 should be a "boring" release user feature wise, mostly cleanup and preparing for the future
  • Qt 6 should change things that break at compile time, since those are easy to fix; silent runtime changes are scarier
  • Qt 6 will not use qmake as build system
  • The build system for Qt 6 is still not decided, but there are people working on a qbs build and no one working on any other alternative

On a community related note, Tero Kojo, the Community Manager for The Qt Company, is leaving, and it doesn't seem a replacement is in sight

Of course, note that these are all plans, and as such they may already be outdated in the 10 days since the summit :D

June 19 2018

vanitasvitae's blog » englisch: Summer of Code: The demotivating week

I guess in anybody's project, there is one week that stands out from the others by being way less productive than the rest. I just had that week.

I had to take one day off on Friday due to circulation problems after a visit at the doctor (syringes suck!), so I had the joy of an extended weekend. On top of that, I was not at home at the time, so I didn’t write any code during these days.

At least I got some coding done last week. Yesterday I spent the whole day scratching my head about an error I got when decrypting a message in Smack. Strangely, that error did not happen in my pgpainless tests. Today I finally found the cause of the issue and a way to work around it. It turns out that somewhere between key generation and key loading from persistent storage, something goes wrong. If I run my test with fresh keys, everything works fine, while if I run it after loading the keys from disk, I get an error. It will be fun working out what exactly is going wrong. My breakpoint-debugging skills are getting better, although I still often seem to skip over important code points during debugging.

My ongoing efforts of porting the Smack OX code over from using bouncy-gpg to pgpainless are still progressing slowly but steadily. Today I sent and received a message successfully, although the bug I mentioned earlier is still present. As I said, it’s just a matter of time until I find it.

Apart from that, I created another very small pull request against the Bouncycastle repository. The patch just fixes a log message which irritated me. The message stated that some data could not be encrypted, while in fact data is being decrypted. Another patch I created earlier has been merged \o/.

There is some really good news:
Smack 4.4.0-alpha1 has been released! This version contains my updated OMEMO API, which I have been working on for at least half a year.

This week I will continue to integrate pgpainless into Smack. There is also still a significant lack of JUnit tests in both projects. One issue I have is that during my project I often have to deal with objects that bundle information together. Those data structures are needed in smack-openpgp and smack-openpgp-bouncycastle, as well as in pgpainless. Since smack-openpgp and pgpainless do not depend on one another, I need to write duplicate code to provide all modules with classes that offer the needed functionality. This is a real bummer and creates a lot of ugly boilerplate code.

I could theoretically create another module which bundles those structures together, but that is probably overkill.

On the bright side of things, I passed the first evaluation phase, so I got a ton of motivation for the coming days :)

Happy Hacking!

June 15 2018

DanielPocock.com - fsfe: The questions you really want FSFE to answer

As the last man standing among the fellowship representatives in FSFE, I propose to give a report at the community meeting at RMLL.

I'm keen to get feedback from the wider community as well, including former fellows, volunteers and anybody else who has come into contact with FSFE.

It is important for me to understand the topics you want me to cover as so many things have happened in free software and in FSFE in recent times.


Some of the things people already asked me about:

  • the status of the fellowship and the membership status of fellows
  • use of non-free software and cloud services in FSFE, deviating from the philosophy that people associate with the FSF / FSFE family
  • measuring both the impact and cost of campaigns, to see if we get value for money (a high level view of expenditure is here)

What are the issues you would like me to address? Please feel free to email me privately or publicly. If I don't have answers immediately I would seek to get them for you as I prepare my report. Without your support and feedback, I don't have a mandate to pursue these issues on your behalf so if you have any concerns, please reply.

Your fellowship representative

June 13 2018

Evaggelos Balaskas - System Engineer: Terraform Gandi

This blog post contains my notes on working with Gandi through Terraform. I’ve replaced my domain name with example.com but pretty much everything should work as advertised.

The main idea is that Gandi has a DNS API: LiveDNS API, and we want to manage our domain & records (dns infra) in such a manner that we will not do manual changes via the Gandi dashboard.

 

Terraform

Although this is partially a terraform blog post, I will not get into many details on terraform. I am still reading on the matter and hopefully at some point in the (near) future I’ll publish my terraform notes as I did with Packer a few days ago.

 

Installation

Download the latest terraform static 64bit binary (a golang binary) and install it on our system:

$ curl -sLO https://releases.hashicorp.com/terraform/0.11.7/terraform_0.11.7_linux_amd64.zip
$ unzip terraform_0.11.7_linux_amd64.zip
$ sudo mv terraform /usr/local/bin/

 

Version

Verify terraform by checking the version

$ terraform version
Terraform v0.11.7

 

Terraform Gandi Provider

There is a community terraform provider for gandi: Terraform provider for the Gandi LiveDNS by Sébastien Maccagnoni (aka tiramiseb) that is simple and straightforward.

 

Build

To build the provider, follow the notes in the README.

You can build the gandi provider in any distro and just copy the binary to your primary machine/server or build box.
Below are my personal (docker) notes:

# mkdir -pv /root/go/src/
# cd /root/go/src/

# git clone https://github.com/tiramiseb/terraform-provider-gandi.git 

Cloning into 'terraform-provider-gandi'...
remote: Counting objects: 23, done.
remote: Total 23 (delta 0), reused 0 (delta 0), pack-reused 23
Unpacking objects: 100% (23/23), done.

# cd terraform-provider-gandi/

# go get
# go build -o terraform-provider-gandi

# ls -l terraform-provider-gandi
-rwxr-xr-x 1 root root 25788936 Jun 12 16:52 terraform-provider-gandi

Copy terraform-provider-gandi to the same directory as the terraform binary.
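
Alternatively (an assumption based on Terraform 0.11’s third-party plugin discovery, not part of the original notes), the provider binary can be placed in the user plugins directory, where terraform init will also find it:

$ mkdir -pv ~/.terraform.d/plugins
$ cp terraform-provider-gandi ~/.terraform.d/plugins/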

 

Gandi API Token

Log in to your gandi account, go through Security

Gandi Security

and retrieve your API token

Gandi Token

The Token should be a long alphanumeric string.

 

Repo Structure

Let’s create a simple repo structure. Terraform will read all files from our directory that end in .tf

$ tree
.
├── main.tf
└── vars.tf

  • main.tf will hold our dns infra
  • vars.tf will have our variables

 

Files

vars.tf

variable "gandi_api_token" {
    description = "A Gandi API token"
}

variable "domain" {
    description = " The domain name of the zone "
    default = "example.com"
}

variable "TTL" {
    description = " The default TTL of zone & records "
    default = "3600"
}

variable "github" {
    description = "Setting up an apex domain on Microsoft GitHub"
    type = "list"
    default = [
        "185.199.108.153",
        "185.199.109.153",
        "185.199.110.153",
        "185.199.111.153"
    ]
}

 

main.tf

# Gandi
provider "gandi" {
  key = "${var.gandi_api_token}"
}

# Zone
resource "gandi_zone" "domain_tld" {
    name = "${var.domain} Zone"
}

# Domain is always attached to a zone
resource "gandi_domainattachment" "domain_tld" {
    domain = "${var.domain}"
    zone = "${gandi_zone.domain_tld.id}"
}

# DNS Records

resource "gandi_zonerecord" "mx" {
  zone = "${gandi_zone.domain_tld.id}"
  name = "@"
  type = "MX"
  ttl = "${var.TTL}"
  values = [ "10 example.com."]
}

resource "gandi_zonerecord" "web" {
  zone = "${gandi_zone.domain_tld.id}"
  name = "web"
  type = "CNAME"
  ttl = "${var.TTL}"
  values = [ "test.example.com." ]
}

resource "gandi_zonerecord" "www" {
  zone = "${gandi_zone.domain_tld.id}"
  name = "www"
  type = "CNAME"
  ttl = "${var.TTL}"
  values = [ "${var.domain}." ]
}

resource "gandi_zonerecord" "origin" {
  zone = "${gandi_zone.domain_tld.id}"
  name = "@"
  type = "A"
  ttl = "${var.TTL}"
  values = [ "${var.github}" ]
}

 

Variables

By declaring these variables, in vars.tf, we can use them in main.tf.

  • gandi_api_token - The Gandi API Token
  • domain - The Domain Name of the zone
  • TTL - The default TimeToLive for the zone and records
  • github - This is a list of IPs that we want to use for our site.

 

Main

Our zone should have four DNS record types. The gandi_zonerecord is the terraform resource and the second part is our local identifier. Though not obvious at this point, the last record, named “origin”, will contain all four IPs from github.

  • “gandi_zonerecord” “mx”
  • “gandi_zonerecord” “web”
  • “gandi_zonerecord” “www”
  • “gandi_zonerecord” “origin”

 

Zone

In other (dns) words, the state of our zone should be:

example.com.        3600    IN    MX       10 example.com
web.example.com.    3600    IN    CNAME    test.example.com.
www.example.com.    3600    IN    CNAME    example.com.
example.com.        3600    IN    A        185.199.108.153
example.com.        3600    IN    A        185.199.109.153
example.com.        3600    IN    A        185.199.110.153
example.com.        3600    IN    A        185.199.111.153

 

Environment

We haven’t yet declared the gandi api token anywhere in our files. This is by design: it is not safe to write the token in the files (let’s assume that these files are on a public git repository).

So instead, we can either type it on the command line as we run terraform to create, change or delete our dns infra, or we can pass it through an environment variable.

export TF_VAR_gandi_api_token="XXXXXXXX"
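
For the command-line option, terraform’s standard -var flag does the same job; a quick sketch (the token value is a placeholder):

$ terraform plan -var 'gandi_api_token=XXXXXXXX'

The environment variable approach keeps the token out of the arguments of every single run.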

 

Verbose Logging

I prefer to have debug on, and appending all messages to a log file:

export TF_LOG="DEBUG"
export TF_LOG_PATH=./terraform.log

 

Initialize

Ready to start with our setup. First things first, let’s initialize our repo.

terraform init

the output should be:

Initializing provider plugins...

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

 

Planning

Next thing, we have to plan!

terraform plan

First line is:

Refreshing Terraform state in-memory prior to plan...

the rest should be:

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  + gandi_domainattachment.domain_tld
      id:                <computed>
      domain:            "example.com"
      zone:              "${gandi_zone.domain_tld.id}"

  + gandi_zone.domain_tld
      id:                <computed>
      name:              "example.com Zone"

  + gandi_zonerecord.mx
      id:                <computed>
      name:              "@"
      ttl:               "3600"
      type:              "MX"
      values.#:          "1"
      values.3522983148: "10 example.com."
      zone:              "${gandi_zone.domain_tld.id}"

  + gandi_zonerecord.origin
      id:                <computed>
      name:              "@"
      ttl:               "3600"
      type:              "A"
      values.#:          "4"
      values.1201759686: "185.199.109.153"
      values.226880543:  "185.199.111.153"
      values.2365437539: "185.199.108.153"
      values.3336126394: "185.199.110.153"
      zone:              "${gandi_zone.domain_tld.id}"

  + gandi_zonerecord.web
      id:                <computed>
      name:              "web"
      ttl:               "3600"
      type:              "CNAME"
      values.#:          "1"
      values.921960212:  "test.example.com."
      zone:              "${gandi_zone.domain_tld.id}"

  + gandi_zonerecord.www
      id:                <computed>
      name:              "www"
      ttl:               "3600"
      type:              "CNAME"
      values.#:          "1"
      values.3477242478: "example.com."
      zone:              "${gandi_zone.domain_tld.id}"

Plan: 6 to add, 0 to change, 0 to destroy.

So the plan is: 6 to add!

 

State

Let’s get back to this message.

Refreshing Terraform state in-memory prior to plan...

Terraform is telling us that it is refreshing the state.
What does this mean?

Terraform is Declarative.

That means that terraform is only interested in implementing our plan, but it needs to know the previous state of our infrastructure. It will then create only new records, update records (if needed), or even delete deprecated records. To do so, it needs to know the current state of our dns infra (zone/records).

Terraforming (as the definition of the word) is the process of deliberately modifying the current state of our infrastructure.
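
A quick way to see which resources terraform is already tracking locally is the standard state subcommand; once the imports below are done, it prints one line per resource (eg. gandi_zone.domain_tld):

$ terraform state list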

 

Import

So we need to pull the current state into a local state file and re-plan our terraformation.

$ terraform import gandi_domainattachment.domain_tld example.com
gandi_domainattachment.domain_tld: Importing from ID "example.com"...
gandi_domainattachment.domain_tld: Import complete!
  Imported gandi_domainattachment (ID: example.com)
gandi_domainattachment.domain_tld: Refreshing state... (ID: example.com)

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.

How does import work?

The current state of our domain (zone & records) has a specific identification. We need to map our local IDs to the remote ones, and all that info will update the terraform state.

So the previous import command has three parts:

Gandi Resource        .Local ID    Remote ID
gandi_domainattachment.domain_tld  example.com
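
In its generic form (following the pattern above), the command reads:

terraform import <resource_type>.<local_id> <remote_id>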

Terraform State

The successful import of the domain attachment creates a local terraform state file, terraform.tfstate:

$ cat terraform.tfstate 
{
    "version": 3,
    "terraform_version": "0.11.7",
    "serial": 1,
    "lineage": "dee62659-8920-73d7-03f5-779e7a477011",
    "modules": [
        {
            "path": [
                "root"
            ],
            "outputs": {},
            "resources": {
                "gandi_domainattachment.domain_tld": {
                    "type": "gandi_domainattachment",
                    "depends_on": [],
                    "primary": {
                        "id": "example.com",
                        "attributes": {
                            "domain": "example.com",
                            "id": "example.com",
                            "zone": "XXXXXXXX-6bd2-11e8-XXXX-00163ee24379"
                        },
                        "meta": {},
                        "tainted": false
                    },
                    "deposed": [],
                    "provider": "provider.gandi"
                }
            },
            "depends_on": []
        }
    ]
}

 

Import All Resources

Reading through the state file, we see that our zone also has an ID:

"zone": "XXXXXXXX-6bd2-11e8-XXXX-00163ee24379"

We should use this ID to import all resources.

 

Zone Resource

Import the gandi zone resource:

terraform import gandi_zone.domain_tld XXXXXXXX-6bd2-11e8-XXXX-00163ee24379

 

DNS Records

As we can see above in the DNS section, we have four (4) dns records, and when importing resources we need to add their path after the ID.

eg.

for MX is /@/MX
for web is /web/CNAME
etc

terraform import gandi_zonerecord.mx     XXXXXXXX-6bd2-11e8-XXXX-00163ee24379/@/MX
terraform import gandi_zonerecord.web    XXXXXXXX-6bd2-11e8-XXXX-00163ee24379/web/CNAME
terraform import gandi_zonerecord.www    XXXXXXXX-6bd2-11e8-XXXX-00163ee24379/www/CNAME
terraform import gandi_zonerecord.origin XXXXXXXX-6bd2-11e8-XXXX-00163ee24379/@/A

 

Re-Planning

Okay, we have imported our dns infra state to a local file.
Time to plan once more:

$ terraform plan

Plan: 2 to add, 1 to change, 0 to destroy.

 

Save Planning

We can save our plan:

$ terraform plan -out terraform.tfplan

 

Apply aka run our plan

We can now apply our plan to our dns infra, the gandi provider.

$ terraform apply
Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: 

To continue, we need to type: yes

 

Non Interactive

or we can use our already saved plan to run without asking:

$ terraform apply "terraform.tfplan"
gandi_zone.domain_tld: Modifying... (ID: XXXXXXXX-6bd2-11e8-XXXX-00163ee24379)
  name: "example.com zone" => "example.com Zone"
gandi_zone.domain_tld: Modifications complete after 2s (ID: XXXXXXXX-6bd2-11e8-XXXX-00163ee24379)
gandi_domainattachment.domain_tld: Creating...
  domain: "" => "example.com"
  zone:   "" => "XXXXXXXX-6bd2-11e8-XXXX-00163ee24379"
gandi_zonerecord.www: Creating...
  name:              "" => "www"
  ttl:               "" => "3600"
  type:              "" => "CNAME"
  values.#:          "" => "1"
  values.3477242478: "" => "example.com."
  zone:              "" => "XXXXXXXX-6bd2-11e8-XXXX-00163ee24379"
gandi_domainattachment.domain_tld: Creation complete after 0s (ID: example.com)
gandi_zonerecord.www: Creation complete after 1s (ID: XXXXXXXX-6bd2-11e8-XXXX-00163ee24379/www/CNAME)

Apply complete! Resources: 2 added, 1 changed, 0 destroyed.

 

Tag(s): terraform, gandi

June 11 2018

vanitasvitae's blog » englisch: Summer of Code: Evaluation and Key Lengths

The week of the first evaluation phase is here. This is the fourth week of GSoC – wow, time flew by quite fast this year :)

This week I plan to switch my OX implementation over to PGPainless in order to have a working prototype which can differentiate between sign, crypt and signcrypt elements. This should be pretty straightforward. In case everything goes wrong, I’ll keep the current implementation as a working backup solution, so we should be good to go :)

OpenPGP Key Type Considerations

I spent some time testing my OpenPGP library PGPainless and during testing I noticed that messages encrypted and signed using keys from the family of elliptic curve cryptography were substantially smaller than messages encrypted with common RSA keys. I already knew that one benefit of elliptic curve cryptography is that the keys can be much smaller while providing the same security as RSA keys. But what was new to me is that this also applies to the length of the resulting message. I did some testing and came to interesting results:

In order to measure the lengths of produced cipher text, I created some code that generates two sets of keys and then encrypts messages of varying lengths. Because OpenPGP for XMPP: Instant Messaging only uses messages that are encrypted and signed, all messages created for my tests are encrypted to, and signed with, one key. The size of the plaintext messages ranges from 20 bytes all the way up to 2000 bytes (1000 chars).

Diagram comparing the lengths of ciphertext of different crypto systems

Comparison of Cipher Text Length

The resulting diagram shows how quickly the size of OpenPGP encrypted messages explodes. Let’s assume we want to send the smallest possible OX message to a contact. That message would have a body of less than 20 bytes (less than 10 chars). The body would be encapsulated in a signcrypt-element as specified in XEP-0373. I calculated that the length of that element would be around 250 chars, which makes 500 bytes. 500 bytes encrypted and signed using 4096 bit RSA keys makes 1652 bytes of ciphertext. That ciphertext is then base64 encoded for transport (a rule of thumb for calculating base64 size is ceil(bytes/3) * 4), which results in 2204 bytes. Those bytes are then encapsulated in an openpgp-element (adds another 94 bytes) which can be appended to a message. All in all the openpgp-element takes up 2298 bytes, compared to a normal body, which would only take up around 46 bytes.
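
The base64 rule of thumb from the paragraph above is easy to sanity-check with a few lines of Java (a standalone sketch, not part of PGPainless):

public class Base64Size {

    // Rule of thumb from the text: base64 output is ceil(bytes / 3) * 4.
    static int base64Size(int bytes) {
        return ((bytes + 2) / 3) * 4; // integer ceil(bytes / 3), times 4
    }

    public static void main(String[] args) {
        System.out.println(base64Size(1652)); // 4096 bit RSA ciphertext -> 2204
        System.out.println(base64Size(804));  // P-256 ciphertext        -> 1072
    }
}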

So how do elliptic curves come to the rescue? Let’s assume we send the same message again using 256 bit ECC keys on the curve P-256. Again, the length of the signcrypt-element would be 250 chars or 500 bytes in the beginning. OpenPGP encrypting those bytes leads to 804 bytes of ciphertext. Applying base64 encoding results in 1072 bytes, which finally makes 1166 bytes of openpgp-element. Around half the size of an RSA encrypted message.

For comparison: I estimated a typical XMPP chat message body to be around 70 characters or 140 bytes based on a database dump of my chat client.

We must not forget however, that the stanza size follows a linear function of the form y = m*x + b, so as the plaintext size grows, the difference between RSA and ECC becomes less and less significant.
Looking at the data, I noticed that applying OpenPGP encryption always added a constant number of bytes to the size of the plaintext. Using 256 bit ECC keys only adds around 300 bytes, encrypting a message using 2048 bit RSA keys adds ~500 bytes, while RSA with 4096 bits adds 1140 bytes. The formula for my setup would therefore be y = x + b, where x and y are the size of the message before and after applying encryption and b is the overhead added. This formula doesn’t take base64 encoding into consideration. Also, if multiple participants (and therefore multiple keys) are involved, the formula is suspected to underestimate, as the overhead will grow further.

One could argue that using smaller RSA keys would reduce the stanza size as well, although not as much, but remember that RSA keys have to be big to be secure. A 3072 bit RSA key provides the same security as a 256 bit ECC key. Quoting Wikipedia:

The NIST recommends 2048-bit keys for RSA. An RSA key length of 3072 bits should be used if security is required beyond 2030.

As a conclusion, I propose to add a paragraph to XEP-0373 suggesting the use of ECC keys to keep the stanza size low.

June 08 2018

Evaggelos Balaskas - System Engineer: Packer by HashiCorp

 

Packer is an open source tool for creating identical machine images for multiple platforms from a single source configuration

 

Installation

in archlinux the package name is: packer-io

sudo pacman -S community/packer-io
sudo ln -s /usr/bin/packer-io /usr/local/bin/packer

on any generic 64bit linux:

$ curl -sLO https://releases.hashicorp.com/packer/1.2.4/packer_1.2.4_linux_amd64.zip

$ unzip packer_1.2.4_linux_amd64.zip
$ chmod +x packer
$ sudo mv packer /usr/local/bin/packer

 

Version

$ packer -v
1.2.4

or

$ packer --version
1.2.4

or

$ packer version
Packer v1.2.4

or

$ packer -machine-readable version
1528019302,,version,1.2.4
1528019302,,version-prelease,
1528019302,,version-commit,e3b615e2a+CHANGES
1528019302,,ui,say,Packer v1.2.4

 

Help

$ packer --help
Usage: packer [--version] [--help] <command> [<args>]

Available commands are:
    build       build image(s) from template
    fix         fixes templates from old versions of packer
    inspect     see components of a template
    push        push a template and supporting files to a Packer build service
    validate    check that a template is valid
    version     Prints the Packer version

 

Help Validate

$ packer --help validate
Usage: packer validate [options] TEMPLATE

  Checks the template is valid by parsing the template and also
  checking the configuration with the various builders, provisioners, etc.

  If it is not valid, the errors will be shown and the command will exit
  with a non-zero exit status. If it is valid, it will exit with a zero
  exit status.

Options:

  -syntax-only           Only check syntax. Do not verify config of the template.
  -except=foo,bar,baz    Validate all builds other than these
  -only=foo,bar,baz      Validate only these builds
  -var 'key=value'       Variable for templates, can be used multiple times.
  -var-file=path         JSON file containing user variables.

 

Help Inspect

$ packer --help inspect
Usage: packer inspect TEMPLATE

  Inspects a template, parsing and outputting the components a template
  defines. This does not validate the contents of a template (other than
  basic syntax by necessity).

Options:

  -machine-readable  Machine-readable output

 

Help Build

$ packer --help build

Usage: packer build [options] TEMPLATE

  Will execute multiple builds in parallel as defined in the template.
  The various artifacts created by the template will be outputted.

Options:

  -color=false               Disable color output (on by default)
  -debug                     Debug mode enabled for builds
  -except=foo,bar,baz        Build all builds other than these
  -only=foo,bar,baz          Build only the specified builds
  -force                     Force a build to continue if artifacts exist, deletes existing artifacts
  -machine-readable          Machine-readable output
  -on-error=[cleanup|abort|ask] If the build fails do: clean up (default), abort, or ask
  -parallel=false            Disable parallelization (on by default)
  -var 'key=value'           Variable for templates, can be used multiple times.
  -var-file=path             JSON file containing user variables.

 

Autocompletion

To enable autocompletion

$ packer -autocomplete-install

 

Workflow

.. and terminology.

Packer uses Templates, which are json files that carry the configuration for the various tasks. The core task is the Build. In this stage, Packer uses the Builders to create a machine image for a single platform, eg. the Qemu Builder to create a kvm/xen virtual machine image. The next stage is provisioning. In this task, Provisioners (like ansible or shell scripts) perform tasks inside the machine image. When finished, Post-processors handle the final tasks, such as compressing the virtual image or importing it into a specific provider.

packer

 

Template

a json template file contains:

  • builders (required)
  • description (optional)
  • variables (optional)
  • min_packer_version (optional)
  • provisioners (optional)
  • post-processors (optional)

Comments are also supported, but only as root level keys.

eg.

{
  "_comment": "This is a comment",

  "builders": [
    {}
  ]
}

 

Template Example

eg. Qemu Builder

qemu_example.json

{
  "_comment": "This is a qemu builder example",

  "builders": [
    {
        "type": "qemu"
    }
  ]
}

 

Validate

Syntax Only

$ packer validate -syntax-only  qemu_example.json 
Syntax-only check passed. Everything looks okay.

 

Validate Template

$ packer validate qemu_example.json
Template validation failed. Errors are shown below.

Errors validating build 'qemu'. 2 error(s) occurred:

* One of iso_url or iso_urls must be specified.
* An ssh_username must be specified
  Note: some builders used to default ssh_username to "root".


 

Debugging

To enable Verbose logging on the console type:

$ export PACKER_LOG=1

 

Variables

user variables

It is really simple to use variables inside the packer template:

  "variables": {
    "centos_version":  "7.5",
  }    

and use the variable as:

"{{user `centos_version`}}",

 

Description

We can add on top of our template a description declaration:

eg.

  "description": "tMinimal CentOS 7 Qemu Imagen__________________________________________",

and verify it when inspecting the template.

 

QEMU Builder

The full documentation on QEMU Builder, can be found here

Qemu template example

Try to keep things simple. Here is an example setup for building a CentOS 7.5 image with packer via qemu.

$ cat qemu_example.json
{

  "_comment": "This is a CentOS 7.5 Qemu Builder example",

  "description": "tMinimal CentOS 7 Qemu Imagen__________________________________________",

  "variables": {
    "7.5":      "1804",
    "checksum": "714acc0aefb32b7d51b515e25546835e55a90da9fb00417fbee2d03a62801efd"
  },

  "builders": [
    {
        "type": "qemu",

        "iso_url": "http://ftp.otenet.gr/linux/centos/7/isos/x86_64/CentOS-7-x86_64-Minimal-{{user `7.5`}}.iso",
        "iso_checksum": "{{user `checksum`}}",
        "iso_checksum_type": "sha256",

        "communicator": "none"
    }
  ]

}

 

Communicator

There are three basic communicators:

  • none
  • Secure Shell (SSH)
  • WinRM

that are configured within the builder section.

Communicators are used in the provisioning section for uploading files or executing scripts. If no provisioning is used, choosing none instead of the default ssh disables that feature.

"communicator": "none"

 

iso_url

can be an http url or a file path to a file. It is useful, when starting to work with packer, to have the ISO file locally, so it doesn’t try to download it from the internet on every trial and error step.

eg.

"iso_url": "/home/ebal/Downloads/CentOS-7-x86_64-Minimal-{{user `7.5`}}.iso"

 

Inspect Template

$ packer inspect qemu_example.json
Description:

    Minimal CentOS 7 Qemu Image
__________________________________________

Optional variables and their defaults:

  7.5      = 1804
  checksum = 714acc0aefb32b7d51b515e25546835e55a90da9fb00417fbee2d03a62801efd

Builders:

  qemu

Provisioners:

  <No provisioners>

Note: If your build names contain user variables or template
functions such as 'timestamp', these are processed at build time,
and therefore only show in their raw form here.

Validate Syntax Only

$ packer validate -syntax-only qemu_example.json
Syntax-only check passed. Everything looks okay.

Validate

$ packer validate qemu_example.json
Template validated successfully.

 

Build

Initial Build

$ packer build qemu_example.json

 

packer build

 

Build output

the first packer output should be like this:

qemu output will be in this color.

==> qemu: Downloading or copying ISO
    qemu: Downloading or copying: file:///home/ebal/Downloads/CentOS-7-x86_64-Minimal-1804.iso
==> qemu: Creating hard drive...
==> qemu: Looking for available port between 5900 and 6000 on 127.0.0.1
==> qemu: Starting VM, booting from CD-ROM
==> qemu: Waiting 10s for boot...
==> qemu: Connecting to VM via VNC
==> qemu: Typing the boot command over VNC...
==> qemu: Waiting for shutdown...
==> qemu: Converting hard drive...
Build 'qemu' finished.

Use ctrl+c to break and exit the packer build.

 

Automated Installation

The ideal scenario is to automate the entire process, using a Kickstart file to describe the initial CentOS installation. The kickstart reference guide can be found here.

In this example, this ks file CentOS7-ks.cfg can be used.

In the json template file, add the below configuration:

  "boot_command":[
    "<tab> text ",
    "ks=https://raw.githubusercontent.com/ebal/confs/master/Kickstart/CentOS7-ks.cfg ",
     "nameserver=9.9.9.9 ",
     "<enter><wait> "
],
  "boot_wait": "0s"

That tells packer not to wait for user input and instead use the specified ks file.

 

packer build with ks

 

http_directory

It is possible to retrieve the kickstart file from an internal HTTP server that packer can create when building an image in an environment without internet access. Enable this feature by declaring a directory path: http_directory

Path to a directory to serve using an HTTP server. The files in this directory will be available over HTTP that will be requestable from the virtual machine

eg.

  "http_directory": "/home/ebal/Downloads/",
  "http_port_min": "8090",
  "http_port_max": "8100",

with that, the previous boot command should be written as:

"boot_command":[
    "<tab> text ",
    "ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/CentOS7-ks.cfg ",
    "<enter><wait>"
],
    "boot_wait": "0s"

 

packer build with httpdir

 

Timeout

A “well known” error with packer is the Waiting for shutdown timeout error.

eg.

==> qemu: Waiting for shutdown...
==> qemu: Failed to shutdown
==> qemu: Deleting output directory...
Build 'qemu' errored: Failed to shutdown

==> Some builds didn't complete successfully and had errors:
--> qemu: Failed to shutdown

To bypass this error, change the shutdown_timeout to something greater than the default value of 5m (five minutes).

eg.

"shutdown_timeout": "30m"

ssh

Sometimes the timeout error occurs on the ssh attempts. If you are using ssh as communicator, also change the below value:

"ssh_timeout": "30m",

 

qemu_example.json

This is a working template file:


{

  "_comment": "This is a CentOS 7.5 Qemu Builder example",

  "description": "tMinimal CentOS 7 Qemu Imagen__________________________________________",

  "variables": {
    "7.5":      "1804",
    "checksum": "714acc0aefb32b7d51b515e25546835e55a90da9fb00417fbee2d03a62801efd"
  },

  "builders": [
    {
        "type": "qemu",

        "iso_url": "/home/ebal/Downloads/CentOS-7-x86_64-Minimal-{{user `7.5`}}.iso",
        "iso_checksum": "{{user `checksum`}}",
        "iso_checksum_type": "sha256",

        "communicator": "none",

        "boot_command":[
          "<tab> text ",
          "ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/CentOS7-ks.cfg ",
          "nameserver=9.9.9.9 ",
          "<enter><wait> "
        ],
        "boot_wait": "0s",

        "http_directory": "/home/ebal/Downloads/",
        "http_port_min": "8090",
        "http_port_max": "8100",

        "shutdown_timeout": "20m"

    }
  ]

}

 

build

packer build qemu_example.json

 

Verify

and when the installation is finished, check the output folder & image:

$ ls
output-qemu  packer_cache  qemu_example.json

$ ls output-qemu/
packer-qemu

$ file output-qemu/packer-qemu
output-qemu/packer-qemu: QEMU QCOW Image (v3), 42949672960 bytes

$ du -sh output-qemu/packer-qemu
1.7G    output-qemu/packer-qemu

$ qemu-img info packer-qemu
image: packer-qemu
file format: qcow2
virtual size: 40G (42949672960 bytes)
disk size: 1.7G
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false

 

KVM

The default qemu/kvm builder will run something like this:

/usr/bin/qemu-system-x86_64
  -cdrom /home/ebal/Downloads/CentOS-7-x86_64-Minimal-1804.iso
  -name packer-qemu -display sdl
  -netdev user,id=user.0
  -vnc 127.0.0.1:32
  -machine type=pc,accel=kvm
  -device virtio-net,netdev=user.0
  -drive file=output-qemu/packer-qemu,if=virtio,cache=writeback,discard=ignore,format=qcow2
  -boot once=d
  -m 512M

In the builder section those qemu/kvm settings can be changed.

Using variables:

eg.

   "virtual_name": "centos7min.qcow2",
   "virtual_dir":  "centos7",
   "virtual_size": "20480",
   "virtual_mem":  "4096M"

In Qemu Builder:

  "accelerator": "kvm",
  "disk_size":   "{{ user `virtual_size` }}",
  "format":      "qcow2",
  "qemuargs":[
    [  "-m",  "{{ user `virtual_mem` }}" ]
  ],

  "vm_name":          "{{ user `virtual_name` }}",
  "output_directory": "{{ user `virtual_dir` }}"

 

Headless

There is no need for packer to use a display. This is really useful when running packer on a remote machine. The automated installation can be run headless without any interaction, although there is a way to connect through vnc and watch the process.

To enable a headless setup:

"headless": true

Serial

Working with a headless installation, perhaps through a command line interface on a remote machine, doesn’t mean that vnc can actually be useful. Instead, there is a way to use the serial output of qemu. To do that, we must pass some extra qemu arguments:

eg.

  "qemuargs":[
      [ "-m",      "{{ user `virtual_mem` }}" ],
      [ "-serial", "file:serial.out" ]
    ],

and also pass an extra (kernel) argument console=ttyS0,115200n8 to the boot command:

  "boot_command":[
    "<tab> text ",
    "console=ttyS0,115200n8 ",
    "ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/CentOS7-ks.cfg ",
    "nameserver=9.9.9.9 ",
    "<enter><wait> "
  ],
  "boot_wait": "0s",

To see the serial output:

$ tail -f serial.out

packer build with serial output

 

Post-Processors

When finished with the machine image, Packer can run tasks such as compressing the image or importing it into a cloud provider.

The simplest way to get familiar with post-processors is to use compress:

  "post-processors":[
      {
          "type":   "compress",
          "format": "lz4",
          "output": "{{.BuildName}}.lz4"
      }
  ]

 

output

So here is the output:

$ packer build qemu_example.json 
qemu output will be in this color.

==> qemu: Downloading or copying ISO
    qemu: Downloading or copying: file:///home/ebal/Downloads/CentOS-7-x86_64-Minimal-1804.iso
==> qemu: Creating hard drive...
==> qemu: Starting HTTP server on port 8099
==> qemu: Looking for available port between 5900 and 6000 on 127.0.0.1
==> qemu: Starting VM, booting from CD-ROM
    qemu: The VM will be run headless, without a GUI. If you want to
    qemu: view the screen of the VM, connect via VNC without a password to
    qemu: vnc://127.0.0.1:5982
==> qemu: Overriding defaults Qemu arguments with QemuArgs...
==> qemu: Connecting to VM via VNC
==> qemu: Typing the boot command over VNC...
==> qemu: Waiting for shutdown...
==> qemu: Converting hard drive...
==> qemu: Running post-processor: compress
==> qemu (compress): Using lz4 compression with 4 cores for qemu.lz4
==> qemu (compress): Archiving centos7/centos7min.qcow2 with lz4
==> qemu (compress): Archive qemu.lz4 completed
Build 'qemu' finished.

==> Builds finished. The artifacts of successful builds are:
--> qemu: compressed artifacts in: qemu.lz4

 

info

After archiving the centos7min image, the output_directory and the original qemu image are deleted.

$ qemu-img info ./centos7/centos7min.qcow2

image: ./centos7/centos7min.qcow2
file format: qcow2
virtual size: 20G (21474836480 bytes)
disk size: 1.5G
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false

$ du -h qemu.lz4
992M    qemu.lz4

 

Provisioners

Last but (surely) not least, packer supports Provisioners.
Provisioners are commonly used for:

  • installing packages
  • patching the kernel
  • creating users
  • downloading application code

and can be local shell scripts or more advanced tools like Ansible, puppet, chef or even powershell.

 

Ansible

So here is an ansible example:

$ tree testrole
testrole
├── defaults
│   └── main.yml
├── files
│   └── main.yml
├── handlers
│   └── main.yml
├── meta
│   └── main.yml
├── tasks
│   └── main.yml
├── templates
│   └── main.yml
└── vars
    └── main.yml

7 directories, 7 files

$ cat testrole/tasks/main.yml 
---
  - name: Debug that our ansible role is working
    debug:
      msg: "It Works !"

  - name: Install the Extra Packages for Enterprise Linux repository
    yum:
      name: epel-release
      state: present

  - name: upgrade all packages
    yum:
      name: "*"
      state: latest

So this ansible role will install the epel repository and upgrade our image.
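
The playbook file referenced in the template below is not shown in these notes; judging from the PLAY [all] line in the output further down, a minimal testrole.yml applying the role could look like this:

---
- hosts: all
  roles:
    - testrole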

template


    "variables":{
        "playbook_name": "testrole.yml"
    },

...

    "provisioners":[
        {
            "type":          "ansible",
            "playbook_file": "{{ user `playbook_name` }}"
        }
    ],

Communicator

Ansible needs to ssh into this machine to provision it. It is time to change the communicator from none to ssh.

  "communicator": "ssh",

We need to add the ssh username/password to the template file:

      "ssh_username": "root",
      "ssh_password": "password",
      "ssh_timeout":  "3600s",

 

output

$ packer build qemu_example.json
qemu output will be in this color.

==> qemu: Downloading or copying ISO
    qemu: Downloading or copying: file:///home/ebal/Downloads/CentOS-7-x86_64-Minimal-1804.iso
==> qemu: Creating hard drive...
==> qemu: Starting HTTP server on port 8100
==> qemu: Found port for communicator (SSH, WinRM, etc): 4105.
==> qemu: Looking for available port between 5900 and 6000 on 127.0.0.1
==> qemu: Starting VM, booting from CD-ROM
    qemu: The VM will be run headless, without a GUI. If you want to
    qemu: view the screen of the VM, connect via VNC without a password to
    qemu: vnc://127.0.0.1:5990
==> qemu: Overriding defaults Qemu arguments with QemuArgs...
==> qemu: Connecting to VM via VNC
==> qemu: Typing the boot command over VNC...
==> qemu: Waiting for SSH to become available...
==> qemu: Connected to SSH!
==> qemu: Provisioning with Ansible...
==> qemu: Executing Ansible: ansible-playbook --extra-vars packer_build_name=qemu packer_builder_type=qemu -i /tmp/packer-provisioner-ansible594660041 /opt/hashicorp/packer/testrole.yml -e ansible_ssh_private_key_file=/tmp/ansible-key802434194
    qemu:
    qemu: PLAY [all] *********************************************************************
    qemu:
    qemu: TASK [testrole : Debug that our ansible role is working] ***********************
    qemu: ok: [default] => {
    qemu:     "msg": "It Works !"
    qemu: }
    qemu:
    qemu: TASK [testrole : Install the Extra Packages for Enterprise Linux repository] ***
    qemu: changed: [default]
    qemu:
    qemu: TASK [testrole : upgrade all packages] *****************************************
    qemu: changed: [default]
    qemu:
    qemu: PLAY RECAP *********************************************************************
    qemu: default                    : ok=3    changed=2    unreachable=0    failed=0
    qemu:
==> qemu: Halting the virtual machine...
==> qemu: Converting hard drive...
==> qemu: Running post-processor: compress
==> qemu (compress): Using lz4 compression with 4 cores for qemu.lz4
==> qemu (compress): Archiving centos7/centos7min.qcow2 with lz4
==> qemu (compress): Archive qemu.lz4 completed
Build 'qemu' finished.

==> Builds finished. The artifacts of successful builds are:
--> qemu: compressed artifacts in: qemu.lz4

 

Appendix

here is the entire qemu template file:

qemu_example.json

{

  "_comment": "This is a CentOS 7.5 Qemu Builder example",

  "description": "tMinimal CentOS 7 Qemu Imagen__________________________________________",

  "variables": {
    "7.5":      "1804",
    "checksum": "714acc0aefb32b7d51b515e25546835e55a90da9fb00417fbee2d03a62801efd",

     "virtual_name": "centos7min.qcow2",
     "virtual_dir":  "centos7",
     "virtual_size": "20480",
     "virtual_mem":  "4096M",

     "Password": "password",

     "ansible_playbook": "testrole.yml"
  },

  "builders": [
    {
        "type": "qemu",

        "headless": true,

        "iso_url": "/home/ebal/Downloads/CentOS-7-x86_64-Minimal-{{user `7.5`}}.iso",
        "iso_checksum": "{{user `checksum`}}",
        "iso_checksum_type": "sha256",

        "communicator": "ssh",

        "ssh_username": "root",
        "ssh_password": "{{user `Password`}}",
        "ssh_timeout":  "3600s",

        "boot_command":[
          "<tab> text ",
          "console=ttyS0,115200n8 ",
          "ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/CentOS7-ks.cfg ",
          "nameserver=9.9.9.9 ",
          "<enter><wait> "
        ],
        "boot_wait": "0s",

        "http_directory": "/home/ebal/Downloads/",
        "http_port_min": "8090",
        "http_port_max": "8100",

        "shutdown_timeout": "30m",

    "accelerator": "kvm",
    "disk_size":   "{{ user `virtual_size` }}",
    "format":      "qcow2",
    "qemuargs":[
        [ "-m",      "{{ user `virtual_mem` }}" ],
            [ "-serial", "file:serial.out" ]
    ],

        "vm_name":          "{{ user `virtual_name` }}",
        "output_directory": "{{ user `virtual_dir` }}"
    }
  ],

  "provisioners":[
    {
      "type":          "ansible",
      "playbook_file": "{{ user `ansible_playbook` }}"
    }
  ],

  "post-processors":[
      {
          "type":   "compress",
          "format": "lz4",
          "output": "{{.BuildName}}.lz4"
      }
  ]
}

 

Tag(s): packer, ansible, qemu

June 06 2018

TSDgeos' blog: Call for distros: Patch cups for better internationalization

If you're reading this and use cups to print (almost certainly you do if you're on Linux), you may want to contact your distribution and ask them to add this patch.

It adds translation support for a few keywords found in some printers' PPD files. The CUPS upstream project has rejected it with not much reason other than "PPD is old", without really taking into account that it's really the only way you can get access to some advanced printer features (see comments in the same thread).

Anyhow, they're free to not want that code upstream, but I really think all distros should add it since it's very simple and improves the usability for some users.

vanitasvitae's blog » englisch: Summer of Code: PGPainless 2.0

In previous posts, I mentioned that I forked Bouncy-GPG to create PGPainless, which will be my simple-to-use OX/OpenPGP API. I have some news regarding that, since I made a radical decision.

I’m not going to fork Bouncy-GPG anymore, but instead write my own OpenPGP library based on BouncyCastle. The new PGPainless will be more suitable for the OX use case. The main reason I did this was because Bouncy-GPG followed a pattern where the user would have to know whether an incoming message was encrypted or signed or both. This pattern does not apply to OX very well, since you don’t know what content an incoming message has. This was a deliberate decision made by the OX authors to circumvent certain attacks.

Ironically, another reason why I decided to write my own library are Bouncy-GPG’s many JUnit tests. I tried to make some changes, which resulted in breaking tests all the time. This might of course be a bad sign, indicating that my changes are bad, but in my case I’m pretty sure that the tests are just a little bit oversensitive :) For me it would be less work/more fun to create my own library than trying to fix Bouncy-GPG’s JUnit tests.

The new PGPainless is already capable of generating various OpenPGP keys, encrypting and signing data, as well as decrypting messages. I noticed that using elliptic curve encryption keys, I was able to reduce the size of (short) messages by a factor of two. So recommending EC keys to implementors might be worth a thought. There is still a little bug in my code which causes signature verification to fail, but I’ll find it – and I’ll kill it.

Today I spent nearly 3 hours debugging a small bug in the decryption code. It turns out that this code works as I intended,

PGPObjectFactory objectFactory = new PGPObjectFactory(encryptedBytes, fingerprintCalculator);
Object o = objectFactory.nextObject();

while this code does not:

PGPObjectFactory objectFactory = new PGPObjectFactory(encryptedBytes, fingerprintCalculator);
Object o = objectFactory.iterator().next();

The difference is subtle, but apparently deadly.

You can find the new PGPainless on my Gitea instance :)

June 05 2018

DanielPocock.com - fsfe: Public Money Public Code: a good policy for FSFE and other non-profits?

FSFE has been running the Public Money Public Code (PMPC) campaign for some time now, requesting that software produced with public money be licensed for public use under a free software license. You can request a free box of stickers and posters here (donation optional).

Many non-profits and charitable organizations receive public money directly from public grants and indirectly from the tax deductions given to their supporters. If the PMPC argument is valid for other forms of government expenditure, should it apply to the expenditures of these organizations too?

Where do we start?

A good place to start could be FSFE itself. Donations to FSFE are tax deductible in Germany and Switzerland. Therefore, the organization is partially supported by public money.

Personally, I feel that for an organization like FSFE to be true to its principles and its affiliation with the FSF, it should be run without any non-free software or cloud services.

However, in my role as one of FSFE's fellowship representatives, I proposed a compromise: rather than my preferred option, an immediate and outright ban on non-free software in FSFE, I simply asked the organization to keep a register of dependencies on non-free software and services, by way of a motion at the 2017 general assembly:

The GA recognizes the wide range of opinions in the discussion about non-free software and services. As a first step to resolve this, FSFE will maintain a public inventory on the wiki listing the non-free software and services in use, including details of which people/teams are using them, the extent to which FSFE depends on them, a list of any perceived obstacles within FSFE for replacing/abolishing each of them, and for each of them a link to a community-maintained page or discussion with more details and alternatives. FSFE also asks the community for ideas about how to be more pro-active in spotting any other non-free software or services creeping into our organization in future, such as a bounty program or browser plugins that volunteers and staff can use to monitor their own exposure.

Unfortunately, it failed to receive enough votes.

In a blog post on the topic of using proprietary software to promote freedom, FSFE's Executive Director Jonas Öberg used the metaphor of taking a journey. Isn't a journey more likely to succeed if you know your starting point? Wouldn't it be even better to have a map that shows which roads are dead ends?

In any IT project, it is vital to understand your starting point before changes can be made. A register like this would also serve as a good model for other organizations hoping to secure their own freedoms.

For a community organization like FSFE, there is significant goodwill from volunteers and other free software communities. A register of exposure to proprietary software would allow FSFE to crowdsource solutions from the community.

Back in 2018

I'll be proposing the same motion again for the 2018 general assembly meeting in October.

If you can see something wrong with the text of the motion, please help me improve it so it may be more likely to be accepted.

Offering a reward for best practice

I've observed several discussions recently where people have questioned the impact of FSFE's campaigns. How can we measure whether the campaigns are having an impact?

One idea may be to offer an annual award for other non-profit organizations, outside the IT domain, who demonstrate exemplary use of free software in their own organization. An award could also be offered for some of the individuals who have championed free software solutions in the non-profit sector.

An award program like this would help to showcase best practice and provide proof that organizations can run successfully using free software. Seeing compelling examples of success makes it easier for other organizations to believe freedom is not just a pipe dream.

Therefore, I hope to propose an additional motion at the FSFE general assembly this year, calling for an award program to commence in 2019 as a new phase of the PMPC campaign.

Please share your feedback

Any feedback on this topic is welcome through the FSFE discussion list. You don't have to be a member to share your thoughts.

June 04 2018

DanielPocock.com - fsfe: Free software, GSoC and ham radio in Kosovo

After the excitement of OSCAL in Tirana, I travelled up to Prishtina, Kosovo, with some of Debian's new GSoC students. We don't always have so many students participating in the same location. Being able to meet with all of them for a coffee each morning gave some interesting insights into the challenges people face in these projects and things that communities can do to help new contributors.

On the evening of 23 May, I attended a meeting at the Prishtina hackerspace where a wide range of topics, including future events, were discussed. There are many people who would like to repeat the successful Mini DebConf and Fedora Women's Day events from 2017. A wiki page has been created for planning but no date has been confirmed yet.

On the following evening, 24 May, we had a joint meeting with SHRAK, the ham radio society of Kosovo, at the hackerspace. Acting director Vjollca Caka gave an introduction to the state of ham radio in the country and then we set up a joint demonstration using the equipment I brought for OSCAL.

On my final night in Prishtina, we had a small gathering for dinner: Debian's three GSoC students, Elena, Enkelena and Diellza; Renata Gegaj, who completed Outreachy with the GNOME community; and Qendresa Hoti, one of the organizers of last year's very successful hackathon for women in Prizren.

Promoting free software at Doku:tech, Prishtina, 9-10 June 2018

One of the largest technology events in Kosovo, Doku:tech, will take place on 9-10 June. It is not too late for people from other free software communities to get involved; please contact the FLOSSK or Open Labs communities in the region if you have questions about how you can participate. A number of budget airlines, including WizzAir and Easyjet, now have regular flights to Kosovo, and many larger free software organizations will consider requests for a travel grant.

June 01 2018

vanitasvitae's blog » englisch: Summer of Code: Command Line OX Client!

As I stated earlier, I am working on a small XMPP command line test client, which is capable of sending and receiving OpenPGP encrypted messages. I just published a first version :)

Creating command line clients with Smack is super easy. You basically just create a connection, instantiate the manager classes of the features you want to use, and create some kind of read-execute-print loop.
Last year I demonstrated how to create an OMEMO-capable client in 200 lines of code. The new client follows pretty much the same scheme.
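
A minimal sketch of that scheme (assuming Smack 4.2+; server, user and contact names are placeholders, and no OX/OpenPGP is wired in yet):

import java.util.Scanner;

import org.jivesoftware.smack.AbstractXMPPConnection;
import org.jivesoftware.smack.chat2.ChatManager;
import org.jivesoftware.smack.tcp.XMPPTCPConnection;
import org.jivesoftware.smack.tcp.XMPPTCPConnectionConfiguration;
import org.jxmpp.jid.impl.JidCreate;

public class MiniClient {
    public static void main(String[] args) throws Exception {
        // Create a connection and log in.
        AbstractXMPPConnection connection = new XMPPTCPConnection(
                XMPPTCPConnectionConfiguration.builder()
                        .setXmppDomain("example.org")
                        .setUsernameAndPassword("alice", "secret")
                        .build());
        connection.connect().login();

        // Instantiate the manager class of the feature we want: chats.
        ChatManager chatManager = ChatManager.getInstanceFor(connection);
        chatManager.addIncomingListener((from, message, chat) ->
                System.out.println(from + ": " + message.getBody()));

        // Read-execute-print loop: every typed line goes to one contact.
        Scanner scanner = new Scanner(System.in);
        while (scanner.hasNextLine()) {
            chatManager.chatWith(JidCreate.entityBareFrom("bob@example.org"))
                    .send(scanner.nextLine());
        }
        connection.disconnect();
    }
}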

The client offers some basic features like adding contacts to the roster, as well as obviously OX related features like displaying fingerprints; generation, restoration and backup of key pairs; and of course encryption and decryption of messages. Note that up to this point I haven’t implemented any form of trust management. For now, my implementation considers all keys whose fingerprints are published in the metadata node as trusted.

You can find the client here. Feel free to try it out, instructions on how to build it are also found in the repository.

Happy Hacking!

May 31 2018

vanitasvitae's blog » englisch: Summer of Code: Polishing the API

The third week of coding is nearing its end and I’m quite happy with how my project turned out so far.

The last two days I was ill, so I haven’t got anything done during that period, but since I started my work ahead of time during the bonding period, I think I can compensate for that :) .
Anyway, this week I created a second Manager class as another entry point to the API. This one is specifically targeted at the Instant Messaging use case of XEP-0374. It provides methods to easily start encrypted chats with contacts and register listeners for incoming chat messages.

I’m still not 100% pleased by how I’m handling exceptions. PGPainless so far only throws a single type of exception, which might make it hard to determine what exactly went wrong. This is something I have to change in the future.

Another thing that bothers me about PGPainless is the fact that I have to know how an OpenPGP message is constructed in order to process it. I have to know that a message is encrypted and signed to then decrypt and verify it.
XEP-0373 does not specify any kind of marker that says “the following message is encrypted and signed”, which is a design decision that was made in order to counter certain types of attacks. So I have to modify PGPainless to provide a method that can process arbitrary OpenPGP messages and which tells me afterwards whether the message was signed and so on.

Compared to last year’s project, I spent way more time documenting my code this time. Nearly every public method has a beautiful green block of javadoc above its signature documenting what it does and how it should be used.
What I could do better, though, are tests. Last year my focus was on creating good JUnit and integration tests, while this time I only have the bare minimum. I’ll try to go through my API together with Florian next week to find rough edges and afterwards create some more tests.

Happy Hacking!

May 25 2018

vanitasvitae's blog » englisch: Summer of Code: Advancing the prototype

It has been a week since my last blog post, so it is time for an update.

I successfully tested my OX client against an experimental Gajim plugin written by Philip Hörist. Big thanks to him for his help during testing :)

My implementation can now back up the user’s secret key in a private PubSub node, as well as restore it from there. This was vastly useful during testing, as I don’t have a persistent store implementation yet.
My next steps will be to implement a solution for persisting keys, as well as some kind of trust management. Florian suggested implementing the TOFU (trust on first use) trust model.
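
To make the idea concrete, here is a minimal sketch of what trust on first use could look like (my own illustration, not existing Smack or PGPainless code; the fingerprint is represented as a plain String):

import java.util.HashMap;
import java.util.Map;

import org.jxmpp.jid.BareJid;

// Illustrative TOFU store, not actual project code.
class TofuStore {
    private final Map<BareJid, String> trustedFingerprints = new HashMap<>();

    boolean isTrusted(BareJid contact, String fingerprint) {
        String known = trustedFingerprints.get(contact);
        if (known == null) {
            // First use: remember the fingerprint and trust it from now on
            trustedFingerprints.put(contact, fingerprint);
            return true;
        }
        // Later uses: only trust if the fingerprint did not change
        return known.equals(fingerprint);
    }
}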

PGPainless has a key selection strategy which selects keys based on the UID. I will have to change this to use key fingerprints instead, as I noticed that a user mallory@malware.sys could publish a key carrying her own uid as well as the uid of juliet@capulet.lit. In that case my implementation would encrypt the message to Mallory’s key too, as it also has Juliet’s uid. Going with fingerprints instead makes the system more secure.
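
Here is a rough sketch of the fingerprint-based selection against the BouncyCastle types (my own illustration, not the actual PGPainless code):

import java.util.Arrays;
import java.util.Iterator;

import org.bouncycastle.openpgp.PGPPublicKey;
import org.bouncycastle.openpgp.PGPPublicKeyRing;
import org.bouncycastle.openpgp.PGPPublicKeyRingCollection;

class FingerprintSelection {
    // Select a contact's key by fingerprint instead of by uid (illustrative sketch).
    static PGPPublicKey findByFingerprint(PGPPublicKeyRingCollection collection, byte[] expected) {
        Iterator<PGPPublicKeyRing> rings = collection.getKeyRings();
        while (rings.hasNext()) {
            PGPPublicKey key = rings.next().getPublicKey();
            // Unlike a uid, the fingerprint cannot simply be claimed by Mallory
            if (Arrays.equals(key.getFingerprint(), expected)) {
                return key;
            }
        }
        return null;
    }
}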

XEP-0373 had some typos and was missing some examples, for which I submitted fixes. One change I made is a breaking change, so we will have to see whether it gets merged in the next few days or is delayed to be merged together with later breaking modifications.

That’s it for now :)

Happy Hacking!

May 23 2018

Evaggelos Balaskas - System Engineer: CentOS Bootstrap

CentOS 6

This method has been suggested for building a container image from your current CentOS system.

 

In my case, I need to remotely upgrade a running CentOS 6 system to a new, clean CentOS 7 on a test VPS, without the need of opening the VNC console, attaching a new ISO, etc.

I am rather lucky, as I have a clean extra partition on this VPS, so I will follow the process below to remotely install a clean CentOS 7 onto that partition, then add a new grub entry and boot into it.

 

Current OS

# cat /etc/redhat-release
CentOS release 6.9 (Final)

 

Format partition

Format and mount the partition:

 mkfs.ext4 -L rootfs /dev/vda5
 mount /dev/vda5 /mnt/

 

InstallRoot

Type:

# yum -y groupinstall "Base" --releasever 7 --installroot /mnt/ --nogpgcheck

 

Test

When it finishes, test it:

mount --bind /dev/  /mnt/dev/
mount --bind /sys/  /mnt/sys/
mount --bind /proc/ /mnt/proc/

chroot /mnt/

bash-4.2#  cat /etc/redhat-release
CentOS Linux release 7.5.1804 (Core)

It works!

 

Root Password

Inside the chroot environment:

bash-4.2# passwd
Changing password for user root.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

bash-4.2# exit

 

Grub

Add the new grub entry for CentOS 7:

title CentOS 7
        root (hd0,4)
        kernel /boot/vmlinuz-3.10.0-862.2.3.el7.x86_64 root=/dev/vda5 ro rhgb LANG=en_US.UTF-8
        initrd /boot/initramfs-3.10.0-862.2.3.el7.x86_64.img

By changing the default boot entry from 0 to 1:

default=0

to

default=1

our system will boot into CentOS 7 on the next reboot!

 

May 22 2018

Evaggelos Balaskas - System Engineer: Restrict email addresses for sending emails

Prologue

 

Maintaining a (public) service can sometimes be troublesome. In the case of an email service, you often need to suspend or restrict users for reasons like spam, scams or phishing, and you have to deal with inactive or even compromised accounts. Protecting your infrastructure means protecting your active users and the service. In this article I’ll propose a way to restrict specific accounts from sending email, so that they get a bounce message explaining why their email was not sent.

 

Reading Material

The reference documentation, for when you have a Directory Service (LDAP) as your user backend and use Postfix:

 

LDAP

In this post, we will not get into OpenLDAP internals, but as a reference I’ll show an example user account (this is from my working test lab).

 

dn: uid=testuser2,ou=People,dc=example,dc=org
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
objectClass: posixAccount
mail: testuser2@example.org
smtpd_sender_restrictions: true
cn: Evaggelos Balaskas
sn: Balaskas
givenName: Evaggelos
uidNumber: 99
gidNumber: 12
uid: testuser2
homeDirectory: /storage/vhome/%d/%n
userPassword: XXXXXXXXXX

As you can see, we have a custom LDAP attribute:

smtpd_sender_restrictions: true

Keep that in mind for now.

 

Postfix

The default value of smtpd_sender_restrictions is empty, which means that by default the mail server has no sender restrictions. Depending on the policy, we can either whitelist or blacklist in Postfix restrictions; for the purpose of this blog post, we will only restrict (blacklist) specific user accounts.

 

ldap_smtpd_sender_restrictions

To do that, let’s create a new file that will talk to our OpenLDAP server and ask for that specific LDAP attribute.

ldap_smtpd_sender_restrictions.cf

server_host = ldap://localhost
server_port = 389
search_base = ou=People,dc=example,dc=org
query_filter = (&(smtpd_sender_restrictions=true)(mail=%s))
result_attribute = uid
result_filter = uid
result_format = REJECT This account is not allowed to send emails, plz talk to abuse@example.org
version = 3
timeout = 5

This is an anonymous bind, as we do not search for any sensitive attribute like the password.

 

Status Codes

The default status code will be: 554 5.7.1
Take a look here for more info: RFC 3463 - Enhanced Mail System Status Codes
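
If you prefer a different reply code, the action in an access(5) map can, if I recall correctly, also start with an explicit numeric code and enhanced status code instead of the REJECT keyword, e.g.:

result_format = 553 5.7.1 This account is not allowed to send emails, plz talk to abuse@example.org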

 

Test it

# postmap -q testuser2@example.org ldap:/etc/postfix/ldap_smtpd_sender_restrictions.cf
REJECT This account is not allowed to send emails, plz talk to abuse@example.org

Add -v to increase verbosity:

# postmap -v -q testuser2@example.org ldap:/etc/postfix/ldap_smtpd_sender_restrictions.cf

 

Possible Errors

postmap: fatal: unsupported dictionary type: ldap

Check your Postfix setup with postconf -m. The result should look something like this:

btree
cidr
environ
fail
hash
internal
ldap
memcache
nis
proxy
regexp
socketmap
static
tcp
texthash
unix

If not, you need to set up Postfix to support the LDAP dictionary type.

 

smtpd_sender_restrictions

Modify main.cf to add ldap_smtpd_sender_restrictions.cf:

# applied in the context of the MAIL FROM
smtpd_sender_restrictions =
        check_sender_access ldap:/etc/postfix/ldap_smtpd_sender_restrictions.cf

and reload postfix

# postfix reload

If you keep logs, tail them to see any errors.

 

Thunderbird

[Screenshot: Thunderbird showing the smtpd_sender_restrictions bounce message]


Logs

May 19 13:20:26 centos6 postfix/smtpd[20905]:
NOQUEUE: reject: RCPT from XXXXXXXX[XXXXXXXX]: 554 5.7.1 <testuser2@example.org>:
Sender address rejected: This account is not allowed to send emails, plz talk to abuse@example.org;
from=<testuser2@example.org> to=<postmaster@example.org> proto=ESMTP helo=<[192.168.0.13]>

May 21 2018

DanielPocock.com - fsfe: OSCAL'18 Debian, Ham, SDR and GSoC activities

Over the weekend I was in Tirana, Albania, for OSCAL 2018.

Crowdfunding report

The crowdfunding campaign to buy hardware for the radio demo was successful. The gross sum received was GBP 110.00; Paypal fees were GBP 6.48, and the net amount after currency conversion was EUR 118.29. Here is a complete list of transaction IDs for transparency, so you can see that if you donated, your contribution was included in the total I have reported in this blog. Thank you to everybody who made this a success.

The funds were used to purchase an Ultracell UCG12-45 sealed lead-acid battery from Tashi in Tirana; here is the receipt. After OSCAL, the battery is being used at a joint meeting of the Prishtina hackerspace and SHRAK, the amateur radio club of Kosovo, on 24 May. The battery will remain in the region to support any members of the ham community who want to visit the hackerspaces and events.

Debian and Ham radio booth

Local volunteers from Albania and Kosovo helped run a Debian and ham radio/SDR booth on Saturday, 19 May.

The antenna was erected as a folded dipole with one end joined to the Tirana Pyramid and the other end attached to the marquee sheltering the booths. We operated on the twenty meter band using an RTL-SDR dongle and upconverter for reception and a Yaesu FT-857D for transmission. An MFJ-1708 RF Sense Switch was used for automatically switching between the SDR and transceiver on PTT and an MFJ-971 ATU for tuning the antenna.

I successfully made contact with 9A1D, a station in Croatia. Enkelena Haxhiu, one of our GSoC students, made contact with Z68AA in her own country, Kosovo.

Anybody hoping that Albania was a suitably remote place to hide from media coverage of the British royal wedding would have been disappointed as we tuned in to GR9RW from London and tried unsuccessfully to make contact with them. Communism and royalty mix like oil and water: if a deceased dictator was already feeling bruised about an antenna on his pyramid, he would probably enjoy water torture more than a radio transmission celebrating one of the world's most successful hereditary monarchies.

A versatile venue and the dictator's revenge

It isn't hard to imagine communist dictator Enver Hoxha turning in his grave at the thought of his pyramid being used for an antenna for communication that would have attracted severe punishment under his totalitarian regime. Perhaps Hoxha had imagined the possibility that people may gather freely in the streets: as the sun moved overhead, the glass facade above the entrance to the pyramid reflected the sun under the shelter of the marquees, giving everybody a tan, a low-key version of a solar death ray from a sci-fi movie. Must remember to wear sunscreen for my next showdown with a dictator.

The security guard stationed at the pyramid for the day was kept busy chasing away children and more than a few adults who kept arriving to climb the pyramid and slide down the side.

Meeting with Debian's Google Summer of Code students

Debian has three Google Summer of Code students in Kosovo this year. Two of them, Enkelena and Diellza, were able to attend OSCAL. Albania is one of the few countries they can visit easily and OSCAL deserves special commendation for the fact that it brings otherwise isolated citizens of Kosovo into contact with an increasingly large delegation of foreign visitors who come back year after year.

We had some brief discussions about how their projects are starting and things we can do together during my visit to Kosovo.

Workshops and talks

On Sunday, 20 May, I ran a workshop Introduction to Debian and a workshop on Free and open source accounting. At the end of the day Enkelena Haxhiu and I presented the final talk in the Pyramid, Death by a thousand chats, looking at how free software gives us a unique opportunity to disable a lot of unhealthy notifications by default.

Paul Boddie's Free Software-related blog » English: L4Re: Textual Debugging Output on the Framebuffer

I have actually been in the process of drafting another article about writing device drivers to run within the L4 Runtime Environment (L4Re) on top of the Fiasco.OC microkernel, this being for the Ben NanoNote and Letux 400 notebook computers. That article started to trail behind a lot of the work being done, and there are a few loose ends to be tied up before I can finish it.

Meanwhile, on the way towards some kind of achievement with L4Re, confounded somewhat by the sometimes impenetrable APIs, I managed to eventually get something working that I had thought would have been one of the first things to demonstrate. When initially perusing the range of software in the “pkg” directory within the L4Re distribution, I saw a package called “fbterminal” providing a terminal program that shows itself on the framebuffer (or display).

I imagined being able to launch this on top of the graphical user interface multiplexer, Mag, and then have the “hello” program provide some output to this terminal. I even imagined having the terminal accept input from the keyboard, but we aren’t quite at that point, and that is where my other article comes in. Of course, I initially had no idea how to achieve this, and there needed to be a lot of work put in just to get back to this particular point of entry.

Now, however, the act of launching fbterminal and having it work is fairly straightforward. A few additional packages are required, but the framebuffer works satisfactorily as far as the other components are concerned, and the result will be a blank region of the screen with the terminal showing precisely nothing. Obviously, we want it to show something in order to confirm that it is working. I had to get used to seeing this blank terminal for a while.

The intended companion to fbterminal for testing purposes is the hello program which merely writes output to what might be described as a logging destination. This particular output channel is usually the serial console for the device, which meant that when porting the system to the Ben and the Letux, the hello program was of no use to me. But now, with a framebuffer to show things on, and with a terminal that might be able to accept output from other things, it becomes interesting to see if the hello program can be persuaded to send its output elsewhere.

It was useful to investigate how the output from the hello program actually makes its way to its destination. Since it uses standard C library functions, there has to be a mechanism for those functions to use. As far as I know, these would typically involve various system calls concerning files and streams. A perusal of the sources dredged up an interesting symbol called “__rtld_l4re_env_posix_vfs_ops”. Further investigation led me to the L4Re virtual filesystem (Vfs) functionality and the following interesting files:

  • pkg/l4re-core/l4re_vfs/include/vfs.h
  • pkg/l4re-core/l4re_vfs/include/impl/vfs_impl.h

And these in turn led me to the virtual console (Vcon) functionality:

  • pkg/l4re-core/l4re_vfs/include/impl/vcon_stream.h
  • pkg/l4re-core/l4re_vfs/include/impl/vcon_stream_impl.h

It seems that standard output from the hello program goes via the standard C calls and Vfs functions and is packaged up and sent using the Vcon mechanisms to the logging destination, which is typically provided by the root task, Moe. Given that fbterminal understands the Vcon protocol and acts as a console server, there appeared to be some potential in looking at Vcon mechanisms more closely. It seemed that fbterminal might be able to take the place of Moe.

Indeed, the documentation offers some clues. In the description of the init process, Ned, a mention is made of a program loader configuration parameter called “log_fab” that indicates an object that can create a suitable logging destination. When starting a program, the program loader creates such an object using “log_fab” and presents it to the new program as a capability (or object reference).

However, this is not quite what we want because we don’t need anything else to be created: we already have fbterminal ready for us to use. I suppose something could be conjured up to act as a factory and provide a fbterminal instance, and maybe this is not too arduous in the Lua-based configuration environment used by Ned, but I wanted a more direct solution.

Contemplating this, I went and rediscovered the definitions used by Ned to support its configuration scripting (found in pkg/l4re-core/ned/server/src/ned.lua). Here, the workings of the “log_fab” mechanism can be found and studied. But what I started to favour was a way of just indicating a capability to the hello program and not have the loader create something else. This required a simple edit to one of the functions:

function App_env:log()
  Class.check(self, App_env);
  if self.loader.log_fab == nil or self.loader.log_fab.create == nil then
    error ("Starting an application without valid log factory", 4);
  end
  return self.loader.log_fab:create(Proto.Log, table.unpack(self.log_args));
end

Here, we want to ignore “log_fab” and just have our existing capability used instead. So, I introduced another clause to the if statement:

  if self.log_cap then
    return self.log_cap
  elseif self.loader.log_fab == nil or self.loader.log_fab.create == nil then
    error ("Starting an application without valid log factory", 4);
  end

Now, if we specify “log_cap” when starting a program, it should want to direct logging messages to the referenced object instead. So, with this available to us, it becomes possible to adjust the way the hello program is started. First of all, we define the way fbterminal is set up and started:

local term = l:new_channel();

l:start({
    caps = {
      fb = mag_caps.svc:create(L4.Proto.Goos, "g=320x230+0+0", "barheight=10"),
      term = term:svr(),
    },
  },
  "rom/fbterminal");

Since fbterminal needs to “export” its console abilities using a capability called “term”, this needs to be indicated in the “caps” table. (It doesn’t matter what the local variable for the channel is called.) So, the hello program is defined accordingly:

l:start({
    log_cap = term,
  },
  "rom/hello");

Here, we make use of “log_cap” and allow the output to be directed to the terminal that has already been started. And the result is this:

[Image: fbterminal on the Ben NanoNote showing the hello program's output]

And at long last, it becomes possible to see what programs are printing out to the log!

May 17 2018

vanitasvitae's blog » englisch: Summer of Code: Bug found!

BouncyCastle

The mystery has been solved! I finally found out why the OpenPGP keys I generated for my project had a broken format. Turns out there was a bug in BouncyCastle.
Big thanks to Heiko Stamer, who quickly identified the issue in the bug report I created for pgpdump, as well as to Kazu Yamamoto and David Hook, who helped identify and confirm the issue.

The bug was that BouncyCastle, when exporting a secret key without a password, appended 20 bytes of a SHA1 hash after the secret key material. That is only supposed to happen when the key is in fact password protected. In the case of unprotected keys, BouncyCastle is supposed to add a two-byte checksum instead. BouncyCastle's wrong behaviour caused pgpdump to interpret random bytes as packet tags, which resulted in a wrong key id being printed out.

The relevant part of RFC-4880 is found in section 5.5.3:

      - If the string-to-key usage octet is zero or 255, then a two-octet
       checksum of the plaintext of the algorithm-specific portion (sum
       of all octets, mod 65536).  If the string-to-key usage octet was
       254, then a 20-octet SHA-1 hash of the plaintext of the
       algorithm-specific portion.
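
As a small illustration of the quoted paragraph, the two-octet checksum for the unprotected case boils down to this (a hedged Java sketch, not BouncyCastle code):

// Two-octet checksum per RFC 4880, section 5.5.3 (s2k usage octet 0 or 255):
// sum of all octets of the algorithm-specific plaintext, mod 65536.
static int twoOctetChecksum(byte[] plaintext) {
    int sum = 0;
    for (byte b : plaintext) {
        sum = (sum + (b & 0xff)) % 65536;
    }
    return sum;
}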

Shortly after I filed a bug report for BouncyCastle, Vincent Breitmoser, one of the authors of XEP-0373 and XEP-0374, submitted a fix for the bug. This is a nice little example of how free software projects can work together to improve each other. Big thanks for that :)

Working OX Test Client!

I spent last night creating a command line chat client that can “speak” OX. Everything is a little bit rough around the edges, but the core functionality works.
The user has to perform actions like publishing and fetching keys by hand, but encrypted, signed messages can be exchanged. Having working code, I can now start to formulate a general API which will enable multiple OpenPGP back-ends. I will spend some more time polishing that client up and will eventually publish it in a separate git repository.

EFAIL

I totally forgot to talk about EFAIL in my last blog posts. It was a little shock when I woke up on Monday, the first day of the coding phase, only to read sentences like “Are you okay?” or “Is the GSoC project in danger?” :D
I’m sure you all have read about the EFAIL attack somewhere in the media, so I’m not going into too much detail here (the EFF already did a great job *cough cough*). The E-Fail website describes the attack as follows:

“In a nutshell, EFAIL abuses active content of HTML emails, for example externally loaded images or styles, to exfiltrate plaintext through requested URLs.”

Is EFAIL applicable to XMPP?
Probably not to the XEPs I’m implementing. In the case of email, it is relatively easy to prepend an image tag to the message. XEP-0373, however, specifies that the transported extension elements (e.g. the body of the message) are wrapped inside an additional extension element, which is then encrypted. Additionally, this element (e.g. <signcrypt/>) carries a random-length, random-content padding element, so it is very hard to nearly impossible for an attacker to guess where the actual body starts, and in turn where they’d have to insert an “extraction channel” (e.g. an image tag) into the message.

In legacy OpenPGP for XMPP (XEP-0027) it is theoretically possible to at least execute the first part of the EFAIL attack. An attacker could insert an image tag to make a link out of the message. However, external images are usually shared using XEP-0066 (Out of Band Data), by adding an x-element with the oob namespace to the message, which contains the URL of the image. Note that this element is added outside the body, though, so we should be fine; the attack would only work if the user tried to open the linkified message in a browser :)

Another option for the attacker would be to attack XHTML-IM (XEP-0071) messages, but I think those do not support legacy OpenPGP in the first place. Also, XHTML-IM has recently been deprecated *phew*.

In the end, I’m by no means a security expert, so please do not quote me on my wild thoughts here :)
However, it is important to learn from this example and not make the same mistakes some email clients did.

Happy Hacking!

May 16 2018

vanitasvitae's blog » englisch: Summer of Code: Quick Update

I noticed that my blog posting frequency is substantially higher than last year. For that reason I’ll try to keep this post shorter.

Yesterday I implemented my first prototype code to encrypt and decrypt XEP-0374 messages! It can process incoming PubkeyElements (the published OpenPGP keys of other users) and create SigncryptElements which contain a signed and encrypted payload. On the receiving side it can also decrypt those messages and verify the signature.

I’m still puzzled about why I’m unable to dump the keys I generate using pgpdump. David Hook from Bouncycastle used my code to generate a key and it worked flawlessly on his machine, so I’m stumped for an answer…

I created a bug report about the issue on the pgpdump repository. I hope that we will get to the cause of the issue soon.

Changes to the schedule

In my original proposal I sketched out a timeline which now (given that I’m already making huge steps) looks a little bit underwhelming. The initial plan was to work on Smack’s PubSub API within the first two weeks.
Florian suggested that I should instead create a working prototype of my implementation as soon as possible, so I’m going to modify my schedule to meet the new criteria:

My new plan is to have a fully working prototype implementation by the time of the first evaluation (June 15th).
That prototype (implemented within a small command line test client) will be capable of the following things:

  • Storing keys in a rudimentary form on disk
  • Automatically creating keys if needed
  • Publishing public keys via PubSub
  • Fetching contacts’ keys when needed
  • Encrypting and signing messages
  • Decrypting, verifying and displaying incoming messages
The final goal is still to create a sane, modular implementation. I’m just slightly modifying the path that will take me there :)

Happy Hacking!

May 14 2018

vanitasvitae's blog » englisch: Summer of Code: The Plan. Act 1: OpenPGP, Part Two

The Coding Phase has begun! Unfortunately my first (official) day of coding was a bad start, as I barely added any new code. Instead I got stuck on a very annoying bug. On the bright side, I’m now writing a lengthy blog post for you all, whining about my issue in all depth. But first let’s continue where we left off in this post.

In the last part of my OpenPGP TL;DR, I took a look at the different packet types of which an OpenPGP message may consist. In this post, I’ll examine the structure of OpenPGP keys.

Just like an OpenPGP message, a key pair is made up of a variety of sub packets. I will now list some of them.

Key Material

First of all, there are four different types of key material:

  • Public Key
  • Public Sub Key
  • Secret Key
  • Secret Sub Key

It should be clear what Public and Secret keys are.
OpenPGP defines a way to create key hierarchies, where sub keys belong to master keys. A typical use case for this is a user with multiple devices who doesn’t want to risk losing their main key pair in case a device gets stolen. In such a case, they would create one super key (pair) with a bunch of sub keys, one for every device. The super key, which represents the identity of the user, is used to sign all sub keys. It then gets locked away and only the sub keys are used. The advantage of this model is that all sub keys clearly belong to the user, as all of them are signed by the master key. If one device gets stolen, the user can simply revoke that single key without losing all their reputation, as the super key is still valid.

I still have to determine whether my implementation should support sub keys, and if so, how that would work.

Signature

This model brings us directly to another, very important type of sub packet – the Signature Packet. A signature can be seen as a statement. How that statement is to be interpreted is defined by the type of the signature.
There are many different types of signatures, all with different meanings; here are three important ones:

  • Certification Signature
    Signatures are often used as attestations. If I sign your PGP key, for example, I attest to other users that I have to some degree verified that you are the person you claim to be and that the key belongs to you.
  • Sub Key Binding Signature
    If a key has such a signature, the key is a sub key of the key that made the signature. In other words: the signature is a statement that the key that made the signature owns the signed key.
  • Direct-Key Signature
    This type of signature is mostly used to bind additional information, in the form of Signature Sub Packets, to a key. We will see later what type of information that may be.

Issuing signatures should not be taken too lightly. In the end, a signature is a statement that will be interpreted. Creating trust signatures on keys without verifying their authenticity, for example, may seriously harm ecosystems like the Web of Trust.

Signature Sub Packets

What’s the purpose of Signature Sub Packets?
Signature Sub Packets are used to bind information to a key. Examples are:

  • Creation and expiration dates.
    It might be useful to know how long a key should be in use. For that purpose, the owner of the key can set an expiration date after which the key should no longer be used.
    It is also possible to let signatures expire.
  • Preferred Algorithms
    The user can state which algorithms (hashing, compression and symmetric encryption) they want their interlocutors to use when creating a message.
  • Revocation status
    It might be necessary to revoke a key that has been compromised. That can be done by placing a signature on it, stating that the key should no longer be used.
  • Trust Signature
    If a signature contains a trust signature packet, the signature is to be interpreted as an attestation of trust in that key.
  • Primary User ID
    A user can specify the main user id of a key.

User ID

User IDs state which identity of a user a key belongs to. That might be a real name, a company’s name, an email address, or in our case a Jabber ID.
Sub keys can have different user ids; that way a user can differentiate between different roles.

Trust

Trust can not only be expressed by a Trust Signature (mentioned before), but also by a Trust packet. The difference is that the signature is cryptographically backed, while a trust packet is merely an indicator.
Trust packets are mostly used by a user to keep track of which keys of their contacts they themselves trust.

There is currently no XEP specifying how trust decisions of a user are synchronized across multiple devices in the context of XMPP. #FutureWork? :)

Bouncycastle and (the not so painless) PGPainless

As I mentioned in my very last post, I was able to generate an OpenPGP key pair using GnuPG (using the “--allow-freeform-uids” flag to allow the uid format used by XMPP). The next step was trying to generate keys on my own using Bouncycastle. Bouncy-gpg (the library I forked into PGPainless) does not offer convenient methods for creating keys, so that’s one feature I’ll add to PGPainless and hopefully upstream to Bouncy-gpg. I already created some basic builder structure for creating OpenPGP key pairs using RSA. In order to generate a key pair, the user would do this:

PGPSecretKeyRing secRing = BouncyGPG.createKeyPair()
        .withRSAKeys()
        .ofSize(PublicKeySize.RSA._2048)
        .forIdentity("xmpp:average@best.net")
        .withPassphrase("monkey123")
        .build()
        .generateSecretKeyRing();

Pretty easy, right? Behind the scenes, PGPainless is generating the key pair using the following code:

KeyPairGenerator pbkcGenerator = KeyPairGenerator.getInstance(
        BuildPGPKeyGeneratorAPI.this.keyType, PROVIDER);
pbkcGenerator.initialize(BuildPGPKeyGeneratorAPI.this.keySize);

// Underlying public-key-cryptography key pair
KeyPair pbkcKeyPair = pbkcGenerator.generateKeyPair();

// hash calculator
PGPDigestCalculator calculator = new JcaPGPDigestCalculatorProviderBuilder()
        .setProvider(PROVIDER)
        .build()
        .get(HashAlgorithmTags.SHA1);

// Form PGP key pair //TODO: Generalize "PGPPublicKey.RSA_GENERAL" to allow other crypto
PGPKeyPair pgpPair = new JcaPGPKeyPair(PGPPublicKey.RSA_GENERAL, pbkcKeyPair, new Date());

// Signer for creating self-signature
PGPContentSignerBuilder signer = new JcaPGPContentSignerBuilder(
        pgpPair.getPublicKey().getAlgorithm(), HashAlgorithmTags.SHA256);

// Encryptor for encrypting the secret key
PBESecretKeyEncryptor encryptor = passPhrase == null ?
        null : // unencrypted key pair, otherwise AES-256 encrypted
        new JcePBESecretKeyEncryptorBuilder(PGPEncryptedData.AES_256, calculator)
                .setProvider(PROVIDER)
                .build(passPhrase);

// Mimic GnuPGs signature sub packets
PGPSignatureSubpacketGenerator hashedSubPackets = new PGPSignatureSubpacketGenerator();

// Key flags
hashedSubPackets.setKeyFlags(false,
        KeyFlags.CERTIFY_OTHER
                | KeyFlags.SIGN_DATA
                | KeyFlags.ENCRYPT_COMMS
                | KeyFlags.ENCRYPT_STORAGE
                | KeyFlags.AUTHENTICATION);

// Encryption Algorithms
hashedSubPackets.setPreferredSymmetricAlgorithms(false, new int[]{
        PGPSymmetricEncryptionAlgorithms.AES_256.getAlgorithmId(),
        PGPSymmetricEncryptionAlgorithms.AES_192.getAlgorithmId(),
        PGPSymmetricEncryptionAlgorithms.AES_128.getAlgorithmId(),
        PGPSymmetricEncryptionAlgorithms.TRIPLE_DES.getAlgorithmId()
});

// Hash Algorithms
hashedSubPackets.setPreferredHashAlgorithms(false, new int[] {
        PGPHashAlgorithms.SHA_512.getAlgorithmId(),
        PGPHashAlgorithms.SHA_384.getAlgorithmId(),
        PGPHashAlgorithms.SHA_256.getAlgorithmId(),
        PGPHashAlgorithms.SHA_224.getAlgorithmId(),
        PGPHashAlgorithms.SHA1.getAlgorithmId()
});

// Compression Algorithms
hashedSubPackets.setPreferredCompressionAlgorithms(false, new int[] {
        PGPCompressionAlgorithms.ZLIB.getAlgorithmId(),
        PGPCompressionAlgorithms.BZIP2.getAlgorithmId(),
        PGPCompressionAlgorithms.ZIP.getAlgorithmId()
});

// Modification Detection
hashedSubPackets.setFeature(false, Features.FEATURE_MODIFICATION_DETECTION);

// Generator which the user can get the key pair from
PGPKeyRingGenerator ringGenerator = new PGPKeyRingGenerator(
        PGPSignature.POSITIVE_CERTIFICATION, pgpPair,
        BuildPGPKeyGeneratorAPI.this.identity, calculator,
        hashedSubPackets.generate(), null, signer, encryptor);

return ringGenerator;

Using the above code, I’m trying to create a key pair which is constructed the same way as a key generated using GnuPG. I do this mainly to make sure that I don’t have any errors in my code. Also, GnuPG is an implementation of OpenPGP with a lot of reputation; if I do what they do, chances are that I might do it right ;D

Unfortunately I’m not quite sure whether I’m successful with this method or not. To explain my uncertainty, let me show you the output of pgpdump, a tool used to analyse OpenPGP keys:

$pgpdump gnupg.sec
Old: Secret Key Packet(tag 5)(920 bytes)
    Ver 4 - new
    Public key creation time - Tue May  8 15:15:42 CEST 2018
    Pub alg - RSA Encrypt or Sign(pub 1)
    RSA n(2048 bits) - ...
    RSA e(17 bits) - ...
    RSA d(2046 bits) - ...
    RSA p(1024 bits) - ...
    RSA q(1024 bits) - ...
    RSA u(1024 bits) - ...
    Checksum - 3b 8c
Old: User ID Packet(tag 13)(23 bytes)
    User ID - xmpp:juliet@capulet.lit
Old: Signature Packet(tag 2)(334 bytes)
    Ver 4 - new
    Sig type - Positive certification of a User ID and Public Key packet(0x13).
    Pub alg - RSA Encrypt or Sign(pub 1)
    Hash alg - SHA256(hash 8)
    Hashed Sub: issuer fingerprint(sub 33)(21 bytes)
     v4 -    Fingerprint - 1d 01 8c 77 2d f8 c5 ef 86 a1 dc c9 b4 b5 09 cb 59 36 e0 3e
    Hashed Sub: signature creation time(sub 2)(4 bytes)
        Time - Tue May  8 15:15:42 CEST 2018
    Hashed Sub: key flags(sub 27)(1 bytes)
        Flag - This key may be used to certify other keys
        Flag - This key may be used to sign data
        Flag - This key may be used to encrypt communications
        Flag - This key may be used to encrypt storage
        Flag - This key may be used for authentication
    Hashed Sub: preferred symmetric algorithms(sub 11)(4 bytes)
        Sym alg - AES with 256-bit key(sym 9)
        Sym alg - AES with 192-bit key(sym 8)
        Sym alg - AES with 128-bit key(sym 7)
        Sym alg - Triple-DES(sym 2)
    Hashed Sub: preferred hash algorithms(sub 21)(5 bytes)
        Hash alg - SHA512(hash 10)
        Hash alg - SHA384(hash 9)
        Hash alg - SHA256(hash 8)
        Hash alg - SHA224(hash 11)
        Hash alg - SHA1(hash 2)
    Hashed Sub: preferred compression algorithms(sub 22)(3 bytes)
        Comp alg - ZLIB <RFC1950>(comp 2)
        Comp alg - BZip2(comp 3)
        Comp alg - ZIP <RFC1951>(comp 1)
    Hashed Sub: features(sub 30)(1 bytes)
        Flag - Modification detection (packets 18 and 19)
    Hashed Sub: key server preferences(sub 23)(1 bytes)
        Flag - No-modify
    Sub: issuer key ID(sub 16)(8 bytes)
        Key ID - 0xB4B509CB5936E03E
    Hash left 2 bytes - 87 ec
    RSA m^d mod n(2048 bits) - ...
        -> PKCS-1

Above you can see the structure of an OpenPGP RSA key generated by GnuPG: its preferred algorithms, the XMPP UID of Juliet, and so on. Now let’s analyse a key generated using PGPainless.

$pgpdump pgpainless.sec
Old: Secret Key Packet(tag 5)(950 bytes)
    Ver 4 - new
    Public key creation time - Mon May 14 15:56:21 CEST 2018
    Pub alg - RSA Encrypt or Sign(pub 1)
    RSA n(2048 bits) - ...
    RSA e(17 bits) - ...
    RSA d(2046 bits) - ...
    RSA p(1024 bits) - ...
    RSA q(1024 bits) - ...
    RSA u(1023 bits) - ...
    Checksum - 6d 18
New: unknown(tag 48)(173 bytes)
Old: Signature Packet(tag 2)(until eof)
    Ver 213 - unknown

Unfortunately the output indicates an unknown packet tag, and it looks like something is broken. I’m not sure what’s going on, but I suspect either an error in my implementation or a bug in Bouncycastle. I noticed that the output of pgpdump changes drastically if I change the first boolean value in any of the hashedSubPackets setter calls from false to true (that boolean represents whether the set value is “critical”, meaning whether the receiving implementation should throw an error in case the read property is unknown). If I do set it to true, the output looks even more disturbing and broken, since strange unicode symbols start to appear, indicating a bug. Unfortunately my mail to the Bouncycastle mailing list is still unanswered, although I must add that I wrote it only seven hours ago.

It is a real pity that it is so hard to find working example code that is not outdated :( If you can point me in the right direction, please let me know! You can find contact details on my Github page.

My next debugging steps will be to check whether an exported key can successfully be imported back into PGPainless, as well as into GnuPG. Apart from that, I will spend more time thinking about an API which allows different OpenPGP backends.

Happy Hacking!
