January 16 2018

vanitasvitae's blog » englisch: Smack: Some busy nights

Hello everyone!

This weekend I stayed up late almost every evening. Thus I decided that I wanted to code something, but I wasn’t sure what, so I took a look at the list of published XEPs to maybe find something that is easy to implement, but missing from Smack.

I found that XEP-0394: Message Markup was missing from Smack's list of supported extensions, so I began to code. The next day I finished my work and created Smack#194. One or two nights later I again stayed up late and decided to take another look for an unimplemented XEP. I settled on XEP-0382: Spoiler Messages this time, which was really easy to implement (apart from one little attribute, which for whatever reason I struggled to parse until I found a solution). The result of that night is Smack#195.

So if you find yourself lying awake one night with no chance of sleep, just look for an easy task on your favourite free software project. I’m sure this will help you sleep better once the task is done.

Happy Hacking!
Vanitasvitae

January 15 2018

DanielPocock.com - fsfe: RHL'18 in Saint-Cergue, Switzerland

RHL'18 was held at the Centre du Vallon in St-Cergue, the building at the very center of the photo, at the bottom of the piste.

People from various free software communities in the region attended for a series of presentations, demonstrations, socializing and skiing. This event is a lot of fun and I would highly recommend that people look out for the next edition (subscribe to rhl-annonces on lists.swisslinux.org for a reminder email).

Ham radio demonstration

I previously wrote about building a simple antenna for shortwave (HF) reception with software defined radio. That article includes links to purchase all the necessary parts from various sources. Everything described in that article, together with some USB sticks running Debian Hams Live (bootable ham radio operating system), some rolls of string and my FT-60 transceiver, fits comfortably into an OSCAL tote bag like this:

It is really easy to take this kit to an event anywhere, set it up in 10 minutes and begin exploring the radio spectrum. Whether it is a technical event or a village fair, radio awakens curiosity in people of all ages and provides a starting point for many other discussions about technological freedom, for distributing stickers and for inviting people to future events. My previous blog post contains photos of what is in the bag and a video demo.

Open Agriculture Food Computer discussion

We had a discussion about progress building an Open Agriculture (OpenAg) food computer in Switzerland. The next meeting in Zurich will be held on 30 January 2018; please subscribe to the forum topic to receive further details.

Preparing for Google Summer of Code 2018

In between eating fondue and skiing, I found time to resurrect some of my previous project ideas for Google Summer of Code. Most of them are not specific to Debian, and several of them need co-mentors; please contact me if you are interested.

January 11 2018

Free Software – Frank Karlitschek_: Nextcloud Talk is here

Today is a big day. The Nextcloud community is launching a new product and solution called Nextcloud Talk. It’s a full audio/video/chat communication solution which is self-hosted, open source and super easy to use and run. This is the result of over 1.5 years of planning and development.

For a long time it was clear to me that the next step for a file sync and share solution like Nextcloud is to have communication and collaboration features built into the same platform. You want to have a group chat with the people you have a group file share with. You want to have a video call with people while you are collaboratively editing a document. You want to call a person directly from within Nextcloud to collaborate on and discuss a shared file, a calendar invite, an email or anything else. And you want to do this using the same login, the same contacts and the same server infrastructure and web interface.

So this is why we announced, at the very beginning of Nextcloud, that we would integrate the Spreed.ME WebRTC solution into Nextcloud. And this is what we did. But it became clear that what’s really needed is something that is fully integrated into Nextcloud, easy to run and has more features. So we did a full rewrite over the last 1.5 years. This is the result.

Nextcloud Talk can be installed with one click on every Nextcloud server. It contains a group chat feature so that people and teams can communicate and collaborate easily. It also has WebRTC video/voice call features including screen sharing. This can be used for one-on-one calls, web meetings or even full webinars. This works in the web UI, but the Nextcloud community also developed completely new Android and iOS apps, so it works great on mobile too. Thanks to push notifications, you can actually call someone directly on their phone via Nextcloud or a different phone. So this is essentially a fully open source, self-hosted phone system integrated into Nextcloud. Meeting rooms can be public or private and invites can be sent via the Nextcloud Calendar. All calls are done peer to peer and end-to-end encrypted.

So what are the differences with WhatsApp Calls, Threema, Signal Calls or the Facebook Messenger?
All parts of Nextcloud Talk are fully open source and it is self-hosted. So the signalling of the calls is done by your own Nextcloud server. This is unique. All the other mentioned solutions might be encrypted, which is hard to verify if the source code is not open, but they all use one central signalling server. So the people who run the service know all the metadata: who is calling whom, when, for how long and from where. This is not the case with Nextcloud Talk. No metadata is leaked. Another benefit is the full integration into all the other file sharing, communication, groupware and collaboration features of Nextcloud.

So when is it available? Version 1.0 is available today. The Nextcloud app can be installed with one click from within Nextcloud, but you need the latest Nextcloud 13 beta server for now. The Android and iOS apps are available in the Google and Apple app stores for free. This is only the first step of course. So if you want to give feedback and contribute, then collaborate with the rest of the Nextcloud community.
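
For admins who prefer the command line over the one-click install, here is a hedged sketch using the occ tool. The app id "spreed" is taken from the app store URL below; replace www-data with your web server user, and the app itself still needs to be present in the server's apps directory (e.g. fetched via the built-in app store):

$ sudo -u www-data php occ app:enable spreed
$ sudo -u www-data php occ app:list | grep -i spreed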

More information can be found at https://apps.nextcloud.com/apps/spreed and https://nextcloud.com/talk.

What are the plans for the future?
There are still parts missing that are planned for future versions. We want to expose the chat feature via an XMPP-compatible API so that third-party chat apps can talk to a Nextcloud Talk server. And we will also integrate chat into our mobile apps. I hope that desktop chat apps, for example on KDE and GNOME, will also integrate this natively. This should be relatively easy because of the standard XMPP BOSH protocol. And the last important feature is call federation, so that you can call people on different Nextcloud Talk servers.

If you want to contribute then please join us here on github:
http://github.com/nextcloud/spreed
https://github.com/nextcloud/talk-ios
https://github.com/nextcloud/talk-android

Thanks a lot to everyone who made this happen. I’m proud that we have such a welcoming, creative and open atmosphere in the Nextcloud community so that such innovative new ideas can grow.

January 10 2018

vanitasvitae's blog » englisch: Reworking smack-omemo

A bit over a year ago I started working on smack-omemo as part of my bachelor thesis. Looking back at the past year, I can say there could hardly have been a better topic for my thesis. Working with Smack brought me deep into the XMPP world, got me in contact with a lot of cool people and taught me a lot. Especially the past Google Summer of Code improved my skills substantially. During said event, I took a break from working on smack-omemo, focussing on a Jingle implementation instead. After the three months were over, I dedicated my time to smack-omemo again and realized that there were some points that needed improvement.

One major issue was that my “OmemoStore” class, which is responsible for storing keys, sessions, etc., did not have access to the user’s data before the user logged in. The reason is that my implementation allows multiple OMEMO instances to run on the same connection. That requires the OmemoStore to store keys for multiple instances (devices), which I distinguished based on the Jid and deviceId of the user. The problem is that the Jid is unknown before the user has logged in (they might use a burner Jid for example, or use an authentication system with a username and password which differ from the Jid).

While this is an edge case, it led to issues. I implemented a workaround for that problem (using the username instead of the BareJid in case the connection is not authenticated), which itself caused numerous problems.

I thought about replacing the Jid as an identifier with something else, but nothing was suitable, so I started a major rework of the implementation as a whole. One important aspect I wanted to preserve is that smack-omemo should still be somewhat usable even when the connection is not authenticated (i.e. the user should still be able to scan QR codes and make trust decisions).

The result of my work (so far) is a diff of “+6,300 −5,361”, and a modified API (sorry to all those who already use smack-omemo :O). One major change is that the OmemoStore no longer stores trust decisions. Instead, those decisions are now made by the client itself, which must implement an OmemoTrustCallback. That way trust decisions can be made while the OmemoManager is offline. Everything else that remains in the OmemoStore is only needed when the connection is authenticated and messages are received.

Furthermore, I got rid of the OmemoSession class. Session handling is done in libsignal already, so why would I want to have a session-related class as well (especially since libsignal doesn’t give you any feedback about what happens with the session, so you have to keep sessions in sync manually)? I recommend everyone who wants to implement OMEMO themselves not to create an “OmemoSession” class and instead to rely on libsignal’s session management.

OMEMO sessions are somewhat brittle. You can never know whether a recipient received your message, or whether it failed to decrypt for some reason. There is no signalling to provide feedback about the session’s state. Because even message encryption can go wrong, the old API was very ugly. Originally I first checked whether there were devices which still needed a trust decision and threw an exception if that was the case. Then I tried to build sessions for devices without a session and threw an exception when session negotiation failed. Then I tried to encrypt the message for all recipients and threw an exception if something went wrong… Oh, and the exception I threw when sessions could not be negotiated contained a list of all devices with intact sessions, so the user could retry encrypting the message, but only for the devices which had a session.

Ugly!!!

The new API is much cleaner. I still throw an exception when there are undecided devices, but otherwise I always return an OmemoMessage object. That object has a map of OmemoDevices for which message encryption failed, alongside the respective exceptions, so the client can check whether and what went wrong.

Also, sessions are now “completed” whenever a preKeyMessage arrives.
Prior to this change it could happen that two senders chose the same PreKey from a bundle in order to create a session. That could cause one of the two sessions to break, which led to message loss. Now, whenever smack-omemo receives a preKeyMessage, it instantly responds with an empty message to make the session stable.
This was proposed by Philipp Hörist.

Other changes include a new OmemoStore implementation, the CachingOmemoStore, which can either wrap other OmemoStores to provide a caching layer, or can be used standalone as an ephemeral store for testing purposes.

Also the integration tests were improved and are much simpler and more readable now.

All in all the code got much cleaner, and I hope that at some point it will be audited to find all the bugs I overlooked :D (for everyone who wants to take a look for themselves, the code can currently be found in Smack’s repository; I’m always thankful for any kind of feedback).

I hope these changes will make it into Smack 4.2.3. There are still some things I have to do, but all in all I’m already pretty satisfied with how smack-omemo has turned out so far.

Happy Hacking!

December 25 2017

Evaggelos Balaskas - System Engineer: 2FA SSH aka OpenSSH OATH, Two-Factor Authentication


prologue

Good security is based on layers of protection. At some point, usability gets in the way. My threat model for accessing systems is to create a different ssh key pair (private/public) and only use keys instead of a login password. I try to keep my digital assets separated and not to put all of them in the same basket. My laptop is encrypted and I don’t run any services on it, but even then a bad actor can always find a way.
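
As a rough illustration of that practice (file and host names here are just placeholders, adjust to your own setup), a dedicated key can be created per system and pinned in ~/.ssh/config:

$ ssh-keygen -t ed25519 -f ~/.ssh/id_hostA -C "hostA only"
$ ssh-copy-id -i ~/.ssh/id_hostA.pub username@hostA

and in ~/.ssh/config:

Host hostA
    IdentityFile ~/.ssh/id_hostA
    IdentitiesOnly yes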

Back in the day, I looked at Barada/Gort. Barada is an implementation of HOTP: An HMAC-Based One-Time Password Algorithm, and Gort is the Android app you can install on your mobile and connect to Barada. Both of these applications have not been updated since 2013/2014, and Gort has even been removed from F-Droid!

Talking with friends about our upcoming trip to 34C3 and discussing some security subjects, I thought it was time to review my previous inquiry into ssh 2FA. Most of my friends are using YubiKeys. I would love to try some, but at this time I don’t have the time to order/test/apply them to my machines. To reduce any risk, the idea of combining a bastion/jump-host server with 2FA seemed to be an easy and affordable solution.

OpenSSH with OATH

As ssh login is based on PAM (Pluggable Authentication Modules), we can use the GNU OATH Toolkit. OATH stands for Open Authentication and it is an open standard. In a nutshell, we add a new authentication step so that we can verify our login via our mobile device.

Below are my personal notes on how to set up oath-toolkit and pam_oath and how to synchronize them with your Android device. These are based on CentOS 6.9.

EPEL

We need to install the epel repository:

# yum -y install https://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

Searching packages

Searching for oath

# yum search oath

the results are similar to these:


liboath.x86_64       : Library for OATH handling
liboath-devel.x86_64 : Development files for liboath
liboath-doc.noarch   : Documentation files for liboath

pam_oath.x86_64      : A PAM module for pluggable login authentication for OATH
gen-oath-safe.noarch : Script for generating HOTP/TOTP keys (and QR code)
oathtool.x86_64      : A command line tool for generating and validating OTPs

Installing packages

We need to install three packages:

  • pam_oath is the PAM for OATH
  • oathtool is the GNU oath-toolkit
  • gen-oath-safe is the program that we will use to sync our mobile device with our system

# yum -y install pam_oath oathtool gen-oath-safe

FreeOTP

Before we continue with our setup, I believe now is the time to install FreeOTP.

freeotp_fdroid.png

FreeOTP looks like:

freeotp.png

HOTP

Now it is time to generate and sync our 2FA key, using HOTP.

Generate

You should replace username with your USER_NAME!

# gen-oath-safe username HOTP

gen_oath.png

Sync

and scan the QR with FreeOTP

freeotpqr.png

You can see a new entry at the top!

freeotpusername.png

Save

Do not forget to save your HOTP key (hex) to the GNU oath-toolkit user file.

eg.

Key in Hex: e9379dd63ec367ee5c378a7c6515af01cf650c89

# echo "HOTP username - e9379dd63ec367ee5c378a7c6515af01cf650c89" > /etc/liboath/oathuserfile

verify:

# cat /etc/liboath/oathuserfile

HOTP username - e9379dd63ec367ee5c378a7c6515af01cf650c89
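
Before wiring the key into PAM, it is a good idea to check that the stored secret produces the same codes as FreeOTP. A quick sketch with oathtool, using the example key from above and a counter starting at 0 (the first few codes it prints should match the ones FreeOTP shows for the new entry):

# oathtool --window=3 --counter=0 e9379dd63ec367ee5c378a7c6515af01cf650c89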

OpenSSH

The penultimate step is to set up our ssh login with the PAM OATH library.

Verify PAM

# ls -l /usr/lib64/security/pam_oath.so

-rwxr-xr-x 1 root root 11304 Nov 11  2014 /usr/lib64/security/pam_oath.so

SSHD-PAM

# cat /etc/pam.d/sshd

On modern systems, the sshd PAM configuration file looks like this:

#%PAM-1.0
auth       required pam_sepermit.so
auth       include      password-auth
account    required     pam_nologin.so
account    include      password-auth
password   include      password-auth
# pam_selinux.so close should be the first session rule
session    required     pam_selinux.so close
session    required     pam_loginuid.so
# pam_selinux.so open should only be followed by sessions to be executed in the user context
session    required     pam_selinux.so open env_params
session    required     pam_namespace.so
session    optional     pam_keyinit.so force revoke
session    include      password-auth

We need to add one line at the top of the file.

I use something like this:

auth       sufficient /usr/lib64/security/pam_oath.so  debug   usersfile=/etc/liboath/oathuserfile window=5 digits=6

Depending on your policy and threat model, you can switch sufficient to requisite and you can remove the debug option. In the third field, you can try typing just pam_oath.so without the full path, and you can change the window to something else:

eg.

auth requisite pam_oath.so usersfile=/etc/liboath/oathuserfile window=10 digits=6
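
One more hedged note: for the PAM prompt to reach the ssh client at all, sshd must be allowed to use PAM and keyboard-interactive prompts. Depending on your distribution's defaults you may also need UsePAM yes and ChallengeResponseAuthentication yes in /etc/ssh/sshd_config; verify with:

# grep -Ei '^(UsePAM|ChallengeResponseAuthentication)' /etc/ssh/sshd_config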

Restart sshd

After every change/test, remember to restart your ssh daemon:

# service sshd restart

Stopping sshd:                                             [  OK  ]
Starting sshd:                                             [  OK  ]

SELINUX

If you are getting some weird messages, try changing the status of SELinux to permissive and try again. If SELinux is the issue, you have to review the SELinux audit logs and add/fix any SELinux policies/modules so that your system can work properly.

# getenforce
Enforcing

# setenforce 0

# getenforce
Permissive
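
If permissive mode confirms that SELinux is the culprit, one common approach (a hedged sketch; the module name pam_oath_local is just an example) is to build a small local policy module from the recorded denials with audit2allow, part of policycoreutils-python on CentOS, and then switch back to enforcing:

# ausearch -m avc -ts recent | audit2allow -M pam_oath_local
# semodule -i pam_oath_local.pp
# setenforce 1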

Testing

The last and most important thing is to test it!

ssh_login.png
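
A word of caution: keep the session you are already logged in with open while testing, so a broken PAM setup cannot lock you out, and try the login from a second terminal (username and server are placeholders):

$ ssh username@server

You should be prompted for the one-time password (type in the current code from FreeOTP). If anything goes wrong, fix /etc/pam.d/sshd from the session that is still open and restart sshd before retrying.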

Links

Post Scriptum

The idea of using OATH & FreeOTP can also be applied to logging into your laptop, as PAM is the basic authentication framework on a Linux machine. You can use OATH with every service that can authenticate itself through PAM.
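
As a hedged example of that idea, the very same pam_oath line could be added near the top of the PAM service used for console logins, e.g. /etc/pam.d/login (the file name for your display manager may differ, and you should keep a root shell open while testing):

auth sufficient pam_oath.so usersfile=/etc/liboath/oathuserfile window=5 digits=6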

Tag(s): SSH, FreeOTP, HOTP

December 20 2017

English Planet – Dreierlei: Free Software Assembly Europe at the 34C3 Chaos Communication Congress

This year, the assembly of the Free Software Foundation Europe will be an integral part of the Cluster Rights & Freedoms. The cluster is formed together with our friends and other civil society organizations. During the four days, the FSFE will offer a public space for and by our members, friends and supporters to discuss, meet, hack and organise. Find an overview of our sessions and other specialties in this blog post, and always find the latest updates on our dedicated FSFE assembly page. Let’s put the hacking back into politics!

Location:

(Image: Location of the FSFE Assembly during 34C3 in Saal 3 in the CCL.)
The Congress Center Leipzig is huge! You will find our assembly in the Cluster Rights & Freedoms. The cluster itself fills Saal 3 / Hall 3, which is split half and half into the stage area and the assembly area. You will find the FSFE assembly in the assembly area.

On the right side you see an overview of the cluster, with the stage at the top and the FSFE assembly at the bottom left.

Saal 3 / Hall 3 is in the CCL building, which is the “small” building at the top left of this graphic. In a side view, Saal 3 is at the top right of the CCL building.

Our sessions:

Please note that all sessions will happen on the stage in the Rights & Freedoms cluster in Saal 3 in the CCL building (see above to find the location), except for the Free Software song sing-along sessions, which will happen directly at the FSFE assembly, and the workshops, which happen in dedicated workshop rooms.

Do not forget to check the 34C3 wiki page for details and the latest updates!

Day 1: Wednesday 27

  • 14:00 Newpipe by Chris Schabesberger
  • 17:00 Design in Free Software & Open Source by Dina Michl & Victoria Bondarchuk
  • 18:00 PEP with Thunderbird by the PEP Foundation
  • 19:00 Jabber/XMPP: past, present and future by Daniel Gultsch
  • 20:00 Social networking, powered by Free Software by Tobias Diekershoff
  • 21:00 The many applications of digital certificates by Thomas Ruddy
  • 22:00 Free Software song sing-along session at the FSFE assembly

Day 2: Thursday 28

  • 14:00 Privacy aware city navigation with Free Software by Redon Skikuli
  • 17:30 Free Software song sing-along session at the FSFE assembly
  • 18:00 Hacking with wget by Hanno Böck
  • 20:00 A public identity infrastructure to defend the open Internet by Vittorio Bertolo

Day 3: Friday 29

  • 13:30 Free Software song sing-along session at the FSFE assembly
  • 14:00 (workshop) Replicant Install Fest in Lecture Room 12
  • 14:00 (workshop) Join us now – a choir to perform the Free Software song in Seminar room 13
  • 16:00 Fixing mass surveillance: one court case at a time! by Exegetes
  • 19:00 Public money? Public Code! by Polina Malaja & Katharina Nocun

Free Software Song choir and sing-along sessions

Every day at the FSFE village, we will run a Free Software song sing-along session. In addition, Benjamin Wand runs a project to bring together a choir that will perform the Free Software song on stage. You can read additional details and background about it in a previous blog post and see its first ever performance during SHA2017.

(Image: One of our Free Software song sing-along sessions during 33C3.)

The ultimate Free Software challenge

At more or less any time, you can come to our assembly and try the ultimate Free Software challenge that will let you dig deep into the history of Free Software, so deep that you might reach the big-bang moment of Free Software. Be prepared for an inspiring and challenging journey, and bring some friends (or any randomly allocated companionship) to pass it together.

Most of all, we look forward to meeting you, having a good time together and an exciting exchange of knowledge!

Paul Boddie's Free Software-related blog » English: The End of Gratipay

Having discussed issues of Free Software funding before, it would seem inappropriate to let the closing down of Gratipay pass unmentioned. Gratipay is a service where people can commit to giving a sum of money at regular intervals for donation to one or more recipients, offering what the service itself calls a “voluntary subscription revenue model” that is perhaps more familiar to those who have used other, similar funding platforms such as Patreon. In effect, creators sign up to receive payments, donors sign up to support the creators, and then the money flows from the latter group to the former, facilitated by the service.

A Quick Primer

The fundamental model of Gratipay is that “contributors” (donors, “patrons”) support “projects” (recipients, creators) on a weekly basis. Unlike Patreon, where creators are likely to be producing “creations” in a way that best matches artistic and creative pursuits, with the delivery of content to be consumed in discrete parcels, there are no “per-creation” options in Gratipay. Instead, the aim is to provide a reliable source of funding for ongoing work that cannot be so easily split up into chunks and delivered to paying customers one piece at a time.

Another thing that makes Gratipay different to Patreon is the way fees are handled. Patreon charges obligatory fees for handling donations in addition to the other service fees incurred when money is transferred between the different parties. Meanwhile, Gratipay donors are instead merely encouraged to send some of their donations to Gratipay as a way of acknowledging the service’s role and to help fund the service. In addition, Gratipay has always aimed to pass on transaction processing fees “at cost”, with a particularly important aspect of the service’s operation being that it aimed to perform such transactions in an efficient way.

So, instead of charging a donor for the separate transfer of each amount written up against that donor’s different recipients, Gratipay would charge that donor only once per week for the combined total of their donations that happened to be active during that week. And instead of sending each separate donation to its recipient in a distinct transaction, Gratipay would aggregate the donations directed towards a recipient from all its donors and then issue a single transaction to transfer the money. This arrangement would become central in the story of Gratipay and may well have a role to play elsewhere, as we shall see.

The Perils of Payments

In light of recent events, it is particularly pertinent to mention Patreon in the context of Gratipay. Recently, Patreon sought to change its fee structure, justifying it as a way of minimising the impact of fees on creators and the uncertainty around how much each of them could expect to receive every month. This has proved to be controversial, with some people now deciding that they have had quite enough of Patreon’s fees, and with Patreon subsequently deciding to abandon the proposed change.

Part of the motivation for Patreon to rock the boat in this way might simply be to improve profitability and discourage usage patterns that impact profitability, as some people have suggested. Others, however, aware of what happened to Gratipay, suggest that the motivation may involve regulatory compliance. Some may claim that this latter motivation has been “debunked”, and it perhaps isn’t appropriate to speculate in any depth, anyway, but the potential application of specific finance industry regulations certainly was enough to interrupt Gratipay’s operations, in what was known as the Gratipocalypse, suspending those operations for sufficiently long and introducing sufficient uncertainty that it most likely put the service on a course towards its now-impending closure.

Now, non-compliance with finance industry regulation is the kind of very serious matter that cannot so easily be waved away with “good enough” workarounds unless one likes explaining them to a judge, which is why Gratipay took legal advice and changed its operating model. Maybe this has nothing to do with Patreon’s recent actions, but it would be rather cruel if Gratipay, having become aware of such pitfalls, did the right thing at considerable cost to the service and its competitiveness while other, similar services carried on doing broadly similar things – oblivious to such problems, perhaps – cultivating businesses that might now demand more scrutiny.

The Gratipay Legacy

Much of the above is something of an aside to what I really wanted to focus on, however. In bringing this topic to the attention of a Free Software audience, I aim to make the point that Gratipay, being a platform developed as Free Software, should be credited for trying out different approaches for funding Free Software and for allowing others to continue where it left off, to take the platform in new directions, even as it must itself close and send its users elsewhere.

Upon experiencing the Gratipocalypse and regulatory difficulties, the platform was forked to establish Liberapay (by various existing Gratipay developers, as I understand it). Liberapay is a service that is regulated in the European Union. Thanks to that decision to make a transparently-developed Free Software service, the platform can be thought to live on in some way. The cultivation of a durable legacy is surely why many people choose to develop Free Software in the first place, and in this regard Gratipay has perhaps achieved one of its objectives regardless of its own fate.

The fundamental question of how people can be sustained in their activities developing Free Software, outside traditional employment paradigms that is, was explored by Gratipay in a few different ways. As Chad Whitacre, Gratipay’s founder, noted in a blog post, there are many projects in the Free Software universe that make the whole thing viable. However, few of them are likely to see any serious financial investment. Of course, some people might suggest that most Free Software projects are not worthy of any significant investment, that “healthy competition” (coupled to the usual dubious misrepresentation of Darwin’s theories) should decide on the rewards and pick a winner.

It may be a coincidence that in attempting to address this “long tail” problem, Gratipay selected npm (the Node.js package manager) as a candidate to trial better integration between the tools people use and Gratipay’s mechanisms for facilitating donations, effectively letting people discover whose works they make use of and providing them with an easier-than-normal way of rewarding those responsible. A year or so earlier, in a demonstration of how a seemingly trivial piece of software can underpin entire development ecosystems, the deletion of one npm package entry (of many entries controlled by a single developer) caused numerous systems and services to fail, with extensive chaos amongst affected developers and service operators being the immediate result.

Although the npm package deletion fiasco has a number of causes that are beyond the scope of this article, and while one may or may not identify the library responsible for the apparently-widespread breakage as being particularly worthy of sustained funding, it reminds us that there are many seemingly-insignificant building blocks supporting the larger, more well-known projects that are potentially already well-funded. It is also worth noting that Gratipay also attempted to provide mechanisms for the fair distribution of contributions across teams as opposed to focusing on individuals. Recognising that success is usually a team effort is also rather important in a world where celebrity is all too frequently cultivated and rewarded at the expense of those who quietly made that success happen.

One might argue that the conditions for “crowdfunding” people to work on software are very rarely likely to be present. Certainly, the odd Internet celebrity can have a million followers on some “social media” platform or other, and when those followers all chip in a few cents every now and again, the celebrity can focus on whatever it is that they do on that platform. But it takes a lot of small contributions to fund something that resembles a salary. And when the follower demographic for software is likely to be narrower than for random entertainment, it would seem to be a futile task to find a desirable number of donors who might appreciate the value they derive from the software in question and collectively contribute enough funding to pay someone such a salary.

On this front, Gratipay appears to have tried another strategy: to identify those parties who do derive significant value from software and who would be willing to contribute more significant sums. It seems rather obvious, but the people who are making the most money from using software and who are spending the most money, some of it on software, potentially little of it on Free Software, are surely the people to encourage when attempting to secure sustainable Free Software funding. However, this may have been one strategic turn too many, perhaps leading the service in a direction that cannot be pursued with the resources it has at its disposal.

Hiding in Plain Sight

One might well ask whether conventional employment, not the “open work” that Gratipay has aimed to support, is really the mundane and obvious-all-along solution to Free Software funding. Surely, if people want to be paid by others to work on things, then they should be prepared to actually work for the people with the money. And it is true that companies and other organisations can act in sustainable ways that seek to strengthen the foundations shared between their operations and those of others.

But one can also respond to this with observations about conflicts of interests, of developers being hired to not continue working on the Free Software projects they had contributed to, of selfishness and doing things for competitive advantage rather than improving the quality of everybody’s offerings. And of the general inefficiency of recruitment processes these days, meaning that capable developers cannot find positions and yet there are companies almost desperate to identify and hire exactly those developers.

So, as Chad points out in his summary of crowdfunding platforms, the “roll your own” model of accepting donations may be a viable way of engaging with companies directly, at least for projects with sufficient reputational stature. However, let us take the example of one such project providing a technology featuring in many Python job advertisements and surely responsible for a fair amount of money changing hands. Through its supporting organisation, it manages to attract enough funding for just one core developer alongside a number of other activities. It can be debated whether this is an inspiring signpost towards better things or a depressing summary of how much investment in infrastructure people feel they can get away with.

Fundamentally, though, there are projects that just won’t be funded until someone declares a crisis. And even then, the nature of the game is that people will do just enough to avert disaster, throw some funds the way of the overworked maintainers caught in the spotlight, and then carry on as if nothing was really wrong in the first place. Gratipay may not have succeeded in providing a lasting solution to the broader – seemingly less urgent – crisis facing sustainable Free Software development, but we can at least be thankful that a group of dedicated people tried their best to explore some of the options and, through their commitment to Free Software licensing, have allowed others to carry on the work they started.

December 13 2017

polina's blog: FSFE asks to include software into the list of re-usable public sector information

The Directive on the re-use of public sector information (Directive 2003/98/EC, revised in 2013 by Directive 2013/37/EU – ‘PSI Directive’) establishes a common legal framework for a European market for government-held data (public sector information). It is built around two key pillars of the internal market: transparency and fair competition.

The PSI Directive focuses on the economic aspects of the re-use of information gathered by governments, and while it does mention some societal impact of such re-use, its main focus is on contributing to a cross-border European data economy by making re-usable data held by governments accessible both for commercial and non-commercial purposes (i.e. “open data”). The objective of the PSI Directive is not to establish truly “open government” as such, although it does contribute to that goal by demanding the re-usability of government-held data based on open and machine-readable formats.

For Free Software the PSI Directive is important because it affects the re-use of documents such as texts, databases, audio files and film fragments, but it explicitly excludes “computer programmes” from its scope, for no apparent reason, in recital 9 of Directive 2003/98/EC.

However, despite this explicit exclusion of software in the PSI Directive recital, EU member states are not precluded from creating their own rules for opening up data held by public bodies and including “software” in the list of re-usable government-held information. First, the PSI Directive establishes “minimum” requirements for member states to follow when opening up their data, and second, the exclusion of computer programmes from the scope of the Directive is enshrined in its non-legislative part, the recitals, which act solely as guidance for the interpretation of the legislative part: the articles.

The recent case in France is a good example of why there is no evident reason for EU member states to exclude software from the list of re-usable and open data held by governments. In particular, France’s “Digital Republic” law, adopted in 2016 (LOI n° 2016-1321 du 7 octobre 2016 pour une République numérique), considers source code a possible administrative document that must be made available in an open standard format that can be easily reused and processed.

Therefore, our response to the PSI Directive public consultation can be summarised as follows:

  • Consider source code owned by a public administration as a ‘document’ within the scope of the Directive.
  • Algorithmic accountability in the government decision-making process is a must for a truly transparent government; therefore, software developed for the public sector that is used in delivering tasks of public interest, whether by a publicly owned company or a private company, should be made available as Free Software.
  • Free Software is crucial for scientific verification of research results, and it is absolutely necessary to make sure that Open Science policies include the requirement to publish software tools and applications produced during publicly funded research under Free Software licences.
  • No special agreements with private services for delivering tasks of public interest shall ever preclude the re-usability of government-held data by both commercial and non-commercial Free Software. Public bodies shall focus on making data available in open and accessible formats.
  • Sui generis database rights cannot be invoked in order to preclude the re-usability of government-held data.
  • Minimum level of harmonisation for the relationship between Freedom of Information (FoI) laws and the PSI Directive is needed in order to bring the EU closer to the cross-border market for public sector information.

Please find our submission to the public consultation in full here.

Image: CC0

December 12 2017

English Planet – Dreierlei: Report about the FSFE Community Meeting 2017

Two weeks ago we had our first general community meeting as an opportunity for all people engaged inside the FSFE to come together, share knowledge, grow projects, hack, discuss and get active. An integral part and topic of the meeting was knowledge sharing about FSFE-related tools and processes. Find some notes and pictures in this report.

For the first time, this year we merged our annual German-speaking team meeting and the bi-annual coordinators meeting into one bigger meeting for all active people of the FSFE community. Active people in this context means that any member of any team, be it a local or a topical one, was invited. All together, we met on the weekend of November 25 and 26 at Endocode in Berlin.

For this, we had several slots in the agenda in which participants could host a knowledge- or tool-sharing session that they were interested in, or one that they are an expert in and would like to share their knowledge about. In a next step, everyone could mark their interest in the proposed sessions, and based on that we arranged the agenda.

We saw particularly high interest in giving input to the plans for the FSFE to grow its membership, in tips for implementing our Code of Conduct, in strategies to increase diversity, and in introductions to tools offered by the FSFE like LimeSurvey and Git.

The feedback about the meeting was very positive, in particular about the dynamic agenda and the productive sessions that left participants with the feeling of having got something done. For myself, in the role of the organiser, this year’s meeting left me with the good feeling that we not only got something done, but that we will also see further collaboration on several topics among participants as a result of this meeting.

Personally, it makes me happy again and again to be part of such a friendly and accommodating community. A community in which participants respect each other in a natural way and no one tries to overrule others.

The productive feeling and the unique atmosphere already make me look forward to organising the next community meeting in 2018.

Hereafter some pictures of this year’s event:

  • Participants of the FSFE community meeting 2017
  • Session about implementing our Code of Conduct
  • Session about updates of our Free Your Android campaign
  • Session about diversity
  • The blueboard shows the number of session proposals (one on each yellow card) during the community meeting
  • Breaks are always good for a chat
  • One of our lightning talks, by Paul Hänsch
  • Lightning talks audience

December 10 2017

Daniel's FSFE blog: How a single unprivileged app can brick the whole Android system

This article is highly subjective and only states the author’s opinion based on actual observations and “wild” assumptions. Better explanations and corrections are warmly welcome!

Motivation

After updating an app from the F-Droid store (OpenCamera), my Android device was completely unusable. In this state, the only feasible option for a typical end-user (who does not know how to get into safe mode in order to remove or downgrade the app [5]) to recover the device would have been to wipe data in recovery, losing all data.
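
For what it is worth, a user who already had USB debugging enabled could probably avoid the wipe by removing the offending package over adb while the device is looping (a hedged sketch; the package name is the one that shows up in the log further below):

$ adb wait-for-device
$ adb shell pm uninstall net.sourceforge.opencamera
$ adb reboot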

How can such a disaster happen? In this article, I argue why I have serious doubts about the memory management approach taken in Android.

The failure

After updating the OpenCamera app to the recently released version 1.42, my Android device ran into a bootloop that was hard to recover from. I was able to repeatedly reproduce the failure on a different device, namely the following:

  • Device: Samsung Galaxy S3 (i9300)
  • ROM: Lineage OS 13 (Android 6.0), freshly built from latest sources, commit 42f4b851c9b2d08709a065c3931f6370fd78b2b0 [1]

Steps to reproduce:

  1. wipe all data and caches
  2. newly configure the device using the first-use wizard
  3. install the F-Droid store
  4. search for “Open Camera”
  5. install Open Camera version 1.42

Expected:

The install completes and the app is available. If installation fails (for whatever reason), an error message is shown, but the device keeps working.

Actual:

The install freezes, the LineageOS splash screen appears and re-initializes all apps; this happens several times, and after approximately 10-15 minutes the device is back to “working”; when trying to start apps, they crash, or even the launcher (“Trebuchet”) crashes. After rebooting the device, it is stuck in an infinite loop initializing apps.

The fault (what happens under the hood?)

When installing OpenCamera, the following is printed in the log:

12-10 14:48:30.915  4034  5483 I ActivityManager: START u0 {act=org.fdroid.fdroid.installer.DefaultInstaller.action.INSTALL_PACKAGE dat=file:///data/user/0/org.fdroid.fdroid/files/Open Camera-1.42.apk cmp=org.fdroid.fdroid/.installer.DefaultInstallerActivity (has extras)} from uid 10070 on display 0
12-10 14:48:30.915  4034  5483 W ActivityManager: startActivity called from non-Activity context; forcing Intent.FLAG_ACTIVITY_NEW_TASK for: Intent { act=org.fdroid.fdroid.installer.DefaultInstaller.action.INSTALL_PACKAGE dat=file:///data/user/0/org.fdroid.fdroid/files/Open Camera-1.42.apk cmp=org.fdroid.fdroid/.installer.DefaultInstallerActivity (has extras) }
12-10 14:48:30.925  4034  5483 D lights  : set_light_buttons: 2
12-10 14:48:30.955  4034  5649 I ActivityManager: START u0 {act=android.intent.action.INSTALL_PACKAGE dat=file:///data/user/0/org.fdroid.fdroid/files/Open Camera-1.42.apk cmp=com.android.packageinstaller/.PackageInstallerActivity (has extras)} from uid 10070 on display 0
12-10 14:48:31.085  6740  6740 W ResourceType: Failure getting entry for 0x7f0c0001 (t=11 e=1) (error -75)
12-10 14:48:31.700  4034  4093 I ActivityManager: Displayed com.android.packageinstaller/.PackageInstallerActivity: +724ms (total +758ms)
12-10 14:48:36.770  4034  4362 D lights  : set_light_buttons: 1
12-10 14:48:36.840  4034  4938 I ActivityManager: START u0 {dat=file:///data/user/0/org.fdroid.fdroid/files/Open Camera-1.42.apk flg=0x2000000 cmp=com.android.packageinstaller/.InstallAppProgress (has extras)} from uid 10018 on display 0
12-10 14:48:36.850  3499  3895 D audio_hw_primary: select_output_device: AUDIO_DEVICE_OUT_SPEAKER
12-10 14:48:36.955  6863  6874 D DefContainer: Copying /data/user/0/org.fdroid.fdroid/files/Open Camera-1.42.apk to base.apk
12-10 14:48:37.100  4034  4093 I ActivityManager: Displayed com.android.packageinstaller/.InstallAppProgress: +251ms
12-10 14:48:37.155  6740  6753 D OpenGLRenderer: endAllStagingAnimators on 0x486226f0 (RippleDrawable) with handle 0x48604d28
12-10 14:48:37.170  4034  4100 W ResourceType: Failure getting entry for 0x7f0c0001 (t=11 e=1) (error -75)
12-10 14:48:37.465  4034  4100 I PackageManager.DexOptimizer: Running dexopt (dex2oat) on: /data/app/vmdl872450731.tmp/base.apk pkg=net.sourceforge.opencamera isa=arm vmSafeMode=false debuggable=false oatDir = /data/app/vmdl872450731.tmp/oat bootComplete=true
12-10 14:48:37.585  7205  7205 I dex2oat : Starting dex2oat.
12-10 14:48:37.585  7205  7205 E cutils-trace: Error opening trace file: No such file or directory (2)
12-10 14:48:42.405  7205  7205 I dex2oat : dex2oat took 4.815s (threads: 4) arena alloc=5MB java alloc=2023KB native alloc=13MB free=1122KB
12-10 14:48:42.415  4034  4100 D lights  : set_light_buttons: 2
12-10 14:48:42.680  4034  4100 V BackupManagerService: restoreAtInstall pkg=net.sourceforge.opencamera token=3 restoreSet=0
12-10 14:48:42.680  4034  4100 W BackupManagerService: Requested unavailable transport: com.google.android.gms/.backup.BackupTransportService
12-10 14:48:42.680  4034  4100 W BackupManagerService: No transport
12-10 14:48:42.680  4034  4100 V BackupManagerService: Finishing install immediately
12-10 14:48:42.705  4034  4100 W Settings: Setting install_non_market_apps has moved from android.provider.Settings.Global to android.provider.Settings.Secure, returning read-only value.
12-10 14:48:42.705  4034  4100 I art     : Starting a blocking GC Explicit
12-10 14:48:42.805  4034  4100 I art     : Explicit concurrent mark sweep GC freed 52637(2MB) AllocSpace objects, 20(424KB) LOS objects, 33% free, 14MB/21MB, paused 2.239ms total 96.416ms
12-10 14:48:42.835  4034  4363 I InputReader: Reconfiguring input devices.  changes=0x00000010
12-10 14:48:42.935  5420  5420 D CarrierServiceBindHelper: Receive action: android.intent.action.PACKAGE_ADDED
12-10 14:48:42.940  5420  5420 D CarrierServiceBindHelper: mHandler: 3
12-10 14:48:42.940  5420  5420 D CarrierConfigLoader: mHandler: 9 phoneId: 0
12-10 14:48:42.945  4034  4034 F libc    : invalid address or address of corrupt block 0x120 passed to dlfree
12-10 14:48:42.945  4034  4034 F libc    : Fatal signal 11 (SIGSEGV), code 1, fault addr 0xdeadbaad in tid 4034 (system_server)
12-10 14:48:42.950  3496  3496 I DEBUG   : property debug.db.uid not set; NOT waiting for gdb.
12-10 14:48:42.950  3496  3496 I DEBUG   : HINT: adb shell setprop debug.db.uid 100000
12-10 14:48:42.950  3496  3496 I DEBUG   : HINT: adb forward tcp:5039 tcp:5039
12-10 14:48:42.975  3496  3496 F DEBUG   : *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
12-10 14:48:42.975  3496  3496 F DEBUG   : LineageOS Version: '13.0-20171125-UNOFFICIAL-i9300'
12-10 14:48:42.975  3496  3496 F DEBUG   : Build fingerprint: 'samsung/m0xx/m0:4.3/JSS15J/I9300XXUGMJ9:user/release-keys'
12-10 14:48:42.975  3496  3496 F DEBUG   : Revision: '0'
12-10 14:48:42.975  3496  3496 F DEBUG   : ABI: 'arm'
12-10 14:48:42.975  3496  3496 F DEBUG   : pid: 4034, tid: 4034, name: system_server  >>> system_server <<<
12-10 14:48:42.975  3496  3496 F DEBUG   : signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0xdeadbaad
12-10 14:48:43.030  3496  3496 F DEBUG   : Abort message: 'invalid address or address of corrupt block 0x120 passed to dlfree'
12-10 14:48:43.030  3496  3496 F DEBUG   :     r0 00000000  r1 00000000  r2 00000000  r3 00000002
12-10 14:48:43.030  3496  3496 F DEBUG   :     r4 00000120  r5 deadbaad  r6 404e0f38  r7 40005000
12-10 14:48:43.030  3496  3496 F DEBUG   :     r8 00000128  r9 bee01b0c  sl 40358be3  fp 40358bec
12-10 14:48:43.030  3496  3496 F DEBUG   :     ip 404db5d8  sp bee019f8  lr 404abfab  pc 404abfaa  cpsr 60070030
12-10 14:48:43.045  3496  3496 F DEBUG   :
12-10 14:48:43.045  3496  3496 F DEBUG   : backtrace:
12-10 14:48:43.045  3496  3496 F DEBUG   :     #00 pc 00030faa  /system/lib/libc.so (dlfree+1285)
12-10 14:48:43.045  3496  3496 F DEBUG   :     #01 pc 000158df  /system/lib/libandroidfw.so (_ZN7android13ResStringPool6uninitEv+38)
12-10 14:48:43.045  3496  3496 F DEBUG   :     #02 pc 0001662b  /system/lib/libandroidfw.so (_ZN7android10ResXMLTree6uninitEv+12)
12-10 14:48:43.045  3496  3496 F DEBUG   :     #03 pc 00016649  /system/lib/libandroidfw.so (_ZN7android10ResXMLTreeD1Ev+4)
12-10 14:48:43.045  3496  3496 F DEBUG   :     #04 pc 00013373  /system/lib/libandroidfw.so (_ZN7android12AssetManager10getPkgNameEPKc+258)
12-10 14:48:43.045  3496  3496 F DEBUG   :     #05 pc 000133cf  /system/lib/libandroidfw.so (_ZN7android12AssetManager18getBasePackageNameEj+62)
12-10 14:48:43.045  3496  3496 F DEBUG   :     #06 pc 00088b33  /system/lib/libandroid_runtime.so
12-10 14:48:43.045  3496  3496 F DEBUG   :     #07 pc 72cb9011  /data/dalvik-cache/arm/system@framework@boot.oat (offset 0x1f78000)
12-10 14:48:50.095  3496  3496 F DEBUG   :
12-10 14:48:50.095  3496  3496 F DEBUG   : Tombstone written to: /data/tombstones/tombstone_00
12-10 14:48:50.185  1912  1912 I ServiceManager: service 'statusbar' died
12-10 14:48:50.185  1912  1912 I ServiceManager: service 'netstats' died
12-10 14:48:50.185  1912  1912 I ServiceManager: service 'power' died
12-10 14:48:50.185  1912  1912 I ServiceManager: service 'media_projection' died
12-10 14:48:50.185  1912  1912 I ServiceManager: service 'network_management' died
12-10 14:48:50.185  1912  1912 I ServiceManager: service 'window' died
12-10 14:48:50.185  1912  1912 I ServiceManager: service 'consumer_ir' died
12-10 14:48:50.185  1912  1912 I ServiceManager: service 'telecom' died
12-10 14:48:50.185  1912  1912 I ServiceManager: service 'cmpartnerinterface' died
12-10 14:48:50.185  1912  1912 I ServiceManager: service 'package' died
12-10 14:48:50.185  1912  1912 I ServiceManager: service 'user' died

Since OpenCamera needs some background service and is started on boot, I assume that after installation the system tries to restart this service. However, it appears that there is some memory issue with the app: it requests so much memory that Android starts killing other apps to make this memory available. If Android does not manage to provide this space, the device is rebooted. Since OpenCamera is started at boot, it again tries to allocate (too much) memory and the device is stuck in an infinite loop.
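
To dig further into the crash of system_server shown above, the tombstone referenced in the log can be pulled from the device for offline inspection (a hedged sketch; adb root typically works on userdebug builds such as unofficial LineageOS images, otherwise a root shell via su is needed):

$ adb root
$ adb pull /data/tombstones/tombstone_00 .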

Looking at Android’s memory management

I expected that the following excerpt from the log above might lead to some useful hints:

12-10 14:48:42.945  4034  4034 F libc    : invalid address or address of corrupt block 0x120 passed to dlfree
12-10 14:48:42.945  4034  4034 F libc    : Fatal signal 11 (SIGSEGV), code 1, fault addr 0xdeadbaad in tid 4034 (system_server)

After searching on the net, I found an interesting discussion [2] suggesting the following:


“A likely cause of this is that you have ran out of memory, maybe because a memory leak or simply used up all memory. This can be caused by a bug you are using in a plugin that uses native C/C++ code through NDK.”

To rule out hardware issues, I also exchanged the storage (I run /data from an SD card) and compiled memtester [3] to test the device’s RAM. When experimenting with memtester, I noticed a striking difference between running memtester on a regular GNU/Linux system and running it on Android/LineageOS. When giving memtester less memory than is actually available, there is no difference. However, when giving memtester *more* RAM than is actually available, the following happens on GNU/Linux:

# free -h
              total        used        free      shared  buff/cache   available
Mem:            28G        124M         28G        8.5M        219M         28G
Swap:            0B          0B          0B
# memtester 40G
memtester version 4.3.0 (64-bit)
Copyright (C) 2001-2012 Charles Cazabon.
Licensed under the GNU General Public License version 2 (only).

pagesize is 4096
pagesizemask is 0xfffffffffffff000
want 40960MB (42949672960 bytes)
got  29075MB (30488387584 bytes), trying mlock ...Killed
#
Killed
[1]+  Stopped                 sh

On Android, however, the device suddenly reboots after trying to mlock the memory:

root@i9300:/ # free -h
                total        used        free      shared     buffers
Mem:             828M        754M         74M           0        1.3M
-/+ buffers/cache:           752M         75M
Swap:            400M         18M        382M

root@i9300:/ # /sbin/memtester 2G
memtester version 4.3.0 (32-bit)
Copyright (C) 2001-2012 Charles Cazabon.
Licensed under the GNU General Public License version 2 (only).

pagesize is 4096
pagesizemask is 0xfffff000
want 2048MB (2147483648 bytes)
got  2008MB (2105921536 bytes), trying mlock ...

This is what is printed to logcat:

01-01 01:10:29.485  4933  4933 D su      : su invoked.
01-01 01:10:29.485  4933  4933 E su      : SU from: shell
01-01 01:10:29.490  4933  4933 D su      : Allowing shell.
01-01 01:10:29.490  4933  4933 D su      : 2000 /system/bin/sh executing 0 /system/bin/sh using binary /system/bin/sh : sh
01-01 01:10:29.490  4933  4933 D su      : Waiting for pid 4934.
01-01 01:10:44.840  2478  3264 D LightsService: Excessive delay setting light: 81ms
01-01 01:10:44.925  2478  3264 D LightsService: Excessive delay setting light: 82ms
01-01 01:10:45.010  2478  3264 D LightsService: Excessive delay setting light: 82ms
01-01 01:10:45.090  2478  3264 D LightsService: Excessive delay setting light: 82ms
01-01 01:10:45.175  2478  3264 D LightsService: Excessive delay setting light: 82ms
01-01 01:10:45.260  2478  3264 D LightsService: Excessive delay setting light: 82ms
01-01 01:10:45.340  2478  3264 D LightsService: Excessive delay setting light: 82ms
01-01 01:10:50.735  2478  2538 I PowerManagerService: Going to sleep due to screen timeout (uid 1000)...
01-01 01:10:50.785  2478  2538 E         : Device driver API match
01-01 01:10:50.785  2478  2538 E         : Device driver API version: 29
01-01 01:10:50.785  2478  2538 E         : User space API version: 29
01-01 01:10:50.785  2478  2538 E         : mali: REVISION=Linux-r3p2-01rel3 BUILD_DATE=Tue Aug 26 17:05:16 KST 2014
01-01 01:10:52.000  2478  2538 V KeyguardServiceDelegate: onScreenTurnedOff()
01-01 01:10:52.040  2478  2538 E libEGL  : call to OpenGL ES API with no current context (logged once per thread)
01-01 01:10:52.045  2478  2536 I DisplayManagerService: Display device changed: DisplayDeviceInfo{"Integrierter Bildschirm": uniqueId="local:0", 720 x 1280, modeId 1, defaultModeId 1, supportedModes [{id=1, width=720, height=1280, fps=60.002}], colorTransformId 1, defaultColorTransformId 1, supportedColorTransforms [{id=1, colorTransform=0}], density 320, 304.8 x 306.71698 dpi, appVsyncOff 0, presDeadline 17666111, touch INTERNAL, rotation 0, type BUILT_IN, state OFF, FLAG_DEFAULT_DISPLAY, FLAG_ROTATES_WITH_CONTENT, FLAG_SECURE, FLAG_SUPPORTS_PROTECTED_BUFFERS}
01-01 01:10:52.060  1915  1915 D SurfaceFlinger: Set power mode=0, type=0 flinger=0x411dadf0
01-01 01:10:52.160  2478  2538 I PowerManagerService: Sleeping (uid 1000)...
01-01 01:10:52.165  2478  3231 D WifiConfigStore: Retrieve network priorities after PNO.
01-01 01:10:52.170  1938  3241 E bt_a2dp_hw: adev_set_parameters: ERROR: set param called even when stream out is null
01-01 01:10:52.170  2478  3231 E native  : do suspend false
01-01 01:10:52.175  2478  3231 D WifiConfigStore: No blacklist allowed without epno enabled
01-01 01:10:52.190  3846  4968 D NfcService: Discovery configuration equal, not updating.
01-01 01:10:52.435  2478  3231 D WifiConfigStore: Retrieve network priorities before PNO. Max priority: 0
01-01 01:10:52.435  1938  1938 E bt_a2dp_hw: adev_set_parameters: ERROR: set param called even when stream out is null
01-01 01:10:52.440  2478  3231 E WifiStateMachine:  Fail to set up pno, want true now false
01-01 01:10:52.440  2478  3231 E native  : do suspend true
01-01 01:10:52.670  2478  3231 D WifiStateMachine: Disconnected CMD_START_SCAN source -2 3, 4 -> obsolete
01-01 01:10:54.160  2478  2538 W PowerManagerService: Sandman unresponsive, releasing suspend blocker
01-01 01:10:55.825  2478  3362 D CryptdConnector: SND -> {3 cryptfs getpw}
01-01 01:10:55.825  1903  1999 D VoldCryptCmdListener: cryptfs getpw
01-01 01:10:55.825  1903  1999 I Ext4Crypt: ext4 crypto complete called on /data
01-01 01:10:55.825  1903  1999 I Ext4Crypt: No master key, so not ext4enc
01-01 01:10:55.830  1903  1999 I Ext4Crypt: ext4 crypto complete called on /data
01-01 01:10:55.830  1903  1999 I Ext4Crypt: No master key, so not ext4enc
01-01 01:10:55.830  2478  2798 D CryptdConnector: RCV  {4 cryptfs clearpw}
01-01 01:10:55.835  1903  1999 D VoldCryptCmdListener: cryptfs clearpw
01-01 01:10:55.835  1903  1999 I Ext4Crypt: ext4 crypto complete called on /data
01-01 01:10:55.835  1903  1999 I Ext4Crypt: No master key, so not ext4enc
01-01 01:10:55.835  2478  2798 D CryptdConnector: RCV <- {200 4 0}
01-01 01:10:55.925  3417  3417 D PhoneStatusBar: disable:
01-01 01:10:56.020  3417  3417 D PhoneStatusBar: disable:
01-01 01:10:56.330  3417  3417 D PhoneStatusBar: disable:
01-01 01:11:44.875  2478  4667 I ActivityManager: Process com.android.messaging (pid 4607) has died
01-01 01:11:44.920  2478  4667 D ActivityManager: cleanUpApplicationRecord -- 4607
01-01 01:11:45.860  2478  3356 W art     : Long monitor contention event with owner method=void com.android.server.am.ActivityManagerService$AppDeathRecipient.binderDied() from ActivityManagerService.java:1359 waiters=0 for 907ms
01-01 01:11:45.890  2478  3356 I ActivityManager: Process org.cyanogenmod.profiles (pid 4593) has died
01-01 01:11:45.900  2478  3356 D ActivityManager: cleanUpApplicationRecord -- 4593
01-01 01:11:45.955  2478  2529 W art     : Long monitor contention event with owner method=void com.android.server.am.ActivityManagerService$AppDeathRecipient.binderDied() from ActivityManagerService.java:1359 waiters=1 for 914ms
01-01 01:11:45.960  1913  1913 E lowmemorykiller: Error opening /proc/3662/oom_score_adj; errno=2
01-01 01:11:45.970  2478  2529 I ActivityManager: Process com.android.exchange (pid 3662) has died
01-01 01:11:45.970  2478  2529 D ActivityManager: cleanUpApplicationRecord -- 3662
01-01 01:11:45.985  2478  3943 W art     : Long monitor contention event with owner method=void com.android.server.am.ActivityManagerService$AppDeathRecipient.binderDied() from ActivityManagerService.java:1359 waiters=2 for 611ms
01-01 01:11:45.995  2478  3943 I ActivityManager: Process com.android.calendar (pid 4415) has died
01-01 01:11:45.995  2478  3943 D ActivityManager: cleanUpApplicationRecord -- 4415
01-01 01:11:46.000  2478  2532 W art     : Long monitor contention event with owner method=void com.android.server.am.ActivityManagerService$AppDeathRecipient.binderDied() from ActivityManagerService.java:1359 waiters=3 for 537ms
01-01 01:11:46.025  2478  3362 W art     : Long monitor contention event with owner method=void com.android.server.am.ActivityManagerService$AppDeathRecipient.binderDied() from ActivityManagerService.java:1359 waiters=4 for 378ms
01-01 01:11:46.045  2478  3362 I ActivityManager: Process org.lineageos.updater (pid 4449) has died
01-01 01:11:46.045  2478  3362 D ActivityManager: cleanUpApplicationRecord -- 4449
01-01 01:11:46.045  1913  1913 E lowmemorykiller: Error writing /proc/3938/oom_score_adj; errno=22
01-01 01:11:46.050  2478  3413 W art     : Long monitor contention event with owner method=void com.android.server.am.ActivityManagerService$AppDeathRecipient.binderDied() from ActivityManagerService.java:1359 waiters=5 for 372ms
01-01 01:11:46.505  2478  3232 D WifiService: Client connection lost with reason: 4
01-01 01:11:47.165  2478  4666 D GraphicsStats: Buffer count: 3
01-01 01:11:47.400  2478  2532 W art     : Long monitor contention event with owner method=int com.android.server.am.ActivityManagerService.broadcastIntent(android.app.IApplicationThread, android.content.Intent, java.lang.String, android.content.IIntentReceiver, int, java.lang.String, android.os.Bundle, java.lang.String[], int, android.os.Bundle, boolean, boolean, int) from ActivityManagerService.java:17497 waiters=0 for 667ms
01-01 01:11:47.465  2478  4664 W art     : Long monitor contention event with owner method=int com.android.server.am.ActivityManagerService.broadcastIntent(android.app.IApplicationThread, android.content.Intent, java.lang.String, android.content.IIntentReceiver, int, java.lang.String, android.os.Bundle, java.lang.String[], int, android.os.Bundle, boolean, boolean, int) from ActivityManagerService.java:17497 waiters=1 for 858ms
01-01 01:11:47.465  2478  3412 W art     : Long monitor contention event with owner method=int com.android.server.am.ActivityManagerService.broadcastIntent(android.app.IApplicationThread, android.content.Intent, java.lang.String, android.content.IIntentReceiver, int, java.lang.String, android.os.Bundle, java.lang.String[], int, android.os.Bundle, boolean, boolean, int) from ActivityManagerService.java:17497 waiters=2 for 859ms
01-01 01:11:47.475  2478  4665 I ActivityManager: Process com.android.providers.calendar (pid 4434) has died
01-01 01:11:47.480  2478  4665 D ActivityManager: cleanUpApplicationRecord -- 4434
01-01 01:11:47.545  1913  1913 E lowmemorykiller: Error opening /proc/3938/oom_score_adj; errno=2
01-01 01:11:47.545  1913  1913 E lowmemorykiller: Error opening /proc/4014/oom_score_adj; errno=2
01-01 01:11:47.550  1913  1913 E lowmemorykiller: Error opening /proc/4542/oom_score_adj; errno=2
01-01 01:11:47.550  2478  3943 W art     : Long monitor contention event with owner method=int com.android.server.am.ActivityManagerService.broadcastIntent(android.app.IApplicationThread, android.content.Intent, java.lang.String, android.content.IIntentReceiver, int, java.lang.String, android.os.Bundle, java.lang.String[], int, android.os.Bundle, boolean, boolean, int) from ActivityManagerService.java:17497 waiters=3 for 894ms
01-01 01:11:47.560  2478  3943 I ActivityManager: Process org.cyanogenmod.themes.provider (pid 3497) has died
01-01 01:11:47.560  2478  3943 D ActivityManager: cleanUpApplicationRecord -- 3497
01-01 01:11:47.560  2478  2529 W art     : Long monitor contention event with owner method=int com.android.server.am.ActivityManagerService.broadcastIntent(android.app.IApplicationThread, android.content.Intent, java.lang.String, android.content.IIntentReceiver, int, java.lang.String, android.os.Bundle, java.lang.String[], int, android.os.Bundle, boolean, boolean, int) from ActivityManagerService.java:17497 waiters=4 for 673ms
01-01 01:11:47.570  2478  2529 I ActivityManager: Process com.svox.pico (pid 4014) has died
01-01 01:11:47.570  2478  2529 D ActivityManager: cleanUpApplicationRecord -- 4014
01-01 01:11:48.325  2478  2529 W ActivityManager: Scheduling restart of crashed service com.svox.pico/.PicoService in 1000ms

Verdict

I wasted lots of time with this issue, but was finally able to reproduce it and to recover all of my data. At least, I have an explanation now for various random reboots I experienced in the past in similar low-memory conditions.

Overall, I am really shocked that a simple, unprivileged Android app that is scheduled to start on bootup can ruin a working system so badly. Further research indicates that there are more apps known to cause such behavior [4]. I hope that a device based on a GNU/Linux system instead of Android (such as the announced Librem5) will not suffer from such a severe flaw.
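
As an aside, the memtester behaviour shown above can be approximated with a few lines of code, in case anyone wants to repeat the low-memory experiment without building memtester itself. This is a minimal sketch assuming default Linux overcommit settings, not memtester’s actual implementation:

#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <sys/mman.h>

// Sketch only: allocate (almost) more memory than the system has,
// touch it, then try to mlock it, which is the step after which the
// device above rebooted, while GNU/Linux merely killed the process.
int main()
{
    size_t want = static_cast<size_t>(2048) * 1024 * 1024; // ask for 2 GiB
    void *buf = nullptr;

    // Shrink the request in 4 MiB steps until the allocation succeeds,
    // roughly as memtester does.
    while ((buf = std::malloc(want)) == nullptr && want > 4 * 1024 * 1024)
        want -= 4 * 1024 * 1024;
    if (buf == nullptr)
        return 1;
    std::printf("got %zu MB, trying mlock ...\n", want / (1024 * 1024));

    std::memset(buf, 0xaa, want);  // make sure the pages are really backed
    if (mlock(buf, want) != 0)
        std::perror("mlock");
    else
        std::puts("locked");

    munlock(buf, want);
    std::free(buf);
    return 0;
}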

References

[1] https://review.lineageos.org/#/c/197305/
[2] https://stackoverflow.com/questions/25069186/invalid-address-passed-to-dlfree
[3] https://github.com/royzhao/memtester4Android
[4] https://gitlab.com/fdroid/fdroiddata/issues/979
[5] https://gitlab.com/fdroid/fdroiddata/issues/979#note_48990149

December 09 2017

Posts on Hannes Hauswedell's homepage: The - surprisingly limited - usefulness of function multiversioning in GCC

Modern CPUs have quite a few features that generic amd64/intel64 code cannot make use of, simply because they are not available everywhere and including them would break the code on unsupporting platforms. The solution is to not use these features, or ship different specialised binaries for different target CPUs. The problem with the first approach is that you miss out on possible optimisations and the problem with the second approach is that most users don’t know which features their CPUs support, possibly picking a wrong executable (which won’t run → bad user experience) or a less optimised one (which is again problem 1). But there is an elegant GCC-specific alternative: Function multiversioning!

But does it really solve our problems? Let’s have a closer look!

Prerequisites

  • basic understanding of C++ and compiler optimisations (you should have heard of “inlining” before, but you don’t need to know assembler, in fact I am not an assembly expert either)
  • Most code snippets are demonstrated via Compiler Explorer, but the benchmarks require you to have GCC ≥ version 7 locally installed.
  • You might want to open a second tab or window to display the Compiler Explorer alongside this post (two screens work best 😎).

Population counts

Many of the CPU features used in machine-optimised code relate to SIMD, but for our example I will use a simpler operation: population count, or popcount for short.

The popcount of an integral number is the number of bits that are set to 1 in its bit-representation. [More details on Wikipedia if you are interested.]

Popcounts are used in many algorithms, and are important in bioinformatics (one of the reasons I am writing this post). You could implement a naive popcount by iterating over the bits, but GCC already has us covered with a “builtin” called __builtin_popcountll (the “ll” at the end stands for long long, i.e. 64-bit integers). Here’s an example:

1  __builtin_popcountll(6ull) // == 2, because 6ull's bit repr. is `...00000110`
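
As a point of reference, the naive bit-iterating implementation mentioned above could look like the following sketch (for illustration only; the benchmarks below all use the builtin):

#include <cstdint>

// Naive popcount: inspect each of the 64 bits individually.
// Shown only for comparison with the builtin used throughout this post.
uint64_t naive_popcount(uint64_t const v)
{
    uint64_t count = 0;
    for (int i = 0; i < 64; ++i)
        count += (v >> i) & 1u;
    return count;
}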

To get a feeling for how slow or fast the builtin is, we are going to call it a billion times. The golden rule of optimisation is to always measure and not to make wild assumptions about what you think the compiler or the CPU is or isn’t doing!

 1  #include <cstdint>
 2
 3  uint64_t pc(uint64_t const v)
 4  {
 5      return __builtin_popcountll(v);
 6  }
 7
 8  int main()
 9  {
10      for (uint64_t i = 0; i < 1'000'000'000; ++i)
11          volatile uint64_t ret = pc(i);
12  }

view in compiler explorer

This code should be fairly easy to understand; the volatile modifier is only used to make sure that this code is always generated (without it, the compiler would see that the return values are not actually used and optimise all the code away!). In any case, before you compile this locally, click on the link and check out the compiler explorer. Using the colour code you can easily see that our call to __builtin_popcountll in line 5 is translated to another function __popcountdi2 internally. Before you continue, add -O3 to the compiler arguments in compiler explorer; this will add machine-independent optimisations. The assembly code should change, but you will still be able to find __popcountdi2.

This is a generic function that works on all amd64/intel64 platforms and counts the set bits. What does it actually do? You can search the net and find explanations that say it does some bit-shifting and table-lookups, but the important part is that it performs multiple operations to compute the popcount in a generic way.
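
To give an idea of what such a generic routine looks like, here is a sketch of the well-known bit-parallel (“SWAR”) popcount; it is not necessarily the exact code behind __popcountdi2, but it illustrates the “multiple operations” nature of the generic approach:

#include <cstdint>

// Bit-parallel popcount: sums bits within ever larger groups
// (2, 4, 8 bits) and finally accumulates the byte sums via a multiply.
uint64_t swar_popcount(uint64_t x)
{
    x = x - ((x >> 1) & 0x5555555555555555ull);
    x = (x & 0x3333333333333333ull) + ((x >> 2) & 0x3333333333333333ull);
    x = (x + (x >> 4)) & 0x0f0f0f0f0f0f0f0full;
    return (x * 0x0101010101010101ull) >> 56;
}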

Modern CPUs, however, have a feature that does popcount in hardware (or close). Again, we don’t need to know exactly how this works, but we would expect that this single operation function is better than anything we can do (for very large bit-vectors this is not true entirely, but that’s a different issue).

How do we use this magic builtin? Just go back to the compiler explorer and add -mpopcnt to the compiler flags; this tells GCC to expect this feature from the hardware and to optimise for it. Voilà, the generated assembly now resolves to popcnt rsi, rsi instead of call __popcountdi2 (GCC is smart and its builtin resolves to whatever is best on the architecture we are targeting).

But how much better is this actually? Compile both versions locally and measure, e.g. with the time command.

compiler flags    time on my pc
-O3               3.1s
-O3 -mpopcnt      0.6s

A speed-up of 5x, nice!

But what happens when the binary is run on a CPU that doesn’t have builtin popcnt? The program crashes with “Illegal hardware instruction” 💀

Function multiversioning

This is where function multiversioning (“FMV”) comes to the rescue. It is a GCC-specific feature that inserts a branching point in place of our original function and then dispatches to one of the available “clones” at run-time. You can specify how many of these “clones” you want and for which features or architectures each is built; the dispatching function then chooses the most highly optimised one automatically. You can even manually write different function bodies for the different clones, but we will focus on the simpler kind of FMV where you just compile the same function body with different optimisation strategies.

Enough of the talking, here is our adapted example from above:

 1  #include <cstdint>
 2
 3  __attribute__((target_clones("default", "popcnt")))
 4  uint64_t pc(uint64_t const v)
 5  {
 6      return __builtin_popcountll(v);
 7  }
 8
 9  int main()
10  {
11      for (uint64_t i = 0; i < 1'000'000'000; ++i)
12          volatile uint64_t ret = pc(i);
13  }

view in compiler explorer

The only difference is that line 3 was inserted. The syntax is quite straight-forward insofar as anything in the C++ world is straight-forward 😉 :

  • We are telling GCC that we want two clones for the targets “default” and “popcnt”.
  • Everything else gets taken care of.

Follow the link to compiler explorer and check the assembly code (please make sure that you are not specifying -mpopcnt!). It is a little longer, but via the colour code of __builtin_popcountll(v) we immediately see that two functions are generated, one with the generic version and one with the optimised version, similar to what we had above, but now in one program. The “function signatures” in the assembly code also tell us that one of them is “the original” and one is the “popcnt clone”. Some further analysis will reveal a third function, the “.resolver” clone, which is the dispatching function. Even without knowing any assembly you might be able to pick out the statement that looks up the CPU feature and calls the correct clone.
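
Conceptually, that resolver does something along the lines of the following hand-written sketch (using GCC’s __builtin_cpu_supports; the clone names pc_default and pc_popcnt are made up here, the symbols GCC actually generates look different):

#include <cstdint>

uint64_t pc_default(uint64_t v);   // generic clone (assumed to exist)
uint64_t pc_popcnt(uint64_t v);    // popcnt clone (assumed to exist)

using pc_fn = uint64_t (*)(uint64_t);

// Rough model of the dispatching resolver: detect the CPU features once
// and return a pointer to the most highly optimised clone.
pc_fn resolve_pc()
{
    __builtin_cpu_init();                  // populate CPU feature information
    if (__builtin_cpu_supports("popcnt"))  // hardware popcount available?
        return &pc_popcnt;
    return &pc_default;
}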

Great! So we have a single binary that is as fast as possible and works on older hardware. But is it really as fast as possible? Compile and run it!

version     compiler flags    time on my pc
original    -O3               3.1s
original    -O3 -mpopcnt      0.6s
FMV         -O3               2.2s

Ok, we are faster than the original generic code so we are probably using the optimised popcount call, but we are nowhere near our 5x speed-up. What’s going on?

Nested function calls

We have replaced the core of our computation, the function pc(), with a dispatcher that chooses the best implementation. As noted above, this decision happens at run-time (it has to, because we can’t know beforehand whether the target CPU will support native popcount; that is the whole point of the exercise), but now it happens one billion times!

Wow, this check seems to be more expensive than the actual popcount call. If you write a lot of optimised code, this won’t be a surprise: run-time decision making is simply very expensive.

What can we do about it? Well, we could decide between generic VS optimised before running our algorithm, instead of deciding in our algorithm on every iteration:

 1  #include <cstdint>
 2
 3  uint64_t pc(uint64_t const v)
 4  {
 5      return __builtin_popcountll(v);
 6  }
 7
 8  __attribute__((target_clones("default", "popcnt")))
 9  void loop()
10  {
11      for (uint64_t i = 0; i < 1'000'000'000; ++i)
12          volatile uint64_t ret = pc(i);
13  }
14
15  int main()
16  {
17      loop();
18  }

view in compiler explorer

The assembly of this gets a little messier, but you can follow the jmp instructions around, or just scan the assembly for the instructions mentioned above, and you will see that we still have the two versions (although the actual pc() function is not called, because it was inlined and moved around a bit).

Compile the code and measure the time:

version       compiler flags    time on my pc
original      -O3               3.1s
original      -O3 -mpopcnt      0.6s
FMVed pc()    -O3               2.2s
FMVed loop    -O3               0.6s

Hurray, we are back to our original speed-up!

If you expected this, then you have likely dealt with strongly templated code before and have also heard of tag-dispatching, a technique that can be used to translate arbitrary run-time decisions into different code paths, beneath which you can treat your run-time decision as a compile-time one.
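
For readers who have not come across tag-dispatching before, a minimal sketch of the idea (with hypothetical names, and independent of FMV) could look like this:

#include <cstdint>

// Tag types representing the run-time decision.
struct generic_tag {};
struct popcnt_tag  {};

uint64_t pc_impl(uint64_t v, generic_tag)   // portable fallback path
{
    uint64_t c = 0;
    for (; v != 0; v >>= 1)
        c += v & 1u;
    return c;
}

uint64_t pc_impl(uint64_t v, popcnt_tag)    // path meant for popcnt-capable CPUs
{
    return __builtin_popcountll(v);
}

template <typename tag_t>
void loop_impl(tag_t tag)
{
    for (uint64_t i = 0; i < 1'000'000'000; ++i)
        volatile uint64_t ret = pc_impl(i, tag);
}

void run(bool const cpu_has_popcnt)
{
    // The only run-time branch; everything below it is fixed at compile time.
    if (cpu_has_popcnt)
        loop_impl(popcnt_tag{});
    else
        loop_impl(generic_tag{});
}

The decision is taken once in run(), and each loop body is compiled against a fixed implementation, which is exactly the property we would like FMV to give us further down the call-graph.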

Our simplified callgraph for the above cases looks like this (the dotted line is where the dispatching takes place):

In real-world code the graph is of course bigger, but it should become obvious that by moving the decision making further to the left, the code becomes faster – because we have to decide less often – but the size of the generated executable also becomes larger – because more functions are actually compiled. [There are corner cases where the executable being bigger actually results in certain things becoming slower, but let’s not get into that now.]

Anyway, I thought that FMV would be like dispatching a tag down the call-graph, but it’s not! In fact we just got lucky in our above example, because the pc() call was inlined. Inlining means that the function call is optimised away entirely and the function’s code is inserted at the place in the calling function where the call would otherwise have been. Only because pc() is inlined do we actually get the optimisation!

How do you know? Well you can force GCC to not inline pc():

 1  #include <cstdint>
 2
 3  __attribute__((noinline))
 4  uint64_t pc(uint64_t const v)
 5  {
 6      return __builtin_popcountll(v);
 7  }
 8
 9  __attribute__((target_clones("default", "popcnt")))
10  void loop()
11  {
12      for (uint64_t i = 0; i < 1'000'000'000; ++i)
13          volatile uint64_t ret = pc(i);
14  }
15
16  int main()
17  {
18      loop();
19  }

view in compiler explorer

Just add the third line to your previous Compiler Explorer window, or open the above link. You can see that the optimised popcnt call has disappeared from the assembly and pc() only appears once. So in fact our callgraph is (no optimised pc() contained):

But how serious is this, you may ask? Didn’t the compiler inline automatically? Well, the problem with inlining is that it is entirely up to the compiler whether it inlines a function or not (prefixing the function with inline does not, in fact, force it to). The deeper the call-graph gets, the more likely it is that the compiler will not inline all the way from the FMV invocation point.

Trying to save FMV for our use case

Some more complex but futile attempts:

It’s possible to force the compiler to use inlining, but it’s also non-standard, and it obviously doesn’t work if the called functions are not customisable by us (e.g. stable interfaces or external code / a library). Furthermore it might not even be desirable to force-inline every function or function template, because they might be used in other places or with differently typed arguments, resulting in an even bigger increase in executable size.
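
For completeness, forcing inlining with GCC usually means the non-standard always_inline attribute, roughly like this sketch:

#include <cstdint>

// GCC/Clang-specific: refuse to emit pc() as an out-of-line call,
// forcing its body into every caller (including the FMV clones).
__attribute__((always_inline)) inline
uint64_t pc(uint64_t const v)
{
    return __builtin_popcountll(v);
}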

An alternative to inlining would be to use the original form of FMV where you actually have different function bodies and in those add a custom layer of (tag-)dispatching yourself:

 1  #include <cstdint>
 2
 3  template <bool is_optimised>
 4  __attribute__((noinline))
 5  uint64_t pc(uint64_t const v)
 6  {
 7      return __builtin_popcountll(v);
 8  }
 9
10  __attribute__((target("default")))
11  void loop()
12  {
13      for (uint64_t i = 0; i < 1'000'000'000; ++i)
14          volatile uint64_t ret = pc<false>(i);
15  }
16
17  __attribute__((target("popcnt")))
18  void loop()
19  {
20      for (uint64_t i = 0; i < 1'000'000'000; ++i)
21          volatile uint64_t ret = pc<true>(i);
22  }
23
24  int main()
25  {
26      loop();
27  }

view in compiler explorer

In this code example we have turned pc() into a function template, customisable by a bool variable. This means that two versions of this function can be instantiated. We then also implement the loops separately and make each pass a different bool value to pc() as a template argument. If you look at the assembly in compiler explorer you can see that two functions are created for pc(), but unfortunately they both contain the unoptimised popcount call¹. This is due to the compiler not knowing/assuming that one of the functions is only called in an optimised context. → This method won’t solve our problem.

And while it is of course possible to add C++17’s if constexpr to pc() and start hacking custom code into the function depending on the template parameter, it further complicates the solution, moving us further and further away from our original goal of a thin dispatching layer.
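
Such an if constexpr variant might look roughly like the following sketch (C++17; the hand-written fallback body is made up for illustration), and it indeed complicates rather than simplifies things:

#include <cstdint>

template <bool is_optimised>
__attribute__((noinline))
uint64_t pc(uint64_t const v)
{
    if constexpr (is_optimised)
    {
        // On the "popcnt" code path we would hand-write or force the
        // optimised variant here; the builtin is only a placeholder.
        return __builtin_popcountll(v);
    }
    else
    {
        uint64_t c = 0;
        for (uint64_t x = v; x != 0; x >>= 1)
            c += x & 1u;
        return c;
    }
}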

¹ Since the resulting function bodies are the same they are actually merged into a single one at optimisation levels > 1 (but this is independent of our problem).

Summary

  • Function multiversioning is a good thing, because it aims to solve an actual problem: delivering optimised binary code to users that can’t or don’t want to build themselves.
  • Unfortunately it does not multiversion the functions called by a versioned function, forcing developers to move FMV very close to the intended function call.
  • This has the drawback of invoking the dispatch much more often than theoretically needed, possibly incurring a penalty in run-time that might exceed the gain from more highly optimised code.
  • It would be great if GCC developers could address this by adding a version of FMV that recursively clones the indirectly invoked functions (without further branching), as well as providing the machine-aware context to these clones, i.e. the presumed CPU features.

Further reading

On popcnt and CPU specific features:

On FMV:

December 07 2017

Paul Boddie's Free Software-related blog » English: 2017 in Review

On Planet Debian there seem to be quite a few regularly posted articles summarising the work done by various people in Free Software over the month that has most recently passed. I thought it might be useful, personally at least, to review the different things I have been doing over the past year. The difference between this article and many of those others is that the work I describe is not commissioned or generally requested by others; instead it relies mainly on my own motivation to happen. The rate of progress can vary somewhat as a result.

Learning KiCad

Over the years, I have been playing around with Arduino boards, sensors, displays and things of a similar nature. Although I try to avoid buying more things to play with, sometimes I manage to acquire interesting items regardless, and these aren’t always ready to use with the hardware I have. Last December, I decided to buy a selection of electronics-related items for interfacing and experimentation. Some of these items have yet to be deployed, but others were bought with the firm intention of putting different “spare” pieces of hardware to use, or at least to make them usable in future.

One thing that sits in this category of spare, potentially-usable hardware is a display circuit board that was once part of a desk telephone, featuring a two-line, bitmapped character display, driven by the Hitachi HD44780 LCD controller. It turns out that this hardware is so common and mundane that the Arduino libraries already support it, but the problem for me was being able to interface it to the Arduino. The display board uses a cable with a connector that needs a special kind of socket, and so some research is needed to discover the kind of socket needed and how this might be mounted on something else to break the connections out for use with the Arduino.

Fortunately, someone else had done all this research quite some time ago. They had even designed a breakout board to hold such a socket, making it available via the OSH Park board fabricating service. So, to make good on my plan, I ordered the mandatory minimum of three boards, also ordering some connectors from Mouser. When all of these different things arrived, I soldered the socket to the board along with some headers, wired up a circuit, wrote a program to use the LiquidCrystal Arduino library, and to my surprise it more or less worked straight away.
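
For readers who have not used that library, a first test of such a display typically looks something like the sketch below (the pin numbers are hypothetical; the actual wiring depends on the breakout board and display):

#include <LiquidCrystal.h>

// Hypothetical wiring: RS, Enable, D4, D5, D6, D7.
LiquidCrystal lcd(12, 11, 5, 4, 3, 2);

void setup()
{
    lcd.begin(16, 2);            // assume a 16-column, two-line display
    lcd.print("hello, world");
}

void loop()
{
    lcd.setCursor(0, 1);         // move to the second line
    lcd.print(millis() / 1000);  // seconds since reset
}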

Breakout board for the Molex 52030 connector

Breakout board for the Molex 52030 connector

Hitachi HD44780 LCD display boards driven by an Arduino

Hitachi HD44780 LCD display boards driven by an Arduino

This satisfying experience led me to consider other boards that I might design and get made. Previously, I had only made a board for the Arduino using Fritzing and the Fritzing Fab service, and I had held off looking at other board design solutions, but this experience now encouraged me to look again. After some evaluation of the gEDA tools, I decided that I might as well give KiCad a try, given that it seems to be popular in certain “open source hardware” circles. And after a fair amount of effort familiarising myself with it, with a degree of frustration finding out how to do certain things (and also finding up-to-date documentation), I managed to design my own rather simple board: a breakout board for the Acorn Electron cartridge connector.

Acorn Electron cartridge breakout board (in 3D-printed case section)

Acorn Electron cartridge breakout board (in 3D-printed case section)

In the back of my mind, I have vague plans to do other boards in future, but doing this kind of work can soak up a lot of time and be rather frustrating: you almost have to get into some modified mental state to work efficiently in KiCad. And it isn’t as if I don’t have other things to do. But at least I now know something about what this kind of work involves.

Retro and Embedded Hardware

With the above breakout board in hand, a series of experiments were conducted to see if I could interface various circuits to the Acorn Electron microcomputer. These mostly involved 7400-series logic chips (ICs, integrated circuits) and featured various logic gates and counters. Previously, I had re-purposed an existing ROM cartridge design to break out signals from the computer and make it access a single flash memory chip instead of two ROM chips.

With a dedicated prototyping solution, I was able to explore the implementation of that existing board, determine various aspects of the signal timings that remained rather unclear (despite being successfully handled by the existing board’s logic), and make it possible to consider a dedicated board for a flash memory cartridge. In fact, my brother, David, also wanting to get into board design, later adapted the prototyping cartridge to make such a board.

But this experimentation also encouraged me to tackle some other items in the electronics shipment: the PIC32 microcontrollers that I had acquired because they were MIPS-based chips, with somewhat more built-in RAM than the Atmel AVR-based chips used by the average Arduino, that could also be used on a breadboard. I hoped that my familiarity with the SoC (system-on-a-chip) in the Ben NanoNote – the Ingenic JZ4720 – might confer some benefits when writing low-level code for the PIC32.

PIC32 on breadboard with Arduino programming circuit

PIC32 on breadboard with Arduino programming circuit (and some LEDs for diagnostic purposes)

I do not need to reproduce an account of my activities here, given that I wrote about the effort involved in getting started with the PIC32 earlier in the year, and subsequently described an unusual application of such a microcontroller that seemed to complement my retrocomputing interests. I have since tried to make that particular piece of work more robust, but deducing the actual behaviour of the hardware has been frustrating, the documentation can be vague when it needs to be accurate, and much of the community discussion is focused on proprietary products and specific software tools rather than techniques. Maybe this will finally push me towards investigating programmable logic solutions in the future.

Compiling a Python-like Language

As things actually happened, the above hardware activities were in fact distractions from something I have been working on for a long time. But at this point in the article, this can serve as a diversion from all the things that seem to involve hardware or low-level software development. Many years ago, I started writing software in Python. Over the years since, alternative implementations of the Python language (the main implementation being CPython) have emerged and seen some use, some continuing to be developed to this day. But around fifteen years ago, it became a bit more common for people to consider whether Python could be compiled to something that runs more efficiently (and more quickly).

I followed some of these projects enthusiastically for a while. Starkiller promised compilation to C++ but never delivered any code for public consumption, although the associated academic thesis might have prompted the development of Shed Skin which does compile a particular style of Python program to C++ and is available as Free Software. Meanwhile, PyPy elevated to prominence the notion of writing a language and runtime library implementation in the language itself, previously seen with language technologies like Slang, used to implement Squeak/Smalltalk.

Although other projects have also emerged and evolved to attempt the compilation of Python to lower-level languages (Pyrex, Cython, Nuitka, and so on), my interests have largely focused on the analysis of programs so that we may learn about their structure and behaviour before we attempt to run them, this alongside any benefits that might be had in compiling them to something potentially faster to execute. But my interests have also broadened to consider the evolution of the Python language since the point fifteen years ago when I first started to think about the analysis and compilation of Python. The near-mythical Python 3000 became a real thing in the form of the Python 3 development branch, introducing incompatibilities with Python 2 and fragmenting the community writing software in Python.

With the risk of perfectly usable software becoming neglected, its use actively (and destructively) discouraged, it becomes relevant to consider how one might take control of one’s software tools for long-term stability, where tools might be good for decades of use instead of constantly changing their behaviour and obliging their users to constantly change their software. I expressed some of my thoughts about this earlier in the year having finally reached a point where I might be able to reflect on the matter.

So, the result of a great deal of work, informed by experiences and conversations over the years related to previous projects of my own and those of others, is a language and toolchain called Lichen. This language resembles Python in many ways but does not try to be a Python implementation. The toolchain compiles programs to C which can then be compiled and executed like “normal” binaries. Programs can be trivially cross-compiled by any available C cross-compilers, too, which is something that always seems to be a struggle elsewhere in the software world. Unlike other Python compilers or implementations, it does not use CPython’s libraries, nor does it generate in “longhand” the work done by the CPython virtual machine.

One might wonder why anyone should bother developing such a toolchain given its incompatibility with Python and a potential lack of any other compelling reason for people to switch. Given that I had to accept some necessary reductions in the original scope of the project and to limit my level of ambition just to feel remotely capable of making something work, one does need to ask whether the result is too compromised to be attractive to others. At one point, programs manipulating integers were slower when compiled than when they were run by CPython, and this was incredibly disheartening to see, but upon further investigation I noticed that CPython effectively special-cases integer operations. The design of my implementation permitted me to represent integers as tagged references – a classic trick of various language implementations – and this overturned the disadvantage.

For me, just having the possibility of exploring alternative design decisions is interesting. Python’s design is largely done by consensus, with pronouncements made to settle disagreements and to move the process forward. Although this may have served the language well, depending on one’s perspective, it has also meant that certain paths of exploration have not been followed. Certain things have been improved gradually but not radically due to backwards compatibility considerations, this despite the break in compatibility between the Python 2 and 3 branches where an opportunity was undoubtedly lost to do greater things. Lichen is an attempt to explore those other paths without having to constantly justify it to a group of people who may regard such exploration as hostile to their own interests.

Lichen is not really complete: it needs floating point number and other useful types; its library is minimal; it could be made more robust; it could be made more powerful. But I find myself surprised that it works at all. Maybe I should have more confidence in myself, especially given all the preparation I did in trying to understand the good and bad aspects of my previous efforts before getting started on this one.

Developing for MIPS-based Platforms

A couple of years ago I found myself wondering if I couldn’t write some low-level software for the Ben NanoNote. One source of inspiration for doing this was “The CI20 bare-metal project“: a series of blog articles discussing the challenges of booting the MIPS Creator CI20 single-board computer. The Ben and the CI20 use CPUs (or SoCs) from the same family: the Ingenic JZ4720 and JZ4780 respectively.

For the Ben, I looked at the different boot payloads, principally those written to support booting from a USB host, but also the version of U-Boot deployed on the Ben. I combined elements of these things with the framebuffer driver code from the Linux kernel supporting the Ben, and to my surprise I was able to get the device to boot up and show a pattern on the screen. Progress has not always been steady, though.

For a while, I struggled to make the CPU leave its initial exception state without hanging, and with the screen as my only debugging tool, it was hard to see what might have been going wrong. Some careful study of the code revealed the problem: the code I was using to write to the framebuffer was using the wrong address region, meaning that as soon as an attempt was made to update the contents of the screen, the CPU would detect a bad memory access and an exception would occur. Such exceptions will not be delivered in the initial exception state, but with that state cleared, the CPU will happily trigger a new exception when the program accesses memory it shouldn’t be touching.

Debugging low-level code on the Ben NanoNote (the hard way)

Debugging low-level code on the Ben NanoNote (the hard way)

I have since plodded along introducing user mode functionality, some page table initialisation, trying to read keypresses, eventually succeeding after retracing my steps and discovering my errors along the way. Maybe this will become a genuinely useful piece of software one day.

But one useful purpose this exercise has served is that of familiarising myself with the way these SoCs are organised, the facilities they provide, how these may be accessed, and so on. My brother has the Letux 400 notebook containing yet another SoC in the same family, the JZ4730, which seems to be almost entirely undocumented. This notebook has proven useful under certain circumstances. For instance, it has been used as a kind of appliance for document scanning, driving a multifunction scanner/printer over USB using the enduring SANE project’s software.

However, the Letux 400 is already an old machine, with products based on this hardware platform being almost ten years old, and when originally shipped it used a 2.4 series Linux kernel instead of a more recent 2.6 series kernel. Like many products whose software is shipped as “finished”, this makes the adoption of newer software very difficult, especially if the kernel code is not “upstreamed” or incorporated into the official Linux releases.

As software distributions such as Debian evolve, they depend on newer kernel features, but if a device is stuck on an older kernel (because the special functionality that makes it work on that device is specific to that kernel) then the device, unable to run the newer kernels, gradually becomes unable to run newer versions of the distribution as well. Thus, Debian Etch was the newest distribution version that would work on the 2.4 kernel used by the Letux 400 as shipped.

Fortunately, work had been done to make a 2.6 series kernel work on the Letux 400, and this made Debian Lenny functional. But time passes and even this is now considered ancient. Although David was running some software successfully, there was other software that really needed a newer distribution to be able to run, and this meant considering what it might take to support Debian Squeeze on the hardware. So he set to work adding patches to the 2.6.24 kernel to try and take it within the realm of Squeeze support, making it beyond the bare minimum of 2.6.29 and into the “release candidate” territory of 2.6.30. And this was indeed enough to run Squeeze on the notebook, at least supporting the devices needed to make the exercise worthwhile.

Now, at a much earlier stage in my own experiments with the Ben NanoNote, I had tried without success to reproduce my results on the Letux 400. And I had also made a rather tentative effort at modifying Ben NanoNote kernel drivers to potentially work with the Letux 400 from some 3.x kernel version. David’s success in updating the kernel version led me to look again at the tasks of familiarising myself with kernel drivers, machine details and of supporting the Letux 400 in even newer kernels.

The outcome of this is uncertain at present. Most of the work on updating the drivers and board support has been done, but actual testing of my work still needs to be done, something that I cannot really do myself. That might seem strange: why start something I cannot finish by myself? But how I got started in this effort is also rather related to the topic of the next section.

The MIPS Creator CI20 and L4/Fiasco.OC

Low-level programming on the Ben NanoNote is frustrating unless you modify the device and solder the UART connections to the exposed pads in the battery compartment, thereby enabling a serial connection and allowing debugging information to be sent to a remote display for perusal. My soldering skills are not that great, and I don’t want to damage my device. So debugging was a frustrating exercise. Since I felt that I needed a bit more experience with the MIPS architecture and the Ingenic SoCs, it occurred to me that getting a CI20 might be the way to go.

I am not really a supporter of Imagination Technologies, producer of the CI20, due to the company’s rather hostile attitude towards Free Software around their PowerVR technologies, meaning that of the different graphics acceleration chipsets, PowerVR has been increasingly isolated as a technology that is consistently unsupportable by Free Software drivers. However, the CI20 is well-documented and has been properly supported with Free Software, apart from the PowerVR parts of the hardware, of course. Ingenic were seemingly persuaded to make the programming manual for the JZ4780 used by the CI20 publicly available, unlike the manuals for other SoCs in that family. And the PowerVR hardware is not actually needed to be able to use the CI20.

The MIPS Creator CI20 single-board computer

The MIPS Creator CI20 single-board computer

I had hoped that the EOMA68 campaign would have offered a JZ4775 computer card, and that the campaign might have delivered such a card by now, but with both of these things not having happened I took the plunge and bought a CI20. There were a few other reasons for doing so: I wanted to see how a single-board computer with a decent amount of RAM (1GB) might perform as a working desktop machine; having another computer to offload certain development and testing tasks, rather than run virtual machines, would be useful; I also wanted to experiment with and even attempt to port other operating systems, loosening my dependence on the Linux monoculture.

One of these other operating systems involves two components: the Fiasco.OC microkernel and the L4 Runtime Environment (L4Re). Over the years, microkernels in the L4 family have seen widespread use, and at one point people considered porting GNU Hurd to one of the L4 family microkernels from the Mach microkernel it then used (and still uses). It seems to me like something worth looking at more closely, and fortunately it also seemed that this software combination had been ported to the CI20. However, it turned out that my expectations of building an image, testing the result, and then moving on to developing interesting software were a little premature.

The first real problem was that GCC produced position-independent code that was not called correctly. This meant that upon trying to get the addresses of functions, the program would end up loading garbage addresses and trying to call any code that might be there at those addresses. So some fixes were required. Then, it appeared that the JZ4780 doesn’t support a particular MIPS instruction, meaning that the CPU would encounter this instruction and cause an exception. So, with some guidance, I wrote a handler to decode the instruction and generate the rather trivial result that the instruction should produce. There were also some more generic problems with the microkernel code that had previously been patched but which had not appeared in the upstream repository. But in the end, I got the “hello” program to run.

With a working foundation I tried to explore the hardware just as I had done with the Ben NanoNote, attempting to understand things like the clock and power management hardware, general purpose input/output (GPIO) peripherals, and also the Inter-Integrated Circuit (I2C) peripherals. Some assistance was available in the form of Linux kernel driver code, although the style of code can vary significantly, and it also takes time to “decode” various mechanisms in the Linux code and to unpick the useful bits related to the hardware. I had hoped to get further, but in trying to use the I2C peripherals to talk to my monitor using the DDC protocol, I found that the data being returned was not entirely reliable. This was arguably a distraction from the more interesting task of enabling the display, given that I know what resolutions my monitor supports.

However, all this hardware-related research and detective work at least gave me an insight into mechanisms – software and hardware – that would inform the effort to “decode” the vendor-written code for the Letux 400, making certain things seem a lot more familiar and increasing my confidence that I might be understanding the things I was seeing. For example, the JZ4720 in the Ben NanoNote arranges its hardware registers for GPIO configuration and access in a particular way, but the code written by the vendor for the JZ4730 in the Letux 400 accesses GPIO registers in a different way.

Initially, I might have thought that I was missing some important detail: are the two products really so different, and if not, then why is the code so different? But then, looking at the JZ4780, I encountered another scheme for GPIO register organisation that is different again, but which does have similarities to the JZ4730. With the JZ4780 being publicly documented, the code for the Letux 400 no longer seemed quite so bizarre or unfathomable. With more experience, it is possible to have a little more confidence in one’s understanding of the mechanisms at work.

I would like to spend a bit more time looking at microkernels and alternatives to Linux. While many people presumably think that Linux is running on everything and has “won”, it is increasingly likely that the Linux one sees on devices does not completely control the hardware and is, in fact, virtualised or confined by software systems like L4/Fiasco.OC. I also have reservations about the way Linux is developed and how well it is able to handle the demands of its proliferation onto every kind of device, many of them hooked up to the Internet and being left to fend for themselves.

Developing imip-agent

Alongside Lichen, a project that has been under development for the last couple of years has been imip-agent, allowing calendar-based scheduling activities to be integrated with mail transport agents. I haven’t been able to spend quite as much time on imip-agent this year as I might have liked, although I will also admit that I haven’t always been motivated to spend much time on it, either. Still, there have been brief periods of activity tidying up, fixing, or improving the code. And some interest in packaging the software led me to reconsider some of the techniques used to deploy the software, in particular the way scheduling extensions are discovered, and the way the system configuration is processed (since Debian does not want “executable scripts” in places like /etc, even if those scripts just contain some simple configuration setting definitions).

It is perhaps fairly typical that a project that tries to assess the feasibility of a concept accumulates the necessary functionality in order to demonstrate that it could do a particular task. After such an initial demonstration, the effort of making the code easier to work with, more reliable, more extensible, must occur if further progress is to be made. One intervention that kept imip-agent viable as a project was the introduction of a test suite to ensure that the basic functionality did indeed work. There were other architectural details that I felt needed remedying or improving for the code to remain manageable.

Recently, I have been refining the parts of the code that support editing of calendar objects and the exchange of updates caused by changes to calendar events. Such work is intended to make the Web client easier to understand and to expose such functionality to proper testing. One side-effect of this may be the introduction of a text-based client for people using e-mail programs like Mutt, as well as a potentially usable library for other mail clients. Such tidying up and fixing does not show off fancy new features or argue the case for developing such software in the first place, but I suppose it makes me feel better about the software I have written.

Whither Moin?

There are probably plenty of other little projects of my own that I have started or at least contemplated this year. And there are also projects that are not mine but which I use and which have had contributions from me over the years. One of these is the MoinMoin wiki software that powers a number of Free Software and other Web sites where collaborative editing is made available to the communities involved. I use MoinMoin – or Moin for short – to publish content on the Web myself, and I have encouraged others to use it in the past. However, it worries me now that the level of maintenance it is receiving has fallen to a level where updates for faults in the software are not likely to be forthcoming and where it is no longer clear where such updates should be coming from.

Earlier in the year, having previously read queries about the static export output from Moin, which can be rather basic and not necessarily resemble the appearance of the wiki such output has come from, I spent some time considering my own use of Moin for documentation publishing. For some of my projects, I don’t take advantage of the “through the Web” editing of the solution when publishing the public documentation. Instead, I use Moin locally, store the pages in a separate repository, and then make page packages that get installed on a public instance of Moin. This means that I do not have to worry about Web-based authentication and can just have a wiki as a read-only resource.

Obviously, the parts of Moin that I really need here are just the things that parse the wiki formatting (which I regard as more usable than other document markup formats in various respects) and that format the content as HTML. If I could format it as static content with some pages, some stylesheets, some images, with some Web server magic to make the URLs look nice, then that would probably be sufficient. For some things like the automatic generation of SVG from Graphviz-format files, I would also need to have the relevant parsers available, too. Having a complete Web framework, which is what Moin really is, is rather unnecessary with these diminished requirements.

But I do use Moin as a full wiki solution as well, and so it made me wonder whether I shouldn’t try and bring it up to date. Of course, there is already the MoinMoin 2.0 effort that was intended to modernise and tidy up the software, but since this effort made a clean break from Moin 1.x, it was never an attractive choice for those people already using Moin in anything more than a basic sense. Since there wasn’t an established API for extensions, it was not readily usable for many existing sites that rely on such extensions. In a way, Moin 2 has suffered from something that Python 3 only avoided by having a lot more people working on it, including people being paid to work on it, together with a policy of openly shaming those people who had made Python 2 viable – by developing software for it – into spending time migrating their code to Python 3.

I don’t have an obvious plan of action here. Moin perhaps illustrates the fundamental problem facing many Free Software projects, this being a theme that I have discussed regularly this year: how they may remain viable by having people able to dedicate their time to writing and maintaining Free Software without this work being squeezed in around the edges of people’s “actual work” and thus burdening them with yet another obligation in their lives, particularly one that is not rewarded by a proper appreciation of the sacrifice being made.

Plenty of individuals and organisations benefit from Moin, but we live in an age of “comparison shopping” where people will gladly drop one thing if someone offers them something newer and shinier. This is, after all, how everyone ends up using “free” services where the actual costs are hidden. To their credit, when Moin needed to improve its password management, the Python Software Foundation stepped up and funded this work rather than dropping Moin, which is what I had expected given certain Python community attitudes. Maybe other, more well-known organisations that use Moin also support its development, but I don’t really see much evidence of it.

Maybe they should consider doing so. The notion that something else will always come along, developed by some enthusiastic developer “scratching their itch”, is misguided and exploitative. And a failure to sustain Free Software development can only undermine Free Software as a resource, as an activity or a cause, and as the basis of many of those organisations’ continued existence. Many of us like developing Free Software, as I hope this article has shown, but motivation alone does not keep that software coming forever.

December 06 2017

Inductive Bias: Trust and confidence

One of the main principles at Apache (as in The Apache Software Foundation) is "Community over Code" - the goal being to build projects that survive individual community members losing the interest or time to contribute.

In his book "Producing Open Source Software" Karl Fogel describes this model of development as Consensus-based Democracy (in contrast to benevolent dictatorship): "Consensus simply means an agreement that everyone is willing to live with. It is not an ambiguous state: a group has reached consensus on a given question when someone proposes that consensus has been reached and no one contradicts the assertion. The person proposing consensus should, of course, state specifically what the consensus is, and what actions would be taken in consequence of it, if those are not obvious."

What that means is that decisions are not taken by one person alone; pretty much anyone can declare that a final decision has been reached. It also means decisions can be blocked by individuals on the project.

This model of development works well if what you want for your project is resilience to people leaving, in particular those high up in the ranks, at the cost of nobody having complete control. It means you move more slowly, with the benefit of getting more people on board and of the project carrying on with its mission after you leave.

There are a couple of implications to this goal: if for whatever reason one single entity needs to retain control over the project, you had better not enter the Incubator, as suggested here. Balancing control and longevity is particularly tricky if you or your company believe you need to own the roadmap of the project. It's also tricky if your intuitive reaction to hiring a new engineer is to give them committership to the project on their first day - think again, keeping in mind that Money can't buy love. If you're still convinced they should be made a committer, Apache probably isn't the right place for your project.

Once you go through the process of giving up control with the help from your mentors you will learn to trust others - trust others to pick up tasks you leave open, trust others they are taking the right decision even if you would have done things otherwise, trust others to come up with solutions where you are lost. Essentially like Sharan Foga said to Trust the water.

Even coming to the project at a later stage as an individual contributor, you'll go through the same learning experience: you'll learn to trust others with the patch you wrote. You'll have to learn to trust others to take your bug report seriously. If the project is well run, people will treat you as an equal peer, with respect and with appreciation. They'll likely treat you as part of the development team, trusting you with as many decisions as possible - after all, that's what these people want to recruit you for: a position as a volunteer in their project. Doing that means starting to Delegate like a Pro, as Deb Nicholson once explained at ApacheCon. It also means training your capability for Empathy, like Leslie Hawthorn explained at FOSDEM. It also means treating all contributions alike.

There's one prerequisite to all of this working out, though: working in the open (as in "will be crawled, indexed and made visible by the major search engine of the day"), giving control to others over your baby project and potentially over what earns your daily living, means you need a lot of trust not only in others but also in yourself. If you're in a position where you're afraid that missteps will have negative repercussions on your daily life, you won't become comfortable with all of that. For projects coming to the incubator, as well as for companies paying contributors to become open source developers in their projects, that's an important lesson in my personal view: unless committers already feel self-confident and independent enough of your organisation, as well as of the team they are part of, to take decisions on their own, you will run into trouble on the way towards Apache at least.

December 04 2017

Evaggelos Balaskas - System Engineer: Install Signal Desktop to Archlinux

How to install Signal Desktop on Arch Linux.

Download Signal Desktop

e.g. the latest version at the time of writing, v1.0.41

$ curl -s https://updates.signal.org/desktop/apt/pool/main/s/signal-desktop/signal-desktop_1.0.41_amd64.deb \
    -o /tmp/signal-desktop_1.0.41_amd64.deb

Verify Package

There is a way to manually verify the integrity of the package by checking the hash value of the file against a GPG-signed file. To do that we need to add a few extra steps to our procedure.

Download Key from the repository

$ wget -c https://updates.signal.org/desktop/apt/keys.asc

--2017-12-11 22:13:34--  https://updates.signal.org/desktop/apt/keys.asc
Loaded CA certificate '/etc/ssl/certs/ca-certificates.crt'
Connecting to 127.0.0.1:8118... connected.
Proxy request sent, awaiting response... 200 OK
Length: 3090 (3.0K) [application/pgp-signature]
Saving to: ‘keys.asc’

keys.asc                          100%[============================================================>]   3.02K  --.-KB/s    in 0s      

2017-12-11 22:13:35 (160 MB/s) - ‘keys.asc’ saved [3090/3090]

Import the key to your gpg keyring

$ gpg2 --import keys.asc

gpg: key D980A17457F6FB06: public key "Open Whisper Systems <support@whispersystems.org>" imported
gpg: Total number processed: 1
gpg:               imported: 1

You can also fetch and verify the public key from a well-known key server:

$ gpg2 --verbose --keyserver pgp.mit.edu --recv-keys 0xD980A17457F6FB06

gpg: data source: http://pgp.mit.edu:11371
gpg: armor header: Version: SKS 1.1.6
gpg: armor header: Comment: Hostname: pgp.mit.edu
gpg: pub  rsa4096/D980A17457F6FB06 2017-04-05  Open Whisper Systems <support@whispersystems.org>
gpg: key D980A17457F6FB06: "Open Whisper Systems <support@whispersystems.org>" not changed
gpg: Total number processed: 1
gpg:              unchanged: 1

Here the key is already in place, so there are no changes.

Download Release files

$ wget -c https://updates.signal.org/desktop/apt/dists/xenial/Release

$ wget -c https://updates.signal.org/desktop/apt/dists/xenial/Release.gpg

Verify Release files

$ gpg2 --no-default-keyring --verify Release.gpg Release

gpg: Signature made Sat 09 Dec 2017 04:11:06 AM EET
gpg:                using RSA key D980A17457F6FB06
gpg: Good signature from "Open Whisper Systems <support@whispersystems.org>" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg:          There is no indication that the signature belongs to the owner.
Primary key fingerprint: DBA3 6B51 81D0 C816 F630  E889 D980 A174 57F6 FB06

That means that the Release file is signed by Open Whisper Systems and the integrity of the file has not been changed/compromised.

Download Package File

We need one more file, the Packages file, which contains the hash values of the deb packages.

$ wget -c https://updates.signal.org/desktop/apt/dists/xenial/main/binary-amd64/Packages

But has this file been compromised?
Let's check it against the Release file:

$ sha256sum Packages

ec74860e656db892ab38831dc5f274d54a10347934c140e2a3e637f34c402b78  Packages

$ grep ec74860e656db892ab38831dc5f274d54a10347934c140e2a3e637f34c402b78 Release

 ec74860e656db892ab38831dc5f274d54a10347934c140e2a3e637f34c402b78     1713 main/binary-amd64/Packages

yeay !

Verify deb Package

Finally we are now ready to manually verify the integrity of the deb package:

$ sha256sum signal-desktop_1.0.41_amd64.deb

9cf87647e21bbe0c1b81e66f88832fe2ec7e868bf594413eb96f0bf3633a3f25  signal-desktop_1.0.41_amd64.deb

$ egrep 9cf87647e21bbe0c1b81e66f88832fe2ec7e868bf594413eb96f0bf3633a3f25 Packages

SHA256: 9cf87647e21bbe0c1b81e66f88832fe2ec7e868bf594413eb96f0bf3633a3f25

Perfect, we are now ready to continue
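
As a side note, the manual steps above can also be wrapped into a small script. The following is only a rough sketch based on the exact commands and URLs used in this post (version 1.0.41 and the xenial repository); it is not an official tool and will need adjusting when the version changes.

#!/usr/bin/env bash
# Sketch: automate the verification steps described above.
set -euo pipefail

VER="1.0.41"
BASE="https://updates.signal.org/desktop/apt"
DEB="signal-desktop_${VER}_amd64.deb"

cd /tmp

# fetch the package, the signing key, the Release files and the Packages index
curl -s "${BASE}/pool/main/s/signal-desktop/${DEB}" -o "${DEB}"
wget -c "${BASE}/keys.asc" \
        "${BASE}/dists/xenial/Release" \
        "${BASE}/dists/xenial/Release.gpg" \
        "${BASE}/dists/xenial/main/binary-amd64/Packages"

# import the key and check the signature on the Release file
gpg2 --import keys.asc
gpg2 --verify Release.gpg Release

# the Packages index must be listed in the signed Release file
grep "$(sha256sum Packages | awk '{print $1}')" Release

# the deb must match the hash recorded in the Packages index
grep "$(sha256sum "${DEB}" | awk '{print $1}')" Packages

echo "all checks passed"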

Extract under tmp filesystem

$ cd /tmp/

$ ar vx signal-desktop_1.0.41_amd64.deb

x - debian-binary
x - control.tar.gz
x - data.tar.xz

Extract data under tmp filesystem

$ tar xf data.tar.xz

Move Signal-Desktop under root filesystem

$ sudo mv opt/Signal/ /opt/Signal/

Done

Actually, that’s it!

Run

Run signal-desktop as a regular user:

$ /opt/Signal/signal-desktop

Signal Desktop

(screenshot: signal-desktop-splash.png)
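
If you prefer to start it without typing the full path, an optional extra (my own addition, not part of the upstream package) is to add a symlink somewhere in your PATH:

$ sudo ln -s /opt/Signal/signal-desktop /usr/local/bin/signal-desktop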

Proxy

Define your proxy settings in your environment:

declare -x ftp_proxy="proxy.example.org:8080"
declare -x http_proxy="proxy.example.org:8080"
declare -x https_proxy="proxy.example.org:8080"
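
To make sure these settings are in place when the client starts, one option is a tiny wrapper script like the sketch below. Note that proxy.example.org:8080 is a placeholder, and whether the Electron-based client honours the environment variables or needs an explicit --proxy-server switch may depend on the version, so treat that flag as an assumption to verify.

#!/usr/bin/env bash
# Sketch: export the proxy settings from above, then start Signal Desktop.
export ftp_proxy="proxy.example.org:8080"
export http_proxy="proxy.example.org:8080"
export https_proxy="proxy.example.org:8080"

# Some Chromium/Electron builds prefer an explicit proxy switch over the
# environment variables (assumption - drop it if not needed):
exec /opt/Signal/signal-desktop --proxy-server="proxy.example.org:8080" "$@"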

Signal

(screenshot: signal_desktop.png)

Tag(s): signal, archlinux

December 01 2017

DanielPocock.com - fsfe: Hacking with posters and stickers

The FIXME.ch hackerspace in Lausanne, Switzerland has started this weekend's VR Hackathon with a somewhat low-tech 2D hack: using the FSFE's Public Money Public Code stickers in lieu of sticky tape to place the NO CLOUD poster behind the bar.

Get your free stickers and posters

FSFE can send you these posters and stickers too.

November 24 2017

DanielPocock.com - fsfe: Free software in the snow

There are an increasing number of events for free software enthusiasts to meet in an alpine environment for hacking and fun.

In Switzerland, Swiss Linux is organizing the fourth edition of the Rencontres Hivernales du Libre in the mountain resort of Saint-Cergue, a short train ride from Geneva and Lausanne, 12-14 January 2018. The call for presentations is still open.

In northern Italy, not far from Milan (Malpensa) airport, Debian is organizing a Debian Snow Camp, a winter getaway for developers and enthusiasts in a mountain environment where the scenery is as diverse as the Italian culinary options. It is hoped the event will take place 22-25 February 2018.

November 22 2017

DanielPocock.com - fsfe: VR Hackathon at FIXME, Lausanne (1-3 December 2017)

The FIXME hackerspace in Lausanne, Switzerland is preparing a VR Hackathon on the weekend of 1-3 December.

Competitors and visitors are welcome, please register here.

Some of the free software technologies in use include Blender and Mozilla VR.

November 15 2017

DanielPocock.com - fsfe: Linking hackerspaces with OpenDHT and Ring

At the FIXME hackerspace (Lausanne) weekly meeting, Francois and Nemen are experimenting with the Ring peer-to-peer softphone:

Francois is using a Raspberry Pi and PiCam to develop a telepresence network for hackerspaces (the big screens in the middle of the photo).

The original version of the telepresence solution is using WebRTC. Ring's OpenDHT potentially offers more privacy and resilience.

Daniel's FSFE blog: KVM-virtualization on ARM using the “virt” machine type

Introduction

A while ago, I described how to run KVM-based virtual machines on libre, low-end virtualization hosts on Debian Jessie [1]. For emulating the ARM board, I used the vexpress-a15 machine type, which complicates things as it requires specifying compatible DTBs. Recently, I stumbled over Peter’s article [2], which describes how to use the generic “virt” machine type instead of vexpress-a15. This promises some advantages, such as the ability to use PCI devices, and makes the process of creating VMs much easier.

It was also reported to me that my instructions cause trouble on Debian Stretch (virt-manager generates incompatible configs when choosing the vexpress-a15 target), so I spent some time trying to find out how to run VMs using the “virt” machine type with virt-manager (Peter’s article only describes the manual way using command-line calls). This involved several traps, so I decided to write up this article. It gives a brief overview of how to create a VM using virt-manager on an ARMv7 virtualization host such as the Cubietruck or the upcoming EOMA68-A20 computing card.

Disclaimer

All data and information provided in this article is for informational purposes only. The author makes no representations as to the accuracy, completeness, currentness, suitability, or validity of any information in this article and will not be liable for any errors, omissions, or delays in this information or any losses, injuries, or damages arising from its display or use. All information is provided on an as-is basis.

In no event will the author be liable for any loss or damage including, without limitation, indirect or consequential loss or damage, or any loss or damage whatsoever arising from loss of data or profits arising out of, or in connection with, the use of this article.

Tested VMs

I managed to successfully create and boot up the following VMs on a Devuan (Jessie) system:

  • Debian Jessie Installer
  • Debian Stretch Installer
  • Debian Unstable Installer
  • Fedora 27 (see [3] for instructions on how to obtain the necessary files)
  • Arch Linux, using the latest ARMv7 files available at [4]
  • LEDE 17.01.4

I was able to reproduce the steps for the Debian guests on a Debian Stretch system as well (I did not try with the other guests).

Requirements / Base installation

This article assumes you have set up a working KVM virtualization host on ARM. If you don’t, please work through my previous article [1].
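
To quickly double-check that the host is ready, you can look for the KVM device node and use the validation helper shipped with the libvirt client tools (a sketch; package names differ between distributions, on Debian the helper is part of libvirt-clients):

$ ls -l /dev/kvm
$ virt-host-validate qemu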

Getting the necessary files

Depending on the system you want to run in your Guest, you typically need an image of the kernel and the initrd. For the Debian-unstable installer, you could get the files like this:

wget http://http.us.debian.org/debian/dists/unstable/main/installer-armhf/current/images/netboot/vmlinuz -O vmlinuz-debian-unstable-installer

and

wget http://http.us.debian.org/debian/dists/unstable/main/installer-armhf/current/images/netboot/initrd.gz -O initrd-debian-unstable-installer.gz

Creating the Guest

Now, fire up virt-manager and start the wizard for creating a new Guest. In the first step, select “Import existing disk image” and the default settings, which should use the Machine Type “virt” already:

In the second step, choose a disk image (or create one) and put in the paths to the kernel and initrd that you downloaded previously. Leave the DTB path blank and put "console=ttyAMA0" as kernel arguments. Choose an appropriate OS type or just leave the default (which may negatively impact the performance of your guest, or cause other problems such as the virtual network card not being recognized by the installer):

Next, select memory and CPU settings as required by your Guest:

Finally, give the VM a proper name and select the “Customize configuration before install” option:

In the machine details, make sure the CPU model is “host-passthrough” (enter it manually if you can’t select it in the combo box):

In the boot options tab, make sure the parameter "console=ttyAMA0" is there (otherwise you will not get any output on the console). Depending on your guest, you might also need more parameters, such as for setting the root filesystem:
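
For instance, a Debian guest with its root filesystem on the second partition of the first virtio disk might use something along these lines (the root device is an assumption and depends on how you partitioned the guest):

console=ttyAMA0 root=/dev/vda2 rw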

Finally, click “begin installation” and you should see your VM boot up:
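
For reference, roughly the same guest can also be defined without the GUI using virt-install. The following is only a sketch under the assumptions of this article (ARMv7 host, “virt” machine type, the installer kernel and initrd downloaded earlier); the VM name, disk path and memory/CPU values are placeholders:

virt-install \
  --name debian-unstable-armhf \
  --arch armv7l --machine virt \
  --cpu host-passthrough \
  --memory 1024 --vcpus 2 \
  --import \
  --disk path=/var/lib/libvirt/images/debian-unstable.qcow2,format=qcow2 \
  --boot kernel=/var/lib/libvirt/images/vmlinuz-debian-unstable-installer,initrd=/var/lib/libvirt/images/initrd-debian-unstable-installer.gz,kernel_args="console=ttyAMA0" \
  --graphics none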

Post-Installation steps

Please note that after installing your guest, you must extract the kernel and initrd from the installed guest image (you want to boot the real system, not the installer) and change your VM configuration to use these files instead.

Eventually, I will provide instructions on how to do this for a few guest types. Meanwhile, you can find instructions for extracting the files from a Debian guest in Peter’s article [2].
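
As a rough sketch of one possible way to do this for a Debian guest (assuming libguestfs-tools is installed on the host, the guest is shut down, and with the kernel version in the file names being a placeholder you need to adapt), you can copy the kernel and initrd out of the guest disk image with virt-copy-out:

$ virt-copy-out -a /var/lib/libvirt/images/debian-unstable.qcow2 \
    /boot/vmlinuz-4.9.0-4-armmp /boot/initrd.img-4.9.0-4-armmp \
    /var/lib/libvirt/images/

Afterwards, point the VM at the extracted files (for example via "virsh edit <name>" or the corresponding fields in virt-manager) and extend the kernel arguments so that the guest finds its root filesystem, e.g. "console=ttyAMA0 root=/dev/vda2" (the root device again being an assumption).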

[1] https://blogs.fsfe.org/kuleszdl/2016/11/06/installing-a-libre-low-power-low-cost-kvm-virtualization-host/
[2] https://translatedcode.wordpress.com/2016/11/03/installing-debian-on-qemus-32-bit-arm-virt-board
[3] https://fedoraproject.org/wiki/QA:Testcase_Virt_ARM_on_x86
[4] http://os.archlinuxarm.org/os/ArchLinuxARM-armv7-latest.tar.gz

November 13 2017

English on Björn Schießle - I came for the code but stayed for the freedom: Software freedom in the Cloud

Looking for Freedom

How to stay in control of the cloud? - Photo by lionel abrial on Unsplash

What does software freedom actually mean in a world where more and more software no longer runs on our own computers but in the cloud? I have been thinking about this topic for quite some time, and from time to time I run into discussions about it, for example a few days ago on Mastodon. Therefore I think it is time to write down my thoughts on this topic.

Cloud is a huge marketing term which can mean a lot of things. In the context of this article, cloud means something quite similar to SaaS (software as a service). This article will use the two terms interchangeably, because these are also the two terms the Free Software community uses to discuss this topic.

The original idea of software freedom

At the beginning, every software was free. In the 80s, when computers became widely used and people started to make software proprietary in order to maximise their profit, Richard Stallman came up with an incredible hack. He used copyright to reestablish software freedom by defining these four essential freedoms:

  1. The freedom to run the software for every purpose
  2. The freedom to study how the program works and adapt it to your needs
  3. The freedom to distribute copies
  4. The freedom to distribute modified versions of the program

Every piece of software licensed in a way that grants the user these four freedoms is called Free Software. These are the basic rules to establish software freedom in the world of traditional computing, where the software runs on our own devices.

Today almost no company can exist without using at least some Free Software. This huge success was possible due to a pragmatic move by Richard Stallman, driven by a vision of how a freedom-respecting software world should look. His idea was the starting point for a movement which came up with a completely new set of software licenses and various Free Software operating systems. It enabled people to continue to use computers in freedom.

SaaS and the cloud

Today we no longer have just one computer. Instead we have many devices such as smartphones, tablets, laptops, smartwatches, small home servers, IoT devices and maybe still a desktop computer at our office. We want to access our data from all these devices and switch between them seamlessly while we work. That’s one of the main reasons why software as a service (SaaS) and the cloud became popular: software runs on a server and all the devices can connect to it. But of course this comes at a price; it means that we rely more and more on someone else’s computer instead of running the programs on our own. We lose control.

This is not completely new. Some of these solutions are quite old, others are rather new; some examples are mail servers, social networks, source code hosting platforms, file sharing services, platforms for collaborative work and many more. Many of these services are built with Free Software, but the software only runs on the server of the service provider, so the freedom never reaches the user. The user remains helpless. We hand over our data to servers we don’t control. We have no idea what happens to our data, and for many services we have no way to get it out of the service again. Even if we can export the data, we are often still helpless, because without the software which runs the original service we can’t perform the same operations on our own servers.

We can’t turn back the time

We can’t stop the development of such services. History tells us that we can’t stop technological progress, whether we like it or not. Telling people not to use it will not have any notable impact. Quite the opposite: we, the Free Software movement, would lose the reputation we have built over the last decades and, with it, any influence. We would no longer be able to change things for the better. Think again about what Richard Stallman did about thirty years ago. He grew up in a world where software was free by default. When computers became a mass-market product, more and more manufacturers turned software into a proprietary product. Instead of developing the powerful idea of Free Software, Richard Stallman could have decided to no longer use these modern computers and asked people to follow him. But would many people have joined him? Would it have stopped the development? I don’t think so. We would still have all the computers as we know them today, but without Free Software.

That’s why I strongly believe that, like thirty years ago, we again need a constructive and forward-looking answer to the new challenges brought to us by the cloud and SaaS. We, the Free Software community, need to be the driving force that leads this new way of computing in a direction that respects the users’ freedom, just as Richard Stallman did back then by starting the Free Software movement. All this is done by people, so it is people like us who can influence it.

Finding answers to these questions requires us to think in new directions. The software license is still the cornerstone; without the software being Free Software, everything else is void. But being Free Software is by no means enough to establish freedom in the world of the cloud.

What does this mean to software freedom?

Having a close look at cloud solutions, we realise that they contain, most of the time, two categories of software: software that runs on the server itself, and software served by the server but executed on the user’s computer, typically JavaScript.

Following the well-established definition of software freedom, the software distributed to the user needs to be Free Software. I would call this the necessary precondition. But by just looking at the license of the JavaScript code we are trying to solve today’s problems with the tools of the past, completely ignoring that in the world of SaaS your computer is no longer the primary device. Getting the source code of the JavaScript under a Free Software license is nice, but it is not enough to establish software freedom. The JavaScript is tightly connected to the software which runs on the server, so users can’t change it much without breaking the functionality of the service. Further, with each page reload the user gets the original version of the JavaScript again. This means that, with respect to the user’s freedom, access to the JavaScript code alone is insufficient. Free JavaScript has mainly two benefits: first, the user can study the code and learn how it works, and second, they can maybe reuse parts of it in their own projects. But to establish real software freedom, a service needs to fulfil more criteria.

The user needs access to the whole software stack, both the software which runs on the server and the software which runs in the browser. Without the right to use, study, share and improve the whole software stack, freedom will not be possible. That’s why the GNU AGPLv3 is incredibly important. Without going into too much detail, the big difference is how the license defines the meaning of “distribute”. This term is critical to the Free Software definition: it defines at which point the rights to use, study, share and improve the software are transferred to a user. Typically that happens when the user gets a copy of the software. But in the world of SaaS you no longer get a real copy of the software; you just use it over a network connection. The GNU AGPLv3 makes sure that this kind of usage already entitles you to get the source code. Only if both the software which runs on the server and the software which runs in the browser is Free Software can users start to consider exercising their freedom. Therefore my minimal definition of a freedom-respecting service would be that the whole software stack is Free Software.

But I don’t think we should stop here. We need more in order to drive innovation forward in a freedom-respecting way. This is also important because various software projects are already working on it; telling them that these extra steps are only “nice to have” but not really important sends the wrong message.

If the whole software stack is Free Software, we have achieved the minimum requirement that allows everyone to set up their own instance. But in order to avoid building many small islands, we need to enable the instances to communicate with each other, a feature called federation. We already see this in the area of freedom-respecting social networks and in the area of file sync and share. About a year ago I wrote an article arguing that this is a feature needed for the next generation of code hosting platforms as well, and I’m happy to see that GitLab has started to look into exactly this. Only if many small instances can communicate with each other, completely transparently for the user so that it feels like one big service, does exercising your freedom to run your own server become really interesting. Think for a moment about the World Wide Web: if you browse the Internet it feels like one gigantic universe, the web. It doesn’t matter if the page you navigate to is located on the same server or on a different one, thousands of kilometres away.

If we reach the point where the technology is built and licensed in a way that lets people decide freely where to run a particular service, there is one missing piece: we need a way to migrate from one server to another. Let’s say you start using a service provided by someone, but at some point you want to move to a different provider or decide to run your own server. In this case you need a way to export your data from the first server and import it to the new one, ideally in a way which allows you to keep the connection to your friends and colleagues in the case of a service which provides collaboration or social features. Initiatives like the User Data Manifesto have already thought about this and give some valuable answers.

Conclusion

How do we achieve practical software freedom in the world of the cloud? In my opinion these are the cornerstones:

  1. Free Software: the whole software stack, meaning both the software which runs on the server and the software which runs in the user’s browser, needs to be free. Only then can people exercise their freedom.

  2. Control: people need to stay in control of their data and need to be able to export and import it in order to move.

  3. Federation: being able to exercise your freedom to run your own instance of a service without creating small islands and losing the connection to your friends and colleagues.

This is my current state of thinking with respect to this subject. I’m happy to hear more opinions about this topic.
