Yuval Adam

OpenELEC Installation on Raspberry Pi 3

Raspberry Pi boards are immediate candidates for powering media center installations, especially the Raspberry Pi 3, which comes with a built-in wifi adapter. I’ve recently installed OpenELEC on a Raspi 3, but the setup required some tweaking to work properly. This short post documents the required changes, and assumes a working base installation of OpenELEC.

HDMI Flickering

In some cases the HDMI connection between the Pi and the TV/monitor can flicker in a very annoying way. Most of the HDMI configuration is done in the config.txt file, which is usually found in the boot partition; in OpenELEC’s case it lives at /flash/config.txt. Editing this file requires remounting the partition as writable:

$ mount -o remount,rw /flash
$ echo foo >> /flash/config.txt
$ mount -o remount,ro /flash

As for the flickering, it was fixed by raising the config_hdmi_boost value from its supposed default of 5 to 7.
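For reference, the relevant lines in /flash/config.txt ended up looking like this. config_hdmi_boost is the fix from above; the other two are common HDMI-stability knobs worth trying, not something this particular setup necessarily needed:

```
config_hdmi_boost=7
hdmi_force_hotplug=1
hdmi_drive=2
```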

Wireless Connection

For reasons unknown, OpenELEC’s support for the Raspi wireless adapter is severely lacking, and any attempt to connect to a wireless network returned a Network Error. Oddly enough, when connecting directly via the SSH command line (instead of the GUI) there were no problems. So I used this to first configure a connman profile, and then have OpenELEC use it on the next boot. All the details are in this gist, but generally speaking a profile is created at /storage/.cache/connman/$NAME.config:

[global]
Name = foo
Description = goo

[service_foo]
Type = wifi
Name = YourNetworkSSID
Passphrase = YourPassphrase

Verify that the profile works by running connmanctl scan wifi and connmanctl services, and connect to the newly created profile with connmanctl connect wifi_*_*_managed_psk. Once this profile is active, it will be recognized upon reboot and connected to automatically, thus working around the weird bugs in the GUI connection.

LAN Buffering

Finally, after getting proper HDMI output and a working WLAN connection, I noticed buffering issues when viewing media files from an NFS mount on the local network. There’s no reason an off-the-shelf router can’t handle an HD stream, so again this seemed to require another tweak on the Pi.

This time we create a file at /storage/.xbmc/userdata/advancedsettings.xml with the required settings:

<advancedsettings>
  <network>
    <buffermode>1</buffermode>
    <cachemembuffersize>20971520</cachemembuffersize>
    <readbufferfactor>3</readbufferfactor>
  </network>
</advancedsettings>
This config does several things. First, it tells XBMC to buffer all videos, including those originating on the LAN (as opposed to buffermode 0, which only buffers streams from the WAN). Next, it sets a cache size of 20MB, which is enough without using too much free memory. Finally, it raises the read buffer factor from the default of 1 to 3, which simply buffers more than the default.

All these settings together make OpenELEC on a Raspberry Pi 3 an actually usable media center, which works surprisingly well compared to the unusable default installation.

Deploying a Large Open-Source Wireless Network

This post is actually being published a year late, as the draft was written shortly after GeekconX in 2014. This year we deployed the same network with some slight changes, but the core of the setup remained the same. Here’s what we did.

Our mission was to set up a reliable internet connection for ~120 hackers and makers for the entire three days of Geekcon. This meant supporting up to 300 wired and wireless devices at peak usage. There were a few requirements that didn’t make our job easy at all:

  • the ADSL uplink provider could only set up a point at the administrative building that was 100-150 meters away from where Geekcon actually took place
  • the main hacking area is a very dense space that at peak will hold ~100 people at the same time
  • we had to cover the outdoor area, as well as the sports hall used for the final presentations
  • we didn’t have any professional networking equipment to deploy

Our only choice was to use off-the-shelf equipment and open-source stacks. This wasn’t really a limitation: the entire team preferred an open-source stack, so that similar setups could easily be replicated at other venues. It also meant using only routers we could find in stores, plus everything we had sitting around that we could lay our hands on. In practice that meant TP-Link devices, which are high quality yet cheap and have excellent OpenWrt support; OpenWrt was naturally flashed onto all the devices.

The network setup begins at the administrative building, where we had two ADSL uplinks, each capable of handling no more than 30/3 Mbps; we took two for redundancy and load balancing. The core router was placed right next to the ADSL modems. From there, we ran a point-to-point link from the building rooftop to the building housing the open space. Initially we planned to mount the antenna on the building tower, but settled for the balcony corner closest to the first building, since we had good enough line-of-sight there and it was easier to access. To the PtP antenna we hooked up a beefy switch and connected all our endpoint APs to it, as well as anyone who wanted a wired ethernet connection. The outdoor area was covered by the same APs. Finally, the sports hall was poorly covered: since we didn’t have another PtP antenna pair, we had to use a pair of bridged APs, one acting as a station of the outdoor APs and the other providing an access point inside the building. This wasn’t too bad, since no one actually ended up using that space.

Core Router

Since our architecture was based around a smart edge router with many stupid APs beneath it, we built the core router from a beefy x86_64 PC we had sitting around the hackerspace and installed pfSense on it, an awesome FreeBSD-based distribution built to power large-scale routers and firewalls. We threw in 3 PCIe gigabit ethernet adapters: one for each uplink modem and a third for the downstream PtP link. pfSense is super easy to configure, and we actually got lots of neat stuff for free, such as monitoring and management tools, as well as an intrusion detection system, which came in handy later on. The core router handled all the important network services: NAT, firewall, DHCP and DNS, leaving nothing for the peripheral routers to do other than bridge all the connections together and make sure end clients have a quick route to the core router.

Point-to-point Link

For the PtP link we used two of our best routers, TP-Link TL-WDR4300s (running OpenWrt), each connected to a D-Link dual-band directional antenna. For simplicity’s sake we opted for a single bridged wireless connection on the 5GHz band, giving us 450Mbps of effective bandwidth with the built-in MIMO setup. Mounting the antennas was the trickiest part, since the routers aren’t proper outdoor routers and aren’t PoE-capable, meaning we had to pull not only CAT5 cables up to the roof, but power cables as well. This is greatly simplified by using proper outdoor equipment and injecting power over ethernet. From the network perspective, the PtP link is nothing more than a stupid bridged link that repeats anything it gets from ethernet to wireless and vice versa.
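For the curious, one side of such a bridged link looks roughly like this in /etc/config/wireless on OpenWrt. This is a sketch: the SSID and key are placeholders, and the other side uses mode 'sta' (also with wds '1') instead of 'ap':

```
config wifi-device 'radio0'
        option channel '36'

config wifi-iface
        option device 'radio0'
        option network 'lan'
        option mode 'ap'
        option wds '1'
        option ssid 'ptp-link'
        option encryption 'psk2'
        option key 'PtPPassphrase'
```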

Access Points

The access points are the most boring aspect of the deployment: they are all just ‘stupid’ bridged APs, bridging the LAN ports and the wireless interfaces. Since we had one core router doing all the hard work, all peripheral services (essentially DHCP and DNS) had to be disabled on all the APs.
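On OpenWrt-based APs, silencing DHCP is a matter of telling dnsmasq to ignore the LAN bridge in /etc/config/dhcp; a sketch of what each AP ends up with:

```
config dhcp 'lan'
        option interface 'lan'
        option ignore '1'
```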

The most challenging aspect is frequency management: setting your frequencies such that no two adjacent APs transmit on the same channel, with as little overlap as possible between channels, so you don’t create too much noise for the other APs. This meant using channels 1, 6 and 11 on the 2.4GHz band, and channels 40, 44 and 48 on the 5GHz band. Channel 36 wasn’t used by the APs at all, since it was used by the PtP connection in close proximity.
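The 1/6/11 spacing falls straight out of the channel arithmetic: a 2.4GHz channel N is centered at 2407 + 5N MHz but is roughly 20MHz wide, so channel numbers need to be at least 5 apart to avoid overlap:

```shell
# center frequency of 2.4GHz channel N is 2407 + 5*N MHz
for ch in 1 6 11; do
  echo "channel $ch -> $((2407 + 5 * ch)) MHz center"
done
```

With 25MHz between adjacent centers and ~20MHz-wide channels, the three never overlap.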

We only used off-the-shelf devices, naturally all running OpenWrt. The trusty TL-WDR4300s were the core of the setup, since they have nice beefy specs and dual-band radios; we also threw in some smaller TP-Link devices to cover the outdoor areas. Placing the APs was a matter of trial and error, and it took a few re-arrangements before we found locations that gave more or less balanced coverage of the open space. We didn’t get to it, but when things start to get noisy, it’s highly advisable to reduce your TX power and use more APs, to make sure you’re making the best use of the EM spectrum without hitting physical limits.

IP address space

The address space was pretty easy to divide up. A single /24 wouldn’t have been enough, so the entire LAN operated on a larger subnet, leaving us plenty of room to play around with addressing:

  • was our core router
  • and were the two PtP devices
  • 10.14.3.{1..N} were the endpoint APs
  • Finally, DHCP addresses were allocated in the 10.14.{4,5,6,7}.0 range, plenty to go around.

Lessons Learned

  1. We were lucky to have mostly nice weather, but we did get one night of rain, which almost ruined all of our outdoor installations. If you don’t use proper outdoor gear, at the very least make sure it’s weather-proofed. PoE gear is highly recommended to simplify rooftop installations.
  2. Learn how to properly crimp your CAT5/6 cables! It’s pretty embarrassing to get a faulty connection just because you did a lousy job crimping your connectors.
  3. Make sure to hang your APs as high as possible; people are excellent absorbers of electromagnetic fields.
  4. On the second day, some security research n00bs were up to no good and started flooding our network with packets of crap, practically killing our DHCP server. Only after some downtime did we manage to kick them off and get Snort running. Make sure you get it running preemptively, before the trouble starts.


Check out more photos of the entire setup at https://imgur.com/a/rgDuF and https://imgur.com/a/gcXmu.

Huawei HG610 Capacitor Replacement

This post is mostly written for posterity’s sake, in case anyone ever runs into this weird problem.

I recently upgraded my home broadband connection to a 100M/3M VDSL line. The equipment provided (loaned, actually) by the telco (Bezeq) was full of backdoors and couldn’t be liberated in any way - a story in itself for another time. I decided to order a VDSL modem from eBay, put my own OpenWrt-based router behind it, and tell Bezeq they could take their shitty equipment back. I ended up ordering a Huawei HG610 VDSL modem; the seller was extremely responsive and I got the device within a week (A++++, would buy again, etc…)

Yesterday, the modem suddenly died. Upon closer inspection it was clear that something was wrong: on power-up the modem would make a very strong hissing sound, and the connection would drop almost instantly. Today I contacted the seller and notified him of the problem. To my surprise, he said I’m not the first customer in Israel using this modem who has had this kind of hardware problem. He offered to immediately send a replacement unit, and let me ship this one back once the new one arrived.

I thanked the seller for his awesome response, but inquired about the nature of the problem. It turns out there’s a certain capacitor failure on this device, which apparently is common when it’s used on the Israeli grid, for some reason. Capacitors are a rather easy fix, and I needed my connection ASAP, so I decided to fix the device myself rather than burden myself or the seller with the entire shipping process.

Opening the unit, it was very clear that an electrolytic capacitor had blown. In the above image (courtesy of Kirill Romaschenko, thanks!) it is C321, the green capacitor located right next to the yellow coil, adjacent to the ethernet bridge.

It took some work, but I managed to de-solder it from the board. The capacitor is rated at 470 µF / 10V. My hunch is that Bezeq is doing some funky stuff that sends higher voltages over the line, and the modem is failing under the strain. The logical replacement is a new cap with a slightly higher voltage rating, so I found a 470 µF / 16V capacitor and soldered it in. Sure enough, no more hissing, and the device works flawlessly again.

It will be interesting to see if this problem crops up on other devices on the same network.

Feeding Data to flightradar24.com

I finally got a nice ADS-B setup working, where I’m also feeding all my data to flightradar24.com. Here are some of the technical details on the setup.

RTL-SDR on TL-WR703n

The receiving endpoint is naturally based on an RTL-SDR dongle. I’ve grown to like the small form factor dongles, based on the R820T tuner (such as this one).

As for the host device, I had two requirements: it has to be small enough to stash somewhere without getting in the way, and it has to be power efficient. There’s no reason to run a hundred-watt machine just to power a single dongle.

The obvious choice is to go with the trusty TL-WR703n, of which I have, by now, a handful waiting to be used for various projects at any given time. It runs OpenWrt, with the wireless connection set to ‘station’ mode and connected to my regular home WLAN.

The software running is dump1090, an extremely lightweight ADS-B decoder for RTL-SDR; it runs flawlessly on the WR703n. If you’re running OpenWrt Barrier Breaker (i.e. trunk) you can install librtlsdr and dump1090 from opkg, and Steve Markgraf has also compiled .ipks for Attitude Adjustment.

Unfortunately, dump1090 doesn’t have a daemon mode out of the box, but it’s pretty easy to hack up a screen-based init script:

#!/bin/sh /etc/rc.common

START=99

start() {
    screen -S dump1090 -d -m -L dump1090 --net >> /dev/null
}

stop() {
    screen -r dump1090 -X quit
}
Note that I’m just using the --net flag and piping all output to /dev/null. This is because I only care about the network ports sending the right data, and I don’t want any output being logged to screen and filling up the precious flash space on the device.


The antenna I’m currently using is a plain simple quarter-wavelength ground plane antenna, created by sticking a copper wire in the central lead of an N connector, and 4 more wires as ground, on each of the screw holes. From there I have a pigtail cable with an N connector on one side, and an RP-SMA on the other. Another adapter is needed to convert from RP-SMA to MCX on the small dongle.

The plan is to mount this antenna on the roof of my building, but I haven’t gotten to that yet, so it’s just resting on my balcony. Even though I have significant interference from nearby buildings (which should be resolved once it goes on the roof), I’m able to receive all traffic within ~30NM, and in certain directions I can receive targets up to 130NM away.

Feeder software

The feeder software running is the official Linux feeder for flightradar24.com. Unfortunately, there are no builds for any OpenWrt device just yet, and the code is closed-source. So what I’m doing is running it as a daemon on one of my remote VPS machines, with the following flags:

$ fr24feed_x64_241 --fr24key=MY_SHARING_KEY --bs-ip=MY_HOME_STATIC_IP --bs-port=MY_PORT_NUM

This goes to my home IP, to a random port number which is forwarded to port 30003 on the WR703n, and pulls the data directly from dump1090, which spews out messages in BaseStation format, ready for the feeder software to digest.


Windows XP Installation Revisited

It’s been a while since I actually had to use Windows proper. Windows XP is still a damn fine operating system - and superior to its successors in my opinion - but I’ve largely transitioned to using Linux and OS X exclusively over the past few years. I’ve kept a few Windows XP images running in VMs for various uses, but hadn’t found a reason to do a native Windows installation until recently. To my surprise, I had forgotten many of the skills I used to possess in this environment.

My goal was simple: set up a native Windows installation (XP if possible) alongside an existing Linux installation on one of my desktop machines. Here are some notes I took while working my way through this territory.

Dual Boot

Since the existing system hosts Linux, I had to install Windows alongside it, with the ability to dual-boot. There are two options that work equally well for me.

First, installing Windows on a separate hard drive gives you the simple option of using the BIOS boot order (and boot menu) to control the boot process. Since I use Linux almost exclusively, and will rarely need the Windows installation, I could easily set the Linux hard drive as the first boot option, and use the one-time boot menu (F12 on most BIOSes) to force booting into the secondary hard drive whenever I wanted Windows. For me this suffices; but if you’re running on two partitions of the same hard drive, it won’t.

A nicer option is to customize my existing syslinux setup and chain-load Windows from the secondary hard drive. This means adding a new syslinux config entry:

LABEL Windows
       MENU LABEL Windows
       COM32 chain.c32
       APPEND mbr:0xa1b2c3d4 swap

In this case, we use the hard drive’s MBR identifier (found by running fdisk -l /dev/sdX) to ensure the proper drive is found. swap is a required option that overcomes issues when chain-loading across hard drives.

AHCI

SATA hard drives can pose a problem for Windows XP installations, since they can operate in one of two modes: IDE or AHCI. AHCI is an advanced mode that enables hot-swapping and other extra features for SATA drives, while IDE mode is the fallback. I would have had no problem running in IDE-only mode, if not for the fact that recent Linux kernels actually prefer AHCI; in my case the system could not even boot in IDE mode.

Unfortunately, Windows XP does not have native AHCI support in its base installation image, and switching between IDE/AHCI in the BIOS every time I wanted to switch between Linux and Windows is definitely not an option. This is a critical issue.

The usual fix requires taking a base ISO image and slipstreaming the required SATA drivers into it. When I tried this using the drivers recommended by my motherboard manufacturer, it failed.

The solution came from using a custom-made ISO, created by a group called ThumperDC (it’s a popular option on Pirate Bay and easy to find). This ISO comes with a myriad of driver options that simply work.

It should be noted that while various techniques exist to hack AHCI support into an existing non-AHCI installation, after trying them I must conclude that they are not reliable at all and rarely work. Your best bet is to properly support AHCI from the initial installation steps.

Slipstreaming Drivers

AHCI drivers aren’t the only thing that can be slipstreamed. It would be wise, for example, to add any expected drivers beforehand, so that upon first boot most of the components will just work™. For this I recommend nLite, which provides an awesome interface for customizing ISOs, not only with drivers but also by adding and removing installation features on demand. It runs only on Windows, so you’ll have to bootstrap this process from a Windows VM.

USB Booting

Finally, I wanted to install the entire thing from a USB drive. Burning a DVD is ghetto, yeah, and I can’t even remember whether I have any blank DVDs around. Since the standard Windows XP ISOs are not USB-bootable, some more work is required.

A handy tool called WinToFlash will happily take an ISO and add the required files to make it USB-bootable. Again, it runs only on Windows, so use the same VM as before.


This process is cumbersome, and not secure in any way, since it requires using pirated proprietary code which might be infested with bad stuff. However, my use case is very specific, and I needed the most lightweight setup I could find. Sure, if I used Windows 7 (or 8) things would have gone smoother; but compare the 650MB XP install to Windows 7’s 3.5GB. Windows XP is still a beast of an operating system that can do lots of good things for you, if you ever need to venture out of the comfort zone of free and open-source software.

Dokku as a Heroku Replacement

For the past year or two, Heroku has been my weapon of choice for quick-and-easy deployment of small projects. The ease of pushing a project to Heroku with everything behind the scenes being taken care of really was astounding.

My gripe with Heroku has always been with larger projects. For someone who has previously done devops for larger web projects, the threshold at which you need to start customizing things is relatively low, and the need to switch deployment strategies quickly becomes imminent. However, this post isn’t about larger projects that require their own set of deployment tools.

I’d like to focus on the category of smaller projects. On Heroku you get a single dyno for free, and for many uses (such as running periodic tasks) this is great. However, even a minuscule side-project might require, at the very least, a single web process alongside a single worker process. In Heroku’s case that’s $20/month, which might not be worth it for a small project.

Meet Dokku.

Dokku is a lightweight set of bash scripts that facilitates deployment of single applications on a single server, using Docker and the open-source Heroku buildpacks to simulate a Heroku-like deployment environment on your own boxes. It is by no means feature-complete, but for pushing a single project to a remote box, it definitely gets the job done.

Install Dokku

Dokku can be installed only on recent Ubuntu systems due to the various Docker dependencies. Also, due to various bugs, the best system to go with currently is Ubuntu 13.04. Installation is a one-liner:

$ wget -qO- https://raw.github.com/progrium/dokku/v0.2.1/bootstrap.sh | sudo DOKKU_TAG=v0.2.1 bash

Some VPS providers (such as DigitalOcean) provide pre-made “Dokku on Ubuntu” images, but I’ve found that using the “classic” Dokku bootstrap script works equally well, and is just as easy.

Plugins

Dokku comes with the most basic set of plugins, providing Git support and nginx virtual hosts. However, most web projects will also require some other necessities, such as a Postgres database, a Redis instance for caching and queuing, and a proper process manager.

All of these exist as Dokku plugins (redis, postgres and supervisord), but since I use them in every single project, I’ve added them all as submodules inside a dokku base project that I use.

If you’d like to use that base installation, you can easily install it with:

$ wget -qO- https://raw.github.com/yuvadm/dokku-base/master/bootstrap.sh | sudo bash

I highly suggest using this installation, since manually installing the aforementioned plugins sometimes doesn’t flow quite right.

SSH Keys

Once Dokku is installed, we’ll need to upload our SSH public key to the Dokku server so that we’ll be able to push projects to it:

$ cat ~/.ssh/id_rsa.pub | ssh dokku.example.com "sudo sshcommand acl-add dokku yourname"

Make sure you use proper SSH credentials when doing this (maybe you’ll need to specify a username or a PEM key file).

Pushing a Project

We’re now ready to push a project to Dokku! Let’s give it a shot:

$ cd test-project
$ git remote add dokku dokku@dokku.example.com:test-project
$ git push dokku master

If all went well, the buildstep should now run and eventually tell you that your project is deployed at http://test-project.dokku.example.com. Make sure you actually have proper DNS records for this, usually a wildcard CNAME from *.dokku.example.com to dokku.example.com.
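In BIND-style zone file terms, the records could look something like this (the A record’s address is a placeholder for your own server’s IP):

```
dokku           IN  A      203.0.113.10
*.dokku         IN  CNAME  dokku.example.com.
```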

At this point we need to create and attach a PostgreSQL database and a Redis instance. To run Dokku management commands, either log in to the Dokku server, or run the commands remotely using ssh -t dokku@dokku.example.com <command>.

$ dokku postgresql:start
$ dokku postgresql:create test-project
$ dokku redis:create test-project

After completing these steps, you should be able to run dokku config test-project and see that you have DATABASE_URL and REDIS_URL environment variables configured. Make sure your project references them.

Finally, supervisord should be running out of the box, so any process in your Procfile will be immediately recognized and started, and will be restarted on redeployment or any other failure.


Dokku is a really cheap and easy way to deploy medium-sized projects that would otherwise cost a few $$$ on Heroku. It’s shaping up to be very stable, and a wide range of plugins supply almost any behavior you need from it.

Make Your Printer Go Wireless for $20

I just got a new printer - the cheapest B/W laser printer I could find - and I really wanted a network option for it, since I’d like to print from the different laptops and computers around my house. However, the premium for a printer with an ethernet/wireless option in this cheap printer category is around 50-70% of the printer’s value. Way too much. I’m going to hack my way out of this problem.

For this, we’re going to use the $20 TP-Link WR703N, which is actually a very small router; once flashed with OpenWrt, it is capable of doing pretty much anything you can think of with a Linux kernel, an ethernet port, a WiFi connection, and a USB port. This device is perfect as a wireless printer adapter.

Install OpenWrt

First of all, we need to flash the device with OpenWrt. Simply follow these instructions, which describe exactly which binary to download. Flashing is easily done via the existing default web interface, which unfortunately is only in Chinese, but there are some good screenshots on the Xinchejian hackerspace wiki that you can follow. Caution: when flashing, make sure you’re aware of the pitfalls of some of the recent firmwares; see the warnings section for details.

Note that some vendors on eBay already sell WR703N devices with OpenWrt pre-flashed instead of the stock firmware, in which case you don’t have to do any of the above.

Configure OpenWrt

Now that we have OpenWrt installed, it’s time to configure it. By default, the wireless interface is disabled, leaving only the ethernet port working. Upon connection, you’ll be served a 192.168.1.x IP address via DHCP from the device. For our setup, we’ll assume we want to locate the printer somewhere without having to pull network cables to it, so we want the wireless adapter used as the WAN interface, hooking up to the existing wireless network. We’ll also want to put the LAN on a different subnet, to ensure we’re not colliding with the existing LAN (assuming the existing home network runs on the same default subnet.)

First, we need to open up the wireless configuration on /etc/config/wireless and edit it to look like so:

config wifi-iface
        option device radio0
        option network wan
        option mode sta
        option ssid MyNetworkName
        option encryption psk2
        option key MyNetworkPassword

This configuration sets up the radio0 device to hook up as a station (sta) to the WAN on the given network. psk2 assumes WPA2-PSK encryption with the given password.

You also have to make sure you update the physical 802.11 channel on the radio0 config line that says:

option channel 11

It must match your existing wireless channel.

Now we’ll add the WAN interface itself in the /etc/config/network config:

config interface 'wan'
        option ifname 'wlan0'
        option proto 'dhcp'

And since we’re setting up a new LAN subnet, change the existing LAN interface on the following line to:

option ipaddr ''

The last thing is to make sure we can log into the device remotely from the WAN side, since hooking up ethernet just to manage it is a PITA. Open the /etc/config/firewall config file and add this rule at the end:

config rule
        option name 'allow-ssh'
        option src 'wan'
        option target 'ACCEPT'
        option dest_port '22'

That’s all the basic config we need. After reboot, the device should show up on your LAN, and you’ll be able to log into it to continue to the next steps.

Printer configuration

We’ll move on to installing the important stuff. The printer setup is based on the p910nd daemon, which acts as a proxy between the network and the printer. Its main advantage is that it is a non-spooling server (as opposed to, say, CUPS), meaning it doesn’t need to hold print jobs in memory while printing. Remember, the WR703N is a very limited device.

In order to set up the printer we need to install p910nd as well as the USB printer kernel module, all through the opkg package manager:

opkg update
opkg install p910nd kmod-usb-printer

After this, if you connect the printer, you should see either the /dev/usb/lp0 or /dev/lp0 device. Run dmesg to see the successful USB connection, or any errors.

Go on over to the /etc/config/p910nd config, enable the printer, and make sure it points to the right device. At this point you’ll also have to open port 9100 on the firewall, in exactly the same way we did for the SSH port previously. Make sure to restart the firewall after this addition, using /etc/init.d/firewall restart.
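For reference, the resulting /etc/config/p910nd looks roughly like this (a sketch; adjust the device node to whichever one showed up for your printer):

```
config p910nd
        option device '/dev/usb/lp0'
        option port '0'
        option bidirectional '1'
        option enabled '1'
```

Note that port '0' means the daemon listens on TCP port 9100 (9100 plus the port number).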

Now we can start and enable the p910nd daemon, to make it run now, as well as on each boot:

/etc/init.d/p910nd start
/etc/init.d/p910nd enable

Configure client

We’re almost there. Now we just have to create a new printer device on our client machine; let’s assume you already have all the drivers installed.

These instructions vary by client OS, but generally speaking you need to create a new network printer at the device’s WAN IP (you can find it by SSHing into the device and checking what ifconfig reports for the wlan0 interface) and, if needed, define port 9100. The protocol selected should be AppSocket or HP JetDirect.
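On a CUPS-based client (Linux or OS X), the same thing can be done from the command line; a sketch, where the queue name and IP address are placeholders for your own values:

```
$ lpadmin -p wr703n-laser -E -v socket://192.168.2.1:9100
```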

Try printing a test page. If all went well, you’ve just saved yourself some money and learned how to create your own cheap wireless adapter for your printer.

(Re)building the Cryptoparty Community

“Assume good faith” (rule 0 of every community)

The saddest thing about the recent events involving Cryptoparty and 29C3 is that I have yet to see a single constructive discussion. 29C3 was the first chance for the Cryptoparty community to meet in a physical space and discuss how to evolve from there. We had that chance, and we blew it.

The 29C3 Cryptoparty itself was actually quite fun, and even though we were essentially preaching to the choir, it was a good experience. There was a good discussion on the first day, sharing experiences and knowledge from various Cryptopartae. We also had some productive hours discussing how we should continue work on the Cryptoparty Handbook. Alas, we should’ve done more than that. The setting was toxic: despite people keeping a straight face, everyone knew there was something wrong.

The Cryptoparty community has only just started to take form. The social connections have just started to form. Up until now, we just had the Cryptoparty meme - symbolized by a hashtag bearing the same name - and some local meetups. We have only barely begun to form global connections and extrapolate the collective knowledge into further projects and long-term goals. For me, that was one of the main goals of meeting the people behind Cryptoparty at 29C3.

What we got instead was a shitstorm of rage and anger, rendering the environment completely unproductive. If the goal is to make for some juicy gossip on teh twitterz, we’re definitely winning. But let’s not delude ourselves that this is some form of constructive community action.

I applaud all who participate in the 140-character-free-for-all - it definitely satisfied my primal need for some good gossip. At the end of the day, it’s pretty easy to discern who in the community is actually interested in building shit and spreading crypto through honest constructive discussion.

Key members of the Cryptoparty community are well aware of my stance on certain issues of the past two weeks, as I have approached them directly and let them know of my concerns and views. I see no use in discussing these recent events in public, since none of them have anything to do - whatsoever - with building the Cryptoparty community.

I hope that by the time 30C3 occurs, we - as a community - can show significant progress not only in our efforts to spread the use of crypto, but in the way we handle communications between ourselves, with honesty and respect.

A Primer on Cheap Software Defined Radios

I’ve always been fascinated by radio. I clearly remember discovering numbers stations at the age of 9 using my grandparents’ old shortwave radio, and I was captivated by the concept of stuff being broadcast over the air - discounting FM radio, which was ordinary.

Actually, I’ve always wanted to buy a frequency scanner and learn more about radio, but never got around to doing so - something didn’t feel right. Last week I found the right thing I’d been waiting for: an open-source software stack and a $15 USB dongle that turn my desktop computer into a software defined radio. Essentially, this means that anyone can, very cheaply, pull data out of thin air (literally) and analyze it using code.

Up until now, SDR could only be achieved using expensive equipment, and using proprietary drivers and software. The $15 SDR option is a serious breakthrough in making the SDR world more accessible. As with most new technologies, the open-source SDR world is still not very user-friendly, and in this post I’ll try to outline the basic stuff a beginner should know when entering this world.

The basis for SDR is GNU Radio, an open-source toolkit that provides all the necessary mathematical building blocks for implementing SDR. In essence, GNU Radio is a set of APIs that let you build usable SDR programs. An important part of GNU Radio is the GNU Radio Companion, a simple GUI that lets you connect various signal processing components into a single graph and generate code from it. The thing is that, for most basic cases, we don’t really want to write the signal processing code ourselves.

Let’s go back to the hardware part. Up until now, if you wanted to do SDR you had to use expensive receivers, such as the Icom R2500. Naturally, these proprietary products natively supported Windows PCs, and you could forget about Linux, not to mention seeing any code for the software or drivers. Granted, USRP devices were much more open and accessible, but the hardware was still very expensive, and posed a high barrier to entry for novice users who just wanted to play around.

As it turns out, it’s possible to use cheap DVB-T USB dongles (like this one) and hack them into proper SDR receivers. DVB-T is a worldwide standard for digital TV broadcasting, and apparently the cheap tuners that are manufactured en masse are just the thing we can use to do poor man’s SDR.

The software we use to handle the cheap dongles is rtl-sdr, and it’s the core of the setup. Now, setting up the entire stack is the tricky part. The GNU Radio stack has lots of dependencies, both C and Python libs, and there’s no easy, cross-platform way to set it up. I actually kind of gave up on my Mac setup, and it took me several hours to get shit running on my Linux box. Other than throwing a bunch of links, I really don’t have any better installation instructions than the ones out there. There will be lots of errors and dependency issues along the way; it’s a matter of sifting through wikis and lots of Googling until something works. Here are some links that should cover most of what you’ll need:

Fortunately, all tools use standard autoconf and cmake toolchains, so the installation procedure for most packages will be similar. If all went well, at this point, we’ll want to see the following test running smoothly:

$ rtl_test -t
Found 1 device(s):
  0:  ezcap USB 2.0 DVB-T/DAB/FM dongle

Using device 0: ezcap USB 2.0 DVB-T/DAB/FM dongle
Found Elonics E4000 tuner
Supported gain values (18): -1.0 1.5 4.0 6.5 9.0 11.5 14.0 16.5 19.0 21.5 24.0 29.0 34.0 42.0 43.0 45.0 47.0 49.0
Benchmarking E4000 PLL...
[E4K] PLL not locked for 51000000 Hz!
[E4K] PLL not locked for 2227000000 Hz!
[E4K] PLL not locked for 1114000000 Hz!
[E4K] PLL not locked for 1241000000 Hz!
E4K range: 52 to 2226 MHz
E4K L-band gap: 1114 to 1241 MHz
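For reference, the build itself follows the standard cmake flow mentioned above. Here’s roughly what building rtl-sdr from source looked like for me - the repository location and flags may well have changed since:

```shell
# fetch the rtl-sdr sources from the osmocom repository
git clone git://git.osmocom.org/rtl-sdr.git
cd rtl-sdr

# standard out-of-tree cmake build
mkdir build && cd build
cmake ../ -DINSTALL_UDEV_RULES=ON
make
sudo make install
sudo ldconfig
```

The udev rules flag saves you from having to run everything as root just to access the USB dongle.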

After getting the dongle and the drivers all set up, we want to listen to some stuff! As I mentioned earlier, building various signal processing flows is totally beyond the scope of what we’re trying to do; all we want is a simple tuner with some knobs to twist, and eventually hear some sound coming out of the speakers. The easiest receiver software I’ve found so far is gqrx (also on Github).

Gqrx is very easy to grok, even for beginners with no experience listening to the radio waves. Start off by picking a frequency that you know should be active - broadcast FM radio is the obvious choice here - and just tinker with the knobs until it sounds reasonable. Learn the difference between AM and FM. Learn how the FM filter works. Play with the squelch levels to silence the white noise on channels that aren’t always active. From my experience, it takes a while to understand how everything comes together.
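Under the hood, FM demodulation itself is surprisingly simple: the audio is just the derivative of the received signal’s phase. Here’s a minimal numpy sketch of the idea on a synthesized signal - not gqrx’s actual code, just an illustration of the math:

```python
import numpy as np

def fm_demod(iq):
    # The instantaneous frequency is the derivative of the phase;
    # for discrete samples, that's the angle between consecutive ones.
    return np.angle(iq[1:] * np.conj(iq[:-1]))

# Synthesize a carrier frequency-modulated by a 1 kHz tone
fs = 240_000                        # sample rate, Hz
t = np.arange(fs // 10) / fs        # 100 ms of samples
tone = np.sin(2 * np.pi * 1_000 * t)
deviation = 75_000                  # broadcast FM deviation, Hz
phase = 2 * np.pi * deviation * np.cumsum(tone) / fs
iq = np.exp(1j * phase)

audio = fm_demod(iq)                # proportional to the original tone
```

A real receiver adds filtering, decimation and de-emphasis around this core, but the phase-difference trick is the heart of it.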

After playing around with broadcast FM, you can advance to other transmissions: air traffic, ham radio, police and fire services, navigation beacons, GPS, GSM, POCSAG, P25. Each of these subjects is an entire post in and of itself.

The final point I want to make is that listening to radio waves has lots of nuances to it. The stock antenna shipped with the dongles is absolutely insufficient for receiving anything other than strong signals. If you’re serious about doing SDR, you’ll have to invest time researching proper antenna setups and reducing noise.

Nonetheless, this cheap SDR setup is mind-blowing in how easy it can be to start playing around with stuff that used to be extremely expensive.

Deploying Periodical Tasks on Heroku

Heroku is an awesome platform for hosting web applications, that much is obvious. A few days ago I had another application to deploy on Heroku, but with a different usage profile. The application, a simple breaking news tweeting app, periodically scrapes a popular Israeli forum for breaking headlines and tweets them - a fairly straightforward task. However, this application has no request-response cycle, and in fact has no open web gateway - just a simple task running periodically, every minute in our case.

Naturally, this task needs to run on a server that’s available 24/7, not just on a random desktop. Sure, I have several VMs I could piggyback this task on, but I wanted to find a way to package this little task properly, such that I could deploy it easily on Heroku and forget about the whole thing. Since I’m running a single process on a single Heroku dyno, if I could get it to work, it wouldn’t cost a thing.

For asynchronous and scheduled tasks in Python, the obvious solution is to use Celery. The core of the setup is a single Celery worker running a periodic task. Since we only have one worker, and we can’t spare another process for the Celery beat scheduler (it’d cost another Heroku dyno, which isn’t free), we’ll run the celery worker process with the -B flag, which bundles the worker and the beat scheduler into one convenient process.

Celery can’t work without a message broker; with Heroku, we’ll naturally use the redistogo:nano plan.
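Provisioning the broker is a one-liner - at the time of writing, the addon syntax was:

```shell
$ heroku addons:add redistogo:nano
```

Heroku then exposes the connection string in the REDISTOGO_URL environment variable, which is exactly what the worker code reads.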

Here’s the code for a simple worker, tasks.py:

import logging

from celery import Celery
from celery.task import periodic_task
from datetime import timedelta
from os import environ

REDIS_URL = environ.get('REDISTOGO_URL', 'redis://localhost')

celery = Celery('tasks', broker=REDIS_URL)

def fib(n):
    if n > 1:
        return fib(n - 1) + fib(n - 2)
    return 1

@periodic_task(run_every=timedelta(minutes=1))
def print_fib():
    # log the 30th Fibonacci number every minute
    logging.info(fib(30))

To wrap it up, you’ll need a Procfile with a single line launching the worker:

worker: celery -A tasks worker -B --loglevel=info

I find this setup to be very convenient if I need to deploy a single recurring task, and not care at all about setting up cron jobs or manually configuring deployment environments. Heroku FTW.

All the code, as always, is in a single repo on Github: https://github.com/yuvadm/heroku-periodical. Enjoy!