Yuval Adam

Huawei HG610 Capacitor Replacement

This post is mostly written for posterity’s sake, in case anyone ever runs into this weird problem.

I recently upgraded my home broadband connection to a 100M/3M VDSL line. The equipment given (loaned, actually) by the telco (Bezeq) was full of backdoors and couldn’t be liberated in any way – a story in itself for another time. I decided to order a VDSL modem from eBay and put my own OpenWRT-based router behind that, and tell Bezeq they can take their shitty equipment back. I ended up ordering a Huawei HG610 VDSL modem, the seller was extremely responsive and I got my device within a week (A++++ would buy again, etc…)

Yesterday, the modem suddenly died. Upon closer inspection it was clear that something was wrong: on power-up the modem would make a very loud hissing sound, and the connection would drop almost instantaneously. Today I contacted the seller and notified him of the problem. To my surprise, he said I wasn’t the first customer in Israel using this modem and running into this kind of hardware problem. He offered to immediately send a replacement unit, and said I could ship this one back once the new one arrived.

I thanked the seller for his awesome response, but inquired about the nature of the problem. It turns out there’s a certain capacitor failure on this device, which apparently is common when used on the Israeli grid, for some reason. Capacitors are a rather easy fix, and I need my connection ASAP, so I decided to fix the device myself rather than burden myself or the seller with the entire shipping process.

Opening the unit, it was very clear that an electrolytic capacitor had blown. In the above image (courtesy of Kirill Romaschenko, thanks!) it is C321, the green capacitor located right next to the yellow coil, adjacent to the ethernet bridge.

It took some work, but I managed to de-solder it from the board. The capacitor is rated at 470 µF / 10V. My hunch is that Bezeq is doing some funky stuff that sends higher voltages over the line, and the modem is buckling under the strain. The logical replacement is a new cap with a slightly higher voltage rating. I found a 470 µF / 16V capacitor and soldered it in as a replacement. Sure enough, no more hissing, and the device works flawlessly again.

It will be interesting to see if this problem crops up on other devices on the same network.

Feeding Data to flightradar24.com

I finally got a nice ADS-B setup working, where I’m also feeding all my data to flightradar24.com. Here are some of the technical details on the setup.

RTL-SDR on TL-WR703n

The receiving endpoint is naturally based on an RTL-SDR dongle. I’ve grown to like the small form factor dongles, based on the R820T tuner (such as this one).

As for the host device, I have two requirements for it: it has to be small enough so I can stash it somewhere without getting in the way, and it has to be power efficient. There’s no reason to power a hundred watt device just for powering a single dongle.

The obvious choice is to go with the trusty TL-WR703n, of which I have a handful waiting to be used for various projects at any given time. It is installed with OpenWrt, with the wireless interface set to ‘station’ mode and connected to my regular home WLAN.

The software running is dump1090, an extremely lightweight ADS-B decoder for RTL-SDR that runs flawlessly on the WR703n. If you’re running OpenWrt Barrier Breaker (i.e. trunk) you can install librtlsdr and dump1090 from opkg, and Steve Markgraf has also compiled .ipks for Attitude Adjustment.
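On a Barrier Breaker image this boils down to two commands (assuming the packages carry exactly those names in your feed):

opkg update
opkg install librtlsdr dump1090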

Unfortunately, dump1090 doesn’t have a daemon mode out of the box, but it’s pretty damn easy to hack up a screen-based init script:

#!/bin/sh /etc/rc.common

START=95

start() {
    # run dump1090 detached inside a screen session named "dump1090"
    screen -S dump1090 -d -m -L dump1090 --net >> /dev/null
}

stop() {
    # kill the named screen session, and dump1090 along with it
    screen -r dump1090 -X quit
}

Note that I’m just using the --net flag and redirecting all output to /dev/null. This is because I only care about the network ports serving the right data, and I don’t want any output being logged by screen and filling up the precious flash space on the device.
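A quick sanity check is to connect to the BaseStation output port (30003) from another machine on the LAN; the IP below is a placeholder for the WR703n’s address:

nc 192.168.1.50 30003    # should stream BaseStation (SBS) messages; Ctrl-C to stop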

Antenna

The antenna I’m currently using is a plain and simple quarter-wavelength ground plane antenna, made by sticking a copper wire into the center lead of an N connector, plus 4 more wires as a ground plane, one on each of the screw holes. From there I have a pigtail cable with an N connector on one side and an RP-SMA on the other. Another adapter is needed to convert from RP-SMA to MCX on the small dongle.
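For reference, each element should be roughly a quarter wavelength at 1090 MHz, which works out to about 6.9 cm; a quick back-of-the-envelope check:

echo 'scale=4; 299792458 / 1090000000 / 4' | bc    # quarter wavelength in metres, => .0687, i.e. ~6.9 cm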

The plan is to mount this antenna on the roof of my building, but I haven’t gotten around to that yet, so it’s just resting on my balcony. Even though I have significant interference from nearby buildings (which should be resolved once it goes on the roof), I’m able to receive all traffic within ~30NM of my location, and in certain directions I am able to receive targets up to 130NM away.

Feeder software

The feeder software running is the official Linux feeder for flightradar24.com. Unfortunately, there are no builds for any OpenWrt device just yet, and the code is closed-source. So what I’m doing is running it as a daemon on one of my remote VPS machines, with the following flags:

$ fr24feed_x64_241 --fr24key=MY_SHARING_KEY --bs-ip=MY_HOME_STATIC_IP --bs-port=MY_PORT_NUM

This connects to my home IP, on a random port number which is forwarded to port 30003 on the WR703n, and pulls the data directly from dump1090, which spews out messages in BaseStation format, ready for the feeder software to digest:

MSG,3,,,06A065,,,,,,,33000,,,30.33963,34.27800,,,0,0,0,0
MSG,8,,,7415A3,,,,,,,,,,,,,,,,,
MSG,6,,,738484,,,,,,,,,,,,,5051,0,0,0,0
MSG,6,,,7415A3,,,,,,,,,,,,,6130,0,0,0,0
MSG,8,,,738505,,,,,,,,,,,,,,,,,
MSG,8,,,738484,,,,,,,,,,,,,,,,,
MSG,3,,,06A065,,,,,,,33000,,,30.33788,34.27880,,,0,0,0,0
MSG,4,,,738484,,,,,,,,237,299,,,-1536,,0,0,0,0

Windows XP Installation Revisited

It’s been a while since I actually had to use Windows proper. Windows XP is still a damn fine operating system – and superior to its successors in my opinion – but I’ve largely transitioned to using Linux and OS X exclusively over the past few years. I’ve kept a few Windows XP images running on VMs for various uses, but had not found a reason to do a native Windows installation up until recently. To my surprise, I had forgotten many of the skills I used to possess in this environment.

My goal was simple: set up a native Windows installation (XP if possible) alongside an existing Linux installation on one of my desktop machines. Here are some notes I took while working my way through this territory.

Dual Boot

Since the existing system hosts Linux, I must install Windows alongside it, with the ability to dual-boot. For this there are two options that work equally well for me.

First, installing Windows on a separate hard drive gives you the simple option of using the BIOS boot order (and boot menu) to control the boot process. Since I use Linux exclusively, and will rarely need access to the Windows installation, I could easily set up the Linux hard drive as the first boot option, and use the special boot menu (F12 on most BIOSes) to force booting into the secondary hard drive whenever I wanted Windows. For me this suffices, but if you’re running on two partitions of the same HD, it might not.

A nicer option would be to customize my existing syslinux setup and chain-load Windows from the secondary HD. This means adding a new syslinux config option:

LABEL Windows
       MENU LABEL Windows
       COM32 chain.c32
       APPEND mbr:0xa1b2c3d4 swap

In this case, we use the hard drive’s MBR identifier (found by running fdisk -l /dev/sdX) to ensure the proper drive is found. swap is a required option that overcomes issues when chain-loading across hard drives.
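If you’re not sure which value to put there, the identifier is listed in the fdisk output (sdb here is just an example, use whichever disk holds Windows):

fdisk -l /dev/sdb | grep -i identifier    # the "Disk identifier" line holds the 0x value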

AHCI

SATA hard drives can pose a problem for Windows XP installations, as they can operate in one of two modes: IDE or AHCI. AHCI is an advanced mode that enables hot-swapping and other advanced features for SATA drives, while IDE mode is the fallback. I would have had no problem running in IDE mode, if not for the fact that recent Linux kernels actually prefer AHCI; in my case the system could not even boot in IDE mode.

Unfortunately, Windows XP does not have native support for AHCI in its base installation image, and switching between IDE and AHCI every time I wanted to switch between Linux and Windows is definitely not an option. This is a critical issue.

The usual option requires taking a base ISO image, and slipstreaming the required SATA drivers into it. When I tried this using the drivers recommended by my motherboard manufacturer, it failed.

The solution came from using a custom-made ISO, created by a group called ThumperDC (it’s a popular option on Pirate Bay and easy to find). This ISO comes with a myriad of driver options that simply work.

It should be noted that while there are various techniques that let you hack an existing non-AHCI installation to support AHCI, after trying them I must conclude that they are not reliable at all and rarely work. Your best bet is to properly support AHCI from the initial installation steps.

Slipstreaming Drivers

AHCI drivers are not the only thing that can be slipstreamed. It would be wise, for example, to add any expected drivers beforehand, so that upon first boot most of the components will just work™. For this I recommend using nLite, which provides an awesome interface for customizing ISOs, not only with drivers but also by adding and removing installation features on demand. It runs only on Windows, so you’ll have to bootstrap this process from a Windows VM.

USB Booting

Finally, I wanted to install the entire thing from a USB drive. Burning a DVD is ghetto, yeah, but I can’t even remember whether I have any blank DVDs around. Since the standard Windows XP ISOs are not USB bootable, some more work is required.

A handy tool called WinToFlash will happily take an ISO and add the required files to make it USB-bootable. Again, it runs only on Windows, so use the same VM you used previously.

Conclusion

This process is cumbersome, and totally not secure in any way, since it requires using pirated proprietary code which might be infested with bad stuff. However, my use case is very specific, and I need the most lightweight setup I could find. Sure, if I used Windows 7 (or 8) things could have gone smoother. But compare the 650MB XP install to Windows 7’s 3.5GB. Windows XP is still a beast of an operating system that can do lots of good things for you, if you ever need to venture out of the comfort zone of free and open-source software.

Dokku as an Heroku Replacement

For the past year or two, Heroku has been my weapon of choice for quick-and-easy deployment of small projects. The ease of pushing a project to Heroku with everything behind the scenes being taken care of really was astounding.

My gripe with Heroku has always been with larger projects. For someone who’s done devops for larger web projects before, the threshold at which you need to start customizing things is relatively low, and the need to switch your deployment strategy becomes imminent. However, in this post we won’t be talking about larger projects that require their own set of deployment tools.

I’d like to focus on the category of smaller projects. On Heroku you get a single dyno for free, and for many uses (such as deploying single periodical tasks) this is great. However, even a minuscule side-project might require, at the very least, a single web process adjacent to a single worker process. In Heroku’s case, this is $20/month, which might not be worth it for a small project.

Meet Dokku.

Dokku is a lightweight set of bash scripts that facilitates deployment of single applications on a single server, using Docker and the open-source Heroku buildpacks to simulate a Heroku-like deployment environment on your own boxes. It is by no means feature-complete, but for pushing a single project to a remote box, it definitely gets the job done.

Install Dokku

Dokku can be installed only on recent Ubuntu systems due to the various Docker dependencies. Also, due to various bugs, the best system to go with currently is Ubuntu 13.04. Installation is a one-liner:

$ wget -qO- https://raw.github.com/progrium/dokku/v0.2.1/bootstrap.sh | sudo DOKKU_TAG=v0.2.1 bash

Some VPS providers (such as DigitalOcean) provide pre-made “Dokku on Ubuntu” images, but I’ve found that using the “classic” Dokku bootstrap script works equally well, and is just as easy.

Plugins

Dokku comes with the most basic set of plugins that provide Git support and nginx virtual hosts. However, most web projects will also require some other necessities, such as a Postgres database, a Redis instance for caching and queuing, and a proper process manager.

All of these exist as Dokku plugins (redis, postgres and supervisord), but since I use them in every single project, I’ve added them all as submodules inside a dokku-base project that I use.

If you’d like to use that base installation, you can easily install it with:

$ wget -qO- https://raw.github.com/yuvadm/dokku-base/master/bootstrap.sh | sudo bash

I highly suggest using this installation, since manual installation of the aforementioned plugins sometimes doesn’t flow just right.

Configuring

Once Dokku is installed, we’ll need to upload our SSH public key to the Dokku server so that we’ll be able to push projects to it:

cat ~/.ssh/id_rsa.pub | ssh dokku.example.com "sudo sshcommand acl-add dokku yourname"

Make sure you use the proper SSH credentials when doing this (you may need to specify a username or a PEM key file).
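For example, on a box where you log in with a key file, the same command might look like this (the username and key path are placeholders, adjust to your setup):

cat ~/.ssh/id_rsa.pub | ssh -i ~/.ssh/my-server.pem root@dokku.example.com "sudo sshcommand acl-add dokku yourname"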

Pushing a Project

We’re now ready to push a project to Dokku! Let’s give it a shot:

$ cd test-project
$ git remote add dokku dokku@dokku.example.com:test-project
$ git push dokku master

If all went well, the buildstep should now run, and eventually tell you that your project is deployed to http://test-project.dokku.example.com. Make sure you actually have a proper DNS setting for this, which will usually be a wildcard CNAME from *.dokku.example.com to dokku.example.com.

At this point we need to create and attach a Postgresql database and a Redis instance. To run Dokku management commands, either log in to the Dokku server, or run the commands remotely using ssh -t dokku@dokku.example.com <command>.

$ dokku postgresql:start
$ dokku postgresql:create test-project
$ dokku redis:create test-project

After completing these steps, you should be able to run dokku config test-project and see that you actually have DATABASE_URL and REDIS_URL environment variables configured. Make sure your project references them.

Finally, supervisord should be running out of the box, such that any process in your Procfile will be immediately recognized and started, and will be restarted in case of redeployment or any other failure.
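For reference, a minimal Procfile for a web-plus-worker project might look like this (the commands are hypothetical placeholders for whatever your project actually runs):

web: gunicorn app:app
worker: python worker.py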

Conclusion

Dokku is a really cheap and easy way to deploy small and medium-sized projects that would otherwise cost a few $$$ on Heroku. It’s shaping up to be very stable, and the wide range of Dokku plugins supplies almost any behavior you could need from it.

Make Your Printer Go Wireless for $20

I just got a new printer – the cheapest B/W laser printer I could find – and I really wanted to have a network option for it, since I’d like to print from different laptops and computers around my house. However, in this cheap printer category, the premium for a printer with an ethernet/wireless option is around 50-70% of the printer’s value. Way too much. I’m going to hack my way out of this problem.

For this, we’re going to use a $20 TP-Link WR703N, which is actually a very small router and, once flashed with OpenWrt, is capable of doing pretty much anything you can think of with a Linux kernel, an ethernet port, a WiFi connection, and a USB port. This device is perfect as a wireless printer adapter.

Install OpenWrt

First of all, we need to flash the device with OpenWrt. Simply follow these instructions, which describe exactly which binary to download. Flashing is easily done via the existing default web interface, which unfortunately is only in Chinese, but there are some good screenshots on the Xinchejian hackerspace wiki that you can follow. Caution: when flashing, make sure you’re aware of the pitfalls of some of the recent firmwares; see the warnings section for details.

Note that some vendors on eBay are already selling WR703N devices with OpenWrt pre-flashed instead of the stock firmware, in which case you don’t have to do any of the above.

Configure OpenWrt

Now that we have OpenWrt installed, it’s time to configure it. By default, the wireless interface will be disabled, leaving only the ethernet port working. Upon connection, you’ll be served a 192.168.1.X IP address via DHCP from the device (which will be on 192.168.1.1). For our setup, we’ll assume we want to locate the printer somewhere without having to pull any network cables to it, so we want the wireless adapter to act as the WAN interface, hooking up to an existing wireless network. We’ll also want to create a subnet LAN on 192.168.2.1 to ensure we’re not colliding with the existing LAN on 192.168.1.1 (assuming the existing home network runs on that same subnet).

First, we need to open up the wireless configuration on /etc/config/wireless and edit it to look like so:

config wifi-iface
        option device radio0
        option network wan
        option mode sta
        option ssid MyNetworkName
        option encryption psk2
        option key MyNetworkPassword

This configuration sets up the radio0 device to hook up as a station (sta) to the WAN on the given network. psk2 assumes WPA2-PSK encryption with the given password.

You also have to make sure you update the physical 802.11 channel on the line in the radio0 config, where it says:

option channel 11

It must match your existing wireless channel.

Now we’ll add the WAN interface itself in the /etc/config/network config:

config interface 'wan'
        option ifname 'wlan0'
        option proto 'dhcp'

And since we’re setting up a new LAN subnet, change the existing LAN interface on the following line to:

option ipaddr '192.168.2.1'

Last thing is to make sure we can remotely log into the device from the WAN, since hooking up ethernet just to manage it is a PITA. Open the /etc/config/firewall config file and add this rule at the end:

config rule
        option name 'allow-ssh'
        option src 'wan'
        option target 'ACCEPT'
        option dest_port '22'

That’s all the basic config we need. After reboot, the device should show up on your LAN, and you’ll be able to log into it to continue to the next steps.

Printer configuration

We’ll move on to installing the important stuff. The printer setup is based on the p910nd daemon, which will act as a proxy between the device and the printer. Its main advantage is that it is a non-spooling server (as opposed to, say, CUPS), which means it doesn’t need to hold print jobs in memory while printing. Remember, the WR703N is a very limited device.

In order to set up the printer we need to install both p910nd and the USB printer kernel module, but it’s all done through the opkg package manager:

opkg update
opkg install p910nd kmod-usb-printer

After this step, if you connect the printer, you should see either the /dev/usb/lp0 or /dev/lp0 device. Run dmesg to see a successful USB connection, or errors if there were any.
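A quick sanity check after plugging the printer in (which of the two device paths shows up depends on your kernel modules):

ls -l /dev/usb/lp0 /dev/lp0
dmesg | grep -i -e usblp -e printer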

Go on over to the /etc/config/p910nd config, enable the printer, and make sure it’s pointed at the right device. At this point you’ll also have to open port 9100 on the firewall, in the exact same way we did for the SSH port previously. Make sure to restart the firewall after this addition, using /etc/init.d/firewall restart.
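On my device the resulting configs looked roughly like this; your device path may differ, p910nd’s port 0 maps to TCP port 9100, and the firewall rule simply mirrors the SSH one from before:

# /etc/config/p910nd
config p910nd
        option device '/dev/usb/lp0'
        option port '0'
        option bidirectional '1'
        option enabled '1'

# /etc/config/firewall
config rule
        option name 'allow-printer'
        option src 'wan'
        option target 'ACCEPT'
        option dest_port '9100'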

Now we can start and enable the p910nd daemon, to make it run now, as well as on each boot:

/etc/init.d/p910nd start
/etc/init.d/p910nd enable

Configure client

We’re almost there. Now we just have to create a new printer device on our client machine; let’s assume you already have all the drivers installed.

These instructions vary by client OS, but generally speaking you need to create a new printer pointed at the device’s WAN IP (you can find it by ssh-ing into the device and seeing what ifconfig reports for the wlan0 interface) and, if needed, define port 9100. The protocol selected should be AppSocket or HP JetDirect.
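On a CUPS-based client (Linux or OS X), for instance, the queue can also be added from the command line; the printer name, IP and PPD path below are placeholders:

lpadmin -p laserjet -E -v socket://192.168.1.50:9100 -P /path/to/driver.ppd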

Try printing a test page. If all went well, you have just saved yourself some money, and learned how to create your own cheap wireless adapter for your printer.

(Re)building the Cryptoparty Community

“Assume good faith” –(rule 0 of every community)

The saddest thing about the recent events involving Cryptoparty and 29C3 is that I have yet to notice a single discussion which was constructive. 29C3 was the first chance for the Cryptoparty community to meet in a physical place and discuss how to evolve from there. We had that chance, and we blew it.

The 29C3 Cryptoparty itself was actually quite fun, and even though we were essentially preaching to the choir, it was a good experience. There was a good discussion on the first day sharing experiences and knowledge from various Cryptopartae. We also had some productive hours discussing how we should continue work on the Cryptoparty Handbook. Alas, we should’ve done more than that. The setting was toxic – despite people keeping a straight face, everyone knew there was something wrong.

The Cryptoparty community has just started to take form. The social connections have just started to take place. Up until now, we just had the Cryptoparty meme – symbolized by a hashtag bearing the same name – and some local meetups. We have just barely started to form global connections and extrapolate the collective knowledge into further projects and long-term goals. For me, that was one of the main goals of meeting the people behind Cryptoparty at 29C3.

What we got instead was a shitstorm of rage and anger, rendering the environment completely non-productive. If the goal is to make for some juicy gossip on teh twitterz, we’re definitely winning. But let’s not delude ourselves that this is some form of constructive community action.

I applaud all who participate in the 140-character-free-for-all – it definitely satisfied my primal need for some good gossip. At the end of the day, it’s pretty easy to discern who in the community is actually interested in building shit and spreading crypto through honest constructive discussion.

Key members of the Cryptoparty community are well aware of my stance on certain issues of the past two weeks, as I have approached them directly and let them know of my concerns and views. I see no use in discussing these recent events in public, since none of them have anything to do – whatsoever – with building the Cryptoparty community.

I hope that by the time 30C3 occurs, we – as a community – can show significant progress not only in our efforts to spread the use of crypto, but in the way we handle communications between ourselves, with honesty and respect.

A Primer on Cheap Software Defined Radios

I’ve always been fascinated by radio. I clearly remember discovering numbers stations at the age of 9 using my grandparents’ old shortwave radio, and I was fascinated by the concept of stuff being broadcast over the air – discounting FM radio, which was ordinary.

Actually, I’ve always wanted to buy a frequency scanner and learn more about radio, but never got around to actually doing so; something didn’t feel right. Last week, the right thing I was waiting for arrived: an open-source software stack and a $15 USB dongle that turn my desktop computer into a software defined radio. Essentially, this means that anyone can, very cheaply, pull data out of thin air (literally) and analyze it using code.

Up until now, SDR could only be achieved using expensive equipment, proprietary drivers and proprietary software. The $15 SDR option is a serious breakthrough in making the SDR world more accessible. As with most new technologies, the open-source SDR world is still not very user-friendly, and in this post I’ll try to outline the basic stuff a beginner should know when entering this world.

The basis for SDR is GNU Radio, an open-source toolkit that provides all the mathematical building blocks needed to implement SDR. In essence, GNU Radio is a set of APIs that allow you to build usable SDR programs. An important part of GNU Radio is the GNU Radio Companion, a simple GUI that allows you to connect various signal processing components into a single graph and generate code from it. The thing is that, for most basic cases, we don’t really want to write the signal processing code ourselves.

Let’s go back to the hardware part. Up until now, if you wanted to do SDR you had to use expensive receivers, such as the Icom R2500. Naturally, these proprietary products natively supported Windows PCs, and you could forget about Linux, not to mention seeing any code for the software or drivers. Granted, USRP devices were much more open and accessible, but the hardware was still very expensive and posed a high barrier to entry for novice users who just wanted to play around.

As it turns out, it’s possible to use cheap DVB-T USB dongles (like this one) and hack them into proper SDR receivers. DVB-T is a worldwide standard for digital TV broadcast, and apparently the cheap tuners that are manufactured en masse are just the thing we can use to do poor man’s SDR.

The software we use to handle the cheap dongles is rtl-sdr, and it is the core of the setup. Now, setting up the entire stack is the tricky part. The GNU Radio stack has lots of dependencies, both C and Python libs, and has no easy, cross-platform way of setting it up. I pretty much gave up on my Mac setup, and it took me several hours to get shit running on my Linux box. Other than throwing a bunch of links at you, I really don’t have any better installation instructions than the ones out there. There will be lots of errors and dependency issues along the way; it’s a matter of sifting through wikis and lots of Googling ’till something works. Here are some links that should cover most of what you’ll need:

Fortunately, all tools use standard autoconf and cmake toolchains, so the installation procedure for most packages will be similar. If all went well, at this point, we’ll want to see the following test running smoothly:

$ rtl_test -t
Found 1 device(s):
  0:  ezcap USB 2.0 DVB-T/DAB/FM dongle

Using device 0: ezcap USB 2.0 DVB-T/DAB/FM dongle
Found Elonics E4000 tuner
Supported gain values (18): -1.0 1.5 4.0 6.5 9.0 11.5 14.0 16.5 19.0 21.5 24.0 29.0 34.0 42.0 43.0 45.0 47.0 49.0
Benchmarking E4000 PLL...
[E4K] PLL not locked for 51000000 Hz!
[E4K] PLL not locked for 2227000000 Hz!
[E4K] PLL not locked for 1114000000 Hz!
[E4K] PLL not locked for 1241000000 Hz!
E4K range: 52 to 2226 MHz
E4K L-band gap: 1114 to 1241 MHz
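For what it’s worth, building rtl-sdr itself from source follows the standard cmake dance; a rough sketch (the repository URL and the udev flag are the ones from the osmocom rtl-sdr wiki, adjust as needed):

git clone git://git.osmocom.org/rtl-sdr.git
cd rtl-sdr
mkdir build && cd build
cmake ../ -DINSTALL_UDEV_RULES=ON
make
sudo make install
sudo ldconfig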

After getting the dongle and the drivers all set up, we want to listen to some stuff! As I mentioned earlier, building various signal processing flows is totally beyond the scope of what we’re trying to do; all we want is a simple tuner with some knobs to twist, and eventually some sound coming out of the speakers. The easiest receiver software I’ve found so far is gqrx (also on Github).

Gqrx is very easy to grok, even for beginners with no experience listening to the radio waves. Start off by picking a frequency that you know should be active, broadcast FM radio is the obvious choice here, and just tinker with the knobs until it sounds reasonable. Learn what the difference between AM and FM is. Learn how the FM filter works. Play with the squelch levels to silence the white noise on channels that aren’t always active. From my experience, it takes a while to understand how everything comes together.

After playing around with broadcast FM, you can advance to other transmissions: air traffic, ham radio, police and fire services, navigation beacons, GPS, GSM, POCSAG, P25. Each of these subjects is an entire post in and of itself.

The final point I want to make is that listening to radio waves has lots of nuances to it. The stock antenna shipped with the dongles is absolutely insufficient to receive anything other than strong signals. If you’re serious in doing SDR, you’ll have to invest time researching proper antenna setups and reducing noise.

Nonetheless, this cheap SDR setup is mind-blowing in how easy it can be to start playing around with stuff that used to be extremely expensive.

Deploying Periodical Tasks on Heroku

Heroku is an awesome platform for hosting web applications, that much is obvious. A few days ago I had another application to deploy on Heroku, but with a different usage profile. The application, a simple breaking news tweeting app, periodically scrapes a popular Israeli forum with breaking headlines, and tweets them – a fairly straightforward task. However, this application has no request-response cycle, and in fact has no open web gateway, just a simple task running periodically, every minute in our case.

Naturally, this task needs to run on a 24/7-available server, not just on a random desktop. Sure, I have several VMs I could piggyback this task on, but I wanted to find a way to package this little task properly, such that I can deploy it easily on Heroku and forget about the whole thing. Since I’m running a single process on a single Heroku dyno, if I could get it to work, it wouldn’t cost a thing.

For asynchronous and scheduled tasks in Python, the obvious solution is to use Celery. The core of the setup is a single Celery worker running a periodic task. Since we only have one worker, and we can’t spare another process for the Celery beat scheduler (it would cost another Heroku dyno, which isn’t free), we’ll run the celery worker process with the -B flag, which bundles the worker and the beat scheduler into one convenient process.

Celery can’t work without a message broker; with Heroku we’ll naturally use the redistogo:nano plan.
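Adding the free Redis To Go plan to the app is a one-liner (assuming the Heroku toolbelt’s addons:add syntax of the time):

$ heroku addons:add redistogo:nano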

Here’s the code for a simple worker, tasks.py:

import logging

from celery import Celery
from celery.task import periodic_task
from datetime import timedelta
from os import environ

REDIS_URL = environ.get('REDISTOGO_URL', 'redis://localhost')

celery = Celery('tasks', broker=REDIS_URL)


# naive Fibonacci, just to generate some work for the task
def fib(n):
    if n > 1:
        return fib(n - 1) + fib(n - 2)
    else:
        return 1


# scheduled by the embedded beat scheduler (the -B flag) every 10 seconds
@periodic_task(run_every=timedelta(seconds=10))
def print_fib():
    logging.info(fib(30))

To wrap it up, you’ll need a Procfile with a single line launching the worker:

worker: celery -A tasks worker -B --loglevel=info

I find this setup very convenient when I need to deploy a single recurring task and don’t want to care at all about setting up cron jobs or manually configuring deployment environments. Heroku FTW.

All the code, as always, is in a single repo on Github: https://github.com/yuvadm/heroku-periodical. Enjoy!

An Open Toolchain for the TI Stellaris

In my last post I set up an ARM EABI toolchain to work with my CCC r0ket badge. Incidentally, I just received my Texas Instruments Stellaris dev board and wanted to start playing around with it. Unfortunately, TI’s development tools are highly bloated, proprietary, and almost exclusively geared towards Windows environments. Unacceptable. I wasn’t about to download a 1.3GB file just to get an LED blinking on a dev board using my Mac.

As it turns out, all the building blocks are there, and it’s just a matter of putting them together. Here’s how to get a simple project compiled and flashed on your TI Stellaris by using an open toolchain.

First, we need a cross-compiler. For that, we use the ARM EABI toolchain which can be installed using the amazing ARM EABI Toolchain Builder. Follow the instructions, and make sure you have the respective bin directory in your path.

Next, we need the flashing tools. Fortunately, some code is already available in the lm4tools package. It depends on libusb, so install that with your favorite package manager; otherwise it’s a breeze to install. lm4tools supplies us with both a flashing utility and a USB/ICDI debugging bridge. For now we just want the flashing utility. The package already comes with a ready-made binary which we could try, but we’ll go ahead and compile our own. It’s just more fun that way :)
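Building it is quick; a sketch, assuming the lm4tools repository on Github and libusb-1.0 already installed:

$ git clone https://github.com/utzig/lm4tools
$ cd lm4tools/lm4flash
$ make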

Finally, we need all the source and header files relevant to the Stellaris. Those all exist in TI’s StellarisWare packages, but are a bitch to download. Seriously, I won’t even try to link to them. I extracted all the necessary files to my own Stellaris repo on Github, and cloning that should get you everything you need. After cloning the repo, cd into one of the projects, such as boards/ek-lm4f120xl/project0.
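In other words, something along these lines (the repository URL is a placeholder, point it at the actual Stellaris repo):

$ git clone https://github.com/yuvadm/stellaris    # placeholder URL
$ cd stellaris/boards/ek-lm4f120xl/project0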

If all is well, running make will quickly yield the output binary located in gcc/project0.bin. We’re now ready to flash. Point to your lm4flash util and run:

$ ./path/to/lm4flash gcc/project0.bin

If the flashing process was successful, the RGB LED on the Stellaris should now be blinking blue and red alternately. Awesome. A trivial exercise would be to add a green blink to the sequence.

It’s cool to have the board running at last, but it’s a shame TI doesn’t make this stuff more accessible and open. From what I’ve seen so far, the Stellaris is a pretty neat board, and I hope to write more in the future about the advanced functionality you can get out of it.

Setting Up an ARM EABI Toolchain on Mac OS X

29C3 is coming up, and after completing and submitting my talk proposals, I’ve recently started hacking on my r0ket badge, which I managed to get my hands on a year ago at 28C3.

After setting it up and doing some SMD soldering with the RGB flame module, the next step is hacking on the r0ket’s firmware, writing l0dable applications.

The r0ket has an ARM processor, and its firmware and applications are cross-compiled using the ARM EABI toolchain. The r0ket wiki has instructions on how to set up an environment on Mac OS X, and I’ll try to give some complementary tips on how to accomplish that.

My preferred option would be to use standard homebrew formulae as much as possible. Unfortunately, homebrew chose not to include the ARM EABI toolchain in its offerings. A homebrew fork has support for the arm-none-eabi-gcc formula, but I found it not up to date.

If you use MacPorts, it might be possible to sudo port install arm-none-eabi-gcc, but unfortunately MacPorts and homebrew are mutually exclusive, and I’m definitely sticking with homebrew.

By far the easiest solution I found was a simple-to-use makefile, wrapped up with some patches, specifically built for the task of building an ARM EABI toolchain; it can be found on github.

Make sure you have the proper dependencies first:

brew install mpfr gmp libmpc libelf texinfo

Then simply clone the repository, and run the makefile:

git clone https://github.com/jsnyder/arm-eabi-toolchain
cd arm-eabi-toolchain
make install-cross

Remember you’re building the entire toolchain, so expect this step to take at least an hour, and your Mac to heat up running at 100% CPU. When all the tools are built you can find them located at ~/arm-cs-tools. Remember to add ~/arm-cs-tools/bin to your $PATH somehow.
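The usual way to do that, assuming a bash shell (adjust for whatever shell you actually use):

echo 'export PATH=$HOME/arm-cs-tools/bin:$PATH' >> ~/.bash_profile
source ~/.bash_profile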

The bonus for all this is that I just recently received my Texas Instruments Stellaris Launchpad evaluation kits, and I’ll definitely be making heavy use of this toolchain. Not to mention that an ARM-based Arduino board is in the making…