Yuval Adam

An Open Toolchain for the TI Stellaris

In my last post I set up an ARM EABI toolchain to work with my CCC r0ket badge. Incidentally, I just received my Texas Instruments Stellaris dev board and wanted to start playing around with it. Unfortunately, TI’s development tools are highly bloated, proprietary and almost exclusively geared towards Windows environments. Unacceptable. I wasn’t about to download a 1.3GB file just to get an LED blinking on a dev board using my Mac.

As it turns out, all the building blocks are there, and it’s just a matter of putting them together. Here’s how to get a simple project compiled and flashed on your TI Stellaris by using an open toolchain.

First, we need a cross-compiler. For that, we use the ARM EABI toolchain which can be installed using the amazing ARM EABI Toolchain Builder. Follow the instructions, and make sure you have the respective bin directory in your path.

Next, we need the flashing tools. Fortunately, some code is already available in the lm4tools package. It depends on libusb, so install that with your favorite package manager; beyond that, it’s a breeze to build. lm4tools supplies us with both a flashing utility and a USB/ICDI debugging bridge. For now we just want the flashing utility. The package comes with a ready-made binary we could test with, but we’ll go ahead and compile our own. It’s just more fun that way :)

Finally, we need all the source and header files relevant to the Stellaris. Those all exist in TI’s StellarisWare packages, but are a bitch to download. Seriously, I won’t even try to link to them. I extracted all the necessary files to my own Stellaris repo on Github, and cloning that should get you everything you need. After cloning the repo, cd into one of the projects, such as boards/ek-lm4f120xl/project0.

If all is well, running make will quickly yield the output binary located in gcc/project0.bin. We’re now ready to flash. Point to your lm4flash util and run:

$ ./path/to/lm4flash gcc/project0.bin

If the flashing process was successful, the RGB LED on the Stellaris should now be blinking blue and red alternately. Awesome. A trivial exercise would be to add a green blink to the sequence.

It’s cool to have the board running at last, but it’s a shame TI doesn’t make this stuff more accessible and open. From what I’ve seen so far, the Stellaris is a pretty neat board, and I hope to write more in the future about the advanced functionality you can get out of it.

Setting Up an ARM EABI Toolchain on Mac OS X

29C3 is coming up, and after completing and submitting my talk proposals, I’ve recently started hacking on my r0ket badge, which I managed to get my hands on a year ago at 28C3.

After setting it up and doing some SMD soldering with the RGB flame module, the next step is hacking on the r0ket’s firmware, writing l0dable applications.

The r0ket has an ARM processor and its firmware and applications are cross-compiled using the ARM EABI toolchain. The r0ket wiki has instructions on how to set up an environment on Mac OS X, and I’ll try to give some complementary tips on how to accomplish that.

My preferred option would be to use standard homebrew formulae as much as possible. Unfortunately, homebrew chose not to include the ARM EABI toolchain in its offerings. A homebrew fork has support for the arm-none-eabi-gcc formula, but I found it not up to date.

If you use MacPorts, it might be possible to sudo port install arm-none-eabi-gcc, but unfortunately MacPorts and homebrew are mutually exclusive, and I’m definitely sticking with homebrew.

By far the easiest solution I found was a simple-to-use makefile, wrapped up with some patches, built specifically for the task of building an ARM EABI toolchain; it can be found on Github.

Make sure you have the proper dependencies first:

brew install mpfr gmp libmpc libelf texinfo

Then simply clone the repository, and run the makefile:

git clone https://github.com/jsnyder/arm-eabi-toolchain
cd arm-eabi-toolchain
make install-cross

Remember you’re building the entire toolchain, so expect this step to take at least an hour, and your Mac to heat up running 100% CPU. When all the tools are built you can find them located at ~/arm-cs-tools. Remember to somehow add ~/arm-cs-tools/bin to your $PATH.
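For example, assuming a bash login shell, a line like this in your ~/.bash_profile would do the trick:

```shell
# make the freshly-built toolchain available on the command line
export PATH="$HOME/arm-cs-tools/bin:$PATH"
```

After opening a new terminal (or sourcing the profile), arm-none-eabi-gcc and friends should resolve.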

The bonus for all this is that I just recently received my Texas Instruments Stellaris Launchpad evaluation kits, and I’ll definitely be making heavy use of this toolchain. Not to mention that an ARM-based Arduino board is in the making…

Flask and PostgreSQL on Heroku

Heroku is increasingly becoming my favorite platform to deploy simple Python applications on. Heroku actually gives you a completely managed environment where you can deploy an app in literally minutes. Not to mention that the free tier usage on Heroku (1 dyno, Postgres dev plan) can actually get you pretty far.

You can follow the official docs on Heroku that explain how to get started from scratch, but I find them lacking some explanation on how to set up Postgres, so here’s the complete formula I use to rapidly deploy simple Python apps.

All the code in this post can be found in the matching repository on Github.

I’m going to assume you have a basic project set up; if not, just follow the aforementioned tutorial. Now we need to add support for PostgreSQL. We’ll do that by using Flask-SQLAlchemy, which gives us everything we need to connect to the Postgres DB as well as an easy-to-use ORM. So first we need to install the dependencies and add them to our requirements.txt:

$ pip install flask-sqlalchemy psycopg2
# don't forget to update requirements.txt
$ pip freeze > requirements.txt

Before we continue, we’ll have to create the Postgres DB. We’ll start off with the free dev plan, which allows up to 10K rows and 20 simultaneous connections:

$ heroku addons:add heroku-postgresql:dev
-----> Adding heroku-postgresql:dev to some-app-name... done, v196 (free)
Database has been created and is available

Once the database is set up, we should promote it so that the DATABASE_URL environment variable is set:

$ heroku pg:promote HEROKU_POSTGRESQL_COLOR

Now we can go ahead and import the library and add the basic connection boilerplate:

import os

from flask import Flask
from flask.ext.sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = os.environ['DATABASE_URL']
db = SQLAlchemy(app)

For this step, you can optionally use Kenneth Reitz’s flask-heroku library, which handles setting all connection URLs automatically, not only for Postgres, but for other services such as redis, sentry, exceptional and others.

The next step is to commit the boilerplate code and create the actual DB tables:

$ git commit -a -m "added DB boilerplate"
$ git push heroku master
# ...
$ heroku run python

Once we have a connected Python terminal we can run:

>>> from app import db
>>> db.create_all()

And we’re set! From here we can start using SQLAlchemy’s code to define models and create, query and delete objects. Here are some examples. We can start off by creating a new User model:

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(80))
    email = db.Column(db.String(120), unique=True)

    def __init__(self, name, email):
        self.name = name
        self.email = email

    def __repr__(self):
        return '<Name %r>' % self.name

We can create an object and persist it to the database:

user = User('John Doe', 'john.doe@example.com')
db.session.add(user)
db.session.commit()

We can query objects:

all_users = User.query.all()

And we can delete objects:

db.session.delete(user)
db.session.commit()

And that’s all you need to know about setting up a Flask + Postgres app on Heroku.

A Modern Python Stack for a Real-time Web Application

Earlier today I wrote a detailed answer on Stack Overflow about a suggested Python stack for building a modern real-time web application. This is based upon the work I did over the past several months with PlantAComment.com, which I’ve also written about recently.

In any case, I’ve found this stack to be pretty damn solid. Since we were doing real-time WebGL rendering, and syncing that data across a multi-client landscape, we were actually sending dozens of messages per second (granted, small messages), and that also worked out surprisingly well.

Anyway, enough with the talk, here’s the stack. For starters, the entire app is served on the Tornado web server, which is a non-blocking web server that excels in this kind of stuff, and also has some nice “classic” web app support such as authentication, templates, etc., so we also used it for serving up the entire app itself, and not only the real-time evented stuff.

Next up is the messaging protocol. We started out using socket.io, which has a default implementation in Node.js, and is supported on Tornado via tornadio2. This worked out fine, but following a conversation with MrJoes (tornadio2 maintainer), we decided to switch and use sock.js, which also has a Tornado server implementation, sockjs-tornado. In essence, socket.io’s protocol is known to have some defects, and the fact that anything other than the Node.js implementation is a second-class citizen just feels awkward. Sock.js is a fully-tested protocol, and generally feels more solid, so we decided to go with it.

Most messaging examples in Tornado involve using a class-level variable that maintains all connections to all connected clients. This is a horrible setup and should never be used for anything beyond trivial applications. It’s like maintaining data inside your web server because you’re too lazy to spin up a database.

So for all the messaging stuff, we decided to use Redis' pub-sub capabilities. And since we’re in the context of Tornado, we’re also going to need a proper asynchronous interface - which is done beautifully by the brukva library. As a side note, I should mention that brukva is implemented using adisp, and does not employ any of the Tornado async building blocks. There is another project, tornado-redis, which claims to do just that, but I haven’t got around to actually using it. You might have more luck with that, though. In any case, brukva works just fine.

(Update: Since the original post, tornado-redis has proven to be the superior option, as it uses the standard async tools provided by Tornado.)

And that’s pretty much it. We can bring it all together with this ConnectionHandler which has all the functionality we need:

class ConnectionHandler(SockJSConnection):
    def __init__(self, *args, **kwargs):
        super(ConnectionHandler, self).__init__(*args, **kwargs)
        self.client = brukva.Client()
        self.client.connect()

    def on_open(self, info):
        # subscribe to the pub-sub channel and start listening
        # (the channel name here is illustrative)
        self.client.subscribe('broadcast')
        self.client.listen(self.on_chan_message)

    def on_message(self, msg):
        # a message broadcast from the client;
        # handle it as necessary (this implementation ignores it)
        pass

    def on_chan_message(self, msg):
        # a message received from redis - send it on to the client
        self.send(msg.body)

    def on_close(self):
        # clean up the redis subscription and connection
        self.client.unsubscribe('broadcast')
        self.client.disconnect()
And that’s how you do real-time messaging with Python.

WebGL / Liquid Galaxy Fun

The past several weeks have kept me very busy on my latest collaboration with new-media artists Omer and Tal Golan.

Our project, PlantAComment.com (שיח גלריה, in Hebrew) is an interactive installation that encourages visitors to plant thoughts that manifest themselves as trees in a semi-apocalyptic 3D world. The installation premiered this week at the 2012 Fresh Paint art fair in Tel Aviv. Throughout the week, our project has received much acclaim from visitors of all ages.

On the technical side, the project is a behemoth in terms of how many technologies we’ve used to make it all happen. The server-side is based on a core Tornado web server that handles all HTTP requests, as well as WebSocket connections. Redis is used both as a back-end store, as well as for pub/sub for new messages that are received via SMS text messages, as well as Twitter and G+ posts. With the help of the amazing Nir Ofek, we’ve also implemented advanced semantic analysis on all incoming texts, allowing us to cluster similar subjects on the same trees. Credits to the beautiful soundscape go to the most-talented Nir Danan.

The most impressive aspect of the project, by far, is the WebGL implementation of the 3D world, capable of running in-browser on any WebGL-capable modern browser. The highlight of the installation was deploying our project on Google’s Liquid Galaxy setup - a 7-machine setup connected to 7 55" LED screens that run in complete synchronization, showing a 180 degree view of the world. This is the first time an art project has been deployed on this setup.

Expect to hear more about this project in the near future ;)

Deploying a Bottle App on ep.io

Bottle apps can be deployed on ep.io as generic WSGI apps. It’s not entirely straightforward, though: there’s a small workaround you need to apply before you can list Bottle as an installable requirement.

For some reason, the Bottle version that ep.io pulls from PyPI has some weird ImportError. The solution is to pull directly from the git repo.

Here’s a full working example:
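The key piece is the requirements file: instead of the bottle package on PyPI, point pip straight at the git repository (the exact repo URL here is an assumption - check Bottle’s current home). The app itself is a standard Bottle WSGI app, exposing the callable via bottle.default_app().

```
# requirements.txt - pull Bottle straight from the git repo instead of PyPI
git+https://github.com/bottlepy/bottle.git#egg=bottle
```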

Stack Overflow, Stop Blocking Me

Seriously, Stack Overflow, WTF?

For three days now at PyCon 2012, I haven’t been able to browse any Stack Overflow / Stack Exchange page. Why? All the wireless networks here at the conference are unencrypted. Connecting without passing through a secure connection (VPN/SSH tunnel) is an endeavor I would recommend to no one. Riding an open wireless network bareback? No way.

So, I use the amazing sshuttle which is routed to one of my servers on Amazon EC2. But guess what? Stack Exchange blocks all incoming traffic from EC2. Why? Supposedly, to prevent screen-scraping bots.

Now, I’m not intimate with SO/SE’s traffic patterns, and I’m sure they are highly susceptible to content farm scraper bots. But blocking all EC2 IPs is the most stupid way to do this that anyone can think of. Real scraper bots that depend on content mining will easily find other IPs to access SO/SE from.

Newsflash - I (and other legit VPN users) don’t have a spare bank of public IPs or VPN endpoints.

A simple solution would be to rate-limit requests from any single IP at a level normal users would never notice (say, 5 requests/sec).

Until Stack Overflow / Stack Exchange implements a better way of blocking scrape-bots without blocking legit users - I’ll continue to suffer anytime I’m not under a secure wireless network.

Configuring Postfix to Work With Gmail on Mac OS X

One of the things I’m sorry I didn’t do earlier is set up postfix on my Mac, so that I’d be able to send quick emails (not to mention git patches) directly from the command line.

As we all know, sending emails directly from your machine is a sure way to get yourself blacklisted as spam, so using an SMTP relay is pretty much required. But since my main email account is hosted on Gmail, and I want to connect securely to Google’s SMTP servers, this requires some configuration.

First things first: add your authentication details for the relay. If you’re using Gmail, create a new file:

sudo vi /etc/postfix/relay_password

And add the auth details to it, just one line:

smtp.gmail.com:587 your_user_name@gmail.com:your_password

Next, we need to generate a lookup DB from these details:

sudo postmap /etc/postfix/relay_password

And make sure the relay_password.db file has been generated.

Now it’s time to update the main.cf configuration file. You might want to keep a backup before you add the following changes. First, check that the line

tls_random_source = dev:/dev/urandom

exists in the file and is not commented out; this should be the case by default. Now here’s the main logic, which you can simply append to the end of the file:

relayhost = smtp.gmail.com:587

smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/relay_password
smtp_sasl_security_options = noanonymous

smtp_tls_security_level = may
smtp_tls_CApath = /etc/postfix/certs
smtp_tls_session_cache_database = btree:/etc/postfix/smtp_scache
smtp_tls_session_cache_timeout = 3600s
smtp_tls_loglevel = 1

The last thing we need to do is set up the root SSL certificate that Google uses, which is the Thawte Premium Server CA. First:

sudo mkdir /etc/postfix/certs && cd /etc/postfix/certs

Then, download the PEM file:

sudo wget https://www.thawte.com/roots/thawte_Premium_Server_CA.pem

Now we need to run a rehash on the PEM file:

sudo c_rehash /etc/postfix/certs/

And that’s it! Give it a test run, and hopefully you’ll receive an e-mail strongly authenticated and relayed from your Gmail account:

echo "Relay Test" | mail -s "Relay Testing" test_recipient@domain.com

As an extra added bonus, you might want to set your hostname to something more descriptive than mymachine.local by adding this line to the main.cf:

myhostname = some-domain-i-own.com

Resolving a Corrupt Sudoers in Mac OS X

During 28C3, I was being over-paranoid about the security of my laptop, and I accidentally did something really really (really) stupid to my /etc/sudoers file: I commented out the %admin line:

# User privilege specification
root    ALL=(ALL) ALL
# %admin  ALL=(ALL) ALL

See what I did there? No more sudo for my admin user. End of story. I thought I was doomed. The only way to resolve this situation, essentially, is to boot into some sort of safe mode with the Mac OS X installation disk. Needless to say I didn’t have it with me.

Luckily, Mac OS X is built in a way that allows resolving a corrupt sudoers, exploiting the way the OS manages permissions. This method was first described here, props to Astrails for the idea.

The idea is that while the command line sudo works with the sudoers file, the UI authentication does not.

Exploiting this, you can change the file permissions on /etc/sudoers without needing sudo access. All you need to do is open a Finder window, Shift-Cmd-G and go to the /etc folder. From there, select the sudoers file and open its info pane (Cmd-I). Scroll down to the Sharing & Permissions panel, and unlock it using your admin password. You now can temporarily change the file permissions such that you’ll be able to edit it without sudo access.

Now all you need to do is fix the crap that you did to your sudoers file, reset the permissions back to 440 and you’re all set.

Next time, if you think you need to edit your sudoers file, DO NOT.

Bitcoin for Dummies

I recently started delving into the world of a new currency which you might have heard of - Bitcoin. I figured I wanted to know more about it, and what applications it might have. As it turns out, the concepts behind Bitcoin are actually not that complicated, and I believe that if you are able to grasp the concept of money as we know it, in the form of the proverbial cold-hard-cash, you should have no problem understanding Bitcoin and how it works. I’ll simplify some concepts in order to make things understandable, but they will absolutely remain true to form.

What is Bitcoin?

Bitcoin is the name of a currency that exists entirely in a network of computers, even your computer at home can be part of that network. There are no real, physical, coins or bills. Nothing other than data stored in various computers all over the world.

How does it work?

Bitcoin, at its core, is essentially a huge list of transactions, that anyone can have a copy of. A simple list might look like this:

A (10) -> B
B (4) -> A
B (3) -> A

In this simple list, we have two people, A and B. A sent B 10 bitcoins, after which B returned 4 bitcoins to A, and then decided to send 3 more bitcoins back to A. So this list is nothing more than a series of transaction details.

So assuming both A and B had 20 bitcoins to start with, after the three transactions, A now has 17 bitcoins left, while B has 23 bitcoins in his wallet. Easy stuff. Now, for an outsider to know how much money each party has, all he needs to do is know how many bitcoins each one had to begin with, and from there he can simply add and subtract the details of the transactions and find out who has how many Bitcoins. This is, in essence, the Bitcoin system.
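The bookkeeping described above is simple enough to sketch in a few lines of Python (the ledger format and the starting balances of 20 are just for illustration):

```python
# each entry mirrors the list above: (sender, amount, receiver)
transactions = [('A', 10, 'B'), ('B', 4, 'A'), ('B', 3, 'A')]

balances = {'A': 20, 'B': 20}  # assumed starting balances
for sender, amount, receiver in transactions:
    balances[sender] -= amount
    balances[receiver] += amount

print(balances)  # {'A': 17, 'B': 23}
```

Anyone replaying the same list from the same starting point arrives at the same balances - which is exactly why the shared transaction list is all the system needs.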

How is this data saved?

This transaction list is shared between many computers all over the world. If A wants to send B 10 bitcoins, he would just issue that transaction on his computer, which would then, in turn, tell the whole world “Hey! A just sent B 10 bitcoins!”. Over time, that message would propagate all over the Internet to everyone running a Bitcoin client. That’s all there is to issuing a transaction.

Wait, so, I can fake transactions!

Not really, no. Transactions are secured using strong data encryption methods. These are the exact same methods that are in use to securely transfer your credit card details when making an online purchase, or when logging in to your e-mail account. These methods ensure that only the sending party is able to issue genuine transactions.

So who verifies the transactions?

Well, someone has to go over the list of transactions and approve them, otherwise the list has no value. Therefore, anyone who wants to can contribute to the system by reviewing the recent transactions, and doing some heavy calculations on the data, to ensure that they are all indeed valid.

Why would anyone do that?

Simple, because by donating computing power, you actually receive Bitcoins from the system! The process of verifying the transactions is called mining, and is rewarded with Bitcoins that the system generates just for you, out of thin air. This is how Bitcoins are “printed”.
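To give a taste of what those “heavy calculations” look like, here’s a toy sketch in Python - emphatically not the real Bitcoin algorithm, just an illustration of the idea: hunt for a number (a “nonce”) that makes the transaction data hash to a value with a certain prefix. Finding it takes many tries; verifying it takes one.

```python
import hashlib

# toy "mining": find a nonce so that sha256(data + nonce) starts with '00'
# (real Bitcoin difficulty targets are vastly stricter than this)
block_data = 'A (10) -> B'
nonce = 0
while True:
    digest = hashlib.sha256(('%s:%d' % (block_data, nonce)).encode()).hexdigest()
    if digest.startswith('00'):
        break
    nonce += 1

# the nonce is the proof of work; anyone can re-hash once to verify it
print(nonce)
```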

Is there any other way to get Bitcoins?

Sure. If a friend of yours is willing to, he can give or sell you any amount of Bitcoins he wants, as long as he had some to start with. He will issue a transaction saying that he transfers some Bitcoins to your possession. He can do that for free, but more likely he’ll ask you for something in return, so you’ll probably pay him back in cash, or provide some product or service to him. In any case, that’s up to the two parties to settle between themselves.

So how is this different from the current cash system?

It’s not! Think about it, coins and bills are just pieces of metal and paper, with little intrinsic value. The value they have is the one that we give them. By printing “100” on a piece of paper, we’re saying that it is worth 100 units of something. So when people start to accept Bitcoin as a valid currency, it is not that different from any other currency in use around the world, other than that it has no physical existence.

I have 100 Bitcoins, what does that mean?

That means that over time, you have accumulated 100 Bitcoins, either from transactions with various people, or by mining them (and then it would be the ‘system’ that gave you the Bitcoins). Anyone going over the list of transactions and verifying its accuracy will end up with the same answer: “yep, he really is the legit owner of 100 Bitcoins”. You are free to do whatever you want with these Bitcoins in your possession.


In essence, this is all there is to the Bitcoin system. Of course, there are many more issues that derive from this system. In further posts I’ll talk about the exact monetary value of Bitcoins, how anonymous (if at all) the system is, and various interesting dilemmas that arise from the usage of such a currency.