The saddest thing about the recent events involving Cryptoparty and 29C3 is that I have yet to notice a single discussion which was constructive. 29C3 was the first chance for the Cryptoparty community to meet in a physical place and discuss how to evolve from there. We had that chance, and we blew it.
The 29C3 Cryptoparty itself was actually quite fun, and even though we were essentially preaching to the choir, it was a good experience. There was a good discussion on the first day sharing experiences and knowledge from various Cryptopartae. We also had some productive hours discussing how we should continue work on the Cryptoparty Handbook. Alas, we should’ve done more than that. The setting was toxic - despite people keeping a straight face, everyone knew something was wrong.
The Cryptoparty community has just started to take shape, and the social connections within it are only beginning to form. Up until now, we just had the Cryptoparty meme - symbolized by a hashtag bearing the same name - and some local meetups. We have only barely begun to form global connections and channel the collective knowledge into further projects and long-term goals. For me, that was one of the main goals of meeting the people behind Cryptoparty at 29C3.
What we got instead was a shitstorm of rage and anger, rendering the environment completely non-productive. If the goal is to make for some juicy gossip on teh twitterz, we’re definitely winning. But let’s not delude ourselves that this is some form of constructive community action.
I applaud all who participate in the 140-character-free-for-all - it definitely satisfied my primal need for some good gossip. At the end of the day, it’s pretty easy to discern who in the community is actually interested in building shit and spreading crypto through honest constructive discussion.
Key members of the Cryptoparty community are well aware of my stance on certain issues of the past two weeks, as I have approached them directly and let them know of my concerns and views. I see no use in discussing these recent events in public, since none of them have anything to do - whatsoever - with building the Cryptoparty community.
I hope that by the time 30C3 occurs, we - as a community - can show significant progress not only in our efforts to spread the use of crypto, but in the way we handle communications between ourselves, with honesty and respect.
I’ve always been fascinated by radio. I clearly remember discovering numbers stations at the age of 9 using my grandparents’ old shortwave radio, and I was captivated by the concept of stuff being broadcast over the air - discounting FM radio, which was ordinary.
I’ve always wanted to buy a frequency scanner and learn more about radio, but never got around to actually doing so - something didn’t feel right. Last week, I finally found the thing I’d been waiting for: an open-source software stack and a $15 USB dongle that turn my desktop computer into a software defined radio. Essentially, this means that anyone can, very cheaply, pull data out of thin air (literally) and analyze it using code.
Up until now, SDR could only be done with expensive equipment and proprietary drivers and software. The $15 SDR option is a serious breakthrough in making the SDR world more accessible. As with most new technologies, the open-source SDR world is still not very user-friendly, and in this post I’ll try to outline the basic stuff a beginner should know when entering it.
The basis for SDR is GNU Radio, an open-source toolkit that provides all the necessary mathematical building blocks for implementing SDR. In essence, GNU Radio is a set of APIs that let you build usable SDR programs. An important part of GNU Radio is the GNU Radio Companion, a simple GUI that lets you connect various signal processing components into a single graph and generate code from it. The point is that, for most basic cases, we don’t really want to write the signal processing code ourselves.
Let’s go back to the hardware part. Up until now, if you wanted to do SDR you had to use expensive receivers, such as the Icom R2500. Naturally, these proprietary products natively supported Windows PCs; you could forget about Linux, not to mention seeing any source code for the software or drivers. Granted, USRP devices were much more open and accessible, but the hardware was still very expensive and posed a high barrier to entry for novice users who just wanted to play around.
As it turns out, it’s possible to use cheap DVB-T USB dongles (like this one) and hack them into proper SDR receivers. DVB-T is a worldwide standard for digital TV broadcasts, and apparently the cheap tuners that are manufactured en masse are just the thing we can use to do poor man’s SDR.
The software that handles the cheap dongles is rtl-sdr, and it is the core of the setup. Now, setting up the entire stack is the tricky part. The GNU Radio stack has lots of dependencies, both C and Python libs, and has no easy, cross-platform way of setting up. I actually kind of gave up on my Mac setup, and it took me several hours to get shit running on my Linux box. Other than throwing a bunch of links, I really don’t have any better installation instructions than the ones out there. There will be lots of errors and dependency issues along the way; it’s a matter of sifting through wikis and lots of Googling till something works. Here are some links that should cover most of what you’ll need:
Fortunately, all tools use standard autoconf and cmake toolchains, so the installation procedure for most packages will be similar. If all went well, at this point, we’ll want to see the following test running smoothly:
$ rtl_test -t
Found 1 device(s):
0: ezcap USB 2.0 DVB-T/DAB/FM dongle
Using device 0: ezcap USB 2.0 DVB-T/DAB/FM dongle
Found Elonics E4000 tuner
Supported gain values (18): -1.0 1.5 4.0 6.5 9.0 11.5 14.0 16.5 19.0 21.5 24.0 29.0 34.0 42.0 43.0 45.0 47.0 49.0
Benchmarking E4000 PLL...
[E4K] PLL not locked for 51000000 Hz!
[E4K] PLL not locked for 2227000000 Hz!
[E4K] PLL not locked for 1114000000 Hz!
[E4K] PLL not locked for 1241000000 Hz!
E4K range: 52 to 2226 MHz
E4K L-band gap: 1114 to 1241 MHz
After getting the dongle and the drivers all set up, we want to listen to some stuff! As I mentioned earlier, building various signal processing flows is totally beyond the scope of what we’re trying to do; all we want is a simple tuner with some knobs to twist, and eventually some sound coming out of the speakers. The easiest receiver software I’ve found so far is gqrx (also on Github).
Gqrx is very easy to grok, even for beginners with no experience listening to radio. Start off by picking a frequency that you know should be active - broadcast FM radio is the obvious choice here - and just tinker with the knobs until it sounds reasonable. Learn the difference between AM and FM. Learn how the FM filter works. Play with the squelch levels to silence the white noise on channels that aren’t always active. From my experience, it takes a while to understand how everything comes together.
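To build some intuition for what the FM demodulator is actually doing, here’s a tiny pure-Python sketch (purely illustrative, not part of gqrx): an FM signal encodes the message in its instantaneous frequency, which you can recover as the phase difference between consecutive I/Q samples.

```python
import cmath
import math

def fm_demodulate(iq):
    """Recover instantaneous frequency (radians/sample) from complex I/Q samples."""
    return [cmath.phase(b * a.conjugate()) for a, b in zip(iq, iq[1:])]

# Synthesize a baseband FM signal: a constant 1 kHz frequency offset
# sampled at 48 kHz (the numbers are arbitrary choices for the demo).
fs = 48000.0
offset = 2 * math.pi * 1000.0 / fs   # radians advanced per sample
phase = 0.0
iq = []
for _ in range(200):
    phase += offset
    iq.append(cmath.rect(1.0, phase))

demod = fm_demodulate(iq)
# Every demodulated sample recovers the constant offset we modulated in.
assert all(abs(d - offset) < 1e-9 for d in demod)
```

Real FM receivers add filtering, de-emphasis and resampling on top, but this phase-difference step is the heart of it.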
After playing around with broadcast FM, you can advance to other transmissions: air traffic, ham radio, police and fire services, navigation beacons, GPS, GSM, POCSAG, P25. Each of these subjects is an entire post in and of itself.
The final point I want to make is that listening to radio waves has lots of nuances to it. The stock antenna shipped with the dongles is absolutely insufficient for receiving anything other than strong signals. If you’re serious about doing SDR, you’ll have to invest time researching proper antenna setups and reducing noise.
Nonetheless, this cheap SDR setup is mind-blowing in how easy it can be to start playing around with stuff that used to be extremely expensive.
Heroku is an awesome platform for hosting web applications, that much is obvious. A few days ago I had another application to deploy on Heroku, but with a different usage profile. The application, a simple breaking-news tweeting app, periodically scrapes a popular Israeli forum for breaking headlines and tweets them - a fairly straightforward task. However, this application has no request-response cycle, and in fact no open web gateway at all; it’s just a simple task running periodically, every minute in our case.
Naturally, this task needs to run on a 24/7-available server, not just on some random desktop. Sure, I have several VMs I could piggyback this task on, but I wanted to find a way to package this little task properly so that I could deploy it easily on Heroku and forget about the whole thing. Since I’m running a single process on a single Heroku dyno, if I could get it to work, it wouldn’t cost a thing.
For asynchronous and scheduled tasks in Python, the obvious solution is Celery. The core of the setup is a single Celery worker running a periodic task. Since we only have one worker, and we can’t spare another process for the Celery beat scheduler (it’d cost another Heroku dyno, which isn’t free), we’ll run the celery worker process with the -B flag, which bundles the worker and the beat scheduler into one convenient process.
Celery can’t work without a message broker; naturally, on Heroku we’ll use the Redis To Go redistogo:nano plan.
In my last post I set up an ARM EABI toolchain to work with my CCC r0ket badge. Incidentally, I just received my Texas Instruments Stellaris dev board and wanted to start playing around with it. Unfortunately, TI’s development tools are highly bloated, proprietary and almost exclusively geared towards Windows environments. Unacceptable. I wasn’t about to download a 1.3GB file just to get a LED blinking on a dev board using my Mac.
As it turns out, all the building blocks are there, and it’s just a matter of putting them together. Here’s how to get a simple project compiled and flashed on your TI Stellaris by using an open toolchain.
First, we need a cross-compiler. For that, we use the ARM EABI toolchain which can be installed using the amazing ARM EABI Toolchain Builder. Follow the instructions, and make sure you have the respective bin directory in your path.
Next, we need the flashing tools. Fortunately, some code is already available in the lm4tools package. It depends on libusb, so install that with your favorite package manager; otherwise it’s a breeze to install. lm4tools supplies us with both a flashing utility and a USB/ICDI debugging bridge. For now we just want the flashing utility. The package already comes with a ready-made binary we could test with, but we’ll go ahead and compile our own. It’s just more fun that way :)
Finally, we need all the source and header files relevant to the Stellaris. Those all exist in TI’s StellarisWare packages, but are a bitch to download. Seriously, I won’t even try to link to them. I extracted all the necessary files to my own Stellaris repo on Github, and cloning that should get you everything you need. After cloning the repo, cd into one of the projects, such as boards/ek-lm4f120xl/project0.
If all is well, running make will quickly yield the output binary located in gcc/project0.bin. We’re now ready to flash. Point to your lm4flash util and run:
$ ./path/to/lm4flash gcc/project0.bin
If the flashing process was successful, the RGB LED on the Stellaris should now be blinking blue and red alternately. Awesome. A trivial exercise would be to add a green blink to the sequence.
It’s cool to have the board running at last, but it’s a shame TI doesn’t make this stuff more accessible and open. From what I’ve seen so far, the Stellaris is a pretty neat board, and I hope to write more in the future about the advanced functionality you can get out of it.
29C3 is coming up, and after completing and submitting my talk proposals, I’ve recently started hacking on my r0ket badge, which I managed to get my hands on a year ago at 28C3.
After setting it up and doing some SMD soldering with the RGB flame module, the next step is hacking on the r0ket’s firmware, writing l0dable applications.
The r0ket has an ARM processor, and its firmware and applications are cross-compiled using the ARM EABI toolchain. The r0ket wiki has instructions on how to set up an environment on Mac OS X, and I’ll try to give some complementary tips on how to accomplish that.
My preferred option would be to use standard homebrew formulae as much as possible. Unfortunately, homebrew chose not to include the ARM EABI toolchain in its offerings. A homebrew fork has support for the arm-none-eabi-gcc formula, but I found it not up to date.
If you use MacPorts, it might be possible to sudo port install arm-none-eabi-gcc, but unfortunately MacPorts and homebrew are mutually exclusive, and I’m definitely sticking with homebrew.
By far the easiest solution I found was a simple-to-use makefile, wrapped up with some patches, built specifically for the task of building an ARM EABI toolchain; it can be found on github.
Make sure you have the proper dependencies first:
brew install mpfr gmp libmpc libelf texinfo
Then simply clone the repository, and run the makefile:
git clone https://github.com/jsnyder/arm-eabi-toolchain
Remember you’re building the entire toolchain, so expect this step to take at least an hour, and your Mac to heat up running at 100% CPU. When all the tools are built, you can find them in ~/arm-cs-tools. Remember to add ~/arm-cs-tools/bin to your $PATH (in your ~/.bash_profile, for example).
The bonus for all this is that I just recently received my Texas Instruments Stellaris Launchpad evaluation kits, and I’ll definitely be making heavy use of this toolchain. Not to mention that an ARM-based Arduino board is in the making…
Heroku is increasingly becoming my favorite platform for deploying simple Python applications. Heroku gives you a completely managed environment where you can deploy an app in literally minutes. Not to mention that the free tier on Heroku (1 dyno, Postgres dev plan) can actually get you pretty far.
You can follow the official docs on Heroku that explain how to get started from scratch, but I find them lacking some explanation on how to set up Postgres, so here’s the complete formula I use to rapidly deploy simple Python apps.
I’m going to assume you have a basic project set up; if not, just follow the aforementioned tutorial. Now we need to add support for PostgreSQL. We’ll do that using Flask-SQLAlchemy, which gives us everything we need to connect to the Postgres DB, as well as an easy-to-use ORM. So first we need to install the dependency (pip install Flask-SQLAlchemy) and add it to our requirements.txt.
For this step, you can optionally use Kenneth Reitz’s flask-heroku library, which handles setting all connection URLs automatically, not only for Postgres, but for other services such as redis, sentry, exceptional and others.
The next step is to commit the boilerplate code and create the actual DB tables:
$ git commit -a -m "added DB boilerplate"
$ git push heroku master
$ heroku run python
Once we have a connected Python terminal, we can import the db object and run db.create_all() to create the tables. And we’re set! From here we can use SQLAlchemy to define models and create, query, and delete objects. For example, we can start off by defining a User model.
In any case, I’ve found this stack to be pretty damn solid. Since we were doing real-time WebGL rendering, and syncing that data across a multi-client landscape, we were actually sending dozens of messages per second (granted, small messages), and that also worked out surprisingly well.
Anyway, enough with the talk, here’s the stack. For starters, the entire app is served on the Tornado web server, a non-blocking web server that excels at this kind of stuff. It also has some nice “classic” web app support, such as authentication, templates, etc., so we used it for serving the entire app itself, and not only the real-time messaging layer.
Next up is the messaging protocol. We started out using socket.io, which has a default implementation in Node.js and is supported on Tornado via tornadio2. This worked out fine, but following a conversation with MrJoes (the tornadio2 maintainer), we decided to switch to sock.js, which also has a Tornado server implementation, sockjs-tornado. In essence, socket.io’s protocol is known to have some defects, and the fact that anything other than the Node.js implementation is a second-class citizen just feels awkward. Sock.js is a fully-tested protocol and generally feels more solid, so we decided to go with it.
Most messaging examples in Tornado involve a class-level variable that maintains connections to all connected clients. This is a horrible setup and should never be used for anything beyond trivial applications. It’s like maintaining data inside your web server because you’re too lazy to spin up a real database.
So for all the messaging stuff, we decided to use Redis’ pub-sub capabilities. And since we’re in the context of Tornado, we’re also going to need a proper asynchronous interface - which is done beautifully by the brukva library. As a side note, I should mention that brukva is implemented using adisp, and does not employ any of the Tornado async building blocks. There is another project, tornado-redis, which claims to do just that, but I haven’t gotten around to actually using it. You might have more luck with that, though. In any case, brukva works just fine.

(Update: Since the original post, tornado-redis has proven to be the superior option, as it uses the standard async tools provided by Tornado.)
And that’s pretty much it. We can bring it all together with a ConnectionHandler that has all the functionality we need.
And that’s how you do real-time messaging with Python.
The past several weeks have kept me very busy with my latest collaboration with new-media artists Omer and Tal Golan. Our project, PlantAComment.com (שיח גלריה in Hebrew), is an interactive installation that encourages visitors to plant thoughts that manifest themselves as trees in a semi-apocalyptic 3D world. The installation premiered this week at the 2012 Fresh Paint art fair in Tel Aviv. Throughout the week, our project has received much acclaim from visitors of all ages.
On the technical side, the project is a behemoth in terms of how many technologies we’ve used to make it all happen. The server side is based on a core Tornado web server that handles all HTTP requests, as well as WebSocket connections. Redis is used both as a back-end store and for pub/sub of new messages, which arrive via SMS text messages, Twitter, and G+ posts. With the help of the amazing Nir Ofek, we’ve also implemented advanced semantic analysis on all incoming texts, allowing us to cluster similar subjects on the same trees. Credit for the beautiful soundscape goes to the most talented Nir Danan.
The most impressive aspect of the project, by far, is the WebGL implementation of the 3D world, which is capable of running in-browser on any WebGL-capable modern browser. The highlight of the installation was deploying our project on Google’s Liquid Galaxy setup - seven machines connected to seven 55” LED screens running in complete synchronization, showing a 180-degree view of the world. This is the first time in the world that an art project has been deployed on this platform.

Expect to hear more about this project in the near future ;)
Bottle apps can be deployed on ep.io as generic WSGI apps. It’s not an immediate thing, though; there’s a small workaround you need to apply before you can set bottle as a requirement to be installed.
For some reason, the Bottle version that ep.io pulls from PyPI fails with a weird ImportError. The solution is to pull directly from the git repo.
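In practice that means pointing requirements.txt at the git repo instead of the PyPI release - something along these lines (the exact URL and pinning style are an assumption on my part):

```
-e git+https://github.com/bottlepy/bottle.git#egg=bottle
```

pip will then clone and install bottle straight from git during the deploy, sidestepping the broken PyPI package.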
For three days now, at PyCon 2012, I haven’t been able to browse any Stack Overflow / Stack Exchange page. Why? All the wireless networks here at the conference are unencrypted. Connecting without passing through a secure connection (VPN/SSH tunnel) is an endeavor I would recommend to no one; riding an open wireless network unprotected is simply asking for trouble. So I route my traffic through a VPN endpoint on EC2 - and SO/SE apparently block EC2 IPs.
Now, I’m not intimate with SO/SE’s traffic patterns, and I’m sure they are highly susceptible to content-farm scraper bots. But blocking all EC2 IPs is the most stupid way of dealing with this that anyone could think of. Real scraper bots that depend on content mining will easily find other IPs to access SO/SE from. Newsflash: I (and other legit VPN users) don’t have a spare bank of public IPs or VPN endpoints.
A simpler solution would be to rate-limit requests from any single IP to a rate that normal users would never notice (say, 5 requests/sec).
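For illustration, per-IP rate limiting is only a few lines with a token bucket. This is my own sketch of the idea, obviously not SO/SE’s actual infrastructure:

```python
import time

class TokenBucket:
    """Allow up to `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        # Refill tokens proportionally to the time elapsed, then spend one.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

buckets = {}  # ip -> TokenBucket

def allow_request(ip, rate=5, capacity=10):
    bucket = buckets.setdefault(ip, TokenBucket(rate, capacity))
    return bucket.allow()
```

A well-behaved browser never comes close to the limit, while a scraper hammering the site from one IP starts getting rejected within a couple of seconds.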
Until Stack Overflow / Stack Exchange implements a better way of blocking scrape bots without blocking legit users, I’ll continue to suffer anytime I’m not on a secure wireless network.