Image processing with Go

A few days ago I wanted to do some batch image processing. It could be done in Photoshop, I know, but I also wanted to have some fun and learn some algorithms. I started studying Go in January, and this sounded like an opportunity to practice a little, so I began to write my own program to process images: blzimg.

blzimg will have some image operations. The first of them (and the only one until now) is called “lightest”. It merges the lightest pixels of a list of images into a single image.

Comparing the luminance of pixels

The first image operation I wanted to implement takes some images and compares their RGB values at every pixel (x, y). The lightest pixels at each position (x, y) compose the final image.

Images talk better than text. Let’s use these 3 images:

[Figure: three 3×3 grids — img1.jpg with the first column white, img2.jpg with the second column white, img3.jpg with the third column white]

The lightest operation will merge these three images into this final image:

[Figure: a fully white 3×3 grid]

The grey pixels were fully replaced by the white pixels, since the latter are lighter. The formula to obtain the luminance of a pixel was borrowed from this question at StackOverflow. L is the luminance, and r, g, b are the colour components of a pixel:

L = 0.2126 * r + 0.7152 * g + 0.0722 * b

The higher the luminance, the lighter the pixel. The main idea is that if the luminance L of one pixel is greater than the L of another pixel, the first one is the lighter and is therefore chosen.

I don’t know if it’s scientifically the best choice, but it has worked for my purposes. This is my implementation in Go:

Comparing images

At first, I created a function to receive a slice of image.Image‘s and travel through their pixels, comparing them:

In this old version, notice that the first image was read twice in the for loop, and an empty slice would ruin everything. 😀 But the idea is this: store the lightest pixel so far and compare it to the pixel in the current image. If the newer pixel is lighter, it replaces the current lightest pixel.

I created this function using TDD and it worked well with image.Image‘s, but how can we parse images from File‘s and keep the code testable?

Using containers for testing

The first version of the Result() function received a slice of image.Images, based on my test, where I created some image.Images to verify the result. But that has some limitations in the real world. How could I handle real files?

  • If I used a slice of image.Images as the argument, I would have to get a list of files, decode them all up front and create a very heavy slice of image.Images to pass to Result().
  • If I used a slice of Files as the parameter, unit testing would become harder.

I created an interface to solve both cases: ImageContainer.

Its implementations must have a GetImage() function that returns an image.Image only when needed, so it’s a more lightweight approach. For example, a FileImageContainer keeps the file path and returns the image.Image when GetImage() is called. An ImageItselfContainer, used in the unit tests, keeps the image data itself and returns it when GetImage() is called. This is the current implementation:

The final version of Result(), now using ImageContainers instead of image.Images, is shown below. The image operation doesn’t know (and doesn’t need to know!) what kind of container it’s dealing with, and now the same code can handle both image.Images and Files!

Parsing command line arguments with cli.go

To parse command line arguments I used cli.go. It’s a library that parses command line parameters and creates a nice help output:

$ blzimg
NAME:
   blzimg - Execute some operations on images

USAGE:
   blzimg [global options] command [command options] [arguments...]

AUTHOR:
   Esdras Beleza

COMMANDS:
   lightest, l  Merge the lightest pixels of some images in a single one
   help, h      Shows a list of commands or help for one command

GLOBAL OPTIONS:
   --output "final.jpg"  Output file
   --help, -h            show help
   --version, -v         print the version

Finally: running blzimg

A few days ago I took some crappy pictures just to test my new Rokinon 12mm lens on my Fuji X-E1 camera. I used an intervalometer to take these pictures with an interval of 20 seconds between them, for a total of 8 minutes, which were compressed into this timelapse of a few seconds:

What if we run blzimg to merge all the images from this timelapse into a single image using the lightest operation?

blzimg --output final.jpg lightest 2015-04-07_21-*

Since the lightest points in these pictures are the clouds and the stars, the output is a giant cloud and the amazing beginning of a star trail as if it were below the clouds:


Show me the code!

The full code can be downloaded at my GitHub. 😀


Sometimes it happens: I’m programming but I must stop and go to sleep. Some new feature is working, but needs some better code or error handling. So I put that ugly TODO there.

These kinds of comments generate technical debt, and it’s good to track them. That’s why I wrote my first sbt plugin: sbt-findtags.

Until now, sbt-findtags has a small but useful set of features:

  • It lets you specify which tags to search for in your source code (the default tags are TODO and FIXME)
  • It generates a text report showing where the found tags are (file and line number)
  • Don’t want tags in your precious source code? You can make the build break when they are found.

Instructions on how to add sbt-findtags to your Scala project can be found in the project’s README file. Contributions and suggestions are welcome.

BDD: Using JBehave with Maven and Gradle

I’m studying Behaviour-Driven Development and evaluating some BDD frameworks for Java. The first one was JBehave.

Behave, baby!

I first made a small project, a simple calculator to multiply and divide numbers. The project uses Maven to download the dependencies and run my tests. I got it running and then I ported the same project to run using Gradle.

My conclusions are:

  • I’m a little bit used to practicing TDD with JUnit. Practicing BDD with JBehave required some work to get my tests running: JBehave is not as trivial as JUnit and requires some configuration.
  • After this first part of the work is done and you write one or two tests, you get more and more used to writing tests in story form.
  • Gradle has a gentler learning curve than Maven and is easier to configure.

I’ll probably be using BDD at work in the coming months, but I don’t know yet whether I’ll use BDD instead of TDD in my personal projects, since TDD still seems more practical to me.

If you want to analyse my small project, it’s available at GitHub in Maven and Gradle formats.

The first tips you’ll need before you start programming for Symbian using Qt

Last weekend I made my first Qt/Symbian mobile application. It was a very simple project, and its only purpose was to learn how to program for the Symbian platform. I made a small application to search for information about movies on TheMovieDb, and its source code is available on my GitHub profile.

A Qt/Symbian app is very close to a normal desktop Qt application, but there are some small differences that can confuse you, and some tips you may need. Here is my advice on some pitfalls I’ve found.

Instead of showing widgets, add them to a QStackedWidget

(this is the best solution I’ve found, but there seem to be others)

In desktop Qt, you create a new QWidget and call show() to open a new window containing that widget. This won’t work on Symbian: all you’ll get is a small, transparent widget in the corner of the screen.

The solution is to get the QMainWindow of your application, add a QStackedWidget as its central widget and add your new widgets into this QStackedWidget.

Every new widget must be added to the QStackedWidget. It sounds painful, but the Qt documentation tells us that when a widget is added to a QStackedWidget, the QStackedWidget becomes its parent.

When you create the first widget and add it to the stack, the QStackedWidget becomes its parent. So, to create a second widget and add it to the stack, you create the new widget as usual and ask the first widget’s parent – the QStackedWidget! – to add it to the stack.

[code language="cpp"]
// Create your widget
QWidget *someWidget = new QWidget(parent());

// Get a reference to our QStackedWidget by casting the widget's parent
QStackedWidget *stackedWidget = (QStackedWidget*) parent();

// Add the widget to the stack
stackedWidget->addWidget(someWidget);
[/code]

Creating menus and associating to positive and negative buttons

Nokia cell phones, even the cheaper ones, have options associated with the positive and negative buttons. Take a look at this picture of an old N70:

In the picture above, the positive action is Options and the negative action is Back. Creating options like these for your widget is quite simple.

In your widget’s source code, put the following lines in your constructor. Note the lines where we use setSoftKeyRole to associate an action with a key. In my example, I have a “Back” shortcut and a “Details” shortcut that call some slots.

[code language="cpp"]
// Register the negative action
QAction *backToMainScreenAction = new QAction("Back", this);
backToMainScreenAction->setSoftKeyRole(QAction::NegativeSoftKey);
connect(backToMainScreenAction, SIGNAL(triggered()), SLOT(removeWidget()));
addAction(backToMainScreenAction);

// Register the positive action
QAction *selectResultAction = new QAction("Details", this);
selectResultAction->setSoftKeyRole(QAction::PositiveSoftKey);
connect(selectResultAction, SIGNAL(triggered()), SLOT(showDetailsAboutTheCurrentItem()));
addAction(selectResultAction);
[/code]
This code registers each widget’s actions. But to make the options show up on the screen, we must make the widget’s container – the QStackedWidget! – register these actions in the menu every time the current widget changes. I put the following lines in the same file as my main window containing the QStackedWidget. First, we create the following slot:
[sourcecode language="cpp"]
// This is a slot!
void MainWindow::updateActions() {
    QWidget *currentWidget = stackedWidget->currentWidget();
    if (currentWidget != 0) {
        // Register the current widget's actions in the main window,
        // so its soft keys show up in the menu
        addActions(currentWidget->actions());
    }
}
[/sourcecode]

In the class’s constructor, we connect the stacked widget’s signals, emitted when a widget is added or removed, to the slot above:

[code language="cpp"]
connect(stackedWidget, SIGNAL(currentChanged(int)), SLOT(updateActions()));
connect(stackedWidget, SIGNAL(widgetRemoved(int)), SLOT(updateActions()));
[/code]

Don’t believe the Nokia simulator

It’s still experimental. Sometimes you may think you made a mistake and have a bug, but you don’t: the simulator sometimes gets confused by menus and soft buttons. If you get lost in your code and can’t find the cause of some obscure bug, try your application on a real device.

Good luck with the Remote Compiler

It’s still experimental too. When I tried to use it, I got some short error messages whose cause I couldn’t discover. So if, like me, you don’t use Windows, prepare a virtual machine running the Qt SDK on Windows.

The Qt Ambassador Kit and my first impressions of Nokia C7

A few months ago, I submitted my personal audio player project, Audactile, to Nokia’s Qt Ambassador program. They seem to have liked it, and I was accepted into the program.

The good surprise is that they sent me a Qt Ambassador kit: a t-shirt, some stickers and a Nokia C7 mobile phone. The pictures I took aren’t very good, but here they are:

Qt Ambassador Kit

The stickers are beautiful and I’m thinking of where I’ll put them. 🙂 The most attractive item is, obviously, the Nokia C7. It runs Symbian^3, and it was the very first time I could use a device with it. My first impressions:

  • The capacitive touch screen is very good. I have already tested devices from Apple, Motorola, Sony Ericsson and HTC, and I can say this one is very sensitive. It supports pinch zoom on images.
  • Symbian^3’s UI is way better than its predecessors’. It’s like a mix of the old Symbian (since it’s still Symbian…), iOS and Android. You get non-intrusive alerts at the top of the screen (like on a desktop) instead of the old ugly rectangles that earlier versions of Symbian used for notifications. The menus are still a bit confusing, but they’re better than my old Nokia E71’s menus.
  • The AMOLED display is very sharp, its colours are brilliant and images look great. An ambient light sensor adjusts the display’s brightness, like my MacBook does.
  • The stereo sound from its rear speakers is clear and powerful. Nokia knows how to put good sound into its devices.
  • Its camera has 8.0MP, dual flash and face recognition. It can record videos in HD resolution, but I haven’t tested it yet.

The Nokia C7 has very powerful hardware. Its software is pretty cool and, IMHO, as friendly as Android (unfortunately Nokia will replace Symbian with Windows Phone in some time…). The phone came with a short letter asking me to develop for it, so that’s what I’ll do: I’ll try to write some Qt applications for Symbian, and I’ll report any progress here. 🙂