Transforming functional code into imperative/procedural code – and why you shouldn’t

I was watching a video about a mathematical concept in which someone tried to explain how to rewrite a recursive function for a programming language that can’t do recursion – yet still ended up writing the supposedly impossible recursive functions.

Rewriting recursion into something iterative is genuinely useful in programming, as recursion rapidly fills up your stack with information that you don’t actually need to run the code.

So here’s an example of actually rewriting a functional, recursive piece of code into something that doesn’t use recursion.

As in the video I watched, I will do this with the function that calculates the factorial of a number.


n! or n factorial is a simple little function that calculates 1 * 2 * 3 * 4 … * n, so for example 6! = 1*2*3*4*5*6 = 720

Introduction to functional code

To write a functional implementation of n!, it’s important to know that the function works based on conditions. Namely that n has to be a natural (non-negative) integer, and that 0! is defined as 1 despite the previous explanation.

In functional languages you usually write down the conditions as overloaded versions of the actual function, like this:

fac(0) => 1
fac(n) => n * fac(n-1)

We write this down as going from n down to 0, because we can’t really check and stop counting upwards in functional languages (we would need a global variable, which introduces side effects); luckily, for factorial the order in which you multiply the numbers doesn’t matter.
To save one recursive call, you could add one more condition:

fac(0) => 1
fac(1) => 1
fac(n) => n * fac(n-1)

Transforming functional into procedural

Sometimes we can just copy-paste functional code into regular programming languages without any problems, but since most languages can’t overload functions based on parameter values – only on parameter types or counts – we need to rewrite this first.

We can do this simply by writing the conditions as part of the base function and returning the desired conditional result. In this case: if the input is 0, we simply return 1 instead of n * fac(n-1).

function fac(int n)
{
    if (n == 0) return 1;
    return n * fac(n - 1);
}

*) assuming n >= 0, or things will get out of hand
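As a runnable sketch in JavaScript (same assumption: n >= 0):

```javascript
// Direct translation of the functional definition:
// fac(0) => 1, fac(n) => n * fac(n - 1)
function fac(n) {
  if (n === 0) return 1;
  return n * fac(n - 1);
}

console.log(fac(6)); // 720
```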

Transforming recursion into loops

Recursion takes up a lot of memory: the higher the input n gets, the more memory and CPU time the program will demand – perhaps even crashing with the infamous “Stack overflow” error.

So whenever we feel like we might run into these kinds of problems, we can rewrite our code into looping code.

We can rewrite this code in a lot of ways, but let’s try this as if we’re really uncreative.

To do this – in a rather complicated way – we need to do the following:

  • Use a result variable
  • Replace the recursion point with a variable that remembers the input for each iteration
  • Loop until the needed conditions are met

function fac(int n)
{
    if (n == 0) return 1;
    if (n == 1) return 1;
    int result = n;
    int nextn = n;
    while (true)
    {
        if (nextn == 0) return result;
        if (nextn == 1) return result;
        result = result * (nextn - 1);
        nextn = nextn - 1;
    }
}

Note that we’re using the shortcutting version with an extra condition for when nextn is 1 – not because we fancy that, but because we have to prevent multiplying by 0. Of course, we could write these two if statements as one, because they leave us with the same result: if (nextn <= 1) return result;

For a slightly cleaner version, instead of special-casing 0 everywhere, we can use the result of the 0 condition (1) as the initial result, so we no longer compute nextn - 1 in two places. In this version we also don’t need the 1 condition, because the function always returns before it can multiply by 0.

function fac(int n)
{
    int result = 1;
    int nextn = n;
    while (true)
    {
        if (nextn == 0) return result;
        result = result * nextn;
        nextn = nextn - 1;
    }
}

And of course, instead of using an infinite loop, we can embed the stop condition in the loop definition by checking for its inverse.

function fac(int n)
{
    int result = 1;
    int nextn = n;
    while (nextn != 0)
    {
        result = result * nextn;
        nextn = nextn - 1;
    }
    return result;
}

Why did we do this again?

As usual with these kinds of rewrites and optimizations, we have to do a lot of thinking to get things done. By now the function looks nothing like what we initially started with, and perhaps we were better off doing this the “normal” way instead of all this functional nonsense: whenever you see something counting, write a for-loop.

function fac(int n)
{
    int result = 1;
    for (int nextn = n; nextn > 1; nextn--)
    {
        result = result * nextn;
    }
    return result;
}
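For reference, the same for-loop version as runnable JavaScript:

```javascript
// Iterative factorial: multiply result by n, n-1, ..., 2.
function fac(n) {
  let result = 1;
  for (let nextn = n; nextn > 1; nextn--) {
    result = result * nextn;
  }
  return result;
}

console.log(fac(6)); // 720
```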

But why do it the easy way when we can apply really slow and impractical concepts from functional languages 😉



Number prediction through layered addition


While trying to come up with simple ways to do pattern recognition and prediction, I figured I might as well start from the very basic fact that multiplication of natural numbers is equivalent to adding the same number multiple times. So when you say 3 * 5, you’re also saying 5 + 5 + 5.

Similarly, but slightly more involved: you can figure out the square of 5 by rewriting it to multiplication and subsequently rewriting the multiplication as addition. So when you say 5², you’re also saying 5 * 5, and thus also 5 + 5 + 5 + 5 + 5.

You can repeat these processes with all natural number powers as well. Say 3³, which would be 3 * 3 * 3, which is the same as (3 + 3 + 3) * 3, etcetera.
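A toy sketch of that reduction in JavaScript (function names made up):

```javascript
// Multiplication of naturals as repeated addition: a * b = a + a + ... (b times)
function mulByAdd(a, b) {
  let sum = 0;
  for (let i = 0; i < b; i++) sum += a;
  return sum;
}

// A power as repeated multiplication, where each multiplication
// is itself performed by repeated addition.
function powByAdd(base, exp) {
  let result = 1;
  for (let i = 0; i < exp; i++) result = mulByAdd(result, base);
  return result;
}

console.log(powByAdd(3, 3)); // 27
```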

This concept gives you the freedom to zoom in to parts of a certain sequence of numbers and spot patterns by only knowing about addition. Basically reversing the process.

Linear prediction

Take a linear sequence like f(x)=x*3, where the answers would be [3,6,9,12,15,…]. By looking at every pair of adjacent numbers, you can see (by subtracting them) that each next number is calculated by adding 3 to the current one. This would reasonably lead you to predict that the number after 15 will be 18.

Since these numbers have a linear relation to their predecessors, a really simple way to program a prediction for this would be to check the difference between each pair of adjacent numbers, and if the difference is always the same, add that difference to the last number.
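As a sketch in JavaScript (names mine):

```javascript
// Predict the next number of a linear sequence, or return null
// when the differences between neighbours are not all the same.
function predictLinear(seq) {
  const diff = seq[1] - seq[0];
  for (let i = 1; i < seq.length - 1; i++) {
    if (seq[i + 1] - seq[i] !== diff) return null;
  }
  return seq[seq.length - 1] + diff;
}

console.log(predictLinear([3, 6, 9, 12, 15])); // 18
```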

Layered prediction

Take a sequence of numbers for f(x)=x³, for example the first range where x goes from 1 to 5 are [1, 8, 27, 64, 125], and the 6th number in that sequence will be 216, because 6³=216.

This sequence doesn’t have any numbers that are linearly related to each other. But by going back to the concept of zooming in and reverse engineering the numbers to basic additions, you can find the lowest common additions and patterns in this sequence of numbers.

In short: what you do is calculate the differences between adjacent numbers of the sequence, then repeat the process on the differences you got, until you can no longer do so. (You can stop earlier when you spot a linear pattern.)

What you get with this iterative process is a sort of pyramid that you can use to duplicate the additions to calculate the next number in the sequence.

So the pyramid for the sequence [1,8,27,64,125] would be

L0 1   8   27   64   125
L1   7   19   37   61
L2     12   18   24
L3        6    6
L4           0

You can see that on the lower level (L3) of the pyramid you’ll eventually be able to spot a linear pattern. The fewer numbers you have, the less chance you’ll be able to see it, but with this sequence of numbers, we can.

If we had only 4 of the numbers, we would predict correctly (though it would probably be more of a guess), but 3 numbers would not be enough. With just 3 numbers we wouldn’t be able to spot a linear correlation between the numbers at all.

Depending on the pattern there are limits to what this method can do, but if your sample-size isn’t big enough, no other method would work either.

Extending the pyramid

To calculate our 6th number, what we do is use addition to climb from our L4 of the pyramid back to our original level L0. Just like when we would try to calculate 4 * 5 from the first example by taking what we know, namely 3 * 5, and adding 5 to it.

We only need the last numbers of every pyramid layer in this case (so [0, 6, 24, 61, 125]), and we add the number that we calculated on the previous level of the pyramid.

L4: 0   +  0 = 0
L3: 6   +  0 = 6
L2: 24  +  6 = 30
L1: 61  + 30 = 91
L0: 125 + 91 = 216

And there we have our answer.

Example code in Javascript:
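A minimal JavaScript sketch of the method described above:

```javascript
// Build the difference pyramid and use it to predict the next number.
function predictNext(seq) {
  // L0 is the sequence itself; every next layer holds the differences
  // between adjacent numbers of the layer above it.
  const layers = [seq];
  while (layers[layers.length - 1].length > 1) {
    const top = layers[layers.length - 1];
    const next = [];
    for (let i = 0; i < top.length - 1; i++) next.push(top[i + 1] - top[i]);
    layers.push(next);
  }
  // Climb back up: add the last number of every layer, bottom to top.
  let carry = 0;
  for (let i = layers.length - 1; i >= 0; i--) {
    carry += layers[i][layers[i].length - 1];
  }
  return carry;
}

console.log(predictNext([1, 8, 27, 64, 125])); // 216
```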


#storytime – MongoDB

Some two years ago I started experimenting with MongoDB and found an appropriate use case to actually use it in production, whatever that means if it’s used at a company that does software development. For our specific use-case and setup, it works pretty well, but it took some time.

I’ll try to expand on how I see MongoDB in the realm of databases and how I have used it in practice. For technical details, please refer to the documentation, or go directly via the URLs I’ve tagged in this post.

What is it, really?

MongoDB is a database specialized for documents. A document is a structured type that contains a tree of information about something. If you had to model one in a traditional RDBMS, you would end up with a lot of tables and a lot of relations drawn, because a single table makes for a really boring document. A better comparison would be 1 table with 1 field containing the serialized document data (ok, 2 fields if you include the ID). Instead of having strongly typed and defined tables like an RDBMS, MongoDB is optimized to index and search through the actual document data.

When would you ever need this nonsense?

The obvious answer to this question is: because sometimes you have no clue yet what your records will look like. You can scale and decide the amount of information you want to store at a later time, and you would only have to update the software that feeds the information to your document database, not the database itself.

The term Big Data, annoying buzzword though it is, means storing a lot of data from anywhere – right now – and making sense of it later. When you find out you need more data to solve a certain problem, you don’t have to redo all the work and start storing data; you’d already have it, and you would only change the algorithms that search through the data.

Using it as a traditional database would be really silly, probably slow, potentially less reliable, and overkill if you were to accommodate for the slowness by scaling up the hardware/cloud needed to fulfill an RDBMS role.

There are of course other reasons, like the somewhat native ability to scale your database horizontally over multiple servers. Traditional databases are generally a lot faster on a single server, but though they try very hard to stay hip and cool, they aren’t very good at scaling horizontally to increase performance.

What happens in practice?

We have been using MongoDB for a long time now, storing errors. By default, our (Delphi) software sends exception information and call stacks to one of our HTTP servers. You wouldn’t reliably be able to send it directly to MongoDB, for many reasons: firewalls, database security, no preformatting. We do need to format our exception data before it can be inserted into the database, so that it looks like a proper JSON document MongoDB will accept. So the software posts the exception information to our HTTP server, sometimes even with attachments, and every minute we postprocess it and insert it into MongoDB as JSON.

And while that process was running, and while we were still trying to figure out how to get the information out of the database, we added more sources of errors, this time our websites. Custom PHP error and shutdown handlers collect information and, if possible, a stack trace, and send it to our HTTP server, which formats it and inserts it into the database. As you can imagine, the format of these error documents looks very different from that of the regular Delphi software, but the database doesn’t really care until you query the information in the documents.

How does this work with MongoDB?

Before you can start inserting documents, you need the server and a collection to store the documents in. After you’ve inserted your documents, you can query them in a couple of ways: existence of elements, comparison of values, even regular expressions. To optimize the queries you can put indexes on multiple elements, just like you can on fields in a regular RDBMS. And you can’t really index the regular expressions (though text indices might work), just like you can’t index for SQL queries with LIKE operators.
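For illustration, a hypothetical mongo shell session covering those options (collection and field names made up; ensureIndex was the call at the time, later renamed createIndex):

```javascript
// Insert a document; no schema needs to be declared up front.
db.errors.insert({ source: "delphi", message: "Access violation", stack: [] });

// Existence of an element
db.errors.find({ stack: { $exists: true } });

// Comparison of values
db.errors.find({ occurredAt: { $gt: ISODate("2014-01-01") } });

// Regular expressions (cannot use a normal index, much like SQL LIKE)
db.errors.find({ message: /violation/i });

// Index on elements to speed up the non-regex queries
db.errors.ensureIndex({ source: 1, occurredAt: -1 });
```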

As I found out later when researching other NoSQL document databases, in ElasticSearch for example you have to supply the types/mappings before inserting the data if they differ from what would be automatically detected, which could be problematic if you design your system to insert documents over time like we do. MongoDB is pretty flexible in that aspect.

And does it actually work?

Very well, albeit with a few hiccups. Drivers, especially those from 2 years ago, lack some reliability, especially when you try queries that either return too many documents or search through too many of them. Figuring out the right indices and the right queries for a search you want to perform will take some time at first. Given that after two years we have around 50 GB of documents using only 1 server, it still works pretty well for our use-cases.

These days there’s even this concept called MEAN, which is a full stack of javascript to develop with. Things are moving fast in NoSQL land.

Global State Combat

disclaimer: post is not grammar/spell-checked. not going to either, forgive me.

What are global states

A global state is something that is globally influenced, describing the state in which your application resides.

But a global state is not necessarily a globally scoped entity in your code, nor is it necessarily something you consciously use as a state of what your application is currently doing.

Think of it as data you use from outside your local function scope that can change at an arbitrary point in time. Even if it’s not supposed to operate as a state, changes in your class implementation or the rest of the application can cause it to switch value, secretly acting as a global state.

Why are they bad

I’m not telling yet, you’ll still write them if I would tell that here.

The many faces of global state

The obvious manifestations of global state are things like global variables, statics, singletons, and class variables that describe state (did I mention how obvious these examples are?).

As I mentioned earlier, global states can also be variables that you use, but that can change value. Obviously, sometimes values need to change: you let the user input something, or you want a class that simply contains the properties of an object.

Using a class variable in more than one function, or even in multiple sections of 1 function, while knowingly – or unknowingly – changing the value of that variable somewhere, also counts.

Example of a global state introduced by just 1 little line of code

procedure TMyClass.CheckAndSaveAllValues;
begin
  if FMyValue = 1 then
    FMyValue := 5; // initial value changes state

  SaveValuesProc; // also uses FMyValue
end;

In this example, whether or not it’s important that FMyValue changes value in this function, the line introduces a global state into something that was supposed to be either an initial value set by the owner of the class or a value set by a user. If one were to call SaveValuesProc without that if statement there, the outcome of SaveValuesProc would change.

Extreme Example of a global state introduced by external factors

procedure TMyClass.SaveData;
begin
  qryMainQuery.FieldByName('DateCreated').AsDateTime := Now;

  if qryMainQuery.FieldByName('NeedsMoreInfo').AsBoolean then
    Exit;

  qryMainQuery.Post;
end;
In this example, the important state is technically changed at the DateTime setting, but it’s less important in comparison to what the .Post of the query does.

The Post function does not just save your values to the database; it will also call several callback functions like BeforePost, AfterPost and Scroll, and it changes the query’s ‘State’ property (take that as a hint) to dsBrowse (or similar) from its previous dsInsert/dsEdit.

Along with that, after the Post it will apply all previously set properties of the query to your dataset, like filters, local sorting, master-detail links, etc. So anything you could rely on before the .Post might have changed state, making subsequent calls that use the query behave differently.

Again, Why are global states bad?

  • Global states are bad because you cannot foresee the consequences
  • Global states are usually not obvious
  • They make your application “behave weirdly sometimes” – without actually producing errors
  • They make your code hard to unit-test

Preparing for combat

Don’t use global state!

… Ok, seriously. I know in this case “don’t use” isn’t an action you can put on your todo list; global states creep into your code without you knowing, through something seemingly innocent or just a quick hack.

Maybe you know the word compartmentalisation from the TV series Agents of S.H.I.E.L.D.: it’s a way of keeping things separated and only letting someone know what they need to know to perform a certain task, instead of letting them in on the entire grand plan.

It might have some negative connotation when it comes to human beings and having a two-way relationship based on trust, but when it comes to software, it’s an amazing tool in making things not only independent of global state and safer to run, but also predictable and thus testable.

Hints to avoiding global state dependencies

  • Write functions that depend on passed parameters instead of class variables
  • Let functions use local variables instead of class variables
  • Only influence intended states that you (and other functions in your class) don’t actually depend on
  • When adding functionality to an existing function, consider compartmentalising further (e.g. make a new function with added parameters that have the “right” values)
  • Write functions and code where the order of calling doesn’t matter
  • If order does matter, combine them into 1 function with well-defined input and output, preferably well documented as to which order must be maintained
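As a tiny before-and-after sketch of the first two hints (JavaScript for brevity; all names made up):

```javascript
// Before: the method silently depends on and mutates shared state,
// so its outcome depends on what ran before it.
class ReportBefore {
  constructor() { this.value = 1; }
  save() {
    if (this.value === 1) this.value = 5; // hidden state change
    return `saved:${this.value}`;
  }
}

// After: input and output are explicit parameters, so calling order
// no longer matters and the function is trivially testable.
function save(value) {
  const effective = value === 1 ? 5 : value;
  return `saved:${effective}`;
}

console.log(save(1)); // same answer every time, no matter what ran before
```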

Valuable resources

I’d seen these videos before I actually started to understand what global state was. Hopefully I’ve given you a better idea, though as with many other things in life, you have to experience it before fully understanding it.

“Global State and Singletons”

“The Philosophy of Google’s C++ Code”

(I would highly recommend the CppCon 2014 videos to anyone, even without C++ knowledge.)

Sometimes compilers are silly – Delphi

When you’re trying to optimize code, on occasion you’ll be tempted to open the disassembly window to check what the compiler is actually making of your code. Every programmer that’s vocal on forums will probably tell you you’re out of your mind questioning the compiler and doing optimisation by hand. There’s a reason why programming languages are so feature-rich these days: there’s a lot of optimized internal code that enables the developer to do more with less code. And that’s a good thing in most cases; plus, the people who write the compiler and libraries are a lot more experienced in these things than you are.

However, there will be instances where you’re asking for something that’s in theory very rudimentary, and the compiler will go overboard because you – could – do something complicated with the same types and instructions.

Magic string type

In Delphi we have a really nice type called a string (AnsiString and WideString), which is basically a complex reference-counted type that contains characters, a length, and a collection of operator functions. A string might look like a basic type, but if you have some experience with ye olde C char pointers, you know it’s no picnic. A lot of languages these days have black-boxed this type with wrappers that let you use a string as if it were just a regular integer. I say black-boxed, because there’s usually no way to change or see the code.

The Delphi disassembly viewer will show you at least the function names, so you get a sense of what’s going on in these functions.

Let’s take this example of basic-looking code that I was using for a simplified StringReplace/ReplaceStr function, which returns the original string that you passed into it with 0, 1 or more occurrences of the same character removed.

The code here is initially only optimized so that there should be just 1, or at most 2, memory allocations to contain the string.

function StripChar_old(const cIn: AnsiString; cCheck: AnsiChar): AnsiString;
var
  newlen, x, c: integer;
begin
  newlen := 0;
  c := Length(cIn);
  Result := cIn;
  for x := 1 to c do
    if cIn[x] <> cCheck then
    begin
      Result[newlen + 1] := cIn[x];
      inc(newlen);
    end;
  SetLength(Result, newlen);
end;

You would think that cIn[x] would just pick 1 character from the original cIn string, but the compiler disagrees with that reasoning. Instead there’s a UniqueStringA call, likely because the initial assignment to Result only made a shallow copy of cIn, and on write it needs to make sure Result is actually a new (unique) allocation. Only after that call is the character copied (mov [eax+ebp], dl).

I wouldn’t be worried if this were an occasional thing, but code inside loops, if you really need them, should be as fast as possible. And a function call, no matter how fast it is on the 2nd call, still requires jumping to a different spot in memory (that’s hopefully not swapped out), saving and restoring CPU registers, doing its thing, and jumping back again.

So you can see that, even though the String implementation is great for rapid development, it comes with a cost.

So how do you solve this problem should the performance be an issue?

If this was a different kind of problem involving different types or more complex algorithms, we could probably rethink and rewrite a different algorithm. But a basic function like this usually requires you to go closer to the metal.

Back to pointers

So I chose to invoke the most grand and powerful of the ancients: the char pointer.

I assume there are multiple ways to optimize this function, and that there’s probably a better function already in the standard library. Still, in Delphi we can implement something that’s relatively easy compared to how you would do it in higher-level languages.

By explicitly treating the string as a pointer to its characters, for example, you can force the UniqueStringA call to happen earlier in the function and not within the loop.

function StripChar(const cIn: AnsiString; cCheck: AnsiChar): AnsiString;
var
  newlen, x, c: integer;
  pStr: PAnsiChar;
begin
  newlen := 0;
  c := Length(cIn);
  Result := cIn;
  pStr := @Result[1];
  for x := 1 to c do
    if cIn[x] <> cCheck then
    begin
      pStr[newlen] := cIn[x];
      inc(newlen);
    end;
  SetLength(Result, newlen);
end;


Oh right

This is how you can actually see the disassembled code within Delphi.

I don’t get the colour of the links in this theme…

UnitTesting: Just do it.

Books, tweets, blogs, websites and frameworks: testing and unit testing is a subject of discussion ranging from wild statements that any developer not writing unit tests should be fired, to vague what-if hypotheses about where unit testing does and doesn’t help, or claims that it gives you a false sense of security. You can read all the opinions, experiences and best practices around the internet – but have you yourself actually tried to “Just do it”?

The phrase really applies to any subject: if you want to get anywhere, just do it – fail, succeed, it’s all part of the process. No book or detailed wiki page is going to prepare you for unit testing your own projects, because every project is unique. Not necessarily in concept, but it will be unique to your way of development and the setup of the project.

If you don’t have the time, find the time, but don’t over-think it. Don’t spend too much time on testing code that’s hard to test; you can’t cover everything, but writing any unit tests is better than writing none.

Tips to get you starting:

  1. When you start a new function or class for new functionality, try writing a test for it. You don’t have to cover every situation, but try to capture what it’s supposed to do.
  2. When you encounter a bug, don’t fix it just yet! Try writing a test that catches the same bug, and then fix it. Every time you tweak something, you can run the unit test again and make sure you didn’t reintroduce the same bug.

In the spirit of “Just do it”, I’ll just try to give an example of a really simple test when writing new functionality. I’ll be using TestGrip for Delphi, but whatever framework you’ll be using, actually writing the tests is better than searching the web for it.

function StrToBase16(const sIn: ansistring): ansistring;

Yes, that’s a fancy name for hexadecimal, let’s code it anyway.
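The original is Delphi, but a JavaScript sketch of such a function could look like this (illustrative only) – note the lookup-table string, exactly the kind of place a typo hides:

```javascript
// Every character becomes its two-digit uppercase hexadecimal
// character code, via a lookup table.
const hexDigits = "0123456789ABCDEF";

function strToBase16(s) {
  let out = "";
  for (let i = 0; i < s.length; i++) {
    const code = s.charCodeAt(i);
    out += hexDigits[(code & 0xf0) >> 4] + hexDigits[code & 0x0f];
  }
  return out;
}

console.log(strToBase16("Hello")); // "48656C6C6F"
```

One passing example exercises only a handful of the table’s characters, which is why a single test case can hide a bug.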

The test is passing, so let’s ship it, right? I have nothing against that (let’s forget the fact that I knowingly put in a bug). Eventually, either during testing of the full project or perhaps at a customer, the bug will surface. That’s okay in most situations – but this kind of mistake isn’t really, because this is an example of a simple straight in-out function that you can feed a dozen examples and eventually find the occurrence of the bug. But let’s pretend you missed the scenario because the current production version of this software only runs “Hello” through this function, and when you let the user influence the input value in the next version, all of a sudden the software sometimes breaks.

You: “What did you do before this happened?”

Customer: “Nothing!”

You: “Ok, let’s go through this step by step. So you start the software, and I think you get a prompt to enter your name, what do you type in?”

Customer: “My name”

You: “…”

In any case, here we go: we enter whatever made our function break and check the test results.

Typos in strings – there should be a compiler error for those, but unit testing is the next best thing…

Stop browsing the Internet, write tests!

Shameless plug for Delphi-users: Testgrip

Over the last couple of months I’ve been working on an exciting project with the company that I work for.

The backstory of this project is a pretty wild one from the standpoint of a company that has neither the size nor the resources for prestigious R&D projects like IBM, Google or Microsoft. But from the moment the idea was passed around the meeting table, we all knew that if it already existed, we would greatly benefit from it in our day-to-day development.

Testing somehow always ends up as a vague notion anywhere you look. It’s mainly because no shoe fits all, but it also seems to be one of those “we should do something with that” things. Depending on how your company or project requires you to develop, you may either spend the majority of your time writing unit tests, or spend all your time on implementation and eventually wish you had written at least some.

As every developer will eventually find out, unit testing, if done right, can help you improve the quality of your product in a way that – even though your product will never be perfect – lets you constantly improve and add functionality without having to spend yet another 40 hours of functional testing to make sure “everything else still works”. It’s one of those in-the-long-run kind of things, so we could probably roll a dice to guess whether any given company is actually ready to spend so much time upfront without much visible gain.

So what is Testgrip and what does it do? Well, you can look on the website, or here’s the gist: it’s a concept for taking unit testing to a level where you can tightly integrate testing into your development process, improving quality while minimizing the time spent writing the tests. At the moment we’ve implemented the idea as a set of tools for Delphi, but the concept itself is language-independent.

I hope to write a bit more on the subject soon, but for now, happy coding!

Unposted drafts – Algorithm optimization: indexation

Many a time in programming you’ll use the magic words For or While, or your language’s equivalents, but sometimes a little too much. Any loop you use will slow down your algorithm by n times t: the moment you add a millisecond of computing to 1 iteration, you’ll automatically end up with n times that millisecond of extra time spent.

When looking up or finding an element in an array of n elements, a lot of the time we won’t bother to optimize, or perhaps we can’t imagine an optimization to a simple variable comparison. Sure, we can cheat our way out with a break or exit from the loop when we find our needle, but that’s as far as it goes.

The moment we put our find method inside another loop, the time spent in our lookup is going to matter – if we value our users’ time.

Indexation is a common concept of lookup optimization, but why, and how do we do it?

The basic idea of indexation – in programming, that is – is that given a space of addressable memory and a lot of possible objects to find, we can find out if an object exists by already knowing where it’s supposed to be.

It seems like the world upside down: instead of trying to find the object we’re looking for, we’re supposed to already know where it is, and merely find out if we’re right.

We can’t, of course, stick our hand in the mud randomly and hope for the best. We first need a way to make sure the objects we’re going to look up later are either where they’re supposed to be, or nowhere.

If we look at our indexation masterminds – databases – we find a complicated array of indexation techniques that work pretty well. Our fancy scripting languages may have a little bit of that in their non-default data structures and functions. But I want to focus on the simplest form of indexation. Well, the second simplest.

About arrays

In the world of programming, the standard array is going to be your template for your algorithm. An array arranges memory at the position of variable x, starting from index 0 and counting to n, and you assign or request values in that array by supplying p, as in x[p]. Internally this works as a function resulting in another address: f(x, p) = x + p * c. In this formula, c is the size of your datatype; if your type is a byte then c = 1, if your type is a 32-bit integer it’s 4, etc.

There’s no magic lookup in regular arrays; it’s just calculating the start of the requested fragment within a larger fragment of memory, and this works pretty fast.

Simplified to its core, going back to the indexation aspect: given the array x = {0,1,2,3,4,5}, we know that the number 4 will reside in x[4]. But of course, why look up something you already know?

Textbook example: IntToHex

Integers to hexadecimal text is a textbook example of indexation (unless the writer is a complete nitwit). Given the base ‘hex’, 16 possible values, represented by the array x = {0,1,2,3,4,5,6,7,8,9,A,B,C,D,E,F}, we can “find” our character by fetching the n-th value in array x, where n is the number we want to translate.

Since in reality there’s a slim chance we only want to translate numbers under 16, we’ll have to find a way to reduce our translation to 1 or more simple array indexations.

With the hexadecimal system being close friends with the binary system, we know we can physically divide our byte into 2 x 4-bit codes, and 4 bits can form 16 combinations.

Easy enough, we can thus make a 2-character string of our given byte value p in 1 long line:

str = hex[(p & 0xf0) >> 4] + hex[(p & 0x0f)];
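The same line as a runnable JavaScript sketch, with the lookup table spelled out:

```javascript
const hex = "0123456789ABCDEF";

// Translate one byte (0..255) by indexing the table twice:
// high nibble first, low nibble second.
function byteToHex(p) {
  return hex[(p & 0xf0) >> 4] + hex[p & 0x0f];
}

console.log(byteToHex(255)); // "FF"
console.log(byteToHex(10));  // "0A"
```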


Real-life applications might not be as simple or magical as hexing, but the principle is the same:

  • determine a way to calculate a predetermined index (unique, if the structure isn’t a tree)
  • put the items in the array at your calculated indices on startup and on modifications
  • locate an item by calculating its possible position and seeing if it’s there
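Those three steps in a minimal JavaScript sketch (a fixed-size slot array with a modulo calculation; collision handling deliberately ignored, and all names made up):

```javascript
const SIZE = 16;
const slots = new Array(SIZE).fill(null);

// 1. a way to calculate a predetermined index from the item itself
function indexOf(item) { return item.id % SIZE; }

// 2. put the items where they are supposed to be
function store(item) { slots[indexOf(item)] = item; }

// 3. locate an item by calculating its position and checking what is there
function find(id) {
  const item = slots[id % SIZE];
  return item !== null && item.id === id ? item : null;
}

store({ id: 3, name: "three" });
store({ id: 20, name: "twenty" });

console.log(find(20).name); // "twenty" - no scanning, just one calculation
console.log(find(5));       // null - it is not where it should be, so it does not exist
```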


Unposted drafts feature; here I stopped writing for some reason, and I’m not going to finish it…


color profile madness

I’ve had a Cintiq 21UX since forever, it seems, and I always had issues with the colours being “off” compared to the normal iMac screen. Obviously the reason lay in the ICC profiles… I thought. They were set the right way, however, and embedding the right profile in the picture didn’t help either.

After a lot of fiddling with the profiles on both screens, I decided to do a factory settings restore on the Cintiq. Yet the problem remained – until I found out that the Default color setting was wrong.

The colour temperature was at 6500K, but naturally that doesn’t match the ICC profile at all. The “Direct” setting, however, did the trick. I have to fix all the colouring in my pictures now, but at least I know they’re going to look the same almost everywhere.

IP Lookup tool

This tool retrieves the IP addresses for a given hostname. That isn’t very new, but I noticed that if you need to refresh an IP without wanting – or being able – to call ipconfig /flushdns, you can still refresh the IPs for the given hostname.

It also has an option to return the IP address that is used to indicate that a hostname is either invalid or could not be found.


(free, freeware, no spyware, use for whatever you want, unlicensed, win32 console exe optimized for i586)