# An analogy for Unicode support – how not to do complex numbers…

One thing that really annoys me in computer science is the huge gap between how human text is handled in programming languages and how it should actually be done. Most programming languages implement toy text-processing facilities that are good enough for handling simple English and writing example snippets, but for real text processing you need to reach for dedicated libraries. I had somehow gotten used to the idea that this would always be broken, but then Swift proved me wrong.

The best way to illustrate how broken things are is an analogy: imagine that your favourite language had a built-in type for representing complex numbers, but that type could only accurately represent numbers whose polar representation has an angle that is a multiple of $\frac{\pi}{256}$ because, well, you see, the type is encoded in the following way:

```c
union ComplexValue {
  double norm;
  signed char angle_parts[8];  // angle encoded in one byte + sign.
};
```
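To see what this encoding amounts to in practice, here is a sketch in Python, where `struct` makes the byte-fiddling explicit. The `encode` helper is mine, purely for illustration; the `<d` format string pins down a little-endian byte layout, so byte 0 is the least significant byte of the significand.

```python
import struct

def encode(norm: float, angle_byte: int) -> float:
    """Smuggle a signed angle byte (theta = angle_byte * pi / 256) into the
    least significant significand byte of `norm`."""
    packed = bytearray(struct.pack("<d", norm))
    packed[0] = angle_byte & 0xFF  # overwrite the low significand byte
    return struct.unpack("<d", bytes(packed))[0]

# norm = 1, theta = pi/4  (64 * pi / 256 == pi / 4)
v = encode(1.0, 64)
print(abs(v))  # ~1.0: the magnitude is (almost) untouched
```

The "compatibility" claim holds: `v` is a perfectly ordinary `float`, just one that quietly carries an angle in its last byte.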

Now this representation is neat: you store the angle inside the least significant bits of the significand, which saves space and lets you serialise a complex number as a plain double (compatibility!), and many operations are very fast because they can run on the FPU (performance!). For instance, to get the absolute value you just call `fabs(v.norm)`; to get θ, you just do the following:

```c
double arg(const union ComplexValue value) {
  // One byte of angle (multiples of pi/256); the sign of the double
  // supplies the missing half-turn.
  const double a = value.angle_parts[0] / 256.0 * kPi;
  if (value.norm > 0.0) {
    return a;
  }
  if (a < 0) {
    return a + kPi;
  }
  return a - kPi;
}
```
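The same decoding can be sketched in Python (the `arg` helper below is mine, mirroring the logic described above; `struct`'s `<d` format pins the little-endian layout, and I assume the π/256 scale from the type's description):

```python
import math
import struct

def arg(value: float) -> float:
    """Recover theta from the low significand byte, plus the sign of the double."""
    b = struct.pack("<d", value)[0]
    if b >= 128:  # reinterpret the byte as a signed char
        b -= 256
    a = b / 256 * math.pi
    if value > 0.0:
        return a
    return a + math.pi if a < 0 else a - math.pi

# A double whose low significand byte is 64 decodes to theta = pi/4.
bits = bytearray(struct.pack("<d", 1.0))
bits[0] = 64
print(arg(struct.unpack("<d", bytes(bits))[0]))  # ~0.785398 (pi/4)
```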

In the common case where θ = 0, you can use the standard mathematical operators (addition, multiplication) of `double` directly, perfect. This can be implemented with no problem, with unit tests for the most classical use cases, like norm = 1 and θ = π/2 or π/4, and it works perfectly.
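A quick sketch of why the θ = 0 case behaves: a value whose angle byte is zero is just an ordinary double, so the built-in operators do the right thing. The `angle_byte` helper is mine, for illustration; `<d` pins the little-endian layout.

```python
import struct

def angle_byte(value: float) -> int:
    """The low significand byte, i.e. the smuggled angle."""
    return struct.pack("<d", value)[0]

x, y = 2.0, 3.0              # both have theta = 0: low byte is zero
s = x + y
print(s, angle_byte(s))      # 5.0 0 -> still a "real" number, theta still 0
```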

Now some people might point out that the standard mathematical operators will give you garbage results, that precision will get worse as the argument of your number grows, not to mention that endianness might throw a wrench into things, but these are edge cases, and if your code cannot handle precision errors when using floating point, you are doing it wrong, right? Seriously, who cares about these details? Mathematicians? If you really need to, use a specialised library.
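To make the "garbage results" concrete, here is a sketch of squaring 1∠π/4 with the plain `double` multiplication (the two helpers are mine, for illustration; `<d` pins the little-endian layout). The true product is 1∠π/2, but the carry from adding the low bits pushes the angle byte into the sign bit of the signed char, so it decodes as −π/2:

```python
import struct

def with_angle_byte(norm: float, b: int) -> float:
    """Overwrite the low significand byte of `norm` with the angle byte `b`."""
    packed = bytearray(struct.pack("<d", norm))
    packed[0] = b & 0xFF
    return struct.unpack("<d", bytes(packed))[0]

def angle_byte(value: float) -> int:
    """The low significand byte, read back as a signed char."""
    b = struct.pack("<d", value)[0]
    return b - 256 if b >= 128 else b

v = with_angle_byte(1.0, 64)  # 1 at an angle of pi/4 (byte 64)
p = v * v                     # should be 1 at an angle of pi/2 (byte 128)
print(angle_byte(p))          # -128: the angle byte wrapped negative
```

Multiplying the underlying doubles happens to double the tiny perturbation, which looks almost right for unit-norm numbers, except that byte 128 overflows a signed char, flipping the decoded angle's sign.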

Basically, built-in Unicode support in languages like Python is just as broken, but in reverse: texts that are equal are considered different because their underlying representations differ. The floating-point equivalent would be `0.0` not being equal to `-0.0`.
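Concretely, in Python (using the standard-library `unicodedata` module), two strings that render identically compare unequal until you normalise them yourself:

```python
import unicodedata

precomposed = "\u00e9"   # 'é' as a single code point
decomposed = "e\u0301"   # 'e' followed by a combining acute accent

print(precomposed == decomposed)  # False, even though both display as "é"
print(unicodedata.normalize("NFC", decomposed) == precomposed)  # True
```

Swift's `String`, by contrast, compares by canonical equivalence, so the equivalent comparison there is true out of the box, with no explicit normalisation step.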
