
New Messengers in town…

Threema

I just received an e-mail announcing that Yahoo Japan is terminating its messenger system. Whereas network protocols seem to last a long time, chat programs and their attached protocols tend to die out fast. Except for IRC, which is not much used outside of technical circles, all the protocols from 10 years ago have died out. Who remembers ICQ? Even the king of the hill, MSN, was eventually shut down.

Maybe because of the acquisition of Whatsapp by Facebook, I have seen people around me migrate to newer chat systems, the most popular ones being Telegram and Threema. What is interesting is that both have a strong emphasis on security, although they take pretty different approaches to it.

Telegram uses an open-source protocol and supports end-to-end encryption, although this feature is optional. I could not find the public key exchange system in either the iOS app or the Mac OS X program. The programs are free, and so is the service.

Threema takes the reverse approach: all messages are encrypted end-to-end all the time, the first thing you do when setting up your account is generate your secret key, and the program uses QR codes to transmit and check public key hashes. The code is proprietary and the servers are hosted in Switzerland. The programs (iOS or Android) are paid.

Which one you prefer probably boils down to your beliefs. Threema is in my opinion more polished, the key exchange protocol is really smooth, and as a result I have not sent a single unencrypted byte with it. I have yet to start a secret chat in Telegram. Being closed source is certainly a drawback, but so is having a system that can fall back to client-server encryption only, in particular when your monetisation plan is a bit nebulous.

While the key validation system in Threema requires physical contact, if you have already set up a secure communication channel, say with PGP e-mails, you can send a signed screen capture of the QR code from your mobile device to your contact, who can then check the signature and scan the code with his own phone. I did this with a friend in Japan, and it worked smoothly.



New Theme for 2014

Title of this blog as seen using the Sixteen theme

I had been using the same Japan-style theme for this blog for ages; it started to feel old, and there was only so much I could hack around before either rewriting it from scratch or picking a new one. I chose the latter, and so I switched to Sixteen, which is more responsive and handles screen resizes better. Doing this has also been a good exercise in moving many of the customisations I had made out of the theme and into Jetpack. This means I should be able to switch themes more easily if I choose to do so. There are still a few kinks to iron out, mostly in the presentation of tables, but we will get there. Thank you for your patience.



Primary Keys (4)

Box containing a Tower of Hanoi puzzle with booklet

I have already mentioned several ways in which a given product can end up with multiple GTIN codes: a book with an ISBN and a UPC, a book with two national ISBN codes, or products with two product codes on the same box. There is yet another reason for having more than one GTIN: relabelling.

One of the goals of the whole GTIN system is to avoid the need to add labels to a boxed product: the code is on the box and serves as a primary key to be looked up. Of course, this only works if there is no price on the box, or if the price does not change.

I have at home a Tower of Hanoi game produced by Éditions Trédaniel which has an ISBN printed on the box, 9782849331231, along with a price, 22€. The code and the price were covered up with a label bearing a different price, 16.90€, but also a different ISBN: 9782849332566. While it makes sense to cover the price, why change the code?

I’m not sure, but this product is in some ways a book: there is a booklet inside and it bears a book number (ISBN), which means that it might fall under the Lang law. That law lets the publisher fix the price of a book, which is printed on it; the seller must respect this price and is only allowed to give a 5% discount. This also means that if a publisher wants to lower the price, he needs to change not only the label, but probably also the code. As both ISBNs here fall into the same range, this is probably what happened.
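Since both codes are ISBN-13 numbers, i.e. GTIN codes, one can check that they are well formed and that they indeed sit in the same publisher range. A minimal sketch in Python, using the standard GS1 check-digit rule (alternating weights of 1 and 3); the helper names are mine:

```python
def ean13_check_digit(first12: str) -> int:
    """GS1 check digit over the first 12 digits, weighted 1, 3, 1, 3, ..."""
    total = sum(int(d) * (3 if i % 2 else 1) for i, d in enumerate(first12))
    return (10 - total % 10) % 10

def is_valid_ean13(code: str) -> bool:
    return len(code) == 13 and code.isdigit() and \
           ean13_check_digit(code[:12]) == int(code[12])

old_isbn, new_isbn = "9782849331231", "9782849332566"
assert is_valid_ean13(old_isbn) and is_valid_ean13(new_isbn)

# Both codes share the 978-2-84933 publisher range; only the
# title number and the check digit differ:
print(old_isbn[:9] == new_isbn[:9])  # True
```

The shared nine-digit prefix is what suggests the publisher itself issued the replacement code.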



Primary Keys (3)

Label of the Nexus One Desktop Dock with 2 GTIN codes

So we have found instances of books with an ISBN and a UPC, and books with two ISBNs, but can we find non-book products with two GTIN codes? Yes, we can. Many electronic goods are now sold globally, yet often carry multiple national codes, all of which are part of the GTIN system. Case in point: the HTC-manufactured Google Nexus One desktop dock has both an EAN (4710937336078) registered in Taiwan and a UPC (0821793004965). I don’t fully understand why this is needed; clearly you need a UPC to be able to sell the device in the US, where there might be old systems that cannot handle 13-digit codes. But then why bother with an EAN code? In theory, systems that can handle EAN codes can implicitly handle UPC codes.
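That last claim can be illustrated: a UPC-A code is simply a GTIN with an implicit leading zero, so it validates with the same check-digit rule as any 13-digit EAN. A small sketch in Python (the function name is mine), using the two codes from the dock’s label:

```python
def is_valid_gtin13(code: str) -> bool:
    """Validate a 13-digit code with the GS1 rule: weights 1, 3, 1, 3, ..."""
    if len(code) != 13 or not code.isdigit():
        return False
    total = sum(int(d) * (3 if i % 2 else 1) for i, d in enumerate(code[:12]))
    return (10 - total % 10) % 10 == int(code[12])

ean = "4710937336078"   # the Taiwanese EAN on the Nexus One dock
upc = "821793004965"    # the same label's UPC, natively 12 digits

assert is_valid_gtin13(ean)
assert is_valid_gtin13("0" + upc)  # zero-padded, the UPC is a valid EAN-13
```

So any system that can parse 13-digit codes gets UPC support for free; only the reverse direction is a problem.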

Vodafone 802SE label with two JAN codes

Still, are these weirdnesses only caused by the old UPC legacy issue? Not always: there are cases where a box carries two codes in the same national prefix space. The example here is the Vodafone 802SE that I owned while I lived in Japan; the box harbours two JAN codes, 4908993111252 and 4908993111252, both registered to ソフトバンクモバイル (Softbank Mobile). Maybe one was for the actual phone, and the other for the phone subscription.



Of energy estimations…

Foldable array of solar cell, portable auxiliary battery with solar cell and iPhone 5 charging

I have a vision for solving world hunger: each person just needs to have a small pot in their apartment and grow salad in it. Then they will have food. Problem fixed, hand over the Nobel Prize please. Thank you very much!

Of course, this is a silly idea: the numbers just don’t add up. It takes many days to grow a single salad, and you need more than a salad every other week to feed a human, not to mention this winter thing. People realise this, because they somehow understand what quantity of food they eat and how fast salads grow. This does not mean that having every person grow her own salad is a bad idea, just that it won’t solve world hunger.

When the topic turns to energy, all that common sense flies out of the window: people don’t understand the quantities, so everything seems possible. A typical example is this solar-powered window socket. Looks neat, no? Just stick the thing to a window; it uses solar energy to recharge itself and acts as a power plug.

This device would contain a 1000 mAh battery that charges in 10 hours. First problem: the voltage is not specified, but if the battery charges in 10 hours, we need a 100 mA solar panel. With current technology, a panel a tad larger than the one in that device would output 100 mA at 5 V, so let’s take that as a baseline.

So we have 1 Ah at 5 V, which gives us 5 Wh (18 kJ). What can you do with this amount of energy?

  • Run a 1000 W hair-drier for 18 seconds.
  • Run a 100 W LCD television set for 3 minutes.
  • Run a 40 W incandescent light-bulb for 7½ minutes.
  • Run a 10 W LED light-bulb for half an hour.
  • Charge an iPhone 4 (5.254 Wh) to 95%.
  • Charge an iPhone 5s (5.966 Wh) to 84%.
  • Charge a Nexus 7 (16 Wh) to 31%.
  • Boil away 7 ml of water.

All this assumes no conversion loss, which would be hard to achieve, as the battery would be 5 V DC while the plug delivers 230 V AC.
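The figures in the list above are easy to verify with a back-of-the-envelope script, assuming the 5 V baseline from before and ignoring all conversion losses:

```python
BATTERY_WH = 5.0                    # 1000 mAh at the assumed 5 V
JOULES = BATTERY_WH * 3600          # 18 000 J, i.e. 18 kJ

def runtime_s(load_watts: float) -> float:
    """Seconds the stored energy can power a given load."""
    return JOULES / load_watts

def charge_pct(capacity_wh: float) -> float:
    """Percentage of a device battery this energy can fill."""
    return 100 * BATTERY_WH / capacity_wh

assert runtime_s(1000) == 18            # hair-drier: 18 seconds
assert runtime_s(100) == 180            # television: 3 minutes
assert runtime_s(10) == 1800            # LED bulb: half an hour
assert round(charge_pct(5.254)) == 95   # iPhone 4
assert round(charge_pct(5.966)) == 84   # iPhone 5s
assert round(charge_pct(16)) == 31      # Nexus 7

# Boiling water away takes about 2.6 kJ per gram
# (heating 20 °C -> 100 °C, plus the latent heat of vaporisation):
print(round(JOULES / (4.186 * 80 + 2257)))  # 7 grams, i.e. 7 ml
```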

Now, I do have a solar charger, but it is much larger, and from experience it produces just enough power to charge an auxiliary battery with a USB plug, which in turn can charge my phone (see picture). All of this runs over USB cabling at 5 V. This is convenient when travelling, but again, it is not a solution for energy problems, and it certainly does not look as stylish as the clean vision of the small plug stuck onto a window.

If you want solar power, you need a large surface, for the same reason that if you want to feed people, you need a field… You also might want to put it outside of the house’s windows, which tend to filter out part of the light, notably ultra-violet.


Off computer apps

📱

Facebook bought Whatsapp for a few billion dollars, less than a week after Rakuten bought Viber, another phone messaging app. What I find interesting, besides the timing, is that Whatsapp is entirely centred around mobile devices: there is no version for desktop or laptop computers.

In both cases, the goal is probably to reach users who typically don’t use social networking or online shopping, but who have a smartphone and a data plan. This is where the growth potential is. Many computer geeks tend to have a blind spot towards mobile phones: there were 6.8 billion of them worldwide in 2013, five times more than Facebook users.



The End of Moore’s law…

Macintosh SE/30

Moore’s law has been a central pillar of computing for all my life. It is not a law, more an observation: every 18 months, the number of transistors in a chip doubles. This has created a universe of plenty where, roughly every two years, everything could double: performance, memory. My first computer had one 4 MHz 8-bit processor with 64 KB of RAM; my current laptop has two 2.8 GHz 64-bit cores with 8 GB of RAM. Basically a million times more capacity.
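A quick sanity check of that million-fold figure: a factor of a million is about 20 doublings, and at one doubling every 18 months that takes 30 years, roughly the span between those two machines.

```python
import math

doublings = math.log2(1_000_000)  # doublings needed for a million-fold gain
years = doublings * 1.5           # one doubling every 18 months

print(round(doublings), round(years))  # 20 doublings, about 30 years
```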

The end of Moore’s law has been prophesied for ages; I remember one of my university professors saying that circuits could not have features smaller than 120 nm because of the wavelength of the light used to etch them, and nowadays they are at 30 nm. Still, engineers are increasingly hitting walls: processor frequency has stopped increasing at a few GHz, and instead the number of cores has started growing. But programming multi-core systems is difficult and Amdahl’s law still holds, so the number of cores has stayed low.
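Amdahl’s law bounds the speedup by the serial fraction of the program, which is why piling on cores pays off so poorly. A small illustration (the 90% parallel fraction is just an example figure, not a measurement):

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Best-case speedup when only `parallel_fraction` of the
    work can be spread over `cores` (Amdahl's law)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Even with 90% of the code parallelisable, extra cores saturate fast:
for n in (2, 4, 8, 1000):
    print(n, round(amdahl_speedup(0.9, n), 1))
# 2 cores -> 1.8x, 4 -> 3.1x, 8 -> 4.7x, 1000 -> 9.9x: never above 10x
```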

Running code on the graphics card, with its massive array of computing units, improves certain types of computations by an order of magnitude, but there again, throwing more silicon at the problem yields diminishing returns. Quantum processors might give another boost for certain classes of problems, but they are not ready for consumer production, only help with specific problems, and programming these things is a completely different art. Things are just getting harder to improve…

Moore’s law has been quite detrimental to software engineering: why work for years to make the code more efficient when just waiting will give you a performance increase? As hardware improvements slow down, we will need to make our gains at the software level.

In a way this is exciting news: there have been big improvements in algorithms and compilation techniques in the last ten years; they have just been overshadowed by hardware improvements, and there are certainly plenty more improvements possible. Also, deployed code is generally not highly optimised: fine-tuning code is a complicated process and it is generally not cost-effective to bother with it. If we were to optimise as aggressively on today’s devices as we used to on 8-bit machines, we could get a large performance improvement.

As the improvements predicted by Moore’s law slow down, the value of software will increase, at a time when other problems like security are getting harder to ignore. This basically means that cheap software will become increasingly expensive.

The first effect of this situation is that the languages that dominate the next ten years will be ones that can be compiled to efficient native code, on platforms that can somehow deploy native code. This probably means we will have another decade dominated by the spawn of C.

The second effect will be that the classical computing stack will be increasingly challenged. Nowadays most devices run some heavily mutated variant of Unix as designed in the 80s. It works, but it is far from efficient, and most of the optimisations implicit in its design are outdated and irrelevant. Various parts of the canonical Unix system have already been challenged: the graphical system (X11), the process-launching infrastructure (init, crond, inetd, etc.). The security model has been augmented a lot, and I would not be surprised to see the networking stack change significantly with the shift to IPv6.

One thing that is bound to increase is the number of devices per person, in particular at home, where the TV is nowadays a respectable computer. Tapping into that pool of underused resources will be increasingly tempting; that was Sony’s vision with the Cell processor, which was probably 20 years too early.

Generally, you should expect innovation to be driven by increased interconnection more than by increased processing power: there is a large number of sensors around you, and connecting them represents a huge opportunity, both in terms of potential features and of possibilities for abuse.

Macintosh SE/30 image Creative Commons Attribution-Share Alike 2.5 Generic.



About self driving cars…

🚗

Self-driving cars are one of these impressive science-fiction ideas that are nowadays technically possible. Will we soon see such cars around? Wide deployment depends on two things: that such cars work, and that society accepts them, both socially and legally. In my opinion, there is a significant legal problem with self-driving cars: they will need to respect the law.

One of the theoretical aspects of the driving exam in Switzerland is braking distances. The rule says that the safety distance should be the distance covered by the car in two seconds. When driving on a highway at 120 km/h (33⅓ m/s), this represents 66⅔ meters. This distance is meant for perfect conditions; on wet roads, it should be increased.

Take any crowded highway and you will notice that the distance between cars is much smaller. Strictly speaking, all these cars are not respecting the law, but that law is not really enforced, so nobody cares. In case of an accident, the driver behind is generally responsible, and the average driver seems OK with this risk. The same goes for speed limits: people drive faster than the limit, and live (and die) with the risks and the consequences.

Now introduce self-driving cars into the picture. The first obvious question that arises is: who is responsible in case of an accident? When the car is driving by itself, it can hardly be the driver, so ultimately the manufacturer of the car will be responsible. Of course, there will be clauses that waive that responsibility in case of unavoidable accidents (say, a bridge collapses), but such waivers will imply that the car strictly respects the law.

So such cars could not drive fast on a crowded highway, because they cannot legally do so; the car would slow down until the distance between cars is safe. If the distance between cars is 10 meters, the maximum speed is 18 km/h. So while you would not need to drive your car, it would go slower than an electric bicycle. You might as well take the train.
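The 18 km/h figure follows directly from the two-second rule, which caps speed at half the gap per second. A small sketch:

```python
def max_speed_kmh(gap_m: float, rule_s: float = 2.0) -> float:
    """Highest speed at which a gap of `gap_m` metres still
    satisfies the two-second rule (speed = distance / time)."""
    return gap_m / rule_s * 3.6  # convert m/s to km/h

print(round(max_speed_kmh(10), 1))  # a 10 m gap caps you at 18.0 km/h
print(round(120 / 3.6 * 2, 1))      # gap needed at 120 km/h: 66.7 m
```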

The law can be changed, in particular for self-driving cars that have more advanced sensors and can coordinate with each other. But this will require some serious work, both on the legal front and on the technical front. So while self-driving cars are technically possible, I don’t expect them to be usable in real European traffic soon…



Data Entry

Macintosh SE/30

I’m always fascinated by early 20th-century movies that show office life in New York: hundreds of people in offices, typing, doing nothing but entering information into a structured format.

These jobs have steadily disappeared with various machines: telex, fax, photocopier, computers, bar-code systems, RFID tags. All these systems avoid the need for data entry, mostly because the data has already been entered, so it just needs to be read again.

Standardisation of business-to-business communication started a long time ago, and states and administrations are finally catching up: many administrations let you fill in forms electronically or give you programs to fill in the more complex ones, like tax declarations.

Data entry is not finished; many crowdsourcing projects are about just that: data entry of more complicated information, such as geographic data or food product meta-data. Companies and administrations also organise similar data entry initiatives, but these are typically heavily automated, or outsourced to countries where personnel costs are low. While we will probably never reach the situation where everything has been entered as data, we are inching closer.

Of course, removing all those data entry jobs made a lot of people redundant; think of all the people involved in a commercial letter as opposed to an e-mail or an automated business-to-business interaction. Yet those people never created anything new, they just copied data around. Creating and transforming data is what matters nowadays, but this requires much deeper skills…

Macintosh SE/30 image Creative Commons Attribution-Share Alike 2.5 Generic.
