Perspective of a faraway onlooker on Brexit

So, 51.9% to leave the EU: that's the result of the Brexit referendum. I had been following the debate as a faraway onlooker, and as of this morning I still thought that Remain would prevail once people calmed down before casting their votes.

Although the margin is not large, many people, like me, were caught by surprise. And even though the result of this referendum should have minimal effect on my life (I really hope so), I still wish the Remain camp had won.

From the perspective of this faraway onlooker (a Canadian living in China), this result is very unfortunate. Since WWII, generation after generation of people with a long-term vision for a stable and peaceful Europe put their weight behind forming the Union. It's certainly not perfect (yet): you can complain about the bureaucracy, or that the European Parliament is not elected by a direct democratic process, or that this EU thingy is a creation of the elite political class, or that free movement is a tool of corporate exploitation, or that there are too many immigrants and refugees, and so on. But it is better, by a long measure, than the situation in the first half of the 20th century. Building a common system while trying to satisfy everyone's wishes is a long and hard process, especially when it is done by consensus. Other places, in other times, achieved it only through bloodshed.

I am a bit amazed that, in this referendum, more of the older generation stood with the Leave camp. I would have thought they would be the ones who knew better. In retrospect, I was wrong on this account: the older generation I was thinking of probably consists of the baby boomers, a generation which has not known the atrocities of war either.

Maybe I should provide a bit of information on my background to explain my reasoning. I was born in Cambodia to Chinese parents, lived through the Khmer Rouge regime, during which we lost 80% of our family, spent eight years in a refugee camp in Vietnam, and was finally received by Canada when I was 18. We arrived in Canada penniless, as stateless refugees. My parents had moved from China to Cambodia as penniless migrants and spent many years building a prosperous life; years later, we ended up in Canada, worse off, as penniless refugees.

We didn't complain; we rolled up our sleeves and worked very hard, starting again from the very bottom.

I was very happy to see the Berlin Wall fall, and then, through the 1990s, European countries rapidly merging into a single bloc with interconnected interests. I could only dream of the same scenario for Asia, a scenario that would take many, many more years to even become a prospect, if it ever does.

Since then, I have visited many European countries, including France, Spain, Italy, Germany, Denmark, Sweden, and Finland, and I envy what I saw. Every time, I think to myself how I wish I could see the same convergence of political systems in Asia during my lifetime. And yet, with one referendum, fueled more by temporary discontent than by calm reasoning, they want to dismantle what took years and years to build up gradually, despite the fact that Great Britain already enjoys special privileges that no other EU member does, such as retaining its own currency and measurement system, the right to refuse entry, and so on.

As we could see right after the result was published, right-wing factions in different countries are calling for their own referendums: a Nexit, a Frexit, an Itexit, or whatnot. Scotland and Northern Ireland will certainly want to have their say too. We can only hope that this is just a blip, that the chain reaction will not be too bad, and that it will not rewind the EU too far back.

Ah well, who am I to comment on this?


Programming is still a stone-age craft

You might think that declaring programming a stone-age craft is a little bit of an exaggeration, no? After all, it's a high-tech job. OK, I might be exaggerating, but certainly not by much. We may call our work by respectable names such as software engineering, system design, or software architecture, but in the end it is still, at best, craftsmanship. This is by no means belittling craftsmanship; good craftsmen, as we programmers all are, take pride in our work. But however much pride we take, that does not make the work any less stone-age-ish.

Let's look at the tools we use every day in our trade: design tools, documentation tools, development tools, verification tools, testing tools, monitoring tools, and so on. How many of them do you actually use? And how many of them, can you claim, let you easily translate your requirements into a solution, then into a working program, while performing scientifically sound verification of the program's quality and correctness? We use a lot of tools, but there is a disconnect between our minds and the tools, and between the tools used at different stages. There is no easy way to translate our mental model into a solution, no reliable way to map the solution onto a working program, and certainly no easy way to prove the program correct.

Now let's look at our daily work: specifying requirements, translating requirements into a solution, mapping the solution model into code, verifying that the code is correct, chasing bugs, and so on. How many of these tasks, can you claim, are not drudgery? Admittedly, some of them do involve intelligence and creativity, but most don't. They are tedious, repetitive, and mind-numbing.

We can claim that this is exactly the beauty of programming: you can do anything on a computer; you have freedom and full control. But in reality, most programming work is menial. It hides behind fancy names such as web services, enterprise architecture, or microservices, but most of it is simply CRUD functions with very similar parameters. Obviously, programming tools have progressed, and reusable software modules have made system integration significantly easier than, say, 20 years ago when I first entered this trade; it would probably have taken ten programmers back then to do the work one person can do today, since we no longer have to create everything from scratch. The value is in putting different modules together to create a larger solution. However, the way we work hasn't changed much: we still need teams of programmers banging on keyboards to produce thousands and thousands of lines of similar code, and then even more lines for unit testing, integration testing, performance testing, and so on. No wonder we call ourselves code monkeys in self-mockery. It's like coolies digging a tunnel under a mountain with pickaxes, wondering when we are going to see the light at the end.
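To make the point concrete, here is a minimal sketch (my illustration, not code from any real project) of what many of those fancy-named services boil down to: the same four operations over a generic entity type.

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// A toy in-memory "service": create, read, update, delete.
// Swap the entity type and you have the next "microservice".
class CrudStore<T> {
    private final Map<Long, T> rows = new ConcurrentHashMap<>();
    private final AtomicLong ids = new AtomicLong();

    long create(T entity) {
        long id = ids.incrementAndGet();
        rows.put(id, entity);
        return id;
    }

    Optional<T> read(long id) {
        return Optional.ofNullable(rows.get(id));
    }

    boolean update(long id, T entity) {
        return rows.replace(id, entity) != null;  // true only if the row existed
    }

    boolean delete(long id) {
        return rows.remove(id) != null;
    }
}
```

Whether the rows live in a Map, a relational table, or a document store, the shape of the code barely changes, which is exactly the tedium described above.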

If you consider yourself lucky enough to work on high-performance, multi-core, parallel programs, the tools are even more rudimentary. Talk to someone with his feet deep in multithreaded and parallel code and you'll understand. Sure, lock-free concurrency, immutable data, pure functions, and the like do help in keeping your sanity, but it's still a crazy world.

And if you have the chance to work on some fancy algorithm, you'd probably claim this is creativity at work. Say you have designed an extremely cool algorithm to solve some pernicious problem, and you have mathematically proved it correct. Now try to turn it into a working computer program, and try to prove that your program is as correct as your mathematical algorithm. Yes, you can use a fancy programming language; you have type theory and dependent types; you run static analysis; you capture the programmer's intentions; you do code reviews; you use theorem provers; you create a DSL (domain-specific language) to abstract away the nitty-gritty details; you write tons of code to test corner cases. At the end of all that, can you be sure the program is correct? And how do you prove that the tools and frameworks you used to prove the program's correctness are themselves correct?
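A famous concrete case of this gap between proof and program is the integer-overflow bug that hid for years in textbook (and even Java's own library) binary search: the midpoint formula is correct over the mathematical integers, but not over 32-bit ints. A small sketch:

```java
// The midpoint computation from textbook binary search. The algorithm is
// provably correct over the integers, yet the direct translation is not:
// low + high can exceed Integer.MAX_VALUE and wrap to a negative number.
class MidpointDemo {
    static int brokenMid(int low, int high) {
        return (low + high) / 2;        // overflows for very large indices
    }

    static int safeMid(int low, int high) {
        return low + (high - low) / 2;  // the standard fix: never forms low + high
    }
}
```

The algorithm was "proved" correct, yet every implementation using `(low + high) / 2` was still wrong for arrays of more than a billion elements; the proof simply didn't model machine arithmetic.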

This post is not a complaint; it is just a personal reflection on the state of our trade and our daily chores. We need better tools.


On the Redis vs Hazelcast benchmark

I read the Redis vs Hazelcast benchmark with a lot of interest, as we have been using both caching frameworks in our projects for a few years now. However, we are still on older versions, namely 3.2 for Hazelcast and 2.8.x for Redis. We like both of them, a lot. Both have their strengths, and their issues (I wouldn't really call them weaknesses). Although we have used Hazelcast clusters heavily, we haven't tested the new Redis Cluster, so I cannot comment on its performance.

We rely heavily on Hazelcast's near cache feature, and we know it is a very nice performance enhancer, but seeing it outperform Redis by 500% is quite amazing.

After all these years with these two frameworks, we have learned that, as nice as they are, you really need to test them thoroughly against your own use cases. No generic benchmark can give you a definitive answer on which framework to use; a benchmark should only serve as an indicator.

We have used Hazelcast to cache a lot of things, from string-based key/value pairs to Java objects to images (yes, we use it to cache millions of thumbnail images). Here is what we like about Hazelcast:

  1. Native Java API. If you program in Java, nothing can beat its native Java API and data structures. It’s so simple and natural to use.
  2. Well thought-out data structures. Hazelcast has a rich set of well thought-out data structures, and they are as easy to use as the Java library that programmers are familiar with.
  3. Out of the box cluster. Any programmer can have the cluster up and running in five minutes. What more can you expect?
  4. Near cache. This is really a performance enhancer, and with the right percentage of data in near cache, it can significantly reduce network overhead.
  5. Predicates. Predicates make complicated searches easy, and complex searches possible.
  6. On-heap and off-heap memory. The open-source version only provides on-heap storage, but if you need to cache a huge amount of data, off-heap memory is the way to go, and for that you have to pay for the enterprise version. We ended up implementing our own off-heap memory management, since the API was pretty straightforward.

We manage millions of objects and images in Hazelcast, and it has been reliable, easy to use, and fast. However, as nice as it is, you really need to be aware of its internal implementation, and one of the performance hindrances is the serialization and deserialization of cached data. If you cache only simple data, it works great. But if you cache Java objects, especially complicated ones, serialization and deserialization can quickly kill Hazelcast's performance. One way around this is to break the object into smaller pieces, at the cost of management hassle and network overhead. And if you are using off-heap memory, the serialization/deserialization cost is basically unavoidable.
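To get a feel for the cost, here is a stdlib-only sketch using plain `java.io` serialization (Hazelcast has its own serialization mechanisms, but the shape of the problem is the same): every put/get of a fat object graph pays for the whole graph, even if you only care about one field.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

// A flat value vs. one dragging a large nested payload: every cache
// round-trip of the nested object pays for the entire graph.
class FlatValue implements Serializable {
    String name = "item";
}

class NestedValue implements Serializable {
    String name = "item";
    List<double[]> history = new ArrayList<>();  // dead weight on every round-trip

    NestedValue() {
        for (int i = 0; i < 1000; i++) history.add(new double[64]);
    }
}

class SerializationDemo {
    static byte[] toBytes(Serializable obj) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj);
        }
        return bos.toByteArray();
    }

    static Object fromBytes(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        }
    }
}
```

The nested value serializes to hundreds of kilobytes versus a few dozen bytes for the flat one, and that cost is paid again on every single update.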

Another thing you must watch out for is how frequently your cached data changes. For write-once, read-many data, Hazelcast is great; that's why we even use it to cache thumbnails in image-heavy projects, since once loaded, a thumbnail never changes. But in one case we had a list of frequently used objects whose status and information changed at a fast pace, and we quickly found that Hazelcast could not keep up with the changes, even under a moderate workload. We broke the objects into smaller pieces, but at some point the coordination and management of so many small pieces of data made it not worth the effort. In the end we moved these data to Redis, where Redis hashes solved the problem nicely, without a performance penalty.
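The reason Redis hashes worked for us is conceptually simple: a hash lets you rewrite one field of an object, while a serialized blob must be rewritten whole on every change. A toy model (plain Java maps standing in for the caches; not real Redis or Hazelcast API) that counts the characters each approach has to move per update:

```java
import java.util.HashMap;
import java.util.Map;

// Toy cost model, counting characters moved per update.
// Blob-style cache: any change re-serializes the whole object.
// Hash-style cache (conceptually what Redis HSET does): only the changed field moves.
class UpdateCostDemo {
    static int blobUpdateCost(Map<String, String> object, String field, String value) {
        object.put(field, value);
        int cost = 0;
        for (Map.Entry<String, String> e : object.entrySet()) {
            cost += e.getKey().length() + e.getValue().length();  // whole blob rewritten
        }
        return cost;
    }

    static int hashUpdateCost(Map<String, String> object, String field, String value) {
        object.put(field, value);
        return field.length() + value.length();  // only one field rewritten
    }
}
```

With a hundred fields, the hash-style update moves orders of magnitude less data, which is why fast-changing objects tolerated Redis hashes where a serialized blob could not keep up.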

A third pet peeve of mine with Hazelcast is adding a new node to a live cluster. You have millions of objects in the cache, you foresee new workload coming in, so you think it would be simple to add a few more nodes to spread the load and increase the bandwidth, right? Wrong: adding a new node brings the whole cluster to its knees. When a new node joins, every node in the cluster suddenly becomes busy calculating what needs to be synchronized, how much, and how. All CPU cores peak, all allocated memory fills up, network ports are nearly jammed, and requests to the cluster time out. This kind of issue is certainly not specific to Hazelcast, as we ran into similar problems with Ceph, Cassandra, and other clustered frameworks with automatic data partitioning. But it is still very annoying.
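To see why joining a node forces so much work, consider the simplest possible placement scheme, owner = key % nodeCount (a deliberately naive illustration; Hazelcast actually distributes a fixed set of 271 partitions, which moves far less data, but still a lot when you hold millions of objects):

```java
// Fraction of keys whose owner changes when a cluster grows, under the
// naive placement "owner = key % nodeCount". Growing from 3 to 4 nodes
// reassigns roughly three quarters of all keys.
class RebalanceDemo {
    static double movedFraction(int keys, int oldNodes, int newNodes) {
        int moved = 0;
        for (int k = 0; k < keys; k++) {
            if (k % oldNodes != k % newNodes) moved++;  // owner changed, data must travel
        }
        return (double) moved / keys;
    }
}
```

Fixed-partition and consistent-hashing schemes bring the moved fraction down to roughly 1/newNodes, but that still means shipping a quarter of a multi-gigabyte cache across the network while serving live traffic, which matches the symptoms above.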

Hazelcast certainly has its share of issues, but the ones described above are why you should always have a combination of caching solutions rather than rely on a single framework.

In the end, I just want to restate, if I haven't made it clear already, that overall, Hazelcast is very pleasant to use.


Five important ways to lead by example

There are countless books, articles, and essays on leadership, and as many on why leading by example is more important, and more effective, than all the engagement policies and disciplinary rules. Be it in an athletic team, an army corps, a modern-day corporation, or a nation, a leader must always lead by example. Yet few people seem able to put it into practice.

Leadership is not a set of rules or policies; it is a process by which an individual influences the thoughts, attitudes, and thus the behavior of others. A leader sets a direction, and other people follow. But why would people follow you rather than someone else? Obviously, beyond your abilities, you need certain character traits to inspire those around you, and the most important of all is to walk your talk.

Here are my five ways:

  1. Practice what you preach. This is probably the most important rule for any aspiring leader. You are in no position to ask your team to do anything you don't practice yourself. It applies to everything from the smallest to the most important things in life, and people usually neglect the small things, assuming they don't matter. A very simple example is daily attendance: if you ask your team to come in early and stay late, but you always come in late and leave early, you are not very convincing.
  2. Set a higher standard for yourself. If you are a slacker, or cut corners, don't expect quality work from your team. If you demand a high standard from your team, you must be willing to hold yourself to a higher one. You are supposed to be a leader, so people look up to you.
  3. Honor your commitments and promises. When you promise something, or commit to something, you had better be able to deliver it. Don't say anything until you have taken the time to think it over and make sure you can deliver. Once you have made a promise, deliver it; that's how you gain your team's trust. Sure, life does not always go as planned. Stuff happens. Situations change. Sometimes it becomes hard, or impossible, to honor your initial promise. In that case, be honest: let your team know, give a thorough explanation, and try your best to make it up to them. Do not cover it up. There is nothing shameful in acknowledging failure, and sincerity is a cohesive force for a team.
  4. Trust your team. If you've spent the time to hire a team, then trust them. Give them the room and the resources to do their work. People make mistakes, but if you don't allow them to make mistakes, you will never have a team that can deliver. No one likes making mistakes, and if you can't tolerate them, nothing will get done. Remember that trust is reciprocal: if you can't trust your team, they can't trust you either.
  5. No double standards. Hold everyone to the same standard. People are smart; they realize very quickly whom you like and whom you don't. If you give preferential status to people who drink the same brand of beer as you, play the same computer games, or can crack jokes with you, you will quickly create factions within the organization. People come in different colors and from all walks of life; that's the beauty of it. Someone who doesn't drink your brand of beer is no less of a teammate. Be fair.

There are probably many other ways, but these are the five most important that I always try to live up to. I don’t consider myself a good leader, but I’m trying.




The early Chinese, like every other people, also believed in ghosts and spirits. But by the time of the Spring and Autumn period and the Hundred Schools of Thought, the philosophers had all become disbelievers in ghosts and spirits (except the Mohists, who still preached their doctrine of "percipient ghosts", 明鬼). And if there are no ghosts or spirits, there is no such thing as a soul. This great tradition of Chinese thought, coherent from beginning to end, continued for more than two thousand years. In the most representative words of the great late-Ming thinker Wang Chuanshan: "the way of ghosts and spirits takes man as its master." Every view of ghosts and spirits in the history of Chinese thought is therefore really a view of human life, and from that view of life it reaches directly to a view of the universe. In the traditional Chinese conception, man comes from nature and returns to nature, and between human life and nature there is no other existence. Each person's birth and death is simply nature; the process in between belongs entirely to the human world. This is the humanistic spirit of the cultural tradition.

Why say that the view of ghosts and spirits is really a view of life? The Chinese do not seek the immortality of the soul; they seek the immortality of virtue: establishing virtue, establishing merit, and establishing words (立德、立功、立言). The bodies of those who establish virtue, merit, and words die, but what they established remains in the human world, forever shining in the hearts and minds of later generations; that is what is meant by immortality. One who achieves immortality is called a spirit (神). A person becomes a spirit entirely through a kind of bright virtue, a kind of spiritual intelligence, shown in his lifetime. Hence such a one is called both 神灵 and 神明.

After Buddhism entered China, the idea of reincarnation became popular in society, and the notion of the soul seeped in along with it. Among the intellectuals who guided Chinese society, however, this remained mere folk belief. A human life lasts barely a hundred years, while a soul can supposedly transmigrate without limit. Folk superstition holds that two people who were enemies in a previous life become husband and wife, or father and son, in this one, precisely so that one may settle old scores with the other. Or, as in the Buddhist doctrine of samsara, someone who was in a previous life a beast, a thief, a bully, or a hooligan becomes a member of one's family in this life. If one truly believed in souls and reincarnation, and if everyone knew his own previous and present lives, how could people possibly live with one another?

And if there really were past, present, and future lives, plus a realm of souls, then the human world would be nothing but a stage play, with the realm of souls as its backstage. The actor comes out from backstage in costume, and after the performance returns backstage to change out of it. What is acted on stage is never the true self; if the whole of life is like that, how could any of it be taken seriously? Emperors and generals, sages and heroes would all be temporary roles; where would the true self be? The partings and reunions, the laughter and tears move the audience below, but would the actor on stage not know it is all make-believe?

In the traditional Chinese ideal of life and of self-cultivation, there is only the actual world, and the value and meaning of life are realized entirely within this one life. Even if each person had a soul before birth and keeps it after death, what matters is to absorb that soul into this life and use it to fulfill one's human duty. As for the soul before birth and after death, one may as well set it aside and forget it. It is like going on stage: you should wholeheartedly act out a good play with the other players, not keep thinking about the backstage. That is the great art of life, and the great morality of life.


Why sharing inside information with your team is important

Whether as a team leader or as the CTO of the company, I have always liked to share "inside" information with my team: the market situation, our funding status, challenges we've met, opportunities we are after, new projects we are planning, new government regulations that might affect their financial well-being, a new framework worth studying, a good book I've just read, and so on.

Obviously, for some information I have to caution them not to leak it outside, but I always state that I trust their professional integrity.

By sharing inside information, what I am telling them is:

  1. I take them as my equal partners, and I consider them to be part of our inner circle of the company.
  2. I trust their personal and professional integrity; some information needs to be kept under wraps, and I trust them to keep it there.
  3. I have set up an open communication channel with them, and I’m willing to share inside information with them.
  4. I believe they are intelligent human beings, and that they are capable of understanding all issues involved.
  5. Regardless of the challenges we’ve met, I believe that they can make very constructive contributions to the company, that’s why I want them to be part of the inner circle.
  6. I show my respect to them, as professionals in equal standing.

People like to be engaged. By sharing inside information with them, you are telling them that they are in the know and that you trust them fully. Actions mean a lot more than fancy words: this raises my team's engagement more effectively than any magnificently written and eloquently delivered speech.


When you are overdoing continuous integration

I was having coffee with a couple of friends last Saturday, and one of them said that since his team started doing continuous integration (CI), he has never been busier with build engineering and tool-making. On top of his project work, he now spends a lot of time writing Jenkins plugins, integrating, debugging, and configuring. After five months of doing CI, he came to the conclusion that something was wrong.

The question was: what’s wrong?

Like any methodology or paradigm people become newly acquainted with, there is a tendency to treat it as a panacea. This is a case of people treating CI as the cure for all evils, and starting to overdo it.

First of all, let's agree on one thing: continuous integration may be the buzzword du jour, but ultimately it is just a process to make your project run more smoothly. Regardless of the tools you use, be it Jenkins, Continuum, BuildBot, Strider, or whatnot, they are just tools: the means to an end. What matters is the project itself, and that's the ball to keep your eyes on.

Now, let's see how you know you are overdoing it.

  1. You spend more time working on the tools than on the project itself (assuming you are not in the tool-making business). In that case, you have to rethink your process: either the tool is too immature, too complicated, or simply a misfit for your project.
  2. Say you are using Jenkins: you have installed hundreds of plugins, and yet you still need to write more custom ones. When something is that complicated and bloated, it's a sign you need to step back and rethink.
  3. You need a dedicated team to build and maintain the tools (again, assuming you are not in the tool-making business). Traditionally, it is quite normal for a large software project to have a software configuration management (SCM) team, but the point of CI in an agile DevOps environment is certainly not to maintain a large SCM team.
  4. Each job is too big and is not broken down into smaller ones. Whether you like it or not, big jobs make Jenkins (or any other tool) complicated. In that case, it is probably better to first refactor the code base into more manageable pieces, or refactor the build workflow, instead of overworking the build tools.

Ultimately, continuous integration is a practice. It is about what you do, not about the tools you use. You don't need all these fancy frameworks to do CI; you might have just a few scripts and a couple of cron jobs and still be practicing continuous integration. CI is about splitting changes into small increments and having the discipline to integrate frequently without breaking the build.

CI is about behavior and mentality. Do not fall into the trap of thinking your team is practicing CI just because you have all the tools set up and running. And if you constantly have to come back to work on the tools instead of your project, you are overdoing it.


My Key58 DIY Keyboard

There are three things in life for which I have always tried to find the best: a mattress, a pair of shoes, and a keyboard. I'm not saying I always buy the most expensive; I try to find the best fit for myself. A mattress, because you spend a third of your day on it, and you want to sleep on something that does not give you a backache in the morning. A pair of shoes, because you probably spend half of your day walking in them, and you want something comfortable that will not make your gait faulty and harm your health. And for a programmer, the keyboard is one of the major tools, if not the tool, for getting our work done. A bad keyboard is a source of repetitive strain on your hands and can make your life miserable.

Like all programmers, I have owned many keyboards, some of them cheap, a lot of them quite expensive. Besides the many, many keyboards that came with computers and laptops, I have owned two Microsoft "natural" ergonomic keyboards, an HHKB Pro 2, two mechanical tenkeyless keyboards, an old keyboard that came with an IBM 3151 terminal (very nice to type on), a Goldtouch 02 split keyboard, an Ergodox, and many more. But there was always something missing, something that would make me fully happy. So I decided to create my own, and here is my DIY Key58 keyboard.

The Ergodox is quite good, but the thumb cluster is a bit difficult to master. I also love the Key64, but it relies heavily on the pinkies for the modifier keys. I love the Keyboardio too, but it won't ship until at least next April, and not only is it expensive (although I still want to order one!), its keymap suggests the designer did not have programmers in mind. So I borrowed the ideas I liked from these keyboards to create my own.

The main goals of the new keyboard were:

  • It should be optimized for programming, and the most frequently used programming symbols should be easy to access. For this, I borrowed from Key64.
  • It should be optimized for Emacs and Linux, as this is the main environment I work in.  Therefore, the modifier keys should be very accessible, and the Emacs key combination should be easy to type. For this, I borrowed from Ergodox and Keyboardio.
  • Navigation keys should be accessible without moving away from the home row. As much as I can customize my Emacs, many applications do not provide any means of customization, so the normal navigation keys are still a must. For this, I borrowed from the Ergodox and the Key64.
  • It should minimize sideways stretching of the fingers, be it the index or the pinky, as this is a source of repetitive strain injury. It is impossible to eliminate such stretching completely, but we should minimize it as much as we can.
  • Your arms and shoulders should be in a relaxed position when you are typing. All fingers should be relaxed, with roughly the same bending angle.

With these goals in mind, I set out to design my own keyboard. I tried many key arrangements, with different angles, until I found the one that I think works best for me.


The final layout and keymap look like this:


Note that this arrangement might not be optimal for others, but it is pretty much optimal for me, given the size and length of my fingers. Also, the diagram above does not show the angle of the key layout; see the laser-cut plate below.

The Launch key (at the bottom right) is a key for launching applications. I always have a key bound to bring up gmrun to start applications, and that is the purpose of this Launch key. In the upper right corner there is a Lambda key. It is unused for now, but is intended to become a programmable shortcut key; I'll need to figure out how to do that in the firmware. The other keys are just the normal stuff.

With that, I used OpenSCAD to design a key holder plate and a bottom plate, and had them laser-cut from 2mm 304 steel, as this is the most common material on the market. My original plan was to 3D-print them, but since I don't have a 3D printer, I got quotes from five or six 3D printing service providers, and the prices were ridiculous. Hence the laser-cut steel plates.


The key holder plate and the bottom plate together are quite heavy. I'm sure the keyboard could double as a handy weapon: anyone hit on the head with it would suffer a serious concussion, if not be killed on the spot :)

Once the plates came in, it’s time to hand-wire it. Here are some pictures of the wiring and soldering work:


Yeah, I know, the wires and solder joints are quite ugly. With my weakened eyesight (I can barely see the pins on a mechanical switch without a magnifying glass) and hands not as steady as they used to be, beautiful solder work is too much to ask.

Those with sharp eyes will have noticed that the key holder plate is missing a screw hole. That's right, the guy at the steel shop managed to forget one hole. I don't know how, as this was supposed to be handled entirely by computer and steel-cutting machinery, but he really did. Since this was a job ordered online, I didn't feel bothered enough to send the plates back over this little imperfection.

Even though I don't play games at all, I still prefer red switches, as they offer the least resistance and are very comfortable for touch typing. So I used a Cherry MX Red for every key. The key caps are just cheap, no-name plastic caps, though thicker than the usual ones. They are comfortable to type on, but nothing fancy.

And the final result looks like this:


And from the top:


OK, it is not exactly beautiful, but it feels great to type on. Function over form, for now.

I used a Teensy 2.0 as the controller, to take advantage of existing firmware work; the firmware is based on tmk_keyboard, with small modifications of my own. The source code is available here. Only two layout layers are implemented at this point, but I intend to add more as I fine-tune it to my liking. Maybe a mouse-key layer will come in the near future, activated by the Fn3 key.

After a few days of practice typing, I like it a lot; I'm typing this blog post on it now. I have not done any scientific measurement of my finger movement, but my pinkies are definitely less busy and feel a lot less strained.

It is, however, in no way perfect. There is still a lot of room for improvement, especially in the layout. First of all, the modifier keys under the thumbs should be moved up by 4 to 5 mm, closer to the keys on the fourth row. That would make the keyboard a tiny bit smaller, and I feel it would be significantly more comfortable. Worried that the keys might sit too close together to pull the key caps off, I added an extra millimeter between the keys at the last minute, just before sending the plate diagram to the steel shop for laser cutting. That was a bad mistake: the extra millimeter adds up quickly, and it makes the first row a bit too far to reach. Thinking about it, I almost never pull the key caps off except for the occasional deep cleaning, so that one millimeter cost me dearly.

As you can see, the layout is still not optimal, even though it was designed with the size of my hands in mind. Four keys remain hard to reach without moving my hands: Esc, 5, 6, and the Lambda key at the top right. I don't really care about the Esc key; I'm not a Vi person. The Lambda key is not used yet, so I don't know how often it will be hit in the future. But the 5 and 6 keys need a bit of stretching, although they are still closer to the home row than the same keys on a normal keyboard.

I'm also not very satisfied with the Fn0 and Fn1 keys. Even though I put them between two modifier keys, and even though I used R2 caps, the lowest profile available to me, I can always feel their presence under my palms. Keyboardio uses a special cap that sits a bit lower than the other keys; I have not been able to try one yet, but I have a feeling it would solve my problem here. To be frank, I have not hit an Fn key by accident so far, but that constant presence keeps reminding me to avoid them when they are not needed.

There is still something not exactly right with the layout of the four rows. I'm not sure what; I can't quite put my finger on it (no pun intended), but it just doesn't feel perfectly satisfying. I'll probably figure it out after using it for some more time.

Before moving on to another problem, let me mention one thing I'm really happy with: my pinkies and index fingers, on both hands, rarely need to stretch sideways. If you're confused about what I mean, try this on a normal keyboard: place your fingers on the home row and try to hit '`', Tab, Left Shift, 'B', 'Y', Right Shift, '\', Backspace, Enter, and so on, without moving your hands from the home row. Your index fingers and pinkies have to stretch sideways significantly to reach them, and these are keys we hit many, many times a day. With this new keyboard, my fingers stretch a lot less, and that's a very good thing.

Back to the problems. Another lesson learned: it would probably be easier to design a PCB and skip the hand-wiring, which would also make the keyboard cleaner. I will definitely make a PCB for the next one; using the Ergodox PCB as a base and adding my own modifications would not be too difficult. Another possible modification is to make it a split keyboard, which would let me reduce the angle of the key layout and hence significantly reduce the keyboard's size. Hand-wiring a split keyboard would have been really messy, which is why I didn't do it this time. Well, the next version, then. I'm still looking for my perfect keyboard.


Issue with Realtek RTS5227 SD card reader on Debian 8.1

I got a new Lenovo ThinkPad X250 laptop last month, and the first thing to do was to format the disk, get rid of Windows, and install Debian 8.1. The only extra thing I needed to do was to install the iwlwifi firmware, and everything just worked. Well, almost: I haven’t had time to play with the fingerprint scanner yet.

Over the course of the month, I ran apt-get upgrade twice for security updates. Everything seemed fine until today, when I inserted an SD card to look something up. Nothing came up. I looked at the system log: not a single event. It was as if the card reader had never detected the insertion.

I took a look at the hardware status; sudo lspci -v showed:

02:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. RTS5227 PCI Express Card Reader (rev 01)
        Subsystem: Lenovo Device 2226
        Flags: fast devsel, IRQ 16
        Memory at f1100000 (32-bit, non-prefetchable) [size=4K]
        Capabilities: [40] Power Management version 3
        Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit+
        Capabilities: [70] Express Endpoint, MSI 00
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [140] Device Serial Number 00-00-00-01-00-4c-e0-00
        Capabilities: [150] Latency Tolerance Reporting
        Capabilities: [158] L1 PM Substates

Hmm, it looked like the module was not even loaded. What was going on? Let’s take a look at dmesg:

[    1.315456] usbcore: registered new interface driver hub
[    1.315520] rtsx_pci 0000:02:00.0: irq 56 for MSI/MSI-X
[    1.315546] rtsx_pci 0000:02:00.0: rtsx_pci_acquire_irq: pcr->msi_en = 1, pci->irq = 56
[    1.315687] thermal LNXTHERM:00: registered as thermal_zone0
[    1.315690] ACPI: Thermal Zone [THM0] (45 C)
[    1.315884] pps_core: LinuxPPS API ver. 1 registered
[    1.315885] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <>
[    1.316046] PTP clock support registered
[    1.317042] e1000e: Intel(R) PRO/1000 Network Driver - 2.3.2-k
[    1.317045] e1000e: Copyright(c) 1999 - 2014 Intel Corporation.
[    1.317176] e1000e 0000:00:19.0: Interrupt Throttling Rate (ints/sec) set to dynamic conservative mode
[    1.317192] e1000e 0000:00:19.0: irq 57 for MSI/MSI-X
[    1.317479] SCSI subsystem initialized
[    1.317643] usbcore: registered new device driver usb
[    1.318186] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
[    1.318398] ehci-pci: EHCI PCI platform driver
[    1.319230] libata version 3.00 loaded.
[    1.415653] rtsx_pci: probe of 0000:02:00.0 failed with error -110
[    1.483569] e1000e 0000:00:19.0 eth0: registered PHC clock

Hmm, the module rtsx_pci failed to load, with error -110. This is odd; I’m quite sure the card reader had been working flawlessly before, across multiple boot sessions. I have a couple of encrypted SD cards for storing personal stuff, and I had used them many times on this laptop. Something must have gone wrong in one of the updates.

Browsing through the kernel bugzilla turned up a few cases with the same error code, and it seemed to be related to the msi_en option. Well, let’s try to work around it by disabling that option. We need to unload the module first, because of its half-baked state:

sudo modprobe -r rtsx_pci

Then, load it again, with the option disabled:

sudo modprobe rtsx_pci msi_en=0

Now, the system log showed:

[ 5234.004011] rtsx_pci 0000:02:00.0: rtsx_pci_acquire_irq: pcr->msi_en = 0, pci->irq = 16

OK, it seems to be loaded. Let’s look at the status again; sudo lspci -v showed:

02:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. RTS5227 PCI Express Card Reader (rev 01)
        Subsystem: Lenovo Device 2226
        Flags: bus master, fast devsel, latency 0, IRQ 16
        Memory at f1100000 (32-bit, non-prefetchable) [size=4K]
        Capabilities: [40] Power Management version 3
        Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit+
        Capabilities: [70] Express Endpoint, MSI 00
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [140] Device Serial Number 00-00-00-01-00-4c-e0-00
        Capabilities: [150] Latency Tolerance Reporting
        Capabilities: [158] L1 PM Substates
        Kernel driver in use: rtsx_pci

Now, let’s insert an SD card into the slot and look at the system log:

Aug  9 15:35:23 venus kernel: [ 5280.782622] mmc0: new high speed SDHC card at address aaaa
Aug  9 15:35:23 venus kernel: [ 5280.783923] mmcblk0: mmc0:aaaa SU08G 7.40 GiB 
Aug  9 15:35:23 venus kernel: [ 5280.797583]  mmcblk0: p1 p2

Bingo, the card is now detected, and the devices are created properly. All we have to do is mount it.

To keep the module option across reboot sessions, create a file /etc/modprobe.d/rtsx_pci.conf and add the following line:

options rtsx_pci msi_en=0

Rebooted and it still worked!


Improvement on user geolocation cache with Hazelcast

In the last post, we saw how to cache user geolocation data in Hazelcast and how to search for nearby users. This was great. However, as soon as you have cached a lot of data, you will find that searching for nearby users is quite slow. What went wrong?

As you may remember, we implemented a GeoDistancePredicate which checks whether a user is within the distance limit from a specific point. The way this predicate works is that, for every entry cached in memory, Hazelcast invokes its apply() method to see whether the entry satisfies the distance criterion. If you have cached a large data set, a query will loop through every entry in the whole data set, one by one. This is basically an O(n) operation; there is no way it can be fast on a large data set.

We need to find a better way to quickly search for nearby users, using only the mechanisms available in Hazelcast.

Hazelcast defines two kinds of predicates: the normal Predicate, and the IndexAwarePredicate. As the name implies, the second interface uses the internal attribute indexes of your cached objects to speed up queries. In the previous post, our GeoDistancePredicate only implemented the Predicate interface; therefore, during a query, Hazelcast had to scan the whole data set to get the results. In this post, we are going to change the implementation to take indexing into consideration, which should significantly improve search performance.

Before we can use indexes for searching, we have to tell Hazelcast which attributes to index. Remember that in the last post we defined a class called MyCachedUser. This simple class has three attributes, but when we search for nearby users, we are only interested in the user’s location, namely the latitude and longitude coordinates. Therefore, we want to build indexes on these two attributes to help speed up searches.

At start-up, we need to tell Hazelcast that we want these attributes indexed:

    HazelcastInstance hz = Hazelcast.newHazelcastInstance();
    IMap<Object, Object> map = hz.getMap("users");
    map.addIndex("latitude", true);
    map.addIndex("longitude", true);

Setting the second parameter to true tells Hazelcast that the indexes should be ordered, which is what we need, since we will be searching over ranges.

Now that we have the indexes in place, we can use them to limit the search space. Given a point, we want to draw a circle centered on that point, with the distance limit as its radius, and restrict our search to within the circle. Before we can do that, we need to figure out how to draw the circle, then find all users whose current location falls inside it.

However, a circle would not let us take advantage of the latitude/longitude indexes. It is much easier to draw a bounding square first and limit the search ranges to within the square; we know the circle fits inside it. After we have found all users within the square, we can eliminate those who are not within the circle, namely the users located in the four corners of the square.

To draw the square, we need a point to the north of the central point, one to the east, one to the south, and one to the west, each at a distance equal to the distance limit we are searching within.

For each of these four points, we have the starting point, the bearing, and the distance. The formula for finding the destination point is well known, so I’m not going into all the details. The Java implementation follows:

	public static GeoCoordinate fromBearingDistance(double lat, double lon, double bearing, double d) {
		/*
		 * φ2 = asin( sin(φ1)*cos(d/R) + cos(φ1)*sin(d/R)*cos(θ) )
		 * λ2 = λ1 + atan2( sin(θ)*sin(d/R)*cos(φ1), cos(d/R)−sin(φ1)*sin(φ2) )
		 */
		double lat1 = Math.toRadians(lat);
		double lon1 = Math.toRadians(lon);
		double brng = Math.toRadians(bearing);
		double lat2 = Math.asin(Math.sin(lat1) * Math.cos(d / EARTH_RADIUS)
				+ Math.cos(lat1) * Math.sin(d / EARTH_RADIUS) * Math.cos(brng));
		double lon2 = lon1
				+ Math.atan2(
						Math.sin(brng) * Math.sin(d / EARTH_RADIUS) * Math.cos(lat1),
						Math.cos(d / EARTH_RADIUS) - Math.sin(lat1) * Math.sin(lat2));
		return new GeoCoordinate(Math.toDegrees(lat2), Math.toDegrees(lon2));
	}

The coordinates are provided in degrees, but the formula works in radians; therefore, we convert the inputs to radians first and convert the result back to degrees. The GeoCoordinate class is defined as:

public class GeoCoordinate implements Portable {
	public static final String KEY_LATITUDE  = "latitude";
	public static final String KEY_LONGITUDE = "longitude";

	private double latitude;
	private double longitude;

	public GeoCoordinate() {
	}

	public GeoCoordinate(double lat, double lng) {
		this.latitude = lat;
		this.longitude = lng;
	}

	public double getLatitude() {
		return latitude;
	}

	public void setLatitude(double lat) {
		this.latitude = lat;
	}

	public double getLongitude() {
		return longitude;
	}

	public void setLongitude(double lng) {
		this.longitude = lng;
	}

	public int getFactoryId() {
		return CachedObjectFactory.FACTORY_ID;
	}

	public int getClassId() {
		return CachedObjectFactory.TYPE_GEOCOORDINATE;
	}

	public void writePortable(PortableWriter writer) throws IOException {
		writer.writeDouble(KEY_LATITUDE, latitude);
		writer.writeDouble(KEY_LONGITUDE, longitude);
	}

	public void readPortable(PortableReader reader) throws IOException {
		latitude = reader.readDouble(KEY_LATITUDE);
		longitude = reader.readDouble(KEY_LONGITUDE);
	}

	public String toString() {
		return "lat=" + latitude + ";lng=" + longitude;
	}
}
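The destination-point formula is easy to sanity-check in isolation. Here is a small standalone sketch; the class name, the double[] result type, and the local EARTH_RADIUS of 6371 km are stand-ins of mine, not the project’s GeoUtil and GeoCoordinate classes. Moving due north from the equator by one degree’s worth of arc, about 111.195 km, should land almost exactly at latitude 1, longitude 0.

```java
// Standalone sanity check for the destination-point formula above.
// EARTH_RADIUS and the double[] result are local stand-ins for the
// project's GeoUtil / GeoCoordinate classes.
public class BearingDistanceCheck {
	static final double EARTH_RADIUS = 6371.0; // mean Earth radius, km

	// Same formula as fromBearingDistance above: lat/lon/bearing in
	// degrees, distance d in km; returns {lat, lon} in degrees.
	static double[] fromBearingDistance(double lat, double lon, double bearing, double d) {
		double lat1 = Math.toRadians(lat);
		double lon1 = Math.toRadians(lon);
		double brng = Math.toRadians(bearing);
		double lat2 = Math.asin(Math.sin(lat1) * Math.cos(d / EARTH_RADIUS)
				+ Math.cos(lat1) * Math.sin(d / EARTH_RADIUS) * Math.cos(brng));
		double lon2 = lon1 + Math.atan2(
				Math.sin(brng) * Math.sin(d / EARTH_RADIUS) * Math.cos(lat1),
				Math.cos(d / EARTH_RADIUS) - Math.sin(lat1) * Math.sin(lat2));
		return new double[] { Math.toDegrees(lat2), Math.toDegrees(lon2) };
	}

	public static void main(String[] args) {
		// One degree of latitude spans EARTH_RADIUS * PI / 180,
		// roughly 111.195 km, so heading due north by that distance
		// from (0, 0) should land very close to (1, 0).
		double oneDegree = EARTH_RADIUS * Math.PI / 180.0;
		double[] p = fromBearingDistance(0.0, 0.0, 0.0, oneDegree);
		System.out.printf("lat=%.6f lon=%.6f%n", p[0], p[1]); // prints lat=1.000000 lon=0.000000
	}
}
```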

Now it’s time to revisit our GeoDistancePredicate implementation. As we said earlier, we need to make the predicate aware of the indexes. We can implement the IndexAwarePredicate interface directly, or derive from the class AbstractPredicate, which already implements IndexAwarePredicate. Without further ado, here is the modified version of GeoDistancePredicate:

public class GeoDistancePredicate extends AbstractPredicate {
	private double latitude;
	private double longitude;
	private double distance;
	private double latFloor;
	private double latCeiling;
	private double lngFloor;
	private double lngCeiling;

	public GeoDistancePredicate() {
	}

	public GeoDistancePredicate(double lat, double lng, double dist) {
		this.latitude = lat;
		this.longitude = lng;
		this.distance = dist;
		init();
	}

	// Compute the bounding square: the points at the given distance
	// to the north, south, east and west of the central point.
	private void init() {
		GeoCoordinate c = GeoUtil.fromBearingDistance(latitude, longitude, GeoUtil.NORTH, distance);
		latCeiling = c.getLatitude();
		c = GeoUtil.fromBearingDistance(latitude, longitude, GeoUtil.SOUTH, distance);
		latFloor = c.getLatitude();
		c = GeoUtil.fromBearingDistance(latitude, longitude, GeoUtil.EAST, distance);
		lngCeiling = c.getLongitude();
		c = GeoUtil.fromBearingDistance(latitude, longitude, GeoUtil.WEST, distance);
		lngFloor = c.getLongitude();
	}

	public void readData(ObjectDataInput in) throws IOException {
		latitude = in.readDouble();
		longitude = in.readDouble();
		distance = in.readDouble();
		init();
	}

	public void writeData(ObjectDataOutput out) throws IOException {
		out.writeDouble(latitude);
		out.writeDouble(longitude);
		out.writeDouble(distance);
	}

	public boolean apply(Entry entry) {
		boolean res = false;
		Object obj = entry.getValue();
		if (obj instanceof MyCachedUser) {
			MyCachedUser u = (MyCachedUser) obj;
			double dist = GeoUtil.getDistance(latitude, longitude, u.getLatitude(), u.getLongitude());
			res = (dist <= distance);
		}
		return res;
	}

	public Set<QueryableEntry> filter(QueryContext queryContext) {
		// Use the indexes to restrict the search to the bounding square...
		String sql = "latitude BETWEEN " + latFloor + " AND " + latCeiling
				+ " AND longitude BETWEEN " + lngFloor + " AND " + lngCeiling;
		SqlPredicate sqlPred = new SqlPredicate(sql);
		Set<QueryableEntry> entries = sqlPred.filter(queryContext);
		// ...then keep only the entries that fall within the circle.
		Set<QueryableEntry> endList = new HashSet<QueryableEntry>();
		for (QueryableEntry e : entries) {
			Object v = e.getValue();
			if (v instanceof MyCachedUser) {
				MyCachedUser u = (MyCachedUser) v;
				double dist = GeoUtil.getDistance(latitude, longitude, u.getLatitude(), u.getLongitude());
				if (dist <= distance) {
					endList.add(e);
				}
			}
		}
		return endList;
	}
}


As you can see, given a central point, we find the points to its north, east, south and west. Using the coordinates of these four points to form the square, we then limit the search space to within that square. The bearings are defined as:

	public static final double NORTH = 0.0d;
	public static final double EAST = 90.0d;
	public static final double SOUTH = 180.0d;
	public static final double WEST = 270.0d;

In this new implementation, the apply() method is no longer used; instead, the filter() method is called to narrow down the results based on our new search criteria. Here, we use BETWEEN and AND predicates to find all users within the square, then we filter out those whose current location is not within the circle.
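Both apply() and filter() lean on GeoUtil.getDistance, which I did not show here. Below is a minimal sketch of such a helper, assuming the haversine formula, distances in kilometres, and the same 6371 km mean Earth radius as before; the class and method here are stand-ins, and the real GeoUtil implementation may differ.

```java
// Hypothetical stand-in for GeoUtil.getDistance: great-circle
// distance between two lat/lng points (in degrees) computed with
// the haversine formula, result in kilometres.
public final class HaversineDistance {
	static final double EARTH_RADIUS = 6371.0; // mean Earth radius, km

	public static double getDistance(double lat1, double lng1, double lat2, double lng2) {
		double dLat = Math.toRadians(lat2 - lat1);
		double dLng = Math.toRadians(lng2 - lng1);
		double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
				+ Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
				* Math.sin(dLng / 2) * Math.sin(dLng / 2);
		return 2 * EARTH_RADIUS * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
	}

	public static void main(String[] args) {
		// Roughly 344 km between Paris (48.8566, 2.3522) and
		// London (51.5074, -0.1278).
		System.out.printf("%.1f km%n", getDistance(48.8566, 2.3522, 51.5074, -0.1278));
	}
}
```

The haversine formula treats the Earth as a sphere, which is plenty accurate for a "nearby users" radius of a few kilometres.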

That’s it; there is no change needed to your application logic. With this modification, we reduce the search complexity from O(n) to O(log n), which should be significantly faster than the previous implementation.