Sunday, 23 February 2014

Bitcoin and Open Source

Imagine that Bitcoin had been a company trying to create a new international payment system, based on something other than cryptography, and that the project had gone through problems similar to Bitcoin's. Worldwide confidence in it would have been laughable by now.

What makes Bitcoin come back again and again, stronger after every setback? And what makes such a large number of critics angrily claim that it is all a hoax?

One reason is that many people believe in the idea. As long as they continue to invest in it, it grows. But that is not the root cause; it is an effect. If a company had attempted such a project, I do not think it would have had many followers, much less investors.

The idea of Bitcoin is built upon an invention: how to implement a public distributed ledger. I think the key factor in it all is that Bitcoin is open source. This has some interesting implications, and leads to some mechanisms not generally recognized by critics.
  • The implementation is available for anyone to see. That means you can, if you want, look for weaknesses in the cryptography. No one can credibly claim that the NSA added back doors. This is vital if the algorithm is going to be accepted by major investors. (A small sketch of the kind of check anyone can run against the public ledger follows this list.)
  • There can't be a hidden agenda. Even if Osama bin Laden had been the man behind Satoshi Nakamoto, you could still trust Bitcoin.
  • Regardless of setbacks, as long as there are people who believe in Bitcoin, the project will continue to improve.
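Because both the protocol and the ledger are public, anyone can re-run the checks for themselves instead of trusting a company's word. As a minimal sketch in Python, assuming you already have a raw 80-byte block header as a hex string (for example from your own node; the placeholder argument and helper names below are my own illustration, not part of any official API), the proof-of-work check on a single block looks roughly like this:

    import hashlib

    def block_hash(header_hex: str) -> str:
        """Double SHA-256 of an 80-byte block header, shown in the
        conventional reversed byte order used by block explorers."""
        header = bytes.fromhex(header_hex)
        digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
        return digest[::-1].hex()

    def meets_target(header_hex: str, bits: int) -> bool:
        """Proof-of-work check: the hash, read as a 256-bit number, must not
        exceed the target encoded in the header's compact 'bits' field."""
        exponent = bits >> 24
        coefficient = bits & 0xFFFFFF
        target = coefficient * 256 ** (exponent - 3)
        return int(block_hash(header_hex), 16) <= target

    # Usage with real data (placeholder shown, not a real header):
    # print(meets_target("<160 hex characters of block header>", 0x1D00FFFF))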
Critics keep repeating arguments for why Bitcoin will fail. The effect of Bitcoin being deflationary is a common one. Users and developers of Bitcoin don't care about that; it may actually be the opposite. Classically trained economists can say what they like about deflationary currencies, but it is not going to have an effect on Bitcoin's success.

Another commonly raised issue is that Bitcoin will take the power to control money away from governments, which will make it harder for them to manage debt. Again, this has no direct effect on Bitcoin. Governments will have to find new ways to manage debt. Maybe they will actually have to get better control of the causes of that debt, stopping the situation before it goes too far.

Linux was one of the first open source projects that changed the world, even though people in general don't know it. It did so through the Android operating system.

I think Bitcoin is going to be the next Open Source project that will change the world, with even bigger ramifications.

Friday, 9 August 2013

The grand unified theory of the future

Three mergers

First, there are three mergers that are needed.

Intelligence with computers

Language was the first step. With it, it was possible to transfer lessons learned without people having to make the mistakes themselves. Written text was the next step, where there was no longer a need to rely on memory. The World Wide Web is perhaps the next step again: information is now available from more people, to more people, than ever before, and much faster.

Humans of today are already enhanced with the help of computer intelligence. I don't know what the next step will be, but it could be a deeper integration between computers and one or more of our senses. The eye is the sensory input with the highest data throughput. Eventually, I think the integration will be directly with the brain.

Eventually, there will come a day when there is no clear border between human intelligence and artificial intelligence. It will no longer be interesting to define one.

Biology with machines

Originally, organic chemistry denoted chemical compounds that were part of life. It was soon realized that many chemical elements could be involved in biological organisms. The word virus is already used both for biological entities and for software.

As machines get smaller, the mechanisms they use to function will become more like those of today's living organisms, but more efficient. There are ongoing experiments to create artificial life from DNA. There exist organisms today with a completely hand-crafted DNA definition.

Eventually, there will be self-replicating nano-mechanisms, and we will need immune systems to protect us. We will live in symbiosis with some of them, just as we do today. There will also be artificial mechanisms that use evolution to improve themselves.

Reality with virtual reality

There are several signs of this:
  • Augmented reality
  • MMORPG
  • Simulation games
While virtual reality will get more and more like real life, there will also be VRs that are not at all like real life. It is hard to predict what they will look like. Why should a VR emulate the principles of physical space, the causality of time, or personal identities? I believe every form of VR will always have a set of rules that you have to obey; it is just that some of those rules will be completely alien to what we have today.

Final merger

The final merger will make these three into one. The question of what it is to be human will lose its meaning. But I think there will be many problems along the way, possibly painful ones, including existential risks to us.

Wednesday, 26 September 2012

Stock trading robots

Signs of an approaching technological singularity: Robotic trading systems.

A rapidly increasing share of the trade on the exchange markets is now automated. There are people who make a living from short-term investments (day traders), and they complain that it is getting harder and harder to read the signs, so they can no longer make a profit.

To be able to make transactions at the millisecond level, the computers need to be located in the same room as the "stock market computer" itself. Since many stakeholders are interested in this, they all want as low latency as possible. To make it fair, all such computers get the same connection, with the same cable length.

This is yet another business taken over by computers. Is it good or bad?

Sunday, 2 September 2012

Robots and AI


Some people interested in the future can't stop thinking about the development of robots and general artificial intelligence. But I think most of them have got it wrong, and that it is not going to happen that way.

There will never be humanoid robots, except for marginal needs. If a robot needs to be completely general, it would be designed like a human. But the idea of making a general-purpose robot is wrong. People don't think about it, but everyday life is already full of specialized robots. I have several: one that washes the dishes, one that washes clothes, one that makes coffee for me, and so on.

Do we really need general-purpose robots? I don't think so. The day it becomes possible to make "real" robots, we will already have integrated our world with the virtual one, and you won't be able to distinguish between them anyway.

I believe it will be the same with general artificial intelligence. Some are obsessed with the idea of making a general AI, as smart as a human, that will then create the next, even smarter one. One problem with this concept is defining "intelligence". There are already a lot of specialized AIs. Why should we make one that mimics a human mind? And if we create one, do we want it to be human? That would mean giving it the basic human goals, which are a will to live and a will to breed. Without those, it will not have emotions, and so it will not be human at all.

I think it will go the same way as with robots. The day we succeed, we will already have succeeded in making a copy of a physical brain in a computer, and the need will be marginal.

So what about the Technological Singularity then? I don't think it depends on the presence of human-like robots or human-like AI. The Singularity is about accelerating progress, which is perfectly plausible with specialized AI.

Saturday, 4 February 2012

The future of online gaming

As a generalization, there are two types of games: single player and online multiplayer. As more and more players are connected to the Internet, the demand for online multiplayer games will grow. Within online multiplayer games, there are again two types: PvE and PvP. PvE means Player versus Environment; this is a game you play cooperatively with other players. PvP means Player versus Player.

There are some fundamental differences between PvE and PvP. In PvE games, the game designers have to invest huge resources in creating an environment for exploration and adventure. The average cost of a video game in 2010 was more than 20 million US$, and a game typically lasts for about 60 hours of play.

PvP games, on the other hand, can engage players for a far longer time, even if the game design is much simpler. It may still take a big effort to produce the game, but there is less need for complex environments. At the extreme end, there are Chess, Go, and other classical board games.

Why do players return again and again to the same PvP battleground? The answer is the opponents. You don't like games that repeat themselves every time; you want variation and increasing challenges. In PvP, you can expect the opponents to improve, posing new challenges. That is why you can spend your life playing only a game like Go and still learn and appreciate new ideas.

There is one market that has, to a large extent, been ignored: PvE games where players can add to the environment. I played the PvE side of World of Warcraft for several years and enjoyed most of that time a lot. But in the end, you had to wait for Blizzard to create the next expansion. Imagine if it had been possible for players to add content to this world, for everyone else to enjoy. The game could have been unlimited. World of Warcraft isn't easily adapted to such a strategy, though.

There are games with player-created content. Many game engines allow for mods. One of the most successful multiplayer games, Counter-Strike, started out as a mod for Half-Life. If you look at what is happening with strategy game mods, for example in StarCraft, you will find extreme ingenuity. Players use features in the most unexpected ways to create intriguing games and challenges. The problem is that these mods are temporary: you play the game, it finishes, and the next time you start from the beginning again.

I predict a future type of game with the following characteristics, and I think this is going to be the biggest genre within 10 years. (A minimal sketch of what such a persistent data model could look like follows the list.)

  1. It is online and massively multiplayer.
  2. You log in, make progress, save the state, and log out. The next time you return, you keep what you had.
  3. Players allocate a territory in the game world and add content to it that other players can explore. That way, the basic game is forever developing, and it will never be the same from day to day.
  4. The adventures you create are your own, and no one else can modify them (unless you cooperate with friends).
  5. The social aspect is key, as you will do the exploration together with online friends. Friends can be a set of people you choose to associate with, or temporary groups you meet inside the game.
  6. The owners of the game have to create a game server engine, a client, a starting area, and a list of flexible construction blocks.
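To make points 2 to 4 concrete, here is a minimal, hypothetical Python sketch of the persistent, player-owned part of such a game world. The names Player, Territory, and save_world, and the JSON file persistence, are my own illustration, not a description of any existing engine:

    import json
    from dataclasses import dataclass, field, asdict

    @dataclass
    class Player:
        """Persistent player state: progress survives logout (point 2)."""
        name: str
        level: int = 1
        inventory: list = field(default_factory=list)

    @dataclass
    class Territory:
        """A region of the world owned by one player (points 3 and 4)."""
        owner: str
        editors: list = field(default_factory=list)   # friends allowed to co-build
        blocks: dict = field(default_factory=dict)    # "x,y,z" -> construction block id

        def place_block(self, player: str, pos: str, block_id: str) -> None:
            # Only the owner or invited friends may modify the territory.
            if player != self.owner and player not in self.editors:
                raise PermissionError(f"{player} may not build here")
            self.blocks[pos] = block_id

    def save_world(players: list, territories: list, path: str) -> None:
        """Persist everything so the world keeps evolving between sessions."""
        state = {"players": [asdict(p) for p in players],
                 "territories": [asdict(t) for t in territories]}
        with open(path, "w") as f:
            json.dump(state, f)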
In the 90s, I created LPMUD (with a lot of support from friends). It was a text adventure fulfilling many of these characteristics. LPMUD did not support graphics, and so it was difficult to move it into more modern settings. A contemporary version of this game type should support thousands of simultaneous players, which requires huge resources. For the first time, hardware and communication are now powerful enough.

This is the background to Ephenation.


Wednesday, 15 June 2011

The success of Chromebook


What are the chances of success for the Google Chromebook?

There are five main segments today: workstations, laptops, Internet tablets, smartphones, and mobile phones. The Chromebook will have to compete with these, and possibly create a new segment. But I think six segments is at least one too many, and one of them will be marginalized. Unless the Chromebook out-competes either the laptop or the Internet tablet, I think the Chromebook is the one that will be marginalized. (Aside from that, I think workstations and "dumb" mobile phones are also going to be marginalized soon.)

An important question is whether the Chromebook is arriving at the right time. For it to succeed, people have to accept that everything is done in the cloud. Cloud services are improving quickly, but are they improving quickly enough? It is a paradigm shift, and it may take time to get used to. I think this paradigm shift will win eventually, but the success of the Chromebook is more uncertain.

Why is Google investing so much in the Chromebook? Google's revenue comes mainly from Internet advertisements. They prefer people to use Internet services for their daily business, instead of legacy applications installed on the PC. The perfect world for Google would be one where everyone uses only a web browser, and so a perfect world for Google would be one where everyone uses a Chromebook.

People are not going to switch to Chromebooks just because that is what is best for Google, especially if the pricing is not competitive. I think Google will have to subsidize the Chromebook for it to succeed. An alternative to a Chromebook would be a separate keyboard that can be attached to an Internet tablet. That looks more attractive to me, personally.

Tuesday, 24 May 2011

Simulation argument

There is a theory that we are all living in a computer simulation. Maybe not a computer, but a simulation anyway. This theory is neither provable nor disprovable, so it is undecidable. That means it falls into the same category as a religion. Nevertheless, there are signs that suggest we are in a simulation. These are based on my assumptions about how such a simulator would be constructed.

Problem: To simulate a real world, a lot of computing performance is needed. One way to achieve this is to use a massively parallel computer. One problem with such a simulation is that every event that happens anywhere would influence every other part of the simulation (through one of the forces of nature). This would be expensive and has to be minimized.

Solution: Limit the speed with which events can influence other parts.

Our world: No information can travel faster than the speed of light.

Problem: Given time, events will eventually have effects on all other parts of the simulation, even if there is a speed limit.

Solution: Let the simulated world expand in such a way that changes can no longer catch up with anything outside a limited sphere.

Our world: The universe is expanding, and we will never be able to see beyond a certain point because that point is moving away too fast.

Problem: Computing exact results with infinite precision takes infinite time.

Solution: Impose a limitation on the precision of computations.

Our world: There is something called the Heisenberg uncertainty principle. It states that certain pairs of physical properties, such as position and momentum, cannot both be determined simultaneously to arbitrarily high precision.

Problem: There are many computations that may never be observed and may never have any effects outside a limited environment.

Solution: Make use of a lazy algorithm. Things are not computed until the result is actually needed somewhere.

Our world: In quantum physics, there is something called quantum entanglement, whereby the degrees of freedom that make up a system are linked in such a way that the quantum state of any one of them cannot be adequately described independently, even if the individual degrees of freedom belong to different objects and are spatially separated.
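As an illustration of the "lazy algorithm" solution above, here is a toy Python sketch of my own (not a claim about how an actual world simulator would work): the expensive state of a region is only computed the first time something observes it.

    class Lazy:
        """Defer an expensive computation until an observer actually needs it."""
        def __init__(self, compute):
            self._compute = compute      # a function describing how to get the value
            self._value = None
            self._evaluated = False

        def observe(self):
            # The cost is paid only at the moment of observation, and only once.
            if not self._evaluated:
                self._value = self._compute()
                self._evaluated = True
            return self._value

    # A region nobody looks at costs nothing to maintain...
    remote_region = Lazy(lambda: sum(i * i for i in range(10_000_000)))
    # ...until an observer interacts with it.
    print(remote_region.observe())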