Wednesday, 15 June 2011
The success of the Chromebook
What are the chances of success for the Google Chromebook?
Tuesday, 24 May 2011
Simulation argument
Problem: Simulating a real world requires enormous computing power. One way to achieve it is to use a massively parallel computer. One problem with such a simulation is that every event that happens anywhere would influence every other part of the simulation (through one of the forces of nature). This would be expensive, and has to be minimized.
Solution: Limit the speed with which events can influence other parts.
Our world: No information can travel faster than the speed of light.
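Purely as an illustration (a toy Python sketch of my own, nothing more), a simulator could enforce such a limit by letting each cell of the world depend only on its immediate neighbours, so no event spreads faster than one cell per tick:

    # Illustrative: a 1-D simulated world where each cell's next state
    # depends only on its immediate neighbours. An event can therefore never
    # influence a cell further away than one cell per tick -- the
    # simulation's built-in "speed of light".
    def step(world):
        n = len(world)
        return [(world[(i - 1) % n] + world[i] + world[(i + 1) % n]) / 3.0
                for i in range(n)]

    world = [0.0] * 16
    world[8] = 1.0              # a single local event
    for tick in range(4):
        world = step(world)
    # After 4 ticks the disturbance has spread at most 4 cells in each
    # direction; cells further away are still exactly as they were and
    # could have been updated in parallel without hearing about the event.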
Problem: Given time, events will eventually have effects on all other parts of the simulation, even if there is a speed limit.
Solution: Let the simulated world expand in such a way that, beyond a limited sphere, changes can never catch up.
Our world: The universe is expanding, and we will never be able to see beyond a certain point because that point is moving away too fast.
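A small numerical sketch of the same idea (my own illustration, with made-up units): if a point at distance d recedes at a rate H * d while signals travel at the limit c, then nothing starting beyond the distance c / H can ever be reached.

    # Illustrative only: expansion creates a horizon at distance c / H.
    c = 1.0      # the speed limit (arbitrary units)
    H = 0.1      # expansion rate: a point at distance d recedes at H * d

    def signal_arrives(d0, dt=0.01, t_max=200.0):
        """Does a signal sent now ever close an initial gap of d0?"""
        gap, t = d0, 0.0
        while t < t_max:
            gap += (H * gap - c) * dt   # gap grows by expansion, shrinks by c
            t += dt
            if gap <= 0:
                return True
        return False

    print(signal_arrives(5.0))    # True:  5  is inside  c / H = 10
    print(signal_arrives(15.0))   # False: 15 is beyond the horizon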
Problem: Computing exact results with infinite precision takes infinite time.
Solution: Impose a limitation on the precision of computations.
Our world: There is the Heisenberg uncertainty principle. It states that certain pairs of physical properties, such as position and momentum, cannot both be measured simultaneously to arbitrarily high precision.
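As a rough analogy in code (an illustration of capped precision only, not of quantum mechanics): Python's decimal module lets every computation be limited to a fixed number of significant digits.

    from decimal import Decimal, getcontext

    getcontext().prec = 8            # the simulator's built-in precision limit

    print(Decimal(1) / Decimal(3))   # 0.33333333 -- rounded, not exact
    # The exact result would need infinitely many digits (infinite time);
    # the fixed precision makes every computation finish in bounded work.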
Problem: There are many computations that may never be observed and may never have any effects outside a limited environment.
Solution: Make use of a lazy algorithm. Things are not computed until the result is actually needed somewhere.
Our world: In quantum physics there is something called quantum entanglement, whereby the degrees of freedom that make up a system are linked in such a way that the quantum state of any one of them cannot be adequately described independently, even if the individual degrees of freedom belong to different objects and are spatially separated.
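A minimal sketch of such a lazy algorithm (illustrative Python, nothing quantum about it): the value of a region is only computed the first time something observes it, and then cached.

    class Lazy:
        """A value that is only computed when it is first observed."""
        def __init__(self, compute):
            self._compute = compute      # how to produce the value if asked
            self._value = None
            self._done = False

        def observe(self):
            if not self._done:           # first observation triggers the work
                self._value = self._compute()
                self._done = True
            return self._value

    def expensive_region():
        print("computing a region nobody may ever look at...")
        return 42

    region = Lazy(expensive_region)      # nothing computed yet
    print(region.observe())              # prints the message, then 42
    print(region.observe())              # cached: just 42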
Saturday, 21 May 2011
Exponential growth
Tuesday, 17 May 2011
Will a general artificial intelligence be benevolent?
I think one of the important building blocks of an AI is the driving force. The problem with chess programs is that the driving force is extremely narrow. It is something like "for each piece, try every possible move; for each of those moves, try every possible response; all the while, evaluate the position according to a strictly defined algorithm", and so on. This doesn't give the program much ability to generalize.
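To make that narrowness concrete, here is a toy sketch of my own (not chess, just a take-1-or-2-sticks game): the whole "driving force" is an exhaustive search plus a fixed scoring rule.

    # Toy sketch of the narrow driving force: try every legal move, try every
    # reply, and score the outcome with a fixed rule. The "game": players take
    # 1 or 2 sticks in turn; whoever takes the last stick wins.

    def legal_moves(sticks):
        return [m for m in (1, 2) if m <= sticks]

    def minimax(sticks, maximizing):
        if sticks == 0:
            # The player who just moved took the last stick and won.
            return -1 if maximizing else 1
        scores = [minimax(sticks - m, not maximizing) for m in legal_moves(sticks)]
        return max(scores) if maximizing else min(scores)

    print(minimax(5, True))   # 1: the player to move can force a win from 5 sticks
    # Nothing here knows anything about opponents, only about the game tree
    # and the scoring rule.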
If the computer instead had a more generally defined driving force, such as "play human opponents; the goal is to win", it would leave more room for out-of-the-box solutions. For example, you win more if the humans play worse, so one solution would be to select less talented opponents. Another solution that would fulfill the goal would be to somehow arrange to have the opponents killed and win by walkover. A program that can take this kind of very general driving force doesn't exist yet. Even if the given example is extreme, more realistic near-future examples can be imagined.
It is a version of the old adage "the way you ask something determines what kind of answer you get". I think the defined driving force, the algorithms, and the rules are what will ultimately make up an AI. You don't really need the rules to make an AI, but I strongly encourage the use of them. Even a seemingly benevolent driving force is no guarantee of the eventual behaviour.
The hope is that giving an AI a human driving force would also give it human behaviour. But I think that is a mistake, as the nice part of (most) human behaviour is only a side effect. There are spectacular examples of humans who succeed well from an individual point of view, but catastrophically from the point of view of human civilization.