Monday, August 20, 2012

What is computation?

Computation has many forms, but they all lead to the same result, and they are thus all equally valid. One of those forms is simplification, or reduction. This is what allows the lambda calculus to compute. When I state the equation x + 5 = 8, the answer is already there. Every bit of information needed to "solve for x" is already embedded in the statement itself. Thus we can simplify it, reduce it, to the statement x = 3. The reduced statement says the same thing, but makes it clearer what x is.

That, in its essence, is what computation is: the simplification of statements into simpler statements. Languages such as Haskell show that one can do very powerful things without mutating variables, using nothing but the reduction of expressions into simpler forms to compute. The lambda calculus has been shown to be Turing complete, and can thus compute anything a Turing machine can.
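To make this concrete, here is a minimal Haskell sketch (the function name is my own, for illustration only): evaluating an expression is nothing but rewriting it, step by step, into simpler equivalent forms.

    -- Computation as reduction: no variable is ever updated; the
    -- expression is rewritten until nothing is left to simplify.
    square :: Integer -> Integer
    square n = n * n

    -- Reduction of square (2 + 1), step by step:
    --   square (2 + 1)
    --     => (2 + 1) * (2 + 1)   -- unfold the definition of square
    --     => 3 * 3               -- reduce the (shared) argument
    --     => 9                   -- a value: nothing left to simplify
    main :: IO ()
    main = print (square (2 + 1))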

But what about NP problems? NP problems are problems whose solutions can be verified in polynomial time, but for which no polynomial-time solving method is currently known. It has, however, been proven neither that NP ≠ P nor that NP = P, so we don't know whether they can or cannot be solved in polynomial time. But as we have seen, they contain all the information needed to solve them; we just don't know any better general method than trying all the different combinations. They are computable nonetheless.
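As a toy illustration (my own example, with made-up names), take subset sum, a classic NP problem: given a list of numbers, is there a subset summing to a target? Verifying a proposed subset takes only polynomial time, but the only general way we know to find one is to try every subset, and there are 2^n of them.

    import Data.List (subsequences)

    -- Brute force: generate every subset and test it. All the information
    -- is in the input; we just lack a cleverer general way to extract it.
    subsetSum :: Integer -> [Integer] -> Maybe [Integer]
    subsetSum target xs =
      case [s | s <- subsequences xs, sum s == target] of
        []      -> Nothing
        (s : _) -> Just s

    main :: IO ()
    main = print (subsetSum 10 [3, 5, 7, 2])   -- prints Just [3,7]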

Saturday, August 11, 2012

Say's Law

Say's law is a fundamental law of economics, which John Maynard Keynes claimed to have refuted. In his work, The General Theory of Employment, Interest and Money, he states Say's law as "supply creates its own demand". This statement, however, leads to a wrong interpretation of the law. The true law is not that "supply creates its own demand", but rather that to demand, one must supply, for how would one trade if one did not have anything to trade with? So a truer statement of the law would be "one's own supply creates one's own demand", for what you can demand in trade is a function of what you can supply to trade with.

Saturday, August 04, 2012

What entrepreneurs give back to society

A common argument for justifying taxes on production is that the framework society provides is what made possible the profits the entrepreneurs utilizing its resources got for their trouble, and thus it is only natural that the entrepreneurs give something back to society.

What is often forgotten is that these entrepreneurs have already given back to society. If they have made enough of the economic good to satisfy their own demand, they lower the price by lowering the general demand for the good. If they have a surplus and trade it for other goods, they have increased the supply, further reducing the price of that good. Both effects lead to a lower price for the good, and thus people have easier access to it.

One could argue that this portion of the price which the entrepreneurial outfit lowers is what the entrepreneur gives back to society, and does so through the market. Whether this portion is enough is another issue. But let it not be said that entrepreneurs give nothing back to society.

Intuitive definitions of operations

Disclaimer: All of this is pure speculation, and should not be taken seriously. Inspiration taken from the Wikipedia articles on the Ackermann function and Hyperoperations.

It's interesting to note how the common operations in mathematics are easy to define from one another on the set of positive natural numbers. The only thing we need for addition, multiplication and exponentiation is a function which gives us the next member of the natural number set, i.e. the next integer. We'll call this function S(x) (S for successor).

Now, onto the definitions:
S(x) is the function which maps x, a member of a set, to the next member of that set.
We can define Add(a,b)* as S(S(...S(a)...)), where the S function is applied b times.
Then Mul(a,b)** is Add(a, Add(a, ... Add(a,a) ...)), where Add is applied b-1 times (so that a appears b times),
And Exp(a,b)*** is Mul(a, Mul(a, ... Mul(a,a) ...)), where Mul is applied b-1 times (again with a appearing b times).

Simple as that!****

From this we can see that if you want a machine that's able to perform addition, multiplication and exponentiation on the positive natural numbers, it is sufficient that it can apply the successor function to the members, and that it can call functions recursively.
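Here is a sketch of such a "machine" in Haskell (the names are mine, and I write expo to avoid clashing with the standard exp; the b - 1 in the recursive calls is only loop bookkeeping, not arithmetic on the result):

    successor :: Integer -> Integer
    successor x = x + 1                    -- stands in for S(x)

    add :: Integer -> Integer -> Integer   -- S applied b times to a
    add a 0 = a
    add a b = successor (add a (b - 1))

    mul :: Integer -> Integer -> Integer   -- Add applied b-1 times
    mul a 1 = a
    mul a b = add a (mul a (b - 1))

    expo :: Integer -> Integer -> Integer  -- Mul applied b-1 times
    expo a 1 = a
    expo a b = mul a (expo a (b - 1))

For example, expo 2 10 unwinds through mul and add down to repeated applications of successor, and yields 1024.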

If we wanted to define subtraction, we'd simply swap the successor function for the predecessor function P. The multiplication function would then become a multiplication of a negative and a positive number, and the exponentiation function would be an exponentiation of a negative number. If we define the (-) sign, when joined with a, as this operation (the swapping of S for P), we now have negative numbers (note, however, that b must remain a positive natural number, while a can be any integer).
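Continuing the sketch from above (again with names of my own choosing), swapping in the predecessor gives:

    predecessor :: Integer -> Integer
    predecessor x = x - 1                  -- stands in for P(x)

    sub :: Integer -> Integer -> Integer   -- P applied b times to a, i.e. a + (-b)
    sub a 0 = a
    sub a b = predecessor (sub a (b - 1))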

It would be interesting to see what happens if we use this definition on other sets, which might be a cause for inquiry later.

Notes:

* Usually written as a+b
** Usually written as a*b
*** Usually written as a^b (not to be confused with the natural exponent)
**** We could of course logically continue this process, and get operations where OP_n(a,b) is OP_(n-1)(a, OP_(n-1)(a, ... OP_(n-1)(a,a) ...)), where OP_(n-1) is applied b-1 times.
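As a speculative sketch of this last note (my own code, not a standard library function), the whole tower fits in one recursive definition, where level 1 is addition, level 2 multiplication, level 3 exponentiation, and so on:

    hyper :: Int -> Integer -> Integer -> Integer
    hyper 1 a b = a + b                               -- base case: addition
    hyper _ a 1 = a                                   -- OP_n(a,1) = a
    hyper n a b = hyper (n - 1) a (hyper n a (b - 1)) -- iterate level n-1

    -- hyper 2 6 7 => 42, hyper 3 2 10 => 1024, hyper 4 2 3 => 16 (2^(2^2))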

p.s. If you spot an error, point it out!

The Dawn of a New Blog


I've decided to publish some of my thoughts and tales, for whomever might be interested in reading them.

The first thing I'd like to do is link to my GitHub profile, where many of my programming projects will be published. Currently it has only two: a program which simulates Conway's Game of Life (and other Life-like cellular automata), and a program which efficiently calculates all 2-parasitic numbers less than N in O(log_10(N)) time.

Recently, I've been taking a Stanford cryptography course through Coursera, which has involved some programming, but in order to encourage others to figure things out themselves, I've decided not to share those programs.

I hope you enjoy the tales!