I have realized a method one can use to multiply two numbers faster than by using addition alone, by utilizing bitshifts in addition to additions. It uses the same idea as the fast power algorithm (exponentiation by squaring), but with operators of one degree lower: instead of powers and multiplication, we use multiplication and addition.
Now, when multiplying by a power of two in a binary environment, we are simply performing a bitshift to the left by the exponent of that power of two, so this is an efficient operation. To multiply by a number which is not a power of two, we can go over the bits of the multiplier and, for each bit which is equal to one, add up the other number shifted left by that bit's position.
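To make this concrete, here is a minimal sketch in Python of the procedure as I understand it (the function name shift_add_multiply is mine, and it assumes non-negative integers):

```python
def shift_add_multiply(a, b):
    """Multiply two non-negative integers using only bitshifts and additions."""
    result = 0
    while b:
        if b & 1:          # this bit of the multiplier is one:
            result += a    #   add the (shifted) copy of the other number
        a <<= 1            # shift left, i.e. double, for the next bit position
        b >>= 1            # move on to the next bit of the multiplier
    return result
```

Shifting a as we walk the bits of b plays the role of the running powers of two in the description above.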
I think it is best to describe this with an example:
Let us say we're multiplying 5 by 3, that is 0101 by 0011. Now, we go over the multiplier and find that 3 has bits set in slots 0 and 1 (enumerated from the right, starting at the least significant bit), so we find that
5*3 = (0101*0011) = (0101 << 1) + (0101 << 0) = (1010 + 0101) = 1111 = 15.
And all this pretty efficiently!
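Running the worked example through that idea, as a standalone check using nothing but Python's shift and addition operators:

```python
# 3 = 0b0011 has bits set in slots 0 and 1,
# so 5*3 is the sum of 5 shifted left by 1 and by 0:
print((0b0101 << 1) + (0b0101 << 0))  # 0b1010 + 0b0101 = 0b1111 = 15
```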
We can use this to efficiently compute multiplication, and the same idea one level up efficiently computes powers. Might we also use it one level further down, to efficiently compute addition from successors? Or even for higher-order operators? Probably!
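For comparison, the fast power idea referred to above is exponentiation by squaring, which walks over the bits of the exponent in exactly the same way, one operator level up; a sketch (the function name square_and_multiply is mine):

```python
def square_and_multiply(base, exponent):
    """Raise base to a non-negative integer exponent, using multiplication
    where the multiplication sketch above used addition."""
    result = 1
    while exponent:
        if exponent & 1:
            result *= base   # this bit of the exponent contributes a factor
        base *= base         # square for the next bit position
        exponent >>= 1
    return result

print(square_and_multiply(5, 3))  # 125
```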
Update: By efficiently, I mean that it minimizes the number of additions that need to be done, in favor of the (I believe) computationally simpler bitshift operation, reducing the additions to at most floor(log_2(N)), where N is the smaller of the two numbers being multiplied, if performed correctly.
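One way to see that bound: the method performs one addition per set bit of the multiplier beyond the first, and a number N has at most floor(log_2(N)) + 1 set bits. A small check of that count (the helper additions_needed is mine):

```python
def additions_needed(n):
    """Additions performed by the shift-and-add method when the multiplier is n:
    one per set bit beyond the first."""
    return max(bin(n).count("1") - 1, 0)

# n.bit_length() - 1 equals floor(log_2(n)) for positive n.
for n in range(1, 1024):
    assert additions_needed(n) <= n.bit_length() - 1
```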