Google Maps’s Moat — How far ahead of Apple Maps is Google Maps?
Google Maps has been improving for 20 years. The improvement in the data is phenomenal if you look at it over the past 10+ years. I suspect the edge cases will be solved slowly, like navigating to a shop inside a mall, a market stall, an ad-hoc gathering, a planned building, et cetera.
A bit old, but an excellent analysis of the most popular map service.
Google Maps’s Moat by Justin O’Beirne
What if the Public Understood How Money Works?
“I think there is an element of truth in the view that the superstition that the budget must be balanced at all times [is necessary]. Once it is debunked [that] takes away one of the bulwarks that every society must have against expenditure out of control. There must be discipline in the allocation of resources or you will have anarchistic chaos and inefficiency. And one of the functions of old fashioned religion was to scare people by sometimes what might be regarded as myths into behaving in a way that the long-run civilized life requires. We have taken away a belief in the intrinsic necessity of balancing the budget if not in every year, [then] in every short period of time. If Prime Minister Gladstone came back to life he would say “uh, oh what you have done” and James Buchanan argues in those terms. I have to say that I see merit in that view.”
What if the Public Understood How Money Works? in New Economic Perspectives
The Chinese Room Argument
The argument and thought-experiment now generally known as the Chinese Room Argument was first published in a 1980 article by American philosopher John Searle (1932– ). It has become one of the best-known arguments in recent philosophy. Searle imagines himself alone in a room following a computer program for responding to Chinese characters slipped under the door. Searle understands nothing of Chinese, and yet, by following the program for manipulating symbols and numerals just as a computer does, he sends appropriate strings of Chinese characters back out under the door, and this leads those outside to mistakenly suppose there is a Chinese speaker in the room.
The analogy is supposed to emphasize that AI proponents (and others) presuppose there's a fully materialistic explanation for consciousness. There's no wizard behind the curtain: it's all just deep neural networks ("boxes and books").
Kurzweil had a nice end-run around this problem. It doesn't matter whether the machine is sentient/intelligent/understands/etc. It matters whether the machine can convince humans that it is doing these things. There is of course still room for skepticism, as a researcher could uncover new, testable ways in which the machine isn't actually living up to our definition of any of these concepts. But if all that is left to decide the question is the way in which the thing doing the convincing happens to be implemented, then you're in the land of xenophobia to claim that as a determining factor.
The Chinese Room Argument in Stanford Encyclopedia of Philosophy
Dear reader, thank you for being with us. Do you think more people should read this issue?
Share this post on your social media or directly with your friends.
PS: You can buy me a coffee on Ko-fi. Try Refind.