This is a fun and mostly on-the-nail skewering of some predictions about the future of the internet...
http://idlewords.com/talks/web_design_first_100_years.htm
And some special ridicule is reserved for Elon Musk's hand-wringing regarding artificial intelligence.
The writer claims that those least worried about AI are those nearest to it. "If you talk to anyone who does serious work in artificial intelligence (and it's significant that the people most afraid of AI and nanotech have the least experience with it) they will tell you that progress is slow and linear, just like in other scientific fields."
That may be so, but it's also irrelevant. If it can make people a profit, it'll be used, regardless of how well it works.
After the above article appeared, this one promptly showed up...
http://www.wired.com/2015/07/hackers-remotely-kill-jeep-highway/
No AI involved, but the perfect example of the mayhem that could be created by software running amok. And you could see why it'd keep Musk awake at night, given he's making cars that are auto-updated wirelessly and which will be self-driving real soon now.
Robot cars are a product of AI research, so worrying about what AI might do is eminently sensible. And the economic value of having cars talking to the net is so great it's a given it'll be the default for autonomous vehicles. (Knowing what's around the corner has obvious benefits for safety and for avoiding congestion.) That's scary, whether AI's involved or not. And if AI is involved, this is a hive-mind version of it.
The essence of AI is that it's learned software, not programmed software. It can't be fully understood. It'll need to be well caged.
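To make the learned-vs-programmed distinction concrete, here's a toy sketch (plain Python, all names my own): the "programmed" version encodes a rule a human wrote down, while the "learned" version fits its own parameter from examples, so nobody ever wrote the rule it ends up using.

```python
def programmed_double(x):
    """A human decided the rule: output is twice the input."""
    return 2.0 * x

def learn_double(examples, steps=2000, lr=0.01):
    """Fit y = w * x by gradient descent on (x, y) pairs.
    The weight w is discovered from data, not specified by anyone."""
    w = 0.0
    for _ in range(steps):
        for x, y in examples:
            err = w * x - y
            w -= lr * err * x  # gradient of squared error w.r.t. w
    return w

examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = learn_double(examples)
# w converges toward 2.0, but only because the data said so:
# change the examples and the "rule" changes with no code edit.
```

Scale that single weight up to millions, and "reading the program" to understand its behaviour stops being an option.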
(More: 2015-08-04) Machine Learning: The High-Interest Credit Card of Technical Debt (PDF)
"Machine learning packages may often be treated as black boxes, resulting in large masses of 'glue code' or calibration layers that can lock in assumptions. Changes in the external world may make models or input signals change behavior in unintended ways, ratcheting up maintenance cost and the burden of any debt. Even monitoring that the system as a whole is operating as intended may be difficult without careful design."
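A hypothetical sketch of the glue-code failure mode the paper describes (all names here are invented, not from the paper): a thin calibration layer silently bakes in an assumption about the model's input, and the outside world is free to change that assumption without any code failing loudly.

```python
def black_box_model(features):
    """Stand-in for a packaged model we treat as opaque."""
    return sum(features) / len(features)

# Glue code written back when upstream sent ages in YEARS (0-100).
# That assumption lives only on this one line, untested.
def score_user(age):
    scaled = age / 100.0          # locked-in calibration
    return black_box_model([scaled])

print(score_user(30))  # sensible while the assumption holds

# If upstream later starts sending ages in days, score_user(10950)
# quietly returns a wildly out-of-range score with no error raised.
# That silent drift is the "ratcheting up maintenance cost" above.
```

Nothing crashes in either case, which is exactly why the paper says monitoring that the system as a whole is behaving as intended needs careful design.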
(More: 2015-08-06) Interview with Stephen Wolfram on AI and the future
"I think the notion that you can expect to understand how the engineering works … that's perhaps one of the things that people find disorienting about the current round of AI development is that 'you can expect to understand how it works' is definitely coming to an end."
It's often argued that "it's still programmed by people, so we're still in control". Not so. We've just chosen to be in control up until now.