Back in the 1980s there was a bubble in the technology industry that was to prove a good model for the dot-com bubble that would follow a couple of decades later. The 1980s bubble was driven by the excitement and promise of Artificial Intelligence (AI). Thanks to some rather impressive but narrowly focused programs, there was a feeling that the age of thinking machines was upon us. Investors and technologists alike rushed to get in on the act; a lot of money was spent, made and lost. People thought that AI technology would change the world[1], but when it didn't change the world overnight, much of the interest faded away. Of course it did not help that, in the excitement, a lot of money was spent on functional but rather basic software, like rudimentary Expert System shells.
The parallels with the dot-com bubble are obvious: a new technology comes along that seems like it will change the way we live, work and play, everyone gets overexcited, but it all comes to naught. Except... that it is not true! We know the rest of the story: the boom-bust cycle was just a capital markets phenomenon and belies the fact that a real revolution was taking place. Today we have internet companies with market capitalisations in the tens of billions, so clearly there was value there. Furthermore, that is just the direct value to companies whose businesses are focused on the internet. There is another fundamental shift from a technology perspective that is less conspicuous but more pervasive. Most software today has to be internet enabled; it is either entirely web-based or connects to web services. Users now expect the richer experience enabled by the web, and applications that sit entirely on the user's desktop with no gateway to the outside world seem dated and quaint.
The web is so integral that almost every working programmer must be at least aware of the technologies underlying it. I would argue that the same can be said about AI, though its importance is a little more subtle. The AI in most systems tends to be obscured: partly because it is not directly visible from the user interface, partly due to its nature[2], and partly because it usually does not make up a large part of the system. However, a lot of the world's most interesting software is only possible because of AI. AI systems are in use today in fields such as flight control, manufacturing robots, search engines, fraud detection, medical diagnosis and games, to name but a few. It is like the raisins in a loaf of raisin bread: they may not occupy a large part of the volume of the loaf, but you cannot have raisin bread without raisins!
Now you say to me: but AI has failed to make progress over the past few decades; where are those thinking machines and robots, show me the robots! Now we are getting to the crux of the matter, for AI can be about understanding and emulating human intelligence, but it does not have to be! Indeed, my thesis is that there are two closely related[3] but nevertheless distinct sides to AI: the Science of AI and the Business of AI.
From the science perspective of trying to understand and emulate human cognition there may seem to be very little progress being made, but that is a misconception[4] which unfortunately I do not have time to discuss in detail here. At any rate, this aspect of AI is often referred to as Artificial General Intelligence, but I like to think of it as aI: little "a", big "I". That is because there is not that much that is artificial about this endeavour; the ultimate goal is to understand cognition to the point where we can create an artificial agent that thinks at human level and beyond.
The business of AI is what I think people interested in creating smart applications should be aware of. In contrast to the science of aI, the goal here is to build applications and machines that appear to be intelligent. Since in this case we only care about intelligence in the context of the specific application being built, this type of AI is typically referred to as Narrow Artificial Intelligence. I like to think of it, however, as Ai: big "A", little "i". The applications and machines only need to appear intelligent, so it is definitely artificial. Also, the means by which this is achieved is not that important; a cheap trick that works is as worthy as an actual intelligent agent. I hope I did not give you the impression that I am down on Ai; quite the opposite! The techniques of AI used in narrow contexts are what enable the applications I mentioned earlier, and many more besides. Without an understanding of basic AI algorithms, you may find yourself spending a lot of time creating unreliable and hard-to-maintain programs that could be expressed in much simpler ways with standard AI techniques.
OK, so that was a very long-winded way to say that working programmers should know something about AI. After all that, there had better be a pay-off, and there is. The next few instalments will be introductions to what I consider the most useful and generally applicable Ai techniques: stuff that you are guaranteed to find useful because it has been useful thousands of times in the past. Stay tuned...
[1] And indeed it has, though perhaps not in the way people imagined back then!
[2] When you meet a really smart person, you might exclaim "Wow! That girl is really smart"; I doubt you have ever thought "Wow! That girl has a smart brain".
[3] Especially in terms of the techniques used, the research conducted and indeed the personnel involved.
[4] This is an argument better made by the people working on Artificial General Intelligence: http://singinst.org; http://www.idsia.ch
Sunday, September 26, 2010