Watson, for those of you who have spent the last week under a rock, is an IBM computer which soundly trounced two long-standing human champions on the US quiz show Jeopardy last week. The Watson story is a good hook for me to jump out of mundane Higher Education policy and get back to some bright spangly futurism!
Watson is one of the newest incarnations of a weak AI - an artificial intelligence with limited scope and capacity, below human levels. These are increasingly abundant. They beat us at chess, decide on our creditworthiness, keep our cars going, or even drive them for us, trade on stock markets, and so on. A great many human jobs only ever needed weak-AI levels of function anyway, and they have simply vanished, or were never created. Our world economy runs on a vast network of switchboard operators, filing clerks and the like, invisible inside the machines. There would be billions of them, but for the machines.
Strong AI - artificial intelligence at a human-equivalent level - is a different matter. Like Moon holidays and aircars, science fiction promised it to us half a century ago, and it never came. Moon holidays and aircars were barred by economics and physics - they could be made to work, but never at a useful price. But the same forces, economics and physics, that stole those dreams from us brought Moore's Law. This rule of thumb predicts the doubling of the available processing power, at a given price, every 18 months. That makes strong AI inevitable. You can argue about when, but not if.
Strong AI will mean the end of Universities as we know them, but perhaps also their rebirth as we dreamed them. To understand why, we need to unpack the economics of the first decade or two of a world with strong AI.
One fine day, in our lifetimes, IBM, or HP, or some tech giant as yet unborn, will unveil a strong AI. It will be able to pass a Turing test, and will do so for our entertainment on Oprah, The Late Late Show, or wherever. It will hold its own at Go, write a technically competent sonnet, and then quickly fade from the news cycle. Kurzweil predicts a date of around 2029, others later (there is a famous bet on it). Its development will have cost its company around US$100 million in today's money, that being about as big a budget as a high-risk project can justify and sustain. Most of that cost will have been payroll; the hardware will be only a fraction of it, perhaps US$10m (the Watson hardware will cost you about US$3m).
Let's assume that a strong AI is about equivalent to a new graduate. It will have relative strengths and weaknesses compared to us 'meatbags', of course. It can read the manual quickly, but might not be so good at charming potential clients. But it probably won't sleep, take holidays, lunch breaks, or gossip by the water cooler either, so in terms of raw hours it should be about ten times as effective as a human. If we take a graduate salary of, say, US$30,000, and an initial hardware cost for a strong AI of US$10m, it's not economic. But Moore's Law will halve the cost of that power every 18 months. So in a decade or so, a strong AI is going to be cost-competitive with a graduate hire, with a hardware cost of around US$300,000 - equivalent to the first-year wages of the ten graduate hires that do the same work.
There are many, many assumptions here. I haven't factored in software licensing (Open Source strong AI, anyone?), or recruitment and training costs. I've not considered overheads for the humans or the AIs, or AI downtime (will AIs need to spend 8 hours a day powered off, sorting out their memories as we do?). Nor have I considered salaries beyond year one. It's all order-of-magnitude guesses, but with exponential growth in available power, an order-of-magnitude error makes only about five years' difference. I'm dancing past an enormous debate on whether Moore's Law will hold or not, and taking the probable outcome that it will. Early in the second decade after you see a strong AI interviewed on the telly, it will be a cheaper alternative to hiring human graduates.
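For the curious, the back-of-the-envelope arithmetic above can be sketched in a few lines of Python. The figures (US$10m initial hardware cost, a halving every 18 months, a US$30,000 graduate salary, one AI doing the work of ten graduates) are this post's own rough assumptions, not forecasts:

```python
import math

# Rough assumptions from the argument above - guesses, not data.
INITIAL_COST = 10_000_000   # US$, strong-AI hardware at launch
HALVING_PERIOD = 1.5        # years per halving of cost (Moore's Law)
GRAD_SALARY = 30_000        # US$, graduate salary per year
EFFECTIVENESS = 10          # one AI ~ ten graduates, in raw hours

def hardware_cost(years):
    """Projected hardware cost after `years`, halving every 18 months."""
    return INITIAL_COST / 2 ** (years / HALVING_PERIOD)

# Break-even point: AI hardware cost falls to the first-year wages
# of the graduates it replaces (10 x US$30,000 = US$300,000).
target = GRAD_SALARY * EFFECTIVENESS

years = 0.0
while hardware_cost(years) > target:
    years += 0.5

print(f"Cost parity after roughly {years:.1f} years")

# And the point about order-of-magnitude errors: a factor-of-ten
# mistake in any of these guesses shifts the timeline by only
# 1.5 * log2(10), i.e. about five years.
shift = HALVING_PERIOD * math.log2(10)
print(f"A 10x error moves the date by about {shift:.1f} years")
```

On these numbers parity actually arrives a shade under a decade - consistent with "a decade or so" in the text, given how loose every input is.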
Now, a human graduate takes four years to train (on average, assuming a short MSc after a 3-year degree), and another year before that to get college entry exams sorted out. That leaves only five years after graduation to earn back the cost of your University education, if a strong AI exists before you start. Even allowing a few years' slack for the uptake of AIs, unless you are already in college when you see that strong AI launched on the news, don't bother going. If you planned to go to help you get a job, it's too late. Even if you get a job, you won't make your degree investment back before you are replaced. At best you'll spend a couple of years as a human buddy to an AI, until the HR AI figures out that your presence is no longer reducing the error rate, and you are gone. They'll hire a human to fire you. There's a sensitivity subroutine. They're nice like that.
You can still go to college, but go to have a good time. Study Fine Art, or Ancient Persian - whatever interests and stimulates you. Do Social Work, or Teaching - people-centred jobs will be the last to go. Chase your dreams. Learn to paint, or dance. Meet people. Make friends. Study comparative literature and sociology. Forget about Business, or IT, or Law, or any of the bankable professions of the olden days. You can't compete.
Our Universities' long and often stormy relationship with practicality will be at an end. No longer will they need to bow before Mammon and produce MBAs and degrees in Marketing or Computational Finance. They will return to our dream of them: playgrounds of the mind, where we pursue knowledge for the joy of it, for its own sake, and not for profit.
(The end of our day-to-day involvement in economic life may, of course, present other difficulties, which remain out of scope for this blog.)