
Grist for the mill - Age of Computers or Age of Unemployment


As I See It: Old Hephaestus Had A Bot, A.I.A.I.O.

Published: April 28, 2014

by Victor Rozek

In 1956, Nathaniel Rochester approached the Rockefeller Foundation to apply for a princely grant of $7,000. He said he wanted to throw a little shindig at Dartmouth College, where the minds of mathematicians and computer scientists could run free exploring what must have seemed like a fanciful and distant notion at the time--the creation of intelligent machines.

He probably would have been dismissed outright, but Rochester was no garden-variety, star-struck futurist. He also happened to be the chief engineer of the IBM 701--IBM's first mass-produced, general-purpose computer--and therefore had the requisite gravitas to pacify the normally conservative moneymen.

By all accounts the conference was a stirring success, albeit with one huge unintended consequence. Rochester returned to work bursting with exciting news. Unfortunately it was full of implications that frightened IBM's customers right out of their wingtips. It seems that the conferees had announced, with stunning optimism, that within 20 years "machines will be capable of doing any work a man can do."

It was the first time the term "Artificial Intelligence" (coined at the conference by computer scientist John McCarthy) entered the public consciousness, and it arrived with all the welcome of a foreclosure. Suddenly owning a computer didn't seem like such a swell idea after all. Orders for the 701 dried up as the threat of displacement became personal. No one wanted to hasten his own demise and end up being supplanted by a bank of blinking lights.

The financial impact was grave enough that IBM announced it would suspend further research into Artificial Intelligence, and sent forth its sales team with a carefully crafted message designed to assuage the fears of jittery clients: Not to worry; "computers can only do what we program them to do."

For the next 50 years that bromide became an article of faith among both users and developers. Machines were incapable of independent thought, and bad robots were the stuff of science fiction. Of course that didn't prevent millions of people from being displaced, but at least unemployment was a byproduct of our design, not the will of the machines.

But all of that has changed, according to Jerry Kaplan of Stanford, scientist, futurist, and entrepreneur for all seasons. Kaplan cites three recent developments that are transforming AI. First, there has been a dramatic increase in computing power. Kaplan notes that when IBM's Watson spanked its human opponents on Jeopardy!, it did so armed with 4 terabytes of memory. That same memory can now be purchased for $150.

Next, computers have been outfitted with an assortment of sophisticated sensors that allow them to collect information on--and interact with--the larger world. Collecting data supports decision-making, and the results of those decisions shape the experience from which machines can learn, freeing computers from the limitations of direct software instruction. If the object is to create a computer that can play chess or drive a car, it has no choice but to learn from experience. You're only allowed to back through the garage door once.
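To make that feedback loop concrete, here is a minimal sketch--mine, not Kaplan's, and not any system mentioned above--of a program that improves its choices purely from the results of its own decisions. Nothing in the code tells it which option is best; the option names and success rates are invented for illustration.

import random

ACTIONS = ["route_a", "route_b", "route_c"]                            # hypothetical choices
TRUE_SUCCESS_RATE = {"route_a": 0.3, "route_b": 0.7, "route_c": 0.5}   # hidden from the learner

estimates = {a: 0.0 for a in ACTIONS}   # learned value of each action
counts = {a: 0 for a in ACTIONS}        # how often each action has been tried
EPSILON = 0.1                           # fraction of the time spent exploring at random

def sense_outcome(action):
    # Stand-in for a sensor reading: did the decision work out this time?
    return 1.0 if random.random() < TRUE_SUCCESS_RATE[action] else 0.0

for step in range(10_000):
    # Decide: usually exploit what experience suggests, occasionally explore.
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: estimates[a])

    # Act, observe the result, and fold the feedback back into the estimate.
    reward = sense_outcome(action)
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)  # the estimate for route_b should drift toward roughly 0.7

The point of the toy is only this: the "knowledge" sitting in those estimates was never programmed in. It accumulated from trial, error, and observation.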

Finally, the Internet gives computers access to the accumulated knowledge of humankind; in other words, a limitless supply of learning materials. That combination of factors, warns Kaplan, portends unprecedented displacement for the workforce. He cites a recent study that predicts 47 percent of today's jobs will be wholly automated within the next 10 years. And that includes white-collar jobs. The bold predictions of the Dartmouth Conference may, at long last, be coming true.

But the ability to learn is a far cry from consciousness, says Kaplan. He makes a distinction between Strong AI and Weak AI. He characterizes Strong AI as the stuff of pixie dust and science fiction, whose worst scenarios depict malevolent machines turning on their makers. Kaplan sees "absolutely no indication" that computers will ever possess consciousness. He is a proponent of so-called Weak AI, which he describes as an engineering approach to solving specific problems like navigation or nuclear fuel rod handling. "The proof," says Kaplan, "is in the processing."

Nonetheless, Kaplan believes that computers will develop the skill to manipulate us, even without conscious intention. They will study our habits and preferences, and learn to react to our micro-expressions, providing an insight into our experience without the accompanying blame or judgment common to human interactions. Computers might also, for example, discover that nagging will get us to exercise, or that compliments spur us to work harder. And while computer behaviors will not be driven by conscious deliberation, it may be difficult to tell the difference between learning and cognizance.

Dutch computer scientist and winner of the Turing Award, Edsger Dijkstra, offered this insightful analogy: "The question of whether machines can think is about as relevant as the question of whether submarines can swim." No matter what we call it or how it is achieved, the function will essentially be the same. The threat, argues Kaplan, will not come directly from the machines, but from our tendency to include them in the circle of humanity.

Although learning machines are thought of as contemporary achievements, in fact their education began a year before the Dartmouth Conference. In 1955, another IBMer, Arthur Samuel, wrote what is arguably the first learning program, a remarkable piece of software that played checkers and learned enough to challenge skilled amateurs.

But Western fascination with "living" machines dates back to Greek mythology. Hephaestus, son of Zeus and Hera, was the weapons-maker to the gods. He had his own palace on Mt. Olympus, where 20 bellows worked at his bidding, tended by automatons he had forged from metal.

Bridge across the centuries to Al-Jazari, the Mesopotamian inventor and engineer who, early in the 13th century, created a programmable orchestra of mechanical musicians. On to the 17th century, when Pascal invented the first digital calculating machine. Then to Mary Shelley, who eerily foresaw the ethical perils of creating sentient life in Frankenstein. By the 19th century Charles Babbage and Ada Lovelace had combined their genius to design a programmable calculating machine; and a century after that Konrad Zuse climbed on their shoulders to produce the first programmable computers. The dream of conscious machines was alive and well and hurtling headlong into the limitless possibilities of the computer age.

Which is how we got from Hephaestus, weapon maker, to U.S. Army, weapon user. Meet Sgt. Star, the chatbot developed by the Army to recruit kids who think war is just another interactive game. It has 835 responses (which are constantly updated) to frequently asked questions, and it answers about 1,550 inquiries a day. According to government documents, this chatbot technology was originally used by authorities to "engage pedophiles and terrorists online." Charming. But what Sgt. Star lacks in charm he makes up for with guile. Predictably, he's a little vague about the realities of permanent disability and death.

An argument can be made that war is the ultimate expression of artificial intelligence, and you have to question the desirability of a recruit who was convinced to join up by an avatar. But who knows, maybe a new generation of robots will allow the kids to sit out the next conflict. Now, wouldn't that be intelligent?

 

