Why AI and Machine Learning Now?

Why is adoption speeding up?

It is a commonly held belief that Artificial Intelligence (AI) started with ELIZA. It’s not true. Despite the fact that it could (and did!) convince many people it was human, it was never designed to be an AI. Although this article is chock-full of information that you need, it may be worth taking a couple of moments with one of the ELIZA demos available online to understand how it works. Since it emulates a psychotherapist, try discussing a real or fictional personal problem with it.
Back already? Interesting, wasn’t it? As you can see, it’s not hard to imagine how people came to believe that an actual person was speaking to them, which is the essence of the Turing Test.

The creator, Joseph Weizenbaum, insisted that it was not an AI, and was stunned by how quickly people fell for the idea that a real person was responding. He doggedly explained that it was just an algorithm that turned the question around and gave it back to you, the way an interviewing psychotherapist would. Even his secretary would ask him to leave the room so she could talk privately to her “friend.”
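If you’re curious how such a simple trick could be so convincing, here is a minimal sketch in Python of the turn-it-around algorithm Weizenbaum described. It is not his actual program (ELIZA was written in the 1960s in MAD-SLIP, with a far larger script of rules); the patterns and canned responses below are invented purely for illustration:

```python
import re

# Pronoun swaps used to turn the user's statement back around.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "myself": "yourself",
}

# (pattern, response template) pairs, checked in order. This is a tiny,
# invented stand-in for ELIZA's much larger script of keyword rules.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Please, go on."),
]

def reflect(fragment):
    """Swap first- and second-person words in a captured fragment."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(statement):
    text = statement.lower().strip(".!? ")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I am unhappy with my job."))
# -> How long have you been unhappy with your job?
```

A few dozen rules like these, applied relentlessly, are enough to keep a conversation going indefinitely without understanding a single word of it.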

Evolution

It’s also said (true or not) that ELIZA spurred on the AI industry. Real scientists saw how readily the emulation worked but sought something greater: a true emulation of a human, where the computer’s speed could be used to process real data, analyzing it faster than a mere person could.
Surely this would be the gateway to the future where a “device” could condense and consolidate all human knowledge and answer any question we cared to pose. It would be like creating the Oracle of Delphi—providing vital knowledge just by asking the right question.
Back at the dawn of the computer age, that wasn’t seen as ambitious. To the thinking person, it just seemed like the way things were headed. Technology was the answer to every problem back then.

Science writers of the day spoke of tremendous technical leaps forward, like this prediction about the incredibly advanced ENIAC computer:
“Where… the ENIAC is equipped with 18,000 vacuum tubes and weighs 30 tons, computers in the future may have 1,000 vacuum tubes and perhaps weigh just 1-1/2 tons.”
–Popular Mechanics magazine, March 1949, page 258
It is almost comical in retrospect that we now have computers smaller than a grain of sugar that are roughly as capable as an x86 chip from 1990. IBM’s creation from March of 2018, billed as the world’s smallest computer, was famously photographed sitting on a small pile of salt. Fitted on a tiny motherboard with a solar cell and a signaling device, it could be placed virtually any place you desired, such as the corner of a sheet of paper, making it into “smart paper,” and each one costs less than 10¢ to make. Done with your smart paper? Ball it up and throw it away, toss it in the shredder, or just “blank” it and use it over and over again.
With storage getting dot-sized, too, soon we’ll have enough space on such a device to sustain a completely independent AI—your personal buddy—presumably with access to all of the World Wide Web, and databases galore. Pretty soon each of us could possess a personal “Jarvis” like Tony Stark/Ironman…and who wouldn’t want that?

In the Beginning

Until recently, although computers were blindingly fast and storage was reasonably cheap, we hadn’t quite reached the level of technology required to sustain constructs as complex as Neural Nets (NN), and we’ll discuss that below.
First-person shooter (FPS) video games led the charge for online play with several fellow gamers; the 1999 game Unreal Tournament was among the best implementations.
What do you do if there aren’t enough players, however? The games created AI players, or “bots,” that behaved in a very human-like manner. These bots would instantly drop out of the game if a real person came along and asked to join. Based on behavior alone, the bots were difficult to distinguish from real players except by name.

We Have the Power

Let’s imagine an AI Medical Network as an example. Through interviews, self-description of style or technique, demonstrations, or videos of work in progress, an AI could theoretically learn the entire body of knowledge relating to medical practice for one doctor.
If you did that again with two more doctors, you would have three different recordings of vast bodies of knowledge. For a small sample like that, it would be possible to put all that information in one database. At some point, though, as you keep interviewing doctors, that database is going to reach a speed or size limit where adding more information becomes problematic.
So you start a parallel database, and you repeat this every time a database reaches its limit. A single AI may have access to all of that information, but since it is spread across separate databases, a particular bit of knowledge in one database is very difficult to relate to a bit of knowledge in another: the information is not contiguous and doesn’t share the same filing system.
Think of it this way: two of the doctors in database “A” know about mumps and rubella. A doctor over in database “J” knows about a nasal spray that is toxic to a proto-bacterium that creates a vulnerability pathway for several diseases. Without a filing system shared by the databases, we might never learn that, in the event of an outbreak, you could be completely protected with a single dose of nasal spray. Not good for humanity.
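Here is a toy Python sketch of that filing-system problem. The doctors, databases, and facts are all hypothetical; the point is that a shared index built on a common vocabulary is what lets knowledge in one database be related to knowledge in another:

```python
# Toy illustration of the missing "filing system." The doctors, shards,
# and facts below are all hypothetical.

database_a = {"dr_adams": ["mumps", "rubella"],
              "dr_brown": ["mumps", "rubella"]}
database_j = {"dr_jones": ["nasal spray toxic to proto-bacterium",
                           "proto-bacterium enables mumps outbreaks"]}

def build_index(shards):
    """Map every term to the (database, doctor) pairs that mention it."""
    index = {}
    for shard_name, shard in shards.items():
        for doctor, facts in shard.items():
            for fact in facts:
                for term in fact.lower().split():
                    index.setdefault(term, set()).add((shard_name, doctor))
    return index

# Each database alone knows nothing of the other's contents, but a
# shared index keyed on common terms connects them:
index = build_index({"A": database_a, "J": database_j})
print(index["mumps"])
# -> {('A', 'dr_adams'), ('A', 'dr_brown'), ('J', 'dr_jones')}
```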
Neural Networks

The goal for early neural networks was to connect each experience to thousands of other experiences. By finding subtle relationships and creating “inferences,” the network could “learn.” Each result, whether positive, negative, or neutral, was saved as an “experience.”
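As a rough sketch of that idea, here is a single perceptron in Python: the textbook ancestor of modern neural networks, not any specific historical system. Notice that each training step produces a positive, negative, or neutral error, and the connection weights are nudged accordingly:

```python
# A single perceptron, the textbook ancestor of modern neural networks.
# After each "experience" (example) it computes an error that is
# positive, negative, or neutral, and nudges its weights accordingly.

def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of (inputs, target) pairs with target in {0, 1}."""
    n = len(examples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            output = 1 if activation > 0 else 0
            error = target - output           # +1, -1, or 0 (neutral)
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Learn the logical AND function from four labeled "experiences":
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train_perceptron(data)
print(weights, bias)  # weights and bias that separate AND correctly
```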

At first, these AIs were very limited and very specific about their collections of knowledge. A successful model was IBM’s Watson computer.
It was a significant step in that direction, programmed with every scrap of knowledge the development team could muster about the subject matter of Jeopardy! questions, and especially about how the questions were worded, with puns, double entendres, or deliberately misleading clues. Its NN, crude only in comparison to what we expect in the future, understood the relationship between the questions and answers.
It took thousands and thousands of hours of programming and testing, along with millions and millions of dollars, but they did create an AI capable of beating the best human players. It is the story of Deep Blue all over again, when IBM spared no expense to build a computer that could beat the world chess champion.

The ultimate (gaming) challenge was to build a machine that could win at the ancient game of Go, which has vastly more possible moves than chess. Google’s DeepMind did this with AlphaGo.

What is the difference?

The difference between the three machines was that Deep Blue ran on raw computing power, looking at every single move available, plus each consequent move, as far ahead as it could calculate. Watson worked by having all the necessary data programmed in so it would be available instantly when needed; it found relationships based on the words in the question and the ways they related to each other.
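For a feel of the raw-computing-power approach, here is plain minimax in Python, applied to a trivially small stick-taking game so the whole tree can be searched. Deep Blue’s real chess search was vastly deeper and more specialized, but the pattern of examining every move, and every reply to every move, is the same:

```python
# Plain minimax on a tiny game (Nim: take 1-3 sticks; whoever takes the
# last stick wins). It examines every available move, then every reply,
# all the way to the end of the game.

def minimax(sticks, my_turn):
    if sticks == 0:
        # The previous player took the last stick and won.
        return -1 if my_turn else 1
    scores = [minimax(sticks - take, not my_turn)
              for take in (1, 2, 3) if take <= sticks]
    return max(scores) if my_turn else min(scores)

def best_move(sticks):
    """Exhaustively score every legal move and pick the best one."""
    return max((take for take in (1, 2, 3) if take <= sticks),
               key=lambda take: minimax(sticks - take, my_turn=False))

print(best_move(10))  # -> 2 (leave the opponent a multiple of 4)
```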
AlphaGo, on the other hand, learns on its own via something called Machine Learning (ML), or more specifically, a sub-field called Deep Learning (DL). It doesn’t require that kind of pre-programming.
If you have ever played the game Myst, you know the player is dropped into the middle of an idyllic scene, never meets any other characters, and progresses by figuring out that there are puzzles to solve. There are no fights, guns, or swordplay, just simple, relentless exploring and puzzle-solving: a complete departure from regular computer games.
Humans know what a door is, or a path, or a light, and what they are all useful for, so they have an intrinsic advantage. An AI like AlphaGo could play that game, and if you left it overnight, playing over and over again, perhaps with thousands of instances running simultaneously, by the time you got back in the morning it might well perform better than any human could.

It doesn’t rely on rules, except to try everything in every possible variation and to record each event as a success, failure, or neutral result. It can then take what it learns is successful in one situation and investigate whether the same strategy works elsewhere. The learning curve is exponential: once it has isolated a few strategies that are even occasionally successful, it will apply those more and more often.
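Here is a toy Python sketch of that try-everything-and-record loop, using a simple epsilon-greedy learner. It is nothing like AlphaGo’s actual deep reinforcement learning, and the strategies and hidden success rates below are invented, but it shows how occasionally successful strategies come to dominate:

```python
import random

# Toy trial-and-error learner: an epsilon-greedy bandit. Far simpler
# than AlphaGo's deep reinforcement learning, but the loop is the same:
# try everything, record success or failure, lean on what works.

def learn(strategies, try_strategy, rounds=10_000, epsilon=0.1):
    """try_strategy(s) returns 1 (success), 0 (neutral), or -1 (failure)."""
    value = {s: 0.0 for s in strategies}   # running average reward
    count = {s: 0 for s in strategies}
    for _ in range(rounds):
        if random.random() < epsilon:      # keep exploring occasionally
            s = random.choice(strategies)
        else:                              # otherwise exploit the best so far
            s = max(strategies, key=value.get)
        count[s] += 1
        value[s] += (try_strategy(s) - value[s]) / count[s]  # update mean
    return value

# Hypothetical environment: each strategy succeeds with a hidden probability.
hidden = {"rush": 0.2, "fortify": 0.5, "flank": 0.8}
learned = learn(list(hidden), lambda s: 1 if random.random() < hidden[s] else -1)
print(max(learned, key=learned.get))  # almost always -> "flank"
```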

The Takeaway

Once we hit that peak, with AI using ML, and more specifically DL, to understand the world, the AIs will acquire the ability to use those same techniques to enhance their own learning process. (Don’t you love letters?)
What all that alphabet soup boils down to is machines that can teach themselves to learn. And once they turn their full computational power to the problem, they’ll be able to teach themselves faster than humans ever could.

All that will remain for us is to connect all the databases we have assembled, all of human knowledge, and let the AI start to integrate it all for us. There’s nothing to stop us from continuing to explore and reveal new information.
Meanwhile, the AIs can sort and collate all our information, revealing all the subtler and invisible relationships that we haven’t yet discovered. Right now, for example, we may already have the knowledge to build matter transmitters like the “transporters” of Star Trek. The information is just too widely separated, into too many different fields, for us to pull it all together.

Or, by combining the knowledge of a botanist, an oceanographer, and a professional pool cleaner in Sausalito, an AI might create an immortality serum. It wouldn’t surprise me one bit if somewhere in the world there is a person, or a field of study, holding a tidbit of knowledge that, when combined with some well-understood field of science, will tell us how to travel faster than the speed of light and visit the stars.

A wise man once said, “Artificial Intelligence is the last thing that humanity will ever have to create.” That might very well be true, since once an AI is adept at training itself to learn, it could experiment and assimilate data faster than humans ever could.

How else do you expect self-driving cars to faultlessly depart and arrive at their destinations, with no accidents or deaths, ever? The biggest danger to future self-driving car users will be the radical human who elects to operate their own vehicle in the midst of the syncopation and ballet of AI-operated vehicles.
One day you’ll be able to pose any question to a general planet-wide AI and receive a definitive answer. Back in 1956, Isaac Asimov imagined what that might be like in a delightful science fiction short story called “The Last Question,” which is well worth the few minutes it takes to read. Enjoy!