Where should you look for Smart Machines?
IBM, Google, Facebook, Apple, Microsoft? Who or what is driving the Artificial Intelligence (AI) field right now? Is it a company? Is it a field? Is it a technique? You'll encounter terms like Bayesian Neural Nets, Robotics, Deep Reinforcement Learning, and Transfer Learning. But what is truly important?
We all know that IBM created Deep Blue (the chess-playing computer that took one of two matches against reigning champion Garry Kasparov), and went on to create Watson, the Jeopardy!-playing computer that beat world champion Ken Jennings. But we also know that those selfsame computers were pretty well useless for anything else at that point in their development. IBM poured millions of dollars and thousands of hours of research into building Watson strictly for research purposes, and ended up with precisely what it set out to design: the world's best Jeopardy!-playing computer.
Since then, IBM has taken what it learned and turned it into a formidable AI system that is available online for anyone to use. Arguably, Watson is the world's most powerful natural-language computer, so if you have data and want to derive insights from it, Watson can do that for you.
Google has been working with AI for many years, specifically in developing algorithms to drive its web crawlers and to sort the data they collect. It may surprise you to learn that despite Google having indexed over 35 trillion web pages, that covers only about 4% of the entire web. AI will help index more content, and the more information we have available, the easier it is to discover associations and relationships.
Of course, that's not Google's only project. Whether from the original company or its related (but separate) parent, Alphabet, it continues in-depth research into Artificial Intelligence. Its most significant current AI project is Google Brain, combining systems engineering, open-ended machine learning, and computing resources that can only be described as Google-scale.
What is Facebook doing on this list? Its FAIR (Facebook Artificial Intelligence Research) project is extensive. ONNX (Open Neural Network Exchange) was released as a beta in September 2017, and recently (December 6th, 2017), in partnership with AWS (Amazon Web Services) and Microsoft, the first official version 1.0 was made available.
The goal of ONNX is to create a flexible open standard resulting in an AI ecosystem that allows the smooth transfer of models and the ability to utilize different tools. Since the September launch, many chip manufacturers have climbed on board, including IBM, AMD, Intel, Huawei, ARM, NVIDIA, and Qualcomm.
Most people think of Apple as an innovative technological giant, yet even though Steve Jobs has retired with an unassailable degree of permanence, the company still hasn't budged philosophically. Come Hell or high water, Apple is not going to "play nice" with any competing system; it develops in secret and creates proprietary systems.
Even a card-carrying Apple aficionado with the latest iPhone doesn't want to work there if they're an AI researcher. The overall success of AI depends on interoperability, publishing results, and sharing data with the AI community, none of which is permitted by Apple's corporate culture. Nothing goes out the door unless it's a "finished product."
In the last two years, Apple has acquired four AI startups. It is trying to buy enough AI resources to “share with itself” but still stay proprietary. Apple will struggle until it democratizes its efforts.
Another contender for AI implementation leadership is Wildfire Force, which provides consulting services for AI, Data Science, and Big Data. Its focus is helping users and developers create and enhance strategies for deep learning, predictive analytics, machine learning, and, of course, AI.
They provide a variety of services to the end user, including Fraud Detection, Asset Planning, customer-facing Chatbots, Big Data Analysis technologies, Cloud Services, Time-Series Analytics, and more.
This is another entry from Google's Alphabet family. DeepMind, formerly a purely British company before being acquired by Google, is arguably the research leader in Artificial Intelligence: whenever it produces a new research paper, that paper automatically heads to the top of Hacker News and Reddit's Machine Learning page, and you can't garner much more respect than that.
Its cadre of developers includes User Experience (UX) designers, who create user tools to aid in research; software engineers, who build infrastructure and traditional tooling; and researchers from a wide variety of fields, including neuroscience, ecology, and many more. Without an understanding of how other disciplines work, AIs will never be able to understand our world in a human way.
OpenAI introduced a software platform named Universe. It allows an AI to examine a screen (in batches of pixels) to interpret what is going on, to move the pointer as if it were operating a mouse, and to type on a virtual keyboard. This is a superb training aid for growing an AI's general intelligence, letting agents interact with applications, websites, and games just as a human would.
Playing games is one of the best tools for teaching AIs to cope with unreliable learning data. Something that seems logical may not always produce the same result, so the machine learns to be versatile in its “thinking.”
OpenAI also created Gym, a set of tools for developing Reinforcement Learning (RL) algorithms. Gym also allows different algorithms crafted for the same purpose to be compared so their efficacy can be judged. These are two of the tools most favored by AI developers. OpenAI's goal is to develop an AI agent that can take what it has learned in one situation and apply it successfully to a new one.
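What makes Gym useful for comparing algorithms is that every environment exposes the same tiny interface: `reset()` returns an initial observation, and `step(action)` returns the next observation, a reward, and a done flag. As a minimal sketch of that loop, here is a toy hand-rolled environment and a tabular Q-learning agent; the `Corridor` environment and all of its parameters are illustrative inventions, not part of Gym itself:

```python
import random

# Toy stand-in for a Gym-style environment: a 5-cell corridor where the
# agent starts at cell 0 and earns a reward for reaching cell 4.
class Corridor:
    def reset(self):
        self.pos = 0
        return self.pos                       # initial observation

    def step(self, action):                   # action: 0 = left, 1 = right
        self.pos = max(0, min(4, self.pos + (1 if action == 1 else -1)))
        done = self.pos == 4
        reward = 1.0 if done else 0.0
        return self.pos, reward, done         # (real Gym adds an info dict)

# Tabular Q-learning: this same agent loop would run against any
# environment exposing reset()/step().
env = Corridor()
q = [[0.0, 0.0] for _ in range(5)]            # Q-value per (state, action)
alpha, gamma, epsilon = 0.5, 0.9, 0.2

random.seed(0)
for episode in range(300):
    state, done = env.reset(), False
    while not done:
        # Epsilon-greedy action selection.
        action = (random.randrange(2) if random.random() < epsilon
                  else max((0, 1), key=lambda a: q[state][a]))
        nxt, reward, done = env.step(action)
        # Standard Q-learning update toward the bootstrapped target.
        q[state][action] += alpha * (reward + gamma * max(q[nxt])
                                     - q[state][action])
        state = nxt

# The learned greedy policy heads right in every non-terminal state.
policy = [max((0, 1), key=lambda a: q[s][a]) for s in range(5)]
print(policy[:4])                             # → [1, 1, 1, 1]
```

Because the agent only touches `reset()` and `step()`, swapping in a different environment (or a different algorithm) changes nothing else, which is exactly the property that makes side-by-side benchmarking in Gym practical.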
Microsoft used to be a prestigious place to seek an AI research position, but its focus on its own Cognitive Toolkit (CNTK), a Deep Learning framework, has impeded its progress. Deep Learning research is nowadays done almost exclusively on Linux platforms, which has allowed Microsoft's star to fade somewhat compared to frameworks such as PyTorch, Chainer, and TensorFlow.
Microsoft Research does have an impressive list of well-credentialed AI experts among its ranks, but since most Deep Learning development is the province of Ph.D. students, it is lagging behind other research organizations.
Using AI at LinkedIn (now owned by Microsoft) is vital to connecting the right content to the right user just when it is needed, or likely to be desired. LinkedIn claims it is now presenting the "right careers" to the right people at the right time, and otherwise enhancing the careers of its 470 million users.
Their blunders (like offering a CEO a job opportunity as an Executive Assistant) are getting fewer and fewer as their systems gain more experience, and programmers become more skilled at identifying “never do this” situations for their AIs.
They continue to evolve programs that replace the need for a human to perform boring tasks, such as a chatbot that can schedule meetings. Onboarding new staff can be automated too, in the form of a Personal Assistant that connects newcomers to fellow workers with whom they may have had a prior relationship, arranges meetings, or identifies sources for needed information.
Uber, the taxi-alternative company, is no slouch when it comes to AI. Refusing to be left in the dust, it has two primary programs underway.
The core program focuses on advancing basic research, while Connections is about integrating AI into all of Uber's sub-operations.
That includes elements such as its fundamental transportation service and predicting trip timing at different times of day; factors such as calculating food-prep time for the Uber Eats program; automating customer support and service; and future-forward areas such as the development of its self-driving cars.
“Hey, Alexa, turn on my music.” The Amazon Echo is making inroads into AI for the home with its AI voice-interactive interface “Alexa.” It’s great fun to walk into a friend’s home that is so-equipped and say “Hey, Alexa, turn on all the lights” to cause a bit of momentary confusion if they haven’t set it up correctly.
The truth is, Alexa, or something like it, will be ubiquitous within the next few years. There is a generation about to be born that will never know a time when you couldn't talk to the house, issuing requests or instructions into thin air.
Intel provides the Intel® AI Academy, which offers learning materials, tools, technology, and a community of fellow developers to support you. You can hone your talents in machine learning, algorithms, and more. Intel has its own computing architecture (of course), such as the Nervana™ Neural Network Processor. You can learn the basics of Machine Learning (a 12-week course, for free), get started with AI, and then move on to a substantive understanding of Neural Networks with an 80-minute explainer video. Another course, called Deep Learning 101, delves even more in-depth, but is still only 12 weeks (and free).
So, is it the companies that matter? Not really, because there are dozens more of them doing interesting things. It is a synergy of company, field, technique, and innovation by bright young minds joining the profession that is driving the AI field and all of its developments right now.
Commonality of platform is probably the most critical element: with too many divergent standards, we'll face incompatibilities until only one is left standing, and that represents a lot of wasted effort.
There is a lot of hubris in a company that believes its single team can do more than thousands of individuals working together on an open-source project. For the first time in human history, let's get it right the first time and cooperate.