Recently, the development of “AI,” or artificial intelligence technologies, has leapfrogged into the center of the tech world, with more media focus, increased investment, and inclusion in more everyday consumer products. The rush to produce the next AI product is starting to resemble the race to build the nuclear bomb, complete with some of the same dire predictions of human extinction on the one hand and human salvation on the other.
Before we limit ourselves to either the salvation or destruction path, it may be worth gaining greater clarity about exactly what AI is and what it means for our communities. The common description of AI as “everything from robotic process automation to actual robotics” doesn’t help us know more about the path we’re on. That definition simply allows the industry to label more products “AI” to justify higher pricing.
But if you attend AI seminars and conferences featuring company spokespeople from a variety of AI industry sectors, you’ll hear multiple definitions of AI, each matching what that company or sector claims is AI. Multiple definitions can create a fuller picture of AI. But too many dilute the common thread between AI technologies and our understanding of why AI is so unique, risky, and even dangerous.
Defining AI as any human-made product that performs a task reflecting human-like intelligence is adequate but barely useful if we want to distinguish the more dangerous types of AI from the less dangerous, or to trace the social impact of the more complex AI technologies without being distracted by other AI technologies that interact with, but don’t contribute to, that social impact.
A more useful definition must reflect the reality that different tasks don’t reflect the same level of human-like intelligence, risk, or benefit. Light AI and Heavy AI are two categories that distinguish different types of tasks, require different levels of intelligence, and carry different levels of risk and promise.