Although robots today are more than capable of carrying out all kinds of business tasks efficiently and accurately, building machines that can think like humans has long been a dream for tech companies and smart city developers. How the human mind actually works and processes information, however, is still up for debate, with researchers holding conflicting views on the subject. Once enough data is generated, simulation models can be created to build software that thinks along the same rational or emotional lines as humans do. Human thinking is shaped by a variety of factors: cognitive, behavioral, geometric, kinematic and physical. Cognitive modeling allows such factors to be taken into account when attempting to create robots that think and behave like humans.
The concept of human thinking is still too vague to be replicated accurately in robots. Even so, multiple approaches could be taken to reach the ideal end result: enabling AI and robotic tools to think like humans.
Improving Robotic Perceptibility
Thinking in humans is deeply influenced by the sensory perception of their immediate environment. Therefore, improving robotic perception can be a significant part of enabling machines to think like humans.
Most modern robots already carry a variety of sensors to improve their data collection and autonomy. For instance, service robots in fields such as healthcare, agriculture and entertainment are capable of performing a series of tasks. For mobile robots in these fields, built-in visual sensors and actuators improve mapping and exploration of different areas. While visual sensors support mobility, navigation and visual data analysis, tactile sensors enable intelligent manipulation of objects: robots can use them to detect the hardness, flexibility, elasticity, roughness or shape of objects and surfaces. Robot developers can improve the perceptive abilities of robots in several ways, such as combining visual and tactile sensors, training algorithms on diverse datasets, and otherwise making the interaction between robots and their environment more organic.
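As a minimal sketch of how visual and tactile readings might be combined for manipulation, consider choosing a grasp force from a tactile hardness estimate moderated by a vision model's confidence. The data structures, thresholds and scaling here are illustrative assumptions, not a real robot's API:

```python
from dataclasses import dataclass

@dataclass
class TactileReading:
    hardness: float   # 0.0 (soft) to 1.0 (rigid)
    roughness: float  # 0.0 (smooth) to 1.0 (coarse)

@dataclass
class VisualReading:
    label: str        # object class reported by a vision model
    confidence: float # 0.0 to 1.0

def choose_grasp_force(tactile: TactileReading, visual: VisualReading,
                       max_force_n: float = 40.0) -> float:
    """Blend tactile hardness with visual confidence to pick a grasp force.

    Softer objects get proportionally less force; if the vision model is
    unsure what the object is, err on the side of a gentler grip.
    """
    base = max_force_n * tactile.hardness
    caution = 0.5 + 0.5 * visual.confidence  # scales 0.5 .. 1.0
    return round(base * caution, 2)

# A soft, confidently recognized object gets a light grip:
force = choose_grasp_force(TactileReading(hardness=0.2, roughness=0.4),
                           VisualReading(label="tomato", confidence=0.9))
```

The point of the sketch is the fusion itself: neither sensor alone is enough, and the tactile channel directly gates a physical action the visual channel can only inform.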
There are several ways in which current and upcoming robots achieve greater indoor and outdoor perceptibility than previous models. Indoor positioning systems that use data from smart cameras and RFID tags improve a robot's location detection in indoor environments. Robots with such systems can also detect the number of people in a room to improve interaction. In outdoor environments, computer vision tools can build visual maps from Google images; such maps are useful for autonomous navigation and localization tasks. IoT, AI and robotics work in tandem to improve robots' perceptibility in terms of sight and touch.
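The core math behind tag-based indoor positioning is simple: given measured ranges to a few fixed tags at known coordinates, the robot's position can be trilaterated. The sketch below assumes three tags in a plane and exact range measurements; a real system would use more tags and least-squares filtering to handle noise:

```python
def trilaterate(tags, distances):
    """Estimate a robot's 2-D position from ranges to three fixed tags.

    tags: three (x, y) tag coordinates; distances: measured ranges to each.
    Subtracting pairs of circle equations cancels the quadratic terms,
    leaving a 2x2 linear system in the robot's (x, y).
    """
    (x1, y1), (x2, y2), (x3, y3) = tags
    r1, r2, r3 = distances
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1  # zero only if the tags are collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Robot at (3, 4) in a 10 m room with tags at three corners:
pos = trilaterate([(0, 0), (10, 0), (0, 10)],
                  [5.0, (49 + 16) ** 0.5, (9 + 36) ** 0.5])
```

Collinear tags make the system unsolvable, which is why real deployments spread tags around the room rather than along one wall.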
Sensory perception involves all the senses, not just vision and touch. To improve the audio-based perceptibility of AI and robotic tools, researchers at Carnegie Mellon University conducted a series of tests and developed a prototype robot that could identify objects by their distinctive sounds. In the experiments, the robot had to accurately identify the types of objects, such as fruits, tennis balls and metal objects, placed in a metal container by shaking it. The object identification accuracy of the robot, named "Tilt-Bot," was 76% in these audio tests. The AI models that enabled the robot to identify objects and their material makeup were trained on thousands of recordings of the sounds different types of objects naturally make when interacting with their surroundings. With continued improvement, robotic applications with accurate sound, touch and sight-related perceptibility can be developed in the future.
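The underlying idea of audio-based identification can be illustrated with a toy nearest-signature classifier. The per-class "signatures" below (a dominant frequency and a decay rate) are invented stand-ins for the learned audio embeddings a system like Tilt-Bot's would derive from training data, not measured values:

```python
import math

# Hypothetical sound signatures per object class: (dominant freq in Hz, decay).
# A real model would learn high-dimensional embeddings from raw audio instead.
SIGNATURES = {
    "tennis ball": (180.0, 0.9),
    "metal bolt":  (2200.0, 0.2),
    "apple":       (350.0, 0.7),
}

def classify_impact(freq_hz: float, decay: float) -> str:
    """Return the object class whose signature is nearest to the measured
    impact sound. Frequencies are compared on a log scale, since pitch
    perception and material resonance are roughly multiplicative."""
    def dist(sig):
        f, d = sig
        return (math.log(freq_hz) - math.log(f)) ** 2 + (decay - d) ** 2
    return min(SIGNATURES, key=lambda k: dist(SIGNATURES[k]))

# A high-pitched clang with little decay reads as metal:
label = classify_impact(freq_hz=2100.0, decay=0.25)
```

Even this crude version captures the key insight of the CMU work: the sound an object makes when it strikes a surface carries enough information to separate material classes.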
Although deep learning has improved the perceptibility of robots, human reasoning and perception still outperform theirs. One way to improve robot grasping, perception and manipulation is gamification: object recognition and information processing in robots can be improved through online puzzle games. The framework for this may involve an attribute matching system that encodes data into such games. This lets AI and robotic tools draw on the collective intelligence and grasping abilities of actual players, widening the attribute database and helping robots learn how humans react to real-life conflicts and situations.
Thinking like humans also involves robots interacting with people and other machines to collect data, then processing and using it for a variety of purposes. Technologies such as NLP and NLG can simplify interaction between robots and humans in sectors such as healthcare and manufacturing. For example, doctors could use voice commands to direct robots to execute certain tasks during an operation, such as administering anesthesia or accurately monitoring platelet count, blood viscosity and other health details. In product quality testing, robots can check material integrity based on how a material sounds when poked or gently dropped to the ground.
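A bare-bones version of the voice-command scenario is intent matching: mapping a transcribed utterance to one of a known set of commands. The intent names and keyword sets below are a hypothetical grammar for illustration; production systems use trained NLP intent classifiers rather than keyword overlap:

```python
# Hypothetical command grammar for a surgical assistant robot.
INTENTS = {
    "administer_anesthesia": {"administer", "anesthesia"},
    "report_platelets":      {"platelet", "count"},
    "report_viscosity":      {"blood", "viscosity"},
}

def parse_command(utterance: str) -> str:
    """Map a spoken command to the intent whose keywords it best covers;
    return 'unknown' when no keyword matches at all."""
    words = set(utterance.lower().replace(",", "").split())
    best, score = "unknown", 0
    for intent, keywords in INTENTS.items():
        overlap = len(words & keywords)
        if overlap > score:
            best, score = intent, overlap
    return best
```

The "unknown" fallback matters most in a safety-critical setting like surgery: a command the robot cannot confidently classify should trigger a clarifying question, not a guess.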
Although the exact dynamics of human thinking are not known, perceptibility certainly plays a big role in it. So, improving robotic perceptibility may be the much-needed first step towards enabling robots to think like humans.
Combining Cognition and Emotion in Robotic Information Processing
Generally, developers and researchers in robotics favor rationality and cognition when designing and building robots. Emotions, they tend to believe, are detrimental to rational thinking, so hyper-intelligent robots are designed with a focus on cognitive capacity. However, human thinking can be heavily influenced by emotions on certain occasions. Combining emotion with cognitive thinking can also be hugely useful in situations that demand urgency or exceptional patience.
Imbuing robots with an emotional quotient depends on certain factors, such as perceptibility of surroundings and cognition. With these two attributes, developers can create an emotion module for robots. This will enable robots to access a deep and diverse dataset of emotions and know when to express a given emotion appropriately through facial expressions, language or tone of interaction with a user. Such an emotion module will also help robots determine which memories are useful and which to discard.
Emotion and cognition must go hand in hand in robots. Emotion will allow robots to clearly convey internal system problems, such as overheating, so that their users can take the appropriate corrective measures. Robots will need this emotional acumen to meet the futuristic AI objective of general intelligence in organizations and smart cities.
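One way to picture such an emotion module is as a policy that maps internal state and perceived context to an expression the robot can display. Everything here, including the function name, state variables and thresholds, is an illustrative assumption rather than an established design:

```python
def select_emotion(cpu_temp_c: float, task_success_rate: float,
                   user_is_present: bool) -> str:
    """Toy emotion module: pick an expression from internal state.

    Distress is prioritized so that hardware problems like overheating
    are surfaced to a user before any social expression is chosen.
    """
    if cpu_temp_c > 85.0:
        return "distress"       # signal overheating so a user intervenes
    if not user_is_present:
        return "neutral"        # no audience, no expressive display
    if task_success_rate >= 0.8:
        return "content"
    return "apologetic"         # acknowledge repeated task failures
```

The ordering of the rules is the design choice worth noting: emotional display here is not decoration but a prioritized channel for conveying the robot's internal condition, with safety-relevant states expressed first.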
Initiating Robotic Consciousness
There is no dearth of think pieces on the internet grappling with the idea of AI and robotics-based tools possessing a soul. The idea of machines becoming conscious is fascinating because it implies that machines could become just like humans if a soul could be "programmed in." More importantly, such a development may allow robots to process information the way humans do.
In recent years, robotics developers have flirted with the idea of using machine learning to enable machines to understand context-based language, allowing robots to detect underlying patterns in conversations and stacks of data. AI plays a key role in imbuing robots with artificial consciousness; for example, AI robots can use emotion detection and the replication of human behavioral patterns to approximate it. Embedding "consciousness" in robots comes with its own set of challenges. For one, no formal semantics of soul or consciousness exists to guide developers. Secondly, making robots conscious also involves creating artificial sentience, a way to experience feelings. While improving perceptibility may make robots sentient to a certain degree, replicating human consciousness remains a challenge, at least for now.
Of the different approaches that could be taken to make robots think like humans, initiating machine consciousness is seemingly the most far-fetched. However, it may also be the closest to the human-like intelligence that AI developers ultimately want the technology to achieve.
AI and robotics exist to solve problems through cold logic and unerringly accurate calculations. While that is enough for most businesses and smart cities for now, robots can go beyond simply being problem solvers. Fields such as healthcare and CRM, in which AI and robotics have made steady inroads in recent times, have room for services that are more "human." Possessing attributes such as empathy, logical reasoning and qualitative analysis, all of which are central to human thinking, will make robots even more valuable resources than they are now.