
    Toward a Brain-Like AI with Hyperdimensional Computing

    The human brain has long been studied as an inspiration for computing systems. Although there is a very long way to go before a computing system can match the brain’s efficiency at cognitive tasks, several brain-inspired computing paradigms are being researched. Convolutional neural networks are a widely used machine learning approach for AI applications because they significantly outperform rule-based or symbolic approaches. Nonetheless, for many tasks machine learning requires vast amounts of data and training to converge to an acceptable level of performance.

    A Ph.D. student from Khalifa University, Eman Hasan, is investigating another AI computation methodology called “hyperdimensional computing,” which could take AI systems a step closer toward human-like cognition.

    This work analyzes different models of hyperdimensional computing and highlights the advantages of this computing paradigm. Hyperdimensional computing, or HDC, is a relatively new paradigm that computes with very large vectors (on the order of 10,000 bits each) and is inspired by patterns of neural activity in the human brain. The way HDC lets AI-based computing systems retain memory can reduce their computing and power demands.
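    To make the idea concrete, the short Python sketch below illustrates the general paradigm (not the specific models analyzed in this work): binary hypervectors plus the two basic operations, binding and bundling, that HDC systems build on. Because any two random 10,000-bit vectors disagree in roughly half their positions, items can later be recognized by simple similarity comparisons rather than exact matches.

        import numpy as np

        D = 10_000  # dimensionality of each hypervector (~10,000 bits)

        def random_hypervector(rng):
            # A random binary hypervector; any two such vectors disagree in
            # roughly half their positions, so they are nearly orthogonal.
            return rng.integers(0, 2, size=D, dtype=np.uint8)

        def bind(a, b):
            # Binding (bit-wise XOR) associates a role with a value and is
            # its own inverse: bind(bind(a, b), a) recovers b.
            return np.bitwise_xor(a, b)

        def bundle(vectors):
            # Bundling (bit-wise majority vote) superposes several items into
            # one vector that stays similar to each of its parts.
            return (np.sum(vectors, axis=0) * 2 > len(vectors)).astype(np.uint8)

        rng = np.random.default_rng(0)
        color, red = random_hypervector(rng), random_hypervector(rng)
        record = bind(color, red)        # represent "color = red" as one vector
        recovered = bind(record, color)  # unbind with the same key
        print(np.array_equal(recovered, red))  # True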

    HDC vectors are, by nature, extremely robust against noise, much like the human central nervous system. Intelligence requires detecting, storing, binding, and unbinding noisy patterns, and HDC is well suited to handling them. Inspired by an abstract representation of neuronal circuits in the human brain, an HDC architecture is built around three stages: encoding, training, and comparison.
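    The toy classifier below is a hedged sketch of how those three stages might fit together: it encodes strings as bags of symbol hypervectors, trains by bundling each class’s examples into a prototype, and compares queries against the prototypes by Hamming distance. The bag-of-symbols encoding is an assumption made for brevity, not the encoding scheme used in the research described here.

        import numpy as np

        D = 10_000
        rng = np.random.default_rng(1)

        # Item memory: one fixed random hypervector per input symbol.
        item_memory = {sym: rng.integers(0, 2, size=D, dtype=np.uint8)
                       for sym in "abcdefghijklmnopqrstuvwxyz "}

        def encode(text):
            # Encoding stage: bundle the symbol hypervectors of a string into
            # one hypervector by bit-wise majority vote.
            vecs = np.stack([item_memory[c] for c in text])
            return (vecs.sum(axis=0) * 2 > len(vecs)).astype(np.uint8)

        def train(examples_by_class):
            # Training stage: a class prototype is simply the encoding of all
            # its examples bundled together -- no gradient descent involved.
            return {label: encode(" ".join(examples))
                    for label, examples in examples_by_class.items()}

        def classify(text, prototypes):
            # Comparison stage: return the label of the prototype with the
            # smallest normalized Hamming distance to the encoded query.
            q = encode(text)
            return min(prototypes,
                       key=lambda label: np.count_nonzero(q != prototypes[label]) / D)

        prototypes = train({"greeting": ["hello there", "good morning"],
                            "farewell": ["goodbye now", "see you later"]})
        print(classify("hello again", prototypes))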

    The human brain is excellent at recognizing patterns and using those patterns to infer information about other things. For example, humans generally understand that just because a chair is missing a leg, that doesn’t mean it’s no longer a chair. An AI system may look at this three-legged chair and decide it is a completely new object that needs a new classification. HDC vectors, however, offer some margin for error. With HDC, recognizing certain features generates a vector that is similar enough to a chair that the computer can infer the object is a chair from its memory of what a chair looks like. Hence, the three-legged chair remains a chair in hyperdimensional computing, whereas this is still a difficult case for traditional object recognition.

    “In an HD vector, we can represent data holistically, meaning that the value of an object is distributed among many data points,” explained Hasan. “Therefore, we can reconstruct the vector’s meaning as long as we have 60% of its content.”

    The structure of the vectors leads to one of the strongest advantages of the HDC approach: it can tolerate errors, which makes it a great option for approximate computing applications. This tolerance arises from the representation of the hypervectors, where a bit’s value is independent of its location in the bit sequence.
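    This error tolerance is easy to demonstrate: flipping a large fraction of a hypervector’s bits still leaves it far more similar to its original than to any unrelated vector. The 30 percent corruption in the sketch below is illustrative, not the exact threshold quoted above.

        import numpy as np

        D = 10_000
        rng = np.random.default_rng(2)

        original = rng.integers(0, 2, size=D, dtype=np.uint8)
        unrelated = rng.integers(0, 2, size=D, dtype=np.uint8)

        # Corrupt 30 percent of the bits at random positions.
        noisy = original.copy()
        flipped = rng.choice(D, size=int(0.3 * D), replace=False)
        noisy[flipped] ^= 1

        def similarity(a, b):
            # Fraction of matching bits: 1.0 means identical, ~0.5 means unrelated.
            return np.count_nonzero(a == b) / D

        print(similarity(noisy, original))   # ~0.70 -- still clearly the same item
        print(similarity(noisy, unrelated))  # ~0.50 -- chance level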

    HDC is also powerful in that it is memory-centric, which makes it capable of performing complex calculations while requiring less computing power. This type of computing is particularly useful for ‘edge’ computing, which refers to computing done at or near the source of the data. In a growing number of devices, including autonomous vehicles, computations must be carried out immediately and at the point of data collection, instead of relying on computing done in the cloud at a data center.

    Hyperdimensional computing is a promising model for edge devices because it does not include the computationally demanding training step found in widely used convolutional neural networks. However, it comes with its own challenges: encoding alone takes about 80 percent of the training execution time, and some encoding algorithms cause the encoded data to grow to twenty times its original size.

    Hasan studied the HDC paradigm and its main algorithms in one-dimensional and two-dimensional applications. Research has shown that HDC outperforms digital neural networks in one-dimensional data set applications, such as speech recognition, but the complexity increases once it is expanded to 2D applications.

    HDC has shown promising results for one-dimensional applications, using less power and with lower latency than state-of-the-art simple deep neural networks. In 2D applications, however, convolutional neural networks still achieve higher classification accuracy, though at the expense of more computation.

    ELE Times Research Desk
