CynLr Technology is a deep-tech robotics and cybernetics company. It enables robots to become intuitive and capable of dynamic object manipulation with minimal training effort. CynLr has developed a visual object intelligence platform that interfaces with robotic arms to pursue the ‘Holy Grail of Robotics’ – Universal Object Manipulation. This enables CynLr-powered arms to pick up unrecognised objects without recalibrating hardware, and it even works with mirror-finished objects (traditionally a hard obstacle for visual intelligence).
To learn more about robotics and the company, Sakshi Jain, Sr. Sub Editor, ELE Times, had an opportunity to interact with Nikhil Ramaswamy, Co-Founder & CEO, and Gokul NA, Founder – Design, Product and Brand, at CynLr Technology. They talked about robots and their ability to mimic the intuitiveness and range of the human hand. Excerpts.
ELE Times: How does your visual robot platform transform machines into mindful robots?
Nikhil & Gokul: Today, most modern approaches in AI and machine vision technology used in robotics depend heavily either on presenting an object to a robotic arm in a predetermined manner or on predicting object positions from static images, whether in 3D or any other format. This approach presents significant challenges, since an object’s orientation and the lighting conditions can greatly alter its colour and geometric shape as perceived by the camera. It becomes even more complicated with metal objects that have a mirror finish. As a result, traditional AI methods that rely on colour and shape for object identification lack universality. We take the intuitiveness of human vision and object interaction for granted, yet most tasks that are innate for human beings remain difficult to replicate in automation systems, even in extremely controlled environments.
At CynLr, we work to bring intuitiveness to machines, the key to which lies in machine vision coming as close as possible to human vision. Of course, in comparing it to human vision we are competing against millions of years of evolution. However, we have been able to decipher the fundamental layers of human and animal vision to create the foundations of our visual object intelligence technology. We use cutting-edge techniques such as Auto-Focus Liquid Lens Optics, Optical Convergence, Temporal Imaging, Hierarchical Depth Mapping, and Force-Correlated Visual Mapping in our hardware and algorithms. These AI and machine learning algorithms enable robotic arms to perceive and generate rich visual representations, allowing them to manipulate and handle objects in any environment.
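To give a flavour of one of these ideas, temporal imaging: specular highlights on a mirror-finished part move from frame to frame as lighting shifts, so fusing a short burst of frames can suppress them. The Python sketch below is a rough illustration assuming only NumPy; CynLr’s actual implementation is proprietary and unpublished.

```python
import numpy as np

def temporal_fuse(frames):
    """Fuse a burst of frames of a static scene into one image.

    Specular highlights on mirror-finished parts move between frames as
    lighting shifts; a per-pixel median suppresses these transient
    highlights while preserving stable surface structure. This is only
    an illustration of the general idea, not CynLr's method.
    """
    stack = np.stack(frames, axis=0).astype(np.float32)
    return np.median(stack, axis=0)

# Example: three noisy 4x4 "frames" with a moving specular spike.
rng = np.random.default_rng(0)
base = rng.uniform(0, 1, size=(4, 4))
frames = []
for i in range(3):
    f = base + rng.normal(0, 0.01, size=(4, 4))
    f[i, i] = 10.0  # transient highlight lands in a different spot each frame
    frames.append(f)

fused = temporal_fuse(frames)
print(np.abs(fused - base).max())  # small: the highlights are largely removed
```

The median is robust here precisely because a transient highlight occupies any given pixel in only a minority of the frames.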
Currently, the manufacturing industry faces challenges in automating products with shorter life cycles. A change in model or variant means an overhaul of the entire automated line, because the automation is part-specific. CynLr’s technology enables robots to become intuitive and capable of dynamic object manipulation with minimal training effort. The idea is to make one line capable of handling varied objects, freeing the automation industry from part-specific solutions. Whether it’s a packet of Lays chips, a metal part, or a mirror-finished spoon, the same robot can handle them all. These robots are future-proof and capable of performing a wide range of tasks. (You can refer to our demo videos – Untrained Dynamic Tracking & Grasping of Random Objects and Oriented Grasps and Picks of Untrained Random Objects.)
ELE Times: Highlight some key milestones achieved by CynLr in their mission to simplify automation and optimise manufacturing processes.
Nikhil & Gokul: One of CynLr’s major achievements is passing the litmus test for AI – the recognition and isolation of mirror-finished/reflective objects. Our visual robots can recognise and isolate mirror-finished objects without any training, under varying lighting conditions. Our highly dynamic and adaptive product eliminates the need for customised solutions, sparing our customers from dealing with multiple technologies and complex engineering.
We have vital interests in collaborations with the US and the EU. We have begun engagements to reference-design our tech stack for advanced assembly automation with two of the five largest automotive manufacturers globally and a large component supplier from Europe. We are also entering new markets, such as industrial kitchens and Advanced Driver Assistance Systems (ADAS) use cases.
ELE Times: How is your company planning to address the global challenge of part-mating and assembly automation?
Nikhil & Gokul: We are strategically addressing the global challenge of part-mating and assembly automation by leveraging our advanced visual intelligence technology. This cutting-edge approach combines computer vision, machine learning, and robotics to tackle the complexity of automating intricate part-mating and assembly processes, which have historically been demanding to automate. Our visual intelligence technology allows robots to accurately perceive and interpret the spatial relationships between parts, enabling precise alignment and assembly.
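As a concrete, textbook illustration of what interpreting spatial relationships between parts can involve (not CynLr’s pipeline, which is unpublished), the classic Kabsch algorithm recovers the rigid rotation and translation that align matched 3D points on one part with the corresponding points on its mate:

```python
import numpy as np

def rigid_align(src, dst):
    """Estimate rotation R and translation t so that R @ src + t ~ dst.

    src, dst: (N, 3) arrays of matched 3D points (e.g. features on a
    part and on its mating socket). Classic Kabsch/SVD solution; an
    illustration of pose recovery, not CynLr's actual method.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Sanity check with a known rotation about z and a translation.
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
src = np.random.default_rng(1).uniform(-1, 1, size=(10, 3))
dst = src @ R_true.T + np.array([0.1, -0.2, 0.3])
R, t = rigid_align(src, dst)
print(np.allclose(R, R_true), np.allclose(t, [0.1, -0.2, 0.3]))  # True True
```

With the transform in hand, a controller can command the arm to bring the parts into alignment before insertion begins.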
We go beyond visual perception alone. As a deep-tech company, we recognise that the successful automation of part-mating and assembly requires additional elements, such as tactile feedback and knowledge of how to manipulate different components. By integrating tactile sensing capabilities and leveraging our expertise in robotics, we aim to create robots that can not only guide the assembly process visually but also interact with parts with human-like dexterity and precision.
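One simple control primitive that combines motion with tactile feedback is the guarded move used in peg-in-hole insertion: descend until the sensed contact force crosses a threshold, then stop and switch strategy. The toy Python sketch below stands in for a real force/torque sensor with a spring model; it illustrates the control idea only, not CynLr’s method:

```python
def contact_force(z, floor=-0.05, stiffness=500.0):
    """Toy stand-in for a force/torque sensor: zero force in free space,
    spring-like reaction once the peg presses past the floor depth."""
    return stiffness * max(0.0, floor - z)

def guarded_insert(z_start=0.0, step=0.001, f_stop=2.0):
    """Descend in small steps until the measured force exceeds f_stop.

    Illustrates the guarded-move primitive for part-mating; real systems
    close this loop at high rate with full 6-axis wrench measurements.
    """
    z = z_start
    while contact_force(z) < f_stop:
        z -= step
    return z, contact_force(z)

z, f = guarded_insert()
print(f"stopped at z={z:.4f} m with contact force {f:.2f} N")
```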
Our ultimate objective is to offer a holistic solution for part-mating and assembly automation, overcoming challenges related to complex spatial relationships, variations in part geometry, and the need for meticulous manipulation. By automating these processes, we intend to empower manufacturers to achieve remarkable gains in efficiency, cost, and production output.
We also collaborate closely with global customers and run pilots across different regions, including the US, Germany, and India. We aim to further validate and refine our technology to find real-world applications.
ELE Times: Highlight your new mission in automation.
Nikhil & Gokul: Our new mission in automation revolves around revolutionising the manufacturing industry through advanced robotics and artificial intelligence technologies. Some highlights:
- Automation Transformation: We aim to spearhead a transformative shift in manufacturing by automating processes that were previously considered non-automatable. Our mission is to enable robots to perform complex tasks with precision, efficiency, and adaptability.
- Addressing Industry Challenges: We are committed to tackling the challenges faced by manufacturers in part-mating, assembly, and other intricate processes. We aim to overcome the limitations of traditional automation methods by leveraging our expertise in visual intelligence, tactile feedback, and a comprehensive understanding of fundamental sciences.
- Universal Factories: CynLr envisions the establishment of universal factories, where robots can seamlessly adapt to different tasks and products without the need for specialised infrastructure or extensive reconfiguration. This approach simplifies factory operations, enhances versatility, and optimises logistics.
- Reduction of Manual Labor: With our visual intelligence technology, CynLr seeks to automate labour-intensive processes, reducing the reliance on manual labour and streamlining production lines. By automating tasks such as part-mating and assembly, we want to enhance productivity, minimise errors, and free up human workers for more strategic and value-added activities.
- Global Impact: Our mission extends beyond regional boundaries. By collaborating with leading OEM players in Europe and the US, and aiming to expand our business in India, we strive to have a global impact, transforming manufacturing industries worldwide.
Overall, our new mission in automation revolves around pushing the boundaries of what is considered automatable, simplifying factory operations, reducing manual labour, and driving innovation in the manufacturing industry on a global scale.
ELE Times: What are your plans for Indian markets?
Nikhil & Gokul: We feel that the Indian market is not yet mature for our solution. We need a space where customers can actively engage, invest their resources, and transform our technology into a practical and functional solution. This is precisely why we have constructed one of the most densely populated robotics labs focused on visual intelligence.
Although we possess the technological capabilities, without a viable customer solution any technology will remain underdeveloped. A comprehensive ecosystem is therefore needed for a foundational technology like ours to succeed. We aim to engage with customers, channel partners and academic institutions who can build on our machine vision stack and come up with viable solutions.
ELE Times: How does your visual object intelligence platform increase the ability to mimic the intuitiveness and range of the human hand?
Nikhil & Gokul: Vision technology is severely limited in robotics today. The intelligence and the eyes that robots need to adjust to different shapes and variations, and to adapt accordingly, are simply not there. So we saw an opportunity to enable that technology by working closely with the problem.
Our visual intelligence platform helps robotic arms adapt to the various shapes, orientations and weights of the objects in front of them. We have developed a technology that differentiates sight from vision and begins its algorithms at the hardware level, using Auto-Focus Liquid Lens Optics, Optical Convergence, Temporal Imaging, Hierarchical Depth Mapping, and Force-Correlated Visual Mapping. It gives robots human-like vision and the versatility to grasp even mirror-finished objects without any pre-training (a feat that current ML systems can’t achieve). CynLr’s visual robots can comprehend the features of an object and re-orient it as required. The AI and machine learning algorithms help robotic arms carry out the task even in an amorphous setting and align objects in the best way possible.
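In its textbook form, hierarchical depth mapping means solving the stereo matching problem coarse-to-fine: estimate disparity on a downsampled signal, then refine around that estimate at each finer resolution, which keeps the search cheap even for large disparities. The 1-D NumPy sketch below illustrates only the principle; CynLr’s actual algorithms are unpublished:

```python
import numpy as np

def sad_disparity(left, right, candidates):
    """Pick the shift d minimising mean absolute difference between
    left shifted by d and right (a 1-D stand-in for block matching)."""
    best, best_cost = candidates[0], np.inf
    for d in candidates:
        cost = np.abs(left[d:] - right[:len(right) - d]).mean()
        if cost < best_cost:
            best, best_cost = d, cost
    return best

def hierarchical_disparity(left, right, levels=2, coarse_max=4):
    """Coarse-to-fine search: estimate disparity on downsampled signals,
    then refine around twice that estimate at each finer level."""
    if levels == 0:
        return sad_disparity(left, right, range(coarse_max + 1))
    guess = 2 * hierarchical_disparity(left[::2], right[::2],
                                       levels - 1, coarse_max)
    return sad_disparity(left, right, range(max(0, guess - 1), guess + 2))

# Synthetic pair: `right` is `left` shifted by a disparity of 8 samples.
rng = np.random.default_rng(2)
base = rng.uniform(0, 1, size=136)
left, right = base[:128], base[8:136]
print(hierarchical_disparity(left, right))  # -> 8
```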
ELE Times: Share your views on the HW & SW Vision Platform and how it helps in building machines.
Nikhil & Gokul: A combined HW & SW vision platform is instrumental in building machines, as it provides them with perception, understanding, intelligent decision-making, automation, enhanced safety, and adaptability. By incorporating vision technologies, machines can effectively interact with their surroundings, analyse visual data, and make informed decisions, ultimately improving efficiency and expanding the possibilities of machine applications.
The HW component of the platform typically involves specialised hardware such as cameras, sensors, and image processors. These components capture visual data from the machine’s surroundings, convert it into digital information, and process it to extract relevant features and patterns. The SW aspect of the platform, on the other hand, comprises the software algorithms and frameworks designed to analyse and interpret that visual data. These algorithms perform tasks such as image recognition, object detection, segmentation, and tracking, enabling the machine to comprehend its surroundings and make intelligent decisions based on that information.
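As a minimal, generic illustration of that HW-capture/SW-interpretation split (using OpenCV, with a webcam assumed at index 0; this is not CynLr’s stack), a vision loop captures frames on the hardware side, then segments and localises objects on the software side:

```python
import cv2

cap = cv2.VideoCapture(0)               # HW side: a camera delivering frames
while True:
    ok, frame = cap.read()              # capture visual data
    if not ok:
        break
    # SW side: interpret the frame - segment objects and locate them
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    _, mask = cv2.threshold(blur, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:            # ignore small speckle
            x, y, w, h = cv2.boundingRect(c)    # object detection result
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

Real industrial pipelines replace the simple Otsu segmentation with learned detectors and add tracking across frames, but the division of labour between capture hardware and interpretation software remains the same.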