Mowito is transforming industrial automation with its advanced AI-powered vision software, delivering unmatched precision and adaptability to robotic arms. Designed for critical applications such as bin picking, machine tending, and assembly, Mowito’s technology redefines operational efficiency and establishes a new benchmark in intelligent robotics.
Engineered for dynamic environments, the company’s software excels at handling moving objects and operates seamlessly on conveyors. Its hardware-agnostic design ensures effortless integration with diverse robotic arms and cameras, offering unparalleled flexibility for varied industrial needs. Furthermore, Mowito’s intuitive, no-code interface empowers factory teams to adapt and reconfigure systems quickly, eliminating reliance on specialized engineering expertise.
Mowito’s innovation extends to managing a broad spectrum of items, from tiny components like nuts and bolts to delicate fresh produce, making it a versatile solution across industries. Its rapid deployment capability sets it apart, with systems becoming fully operational within days. This minimizes downtime and delivers immediate impact, ensuring seamless operations.
Since its launch in 2021, Mowito has partnered with over 50 factories and warehouses, addressing complex automation challenges with precision while leveraging AI-powered vision technologies. Their solutions enable robotic arms to handle even previously unseen objects efficiently, ensuring both safety and productivity. Trusted by leading global manufacturers, Mowito has earned a reputation for reliability, innovation, and exceptional performance.
In an exclusive conversation with The Interview World, Safar V., Director of Engineering at Mowito, shares insights into the company’s AI-powered vision technologies tailored for the automotive sector. He discusses the extraordinary accuracy these technologies achieve, the groundbreaking innovations Mowito aims to pursue in the next five to ten years, and the market opportunities shaping the future of computer vision. Below are the key highlights from the conversation.
Q: Can you provide insights into the AI-powered computer vision technologies your team is developing for the automotive sector, and how they are addressing key industry challenges?
A: In the world of automation, one persistent challenge is the rigidity of traditional setups for robotic arms. Users often rely on fixed fixtures that, while functional, lack flexibility. If a product variant needs modification, the entire fixture setup must be redesigned, a time-consuming and resource-intensive process.
We address this limitation by unlocking the mechanical potential of robots, which is often constrained by inflexible software programming. Our solution adds an extra sensor, specifically a camera, combined with advanced AI-powered vision technology. This allows us to teach robots new tasks with remarkable ease and efficiency.
Take, for example, an assembly application. With our technology, the robot can learn the required task in just 30 minutes. If the dimensions of the part change or a different variant is introduced, the robot can be retrained—just as you would train a human worker. This flexibility revolutionizes production lines, making them highly adaptable to evolving needs.
Additionally, our solution significantly enhances accuracy, ensuring not only flexibility but also precision. This dual advantage—improved adaptability and precision—empowers industries to embrace automation with greater confidence and efficiency.
Q: What level of accuracy do your AI-powered vision technologies achieve?
A: The accuracy of our system largely depends on the cameras we use and key parameters such as the height at which the camera observes the part. For instance, in certain projects we have achieved a remarkable accuracy of 300 microns, or 0.3 mm, a significant improvement over conventional methods.
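The dependence of accuracy on camera height can be illustrated with a back-of-envelope calculation: the higher the camera sits above the part, the more millimetres of scene each pixel must cover. The sketch below is illustrative only; the field of view, mounting height, and sensor resolution are assumed values, not Mowito's actual setup.

```python
# Back-of-envelope estimate of per-pixel ground resolution for a
# downward-facing camera. All numbers are illustrative assumptions.

import math

def mm_per_pixel(fov_deg: float, height_mm: float, pixels: int) -> float:
    """Ground coverage of one pixel for a camera looking straight down.

    fov_deg   -- horizontal field of view of the lens, in degrees
    height_mm -- distance from camera to the part, in millimetres
    pixels    -- horizontal sensor resolution, in pixels
    """
    fov_width_mm = 2 * height_mm * math.tan(math.radians(fov_deg) / 2)
    return fov_width_mm / pixels

# Example: a 2448-pixel-wide sensor with a 40-degree lens mounted 500 mm
# above the part covers about 364 mm of scene, i.e. roughly 0.15 mm/pixel.
print(round(mm_per_pixel(40.0, 500.0, 2448), 3))
```

Sub-pixel techniques in the vision software can resolve positions finer than one raw pixel, which is how systems reach accuracies below the per-pixel figure this estimate gives.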
While accuracy is crucial, manufacturers prioritize consistency just as much. Consistency ensures that the system reliably delivers the specified accuracy every single time. This is where cameras offer a distinct advantage through continuous feedback.
Unlike open-loop systems that rely on fixed setups and hope the parts align correctly, our approach utilizes a closed-loop feedback mechanism. The system constantly monitors for variations, ensuring real-time adjustments. If a manufacturing defect occurs in one part during assembly, the feedback system immediately accounts for it, maintaining precision and minimizing errors. This continuous feedback creates a more reliable, efficient, and adaptive manufacturing process.
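The closed-loop idea described above can be sketched in a few lines: rather than executing one fixed, pre-programmed move, the controller re-measures the part on every cycle and corrects the target until it is within tolerance. The function names, tolerance, and iteration limit below are illustrative assumptions, not Mowito's actual control code.

```python
# Minimal sketch of closed-loop visual correction versus open-loop motion.
# Names and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Pose2D:
    x: float  # mm
    y: float  # mm

def closed_loop_move(measure, move, target: Pose2D,
                     tolerance_mm: float = 0.05, max_iters: int = 10) -> bool:
    """Iteratively close the gap between measured and target pose.

    measure -- callable returning the part's current Pose2D (from the camera)
    move    -- callable applying a (dx, dy) correction to the arm
    """
    for _ in range(max_iters):
        current = measure()
        dx, dy = target.x - current.x, target.y - current.y
        if abs(dx) <= tolerance_mm and abs(dy) <= tolerance_mm:
            return True          # within tolerance: done
        move(dx, dy)             # apply correction, then re-measure next cycle
    return False                 # did not converge within max_iters

# An open-loop system would instead call move(...) exactly once with fixed
# offsets and assume the fixture placed the part perfectly.
```

Because every cycle re-measures, a part that is slightly out of position (for example, due to a manufacturing defect) is corrected for automatically instead of causing a misplaced pick.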
Q: Is your company focused on developing the hardware, software, or both aspects of AI-powered vision solutions?
A: Our core expertise lies in developing the software component. We leverage off-the-shelf hardware, partnering with leading robot manufacturers like JAKA Robotics, Universal Robots, and HANA, among others. Our software is both arm-agnostic and camera-agnostic, allowing us to work seamlessly with various robotic systems.
To achieve this, we use an integration layer that captures the necessary data from any robotic arm or camera. This data is then processed through our filters and AI models. Finally, the output is used to control the system, ensuring precise and efficient operation. This flexibility allows us to integrate our software with a wide range of hardware, offering maximum adaptability for diverse industrial applications.
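The arm-agnostic, camera-agnostic design described above is essentially an adapter pattern: each vendor's hardware sits behind a common interface, and the vision pipeline talks only to that interface. The sketch below illustrates the idea; every class and method name is an assumption for illustration, not Mowito's actual API.

```python
# Sketch of a hardware-agnostic integration layer: vendor-specific adapters
# behind common interfaces, with a vendor-independent pipeline on top.
# All names here are illustrative assumptions.

from abc import ABC, abstractmethod

class Camera(ABC):
    @abstractmethod
    def capture(self) -> bytes:
        """Return a raw frame from the sensor."""

class RobotArm(ABC):
    @abstractmethod
    def move_to(self, x: float, y: float, z: float) -> None:
        """Command the arm to a Cartesian position (mm)."""

class VisionPipeline:
    """Vendor-independent core: capture, filters + AI model, motion command."""

    def __init__(self, camera: Camera, arm: RobotArm):
        self.camera = camera
        self.arm = arm

    def pick_cycle(self) -> None:
        frame = self.camera.capture()        # 1. grab data from any camera
        x, y, z = self._locate_part(frame)   # 2. run filters and AI models
        self.arm.move_to(x, y, z)            # 3. drive any arm to the part

    def _locate_part(self, frame: bytes):
        # Placeholder for the detection model; returns a fixed pose here.
        return (100.0, 50.0, 20.0)
```

Supporting a new robot or camera then only requires writing one thin adapter class against `RobotArm` or `Camera`; the pipeline itself stays unchanged.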
Q: What key innovations are you planning to pursue in the next 5 to 10 years, and how do you envision their impact on the industry?
A: One of our key objectives is to bring the robotic arm closer to human capability. We have already made significant progress by integrating a camera that provides vision, much as humans use their eyes when manipulating objects. When a human manipulates an object, two senses are crucial: vision and touch.
The Japanese manufacturing approach highlights this concept, where operators rely heavily on touch and feel to perform tasks. In some assembly processes, operators don’t need to see the part; they simply rely on their sense of touch to complete the task.
To replicate this, we are developing a solution that combines tactile feedback and force sensing with the camera. This enhancement will enable the robotic arm to handle more complex tasks, improving both accuracy and precision. There are limitations, however: in certain scenarios a camera and robotic arm alone cannot cope, for example when parts become occluded. To address this, we are continuously exploring ways to extend our systems’ capabilities.
Q: What market opportunities do you foresee for these emerging technologies in India, and how do you plan to leverage them?
A: The opportunities in India are abundant, particularly as industries increasingly embrace technology. Companies are realizing that achieving scale cannot rely solely on human resources; technology must be integrated to stay competitive. Without it, businesses risk falling behind. This shift represents a broader trend across all automation sectors.
Specifically, the automation we focus on—vision-guided manipulation systems—will always play a pivotal role. Nearly every process in manufacturing, whether it’s gluing, assembly, or screwing, requires some form of part manipulation. This makes the market vast, spanning multiple industries and extending beyond traditional automation.
For example, we also collaborate with the warehousing industry, where our systems are applied in pick-and-place, packing, and palletizing tasks. The potential for growth and innovation in these areas is immense, showcasing the far-reaching impact of our technology.