Increased visibility and granular-level data tracking
Self-learning ML model
Images are processed with our proprietary recognition engine, and KPIs are auto-calculated
The machine learning model is set up for self-learning and auto-correction
Results are pushed to the mobile app and backend portal within a few minutes
For All Shelf Conditions
Ivy Eye can easily distinguish similar SKUs from each other
Reads small price tags using our advanced image recognition capability
Accurately analyzes images captured at any angle
Handles low-lighting shelf conditions
Enhanced Retail Execution
Ivy Eye is built into our retail execution application, providing a unified experience
The solution pushes its results and actionable insights directly to the mobile app as well as the back office
Enhanced capabilities to stay ahead of the competition
FAQs
The initial training process takes about 3-5 weeks. During this timeframe, the AI model reaches an accuracy level above 90%. That is when we start generating the KPIs; with another 2-3 weeks of training, it reaches close to 97% accuracy.
Ivy Eye's accuracy level depends on properties of the input images such as lighting conditions, camera clarity, focal distance, and image angle. We have seen slight variations across different product categories and outlet conditions, but accuracy has never fallen below 95% in these cases.
Yes, the solution is compatible with both iOS and Android devices.
Yes, it is possible. We have standard APIs to transfer images from a third-party application to our image recognition engine and send the tracked KPIs back to their system.
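As a rough illustration of such an integration, the sketch below builds a JSON payload that a third-party app might send to an image recognition endpoint. The field names (`store_id`, `session_id`, `image`) and the base64 transport are assumptions for illustration only; the actual Ivy Eye API contract would come from the integration documentation.

```python
import base64
import json

def build_recognition_request(image_bytes: bytes, store_id: str, session_id: str) -> str:
    """Package a shelf photo as a JSON payload for a hypothetical recognition API.

    The image is base64-encoded so it can travel inside a JSON body; the
    response would carry the tracked KPIs back to the caller's system.
    """
    payload = {
        "store_id": store_id,        # hypothetical field: which outlet was visited
        "session_id": session_id,    # hypothetical field: groups images of one aisle
        "image": base64.b64encode(image_bytes).decode("ascii"),
    }
    return json.dumps(payload)

# Example: serialize a (fake) JPEG for store ST-001, capture session S-42.
request_body = build_recognition_request(b"\xff\xd8fake-jpeg", "ST-001", "S-42")
print(json.loads(request_body)["store_id"])  # ST-001
```

In practice the payload would be POSTed over HTTPS with authentication, and the KPI results would be pushed back through a companion callback or polling endpoint.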
We will have to do some incremental retraining of the model with images of the new products. The retraining can be completed within a few days and requires only minimal effort.
The process involves a workshop to understand the requirements, system set-up, machine training, and finally, roll-out to the end users. Implementation typically takes about 8-12 weeks from start to finish.
No, image recognition can only process KPIs based on what is visible in an image. It can count facings, i.e., the first row of products on a shelf. The second and subsequent rows are not visible in the image and hence will not be detected. However, we have separate stock capture modules to record that information manually.
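To make the facing count concrete, here is a minimal sketch of how front-row facings could be tallied from per-image detections. The detection format (one dict per visible facing, with an SKU label and bounding box) is an assumption for illustration, not the engine's actual output schema.

```python
from collections import Counter

def count_facings(detections):
    """Count visible facings per SKU.

    detections: list of dicts like {"sku": "COLA-330", "bbox": (x, y, w, h)},
    one entry per product instance visible in the front row of the shelf.
    Products hidden behind the front row never appear as detections,
    which is why depth-of-shelf stock must be captured separately.
    """
    return Counter(d["sku"] for d in detections)

# Three facings visible on one shelf image: two colas, one lemon soda.
shelf = [
    {"sku": "COLA-330", "bbox": (0, 0, 50, 120)},
    {"sku": "COLA-330", "bbox": (55, 0, 50, 120)},
    {"sku": "LEMON-330", "bbox": (110, 0, 50, 120)},
]
print(count_facings(shelf))  # Counter({'COLA-330': 2, 'LEMON-330': 1})
```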
Guided image capture addresses these challenges. We allow a 20% buffer of overlap between captured images so that merchandisers can capture the aisle in sequential order, minimizing errors and duplicate counting of products.