American engineer and cloud-platform specialist Terren Peterson published a story on the FreeCodeCamp blog about how he built JavaWatch, a device that automatically reorders coffee. The product is based on a Raspberry Pi controller that takes pictures of a coffee jar and sends them to the cloud, where an AI service scans each image and determines whether coffee beans are present. When the beans run out, the system sends a request to Amazon Dash, and a short while later a courier delivers the product to the engineer's home. The publication vc.ru published a translation of the material.
Over the years I have perfected my method of preparing for a trip to the store. First I open the fridge and carefully scan it a few times. Then I do the same with the other cabinets in the kitchen, and finally I make a list of all the missing ingredients.
As much as I enjoy this ritual, I am sure there is a better way to restock supplies. But I haven't seen a suitable solution. And although I have tried various mobile apps to become more organized, I still keep falling back on this simple "paper" method. I like the benefits of shopping online on Amazon, but there is still a delivery delay, which requires more planning time than I can spare.
Then an idea came to me: "Maybe the solution is to track remaining stock better?" And I started thinking in this direction. Since everyone in our house loves coffee, and the Internet offers a great selection of it, I decided to start with this product.
Participating in a competition run by the electronics-enthusiast community Hackster motivated me to make the leap from a simple concept to writing the code that fully automates coffee replenishment in my house. The full version of the project can be found on its page on Hackster. One of the technologies offered by Amazon is called Dash: programmable buttons for ordering items from its website.
Some are already set up to sell goods of a particular brand: detergent, batteries, snacks, and so on. The technology used in these buttons can also be embedded in other devices. This lets you replace the button with a different sensor that requests the needed goods. After an intense brainstorm, this is the idea I came up with.
After I came up with the concept and a catchy name for the device (JavaWatch), I sketched out the architecture and how it would work. Between the Raspberry Pi-based controller and the Dash Replenishment API I added Amazon's recently launched image-recognition service, Amazon Rekognition. Here is the diagram.
The key idea of the project is to use the AWS image-recognition service to order coffee automatically. I came across excellent examples of using this technology for face recognition, but I'm probably the only person pointing it at coffee beans. Here's how it works.
The Raspberry Pi camera takes pictures of the coffee jar at intervals and uploads the images to AWS S3. Each photo is assigned a unique address, which is used when calling the Rekognition API that applies the artificial intelligence. Here is a picture of a full jar of coffee taken by the device.
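A minimal sketch of this capture-and-upload loop, assuming the `raspistill` camera tool and the `boto3` AWS SDK; the bucket name, key scheme, and polling interval are my own placeholders, not details from the article:

```python
import subprocess
import time
from datetime import datetime, timezone

BUCKET = "javawatch-snapshots"   # hypothetical bucket name
INTERVAL_SECONDS = 3600          # assumed interval between snapshots


def make_key(now: datetime) -> str:
    """Build a unique S3 key per snapshot, so each photo can be
    referenced individually when calling Rekognition."""
    return "snapshots/" + now.strftime("%Y%m%d-%H%M%S") + ".jpg"


def capture(path: str) -> None:
    """Take one photo with the Raspberry Pi camera via raspistill."""
    subprocess.run(["raspistill", "-o", path], check=True)


def upload(path: str, key: str) -> None:
    """Upload the snapshot to S3 so the Rekognition API can read it."""
    import boto3  # imported here so the pure helper above works without AWS
    boto3.client("s3").upload_file(path, BUCKET, key)


if __name__ == "__main__":
    while True:
        key = make_key(datetime.now(timezone.utc))
        capture("/tmp/snapshot.jpg")
        upload("/tmp/snapshot.jpg", key)
        time.sleep(INTERVAL_SECONDS)
```

The loop is guarded by `__main__` so the key-naming helper can be reused or tested without a camera or AWS credentials.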
In response, the Rekognition API predicts what might be depicted in the picture. I was lucky that the AI is able to recognize beans ("Bean"); for this photo it predicted with 73% probability that they are present in the picture. Many of the other guesses were also reasonable — the photo could plausibly contain a bottle, for instance — but for my purposes they are irrelevant.
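That check can be sketched as follows, assuming boto3's `detect_labels` response shape; the helper names and the 50% `MinConfidence` value are illustrative choices of mine, not from the article:

```python
def bean_confidence(response: dict) -> float:
    """Return Rekognition's confidence (0-100) that beans appear in the
    photo, or 0.0 if the 'Bean' label is absent from the response."""
    for label in response.get("Labels", []):
        if label["Name"] == "Bean":
            return label["Confidence"]
    return 0.0


def detect_labels(bucket: str, key: str) -> dict:
    """Ask Rekognition to label an image already uploaded to S3."""
    import boto3  # real AWS call; kept out of module import for this sketch
    client = boto3.client("rekognition")
    return client.detect_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MinConfidence=50,
    )


# Shape of a detect_labels response, abridged to the fields used above:
sample = {"Labels": [{"Name": "Bean", "Confidence": 73.0},
                     {"Name": "Bottle", "Confidence": 61.2}]}
print(bean_confidence(sample))  # 73.0
```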
So what happens when the coffee supply is almost gone? Here is a snapshot the controller took on such a day. And here is the AI's answer.
It worked for me: "Bean" dropped out of the results and was no longer mentioned at all. Parsing a set of such responses allowed me to formulate the criteria by which the system places an order to refill the jar.
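The article doesn't give the exact rule, so here is one hedged way such a criterion could look: order only after "Bean" has been missing (or below a confidence floor) for several consecutive snapshots, so a single bad photo doesn't trigger a purchase. The threshold and the miss count are assumptions of mine:

```python
MIN_CONFIDENCE = 50.0   # assumed floor below which beans count as absent
MISSES_TO_ORDER = 3     # assumed number of consecutive empty-looking photos


def should_reorder(recent_confidences: list[float]) -> bool:
    """True if the last MISSES_TO_ORDER snapshots all lacked beans;
    a True result would be the point to call the Dash Replenishment API."""
    tail = recent_confidences[-MISSES_TO_ORDER:]
    return len(tail) == MISSES_TO_ORDER and all(c < MIN_CONFIDENCE for c in tail)


print(should_reorder([73.0, 70.5, 0.0, 0.0]))  # False: only two misses so far
print(should_reorder([73.0, 0.0, 12.4, 0.0]))  # True: three misses in a row
```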
Adding the ability to order a specific type of coffee was another task. To deal with it, I created a simple site that lets me register my device with Amazon. I explained how to integrate the device with the Amazon product catalog in a video.
Once coffee was done, I decided to check, with other products, how useful AI can be in the kitchen. It looks like the model still needs training. But it's a good start, and it is not surprising that the AI could not pin down that the picture shows coffee beans.
After working on the project a little longer, I found that the process is probably too complicated. As much as I like building things with the Raspberry Pi, the next UX step for the product is an application that uses the camera of the smartphone everyone already has in their pocket. If I can make the solution fully portable, it will be a big step forward, and I will be able to cut the costs that building the devices would require.
Most of the tasks required for recognition have already been solved in the cloud, so the question is simply how to take the photo. The Rekognition service is only a few months old, and if model improvements affect purchases on Amazon, I bet it will get very good very quickly.