Santa Clara : Santa Clara University, 2022.


Computer Science and Engineering

First Advisor

David C. Anastasiu


In the United States, a significant portion of landfill waste is undiverted compostable and recyclable material. Instead of being sent to landfills, compostable material can be used as nutrient-rich fertilizer and soil in local communities. Likewise, recyclable material can be processed at appropriate facilities to reuse materials and reduce the demand for unsustainable resources, like plastics. We seek to reduce the portion of undiverted recyclable and compostable waste by aiding in the proper sorting of waste at common sites of disposal, particularly trash cans in cafeterias and restaurants. We have designed a mobile, augmented reality, 3D object detection model that uses a convolutional neural network to detect and classify waste in real time. The model distinguishes between landfill waste, recyclable material, and compostable material, corresponding to the common categories of existing sorting bins.

In our initial research, we found several other projects that developed similar systems to address this problem. CompostNet [1] and SpotGarbage [2] also applied machine learning algorithms to identify waste through mobile devices. However, these solutions are limited in their categories and user interaction. SpotGarbage detects only one category of waste within photos, and thus cannot be used for waste sorting and diversion. While CompostNet can distinguish multiple classes of waste, such as trash, recycling, and compost, it works only on images featuring a single object. As a result, users must manually take a photo and then wait for a classification of the image.

Our solution uses an object detection algorithm to allow users to classify their waste in real time. Users only need to hold their waste in front of the camera, and bounding boxes are drawn over each item. The advantage of this approach is that users do not need to interact with the mobile device to take a photo, and thus can quickly and efficiently sort their waste. Additionally, with bounding boxes, we can indicate specifically which object is waste, rather than classifying an entire photo as "recycling". The final advantage of our mobile application is the ability to classify multiple objects. CompostNet has little practical usage, as users would not hold up their trash one item at a time. With our solution, users can hold up a whole tray of trash, food, and recyclable items, and each will be appropriately identified.

With the introduction of our Augmented Reality Glasses System, we provide even more flexibility and additional use cases. Users no longer need to point phones at their trash; instead, simply looking at trash detects it. While AR glasses are currently expensive and unwieldy, we believe that in the future they will become more lightweight and integrated with our everyday devices. By incorporating them into our solution, we prepare for a more technologically advanced future. Overall, we have expanded upon previous solutions to create a more practical and flexible system. Our solution achieved a mean average precision of 0.863 on our testing dataset and an average inference time of 59.98 ms, indicating that it is relatively effective at detecting and categorizing different waste objects.
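The per-frame logic described above can be sketched in a few lines: a detector returns (label, confidence, bounding box) tuples for each frame, and each detected class is mapped to one of the three bins. This is a minimal illustrative sketch, not the authors' actual implementation; the class names, the confidence threshold, and the default-to-landfill rule are all assumptions made for the example.

```python
# Hypothetical mapping from detected object classes to disposal bins.
# The class names here are illustrative, not the model's real label set.
BIN_FOR_CLASS = {
    "plastic_bottle": "recycling",
    "aluminum_can": "recycling",
    "food_scrap": "compost",
    "paper_napkin": "compost",
    "chip_bag": "landfill",
}

def sort_detections(detections, min_confidence=0.5):
    """Map raw (label, confidence, box) detections to (bin, box) pairs,
    dropping low-confidence hits. Unknown classes default to landfill
    (an assumption for this sketch)."""
    results = []
    for label, confidence, box in detections:
        if confidence < min_confidence:
            continue
        bin_name = BIN_FOR_CLASS.get(label, "landfill")
        results.append((bin_name, box))
    return results

# Example frame containing multiple objects, as on a cafeteria tray.
frame_detections = [
    ("plastic_bottle", 0.91, (10, 20, 80, 160)),
    ("food_scrap", 0.84, (120, 30, 200, 110)),
    ("chip_bag", 0.42, (210, 40, 260, 90)),  # below threshold, ignored
]
print(sort_detections(frame_detections))
# → [('recycling', (10, 20, 80, 160)), ('compost', (120, 30, 200, 110))]
```

In the real application this mapping would run once per camera frame, with the returned bins and boxes rendered as colored bounding-box overlays, which is what lets a user sort a full tray of items without taking a photo.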