Recycle your waste with AI and Firebase Extensions

Discover the innovative approach to recycling education with KnowWaste, an application powered by AI, Firebase Extensions, and Flutter.

Pavel Ryabov
11 min read · Oct 20, 2023


Introduction

We live in the 21st century, an era defined not only by innovations and cutting-edge technologies, but also by new challenges. As we move up the hierarchy of needs, we find ourselves thinking about issues that didn’t concern us much before: mental health, carbon footprint, sustainability, and recycling. The key in today’s world is to recognize these new problems, understand their nature, and raise awareness to achieve effective resolutions.

Recycling remains a top global issue today. Even though it has become a part of our lives over the last decades, many people still find it confusing. While the debate over whether individuals or companies should take the lead in recycling continues, there is no doubt that a basic understanding of waste management is essential for everyone.

The question is, how can we make recycling education more accessible? How can we share information in a way that doesn’t turn people away, and how can we make it an enjoyable and engaging subject?

The best tool at our disposal is modern technology. We have a wide range of tools that can simplify and enhance the learning process and make it enjoyable. I believe that the main technology in this regard is AI, and that’s what we’ll discuss in this article.

Examples of AI Utilization in Recycling

Artificial intelligence is already actively used in waste management and recycling education around the world. One notable example is Oscar by Intuitive AI, a smart trash bin with a scanner that guides users on where to place their waste items. New technology always arouses interest, especially when people get to try it themselves, and in this case Oscar also sparks a voluntary desire to try recycling.

Photo courtesy of Intuitive AI

Similar technologies are already in full swing across different sectors, ranging from smart trash bins in public spaces to companies like Recycleye, which create advanced AI robots capable of sorting items into different waste categories directly on the conveyor belt.

These innovations demonstrate how AI can revolutionize the way we approach recycling, making it more accessible and engaging for individuals and businesses.

Photo courtesy of Recycleye

The idea of KnowWaste

In August of this year, I came across the Firebase Builders ’23 competition organized by Invertase. The requirements were simple: create a project using as many Firebase Extensions as possible. Particular attention was paid to AI Extensions.

Inspired by the examples mentioned earlier, I got the idea to develop something similar, but in a more portable and accessible form. The solution was obvious — an AI Recycling Scanner application. I decided to develop the application using Flutter and Firebase, with the power of Google Cloud Vision AI. This is how KnowWaste (pun intended) was born.

The initial idea was to create a Waste Scanner — a utility that would allow users to take a photo of an item to identify it and provide helpful recycling information. The primary role of AI here would not only be data processing but also presenting information in a conversational and user-friendly way. Although the concept seems simple, it took some clever tricks and strategies to bring it to life, and we’ll cover those later in the article.

As development progressed, KnowWaste became more than just a Waste Scanner. The app now includes Articles about recycling with the AI summarization feature, as well as recycling Guides and Challenges. All these features are created to gamify the recycling process, which makes it a fun and interesting activity for people.

Waste Scanner: Implementation

The Scanner quickly became the most challenging part of the project. The competition’s requirement to use as many AI Firebase Extensions as possible seemed, at first, to work against my project: I knew that the Scanner logic would require quite a bit of AI processing, which would result in many API calls and data processing in between.

However, after further exploring the available resources, I realized that using Extensions allowed me to build a completely autonomous analysis process, starting from object detection and ending with the results output. This entire series of tasks is triggered by a simple action — image upload to Cloud Storage. Exciting, right? We will discuss the implementation in detail, but first, let’s talk about Firebase Extensions.

What the… Firebase Extensions?

First introduced back in 2019, Firebase Extensions are pre-packaged solutions that aim to simplify the development process and make it more efficient. They are incredibly handy, as they allow you to implement complex backend solutions with just one click. Some of these solutions include automated user creation and deletion, email notifications, payment system integration, and various AI tasks. Extensions can be easily installed into any Firebase project and feature customizable parameters.

Earlier this year, Firebase allowed anyone to create their own extensions and publish them to Extensions Hub.

Now, back to KnowWaste. The processing steps for the Scanner looked like this:

  1. Detect objects in the image.
  2. Recognize object type and material.
  3. Summarize all gathered information and return recycling advice.

For steps 2 and 3, I quickly found ideal solutions in the form of Label Images with Cloud Vision AI and Language Tasks with PaLM API extensions, respectively.

The former uses Google Cloud Vision AI to label (describe in terms of appearance and features) all the objects in a photo, while the latter uses the capabilities of the PaLM 2 language model (one of Google’s LLMs) to handle complex language tasks and queries.

Since the Extensions Hub didn’t have a solution for step 1 (object detection), but Vision AI had an API for it, I created my own extension called Object Detection with Cloud Vision AI.

After installing the extensions, creating supporting Cloud Function triggers, and extensively testing different approaches, I came up with the following logic:

  1. The user takes a picture of the waste.
  2. The image is uploaded to Cloud Storage.
  3. The frontend starts listening for new documents in the analyzedWaste collection.
  4. Two extensions are triggered on image upload: Detect Objects and Label Images.
  5. After processing, these extensions create new documents in detectedObjects and labeledWaste collections. These documents contain data about detected objects (name and score) and labeled items (description, score).
  6. processLabeledObjects and processDetectedObjects onCreate triggers listen to new documents in the corresponding collections. Then, the data is processed and stored in the wasteQueue collection under the filename, which is a UUID v4.

A special onUpdate trigger listens to updates in the wasteQueue collection and, when both labels and objects fields are present, gathers all data and stores it in the analyzedWaste collection. The most important fields in this collection are labelsString and objectsString, containing stringified lists of detected object data.
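The merging step above can be sketched as a pure function. Here is a minimal TypeScript illustration of the idea; the type and function names are mine, not the actual KnowWaste source:

```typescript
// Shape of the scored items produced by the detection and labeling
// extensions (illustrative, not the extensions' exact schemas).
interface ScoredItem {
  name: string;
  score: number;
}

// A wasteQueue document: each extension trigger fills in its own field,
// so either one may still be missing at any given moment.
interface WasteQueueDoc {
  labels?: ScoredItem[];
  objects?: ScoredItem[];
}

// Turn a list of scored items into the "Name: score, Name: score"
// string stored in labelsString / objectsString.
function toScoredString(items: ScoredItem[]): string {
  return items.map((item) => `${item.name}: ${item.score}`).join(", ");
}

// Mirror of the onUpdate trigger's core decision: only once both
// fields are present is the merged analyzedWaste document produced.
function buildAnalyzedWaste(
  doc: WasteQueueDoc
): { labelsString: string; objectsString: string } | null {
  if (!doc.labels || !doc.objects) return null; // still waiting for the other extension
  return {
    labelsString: toScoredString(doc.labels),
    objectsString: toScoredString(doc.objects),
  };
}
```

The real trigger does this inside a Cloud Function and writes the result to Firestore, but the "wait until both halves arrive, then stringify" logic is the heart of it.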

An example of labelsString:

“Drinkware: 0.9199069738388062, Beverage: 0.9056464433670044, Tableware: 0.904464066028595, Can: 0.833684504032135, Serveware: 0.8178635239601135, Computer monitor: 0.8141725659370422, Wood: 0.7573911547660828, Grid: 0.7550596594810486, Wall: 0.7516270875930786”

An example of objectsString:

“Beverage can: 0.7620814442634583”

Image example for the data above

The last thing we need to do is make sense of all the data and present the analysis to the user. This is where the Language Tasks extension comes in handy. Its configuration exposes several fields we can change: I set the variable fields to labels and objects, the response field to advice, and the trigger to changes in the analyzedWaste collection.

PaLM Configuration

Now, each time a new document is written to the analyzedWaste collection, the extension will be triggered. It will retrieve data from the objects and labels fields and store the generated text in the advice field of the same document.

With this, more challenges arise. We not only have to handle the data but also organize it into three blocks: recyclability, advice, and tips & tricks. To make this happen, we use a complex query that tells PaLM to process information and wrap the response into tags. This process works like a charm, with PaLM doing an impressive job of keeping track of data and creating labels.

Here is the prompt that I came up with after hours of configuring and tuning prompts with PaLM itself. While it can still be improved, this prompt works really well.


You are a Recycling Assistant. You are provided with detected objects: [{{objectsString}}] and possible labels for them: [{{labelsString}}]. Each label and object has a probability next to it.
1) Remove all labels from the list above that might cause confusion (describing background, floor, wall, people, etc.). Always remove labels like "hand", "finger", "nail", "gesture", "thumb". Some materials might describe the OBJECT.

2) Analyze the remaining labels. Choose the first OBJECT from the objectsString (the one with max probability). If OBJECT is "Unknown object" OR "Packaged good", IGNORE it and only use labels. OBJECT must not be "Unknown object" or "Packaged good"; if that is the case, define the OBJECT based solely on the labels.

3) Define OBJECT_MATERIAL based on the labels (could be plastic, glass, etc.)

4) Recognize if the object is a living thing. If it is, do not write tips/advice and set OBJECT_IS_RECYCLABLE to false.

5) Tell me IN DETAIL how to handle the recycling of the OBJECT (which container to use, how to prepare for recycling, etc.). Write it as advice, and be friendly, make it pretty long and interesting to read. If you notice multiple objects in the labels that can be part of one, write a recycling guide for parts of this object.
Use formatting and emojis. Follow the same structure: "I think the object in the photo is a OBJECT_NAME. Let me help you recycle it: ", then write the steps as a list with numbers, and include specific recycling practices for materials like glass, paper, and plastic, based on WM's Recycling 101.

6) If the object is not recyclable, MUST write "<tips>No tips</tips>", otherwise, write tips and interesting recycling solutions for this object, always start that section from the tag <tips> and end with </tips>. The last line of your response must be tags containing OBJECT NAME and OBJECT_MATERIAL like this:

<object-material>OBJECT_MATERIAL</object-material><object-name>OBJECT_NAME</object-name><object-recyclable>OBJECT_IS_RECYCLABLE</object-recyclable>
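Under the hood, the Language Tasks extension substitutes the {{objectsString}} and {{labelsString}} placeholders with the corresponding fields of the triggering document before sending the prompt to PaLM. Conceptually, the substitution works like this (a simplified sketch of handlebars-style templating, not the extension’s actual code):

```typescript
// Replace {{key}} placeholders in a prompt template with document
// field values; unknown placeholders are left untouched.
function fillPrompt(template: string, fields: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match: string, key: string) =>
    key in fields ? fields[key] : match
  );
}

// Example: inject the stringified detection results into the prompt.
const prompt = fillPrompt(
  "You are provided with detected objects: [{{objectsString}}].",
  { objectsString: "Beverage can: 0.7620814442634583" }
);
```

This is why keeping labelsString and objectsString as plain strings in the analyzedWaste document is convenient: the extension can drop them straight into the prompt.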

Here is an example of the advice output:

I think the object in the photo is a beverage can. 

<object-material>Aluminum</object-material>
<object-name>Beverage can</object-name> <object-recyclable>True</object-recyclable>

Let me help you recycle it:
1. Rinse out the can and remove any labels.
2. Place the can in a recycling bin.
3. The can will be recycled into new aluminum products.

<tips>
* Check with your local recycling center to find out what materials are accepted.
* Look for a recycling symbol on the can to make sure it can be recycled.
</tips>
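On the client, a tagged response like the one above can be split into its blocks by extracting the tags before rendering. Here is a simplified TypeScript sketch of that parsing; the type and function names are illustrative, not the actual KnowWaste code (which is written in Flutter/Dart):

```typescript
// The UI blocks and metadata recovered from a tagged PaLM response.
interface ParsedAdvice {
  material: string | null;
  name: string | null;
  recyclable: boolean;
  tips: string | null;
  advice: string; // free-form advice text with the tags stripped out
}

// Extract the inner text of the first <tag>…</tag> pair, if present.
function extractTag(text: string, tag: string): string | null {
  const match = text.match(new RegExp(`<${tag}>([\\s\\S]*?)</${tag}>`));
  return match ? match[1].trim() : null;
}

function parseAdvice(raw: string): ParsedAdvice {
  const tips = extractTag(raw, "tips");
  const tags = ["tips", "object-material", "object-name", "object-recyclable"];
  return {
    material: extractTag(raw, "object-material"),
    name: extractTag(raw, "object-name"),
    recyclable: extractTag(raw, "object-recyclable")?.toLowerCase() === "true",
    tips: tips === "No tips" ? null : tips,
    // Strip every known tag pair, leaving only the conversational advice.
    advice: tags
      .reduce(
        (text, tag) =>
          text.replace(new RegExp(`<${tag}>[\\s\\S]*?</${tag}>`, "g"), ""),
        raw
      )
      .trim(),
  };
}
```

A regex-based parser like this assumes the model closes every tag it opens, which is exactly why the prompt insists on the tag structure so firmly.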

All the tags are processed in the app, resulting in a nice-looking UI:

Does it always work perfectly? Absolutely not! Yet, a significant part of the fun with KnowWaste comes from scanning items all around your home and discovering what kind of crazy recycling advice the AI might return. Often, the hiccups are due to Object Detection AI not always accurately identifying objects — a limitation that will certainly be improved with future Cloud Vision AI updates.

The AI needs to “touch grass”

Other extensions used

Besides the AI extensions, I’ve used a couple of others that, while they might be overkill for the tasks at hand, simplified the development process a lot. For example, I use Search Firestore by Algolia for precise text search in articles and guides, and Firestore User Document by Rowy for the automatic creation of user documents.

I’ve also added a new extension I created, FlowLinks. It provides a powerful alternative to Firebase Dynamic Links, which are scheduled for deprecation in 2025. The extension uses various Firebase services to let you create deep links with customizable paths and metadata.

FlowLinks allowed me to create shareable deep links for different content types, including articles, guides, and challenges. If you would like to find out more about FlowLinks, and how they replace Dynamic Links, you can explore them here.

Future plans

While KnowWaste is quite a polished application, there are still some shortcomings to resolve. Mainly, the problems come from the AI incorrectly determining the object type (you might see a lot of generalized “packaged good” responses), which, I am sure, will be resolved with more advanced versions of Cloud Vision. Apart from improving the current functionality, I am planning to integrate new tools, for example a recycling symbol scanner that would let users determine the type of plastic and see the correct recycling process.

Did using extensions pay off?

Absolutely! While all of the above can be recreated using API calls, the “step-by-step” data processing system for the Scanner allows me to cut and replace steps, always know why and how one of the processes went wrong, and track the exact progress of a task, which is great for the user experience. Using all the other extensions significantly increased my development speed, allowing me to deliver KnowWaste in just two weeks!

Of course, I did encounter limitations that required some creative solutions. For example, instead of relying on PaLM to correctly generate tags for the output, I could have used multiple API calls. Another problem lies in the installation speed of extensions, where I sometimes had to wait for 6–8 minutes for a parameter change to take effect. This, however, is completely understandable, since the parameter changes rebuild all Cloud Functions related to the extension, which always takes some time.

Conclusion

With all that said, I do believe that extensions hold a bright and promising future, especially as more and more big companies and enthusiasts create solutions that greatly simplify and speed up the development process.

Remember, you also have the opportunity to create your own extension! If you have an interesting extension idea, and it is not already present in the Extensions Hub, you can find an in-depth tutorial on how to create it here.

KnowWaste is fully open source and available for download on the App Store and Google Play.

📣 Follow me on Twitter (𝕏) to keep in touch and receive a daily dose of Firebase and Flutter blogs, as well as tips and tricks.
