Building Projects With Chrome’s On-device AI

A Guide to Prototyping with Gemini Nano in Chrome

Using the experimental Prompt API in Chrome to build prototypes with AI features

On-device / Edge AI

On-device AI refers to AI models that run directly on end-user devices, such as smartphones, tablets, or IoT gadgets, without relying on cloud computing or a server to host these models.

This is useful in many ways:

  1. Since the model is on the device, we can run offline inferences.
  2. We can reduce the operational costs of running AI features by offloading certain inferences to the client devices.
  3. Since the data never leaves the device, we can offer more privacy and data security with on-device models.

However, since these models run on memory-constrained devices, they can’t perform the kind of general-purpose inference that a Large Language Model hosted in the cloud can. Instead, they are smaller models with specific capabilities.

Chrome ships with one such model. Let’s take a look at it:

Gemini Nano in Chrome

The latest version of Google Chrome ships with an on-device AI model, Gemini Nano. However, the APIs for interacting with it are experimental and sit behind a feature flag.

So if we intend to use the experimental API, we’ll first need to enable this feature flag through the following steps:

  1. Update to the latest version of Chrome and then visit chrome://flags.
  2. Search for "Prompt API for Gemini Nano".
  3. Enable the flag.
  4. Restart the browser.

Building Applications with Chrome’s On-device AI

Once the feature is enabled, we can access the model from a global object as follows:

window.ai
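
Since the API is experimental, it is worth confirming that the model is actually exposed before calling it. A minimal sketch of such a check (the exact shape of the object may change between Chrome versions):

// Quick feature check for the experimental Prompt API.
// Note: the API surface may change between Chrome releases.
if (window.ai?.languageModel) {
  console.log("Gemini Nano Prompt API is available.");
} else {
  console.warn("Prompt API not found. Check the chrome://flags setting and restart Chrome.");
}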

The Prompt API

We can create a session with a system prompt as follows:

const inferenceSession = await window.ai.languageModel.create({
  systemPrompt: `You are an English teacher. 
                 Analyse a given word and come up with a sentence 
                 to demonstrate the usage of the word.
                 Always respond in English.`
});

Once the inference session is created, we can invoke the prompt method on it as follows:

await inferenceSession.prompt('Precarious');
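
The prompt method resolves with the full response once generation finishes. At the time of writing, the experimental API also exposes a streaming variant, promptStreaming, which returns a stream of partial output. A small sketch, assuming the method is available in your Chrome build (chunk semantics have differed between versions, so treat this as illustrative):

// Stream the response instead of waiting for the full text.
const stream = inferenceSession.promptStreaming('Precarious');
for await (const chunk of stream) {
  // Depending on the Chrome version, a chunk may be a delta or the full text so far.
  console.log(chunk);
}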

A Sample Project

Let’s build the idea above into a simple web application. The system design for our project is shown below:

Our final product will be as follows:

To keep the focus of the article on AI integration, let’s look only at how that part of the code is composed:

The link to the GitHub repository with the complete code is at the bottom of this article.

The AI Helper Methods

The module design we have for this utility is depicted in the image below:

We can implement the above with the following code:

// src/utils/ai.js
export async function setupAI() {
  if (!window.ai?.languageModel) {
    throw new Error("AI feature is not enabled on this browser.");
  }
  const inferenceSession = await window.ai.languageModel.create({
    systemPrompt: `You are an English teacher. Analyse a given word and come up with a sentence to demonstrate the usage of the word.
    Always respond in English in the following format:
    <h3>Usage:</h3> <p>Your sentence here</p>
    <h3>Meaning:</h3> <p>The meaning of the word</p>
    `,
  });
  return inferenceSession;
}

export async function prompt(inferenceSession, word) {
  const response = await inferenceSession.prompt(word);
  return response;
}

Notice the system prompt, where we instruct the model to return the response as HTML elements. This is to simplify our application logic. If we were to deploy this app, it would be a good idea to sanitize and validate the response before injecting it into the DOM. Since this is just a proof of concept, we can skip that part in this context.
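
For reference, a hedged sketch of what that sanitization step could look like, using the DOMPurify library (an assumption for illustration; it is not a dependency of this demo):

// Hypothetical hardening step - DOMPurify is not part of the demo project.
import DOMPurify from "dompurify";

export function sanitizeResponse(rawHtml) {
  // Keep only the simple tags we asked the model to produce.
  return DOMPurify.sanitize(rawHtml, { ALLOWED_TAGS: ["h3", "p", "strong"] });
}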

Setting Up the Inference Session on Content Load

The on-load control flow is as follows:

This could be implemented with the following logic:

// main.js

import { setupAI } from "./src/utils/ai.js";

const initUI = () => {
  // ... code to initialize the user interface
};

let inferenceSession = null;

document.addEventListener("DOMContentLoaded", async () => {
  try {
    inferenceSession = await setupAI();
    initUI();
  } catch (error) {
    console.error(error);
    alert("App failed to load. Please check the console for more details.");
  }
});


Prompting for Word Usage and Definition

The inference control flow can be visualized as below:

We could implement this logic as follows:

// main.js
import { setupAI, prompt } from "./src/utils/ai.js";

const initUI = () => {
  // ... existing code
  setupButtons(document.querySelector("#button-container"), {
    onSubmit: () => {
      const trimmedValue = input.value.trim();

      if (trimmedValue) {
        updateTitle(trimmedValue.charAt(0).toUpperCase() + trimmedValue.slice(1));
        updateContent(`
          <p>Asking the AI for the word usage instructions... Please wait...</p>
        `);

        prompt(inferenceSession, trimmedValue)
          .then((response) => {
            updateContent(`
              <div>${parseBold(response)}</div>
            `);
          })
          .catch((error) => {
            updateContent(`
              <p>Failed to get the usage instructions. Please try again.</p>
            `);
            console.error(error);
          });
      }
    },

    // ... existing code
  });
};

// ... existing code
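
The snippet above also relies on a parseBold helper defined elsewhere in the repository. A minimal sketch of what such a helper might look like, assuming it only converts markdown-style **bold** markers in the model output into <strong> tags:

// Hypothetical version of the repository's parseBold helper.
// Assumes the only markdown the model emits is **bold** text.
export function parseBold(text) {
  return text.replace(/\*\*(.+?)\*\*/g, "<strong>$1</strong>");
}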

Since this is only a proof of concept, we intentionally skip input validation and checks when a user enters a word and clicks Submit.
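
If we did want to tighten this up, a minimal guard could look something like the sketch below (the names are illustrative and not taken from the repository):

// Hypothetical input guard: accept only a single alphabetic word.
function isValidWord(value) {
  return /^[A-Za-z]{2,}$/.test(value.trim());
}

// Inside onSubmit, before calling prompt():
// if (!isValidWord(input.value)) {
//   updateContent("<p>Please enter a single English word.</p>");
//   return;
// }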

GitHub Repositories

The complete functional code for this demo can be accessed from this GitHub repository:

If you are interested in exploring a slightly more sophisticated application built with this on-device AI model and Svelte, this hobby project of mine might interest you:
