
Published on Oct 31st, 2023

The Power of TypeChat + LLM AI models

Getting a consistent output format from a large language model can be challenging: the model may not strictly follow your instructions, or the user's request may be ambiguous. This is where TypeChat can help by turning an AI response into actionable instructions. Where traditional text prompts can be misinterpreted by the AI, TypeChat lets you specify the exact output format. A classic example is taking food orders: instead of parsing free-form text from the AI, you define a schema with the precise structure, such as items, sizes, and toppings.

In this tutorial, I'll walk through building a very simple AI system that manages your smart home. A request like "turn on the bedroom light" needs a function call behind the scenes to act on it. With TypeChat, this becomes straightforward: all you need is a well-defined schema. Let's set up a schema with lights and an automated coffee machine:

export interface Actions {
    actions: (Light | CoffeeMachine | UnknownText)[];
}


// For prompts that don't match any existing category
export interface UnknownText {
    type: "unknown";
    text: string;  // Unrecognized text
}


export interface Light {
    type: "light";
    room: "bedroom" | "bathroom" | "kitchen" | "living room";
    value: "on" | "off";
}


export interface CoffeeMachine {
    type: "coffee machine";
    program: "espresso" | "latte" | "cappuccino" | "americano";
    milk?: "whole" | "skim" | "soy";
    size?: "small" | "medium" | "large";
}

This is a very basic schema, so feel free to add or modify components as you want.
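For example, you could extend it with a thermostat action. The Thermostat interface below is a hypothetical illustration, not part of the tutorial's schema; to use it, you would also add it to the union in Actions:

```typescript
// Hypothetical extension (not part of the tutorial's schema):
// a thermostat action that sets a target temperature for a room.
export interface Thermostat {
    type: "thermostat";
    room: "bedroom" | "bathroom" | "kitchen" | "living room";
    temperature: number;  // target temperature in degrees Celsius
}

// The Actions union would then become:
// actions: (Light | CoffeeMachine | Thermostat | UnknownText)[];
```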

Choosing Your AI Model

Having defined our schema, the next step is to pick an AI model. While TypeChat is compatible with various OpenAI models out of the box, for this tutorial we're opting for the much cheaper Zephyr 7B API from Lemonfox.ai. For very complex tasks, a more powerful model like GPT-4 might be a better fit.

OPENAI_MODEL=zephyr-chat
OPENAI_API_KEY=YOUR_LEMONFOXAI_API_KEY
OPENAI_ENDPOINT=https://api.lemonfox.ai/v1/chat/completions

Setting Up the Translator

To process user requests, we need a translator. This translator will take user inputs and convert them into actionable commands, as defined by our schema.

import fs from "fs";
import path from "path";
import { createLanguageModel, createJsonTranslator } from "typechat";
import { Actions } from "./actionsSchema";

const model = createLanguageModel(process.env);
const schema = fs.readFileSync(path.join(__dirname, "actionsSchema.ts"), "utf8");
const translator = createJsonTranslator<Actions>(model, schema, "Actions");

Handling User Requests

To demonstrate, let's assume a user request is: “I just woke up, turn on the lights and I need a big coffee with milk.”

const response = await translator.translate(request);
if (!response.success) {
    throw new Error(response.message);  // translation or schema validation failed
}
console.log(JSON.stringify(response.data));

The output will be structured JSON like this:

{
  "actions": [
    {
      "type": "light",
      "room": "bedroom",
      "value": "on"
    },
    {
      "type": "coffee machine",
      "program": "espresso",
      "milk": "whole",
      "size": "large"
    }
  ]
}

This JSON can then be processed to trigger the corresponding functions, turning the user's request into reality. Structured output like this is useful in many scenarios: classifying user intent, building natural language UIs, and simplifying prompt engineering.
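As a minimal sketch of that processing step, a dispatcher can switch on each action's type field. The Action union mirrors the schema above, and the returned strings stand in for hypothetical calls to real device APIs:

```typescript
// Minimal dispatcher sketch. The Action union mirrors the schema above;
// the returned strings are placeholders for real smart-home API calls.
type Action =
    | { type: "light"; room: string; value: "on" | "off" }
    | { type: "coffee machine"; program: string; milk?: string; size?: string }
    | { type: "unknown"; text: string };

function dispatch(action: Action): string {
    switch (action.type) {
        case "light":
            // e.g. call your light-control API here
            return `light: ${action.room} ${action.value}`;
        case "coffee machine":
            // optional fields fall back to a default
            return `coffee: ${action.size ?? "medium"} ${action.program}`;
        case "unknown":
            return `unhandled request: ${action.text}`;
    }
}
```

Because the union is discriminated by type, each case narrows the action, and the compiler checks that every action kind is handled.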

To find more examples, check out the official TypeChat repository on GitHub.
