Published on Oct 16th, 2023

Transitioning from GPT-3.5 to Mistral AI API in Node.js

Introducing Mistral AI API

In the ever-evolving landscape of artificial intelligence and natural language processing, new advancements are continuously reshaping the way we interact with machines. One such development is the Mistral 7B language model, which is comparable to OpenAI's GPT-3.5. The Mistral 7B API delivers low latency, high throughput, and robust performance compared to larger models, all while maintaining minimal memory requirements. This makes Mistral an attractive choice, especially in scenarios where GPT-3.5 might be too costly at high scale or when you simply want to save on your OpenAI bill.

Additionally, the model is freely available under the permissive Apache 2.0 license, with no usage restrictions, so you can also host your own version of Mistral. In this article, we will explore how to replace GPT-3.5 with the Mistral API in Node.js.

Step 1: Install the Required Library

If you've previously worked with the OpenAI large language models, you're in luck! The Lemonfox.ai Mistral AI API uses the same library as GPT-3.5, which means if you already have it installed, you can use it for Mistral as well. If not, use npm or yarn to install the openai npm package by running one of the following commands in your project's directory:

npm install --save openai
# OR
yarn add openai

Step 2: Configure the API

Before you can make API requests, you need to set up the Mistral AI API with your credentials. This includes specifying your API key and the base URL for the API. Replace YOUR_API_KEY with your actual API key. If you don't have an API key yet, you can get one by signing up here.

import OpenAI from "openai";

const openai = new OpenAI({
  apiKey: "YOUR_API_KEY",
  baseURL: "https://api.lemonfox.ai/v1",
});
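Hard-coding the key is fine for a quick test, but in a real project you'll usually want to read it from an environment variable so it stays out of version control. A minimal sketch of the same configuration (the MISTRAL_API_KEY variable name is our own choice, not something the API requires):

```javascript
import OpenAI from "openai";

// Read the key from an environment variable (the name is our own
// convention) so it never ends up committed to your repository.
const openai = new OpenAI({
  apiKey: process.env.MISTRAL_API_KEY,
  baseURL: "https://api.lemonfox.ai/v1",
});
```

You can then set the variable before starting your app, e.g. `MISTRAL_API_KEY=... node app.js`.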

Step 3: Making an API Request

Now that you have configured the API, you can create a conversation and request a response from Mistral AI. In the Mistral AI API, conversations often start with a system message, followed by alternating user and assistant messages. The system message helps set the context for the assistant.

Here's an example of how to make an API request to Mistral AI:

const completion = await openai.chat.completions.create({
  messages: [
    { role: "system", content: "You’re a social media manager writing Instagram captions." },
    { role: "user", content: "I have a sock online shop and want to post something on Instagram on Mother’s day. Brainstorm some ideas." },
  ],
  model: "zephyr-chat",
});

console.log(completion.choices[0].message.content);

Step 4: Extending the Conversation

Conversations often involve multiple interactions. You can extend the conversation simply by adding more messages to the messages array. For instance, if the user has follow-up questions or the assistant needs additional information, you can keep the conversation going, for example by giving feedback that you want a funnier or more serious text.
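A sketch of what that looks like in practice: append the assistant's reply from Step 3 to the history, add your feedback as a new user message, and pass the extended array to the next request. The `extendConversation` helper name is our own, for illustration:

```javascript
// Build the next turn of a conversation by appending the assistant's
// previous reply and the user's follow-up to the message history.
function extendConversation(messages, assistantReply, userFollowUp) {
  return [
    ...messages,
    { role: "assistant", content: assistantReply },
    { role: "user", content: userFollowUp },
  ];
}

const messages = [
  { role: "system", content: "You're a social media manager writing Instagram captions." },
  { role: "user", content: "Brainstorm Instagram caption ideas for my sock shop on Mother's Day." },
];

// Suppose `completion.choices[0].message.content` from Step 3 held the
// assistant's first batch of ideas; feed it back with your feedback:
const next = extendConversation(
  messages,
  "1. Socks that hug like mom does ...",
  "These are good, but make them funnier."
);

// Pass the extended history as `messages` in another
// openai.chat.completions.create() call to get the revised captions.
console.log(next.length); // 4 messages: system, user, assistant, user
```

Because the API is stateless, the full history must be sent with every request; each new turn just grows the messages array.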

Conclusion

The Mistral AI API offers a cost-effective yet powerful alternative to GPT-3.5, allowing developers to leverage advanced language models for a wide range of applications. Switching takes only a few lines of configuration, so it's easy to test Mistral AI in your Node.js projects and explore the potential of this model.
