Text-Based Models
This guide describes the text-based models available through the Merlin API and shows how to interact with them from a Node.js application.
Available Models
Below is a list of the available text-based model IDs you can use with the Merlin API, along with a brief description of each:
Model ID | Description | Provider | Pricing (in $ per 1K tokens) |
---|---|---|---|
gpt-3.5-turbo | A cost-effective and fast model, suitable for a wide range of tasks. | OpenAI | Pricing |
gpt-4-1106-preview | A preview snapshot of GPT-4 Turbo from November 2023. | OpenAI | Pricing |
gpt-4 | The standard GPT-4 model, offering advanced reasoning capabilities. | OpenAI | Pricing |
gpt-4-0314 | The GPT-4 snapshot from March 14, 2023. | OpenAI | Pricing |
gpt-4-0613 | The GPT-4 snapshot from June 13, 2023. | OpenAI | Pricing |
gpt-4-32k | A GPT-4 model with an extended 32k-token context window. | OpenAI | Pricing |
gpt-4-32k-0314 | The 32k-context GPT-4 snapshot from March 14, 2023. | OpenAI | Pricing |
gpt-4-32k-0613 | The 32k-context GPT-4 snapshot from June 13, 2023. | OpenAI | Pricing |
gpt-3.5-turbo-1106 | The GPT-3.5-turbo snapshot from November 2023. | OpenAI | Pricing |
gpt-3.5-turbo-16k | GPT-3.5-turbo with a larger 16k-token context window. | OpenAI | Pricing |
gpt-3.5-turbo-0301 | The GPT-3.5-turbo snapshot from March 1, 2023. | OpenAI | Pricing |
gpt-3.5-turbo-0613 | The GPT-3.5-turbo snapshot from June 13, 2023. | OpenAI | Pricing |
gpt-3.5-turbo-16k-0613 | The 16k-context GPT-3.5-turbo snapshot from June 13, 2023. | OpenAI | Pricing |
claude-instant-1 | Anthropic's lightweight model, designed for fast completions. | Anthropic | Pricing |
claude-2 | The second generation of Anthropic's Claude model. | Anthropic | Pricing |
gemini-pro | Google's multimodal competitor to GPT-4. | Google | - |
codellama-34b | A Llama-based model fine-tuned for generating code. | Meta | 0.001 |
llama-2-7b | Small open-source model by Meta. | Meta | 0.00015 |
llama-2-13b | Medium-sized open-source model by Meta. | Meta | 0.00025 |
llama-2-70b | The largest open-source Llama 2 model by Meta. | Meta | 0.001 |
Llama-Guard-7b | A small open-source safety-classification model by Meta. | Meta | 0.00015 |
mistral-7b | An instruction-following open-source model by Mistral AI. | Mistral | 0.00015 |
mistral-7b-openorca | A fine-tuned Mistral 7B model trained by Orca Foundation. | Orca Foundation | 0.00015 |
Mixtral-8x7B-Instruct-v0.1 | An open-source mixture-of-experts model by Mistral. | Mistral | 0.0005 |
zephyr-7b-beta | A beta-stage 7B model from Hugging Face. | Hugging Face | 0.00015 |
thenlper-gte-large | A text-embedding model by Thenlper. | Thenlper | 0.00005 |
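All of these IDs are passed to the API as plain strings, so a small local guard can catch typos before a request is sent. The helper below is only a convenience sketch and is not part of the merlin-node SDK; the IDs in the set are copied from the table above and can be trimmed to the ones you actually use.

```js
// Hypothetical local helper (not provided by merlin-node): keep the model IDs
// you rely on in one place so a typo fails fast instead of producing an API error.
const TEXT_MODELS = new Set([
  "gpt-3.5-turbo",
  "gpt-4",
  "claude-2",
  "gemini-pro",
  "llama-2-70b",
  "mistral-7b",
]);

function assertKnownModel(modelId) {
  if (!TEXT_MODELS.has(modelId)) {
    throw new Error(`Unknown model ID: ${modelId}`);
  }
  return modelId;
}

assertKnownModel("gpt-4"); // returns "gpt-4"; an unlisted ID throws
```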
Interacting with Text-Based Models
To interact with the provided models using Node.js, you can use the following sample code:
import { Merlin } from "merlin-node";
const apiKey = "<YOUR_MERLIN_API_KEY>"; // Replace with your API key from Merlin
const merlin = new Merlin({ merlinConfig: { apiKey } });
async function createCompletion() {
  try {
    const completion = await merlin.chat.completions.create({
      messages: [{ role: "system", content: "You are a helpful assistant." }],
      model: "gpt-3.5-turbo", // Adjust model as needed
    });
    console.log(completion.choices[0].message.content);
  } catch (error) {
    console.error("Error creating completion:", error);
  }
}

createCompletion();
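The sample above sends only a system message. In practice you will usually add a user message as well. The sketch below assumes the messages array accepts a "user" role alongside "system", using the same { role, content } shape shown in the sample; adjust the prompt to your own use case.

```js
import { Merlin } from "merlin-node";

const merlin = new Merlin({ merlinConfig: { apiKey: "<YOUR_MERLIN_API_KEY>" } });

async function askQuestion() {
  try {
    const completion = await merlin.chat.completions.create({
      messages: [
        { role: "system", content: "You are a helpful assistant." },
        // Assumption: a "user" role is accepted alongside "system",
        // following the same { role, content } shape as the sample above.
        { role: "user", content: "List three popular unit-testing frameworks for Node.js." },
      ],
      model: "gpt-3.5-turbo", // Any model ID from the table above works here
    });
    console.log(completion.choices[0].message.content);
  } catch (error) {
    console.error("Error creating completion:", error);
  }
}

askQuestion();
```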
Remember to install the `merlin-node` package (for example with `npm install merlin-node`) and to provide your Merlin API key before making requests.
Notes
- Make sure to replace the `model` property's value with the desired model ID from the table above, depending on your task requirements (see the sketch after this list).
- The output of the `createCompletion` call is logged with `console.log`, so the model's response will be visible in the console once the request completes.
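For example, pointing the same request at an Anthropic model only requires changing the `model` value. The fragment below is a sketch, not a complete script: it assumes the `merlin` client and the `createCompletion` function from the sample above, with `claude-2` taken from the table.

```js
// Drop-in replacement for the create() call inside createCompletion():
// only the model ID changes; "claude-2" is one of the IDs listed above.
const completion = await merlin.chat.completions.create({
  messages: [{ role: "system", content: "You are a helpful assistant." }],
  model: "claude-2",
});
console.log(completion.choices[0].message.content);
```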