Gemini AI API Demo

Gemini API Documentation

This documentation describes the Gemini API endpoints available in this application. All endpoints accept POST requests and return JSON responses, except the streaming endpoint, which returns a text/event-stream.

Base URL: https://feer-mcbot.feer-mcqueen.com/api
Note: All endpoint paths shown below already include the /api prefix.

Text Generation Endpoint
POST /api/gemini/generate-text

Generate text content using Gemini AI.

Request Parameters:
  • prompt (required): The text prompt to send to Gemini AI
Response:
{
    "text": "Generated text from Gemini AI..."
}
Example:
// Request
{
    "prompt": "Write a short poem about technology"
}

// Response
{
    "text": "Silicon dreams in digital streams,\nPulsing through our modern schemes.\nConnected minds across the void,\nIn technological embrace deployed."
}
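For reference, here is a minimal client-side sketch in TypeScript for calling this endpoint with fetch. Only the endpoint path, the prompt parameter, and the text response field come from this documentation; the full URL composition and error handling are illustrative.

// Minimal sketch: call the text generation endpoint and return the generated text.
async function generateText(prompt: string): Promise<string> {
  const response = await fetch("https://feer-mcbot.feer-mcqueen.com/api/gemini/generate-text", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  const data = (await response.json()) as { text: string };
  return data.text;
}

// Usage
generateText("Write a short poem about technology").then(console.log);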
Image Analysis Endpoint
POST /api/gemini/analyze-image

Analyze images using Gemini Vision.

Request Parameters:
  • image (required): Image file to analyze (must be less than 4MB)
  • prompt (required): What you want to know about the image
Response:
{
    "analysis": "Description and analysis of the image..."
}
Note: This endpoint requires a multipart file upload, so it cannot be tested with a plain JSON request. Use Postman or a similar tool, or a client-side call like the sketch below.
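The browser-oriented TypeScript sketch below sends the image and prompt as multipart/form-data; that encoding is an assumption about how the upload is expected, while the image and prompt field names come from the parameters above.

// Minimal sketch: upload an image with a prompt as multipart/form-data (assumed encoding).
async function analyzeImage(file: File, prompt: string): Promise<string> {
  const form = new FormData();
  form.append("image", file);   // must be less than 4MB per the parameter notes above
  form.append("prompt", prompt);

  const response = await fetch("https://feer-mcbot.feer-mcqueen.com/api/gemini/analyze-image", {
    method: "POST",
    body: form, // the browser sets the multipart boundary Content-Type automatically
  });
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  const data = (await response.json()) as { analysis: string };
  return data.analysis;
}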
Chat Endpoint
POST /api/gemini/chat

Engage in conversation with Gemini AI.

Request Parameters:
  • message (required): The message to send to Gemini AI
  • history (optional): Array of previous messages in the conversation
Response:
{
    "reply": "Gemini's response to your message..."
}
Example:
// Request
{
    "message": "What are the best practices for Laravel development?",
    "history": [
        {
            "role": "user",
            "message": "Tell me about Laravel"
        },
        {
            "role": "model",
            "message": "Laravel is a PHP web application framework..."
        }
    ]
}

// Response
{
    "reply": "When developing with Laravel, here are some best practices to follow: 1. Use Eloquent ORM for database interactions..."
}
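A minimal TypeScript sketch of a multi-turn exchange is shown below; the role and message keys follow the request format above, while the client-side history bookkeeping is illustrative.

interface ChatTurn {
  role: "user" | "model";
  message: string;
}

// Minimal sketch: send a message together with the accumulated conversation history.
async function chat(message: string, history: ChatTurn[]): Promise<string> {
  const response = await fetch("https://feer-mcbot.feer-mcqueen.com/api/gemini/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message, history }),
  });
  const { reply } = (await response.json()) as { reply: string };

  // Record both sides of the exchange so the next call carries full context.
  history.push({ role: "user", message });
  history.push({ role: "model", message: reply });
  return reply;
}

// Usage
const history: ChatTurn[] = [];
chat("Tell me about Laravel", history)
  .then(() => chat("What are the best practices for Laravel development?", history))
  .then(console.log);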
Stream Content Endpoint
POST /api/gemini/stream

Stream content generation from Gemini AI in real-time.

Request Parameters:
  • prompt (required): The text prompt to send to Gemini AI
  • history (optional): Array of previous messages in the conversation
Response:
Returns a text/event-stream with generated content chunks.

Request Headers:
  • Accept: text/event-stream
  • Content-Type: application/json
Note: This is a streaming endpoint used by the main chat interface. The response is streamed in real-time.
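The TypeScript sketch below consumes the stream with the Fetch API's ReadableStream reader. The exact chunk framing (SSE "data:" lines versus raw text) depends on the server, so the plain TextDecoder handling here is an assumption.

// Minimal sketch: read streamed chunks as they arrive and hand them to a callback.
async function streamContent(prompt: string, onChunk: (text: string) => void): Promise<void> {
  const response = await fetch("https://feer-mcbot.feer-mcqueen.com/api/gemini/stream", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Accept": "text/event-stream",
    },
    body: JSON.stringify({ prompt }),
  });

  const reader = response.body!.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    onChunk(decoder.decode(value, { stream: true }));
  }
}

// Usage: log each chunk as it arrives.
streamContent("Write a short poem about technology", (chunk) => console.log(chunk));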
Token Count Endpoint
POST /api/gemini/count-tokens

Count the number of tokens in a text prompt for pricing estimation.

Request Parameters:
  • text (required): The text to count tokens for
Response:
{
    "totalTokens": 15
}
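For reference, a short TypeScript sketch of a token-count call is below; the totalTokens field matches the response shown above, and the rest is illustrative.

// Minimal sketch: count tokens to estimate cost before sending a long prompt.
async function countTokens(text: string): Promise<number> {
  const response = await fetch("https://feer-mcbot.feer-mcqueen.com/api/gemini/count-tokens", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text }),
  });
  const { totalTokens } = (await response.json()) as { totalTokens: number };
  return totalTokens;
}

// Usage
countTokens("Write a short poem about technology").then(console.log);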
Embeddings Endpoint
POST /api/gemini/embeddings

Generate text embeddings (vector representations) for text.

Request Parameters:
  • text (required): The text to generate embeddings for
Response:
{
    "embeddings": [0.123, -0.456, 0.789, ...]
}

Note: Embeddings are represented as an array of floating point numbers and can be used for semantic search, clustering, and other NLP tasks.
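As an illustration of the semantic-search use mentioned above, the TypeScript sketch below fetches embeddings for two texts and compares them with cosine similarity. The cosine helper is not part of this API, and the example texts are arbitrary.

// Minimal sketch: fetch embeddings and compare two texts with cosine similarity.
async function getEmbeddings(text: string): Promise<number[]> {
  const response = await fetch("https://feer-mcbot.feer-mcqueen.com/api/gemini/embeddings", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text }),
  });
  const { embeddings } = (await response.json()) as { embeddings: number[] };
  return embeddings;
}

// Cosine similarity: 1.0 means identical direction, values near 0 mean unrelated.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Usage: higher scores indicate more semantically similar texts.
Promise.all([
  getEmbeddings("Laravel routing basics"),
  getEmbeddings("How do routes work in Laravel?"),
]).then(([first, second]) => console.log(cosineSimilarity(first, second)));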