Examples

Each example demonstrates a different capability. Examples using the Gemini provider require Google Cloud authentication as described in the installation instructions. Examples using the OpenAI provider require an API key (unless using a local server like Ollama).

The --provider flag is required. Refer to Google’s latest stable models for Gemini and OpenAI’s models for OpenAI.

Text Analysis

Gemini

Classify the sentiment of text read from STDIN, using an inline system instruction and JSON schema, and write the result to STDOUT.

System instructions and schema can be provided as inline strings or loaded from external files as shown in later examples.

echo "this is great" | prompt2json \
    --provider gemini \
    --system-instruction "Classify sentiment as POSITIVE, NEGATIVE, or NEUTRAL" \
    --schema '{"type":"object","properties":{"sentiment":{"type":"string","enum":["POSITIVE","NEGATIVE","NEUTRAL"]},"confidence":{"type":"integer","minimum":0,"maximum":100}},"required":["sentiment","confidence"]}' \
    --location global \
    --model gemini-2.5-flash

Output:

{"sentiment":"POSITIVE","confidence":95}

OpenAI

The same text classification using an OpenAI-compatible Chat Completions endpoint.

echo "this is great" | prompt2json \
    --provider openai \
    --system-instruction "Classify sentiment as POSITIVE, NEGATIVE, or NEUTRAL" \
    --schema '{"type":"object","properties":{"sentiment":{"type":"string","enum":["POSITIVE","NEGATIVE","NEUTRAL"]},"confidence":{"type":"integer","minimum":0,"maximum":100}},"required":["sentiment","confidence"]}' \
    --model gpt-5-nano \
    --api-key "$OPENAI_API_KEY"

Output:

{"sentiment":"POSITIVE","confidence":95}

OpenAI Endpoint on Ollama

When using --url with the openai provider, the --api-key is optional. This allows using local servers that don’t require authentication.

For localhost URLs, the default HTTP timeout is disabled. Set --timeout if you want a deadline.

Use a local Ollama server with the openai provider:

echo "this is great" | prompt2json \
    --provider openai \
    --url "http://localhost:11434/v1/chat/completions" \
    --system-instruction "Classify sentiment as POSITIVE, NEGATIVE, or NEUTRAL" \
    --schema '{"type":"object","properties":{"sentiment":{"type":"string"}},"required":["sentiment"]}' \
    --model llama3.2

OpenAI Endpoint on Vertex AI

Google provides an OpenAI-compatible API on Vertex AI. Use the openai provider with the --url flag to target this endpoint and provide the necessary access token. Note that the model names are prefixed with “google/” and the URL includes the project and location.

prompt2json \
  --prompt "this is great" \
  --provider openai \
  --url "https://aiplatform.googleapis.com/v1beta1/projects/${GOOGLE_CLOUD_PROJECT}/locations/global/endpoints/openapi/chat/completions" \
  --api-key "$(gcloud auth application-default print-access-token)" \
  --model "google/gemini-2.5-flash" \
  --system-instruction "Classify sentiment as POSITIVE, NEGATIVE, or NEUTRAL" \
  --schema '{"type":"object","properties":{"sentiment":{"type":"string","enum":["POSITIVE","NEGATIVE","NEUTRAL"]},"confidence":{"type":"integer","minimum":0,"maximum":100}},"required":["sentiment","confidence"]}'

Attachments

Image Processing

Process an image attachment to extract structured information.

Attach a file using the --attach flag. Supported formats include .png, .jpg, .jpeg, .webp, and .pdf. Files are embedded in the request payload as inline base64-encoded data, and the file extension determines the content type sent in the request metadata. Attachment support varies by provider and model.
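Conceptually, the encoding step works like the following Python sketch. The `to_data_uri` helper is illustrative, not part of prompt2json; it mirrors the inline `data:` URI that appears in the OpenAI request body shown later.

```python
import base64
import mimetypes

def to_data_uri(path: str) -> str:
    """Encode a file as an inline data URI, guessing the content type
    from the file extension (as prompt2json does for --attach)."""
    content_type, _ = mimetypes.guess_type(path)
    if content_type is None:
        raise ValueError(f"unsupported extension: {path}")
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return f"data:{content_type};base64,{encoded}"
```

Because the payload is inlined rather than uploaded separately, large attachments increase the request body size proportionally.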

Gemini

prompt2json \
    --provider gemini \
    --prompt "Identify the character in this photo" \
    --system-instruction "Extract the character name, franchise they belong to, and your confidence level" \
    --schema '{"type":"object","properties":{"name":{"type":"string"},"franchise":{"type":"string"},"confidence":{"type":"integer","minimum":0,"maximum":100}},"required":["name","franchise","confidence"]}' \
    --attach picture.png \
    --location us-central1 \
    --model gemini-2.5-flash \
    --pretty-print

OpenAI

prompt2json \
    --provider openai \
    --prompt "Identify the character in this photo" \
    --system-instruction "Extract the character name, franchise they belong to, and your confidence level" \
    --schema '{"type":"object","properties":{"name":{"type":"string"},"franchise":{"type":"string"},"confidence":{"type":"integer","minimum":0,"maximum":100}},"required":["name","franchise","confidence"]}' \
    --attach picture.png \
    --model gpt-5-nano \
    --api-key "$OPENAI_API_KEY" \
    --pretty-print

Output:

{
  "name": "Eevee",
  "franchise": "Pokemon",
  "confidence": 100
}

PDF Processing

Extract structured data from a PDF document.

Gemini

prompt2json \
    --provider gemini \
    --prompt "Resume attached" \
    --system-instruction "Extract basic screening information from the resume. Do not infer missing details." \
    --schema '{"type":"object","properties":{"name":{"type":"string"},"current_role":{"type":"string"},"years_experience":{"type":"integer"},"skills":{"type":"array","items":{"type":"string"}}},"required":["name","current_role","skills"]}' \
    --attach resume.pdf \
    --location us-central1 \
    --model gemini-2.5-flash \
    --pretty-print

OpenAI

prompt2json \
    --provider openai \
    --prompt "Resume attached" \
    --system-instruction "Extract basic screening information from the resume. Do not infer missing details." \
    --schema '{"type":"object","properties":{"name":{"type":"string"},"current_role":{"type":"string"},"years_experience":{"type":"integer"},"skills":{"type":"array","items":{"type":"string"}}},"required":["name","current_role","skills"]}' \
    --attach resume.pdf \
    --model gpt-5-nano \
    --api-key "$OPENAI_API_KEY" \
    --pretty-print

Output:

{
  "name": "Sherlock Holmes",
  "current_role": "Consulting Detective",
  "years_experience": 23,
  "skills": [
    "deductive reasoning",
    "forensic science",
    "observation",
    "chemical analysis"
  ]
}

File-Based Workflows

Using External Files for Instructions and Schema

Load system instructions and JSON schema from files instead of inline strings. This approach is cleaner for complex prompts and reusable schemas.

The system instruction can be stored in a separate text file and referenced by path.

classify_instruction.txt

Categorize the support ticket by department and priority level.
Use TECHNICAL for infrastructure or software issues.
Use BILLING for payment or invoice questions.
Use ACCOUNT for login or access problems.
Use GENERAL for everything else.

The JSON schema can likewise be stored in a separate file and referenced by path.

classify_schema.json

{
  "type": "object",
  "properties": {
    "department": {
      "type": "string",
      "enum": ["TECHNICAL", "BILLING", "ACCOUNT", "GENERAL"]
    },
    "priority": {
      "type": "string",
      "enum": ["LOW", "MEDIUM", "HIGH", "CRITICAL"]
    },
    "summary": {
      "type": "string"
    }
  },
  "required": ["department", "priority", "summary"]
}

cat ticket.txt | prompt2json \
    --provider gemini \
    --system-instruction-file classify_instruction.txt \
    --schema-file classify_schema.json \
    --location us-central1 \
    --model gemini-2.5-flash

Output:

{"department":"TECHNICAL","priority":"HIGH","summary":"User cannot access dashboard after login"}
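Because the output conforms to the schema, downstream scripts can consume it with minimal defensive parsing. A minimal Python sketch that reads the classified ticket above and routes on the department field (the membership checks mirror the schema's enums as a sanity guard):

```python
import json

# Sample output from the classification command above.
output = '{"department":"TECHNICAL","priority":"HIGH","summary":"User cannot access dashboard after login"}'

ticket = json.loads(output)
# The schema constrains these fields, so these checks are a sanity
# guard rather than full validation.
assert ticket["department"] in {"TECHNICAL", "BILLING", "ACCOUNT", "GENERAL"}
assert ticket["priority"] in {"LOW", "MEDIUM", "HIGH", "CRITICAL"}
print(f"[{ticket['priority']}] {ticket['department']}: {ticket['summary']}")
```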

Files for Input and Output

Process files and save output to a file.

Input file can contain any plain text content to be passed as the prompt.

notes.txt

The deployment failed during the final rollout step due to a missing environment variable.
Engineering resolved the issue by updating the configuration and redeploying.
No customer impact was reported, but the release was delayed by two hours.

prompt2json \
  --provider gemini \
  --prompt-file notes.txt \
  --system-instruction "Summarize the incident and extract key facts for reporting. Keep the summary and key facts concise including the important details. Do not invent details." \
  --schema '{"type":"object","properties":{"summary":{"type":"string"},"key_facts":{"type":"array","items":{"type":"string"}}},"required":["summary","key_facts"]}' \
  --location us-central1 \
  --model gemini-2.5-flash \
  --pretty-print \
  --out summary.json

Output:

summary.json

{
  "key_facts": [
    "Deployment failed during final rollout step.",
    "Cause: Missing environment variable.",
    "Resolution: Engineering updated configuration and redeployed.",
    "Customer Impact: None reported.",
    "Release Delay: Two hours."
  ],
  "summary": "A deployment failed during the final rollout step due to a missing environment variable, causing a two-hour release delay. Engineering resolved the issue with a configuration update and redeployment, and no customer impact was reported."
}

Dry-Run

Show Request URL

Output the API URL that is used when making the request. This is useful for debugging and understanding which endpoint is being targeted.

The actual request is not made when using the --show-url flag.

Gemini URL

echo "this is great" | prompt2json \
    --provider gemini \
    --system-instruction "Classify sentiment" \
    --schema '{"type":"object","properties":{"sentiment":{"type":"string","enum":["POSITIVE","NEGATIVE","NEUTRAL"]},"confidence":{"type":"integer","minimum":0,"maximum":100}},"required":["sentiment","confidence"]}' \
    --project example-project \
    --location us-central1 \
    --model gemini-2.5-flash \
    --show-url

Output:

https://us-central1-aiplatform.googleapis.com/v1/projects/example-project/locations/us-central1/publishers/google/models/gemini-2.5-flash:generateContent
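The regional Vertex AI URL follows a predictable pattern. The sketch below assembles it from the same components; the format is inferred from the --show-url output above for regional locations (the global endpoint may use a different host), and `gemini_url` is an illustrative helper, not part of prompt2json.

```python
def gemini_url(project: str, location: str, model: str) -> str:
    # Pattern inferred from the --show-url example for regional
    # locations; not an official API of prompt2json.
    host = f"{location}-aiplatform.googleapis.com"
    return (f"https://{host}/v1/projects/{project}/locations/{location}"
            f"/publishers/google/models/{model}:generateContent")
```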

OpenAI URL

echo "this is great" | prompt2json \
    --provider openai \
    --system-instruction "Classify sentiment" \
    --schema '{"type":"object","properties":{"sentiment":{"type":"string"}},"required":["sentiment"]}' \
    --model gpt-5-nano \
    --show-url

Output:

https://api.openai.com/v1/chat/completions

Show Request Body

Output the JSON request body that would be sent to the API. This is useful for debugging request structure and verifying the prompt and schema are formatted correctly.

The actual request is not made when using the --show-request-body flag. The --pretty-print flag formats the JSON output for better readability.

Gemini Request Body

echo "this is great" | prompt2json \
    --provider gemini \
    --system-instruction "Classify sentiment" \
    --schema '{"type":"object","properties":{"sentiment":{"type":"string","enum":["POSITIVE","NEGATIVE","NEUTRAL"]},"confidence":{"type":"integer","minimum":0,"maximum":100}},"required":["sentiment","confidence"]}' \
    --project example-project \
    --location us-central1 \
    --model gemini-2.5-flash \
    --show-request-body \
    --pretty-print

Output:

{
  "systemInstruction": {
    "parts": [
      {
        "text": "Classify sentiment"
      }
    ]
  },
  "contents": [
    {
      "parts": [
        {
          "text": "this is great"
        }
      ],
      "role": "user"
    }
  ],
  "generationConfig": {
    "responseJsonSchema": {
      "properties": {
        "confidence": {
          "maximum": 100,
          "minimum": 0,
          "type": "integer"
        },
        "sentiment": {
          "enum": [
            "POSITIVE",
            "NEGATIVE",
            "NEUTRAL"
          ],
          "type": "string"
        }
      },
      "required": [
        "sentiment",
        "confidence"
      ],
      "type": "object"
    },
    "responseMimeType": "application/json"
  }
}

OpenAI Request Body (with inline image)

prompt2json \
    --provider openai \
    --prompt "Identify the character in this photo" \
    --system-instruction "Extract the character name, franchise they belong to, and your confidence level" \
    --schema '{"type":"object","properties":{"name":{"type":"string"},"franchise":{"type":"string"},"confidence":{"type":"integer","minimum":0,"maximum":100}},"required":["name","franchise","confidence"]}' \
    --attach picture.png \
    --model gpt-5-nano \
    --show-request-body \
    --pretty-print

Output:

{
  "messages": [
    {
      "content": "Extract the character name, franchise they belong to, and your confidence level",
      "role": "system"
    },
    {
      "content": [
        {
          "text": "Identify the character in this photo",
          "type": "text"
        },
        {
          "image_url": {
            "url": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAdsA..."
          },
          "type": "image_url"
        }
      ],
      "role": "user"
    }
  ],
  "model": "gpt-5-nano",
  "response_format": {
    "json_schema": {
      "name": "response",
      "schema": {
        "properties": {
          "confidence": {
            "maximum": 100,
            "minimum": 0,
            "type": "integer"
          },
          "franchise": {
            "type": "string"
          },
          "name": {
            "type": "string"
          }
        },
        "required": [
          "name",
          "franchise",
          "confidence"
        ],
        "type": "object"
      }
    },
    "type": "json_schema"
  }
}
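The response_format envelope can be derived mechanically from a plain JSON schema. A sketch of that transformation, matching the request body above (the schema name "response" comes from the example output; `to_response_format` is an illustrative helper, not part of prompt2json):

```python
import json

def to_response_format(schema: dict) -> dict:
    # Wrap a plain JSON schema in the Chat Completions
    # structured-output envelope shown in the request body above.
    return {
        "type": "json_schema",
        "json_schema": {"name": "response", "schema": schema},
    }

schema = {
    "type": "object",
    "properties": {"sentiment": {"type": "string"}},
    "required": ["sentiment"],
}
print(json.dumps(to_response_format(schema), indent=2))
```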