How to use

Introduction

In this document we show how to use Large Language Model (LLM) functionalities through a dedicated Web interface.

We also show how to build a Docker image and run a container from it.

Docker

Build the Docker image with the command:

docker build --no-cache --build-arg PAT_GIT=MYXXX -t llm:1.0 -f docker/Dockerfile .
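
The PAT_GIT build argument (MYXXX above is a placeholder) is presumably a Git personal access token used during the build. One way to keep the token out of the command itself is to pass it through an environment variable, for example:

# Replace the placeholder with an actual token before building
export PAT_GIT=<YOUR_PERSONAL_ACCESS_TOKEN>
docker build --no-cache --build-arg PAT_GIT="$PAT_GIT" -t llm:1.0 -f docker/Dockerfile .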

Run a container from the image with the command:

docker run --rm -p 9191:9191 --name webllm2 -t llm:1.0
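
Once the container starts, you can check that the Web service is listening on port 9191 with curl. (The exact response depends on the routes the service defines; even an error page confirms the server is up.)

# Show the status line and headers returned by the service
curl -i http://localhost:9191/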

To stop the container, run the command:

docker container stop webllm2

Setup

OpenAI API key

Set OPENAI_API_KEY with the URL:

http://localhost:9191/setup?api_key=<YOUR_API_KEY>

Or, specifying the LLM service explicitly, with the URL:

http://localhost:9191/setup?llm=ChatGPT&api_key=<YOUR_API_KEY>
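
For example, assuming the container from the Docker section is running, the setup URL can be invoked with curl; the --data-urlencode options make curl URL-encode the parameter values, and -G appends them to the URL as a query string:

# Replace the placeholder with an actual OpenAI API key
curl -G http://localhost:9191/setup \
    --data-urlencode 'llm=ChatGPT' \
    --data-urlencode 'api_key=<YOUR_API_KEY>'

The same pattern, with llm=PaLM, applies to the PaLM setup URL below.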

PaLM API key

Set PALM_API_KEY with the URL:

http://localhost:9191/setup?llm=PaLM&api_key=<YOUR_API_KEY>

Question answering

The LLM-based Question Answering System (QAS) can be invoked with a URL of the form:

http://localhost:9191/qas?text='Today is Wednesday and it is 35C hot!'&questions='What day? How hot?'

You can specify the LLM service to use with the llm parameter:

http://localhost:9191/qas?llm=PaLM&text='Today is Wednesday and it is 35C hot!'&questions='What day? How hot?'
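
Since the text and questions values contain spaces and punctuation, it is easiest to let curl do the URL-encoding when calling the endpoint from a shell. A sketch, assuming the container from the Docker section is running:

# -G sends the URL-encoded pairs as a GET query string
curl -G http://localhost:9191/qas \
    --data-urlencode 'llm=PaLM' \
    --data-urlencode 'text=Today is Wednesday and it is 35C hot!' \
    --data-urlencode 'questions=What day? How hot?'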

LLM::Containerization v0.1.0

Docker containers, Cro Web API, and CLI scripts packaging LLM functionalities.

Authors

  • Anton Antonov

License

Artistic-2.0

Dependencies

  • Cro::HTTP
  • Cro::WebSocket
  • LLM::Functions
  • LLM::Prompts
  • ML::FindTextualAnswer
  • URI::Encode

Provides

  • LLM::Containerization::Routes
