LLMs Gallery
- **OpenAILLM**: OpenAI LLM implementation running the async API client.
- **ClientvLLM**: A client for the vLLM server implementing the OpenAI API specification.
- **AnyscaleLLM**: Anyscale LLM implementation running the async API client of OpenAI.
- **AzureOpenAILLM**: Azure OpenAI LLM implementation running the async API client.
- **TogetherLLM**: Together AI LLM implementation running the async API client of OpenAI.
- **AnthropicLLM**: Anthropic LLM implementation running the async API client.
- **CohereLLM**: Cohere API implementation using the async client for concurrent text generation.
- **GroqLLM**: Groq API implementation using the async client for concurrent text generation.
- **InferenceEndpointsLLM**: InferenceEndpoints LLM implementation running the async API client.
- **LiteLLM**: LiteLLM implementation running the async API client.
- **MistralLLM**: Mistral LLM implementation running the async API client.
- **MixtureOfAgentsLLM**: Mixture-of-Agents implementation.
- **OllamaLLM**: Ollama LLM implementation running the async API client.
- **VertexAILLM**: VertexAI LLM implementation running the async API clients for Gemini.
- **vLLM**: `vLLM` library LLM implementation.
- **TransformersLLM**: Hugging Face `transformers` library LLM implementation using the text generation pipeline.
- **LlamaCppLLM**: llama.cpp LLM implementation running the Python bindings for the C++ code.