LLMs Gallery
- AnthropicLLM: Anthropic LLM implementation running the async API client.
- OpenAILLM: OpenAI LLM implementation running the async API client.
- AnyscaleLLM: Anyscale LLM implementation running the async API client of OpenAI.
- AzureOpenAILLM: Azure OpenAI LLM implementation running the async API client.
- ClientSGLang: A client for the SGLang server implementing the OpenAI API specification.
- TogetherLLM: Together LLM implementation running the async API client of OpenAI.
- ClientvLLM: A client for the vLLM server implementing the OpenAI API specification.
- CohereLLM: Cohere API implementation using the async client for concurrent text generation.
- GroqLLM: Groq API implementation using the async client for concurrent text generation.
- InferenceEndpointsLLM: InferenceEndpoints LLM implementation running the async API client.
- LiteLLM: LiteLLM implementation running the async API client.
- MistralLLM: Mistral LLM implementation running the async API client.
- MixtureOfAgentsLLM: Mixture-of-Agents implementation.
- OllamaLLM: Ollama LLM implementation running the async API client.
- VertexAILLM: VertexAI LLM implementation running the async API clients for Gemini.
- TransformersLLM: Hugging Face transformers library LLM implementation using the text generation pipeline.
- LlamaCppLLM: llama.cpp LLM implementation running the Python bindings for the C++ code.
- MlxLLM: Apple MLX LLM implementation.
- SGLang: SGLang library LLM implementation.
- vLLM: vLLM library LLM implementation.