Pinned
How to run LLM AI model locally on a PC/Server

Here are the steps to get a model running on _your not-so-powerful_ computer:

1. Install llama.cpp (you can also build it from source with CMake):

```shell
brew install llama.cpp
```
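The preview cuts off after the install step. As a sketch of the kind of command that follows (the model path and flag values below are illustrative assumptions, not taken from the gist), the `llama-cli` binary installed by the package can run a local GGUF model interactively:

```shell
# Start an interactive session with a local GGUF model.
# The model path is a placeholder; any GGUF file works.
llama-cli -m ./models/model-q4_k_m.gguf \
  -p "You are a helpful assistant." \
  --ctx-size 4096
```

The same package also ships `llama-server`, which exposes an OpenAI-compatible HTTP API for the model instead of a terminal session.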
tunapanda/h5p-standalone: Display H5P content without the need for an H5P server
# Issuing TLS Certificates on Traefik Using Let's Encrypt Pebble ACME Test Server

### Prerequisites
- Docker with the Docker Compose plugin
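The preview stops at the prerequisites. A minimal sketch of the kind of Compose setup the gist's title describes (the service names, resolver name, Traefik version, and email are assumptions, not taken from the gist) might look like:

```yaml
services:
  pebble:
    # Let's Encrypt's ACME test server; its directory endpoint listens on 14000.
    image: ghcr.io/letsencrypt/pebble:latest
    command: ["-config", "/test/config/pebble-config.json"]

  traefik:
    image: docker.io/traefik:v3.1
    command:
      # Point Traefik's ACME resolver at Pebble instead of the production CA.
      - "--certificatesresolvers.pebble.acme.caserver=https://pebble:14000/dir"
      - "--certificatesresolvers.pebble.acme.email=admin@example.test"
      - "--entrypoints.websecure.address=:443"
    ports:
      - "443:443"
```

In practice Traefik must also be made to trust Pebble's self-signed HTTPS certificate (for example via lego's `LEGO_CA_CERTIFICATES` environment variable), which is presumably what the full gist covers.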
HashiCorp Vault Docker Compose Examples

```yaml
services:
  openbao:
    image: docker.io/openbao/openbao:2.4
    restart: unless-stopped
    command: ["server"]
```
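Assuming the rest of the compose file (not shown in the preview) supplies a server configuration and exposed ports, the stack can be brought up and checked with OpenBao's CLI, `bao`, inside the container:

```shell
docker compose up -d
# 'bao status' reports the seal state; the exact output
# depends on the server configuration not shown above.
docker compose exec openbao bao status
```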