Category: Machine Learning

How to Create a Remote MCP Server with MCP Proxy

Machine Learning

Learn how to deploy an MCP server on Vultr using FastMCP and connect it with Claude Desktop via a local proxy. Explore how MCP enables natural language interfaces for structured, context-aware workflows. The Model Context Protocol (MCP) is a new standard for organizing contextual data used by Large Language Models…

How to Deploy Large Language Models (LLMs) with Ollama

Machine Learning

Learn how to deploy large language models locally with Ollama, enhancing security and performance without internet dependency. Ollama is an open-source platform for running large language models (LLMs) locally. Ollama supports most open-source LLMs, including Llama 3, DeepSeek R1, Mistral, Phi-4, and Gemma 2; you…

How to Install LM Studio – A Graphical Application for Running Large Language Models (LLMs)

Machine Learning

Prerequisites Before you begin, you need to: Have access to a GUI-enabled Linux-based remote instance with a GPU, or a desktop workstation with an x86 processor supporting AVX2. Note This article uses Ubuntu 24.04 to demonstrate the installation steps. A domain name such as example.com if you want to use your LM Studio remotely with…

Object Detection with Tensorflow on a Vultr Cloud Server

Inference

Introduction Object detection is a computer vision technology that identifies and locates instances of objects of a given class, such as humans, buildings, or vehicles, in digital images or videos. Object detection has uses in several computer vision tasks, including image annotation, face detection, object tracking, activity recognition, and vehicle counting. Tensorflow…

Generating Videos with HuggingFace ModelScope Text2Video Diffusion Model on Vultr Cloud GPU

Inference

Introduction ModelScope is an open-source platform that offers a range of machine learning models and datasets for use in AI applications. The ModelScope text-to-video model allows you to generate short videos from text prompts and customize the generation parameters. The text-to-video model is trained on public datasets with around 1.7…

StableLM 2 Language Model Inference Workload on Vultr Cloud GPU

Inference

Introduction StableLM 2 1.6B is a small text-completion language model by StabilityAI with 1.6 billion parameters, trained on multilingual data. Its compact size makes it feasible to run inference with limited hardware resources. Similarly, StableLM 2 Zephyr 1.6B, a model by StabilityAI with 1.6 billion parameters, is trained…