// #r "nuget:OllamaSharp"
open OllamaSharp

// For better Collections.Generic.IAsyncEnumerable<_> support in F#:
// #r "nuget:FSharp.Control.TaskSeq"
open FSharp.Control

open System
open System.Threading.Tasks

// See the complete list of models: https://ollama.com/library
let model = "gemma3n:latest"       // Google's single-device 5.2GB model
         // "gpt-oss:latest"       // OpenAI 14GB model
         // "deepseek-r1:latest"   // DeepSeek 5.2GB model
         // "deepseek-v3.1:latest" // DeepSeek 404GB model

// Set up the client
let uri = Uri "http://localhost:11434"
use ollama = new OllamaApiClient(uri)
ollama.SelectedModel <- model

// The model has to be installed locally first, and the download can be big.
ollama.PullModelAsync model
|> TaskSeq.iter (fun status -> printfn $"{status.Percent} %% {status.Status}")

// List the locally downloaded models (empty until Ollama has downloaded one)
let listModels =
    task {
        let! models = ollama.ListLocalModelsAsync()
        models |> Seq.iter (fun x -> printfn $"Model {x.Name}, size {x.Size} bytes")
    }
listModels |> Async.AwaitTask |> Async.RunSynchronously

// Stream a completion token by token as the model generates it
ollama.GenerateAsync "How are you today?"
|> TaskSeq.iter (fun stream -> printf $"{stream.Response}")

// Prints something like:
// "I am doing well, thank you for asking! As a large language model,
//  I don't experience feelings like humans do, but my systems are
//  running smoothly and I'm ready to assist you."
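
// GenerateAsync above is a single, stateless completion. For a multi-turn
// conversation, OllamaSharp also provides a Chat helper that keeps the
// message history between calls. A minimal sketch, assuming the `ollama`
// client above is already set up and the model is pulled (the prompts here
// are illustrative):

// let chat = Chat ollama
//
// // Each SendAsync streams the answer tokens and appends both the question
// // and the reply to the chat's internal history.
// chat.SendAsync "Why is the sky blue?"
// |> TaskSeq.iter (fun token -> printf $"{token}")
//
// // The follow-up question is answered in the context of the first exchange.
// chat.SendAsync "Summarise that in one sentence."
// |> TaskSeq.iter (fun token -> printf $"{token}")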