By configuring a local instance of Ollama inside a Docker container, we overcame hardware and networking hurdles to run the Qwen2.5-Coder LLM on an AMD Radeon RX 590. We worked around the limitations of the "legacy" Polaris architecture by using Vulkan alongside the HSA_OVERRIDE_GFX_VERSION=8.0.3 environment variable to "spoof" GPU compatibility, offloading inference to the card's 8 GB of VRAM.
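A minimal sketch of that container setup might look like the following. It assumes the official `ollama/ollama:rocm` image and Ollama's default API port 11434; the container name, volume name, and model tag are illustrative. The `/dev/kfd` and `/dev/dri` devices are passed through so the container can reach the GPU, and the HSA_OVERRIDE_GFX_VERSION variable (read by ROCm's runtime) reports the Polaris card as gfx803 so it is treated as supported.

```shell
# Sketch, not a verbatim copy of the original setup:
# run Ollama with the AMD GPU exposed to the container.
docker run -d \
  --name ollama \
  --device /dev/kfd \
  --device /dev/dri \
  -e HSA_OVERRIDE_GFX_VERSION=8.0.3 \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  ollama/ollama:rocm

# Once the server is up, pull and start the model interactively.
docker exec -it ollama ollama run qwen2.5-coder
```

The named volume keeps downloaded model weights across container restarts, which matters for multi-gigabyte models like Qwen2.5-Coder.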