(Want just the bare tl;dr bones? Go read this Gist by harishanand95. It says everything this does, but for a more experienced audience.)

Stable Diffusion has recently taken the techier (and art-techier) parts of the internet by storm. It's an open-source machine learning model capable of taking in a text prompt and (with enough effort) generating some genuinely incredible output. See the cover image for this article? That was generated by a version of Stable Diffusion trained on lots and lots of My Little Pony art. The prompt I used for that image was *kirin, pony, sumi-e, painting, traditional, ink on canvas, trending on artstation, high quality, art by sesshu*.

Unfortunately, in its current state, Stable Diffusion relies on Nvidia's CUDA framework, which means it only works out of the box if you've got an Nvidia GPU. Fear not, however. Because Stable Diffusion is both a) open source and b) good, it has seen an absolute flurry of activity, and some enterprising folks have done the legwork to make it usable on AMD GPUs, even for Windows users.

## Requirements

Before you get started, you'll need the following:

* A reasonably powerful AMD GPU with at least 6GB of video memory. I'm using an AMD Radeon RX 5700 XT with 8GB, which is just barely powerful enough to outdo running this on my CPU.
* The fortitude to download around 6 gigabytes of machine learning model data.
* A working installation of Git, because the Hugging Face login process stores its credentials there, for some reason.

I'll assume you have no, or little, experience in Python. My only assumption is that you have it installed, and that when you run `python --version` and `pip --version` from a command line, they respond appropriately.

## Preparing the workspace

Before you begin, create a new folder somewhere. Once created, open a command line in your favorite shell (I'm a PowerShell fan myself) and navigate to your new folder. We're going to create a virtual environment to install some packages into, and then install those packages inside it:

```
pip install transformers
pip install onnxruntime
```

Now we need to go and download a build of Microsoft's DirectML Onnx runtime. None of their stable packages are up-to-date enough to do what we need, so instead we need to either a) compile from source or b) use one of their precompiled nightly packages. Because the toolchain to build the runtime is a bit more involved than this guide assumes, we'll go with option b). (Or, if you're the suspicious sort, you could go and grab the latest under ort-nightly-directml yourself.)

Either way, download the package that corresponds to your installed Python version: `ort_nightly_directml-1.13.0.dev20220913011-cp37-cp37m-win_amd64.whl` for Python 3.7, `ort_nightly_directml-1.13.0.dev20220913011-cp38-cp38-win_amd64.whl` for Python 3.8, you get the idea.

Once it's downloaded, use pip to install it:

```
pip install pathToYourDownloadedFile/ort_nightly_whatever_version_you_got.whl --force-reinstall
```

Take note of that `--force-reinstall` flag! The package will override some previously-installed dependencies, but if you don't allow it to do so, things won't work further down the line. Ask me how I know. >.>

## Getting and Converting the Stable Diffusion Model

First thing, we're going to download a little utility script that will automatically download the Stable Diffusion model, convert it to Onnx format, and put it somewhere useful.

Tags: programming, art, ai, ml, machine learning, stable diffusion, windows, amd, python, the robots are coming for us
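Those `cp37` / `cp38` markers in the nightly wheel filenames are CPython version tags, and grabbing a wheel that doesn't match your interpreter is an easy way to trip up the install. As a small illustration (my own sketch, not part of the original guide), you can compute the tag for whichever Python you're actually running:

```python
import sys

# Build the CPython tag used in wheel filenames, e.g. "cp38" for
# Python 3.8. This is the piece that must match the "cp38-cp38"
# portion of the ort-nightly-directml wheel you download.
tag = f"cp{sys.version_info.major}{sys.version_info.minor}"
print(tag)
```

Run it from inside the activated virtual environment, so it reports the interpreter that pip will install the wheel into, not some other Python on your system.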
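After installing the nightly wheel, it's worth a quick sanity check that onnxruntime can actually see the DirectML backend before going any further. This check is my addition, not part of the original walkthrough; `get_available_providers()` is a real onnxruntime call, and `DmlExecutionProvider` should show up in its output if the DirectML build installed correctly:

```python
# Sanity check (my addition): list the execution providers this
# onnxruntime build supports. With the ort-nightly-directml wheel
# installed, the list should include "DmlExecutionProvider".
try:
    import onnxruntime as ort
    providers = ort.get_available_providers()
except ImportError:
    providers = []  # onnxruntime isn't installed in this environment
print(providers)
```

If you only see `CPUExecutionProvider` here, the `--force-reinstall` step above probably didn't replace the stock `onnxruntime` package, and generation will fall back to the CPU.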