This is a series of articles looking at the ASRock Industrial NUC BOX-255H running Linux, examining every aspect of this Mini PC in detail from a Linux perspective.
The barebone machine was supplied by ASRock Industrial, a respected Taiwanese manufacturer of computer hardware including AI Box computers and embedded motherboards.
One of the aspects that makes the NUC interesting is its GPU and NPU capabilities. The machine has the Intel Arc 140T, the iGPU used in the Intel Arrow Lake-H/HX processor series. It’s quite a powerful integrated graphics setup: it shares the system’s DDR5 memory and has 1024 shader cores, 64 TMUs, and 32 ROPs, together with 128 tensor cores and 8 ray tracing cores.
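Before any driver setup, it’s easy to confirm the iGPU is visible to Linux on the PCI bus (the exact device string reported will vary with your kernel and pciutils versions):
$ lspci -nn | grep -Ei 'vga|display'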
In the series, I’m going to explore the GPU capabilities of the machine, covering both AI and gaming. Let’s start with deep learning.
Stable Diffusion is a deep learning text-to-image diffusion model capable of generating photo-realistic images from any text input. In seconds you can create stunning artwork. Specifically, Stable Diffusion is a latent diffusion model: the diffusion process runs in a compressed latent space rather than directly on pixels, which keeps the memory and compute requirements manageable.
Installation
First, Intel’s drivers for client GPUs must be installed on Ubuntu. This involves adding the intel-graphics PPA, and installing various compute-related and media-related packages. The steps are well explained on Intel’s website so I won’t reproduce them in full here.
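For orientation only, the broad shape of those steps at the time of writing was roughly the following; treat the PPA and package names as things to verify against Intel’s current instructions, as they change between releases:
$ sudo apt-get install -y software-properties-common
$ sudo add-apt-repository -y ppa:kobuk-team/intel-graphics
$ sudo apt-get update
$ sudo apt-get install -y libze-intel-gpu1 libze1 intel-opencl-icd clinfo
$ sudo apt-get install -y intel-media-va-driver-non-free libvpl2 vainfo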
Next, I need to install Stable Diffusion. I’ll use Stable Diffusion web UI with OpenVINO Acceleration, which offers a browser interface, based on the Gradio library, for Stable Diffusion with an OpenVINO acceleration script. It’s a fork of AUTOMATIC1111/stable-diffusion-webui.
The installation process is somewhat involved, as a few additional steps are needed which aren’t covered in the wiki notes.
The GitHub repository instructions say you can use Python 3.10+; I used Python 3.10.6 (the version recommended for installing under Windows).
As a modern Linux distribution ships a newer version of Python than 3.10.6, I’ll use the older version within a virtual environment. Since I’ll compile Python 3.10.6 from source, I need the liblzma-dev package installed first, otherwise lzma support will not be built into Python.
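On Ubuntu that’s a single package; I’m also assuming build-essential and libssl-dev are present, as the build needs a C toolchain and the OpenSSL headers for Python’s ssl module:
$ sudo apt install build-essential libssl-dev liblzma-dev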
I downloaded the source code for Python 3.10.6 (Python-3.10.6.tar.xz) to the ~/Downloads directory. Uncompress the xz file, change into the newly created directory, and compile.
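Assuming the tarball landed in ~/Downloads, the extraction step looks like this:
$ cd ~/Downloads
$ tar xf Python-3.10.6.tar.xz
$ cd Python-3.10.6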
$ ./configure
$ make
Then I create my virtual environment with Python 3.10.6:
$ ~/Downloads/Python-3.10.6/python -m venv sd_env
Activate that environment with:
$ source sd_env/bin/activate
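A quick sanity check confirms the environment is using the interpreter built above; this should report Python 3.10.6:
$ python --version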
Clone the repository and change into the newly created directory.
$ git clone https://github.com/openvinotoolkit/stable-diffusion-webui.git
$ cd stable-diffusion-webui
The GitHub repository says you need to issue the following commands:
$ export PYTORCH_TRACING_MODE=TORCHFX
$ export COMMANDLINE_ARGS="--skip-torch-cuda-test --precision full --no-half"
But I also needed a few additional steps, including downgrading huggingface_hub and installing different versions of the torch and torchvision packages:
$ pip install huggingface_hub==0.25.0
$ export USE_OPENVINO=1
$ pip install torch==2.1.0 torchvision==0.16.0
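To double-check that those pins took effect (an optional verification, not part of the project’s instructions):
$ pip show torch torchvision huggingface_hub | grep -E 'Name|Version'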
Now I can launch the web user interface with:
$ ./webui.sh
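On its first run, webui.sh installs the remaining Python dependencies into the virtual environment and then starts a local Gradio server; with the AUTOMATIC1111 defaults it listens on port 7860, so the interface should be reachable in a browser at http://127.0.0.1:7860. Once that first run has pulled OpenVINO into the environment, one optional way to confirm the iGPU is visible to it is:
$ python -c "import openvino.runtime as ov; print(ov.Core().available_devices)"
A device list that includes 'GPU' means the Arc 140T is available to the OpenVINO acceleration script.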
Next page: Page 2 – Example output
Pages in this article:
Page 1 – Installation
Page 2 – Example output
Complete list of articles in this series:
| ASRock Industrial NUC BOX-255H | |
|---|---|
| Introduction | Introduction to the series and interrogation of the NUC BOX-255H |
| Benchmarks | Benchmarking the NUC BOX-255H |
| Power | Testing and comparing the power consumption under various workloads |
| Stable Diffusion | Deep Learning with Stable Diffusion |