Our preliminary benchmark suggests it is ~30% faster than our previous air-cooled 5950X across a variety of computing tasks, and 100% faster when running matrix multiplication with Intel MKL.
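For context, the MKL comparison is the kind of result a few lines of NumPy can reproduce. A rough sketch, assuming a NumPy build linked against Intel MKL, and not the exact benchmark we ran:

```python
# Rough sketch of a matrix-multiplication timing run, assuming NumPy is
# linked against Intel MKL (e.g. a conda "defaults" build). Not the exact
# benchmark behind the numbers above.
import time
import numpy as np

n = 10_000
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
a @ b
print(f"{n} x {n} matmul took {time.perf_counter() - start:.1f} s")
```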

Running RStudio within Jupyter has been possible for quite some time with jupyter-server-proxy. Doing so has its benefits, notably the ability to leverage JupyterHub's systemdspawner to control the amount of resources users can use, a feature that is not available in the free version of RStudio.
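As an illustration, those resource limits can be set in jupyterhub_config.py along these lines. The numbers are arbitrary examples rather than our production settings:

```python
# jupyterhub_config.py -- per-user resource limits via systemdspawner (illustrative values)
c.JupyterHub.spawner_class = 'systemdspawner.SystemdSpawner'
c.SystemdSpawner.mem_limit = '8G'   # cap each single-user server at 8 GB of RAM
c.SystemdSpawner.cpu_limit = 4.0    # cap each single-user server at 4 CPU cores
```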
It would also have been nice to be able to choose between different R versions, another feature that is only available in the paid version of RStudio. Because jupyter-server-proxy relies on iterating through entry points in each proxy package, the only way to enable that right now is to modify jupyter-rsession-proxy itself.
This is where our PR #133 comes in. By allowing setup_rserver to receive a custom name and a configuration file as arguments, all it takes to add additional R versions on Jupyter is to create a new skeleton package that imports jupyter_rsession_proxy:setup_rserver and has additional entry points for jupyter_serverproxy_servers.
Here is a working example we currently use on our HPC cluster.
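The sketch below shows the general shape of such a package. The package name, display name, and configuration path are placeholders rather than what we actually use, and the keyword arguments passed to setup_rserver assume the interface added in the PR:

```python
# setup.py of a hypothetical skeleton package "jupyter-rsession-proxy-r41"
# that exposes a second R installation through jupyter-server-proxy.
import setuptools

setuptools.setup(
    name="jupyter-rsession-proxy-r41",
    version="0.1.0",
    packages=["jupyter_rsession_proxy_r41"],
    install_requires=["jupyter-rsession-proxy"],
    entry_points={
        # jupyter-server-proxy discovers additional servers by iterating
        # over this entry point group.
        "jupyter_serverproxy_servers": [
            "rstudio-r41 = jupyter_rsession_proxy_r41:setup_rserver_r41",
        ]
    },
)
```

The module itself is just a thin wrapper:

```python
# jupyter_rsession_proxy_r41/__init__.py
# Thin wrapper around jupyter-rsession-proxy's setup_rserver. The keyword
# argument names are placeholders for the custom name and configuration
# file arguments described above; check the merged PR for the exact API.
from jupyter_rsession_proxy import setup_rserver

def setup_rserver_r41():
    return setup_rserver(name="RStudio (R 4.1)",
                         config_file="/etc/rstudio/rserver-r41.conf")
```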
How hard is it to build a server with four top-of-the-line GPUs for a high-performance computing cluster? Harder than you might think.
When I started building the SCRP cluster back in the summer of 2020, the GPU servers were provided by Asrock Rack. Everything except the GPUs was preassembled. This is the sensible thing to do in normal times.
Fast forward to the summer of 2021, and times were not normal. Supply chain disruption and the semiconductor shortage were in high gear. Pretty much every name-brand server manufacturer quoted us months-long lead times, if they were willing to deal with us at all. To get everything in for the new academic year, I built a series of servers with parts sourced from different parts of the world. It is actually not that hard to build servers, since they are basically heavy-duty PCs with all sorts of specialized parts. That is, unless you want a GPU server suitable for an HPC cluster.
So what is so special about GPU servers for an HPC cluster?
To conclude, if you think building your own PC is challenging, building a GPU server for an HPC cluster is probably three times the challenge. Another reason why you should not maintain your own infrastructure.
That's one long extension cord.
There, nothing a bit of arts and crafts couldn't fix.
I spent over an hour trying to figure out why some new GPUs were not working. The server in question is an Asrock Rack 2U4G-EPYC-2T, a specialized server that allows four GPUs to be installed in a relatively small case. Google was not helpful because, understandably, this is a niche product produced only in small quantities.
What did not work:
What worked:
It took me a good hour to figure out that the issue was caused by the PCIe extender board. The three GPU positions at the front require the extender board, but the board only supports PCIe Gen 3. Normally, Gen 4 GPUs can negotiate with Gen 3 mainboards to communicate at PCIe Gen 3, but apparently they cannot do that through the extender board. Once the issue had been identified, the solution was actually very straightforward: manually setting the PCIe lanes to Gen 3 solved everything.
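To confirm what link generation each card actually negotiates after a change like this, a quick check with the NVML Python bindings is enough. A minimal sketch, assuming nvidia-ml-py is installed:

```python
# Rough sketch: report the PCIe link generation each GPU has actually
# negotiated, using the NVML Python bindings (pip install nvidia-ml-py).
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        current = pynvml.nvmlDeviceGetCurrPcieLinkGeneration(handle)
        maximum = pynvml.nvmlDeviceGetMaxPcieLinkGeneration(handle)
        print(f"GPU {i}: PCIe Gen {current} (card supports up to Gen {maximum})")
finally:
    pynvml.nvmlShutdown()
```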
Yet another reason why maintaining your own computing infrastructure is not for the faint-hearted.
We will be running tests and benchmarks here at CUHK SCRP over the next few days. Users should be able to access the new RTX 3090 through Slurm after the scheduled maintenance next week.
The RTX 3080 comes out in just one more week. Would anyone buy at this price? $5,899 HKD translates to $760 USD. That's $60 more than the MSRP of the RTX 3080 due next week.