Using Apple File System (APFS) with your virtualized Mac

Apple has just released macOS High Sierra, and one of its headline features is the brand-new Apple File System (APFS), optimized for the flash storage found in newer Macs. If you run macOS in a virtual machine, e.g. under VMware, you may have trouble getting the new OS to boot: the upgrade forcibly converts the boot partition to APFS, which the VMware UEFI firmware does not support.

To solve the problem, we need to teach the VMware UEFI firmware about APFS. Luckily, the APFS driver can be extracted from the High Sierra installer as a UEFI driver executable. We can then inject the driver into the UEFI BIOS image that ships with VMware itself, and everything should work.
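As a rough sketch of the extraction step (macOS only; the exact paths are assumptions based on a standard "Install macOS High Sierra.app" download and may differ between installer builds):

```shell
# Mount the BaseSystem image bundled inside the High Sierra installer
hdiutil attach "/Applications/Install macOS High Sierra.app/Contents/SharedSupport/BaseSystem.dmg"

# The UEFI APFS driver lives inside the mounted volume
cp "/Volumes/OS X Base System/usr/standalone/i386/apfs.efi" ~/Desktop/apfs.efi

# Unmount the image when done
hdiutil detach "/Volumes/OS X Base System"
```

The volume name ("OS X Base System") and driver path are what High Sierra-era installers used; double-check them against your own installer before copying.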

Getting Started

We’ll need three things before modifying the VMware UEFI BIOS:

  1. UEFITool, for editing the firmware image
  2. The FFS tool (GenMod), for wrapping the driver into an FFS file
  3. The APFS driver (apfs.efi), extracted from the High Sierra installer

To simplify things, you can download my modified UEFI BIOS (tested on VMware Workstation Pro 14; it may work for other versions too). If that ROM doesn’t work for you, follow these steps to build a modified BIOS with APFS support.

Use UEFITool to open EFI64.rom, located at [VMware Installation Folder]/x64/. Select File > Search, choose the GUID tab, type in 961578FE-B6B7-44C3-AF35-6BC705CD2B1F, and double-click the result in the Messages section. Leave this screen for now.

Extract the FFS tool to the same directory as the APFS driver file. Open a command prompt, change to that directory, and run this command: GenMod apfs.efi


Go back to UEFITool, right-click the selected item, choose Insert After, and select apfs.ffs from the FFS directory.


Save the modified ROM with the name efi64_apfs.rom to your VM directory.

Applying the new UEFI BIOS

To get the modified UEFI BIOS to work, use a text editor to open the VMX file. Ensure the file contains the following lines.

firmware = "efi"
efi64.filename = "efi64_apfs.rom"

Save the VMX file and start your VM. Your macOS High Sierra will now boot as expected from an APFS volume. Voila!

Installing NVENC SDK and CUDA SDK on Ubuntu 14.04

After I set up my streaming server, some problems emerged from its design. Using the CPU to process streams consumes a lot of CPU cycles, and if the streaming server has many connections, the machine will run short of resources to handle them unless it has powerful CPUs. NVIDIA’s NVENC offloads transcoding to GPU hardware dedicated to that kind of processing, leaving many more CPU cycles for other purposes. However, installing NVIDIA’s driver is a nightmare, which is why I decided to write the process down for future reference.

# Fetch system updates (if it is a fresh install)
apt-get update && apt-get upgrade -y

# Install required packages for NVIDIA driver installation
apt-get install build-essential linux-source linux-headers-3.13.0-68-generic linux linux-image-extra-virtual -y

# Get NVIDIA CUDA SDK (this is v7.5, for the latest version please visit NVIDIA's site)
dpkg -i cuda-repo-ubuntu1404_7.5-18_amd64.deb

# Install CUDA SDK (which includes the NVIDIA graphics driver)
apt-get update
apt-get install cuda -y

# Check whether NVIDIA kernel module is loaded
modprobe nvidia

# If the above fails, try installing the NVIDIA graphics driver separately
# (this is driver version 352.63; for the latest one check NVIDIA's website --
# the filename below assumes NVIDIA's standard .run naming for that version)
chmod +x NVIDIA-Linux-x86_64-352.63.run

# Run the installer. Accept the EULA; when it asks whether to overwrite the
# previously installed driver, choose continue. The installation should then complete.
./NVIDIA-Linux-x86_64-352.63.run

# Check whether NVIDIA kernel module is loaded
modprobe nvidia

# If all went well, this command will show detailed information about your GPU
nvidia-smi
The commands below then install the NVENC SDK’s headers into your system.


# Uncompress the SDK zip and copy the headers to /usr/local/include/
unzip -q nvenc_5.0.1_sdk.zip
mv nvenc_5.0.1_sdk/Samples/common/inc/*.h /usr/local/include/

You can now compile programs that use NVIDIA’s NVENC to speed up video processing, including ffmpeg.
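For example, an NVENC-enabled ffmpeg build might look like the following. This is a sketch only: the --enable-nvenc flag applies to ffmpeg releases of that era, and the exact configure flags you need will depend on your ffmpeg version and licensing choices.

```shell
# In an ffmpeg source tree, with the NVENC headers already in /usr/local/include
./configure --enable-nonfree --enable-nvenc
make -j"$(nproc)"

# Verify that the NVENC-backed encoders were built
./ffmpeg -encoders 2>/dev/null | grep nvenc
```

If the last command prints an nvenc entry (e.g. an H.264 NVENC encoder), hardware-accelerated encoding is available.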

Optimizing Nginx for (large) file delivery

Some time ago, I needed to host some big files for public download. At first, I thought Nginx would perform well without much configuration. In reality, there were complaints about slow and interrupted downloads, which was quite annoying.

After digging through the Nginx docs, I found some changes that fix these problems and produce high throughput. Here are the tweaks I made to the nginx.conf file:

  1. Turn off sendfile. The Linux sendfile call is known to suffer throughput degradation under high load, so disabling it helps sustain throughput when the server is busy. Also, when serving large files with sendfile, there is no way to control readahead.
  2. Enable TCP nopush. tcp_nopush fills each TCP packet to its maximum size before sending, which can increase throughput when serving large files.
  3. Use Nginx’s directio to load files. directio skips a number of kernel-side steps that normally happen when reading files, which can improve read performance and throughput.
  4. Enable libaio for optimal performance. libaio allows asynchronous I/O to be done in the kernel, which results in faster reads and writes. However, it requires libaio to be installed and Nginx to be recompiled with aio support. I used the following steps to recompile Nginx:
    # Install libaio on RHEL/CentOS
    yum install libaio libaio-devel -y
    # Unpack the Nginx source tarball
    tar -xzf nginx-1.9.4.tar.gz
    cd nginx-1.9.4
    # Configure Nginx according to your needs, but include
    # --with-file-aio in order to use libaio
    ./configure --with-file-aio

The relevant part of the complete nginx.conf should look like this:

http {
    sendfile off;
    tcp_nopush on;
    aio on;
    directio 512; # required on Linux when using aio
}

There are also some lower-level tweaks, like mounting your disks with the noatime flag and using ext4/xfs when serving files.
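For instance, the noatime tweak can be applied via /etc/fstab. The device and mount point below are placeholders for illustration; substitute your own.

```shell
# /etc/fstab entry: mount the data disk as ext4 without access-time updates
/dev/sdb1  /srv/downloads  ext4  defaults,noatime  0  2
```

With noatime, the kernel no longer writes an inode update on every file read, which removes a small but constant write load when serving many downloads.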