[Tony Yip] libSSE-php Version 2

On 7 Jun, libSSE-php released version 2.1.0, which provides a much better interface for integrating with popular frameworks, and I released laravel-sse for Laravel. libSSE-php has changed a lot since version 1, so in this post I would like to share what is new in version 2.

The most important changes are the addition of namespaces and availability on Packagist. To keep things simple and easy to remember, I picked the prefix SSE, and the Packagist package name is tonyhhyip/sse. In version 1, most classes carried the SSE prefix and were not autoloaded; that prefix has now been replaced by the SSE namespace. Method names used snake case, which is not the common practice under PSR-2, so they have all been changed to camel case.

The most common issue reported against libSSE-php is that users are not able to change session data. To address this, a data mechanism was introduced, backed by APC, MySQL and file storage in version 1. Version 2.0 adds memcache, Redis and PDO mechanisms to give users more choice, and version 2.1 adds a session mechanism built on the Symfony session interface. The core class, SSE, was also reworked in version 2.1 to make use of the Symfony StreamedResponse. As a result, it is much easier to integrate libSSE-php into popular PHP frameworks such as Symfony 2 and Laravel.
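
To give a feel for that change, here is a minimal sketch of serving an SSE stream through Symfony's StreamedResponse. This is not libSSE-php's own code, only an illustration of the mechanism the new SSE class builds on; the payload and headers are assumptions.

    <?php
    // Minimal illustration: streaming an SSE message over Symfony's StreamedResponse.
    // This is not the libSSE-php API itself; the payload below is made up.
    use Symfony\Component\HttpFoundation\StreamedResponse;

    require __DIR__ . '/vendor/autoload.php';

    $response = new StreamedResponse(function () {
        // An SSE message is a "data:" line followed by a blank line.
        echo 'data: ' . json_encode(['time' => time()]) . "\n\n";
        // Flush right away so the client receives the event without buffering.
        flush();
    });

    $response->headers->set('Content-Type', 'text/event-stream');
    $response->headers->set('Cache-Control', 'no-cache');
    $response->send();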

I hope you will love libSSE-php and put it to use. Come and give it a star on GitHub and Packagist.

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Some thoughts about SITCON x HK

Recently, some of my friends, including those in Taiwan, have been asking me when the next SITCON x HK will be. Thank you all for your support. Let me share a bit more about where SITCON x HK stands.

However, I have to sit the Hong Kong DSE this year, which finishes in late April. In May I have my speech day and need to prepare for the Hong Kong Open Source Conference 2016 in June. Then in July I have to face the release of the HKDSE results and JUPAS, and in August I am joining COSCUP. It is therefore very hard for me to organize SITCON x HK before autumn. If you are looking forward to SITCON x HK, expect it in autumn or winter.

Organizing a conference is not a simple task like a school project. It involves the call for proposals, promotion, budget planning, ticketing (even when the event is free), finding a venue and sponsorship, technical work (thanks @licson) and many more basic decisions.

Last year, SITCON x HK was held within HKOSCon because of a lack of time, so most of the problems were solved by the other HKOSCon committee members. However, SITCON x HK has to become a standalone conference, and I have to face these problems by myself, which puts a lot of pressure on me as an F6 student.

It makes me sad every time I look at the posts on the SITCON x HK Facebook page: the number of people reached stays low. I keep asking myself whether I am doing something wrong or missing something.

The last SITCON x HK also left me upset. Attendance was low, especially among the main target audience, students. I do not mean there was any problem with the speakers; I think the main reason is that my promotion was neither sufficient nor successful. After the conference I felt a bit lost about SITCON x HK, especially after the University IT Exploration Conference 2015 (UnitExCon 2015), where I saw far more students attending than at SITCON x HK. I started to ask myself whether I should stop SITCON x HK and let the organizer of UnitExCon, the Joint Universities Computer Association (JUCA), hold the conference instead.

I remember the hopes of my SITCON friends in Taiwan, and I do not want to disappoint them. Therefore, I have decided to let Hong Kong students make this decision. I plan to organize SITCON x HK in October or November 2016, with the calls for staff, proposals and sponsorship open from the publication of this article until August. Whether I continue organizing SITCON x HK will depend on the public response.

First Taste of GNOME Ubuntu

As required by a Google Code-in task, and to improve control over users, I installed GNOME Ubuntu on the school library computer that serves as the OPAC. The original system was Ubuntu Desktop 14.04.3 LTS and I installed GNOME Ubuntu 14.04 LTS, which means I can keep using the same kernel and commands on the machine.

After testing GNOME Ubuntu, I figured out some advantages it has over Ubuntu Desktop. The start-up time of GNOME Ubuntu is much shorter: it used to take 2 to 3 minutes to start up the OPAC on Ubuntu Desktop, but now it takes less than 2 minutes.

Moreover, GUI support is much better. Since many GUI applications are written with GTK+ and GNOME, GNOME Ubuntu supports them better than Unity does and provides a better user experience.

Furthermore, GNOME Ubuntu provides a more natural UI than Ubuntu Desktop. The window buttons are placed in the top right corner instead of the top left, which is easier for Windows users, and the menu bar sits at the top of the application window instead of at the top of the whole screen.

However, there is no gain without pain; the advantages of GNOME Ubuntu come at a cost.

The layout of GNOME Ubuntu is a bit boring for those used to the colourful Unity. The colour scheme in GNOME Ubuntu feels repetitive and can easily tire users, which is bad news since the computer serves students searching for books.

What is more, GNOME Ubuntu does not receive as much support as Ubuntu Desktop. On the desktop, Canonical Ltd. still puts its focus on Ubuntu Desktop rather than GNOME Ubuntu, which is an unfavourable factor when putting GNOME Ubuntu into production.

Next, GNOME Ubuntu is much harder to download. Living in Hong Kong, I can easily get the Ubuntu installation image from the CUHK mirror, but I have to fetch the GNOME Ubuntu image from cdimage.ubuntu.com, which is far from home and takes a long time to download.

All in all, installing GNOME Ubuntu on the school library computer as the OPAC does not seem like a bad idea for now. I will keep observing which Ubuntu flavour the users prefer.

GNOME Ubuntu

This work is licensed under a Creative Commons Attribution 4.0 International License.

Installing NVENC SDK and CUDA SDK on Ubuntu 14.04

After I set up my streaming server, some problems with the design surfaced. Transcoding streams on the CPU consumes a lot of CPU cycles, and if the streaming server has many connections, the resources to handle them run low unless the machine has strong CPUs. NVIDIA's NVENC offloads the transcoding to dedicated hardware on the GPU and leaves far more CPU cycles for other purposes. However, installing NVIDIA's driver is a nightmare, which is why I decided to write the process down for future reference.
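
The driver and CUDA toolkit can be pulled in through NVIDIA's repository package for Ubuntu 14.04. Below is only a sketch of that route; the repository package version in the URL is an assumption, so pick the release that matches your GPU from NVIDIA's site.

    # Sketch: installing the NVIDIA driver and CUDA toolkit via NVIDIA's repo package.
    sudo apt-get update
    sudo apt-get install -y build-essential linux-headers-$(uname -r)

    # The CUDA 7.5 repository package for Ubuntu 14.04 (adjust to your release).
    wget http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1404/x86_64/cuda-repo-ubuntu1404_7.5-18_amd64.deb
    sudo dpkg -i cuda-repo-ubuntu1404_7.5-18_amd64.deb
    sudo apt-get update
    sudo apt-get install -y cuda   # also pulls in the matching NVIDIA driver

    # Reboot so the new kernel module gets loaded.
    sudo reboot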

Then, the commands below install the NVENC SDK headers into your system.
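
As a sketch, assuming the NVIDIA Video Codec SDK 6.0.1 archive (adjust the version and paths to whatever you actually downloaded):

    # Copy the NVENC API headers from the Video Codec SDK into a system include
    # path so build scripts (e.g. ffmpeg's configure) can find nvEncodeAPI.h.
    unzip nvidia_video_sdk_6.0.1.zip
    sudo cp nvidia_video_sdk_6.0.1/Samples/common/inc/*.h /usr/local/include/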

You can now compile programs that use NVIDIA's NVENC to speed up video processing, including ffmpeg.
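
For example, here is a sketch of building ffmpeg with NVENC enabled; the flags below match ffmpeg releases of that era and are assumptions to verify against your version:

    # Build ffmpeg with the NVENC encoder enabled (older releases used
    # --enable-nonfree --enable-nvenc; newer ones detect NVENC via nv-codec-headers).
    git clone https://git.ffmpeg.org/ffmpeg.git
    cd ffmpeg
    ./configure --enable-nonfree --enable-nvenc
    make -j"$(nproc)"
    sudo make install

Depending on the ffmpeg version, the encoder then shows up as nvenc, nvenc_h264 or h264_nvenc in the output of ffmpeg -encoders.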

Setting Up Adaptive Streaming with Nginx

Recently, I have been working out a system to smoothly stream live events for an organization. This is pretty new to me and, after a bunch of research, I found that Nginx with the RTMP module seems to be a good choice. There were many difficulties in setting all this up, and after several days of testing I found a good setup that is worth a post.

Setup Nginx and RTMP module

First, let's get Nginx set up. In order to use the RTMP module, we need to compile it in as an Nginx module. It would look something like this:
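
A sketch of that build, assuming Nginx 1.8.0 and the arut/nginx-rtmp-module sources; substitute the versions you actually use:

    # Compile Nginx from source with the RTMP module added in.
    wget http://nginx.org/download/nginx-1.8.0.tar.gz
    tar xzf nginx-1.8.0.tar.gz
    git clone https://github.com/arut/nginx-rtmp-module.git

    cd nginx-1.8.0
    ./configure --with-http_ssl_module --add-module=../nginx-rtmp-module
    make -j"$(nproc)"
    sudo make install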

After everything is done, check whether Nginx was compiled properly.
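
For example (the binary path assumes the default install prefix):

    # Print the compile-time options; nginx-rtmp-module should appear in the
    # configure arguments.
    /usr/local/nginx/sbin/nginx -V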


If you can see that the Nginx RTMP module is included, you can go to the next step. Before we proceed to configuring Nginx for live streaming, we should decide what resolutions to provide for the live streams and how much hardware power is available.

Prerequisites

To convert a live stream into several variants for adaptive streaming, you need to make sure your server has enough CPU for the workload. Otherwise the live stream will suffer from continuous delays and/or the server will become unresponsive. I spun up some EC2 c3.large and c3.xlarge instances to test with and found that their compute-optimized CPUs handle the workload with ease. The I/O limits of the disks are also worth mentioning: if possible, storing the generated HLS fragments on a high-speed SSD helps maintain a smooth streaming experience.

(Screenshot: CPU usage when using an EC2 c3.xlarge instance.)

Then you also need to think about what resolutions you will offer for adaptive streaming. Generally 4 to 5 variants are enough to provide good loading speeds across different network speeds and devices. Here is my recommended list of variants for live streaming:

  1. 240p Low Definition stream at 288kbps
  2. 480p Standard Definition stream at 448kbps
  3. 540p Standard Definition stream at 1152kbps
  4. 720p High Definition stream at 2048kbps
  5. Source resolution, source bitrate

Configuring nginx for live streaming

Here is my own nginx.conf, with comments, that you can use as a reference.
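
The original file is not reproduced here, so the block below is only an illustrative sketch of such a configuration: RTMP ingest on /live, ffmpeg transcoding into variants, and HLS packaging on /hls. The paths, bitrates, ffmpeg options and variant list are assumptions to adapt (two variants plus the source are shown; add the 480p and 540p ones the same way).

    worker_processes auto;

    events {
        worker_connections 1024;
    }

    rtmp {
        server {
            listen 1935;
            chunk_size 4000;

            # Encoders push to rtmp://yourserver/live/<stream name>.
            application live {
                live on;

                # Transcode the incoming stream into lower-bitrate variants and
                # push them to the hls application below. You may need the full
                # path to ffmpeg, and the audio encoder depends on your build.
                exec ffmpeg -i rtmp://localhost/live/$name
                    -c:v libx264 -b:v 288k  -vf scale=-2:240 -c:a aac -b:a 64k  -f flv rtmp://localhost/hls/$name_240p
                    -c:v libx264 -b:v 2048k -vf scale=-2:720 -c:a aac -b:a 128k -f flv rtmp://localhost/hls/$name_720p
                    -c:v copy -c:a copy -f flv rtmp://localhost/hls/$name_src;
            }

            # HLS packaging for the transcoded variants.
            application hls {
                live on;
                hls on;
                hls_path /tmp/hls;
                hls_fragment 5s;
                hls_playlist_length 60s;

                # Group the variants into one adaptive master playlist,
                # served as /hls/<stream name>.m3u8.
                hls_variant _240p BANDWIDTH=288000;
                hls_variant _720p BANDWIDTH=2048000;
                hls_variant _src  BANDWIDTH=4000000;
            }
        }
    }

    http {
        server {
            listen 80;

            # Serve the HLS playlists and fragments over HTTP.
            location /hls {
                types {
                    application/vnd.apple.mpegurl m3u8;
                    video/mp2t ts;
                }
                root /tmp;
                add_header Cache-Control no-cache;
            }
        }
    }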

Then, configure your live encoder to use these settings to stream into the server:
  • RTMP Endpoint: rtmp://yourserver/live/
  • RTMP Stream Name: [Whatever name you like]
Finally, configure your player for live playback. The HLS URL would look like this:
http://yourserver/hls/[The stream name above].m3u8

Recommended encoder settings for live events

If you can adjust the encoder, the following settings can help provide a better viewing experience; a sample ffmpeg command based on them is shown after the list.

  • Full HD Resolution (1920×1080) is recommended
  • H.264 Main profile, with target bitrate of 4000Kbps, maximum 6000Kbps
  • 25fps, 2 second keyframe interval
  • AAC audio at 128Kbps, 44.1kHz sample rate
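
As a rough sketch, a software-encoder command following these settings might look like this; the input file, stream name and the availability of libx264 in your ffmpeg build are assumptions:

    # Push a source to the server with roughly the settings above: H.264 main
    # profile, 4000k target / 6000k max, 25 fps, 2-second keyframe interval,
    # AAC audio at 128k / 44.1 kHz.
    ffmpeg -re -i input.mp4 \
        -c:v libx264 -profile:v main -b:v 4000k -maxrate 6000k -bufsize 8000k \
        -s 1920x1080 -r 25 -g 50 -keyint_min 50 \
        -c:a aac -b:a 128k -ar 44100 \
        -f flv rtmp://yourserver/live/mystream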

And that’s all! I hope you can enjoy doing live events with these techniques.

Optimizing Nginx for (large) file delivery

Some time ago, I needed to host some big files for open download. At first I thought Nginx would perform pretty well without much configuration, but in reality there were complaints about slow and interrupted downloads, which was quite annoying.

After digging through the Nginx docs, I found some changes that fix these problems and deliver high throughput. Here are the tweaks I made to the nginx.conf file:

  1. Turn off sendfile. The Linux sendfile call is known to suffer throughput degradation under high load, so disabling it helps keep throughput up. Also, when serving large files with sendfile, there is no way to control readahead.
  2. Enable TCP nopush. TCP nopush fills each TCP packet to its maximum size before sending, which can increase throughput when serving large files.
  3. Use Nginx's directio to load files. directio improves performance by skipping a number of steps in the kernel when reading files, which speeds up throughput.
  4. Enable AIO for optimal performance. Asynchronous I/O lets reads be done in the kernel without blocking, resulting in faster reads and writes. However, it needs libaio to be installed and Nginx to be recompiled to support it. I used the following flow to recompile Nginx with AIO support.
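
A sketch of that rebuild on a Debian/Ubuntu box; the version and configure flags are assumptions, so mirror the flags reported by nginx -V on your current build and add --with-file-aio:

    # Install libaio, then rebuild Nginx with file AIO support.
    sudo apt-get install -y libaio-dev

    wget http://nginx.org/download/nginx-1.8.0.tar.gz
    tar xzf nginx-1.8.0.tar.gz
    cd nginx-1.8.0
    ./configure --with-file-aio --with-http_ssl_module
    make -j"$(nproc)"
    sudo make install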

The complete nginx.conf should look like this:
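
The original file is not shown here; as an illustrative sketch, a server block combining the tweaks above could look like the following, with paths and sizes as assumptions:

    worker_processes auto;

    events {
        worker_connections 1024;
    }

    http {
        server {
            listen 80;

            location /downloads/ {
                root /var/www;

                sendfile   off;          # avoid sendfile's throughput drop under high load
                tcp_nopush on;           # set TCP_CORK so packets are filled before sending

                directio       4m;       # read files of 4 MB and above with direct I/O
                output_buffers 1 512k;   # larger buffers for big sequential reads

                aio on;                  # asynchronous reads (requires --with-file-aio)
            }
        }
    }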

There are also some lower-level tweaks, like mounting your disks with the noatime flag and using ext4/xfs when serving files.