
Creating a 24/7 “Lofi Like” stream part 2: Adding Features | Liquidsoap

This post assumes that you have already created your simple 24/7 stream by following the instructions from the first article, https://mikulski.rocks/lofi-stream-24-7guide/ (this post builds on the script template from there), that you are a little bit used to the Linux terminal (I’m showing everything using Ubuntu as an example), and that you are ready to delve into the wilds of Liquidsoap to make your radio more interesting and functional. I have a few examples for you that may be helpful.

DISCLAIMER:
I am neither a programmer nor a Linux expert, but merely an enthusiastic copy-paster who shares what he has managed to figure out. So it is quite possible that knowledgeable experts will find some points or wording incorrect or ridiculous.

Most of the information given here comes from The Liquidsoap Book, which is essentially a textbook from the developers themselves, full of sensible, working examples – yet it is often ignored even by experienced Liquidsoap users, who rely only on the “dry” documentation (which has a drop-down menu with version selection in the upper left corner). To everyone who wants to get more out of their script: I highly recommend reading both!

From one Liquidsoap version to the next, the syntax of some elements changes. A basic list of such changes can be found here: https://www.liquidsoap.info/doc-2.2.0/migrating.html.

It will also be useful to create a virtual machine on your own PC, so that you can run all the experiments on it before uploading it to a VPS.

Scene (canvas) settings

In the first part, I skipped the scene / canvas settings so that the guide would stay true to the quick-start principle and not pile up unnecessary details: the defaults (1280 x 720, 25 fps) will suit most users.
So, first of all, you need to set the resolution and frame rate of the scene – width, height, fps:

settings.frame.video.width.set(1280)
settings.frame.video.height.set(720)
settings.frame.video.framerate.set(30)

#for versions 2.2.x, the syntax is slightly different:
settings.frame.video.width := 1280
settings.frame.video.height := 720
settings.frame.video.framerate := 30

Adding a logo / image

By completing the guide https://mikulski.rocks/lofi-stream-24-7guide/, you will get something like this in the output:

Any image can be overlaid on top of the video source (in this case, a GIF animation) using the video.add_image operator.
For example, to put the channel logo in the top right corner:

background = single("/home/user/radio/background.gif")

background = video.add_image(x=1200, y=20, width=58, height=58, file="/home/user/radio/logo.png", background)

The parameters are pretty self-explanatory:
x, y – the coordinates of the image on the screen
width, height – the image size
file – the path to the image file
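
You can layer several images in the same way – each new video.add_image call draws on top of the previous ones. A minimal sketch, assuming a hypothetical banner.png that you want in the bottom-left corner:

#hypothetical second image layered on top of the background and logo
background = video.add_image(x=20, y=640, width=200, height=60, file="/home/user/radio/banner.png", background)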

Text shadow (workaround)

Unfortunately, Liquidsoap doesn’t have a built-in way to add a shadow to text to improve its readability. But nothing stops you from drawing the same text twice with slightly shifted coordinates.
As with all overlays, the order of the layers matters: the shadow copy goes first in the script, and the main text is drawn after it, on top:

#text shadow
background = video.add_text(color=0x000000, font="/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", speed=0, x=52, y=52, size=26,
get_track_name_text,
background)

#text drawing
background = video.add_text(color=0xFFFFFF, font="/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", speed=0, x=50, y=50, size=26,
get_track_name_text,
background)

Playback progress bar

I got the idea from this discussion https://github.com/savonet/liquidsoap/discussions/3149, where the code is shown only in outline. After some joint brainstorming with get_ked and SpaceMelodyLab, we managed to make a working version, simplified down to one line (example for a 1280 x 720 resolution):

background = video.rectangle(color=0xFFFFFF, x=0, y=700, height=10, width={int(1280.0*source.elapsed(audio) / source.duration(audio))}, background)

#in Liquidsoap 2.2.0 the syntax for drawing rectangles has changed: instead of video.rectangle -> video.add_rectangle

It should be noted that in some cases (probably due to a large amount of text or images on the screen), the rectangle may be rendered with glitches along its lower border.
This can be fixed with a workaround: make the rectangle “thicker” and hide its lower border below the bottom of the scene – for example, y=715, height=15 (the strip extends 5 pixels past the bottom of the screen, leaving 10 pixels visible).
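Applied to the one-liner above (keeping the other parameters unchanged), the workaround looks like this:

#15-pixel bar with its bottom 5 pixels hidden below the scene
background = video.rectangle(color=0xFFFFFF, x=0, y=715, height=15, width={int(1280.0*source.elapsed(audio) / source.duration(audio))}, background)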

Showing the next track in the queue

When using a playlist, it is possible to “peek” at which track will be played next. To do this, add a log_next function to the script that saves the metadata to a file (here called next_song), and pass it to the playlist source via the check_next argument so that it is called for every upcoming track.

def log_next(r) =
  m = request.metadata(r)
  file.write(data="Upcoming Next : #{metadata.artist(m)} - #{metadata.title(m)}", "/home/user/radio/next_song")
  true
end

audio = playlist(reload_mode="watch", "/home/user/radio/music", check_next=log_next)

With the file.getter operator, create a getter that regularly re-reads the contents of the next_song file. This text is then drawn on the screen via the familiar video.add_text:

next_song = (file.getter("/home/user/radio/next_song"))

background = video.add_text(color=0xFFFFFF, font="/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", speed=0, x=50, y=85, size=20,
next_song,
background)

Requests

In Liquidsoap, there are several ways to build a system for queuing up listener requests. I’ll describe just the one that I was able to figure out; the method is very simple but effective:

queue = request.queue()
audio = fallback(track_sensitive=true, [queue, audio])

def on_request() =
  fname = string.trim(file.contents("/home/user/radio/request"))
  log.important("Playing #{fname}.")
  queue.push.uri(fname)
end
file.watch("/home/user/radio/request", on_request)

A source with requests – queue – is created. When a request appears, the fallback operator switches from the main playlist (audio) to the queue; when the request queue becomes empty, playback switches back to the audio source.
In other words, fallback plays the first active source in the list (prioritized from left to right: [queue, audio]) and, once it is no longer active, moves on to the next one.
If track_sensitive=true, Liquidsoap waits until the current track ends and only then switches sources; if track_sensitive=false, it cuts the current playback and switches immediately.
The on_request function reads the contents of the request file (which must be created before running the script), and when a full, correct path to a track appears in it as text (for example, “/home/user/radio/music/Artist - Title.mp3“), that track is pushed into the queue.
Finally, file.watch tells Liquidsoap to watch for changes in the request file and to run the on_request function whenever they occur.

This is very convenient in the context of chatbots: the user sends the command !sr Artist - Title to the chatbot, the argument (Artist - Title) is extracted from it and saved to the request file, for example in the form of

'/home/user/radio/music/' + argument + '.mp3' 

Where argument is the text following the !sr command, i.e. Artist - Title.
This is also the main disadvantage of the method: the request must match the file name of the track exactly. Any typo or extra character will cause Liquidsoap to fail to find the file, and the request will not be processed.
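
Before wiring up a chatbot, you can test the mechanism by hand: writing a valid path into the request file from the terminal should push that track into the queue. A quick check, assuming the file and folder names used above:

# write a request manually; file.watch picks up the change and queues the track
echo "/home/user/radio/music/Artist - Title.mp3" > /home/user/radio/request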

In general, it is not difficult to build such a chatbot: there are quite a lot of instructions and templates for current platforms and messengers on the web.

Jingles

Adding your signature interjections (“You’re listening to the radio…”, “Thanks for sticking with us…”, etc.) can also be done in a variety of ways, but I’ll focus on the simplest one:

jingles = playlist(reload_mode="watch", "/home/user/radio/jingles")
audio = rotate(weights=[1, 5], [jingles, audio])

A jingles source is created from a playlist pointing to the folder where all the jingle audio files (or just one) are stored. Then, using the rotate operator and the weights argument, it is specified that 1 jingle will be played after every 5 tracks of the main playlist.
Naturally, these values can be changed at your discretion.
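For example, to play one jingle after every three tracks of the main playlist, the same rotate line would become:

audio = rotate(weights=[1, 3], [jingles, audio])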

Direct connections to the stream | input.harbor

If you want to go live, for example with your DJ set or a podcast, this is also possible thanks to the input.harbor operator.
input.harbor runs an Icecast-compatible server inside Liquidsoap, which you can connect to remotely with any program that can stream to Icecast servers – for example, Mixxx or the ShoutVST plugin.
It is enough to specify the stream name (mount), the connection port and a password:

live = input.harbor("live", port=8000, password="hackme")

To switch from the main playlist to input.harbor when it becomes active, you can use the same fallback operator:

live = input.harbor("live", port=8000, password="hackme")
audio = fallback(track_sensitive=false, [live, audio])

#could be done this way, but the documentation recommends creating a new source for this combination:
#audio_live = fallback(track_sensitive=false, [live, audio])
#hence, you will need to change the audio source in the mux_video section as well:
#radio = mux_video(video=background, audio_live)

input.harbor has a number of arguments, the most interesting of which control the buffer: by default buffer = 12. (seconds) and max (the maximum buffer size) = 20. (seconds). The values are of type float, so they must be written with a decimal point.
The minimum values I have been able to set while keeping the connection working correctly are 1. and 2.:

live = input.harbor("live", port=8000, password="hackme", buffer=1., max=2.)

Unfortunately, I can’t tell you how to extract metadata from input.harbor.

If you want to output your microphone signal on top of the main playlist, you can use the add operator and connect to the stream directly from your DAW via the ShoutVST plugin:

mic = input.harbor("live", port=8000, password="hackme", buffer=1., max=2.)
audio_mic = add([mic, audio])

#You must create a new source here, otherwise the metadata display will be broken.
#Therefore, you will need to change the audio source in the mux_video section as well:
#radio = mux_video(video=background, audio_mic)

By default, the add operator will try to normalize the combined volume of the microphone and the playlist, which is not always convenient. You can disable this behaviour to keep the microphone volume “as is”:

audio_mic = add(normalize=false,[mic, audio])

Or, using the weights argument, make the microphone, for example, twice as loud as the playlist:

audio_mic = add(weights=[2., 1.], [mic, audio])

If you are using a firewall, don’t forget to open the required port and to specify which IP is allowed to connect to it. The address Liquidsoap listens on can also be set in the script (preferably at the very beginning):

settings.harbor.bind_addrs.set(["0.0.0.0"])

#In version 2.2.0 -> settings.harbor.bind_addrs := ["0.0.0.0"]
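
As for the firewall rule itself, here is a minimal sketch assuming Ubuntu with ufw; 203.0.113.10 is a placeholder for the IP you will be streaming from:

# allow only your own IP to reach the harbor port (replace 203.0.113.10)
sudo ufw allow from 203.0.113.10 to any port 8000 proto tcp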

And that’s not all: you can also switch the playlist to a live video broadcast (for example, from OBS) using the input.rtmp operator.
A separate post is dedicated to this method:
https://mikulski.rocks/liquidsoap_input_rtmp_en/

Interactive values | Real-time control of the source volume

Another curious ability of Liquidsoap is interactive control of individual elements via a built-in web interface – for example, adjusting the volume of a source (which will come in handy when connecting a microphone).
First, you need to activate this web interface by adding a line to the script header:

interactive.harbor(port=8010, uri="/control")

Since port 8000 is already taken by the microphone connection to input.harbor, you should specify any free port (e.g. 8010 – don’t forget to configure the firewall!). In uri you can specify any other address instead of /control at your discretion.
The web interface will be available at: http://YOUR_VPS_IP:8010/control

Here’s what you’ll see when you open this page in your browser after running the script.
There’s nothing here yet, as there are no values in the script yet.
Now is the time to add them.

To control the volume of a source, add the following:

volume = interactive.float("volume", 1.)
audio = amplify(volume, audio)

Where 1. is the default volume of the source (conventionally, 100%).
Now you can adjust the value with the arrows or enter it manually – the volume will change in real time!

But it is much more convenient to add limits: for example, if there is no need to raise the volume above 100% or to lower it below 50%. In this case, the value field turns into a slider:

volume = interactive.float("volume", min=0.5, max=1., 1.)
audio = amplify(volume, audio)

Video Playlist

Up until now, we’ve been looking at everything in the context of “we have an audio playlist and a single looped animation as a background”. You’ve probably already guessed that, instead of the single operator for the background, you can use playlist in exactly the same way.
But can you play a playlist of videos that have their own audio track? Yes, of course. The only thing is that it’s more resource-intensive: for 720p, a VPS with two CPU cores is desirable.
You will also need to fix the script a bit: remove the line mixing the audio and video sources (mux_video), as it is no longer needed, and attach all images and text to the video source instead:

videos = playlist(reload_mode="watch", "/home/user/radio/videos_mp4")
videos = mksafe(videos)
#this line is no longer needed: radio = mux_video(video=background, audio)
#and instead of background = video.add_text(..., get_track_name_text, background),
#background = video.add_image(..., background) and background = video.rectangle(..., background)
#you now write:
videos = video.add_text(..., get_track_name_text, videos)
videos = video.add_image(..., videos)
videos = video.rectangle(..., videos)

My playlist consists only of mp4 files in 720p with a bitrate of around 3500 kbps, but as I understand it, any format that ffmpeg supports will do.

However, if you then try to connect the microphone via input.harbor and the add operator, you will unfortunately get an endless stream of Buffer overrun messages in the log and no microphone sound on the stream.
In Liquidsoap 2.2.0, thanks to the new multitrack feature, this issue can be worked around as well.

Liquidsoap-daemon

In the first part of the guide I showed how to run the script as a background process with the command

nohup liquidsoap <script_name>.liq &

and stop it with

killall liquidsoap

which is inconvenient in some cases and not quite right.
It is much better to create a background system service and manage it via systemctl commands – exactly like the Nginx server from this guide -> Your Own Restream Server.

For this purpose, the Liquidsoap developers have prepared a repository that automatically makes all the necessary settings in the system to add such a service for any Liquidsoap script.

The first thing to do is to clone this repository from GitHub:

git clone https://github.com/savonet/liquidsoap-daemon.git

And go to the downloaded folder:

cd liquidsoap-daemon

Now all that remains is to execute the bash script (as a user with sudo rights), passing as its argument the name of the Liquidsoap script, which should previously have been placed in the “~/liquidsoap-daemon/script/” directory (as recommended by the developers):

bash daemonize-liquidsoap.sh <script-name>
or
bash daemonize-liquidsoap.sh <script-name>.liq

or by writing the full path to it:

For example,
bash daemonize-liquidsoap.sh /home/user/radio/<script-name>.liq

A systemd service named <script-name>-liquidsoap will now appear in the system.

Start:
sudo systemctl start <script-name>-liquidsoap

Restart (e.g. after making changes to a script):
sudo systemctl restart <script-name>-liquidsoap

Stop:
sudo systemctl stop <script-name>-liquidsoap

View the status of the service:
sudo systemctl status <script-name>-liquidsoap

As far as I understand, the service configuration already includes automatic startup when the VPS reboots and an automatic restart if the script fails and stops.
The log (unless the Liquidsoap script saves it to another directory) will be stored in the “~/liquidsoap-daemon/log/” folder.
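
Since the script now runs as a regular systemd unit, you can also check what systemd reports about it (start, stop, restart events and any captured output) via journald, for example:

sudo journalctl -u <script-name>-liquidsoap -f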

Result

After all the additional manipulations, the script will now look like this:

settings.frame.video.width.set(1280)
settings.frame.video.height.set(720)
settings.frame.video.framerate.set(30)
settings.harbor.bind_addrs.set(["0.0.0.0"])
interactive.harbor(port=8010, uri="/control")

#metadata functions
song_author = ref('')
def apply_song(m) =
  song_author := m["artist"]
end

song_title = ref('')
def apply_song2(m) =
  song_title := m["title"]
end

def get_track_name_text() =
  "$(artist) - $(title)" % [
    ("artist", !song_author),
    ("title", !song_title)
  ]
end

def log_next(r) =
  m = request.metadata(r)
  file.write(data="Upcoming Next : #{metadata.artist(m)} - #{metadata.title(m)}", "/home/user/radio/next_song")
  true
end

#audio sources
audio = playlist(reload_mode="watch", "/home/user/radio/music", check_next=log_next)
audio = mksafe(audio)
queue = request.queue()
audio = fallback(track_sensitive=true, [queue, audio])

#live = input.harbor("live", port=8000, password="hackme")
#audio = fallback(track_sensitive=false, [live, audio])

#volume control (applied to the playlist before the mic is mixed in)
volume = interactive.float("volume", min=0.5, max=1., 1.)
audio = amplify(volume, audio)

#microphone on top of the playlist
mic = input.harbor("live", port=8000, password="hackme", buffer=1., max=2.)
audio_mic = add(weights=[2., 1.], [mic, audio])

#requests
def on_request() =
  fname = string.trim(file.contents("/home/user/radio/request"))
  log.important("Playing #{fname}.")
  queue.push.uri(fname)
end
file.watch("/home/user/radio/request", on_request)

#video source
background = single("/home/user/radio/background.gif")
background = video.add_image(x=1200, y=20, width=58, height=58, file="/home/user/radio/logo.png", background)

#progress bar
background = video.rectangle(color=0xfcb900, x=0, y=700, height=10, width={int(1280.0*source.elapsed(audio) / source.duration(audio))}, background)

#calling metadata
audio.on_track(apply_song) 
audio.on_track(apply_song2) 

#next-song file reading
next_song = (file.getter("/home/user/radio/next_song"))

#text shadows
background = video.add_text(color=0x000000, font="/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", speed=0, x=52, y=52, size=26,
get_track_name_text,
background)
background = video.add_text(color=0x000000, font="/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", speed=0, x=52, y=87, size=20,
next_song,
background)

#drawing text
background = video.add_text(color=0xFCB900, font="/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", speed=0, x=50, y=50, size=26,
get_track_name_text,
background)
background = video.add_text(color=0xFCB900, font="/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", speed=0, x=50, y=85, size=20,
next_song,
background)

#mixing sources
radio = mux_video(video=background, audio_mic)

#rtmp+codec
url = "rtmp://localhost/live"
enc = %ffmpeg(format="flv",
%video(codec="libx264", width=1280, height=720, pixel_format="yuv420p",
b="750k", maxrate="750k", minrate="750k", bufsize="1500k", profile="Main", preset="veryfast", framerate=30, g=60),
%audio(codec="aac", samplerate=44100, b="128k"))

#output
output.url(fallible=true, url=url, enc, radio)

If this material is useful to you and you have the opportunity,
then support the author and the site with a small tip:
https://hipolink.me/mikulski/tips
Thanks💛

