liked a post by

Creating a Self-Hosted Alternative to Facebook Live using Nginx and Micropub

Facebook Live offers a seamless viewing experience for people to watch your livestream and then see an archived version after you're done broadcasting.

  • When you turn on your camera, a new Facebook post is created on your profile and indicates that you're broadcasting live.
  • When you stop broadcasting, Facebook automatically converts the video to an archived version and shows people the recording when they look at that post later.

I wanted to see if I could do this on my own website, without any third-party services involved. It turns out there is free software available to put this kind of thing together yourself!

The diagram below illustrates the various pieces involved. In this post, we'll walk through setting up each. In this setup, the streaming server is separate from your website. You can of course host both on the same server, but I found it was nicer to fiddle with the nginx settings on a separate server rather than recompiling and restarting nginx on my website's server.

Video Source

You should be able to use any RTMP client to stream video to the server! I've tested this setup with the following video sources:

  • Teradek Vidiu hardware encoder (connected to an HDMI switcher or camcorder)
  • On my Mac, I've used OBS, a cross-platform desktop application
  • On iOS, Larix Broadcaster (also available on Android)

The job of the video source is to perform the h.264 encoding and send the video stream to the RTMP endpoint on the streaming server. Once configured, starting the broadcast is as simple as starting the streaming device.
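
If you just want to test the pipeline without a dedicated encoder, ffmpeg itself can act as an RTMP client. Here's a minimal sketch, assuming a local sample.mp4 and using the application name (rtmp) and stream key (live) that the nginx config and player below expect:

# Push a local file to the streaming server as a test broadcast
ffmpeg -re -i sample.mp4 \
  -c:v libx264 -preset veryfast \
  -c:a aac -ar 44100 \
  -f flv rtmp://stream.example.com/rtmp/live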

Building the Streaming Server

Nginx with RTMP extension

The instructions below are a summary of this excellent guide.

  • Download build system dependencies
  • Download nginx source code
  • Download RTMP extension source code
  • Compile nginx with the extension

Download the build system dependencies

sudo apt-get install build-essential libpcre3 libpcre3-dev libssl-dev

Find the latest nginx source code at http://nginx.org/en/download.html

wget http://nginx.org/download/nginx-1.10.2.tar.gz

Download the rtmp module source

wget https://github.com/arut/nginx-rtmp-module/archive/master.zip

Unpack both and enter the nginx folder

tar -zxvf nginx-1.10.2.tar.gz
unzip master.zip
cd nginx-1.10.2

Build nginx with the rtmp module

./configure --with-http_ssl_module --add-module=../nginx-rtmp-module-master
make -j 4
sudo make install

Now you can start nginx!

sudo /usr/local/nginx/sbin/nginx
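
Because this nginx was built from source, it isn't managed by your distribution's service scripts. Two standard flags that are handy while iterating on the configuration below:

# Check the configuration for syntax errors
sudo /usr/local/nginx/sbin/nginx -t

# Reload the configuration without dropping active connections
sudo /usr/local/nginx/sbin/nginx -s reload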

Configuration

The steps below will walk through the following. Comments are inline in the config files.

  • Set up the nginx configuration to accept RTMP input and output an HLS stream
  • Configure the event hooks to run the bash commands that will make Micropub requests and convert the final video to mp4
  • Set up the location blocks to make the recordings available via http
  • Ensure the folder locations we're using are writable by nginx

First, add the following server block inside the main http block.

server {
  server_name stream.example.com;

  # Define the web root where we'll put the player HTML/JS files
  root /web/stream.example.com/public;

  # Define the location for the HLS files
  location /hls {
    types {
      application/vnd.apple.mpegurl m3u8;
    }

    root /web/stream.example.com; # Will look for files in the /hls subdirectory

    add_header Cache-Control no-cache;

    # Allow cross-domain embedding of the files
    add_header Access-Control-Allow-Origin *;    
  }
}

Outside the main http block, add the following to set up the rtmp endpoint.

rtmp {
  # Enable HLS streaming
  hls on;
  # Define where the HLS files will be written. Viewers will be fetching these
  # files from the browser, so the `location /hls` above points to this folder as well
  hls_path /web/stream.example.com/hls;
  hls_fragment 5s;

  # Enable recording archived files of each stream
  record all;
  # This does not need to be publicly accessible since we'll convert and publish the files later
  record_path /web/stream.example.com/rec;
  record_suffix _%Y-%m-%d_%H-%M-%S.flv;
  record_lock on;

  # Define the two scripts that will run when recording starts and when it finishes
  exec_publish /web/stream.example.com/publish.sh;
  exec_record_done /web/stream.example.com/finished.sh $path $basename.mp4;

  access_log logs/rtmp_access.log combined;
  access_log on;

  server {
    listen 1935;
    chunk_size 4096;

    application rtmp {
      live on;
      record all;
    }
  }
}
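
As noted in the checklist above, the hls and rec folders (plus the public archive folder that finished.sh writes to) need to exist and be writable by the user nginx runs as. A minimal sketch, assuming the nobody user that a source-built nginx defaults to; adjust it to match the user directive in your nginx.conf:

sudo mkdir -p /web/stream.example.com/hls \
              /web/stream.example.com/rec \
              /web/stream.example.com/public/archive
sudo chown -R nobody /web/stream.example.com/hls \
                     /web/stream.example.com/rec \
                     /web/stream.example.com/public/archive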

Starting Streaming

When a stream starts, the nginx extension will run the script defined by the exec_publish hook. We'll set up this script to create a new post on your website via Micropub. This post will contain the text "Streaming Live" and will include HTML with an iframe containing the <video> tag and the necessary Javascript to enable the video player.

The nginx extension takes care of building the HLS files that the player uses, and will broadcast the input stream to any client that connects.

Your server will need to support Micropub for this command to work. Micropub is a relatively simple protocol for creating and updating posts on your website. You can find Micropub plugins for various software, or write your own code to handle the request. For the purposes of this example, you will need to manually generate an access token and paste it into the scripts below.
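
If you want to sanity-check the endpoint and token before wiring them into the hooks, a basic form-encoded Micropub create request looks something like this (the endpoint URL and token are the same placeholders used in the scripts below):

curl https://you.example.com/micropub \
  -H "Authorization: Bearer 123123123" \
  -d h=entry \
  -d "content=Testing my Micropub endpoint"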

Save the following as publish.sh

#!/bin/bash

file_root="/web/stream.example.com/rec"
web_root="http://stream.example.com"

micropub_endpoint=https://you.example.com/micropub
access_token=123123123

# Create the post via Micropub and save the URL
url=`curl -i $micropub_endpoint -H "Authorization: Bearer $access_token" \
  -H "Content-Type: application/json" \
  -d '{"type":"h-entry","properties":{"content":{"html":"<p>Streaming Live</p><iframe width=\"600\" height=\"340\" src=\"http://stream.example.com/live.html\"></iframe>"}}}' \
  | grep Location: | sed -En 's/^Location: (.+)/\1/p' | tr -d '\r\n'`

# Write the URL to a file
echo $url > $file_root/last-url.txt

When the Broadcast is Complete

When the source stops broadcasting, the nginx extension will run the script defined by the exec_record_done hook. This script will eventually update the post with the final mp4 video file so that it appears archived on your website.

  • Update the post to remove the iframe and replace it with a message saying the stream is over and the video is being converted
  • Do the conversion to mp4 (this may take a while depending on the length of the video)
  • Create a jpg thumbnail of the video
  • Update the post, removing the placeholder content and replacing it with the thumbnail and final mp4 file

Save the following as finished.sh

#!/bin/bash

input_file=$1
video_filename=$2
# Define the location that the publicly accessible mp4 files will be served from
output=/web/stream.example.com/public/archive/$2;

file_root="/web/stream.example.com/rec"
web_root="http://stream.example.com"

micropub_endpoint=https://you.example.com/micropub
access_token=123123123

# Find the URL of the last post created
url=`cat $file_root/last-url.txt`

# Replace the post with a message saying the stream has ended
curl $micropub_endpoint -H "Authorization: Bearer $access_token" \
  -H "Content-Type: application/json" \
  -d "{\"action\":\"update\",\"url\":\"$url\",\"replace\":{\"content\":\"<p>The live stream has ended. The archived version will be available here shortly.</p>\"}}"

# Convert the recorded stream to mp4 format, making it available via HTTP
/usr/bin/ffmpeg -y -i $input_file -acodec libmp3lame -ar 44100 -ac 1 -vcodec libx264 $output;
video_url="$web_root/archive/$video_filename"

# Generate a thumbnail and send it as the photo
ffmpeg -i $output -vf "thumbnail,scale=1920:1080" -frames:v 1 $output.jpg
photo_url="$web_root/archive/$video_filename.jpg"

# Replace the post with the video and thumbnail (Micropub update)
curl $micropub_endpoint -H "Authorization: Bearer $access_token" \
  -H "Content-Type: application/json" \
  -d "{\"action\":\"update\",\"url\":\"$url\",\"replace\":{\"content\":\"<p>The live stream has ended. The archived video can now be seen below.</p>\"},\"add\":{\"video\":\"$video_url\",\"photo\":\"$photo_url\"}}"

Note that your Micropub endpoint must support JSON updates, as well as recognizing the photo and video properties as URLs rather than file uploads. The filenames sent will be unique, so it's okay for your website to link directly to the URLs provided, but your endpoint may also want to download the video and serve it locally instead.
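
One detail that's easy to miss: the RTMP module executes these hooks directly, so both scripts need to be executable by the user nginx runs as.

chmod +x /web/stream.example.com/publish.sh /web/stream.example.com/finished.sh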

Web Player

We'll host the HLS video player on the streaming server, so that you don't have to worry about uploading this javascript to your website. We'll use video.js with the HLS plugin.

Create a file live.html in the web root and copy the following HTML.

<!DOCTYPE html>
<html>
<head>
  <link href="https://vjs.zencdn.net/5.8.8/video-js.css" rel="stylesheet">
  <style type="text/css">
    body {
      margin: 0;
      padding: 0;
    }
  </style>
</head>
<body>
  <video id="video-player" width="600" height="340" class="video-js vjs-default-skin" controls>
    <source src="http://stream.example.com/hls/live.m3u8" type="application/x-mpegURL">
  </video>

  <script src="https://vjs.zencdn.net/5.8.8/video.js"></script>
  <script src="https://cdnjs.cloudflare.com/ajax/libs/videojs-contrib-hls/3.6.12/videojs-contrib-hls.js"></script>
  <script>
  var player = videojs('video-player');
  player.play();
  </script>
</body>
</html>

Now when you view live.html in your browser, it will load the streaming player and let you start playing the stream! This is the file that we'll be using in an iframe in posts on your website.

Setting up your Website

As previously mentioned, the scripts above use Micropub to create and update posts. If your website is a fully conformant Micropub endpoint, you shouldn't need to do anything special for this to work!

You will need to make sure that your website allows Micropub clients to create posts with HTML content. You will also need to ensure your endpoint supports the photo and video properties supplied as a URL. You can hotlink the URLs your endpoint receives instead of downloading the files if you want, or your endpoint can download a copy of the video and serve it locally.

Realtime Updates

To really make this shine, there are a few things you can do to enable realtime updates of your posts for viewers.

  • When your Micropub endpoint creates or updates a post, broadcast the HTML of the post on an nginx push-stream channel, and use Javascript on your home page to insert the post at the top of your feed.
  • Use WebSub (formerly known as PubSubHubbub) to publish updates of your home page to subscribers who may be reading your website from a reader.

Doing this will mean someone who has your home page open in a browser will see the new livestream appear at the top as soon as you start broadcasting, and they'll be able to see it change to the archived video when you're done. People following you in a reader will see the new post with the streaming player when the reader receives the WebSub notification!
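
The WebSub half is just an HTTP POST to your hub whenever your home page or feed changes. A minimal sketch, assuming your pages advertise a hub and using a placeholder hub URL:

curl https://hub.example.com/ \
  -d hub.mode=publish \
  -d "hub.url=https://you.example.com/"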

Publish Once, Syndicate Elsewhere

Since the nginx RTMP extension supports rebroadcasting the feed to other services, you can even configure it to also broadcast to Facebook Live or YouTube!

You'll need to find the RTMP endpoint for your Facebook or YouTube Live account, and configure a new block in your nginx settings.

Doing this means you can use Facebook and YouTube as additional syndications of your live stream to increase your exposure, or treat them as an automatic backup of your videos!
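
As a rough sketch, each extra destination is a single push directive inside the application block; the ingest URL and stream key below are placeholders you'd replace with the values from your own account (some services now require RTMPS, which may need extra setup such as an stunnel proxy in front of the plain RTMP push):

application rtmp {
  live on;
  record all;

  # Relay the incoming stream to an additional RTMP destination
  push rtmp://a.rtmp.youtube.com/live2/YOUR-STREAM-KEY;
}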

liked a post by

New side project: Indie Map

I’m launching a new side project today! Indie Map is a public IndieWeb social graph and dataset. It’s a complete crawl of 2300 of the most active IndieWeb sites, sliced and diced and rolled up in a few useful ways:

The IndieWeb’s raison d’être is to do social networking on individual personal web sites instead of centralized silos. Some parts have been fairly straightforward to decentralize – publishing, reading, interacting – but others are more difficult. Social graph and data mining fall squarely in the latter camp, which is why the community hasn’t tackled them much so far. I hope this inspires us to do more!

Indie Map was announced at IndieWeb Summit 2017. Check out the slide deck and video (soon!) for more details. Also on IndieNews.

liked a post by

Sending likes and replies using custom fields

Historically, we would visit someone else’s site to leave a comment or click a like button. Sometimes these interactions would be held within their own site’s data but, frequently, they would be stored remotely – think Facebook Likes or Disqus comments.

In keeping with owning your content, part of the #indieweb ethos is to perform these actions on your own site but pass them back so they show in both locations. The original, however, is held by yourself.

The Post Kinds plugin for WordPress is designed to add support for “responding to and interacting with other sites” by implementing “kinds of posts” – specific post types with a particular purpose. So I thought I’d give it a try.

The plugin didn’t work in the way I’d imagined, however, and caused issues with my theme due to the way it maps its own post types to those already in WordPress.

While new templates can be designed for how it integrates, all I really wanted it for was likes and replies, so the effort required to get everything back as it should be seemed a bit counter-productive.

Back to the drawing board.

A different way

Once webmentions are enabled, the markup required to turn a link to another page into a like or reply is actually pretty simple – specific classes are added to identify their purpose:

  • Reply = class="u-in-reply-to"
  • Like = class="u-like-of"

This would be easy enough to add to the post HTML but, as I avoid the WordPress back end as much as possible, I wanted an easier way.
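
For reference, done by hand that markup would look something like this (with a placeholder URL):

<p><em>Liked: <a class="u-like-of" href="https://example.com/some-post">Some post</a></em></p>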

What if I could automatically add this without a plugin?

As I post from my phone, I started thinking about how I could pass a URL to WordPress along with the post; I was instantly reminded of the trick I used to tell it about the path to microcast episodes:

Custom fields.

A like is usually a short post, so it’s perfect for Drafts and Workflow – custom fields can be populated directly from the ‘Post to WordPress’ action.

Replies are more likely to be longer posts but Ulysses doesn’t, natively, allow for the same behaviour. I would just have to add the custom field after posting as a draft.

Now that the link data could be included with the post, how could it be added with the relevant markup to trigger webmentions?

Functions

I had already used code in functions.php to alter posts (the creation of hashtag links, for example) but this was purely a run-time change altering how the content was displayed, not stored:

add_filter( 'the_content', 'linked_hashtags' );

To trigger webmentions the links need to be included in the actual body of the post, so modifying the_content wouldn’t work. Luckily, WordPress includes a way to do this in content_save_pre, which lets you modify a post’s content before it is saved to the database.

In order to build the webmention links I needed to get the page title as well as the link. The function file_get_contents() reads the contents of a file (in this case a web page) into a string and I used an example found on the web to extract the page title from that:

$str = file_get_contents($replyurl);
$str = trim(preg_replace('/\s+/', ' ', $str));
preg_match("/\<title\>(.*)\<\/title\>/i",$str,$replytitle);

Putting it together

With all the pieces in place, all that remained was to put everything together, running a function to build the links when saving the post:

add_filter( 'content_save_pre', 'mentiontypes' );

Pulling the URL from the custom field is done using get_post_meta(), specifying the post ID and field name. The required string is built and added to the front of the post content before being returned as the new post body.

Because content_save_pre runs whenever a post is saved, editing would cause the link to be re-added on each occasion. To prevent this duplication, I opted to delete the custom field using delete_post_meta() after the link is first inserted.

The full code is included below. Let me know if you can think of any improvements.

Update: Jeremy Cherfas pointed out that some consider file_get_contents() to be insecure so advised using wp_remote_get() instead. The code below has been updated to reflect this change.

function mentiontypes ( $content ) {

  $id = get_the_ID();
  // Custom field names used for the two supported interaction types
  $types = array ( 'Reply', 'Liked' );

  foreach ( $types as $type ) {
    $mentionurl = get_post_meta( $id, $type, true );

    if ( $mentionurl != "" ) {
      // Fetch the target page and pull out its <title> to use as the link text
      $url = wp_remote_get( $mentionurl );
      $str = wp_remote_retrieve_body( $url );
      $str = trim( preg_replace( '/\s+/', ' ', $str ) );
      preg_match( "/\<title\>(.*)\<\/title\>/i", $str, $mentiontitle );

      if ( $type == 'Reply' ) {
        $mentionstr = '<p><em>In reply to: <a class="u-in-reply-to" href="' . $mentionurl . '">' . $mentiontitle[1] . '</a>...</em></p>';
      } else {
        $mentionstr = '<p><em>Liked: <a class="u-like-of" href="' . $mentionurl . '">' . $mentiontitle[1] . '</a>...</em></p>';
      }

      // Prepend the marked-up link and delete the custom field so it isn't re-added on the next save
      $content = $mentionstr . $content;
      delete_post_meta( $id, $type, $mentionurl );
    }
  }

  return $content;
}

add_filter( 'content_save_pre', 'mentiontypes' );

liked a post by

Mastodon, Twitter and publics 2017-04-24

Long ago, I wrote about the theory of social sites, with the then-young Twitter as the exemplar. As Mastodon, GnuSocial and other federated sites have caught some attention recently, I thought I'd revisit these theories.

Flow

A temporal flow with no unread count that you could dip into was freeing compared to the email-like experience of feed readers back then. Now this is commonplace and accepted. Twitter has backtracked from the pure flow by emphasising the unread count for @'s. GnuSocial replicates this, but Mastodon eschews it, and presents parallel flows to dip into.

Faces

Having a face next to each message is also commonplace - even LinkedIn has faces now. Some groups within the fediverse resist this and prefer stylised avatars. On twitter, logos are the faces of brands, and subverting the facial default is part of the appeal to older online forms that is latent in the fediverse.

Phatic

Twitter has lost a lot of its phatic feeling, but for now Mastodon and the others have that pleasant tone to a lot of posts that comes with sharing and reacting without looking over your shoulder. Partly this is the small group homophily, but as Lexi says:

For many people in the SJ community, Mastodon became more than a social network — it was an introduction to the tools of the trade of the open source world. People who were used to writing interminable hotheaded rants about the appropriation of “daddy” were suddenly opening GitHub issues and participating in the development cycle of a site used by thousands. It was surreal, and from a distance, slightly endearing.

Eugen has done a good job of tummling this community, listening to their concerns and tweaking Mastodon to reflect them. The way the Content Warning is used there is a good example of this - people are thinking about what others might find annoying (political rants, perhaps?) and tucking them away behind the little CW toggle.

The existential dread caused by Twitter’s reply all by default and culture of sealioning is not yet here.

Following

Part of the relative calm is due to a return of the following model - you choose whom to follow and it’s not expected to be mutual. However there are follow (and boost and like) notifications there if you want them, which contains the seeds of the twitter engagement spiral. This is mitigated to some extent by the nuances of the default publics that are constructed for you.

Publics

As with Twitter, and indeed the web in general, we all see a different subset of  the conversation. We each have our own public that we see and address. These publics are semi-overlapping - they are connected, but adjacent. This is not Habermas’s public sphere, but de Certeau's distinction of place and space. The place is the structure provided, the space the life given it by the paths we take through it and our interactions.

Since I first wrote Twitter Theory, Twitter itself has become much more like a single public sphere, through its chasing of ‘engagement’ above all else. The federated nature of Mastodon, GnuSocial,  the blogosphere and indeed the multiply-linked web is now seen as confusing by those used to Twitter's silo.

The structure of Mastodon and GnuSocial instances provides multiple visible publics by default, and Mastodon's columnar layout (on wider screens) emphasises this. You have your own public of those you follow, and the notifications sent back in response, as with Twitter. But you also have two more timeline choices - the Local and the Federated. These make the substructure manifest. Local is everyone else posting on your instance. The people who share a server with you are now a default peer group. The Federated public is even more confusing to those with a silo viewpoint. It shows all the posts that this instance has seen - GnuSocial calls it “the whole known network” - all those followed by you and others on your instance. This is not the whole fediverse, it’s still a window on part of it. 

In a classic silo, who you share a server shard with is an implementation detail, but choosing an instance does define a neighbourhood for you. Choosing to join witches.town or awoo.space or botsin.space will give you a different experience from mastodon.social

Mutual Media

By showing some of these subsets explicitly, the fediverse can help us understand the nature of mutual media a bit more. As I said:

What shows up in Twitter, in blogs and in the other ways we are connecting the loosely coupled web into flows is that by each reading whom we choose to and passing on some of it to others, we are each others media, we are the synapses in the global brain of the web of thought and conversation. Although we each only touch a local part of it, ideas can travel a long way. 

The engagement feedback loops of silos such as Twitter and Facebook have amplified this flow. The furore over Fake News is really about the seizures caused by overactivity in these synapses - confabulation and hallucination in the global brain of mutual media. With popularity always following a power law, runaway memetic outbreaks can become endemic, especially when the platform is doing what it can to accelerate them without any sense of their context or meaning.

Small World Networks

It may be that the more concrete boundaries that having multiple instances provide can dampen down the cascades caused by the small world network effect. It is an interesting model to coexist between the silos with global scope and the personal domains beloved by the indieweb. In indieweb we have been saying ‘build things that you want for yourself’, but building things that you want for your friends or organisation is a useful step between generations.

Standards

The other thing reinforced for me by this resurgence of OStatus-based conversation is my conviction that standards are documentation, not legislation. We have been working in the w3c Social Web Working Group to clarify and document newer, simpler protocols, but rough consensus and running code does define the worlds we see.

liked a post by

Slack no more. Why you should use Riot.im and Matrix.org

There's been a trend where open source projects start a Slack for team communication.  I understand why.  The Slack UI is refined, you get searchable, synced conversations on all devices and even emails when you're away.  Nice!  Except the price you pay is vendor lock-in and a closed source code base.  Plus aren't you fed up with creating dozens of Slack accounts for each project?  I know I am.

What if I told you there was an open alternative?  One that even included access to your favorite IRC channels? Well there is.  For the past month I've replaced Slack usage with Riot.im (aka vector.im) and Matrix.org and I am very, very happy with the results.  

Let's start with the UI.  Here's my Web UI right now:

On the left: rooms/channels. I've customized mine into high/low priority with full control over notification settings.

In the middle: the  IRC channel on Freenode.  Read/unread state is maintained on the server so I can easily switch to the Android or iOS app and participate there.

On the right: the member roster.  You can hide it, or use it to initiate direct messages.

And look, here's the same UI, on Android showing the Matrix HQ Room:

As you can see Riot supports video/audio calls using WebRTC and file upload too.  Works really well!

Did I mention that these super high quality clients are all open source?

So what about the underlying service?  Well, we're in luck.  The matrix.org service is also well designed, fast, interoperable and open.  So what exactly is it?  From their FAQ:

Matrix’s initial goal is to fix the problem of fragmented IP communications: letting users message and call each other without having to care what app the other user is on - making it as easy as sending an email.

The longer term goal is for Matrix to act as a generic HTTP messaging and data synchronisation system for the whole web - allowing people, services and devices to easily communicate with each other, empowering users to own and control their data and select the services and vendors they want to use.

Bold and ambitious, and the FAQ has answers to some common questions like why not XMPP and more.

What all this means in practice is that anyone can run the Matrix protocol on their own servers.  Want your own private internal system?  Run your own server disconnected from the network.  Want your chats to stay on your own server?  Run your own, with the benefit of interoperating and communicating with other servers in the mesh.  Want to bridge to another chat system, like IRC?  Yes, you can.

And the IRC integration is very, very good.  As you saw above, identity and channel state are carried through and direct messages are supported. Offline for a while?  Scroll back to your unread indicator.  Or just check your email:

So there you have it.  An open system that enables chat.  A highly polished front end.  Full support for one to one and one-to-many conversations. Yes, it's beta, so there are some rough edges.

Give it a try.  You can find me at @lindner:matrix.org or just drop into some IRC channels, my nick is plindner.

liked a post by

Site updates: Displaying Webmentions!

Webmentions are one of the most interesting and powerful technologies floating around the IndieWeb. At their most basic, they allow sites on the web to interact by sending a notification when a page on one site links to a page on another. When combined with machine-readable metadata like microformats2, they enable really neat social interactions between sites, feeding back likes, comments, bookmarks, shares, event RSVPs, and plenty more.

Receiving Webmentions

A site doesn't have to do all its own Webmention handling, and there are a few services that will handle them for you. I set up my website with the Webmention.io service back in August 2016 (so long ago!) and it's been accepting mentions from other sites since then. And, while there aren't a lot of websites that send Webmentions natively, there are services like Bridgy which uses Webmentions to backfeed social interactions to my site from sites like Facebook and Twitter. Pretty neat!

Sending Webmentions

When I publish a post with a link to a site that supports Webmentions, I still need to actually send that notification. I haven't yet built a tool that does that for my own website, but I have been able to make use of Aaron Parecki's Telegraph, which will take in a link to one of my posts and parse it for outgoing links, find out if the targets of those links support Webmentions, and allow me to send them with the press of a button. It's ridiculously easy to use and has the added benefit of letting me pick-and-choose which links go out as Webmentions.

Displaying Webmentions

Webmention.io has been collecting mentions for my site for something like 6 months, but they don't just magically show up on my site! Webmention.io provides an API for fetching the mention data for individual pages, or all mentions for my domain.
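
Fetching that data is a single GET request. As a rough sketch, assuming the jf2 flavour of the API and a placeholder post URL (the exact query parameters are documented on webmention.io):

curl "https://webmention.io/api/mentions.jf2?target=https://example.com/2017/some-post/"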

My site is built on Jekyll, a static site generator, and I like that so far it doesn't rely on JavaScript for folks to read it. I didn't want to require JavaScript for displaying mentions, so I needed a way to "bake in" my mentions for each post. I was inspired by Aaron Gustafson's jekyll-webmention_io, but found that I didn't like some of the choices in markup or the way that it stored the mention data, so I went ahead and wrote my own. It's still heavily a work-in-progress, but I do hope to release it for other folks to use once it's more stable.

What works? Let's see!

Here's an example post with some Likes and RSVPs (both "yes"es and "maybe"s):

And an example post with some replies backfed from Facebook:

All of these are being displayed with the data that Webmention.io provides via its API. There are some types of post that I would like to handle differently, such as the ❤️ above (which was a Facebook "heart" reaction), and I'd like to include a JavaScript enhancement that will show any new mentions so they aren't sitting in "limbo" until I make a new post.

Overall, I'm really excited to finally be showing these on my site! I think Webmention is a pretty critical part of bringing the "social web" into the IndieWeb and back out of the silos. I am grateful to all the folks that have made this possible with their work on standards and tools!