Diaspora API Real World Usage: A Blog Discussion Timeline

“Dogfooding” software is one of the best ways to wring out any problems with a design or implementation. The Diaspora API was designed with a wide variety of uses in mind, including something potentially as grand as being the replacement backend for a revamped website. With the actual API now “in the can” and waiting for the real PR review, I decided to start dogfooding it by using the API for an actual purpose. I had several ideas, but the first one I decided to latch on to was a blog discussion timeline feature.

We’ve all seen blogs and other sites that use a Disqus comment system or something similar. In the previous WordPress version of this blog I had comments that were part of the system itself, along with a few plugins to prevent spamming and that sort of thing. The current version of the site is static content only. I could get comments by bringing in Disqus or some open source equivalent, but I figured it’d be best to try for some integration with the Fediverse instead. Julian Andres Klode’s article back in October on writing the equivalent of this for Mastodon got my mind going. At that point I was just starting to work on the API for real and I still had my WordPress blog. “Looking at the Diaspora API, something similar should be feasible for this platform,” I thought to myself. I hadn’t expected to move my blog so soon, but that presented a golden opportunity. I had already wired in static links to related Diaspora posts, so this would just be a bit of an extension of that.

Design Introduction

My design goals were pretty basic for MVP-1 (Minimum Viable Product #1):

  • Displays all the interaction statistics for the related post
  • Displays all the comments for the related post
  • For every reshare, also display all of the comments and interaction statistics
  • Show aggregated statistics for the post and all of its reshares

These last two points were important to me for a couple of reasons. Getting a post and its comments essentially reproduces the data flows of the main website; it’d literally just be two REST calls. But have you ever wanted to see all of the comments across all of the reshares of a post you made? Or even just to have some idea of how many comments or likes there are across all of the reshares and your main post? I’ve wanted that sort of thing, but it’s cumbersome to do manually, and there is no automated data flow for doing it on the website either. It’d therefore be a great but simple example of using the API in a way that the site does not.
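To make the aggregation idea concrete, here is a minimal Kotlin sketch of the kind of tallying involved. The data classes and field names are illustrative stand-ins, not the actual shapes of the Diaspora API responses.

```kotlin
// Illustrative shapes only; the real Diaspora API responses differ.
data class InteractionCounts(val likes: Int, val comments: Int)

data class PostSummary(val guid: String, val counts: InteractionCounts)

// Tally likes and comments across the original post and every reshare
// of it, which is the aggregate view the website doesn't offer.
fun aggregate(original: PostSummary, reshares: List<PostSummary>): InteractionCounts {
    val all = listOf(original) + reshares
    return InteractionCounts(
        likes = all.sumBy { it.counts.likes },
        comments = all.sumBy { it.counts.comments }
    )
}
```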

Along with the main features of the end-user display itself, I had some other goals for MVP-1:

  • It should have one endpoint that takes a post GUID and nothing else and returns all of the data necessary for the UI (a sketch of this endpoint follows the stack list below)
  • It should automatically refresh its access token when it expires
  • It should only allow post data to be pulled for the blog author’s posts

To jump-start things I decided to begin with all of the test harness libraries that I had been writing alongside the API development. Those were written in Kotlin, so either a Java or Kotlin server for this new REST endpoint was in order. After fumbling around and pulling my hair out with Spring Boot, my not-so-long-ago favorite microservice platform (see previous post), I decided to start looking at Ktor for my microservice solution. I really only needed the one endpoint and hosting, so that wasn’t too bad. Ktor has its own libraries for making HTTP requests to other servers, but I didn’t want to re-implement any of the requests (although for these simple GETs it wouldn’t have been too bad), so I stuck with the Kotlin Fuel library the test harness code was written with. For the front end I wanted to get some experience with Vue.js, since I hadn’t done much besides read about it and do a simple “Hello World” type thing. I was hoping that, combined with Bootstrap, it would let me make things pretty reasonably easily, but I only got part of the way there (not to tangent too much). So if you look at the stack and technologies of the whole thing you have:

  • The Diaspora Server
    • Standard Diaspora Services
    • Beta of the Diaspora API
    • Nginx hosting static files and doing HTTPS
  • Blog Server
    • Nginx hosting the Jekyll site static content and doing HTTPS
    • Vue.js dynamic content rendering in the static site data
    • Ktor REST service hosting the endpoint and using Fuel for making requests to the API
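For a flavor of what the Ktor side looks like, here is a minimal sketch of the single endpoint, assuming the Ktor 1.x embedded-server API. The timelineJson() helper is a hypothetical stand-in for the code that actually walks the Diaspora API and assembles the response.

```kotlin
import io.ktor.application.call
import io.ktor.http.ContentType
import io.ktor.http.HttpStatusCode
import io.ktor.response.respondText
import io.ktor.routing.get
import io.ktor.routing.routing
import io.ktor.server.engine.embeddedServer
import io.ktor.server.netty.Netty

fun main() {
    embeddedServer(Netty, port = 8080) {
        routing {
            // The one MVP-1 endpoint: a post GUID in, the full discussion
            // timeline JSON bundle out. Nginx proxies requests to this.
            get("/post_interactions/{post_guid}") {
                val guid = call.parameters["post_guid"]
                if (guid.isNullOrBlank()) {
                    call.respondText("""{"error":"missing post_guid"}""",
                        ContentType.Application.Json, HttpStatusCode.BadRequest)
                } else {
                    call.respondText(timelineJson(guid), ContentType.Application.Json)
                }
            }
        }
    }.start(wait = true)
}

// Placeholder so the sketch compiles; the real version performs the
// sequence of Diaspora API calls described in the next section.
fun timelineJson(guid: String): String = """{"post_guid":"$guid"}"""
```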

Implementation and Execution Sequence Details

If we look at the transactions between the major pieces we have a sequence that looks like Figure 1 below:


Figure 1: Discussion Timeline MVP 1 Sequence Diagram

  • Step 1: The end user requests a specific blog post from the Nginx-hosted Jekyll blog website.
  • Step 2: The server returns the static content page, which contains at the bottom a Vue.js component for the discussion timeline.
  • Step 3: The first thing the Vue.js component does is make a request to the blog server’s /post_interactions/{post_guid} endpoint. Diaspora post GUIDs are stored in a variable in the statically generated site to build links and such, so all of this happened automatically when Jekyll originally built the site.
  • Step 4: Nginx proxies this to the Ktor Discussion Service running locally, which hosts the actual endpoint.
  • Steps 5 and 6: The first thing the service does is query the configured Diaspora server’s /api/v1/posts/{post_guid} endpoint to get the post details. If this comes back with an authentication error, it means the stored access token has expired. The service will then try to get a new token with its own refresh token by going to the /api/openid_connect/access_tokens endpoint with grant_type set to refresh_token. If this fails, the entire call chain stops and an authentication error is thrown. If it works, the service makes the original request again with the new token and moves on to the next step (a sketch of this refresh-and-retry flow follows this list).
  • Step 7: With the post details in hand, the service can check that the post’s author is the configured author for the website. We don’t want this service to be used as a trawling mechanism, so it throws an authorization error back to the requesting browser if the post author’s GUID doesn’t match the configured one. Assuming all of that checks out, it’s time to start processing the additional data.
  • Step 8: Using the /api/v1/posts/{post_guid}/reshares endpoint we can get a list of all of the reshares of the original post, returned in Step 9. For each reshared post the service does the following:
    • Step 10: Make a call to /api/v1/posts/{reshared_post_guid} to get the reshared post’s author data and interaction stats, returned in Step 11.
    • Step 12: Make a call to /api/v1/posts/{reshared_post_guid}/comments to get the comments back.
    • Step 13: The endpoint returns a list of all the comments and comment-related data (comment author data, comment time, etc.).
  • Step 14: Once the reshares are processed we call into the /api/v1/posts/{post_guid}/comments endpoint for the main post to get all the comments back in Step 15.
  • Step 16: With all the data in hand it’s time to start massaging it to our particular needs. This is stuff like converting the Markdown to HTML so that links render correctly, et cetera. It’s also building some link information that gets returned so that the UI can create hyperlinks to the hosted posts and comments. Finally, it tallies up all of the likes and comments across all of the reshares so that we can get a total count of each.
  • Step 17: With all of that assembled, the JSON bundle is passed back to the server, which sends it back to the client.
  • Steps 18 and 19: Vue now has all of the data it needs to render the comments and interactions.
  • Steps 20 and 21: As Vue builds the DOM and the browser renders it, the browser uses the avatar field to bring up the images for each poster/commenter. Those images are hosted back on the traditional Diaspora pod software, not the API.
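Here is a rough Kotlin sketch of the refresh-and-retry logic from Steps 5 and 6, using the Fuel library the service is built on. The TokenStore class, the credential parameters, and the regex-based token parsing are all simplifications for illustration; real code would persist tokens and use a proper JSON parser.

```kotlin
import com.github.kittinunf.fuel.httpGet
import com.github.kittinunf.fuel.httpPost

// Simplified token holder; real code would persist these somewhere safe.
class TokenStore(var accessToken: String, var refreshToken: String)

// GET an API path, refreshing the access token once on a 401 as in
// Steps 5 and 6. The pod URL, client ID, and client secret parameters
// are placeholders for whatever configuration the service uses.
fun authenticatedGet(pod: String, path: String, tokens: TokenStore,
                     clientId: String, clientSecret: String): String {
    fun attempt(): Pair<Int, String> {
        val (_, response, _) = "$pod$path"
            .httpGet()
            .header(mapOf("Authorization" to "Bearer ${tokens.accessToken}"))
            .responseString()
        return response.statusCode to String(response.data)
    }

    var (status, body) = attempt()
    if (status == 401) {
        // Access token expired: use the refresh_token grant from Step 6.
        val (_, refreshResponse, _) = "$pod/api/openid_connect/access_tokens"
            .httpPost(listOf(
                "grant_type" to "refresh_token",
                "refresh_token" to tokens.refreshToken,
                "client_id" to clientId,
                "client_secret" to clientSecret))
            .responseString()
        if (refreshResponse.statusCode != 200) {
            // Refresh failed: stop the whole call chain, as described above.
            throw IllegalStateException("Token refresh failed; aborting call chain")
        }
        // Crude extraction for illustration; use a JSON library for real.
        tokens.accessToken = Regex("\"access_token\"\\s*:\\s*\"([^\"]+)\"")
            .find(String(refreshResponse.data))!!.groupValues[1]
        val retry = attempt()
        status = retry.first
        body = retry.second
    }
    if (status != 200) throw IllegalStateException("Request failed with HTTP $status")
    return body
}
```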

As one can see, the standard endpoints did a good job of making getting the data back pretty succinct. Items which naturally have groupings can be pulled back in bulk rather than doing a query for each of them. The one oversimplification above comes with paging. All of these endpoints that return multiple data elements have paging, and in the current beta it is set to return only 15 items at a time. The service therefore keeps track of whether it got an empty page (among other infinite-loop safeguards). If the page wasn’t empty and the returned data had a next link, the service follows that link to get another page of data, and so on until it completes loading all of the reshares, comments, etc.
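The page-following loop can be sketched generically like this; the Page shape and fetchPage callback are hypothetical stand-ins for the Fuel call that fetches and parses one page of results.

```kotlin
// Illustrative page shape: the items on this page plus the `next` link,
// if any, handed back by the paged Diaspora API endpoints.
data class Page<T>(val items: List<T>, val nextUrl: String?)

// Follow `next` links until an empty page (or no next link) comes back,
// with a hard page cap as an extra infinite-loop safeguard.
fun <T> fetchAll(firstUrl: String, maxPages: Int = 100,
                 fetchPage: (String) -> Page<T>): List<T> {
    val all = mutableListOf<T>()
    var url: String? = firstUrl
    var pages = 0
    while (url != null && pages < maxPages) {
        val page = fetchPage(url)
        if (page.items.isEmpty()) break // empty page: nothing left to load
        all += page.items
        url = page.nextUrl              // null when there is no next link
        pages++
    }
    return all
}
```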

Once all is said and done the end result of the rendering looks like Figure 2.


Figure 2: MVP-1 Version of Discussion System Written using Diaspora API

Next Steps and Summary

As I wrote in a previous post, it’s functional but could use some prettying up. That’ll be something I return to in a future MVP release. I had hoped to pretty up the counters and some of the rendering, but for an MVP-1 I thought it more than got the job done, so I wanted to move on to other tasks. Overall, the things I’d like to improve in a future version are:

  • Improved styling of the counters so that it looks something like what it does on the Diaspora site
  • Better rendering of the user icons
  • Some tweaking of the font selections, spacing, etc.
  • Parsing the user references into hyperlinks to their profiles, like on the main site
  • Get it out of an IFRAME by figuring out how to stop the Jekyll parser from clobbering the Vue.js variables without converting this to some webpack-generated Node.js project
  • Improved performance by not having each REST call be a blocking operation. That’d preferably look like using Kotlin coroutines and some parallelism (sketched after this list).
  • General code clean up and refactoring, with mocking tests. This is a one-day effort and I’m not happy with the overall code quality but it works sufficiently for now.
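As a sketch of what that parallelism might look like, here is a hypothetical coroutine version of the reshare processing; fetchReshareData() stands in for the per-reshare REST calls in Steps 10 through 13.

```kotlin
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.async
import kotlinx.coroutines.awaitAll
import kotlinx.coroutines.coroutineScope
import kotlinx.coroutines.withContext

// Illustrative result type for one reshare's details and comments.
data class ReshareData(val guid: String, val json: String)

// Stand-in for the blocking Fuel calls of Steps 10-13; running them on
// Dispatchers.IO keeps the blocking work off the main worker threads.
suspend fun fetchReshareData(guid: String): ReshareData =
    withContext(Dispatchers.IO) { ReshareData(guid, "{}") }

// Fetch every reshare concurrently instead of one blocking call at a time.
suspend fun loadReshares(guids: List<String>): List<ReshareData> = coroutineScope {
    guids.map { guid -> async { fetchReshareData(guid) } }.awaitAll()
}
```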

All of the above are nice-to-haves, since the discussion timeline concept works fine, but I think they would make things better. However, this wasn’t an end in itself but a means to an end. I wanted to try to do something real with the API and see if I ran into any problems. For the most part it worked very well. However, I can already see a few nits that will have to be addressed. For example, the URL of the avatar in the main post is a relative path rather than a full URL. Also, the way I coded up the test harness I have a clean interface for the initial queries, but I’m handling raw URLs handed back to me for the additional pages. That’s inconsistent and something I’d like to make sure doesn’t make its way into real libraries (but it’s not a deal breaker by any means). I think there may be some performance issues we’ll want to watch out for, but that’s part of a bigger discussion with respect to Diaspora queries. Overall though, it was a solid first real-world experience with the API.