“Dogfooding” software is one of the best ways to wring out any problems with a design or implementation. The Diaspora API was designed with a wide variety of uses in mind, including something potentially as grand as serving as the replacement backend for a revamped website. With the actual API now “in the can” and waiting for the real PR review, I decided to start dogfooding it by using the API for an actual purpose. I had several ideas, but the first one I latched on to was a blog discussion timeline feature.
We’ve finally done it! Frank and I were able to get the last of our internal reviews done, and the API code is now in the “real” code review for integration into the main Diaspora development branch. That alone is an amazing thing, but I have a second piece of big news related to the API as well. Today I was able to stand up a first version of a blog “Discussion Browser” that uses the API to pull all comments and other interactions for a blog post that is associated with a specific Diaspora post. I’m going to do a more detailed write-up of that later, but as a first cut it worked pretty well and showed that the API design and the code itself are functioning well.
I was so excited when I finally got a real pod interacting with the API that I knew I’d have to get it written down before I could get to sleep. However, before jumping right into the interactions themselves, I decided to take some time describing how a piece of software is allowed to do anything with a server in the first place. In Part 1 I laid all of those details out to get across some very important points:
- We are using a standard (OpenID/OAuth2) protocol for doing this
- Users have to give explicit permissions to an application, including being told what it is and is not asking to do
- There are additional security measures in place even after an application has been granted permissions.
This article essentially details the very first communications and gives people a feel for what the Diaspora API specification looks like in practice not just in theory.
Okay, I’m obviously over-excited about the fact that something which I knew should work actually did work. However, all the previous API usages were on servers on the local machine, not behind an HTTPS link, and not being shared with the rest of the fediverse. This one breaks through that barrier, so I have decided to document it in excruciating detail. For the first pass all of these interactions were done manually using cURL and the Firefox RESTClient plugin. The next step, which will be coming up very shortly, will be creating the very first server to use this for a real purpose (I’ll document that as it happens). This document goes over the nitty-gritty details of the whole authentication piece; the next article will go into the calls themselves. If you don’t care about the nuances of the authentication steps then just skim or skip this and go to Part 2. So without further ado, here we go…
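The heart of those manual cURL calls is the standard OAuth2 authorization-code token exchange. Here is a minimal Ruby sketch of building that request; the token path is a placeholder I made up for illustration, since the real endpoint is whatever the pod's OpenID Connect discovery document reports:

```ruby
require "net/http"
require "uri"

# Build the OAuth2 authorization-code token exchange request.
# NOTE: the token path below is an assumed placeholder, not a confirmed
# Diaspora route; discover the real one from the pod's OpenID metadata.
def build_token_request(pod:, code:, client_id:, client_secret:, redirect_uri:)
  uri = URI.join(pod, "/openid_connect/access_tokens") # assumed path
  request = Net::HTTP::Post.new(uri)
  request.set_form_data(
    "grant_type"    => "authorization_code",
    "code"          => code,           # the code returned after user consent
    "client_id"     => client_id,
    "client_secret" => client_secret,
    "redirect_uri"  => redirect_uri
  )
  request
end
```

Sending it is a single `Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }` call; the JSON response carries the access token used on every subsequent API request.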
As we begin to wrap up the year we are also beginning to wrap up the API, getting ready for the “real” pull request for the API code. We are down to one last code review of the final clean-up pass before we have it looked at by the core team. I think the code is pretty solid, but it will of course have problems that are discovered during review and testing. Ah, the testing, real-world testing that we really need to do. To get there we need a test server. Thankfully that’s all taken care of now, and we’ve had the first data interactions with a pod.
Today is a momentous day in the Diaspora API development saga. Today we have completed primary development of the API, the unit tests, and the external test harness. There are still two code reviews between that and the real code review for integration into the main development branch, but all of the major work is complete. What does that mean exactly?
Boy are we really coming down the home stretch now! All of the scopes are implemented in every API endpoint now with their corresponding tests to confirm that the permissions are working correctly. The most difficult of those, I thought, was the Streams, again. After beating my head against a rock a lot yesterday I put the whole project down for the day and then picked it up today. After warming up on the other endpoints I started working my way through getting Streams working such that it could filter private data. After a bit of fumbling I finally got a relatively simple solution to the problem and got all the tests passing correctly.
It’s been almost a week since there’s been an update on the API. I’ve been busy with other things and travel, so it didn’t get as much focus as I would have liked to give it. However, there has been some progress. Thanks to Frank’s help we’ve been able to get all of the side branches merged into the core API branch, so we are now coming down the home stretch on getting it ready for integration. The first order of business is getting the OpenID security work squared away. I’m still working on understanding that better, and the more I go back to it the better it looks. There is still the question of the "refresh token" workflow, but work has been done on it, so if anything it’s a small tweak or a documentation fix rather than from-scratch development. Even if it were a from-scratch effort, with the code base I have and the examples I mentioned before it shouldn’t be a huge effort to get working. Most of the security work is therefore integrating the much more fine-grained security scopes which Senya has been honing.
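The refresh-token workflow in question is the standard OAuth2 one: keep using the access token until shortly before it expires, then trade the refresh token for a fresh pair instead of making the user re-authorize. A minimal sketch, where the exchange function is a stand-in for the actual HTTP call to the pod:

```ruby
# Token bundle as returned by an OAuth2 token endpoint (field names per
# RFC 6749; this struct is illustrative, not Diaspora's actual model).
Tokens = Struct.new(:access_token, :refresh_token, :expires_at)

# Return a usable token bundle, refreshing via the provided exchange
# callable when the current one is at (or within 60 seconds of) expiry.
def current_access_token(tokens, exchange)
  if Time.now >= tokens.expires_at - 60
    exchange.call(grant_type: "refresh_token",
                  refresh_token: tokens.refresh_token)
  else
    tokens
  end
end
```

The point of the workflow is exactly what the post is after: the user consents once, and the app quietly renews its own credentials from then on.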
With the documentation changes wrapped up, but holding off on PRs until things solidify a bit more after the code scrub process, it was time to move on to the OpenID deep dive and review. Up until now I’ve been working with an authorization workflow that required me to request a new token every 24 hours and have the user authenticate it. I wasn’t sure how much of that was because of the flow I chose or intrinsic to how it was coded. As I continued to go over the OpenID documentation and other articles on the process I just couldn’t get it working. It was then clear to me that what I needed was an example to help.
Luckily Nov Matake created some example projects to go along with his OpenID gems: one for the OpenID Connect Provider (the server side) and one for the OpenID Relying Party (the app side). I figured with those everything would be good to go. After all, this was the same code he had running up on Heroku, but I wanted to see the nitty-gritty details and set it up on both sides, since I was going to need to do that with Diaspora and the test harness, or any other API use case I may be interested in. As I quickly found out, these projects had never been updated; they still rely on old versions of Ruby and Rails. Instead of trying to downshift everything to those versions I decided to fork the projects and get them running under Ruby 2.4+ and Rails 5. Unfortunately that derailed my entire Diaspora development effort for the day. The upside is that the community will have modern versions of these projects to use. I intend to polish them up a little more and then issue a PR back to the original project. My versions, however, can be found on my GitHub profile, with the Connect Provider here and the Relying Party here.
In the process of doing these upgrades I learned a lot more about porting Ruby code up from older versions. I also got a much better understanding of some OpenID flows. I’m going to use that to continue moving forward on the review of the implementation in the API while looking at client-side implementation details. Because of the complexity of that whole process I think it’s probably something developers could use a good amount of help with via blog posts and examples.
- Documentation updates are complete but waiting for PRs for after the code scrub
- Updated Ruby on Rails OpenID examples from Nov Matake to work under Rails 5
You can follow the status dashboard at this Google Sheet as well.
Yesterday I said the paging API was complete but needed to be reviewed. As I talked over some elements with people and in exchanges on Diaspora, I realized there were a couple of tweaks I needed to make. The first suggestion I implemented was to have paging on any endpoint that returns multiple elements. The second was to add a parameter for specifying the number of elements requested. I was pleased that supporting that feature was really just a two-line code change. However, while in there I decided to beef up the defensive programming in some other places.
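The defensive handling of a caller-supplied page-size parameter can be sketched in a few lines: fall back to a default when the value is missing or malformed, and clamp it so a client cannot request an unbounded number of elements. The names and limits below are illustrative, not Diaspora's actual ones:

```ruby
DEFAULT_PER_PAGE = 15   # assumed default page size
MAX_PER_PAGE     = 100  # assumed hard cap

# Parse the requested page size defensively: non-numeric or missing
# values fall back to the default, and the result is clamped to a
# sane range so one request can't drag the whole table back.
def per_page_from_params(params)
  requested = Integer(params["per_page"], exception: false) || DEFAULT_PER_PAGE
  requested.clamp(1, MAX_PER_PAGE)
end
```

(`Integer(..., exception: false)` needs Ruby 2.6+; on older Rubies a rescue around `Integer()` does the same job.)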
After that was done I moved on to implementing the ability to vote on polls. There was no obvious home for it, but since it interacts with a post I put it on the Post Interactions endpoint rather than create a dedicated endpoint with just one method. It aliases to a path in the same way as the rest of the interactions, so I think it’s consistent. That also required moving some logic from the existing endpoint into a service and then having both call it. Since there were no tests around that capability I ended up writing those as well. With that done it’s time to move on to the documentation and then start hitting the OpenID review.
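The refactor described above, pulling shared logic into a service that both the web controller and the API endpoint call, might look roughly like this. All of the class and method names here are hypothetical stand-ins, not Diaspora's actual code:

```ruby
# Hypothetical service object for poll voting, shared between the web
# controller and the API's Post Interactions endpoint.
class PollParticipationService
  class NotFound < StandardError; end

  def initialize(user)
    @user = user
  end

  # Records the user's vote for one answer on the poll attached to a post.
  def vote(post, answer_id)
    poll = post.poll or raise NotFound, "post has no poll"
    answer = poll.answers.find { |a| a.id == answer_id } or
      raise NotFound, "unknown poll answer"
    poll.record_vote(@user, answer)
  end
end
```

With the logic in one place, the endpoint method reduces to parameter parsing, a service call, and rendering, which is also what makes it straightforward to unit test.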
- Incorporated suggestions in the paging in the API
- Completed the Poll Voting method
- Moving on to documentation updates
You can follow the status dashboard at this Google Sheet as well.
After a day of coding the paging is now in every endpoint that should have it. That means that we have paging right now for:
- Conversations (but not messages in conversations)
Because of the size of the code changes I imagine there will be at least some tweaking, and possibly some larger refactoring afterward too, but it’s in a solid, working state that is as performant as the existing standard endpoints, so I’m happy with it.
Now it’s on to the rest of the checklist. With the scopes still being rounded out I’m going to hold off on the security review for a little while longer. The first low-hanging fruit I’m working on is adding the ability to vote on polls to the API spec. It was an oversight in the original design but it should be easy to do; I just need to decide which endpoint to add it to. After that I’m going to double back to the mundane documentation update task. At that point I think it’ll be time to get up to my elbows in the OpenID code and get ready to make changes for the new scopes.
- Paging is now complete and ready for review
- Starting work on voting on polls through the API
You can follow the status dashboard at this Google Sheet as well.
Paging, paging, and more paging. I haven’t been committing as much time to development the last few days as I’d like. Some of that is frustration with the development process on the paging, which has been a lot of trial and error; some of it is just how my schedule is working out. There is progress, though. I have what I’d consider to be a rounded-out API paging infrastructure in place. It has migrated a bit since the last update because, as I tried to use it, I wasn’t happy with it. I’m still not happy with it, but it is suitable. There will probably be some additional tweaking before final integration, but what it allows is for us to have paging. I ended up wringing out design problems by wiring it into the Aspects Contacts endpoint method (to test index-based paging) and the User’s Posts endpoint (to test time-based paging). With all of that working and unit tested, I’m now moving on to adding it to the rest of the endpoints. There have also been some additional discussions on the permission scopes for the endpoints, and I think we’ve converged on a good final set.
- Paging API infrastructure modified to current MVP (I think) status
- Paging API now used in the Aspects Contacts and the Users Posts method
- Rounding out the remaining endpoints and updating the test harness
You can follow the status dashboard at this Google Sheet as well.
Coming up with a paging infrastructure for the API while looking at all of the ways it could be used and abused hasn’t been fun, though it has been worthwhile. I’ve actually learned a lot more about some of the nuances of how ActiveRecord and related libraries build up their queries. I’ve also thought a lot more about the nature of the queries within Diaspora. At the same time my head is numb, and for all of the effort I only got a half-completed design and less than 100 lines of code across two classes (not that more lines is necessarily better).
So what we will have are two paginator types: index based and time based. The standard methods across the two are:
- page_data: returns the current page of data for passed in query
- next_page: returns information to go to the next page of data
- previous_page: returns information to go to the previous page of data
The previous/next page functions will either return a new paginator object that corresponds to the next page or it will return a string that represents query parameters that can be passed back out from a REST endpoint.
Both paginator types take a query object that then has additional paging logic wrapped around it. For an index-based query this just wraps the WillPaginate library. A time-based query is a little more complicated: we aren’t simply moving around indexes, we’re actually doing some time math, all of which is coded directly in the class. The big difference between the two comes in how ordering happens on the SQL query. In both cases you can pass in an ordered query without throwing an error. With the IndexPaginator, though, one probably wants to pass in their preferred order, otherwise they’ll get whatever the natural order from the database is. The TimePaginator wants to keep control over sorting by whichever time field the calling code is using, so adding an additional sort could create confusing results.
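The shape of the two paginator types can be sketched in plain Ruby, with arrays standing in for ActiveRecord relations. The class and method names follow the description above, but the internals are my own illustrative guesses, not Diaspora's actual code (and the real next/previous methods can also return query-parameter strings, which this sketch omits):

```ruby
# Index-based paging: slide a page window over an ordered collection.
class IndexPaginator
  def initialize(items, page: 1, per_page: 15)
    @items, @page, @per_page = items, page, per_page
  end

  # page_data: the current page of data for the passed-in "query"
  def page_data
    @items.slice((@page - 1) * @per_page, @per_page) || []
  end

  def next_page
    self.class.new(@items, page: @page + 1, per_page: @per_page)
  end

  def previous_page
    self.class.new(@items, page: [@page - 1, 1].max, per_page: @per_page)
  end
end

# Time-based paging: no index to move; instead filter on a timestamp
# field, which is why this paginator controls the ordering itself.
class TimePaginator
  def initialize(items, time_field:, before: Time.now, per_page: 15)
    @items, @time_field, @before, @per_page = items, time_field, before, per_page
  end

  def page_data
    @items.select { |i| i[@time_field] < @before }
          .sort_by { |i| -i[@time_field].to_f }   # newest first
          .first(@per_page)
  end

  def next_page
    last = page_data.last
    self.class.new(@items, time_field: @time_field,
                   before: last ? last[@time_field] : @before,
                   per_page: @per_page)
  end
end
```

The time-based "next page" is just "everything older than the last item I showed you", which is what makes it stable even while new posts arrive at the top of a stream.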
Now that the paginators are done I need to add a presenter class that knows how to turn the query parameters into a “link” field with full URLs, per the API specification, and to update the services to call into and return the paginated data instead of their current form. I think I’ll do one endpoint that uses indexes, like contacts, followed by one that uses time, like user posts, and then start filling it out the rest of the way from there.
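That presenter's job can be sketched as follows; the base URL, parameter names, and class name are hypothetical placeholders rather than the spec's exact format:

```ruby
require "uri"

# Hypothetical presenter that turns pagination query parameters into
# full next/previous URLs for the response's "link" field.
class PaginationPresenter
  def initialize(base_url, params)
    @base_url, @params = base_url, params
  end

  # Rebuild the full URL with the given parameter overrides applied.
  def link_for(overrides)
    query = URI.encode_www_form(@params.merge(overrides))
    "#{@base_url}?#{query}"
  end

  # "previous" is omitted on the first page, since there is nothing there.
  def links(current_page)
    {
      "next"     => link_for(page: current_page + 1),
      "previous" => (link_for(page: current_page - 1) if current_page > 1)
    }.compact
  end
end
```

The nice property of emitting full URLs is that a client never has to understand the paging scheme; it just follows whichever link the server handed back.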
- Finished experimenting with the base pagination classes and completed them
- Starting to wire pagination into the first endpoints
Now that we’ve hit feature-complete status it’s about getting more of the legwork done to get us really ready for integration. The first necessary feature is paging. As I wrote earlier, some endpoints don’t need paging, and all of them technically treat it as optional. However, to be really useful we need paging for several endpoints like posts, photos, conversations, et cetera. It looks like we can leverage a lot of the way we do paging in the lower levels for streams and just create a standard pager class that the API endpoints that need it can use. I’ve laid out how I want to approach that, so now it’s on to implementation.
Along with the progress on the paging there has been progress in other, more mundane areas. All of these features were developed in side branches which needed to be reviewed and integrated into the main API branch. We are down to one endpoint left before the API branch itself is feature complete, not just having the code. All of the branches are orthogonal except for the routes.rb file and the en.yml messages file, so it’s a pretty easy integration, but it needs to be done properly. In the meantime we are also having discussions about the finer-grained permission sets that apps will request and users will be notified about. For example, an app could be given permission to only read posts but to read/write comments on posts, and so on. The endpoints already check for read/write tokens, but those are broad tokens. Part of the next steps will be putting in the proper requests and making sure that the information presented to users is clear.
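The finer-grained model described above boils down to each endpoint declaring the specific scope it needs and the token having to carry it. A minimal sketch (the scope names are illustrative, not the set Senya is honing):

```ruby
# Raised when a token doesn't carry the scope an endpoint requires.
class InsufficientScope < StandardError; end

# Each endpoint action would declare the one scope it needs; the broad
# read/write token check becomes a per-capability check instead.
def require_scope!(token_scopes, needed)
  raise InsufficientScope, "token lacks #{needed}" unless token_scopes.include?(needed)
end

# The example from the text: an app allowed to read posts but to both
# read and write comments would hold a scope set like this.
EXAMPLE_APP_SCOPES = %w[posts:read comments:read comments:write]
```

The user-facing half of the feature is then just rendering that scope list in plain language on the consent screen, so people know exactly what they are granting.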
- All but one endpoint is integrated back into the API main branch
- Started work on the API Paging infrastructure
- Looking at the finer grained permissions for each endpoint
We’ve finally reached the milestone we’ve all been waiting for. With the completion of the Search API Endpoint the Diaspora API is now feature complete. That doesn’t mean that it’s ready for integration into the mainline branch. It also doesn’t mean that there isn’t more fundamental work that has to be done before it can be used on a production system. It does however mean that we can start working on rounding out some of the other fundamentals and make our way in that direction.
The first thing I am going to work on is the paging aspect of the API. The API spec discusses paging as a thing that endpoints may or may not do. Right now there is no paging. That’s fine for some things, like getting a list of Aspects for a user, but it is a requirement for something like getting a list of a user’s posts or your stream. For non-developers reading this, think of it as the piece that makes your “infinite scroll” work. Diaspora has implemented this in other areas, but it will have to work a bit differently for the API. We’ve already had discussions about how we want it to work, and there is a format specification for reporting it back, so it should be relatively straightforward to implement. That is what I’m working on right now. After that we’ll want to go over all of the new code with a fine-tooth comb for style and idiom consistency (beyond the automatic style checker), security reviews, etc. Lastly we’ll want to get the OpenID authentication/authorization polished up a bit. Currently the app has to be re-registered every day. That’s not going to be viable for a real user even if it is for testing.
Still, the fact we’ve reached a feature complete milestone is great news and I’m excited to be ending the weekend on that high note.
- Diaspora API is now feature complete
- Search API endpoint, unit tests, and test harness are complete
- User contacts method implemented, completing that endpoint
- Beginning work on paging infrastructure for API endpoints that need it
To follow along with status please see the Google Sheet Dashboard.
After the long-winded post a few days ago on the API Status the latest update is pretty brief but important:
- Notifications API endpoint, unit tests, and test harness are complete
- Work on the last endpoint (search) has begun.
The last couple of days have been a lot of heavy effort slogging through some increasingly complex changes to get the API going. I started with what I thought would be the relatively easy notifications endpoint; however, the deeper I went the more I realized that I either had to come up with some (for me, anyway) relatively complex queries to populate some of the return types, or I had to settle for some N+1 query behaviors. “N+1 queries” are ones where you pull the results one piece at a time. That’s fine for smaller data sets, like five or ten entries, but if you are dealing with hundreds of entries you are really thrashing your system. So I got about half way through the notifications API, put it on the shelf, and moved on to the API I was dreading the most: Photos.
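The N+1 shape can be illustrated in plain Ruby, with a hash standing in for the database and a counter standing in for the query log. A naive loop issues one lookup per record; batching collects the ids and resolves them in a single query-like call (in Rails terms, the difference between lazy per-record loads and an `includes`/joined query):

```ruby
# A toy "database" and query counters to make the query counts visible.
FAKE_DB = { 1 => "post one", 2 => "post two", 3 => "post three" }

def fetch_one(id, counter)     # stands in for one SQL query
  counter[:queries] += 1
  FAKE_DB[id]
end

def fetch_many(ids, counter)   # stands in for one batched SQL query
  counter[:queries] += 1
  FAKE_DB.values_at(*ids)
end

# N+1 shape: one query per id -- fine for 5 ids, brutal for hundreds.
def titles_n_plus_one(ids, counter)
  ids.map { |id| fetch_one(id, counter) }
end

# Batched shape: the same data in a single round trip.
def titles_batched(ids, counter)
  fetch_many(ids, counter)
end
```

Both return identical data; the only difference is how many round trips the database sees, which is exactly the trade-off the notifications endpoint was forcing.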
I was really psyching myself out about having to deal with the whole image file upload part of the Photos API and then the subsequent tie-in with the Posts API. It shouldn’t be that complicated, but these are things I had never done in Rails or with the Kotlin Fuel framework. How would they interact? How difficult would the security checks be? You get the idea. It did take several hours of figuring out what the current controller is doing and then how I wanted to refactor the more complicated operations into a service, but I got there. Once I had that I had to test the whole aspect of limited posts et cetera, which I hadn’t done as thoroughly as I had thought. Thankfully my Ruby unit tests were solid; I just had some hiccups in my test harness.
At the end of the day we have the Photos API and the Posts API working with photos perfectly, to the point where I was able to make a fully populated post, complete with an externally uploaded image. That means I’m going to jump back on the Notifications API to wrap that up, and then all that’s left is the Search API.
- Partial Progress on the Notifications API but shelved to figure out queries later
- Posts API is feature complete with full tests
- Was able to create an entirely populated post with the respective images from scratch using an external application for the first time ever in Diaspora (see this post)
- 1.5 Endpoints left to go to be feature complete
After slogging away for most of today on the Photos API, which required a lot of figuring out how things work plus a couple more tweaks before it was ready, I decided to celebrate by showing the ultimate progress report: a screenshot. What is so special about this screenshot? It shows the first post in Diaspora that has been fully made by an external application. The “external application” in this case is a test harness written in Kotlin which is designed around the API spec. This test harness first uploaded the image file, then created the post with every feature a post can have, including location, polls, and references to other users. The post was written by “user3” (for testing we might as well stick to simple names). This is a screenshot from user1’s perspective; notice that they also got the expected notification. Yes, it’s still a bit of a ways from done, but it’s a great milestone, so I’d say it’s time to celebrate for a bit before getting back to it :).
Brief update from today on the Diaspora API development progress:
- On the Users API, it turns out we probably still want to have the contacts endpoint, if only for the primary user, since the Contacts API works on a per-aspect level the way it is mapped. Whether that method shows up in the Contacts API at a different mapping or on the User itself is still TBD, but it will be a change to the spec.
- The Post Interactions API is feature complete with full tests and the completed test harness.
- Work has begun on the Notifications API. This is the first change I’ve done that will require a DB migration (adding a new GUID column to notifications), so this is going to take a bit longer to complete while I do background research.
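For readers unfamiliar with Rails migrations, a change like that might look roughly like the following. This is a hypothetical sketch, not the actual Diaspora migration; the class name, Rails version tag, and index choice are all guesses:

```ruby
# Hypothetical sketch: add a GUID column to notifications.
class AddGuidToNotifications < ActiveRecord::Migration[5.1]
  def change
    add_column :notifications, :guid, :string
    add_index  :notifications, :guid, unique: true
  end
end
```

The schema change itself is small; the background research is more about backfilling GUIDs for the existing notification rows and making sure federation and the API agree on the identifier.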
At this point it’s actually easier to list what is left to do versus what we have done (which is a huge plus):
- The only two endpoints that haven’t been touched are Photos and Search. Once these are done (along with work on Notifications) the entire API spec will have been implemented.
- Implement a new poll interaction method for answering a poll through the API
- We need to implement paging on several of the endpoints. This will be similar to how it’s done in the core controllers, but it has to differ in that the return type needs to include the next/previous pages and the corresponding format needs to honor that. The actual mechanics of the queries are pretty much the same, though, so grafting them into the existing feature-complete controllers should be relatively easy.
- Right now the OpenID integration works well enough for testing but it currently requires revalidating the app every 24 hours. This has to be tweaked to be more reasonable. There may be some refactoring in there as well.
- The Posts API Endpoint accepts any photos currently, including those that are already attached to another post. This is not consistent behavior and has to be corrected to only allow a “pending” photo to be added.
- Sweep of all of the APIs for consistency on security, service initialization (where appropriate), params parsing idioms, etc.
- Sweep through the unit tests to make sure that edge cases are covered in the same way
- Documentation updates to account for things discovered during the development (error codes added, format tweaks etc.)
It’s been two weeks since my last Diaspora API Dev Progress report but that’s not because nothing has been going on. Between the RubyConf 2018 attendance last week and this week being a holiday week there was definitely a drop off in how much development time I put into Diaspora, and therefore mostly into the API. However over that time there has been some development progress:
- All of the previous work has been successfully merged down into the main API branch.
- The Contacts API is feature complete with full tests and the completed test harness
- The Users API is feature complete with full tests and test harness with the exception of the User Contacts API method. That method was supposed to be able to return another user’s contacts if that user allowed that. However that feature no longer exists in Diaspora so I believe it is extraneous. If that’s agreed upon then this is feature complete and ready to go.
This week I should be able to apply a lot more development effort than I have been able to the past couple of weeks. Hopefully that translates into forward progress on some more endpoints. The trend seems to be that they are getting more difficult to knock out so my velocity is slowing. I guess it’s better than being stymied in the beginning.
Yesterday was the first day in several that I could commit real time to D* again. After getting back up to speed and making the status post I went back into API development and was able to make some good progress on some brand-new endpoints. The first one I worked on, which is the first that needed from-scratch coding of the main code, was the Tag Followings controller. The day before I had struggled to get Rails to make the POST for creating tags work against the spec. After talking it over and thinking about it, though, it was the spec that needed changing. In another software framework I could have just made it work, but relying on the auto-wiring in Rails brought the design flaw to light. With that simple change made, real development of the Tag Followings endpoint started yesterday.
The methodology I’m using when developing the new controllers is as follows. First, I want to get the basic infrastructure and the tests in place. That means the first phase is writing the skeleton of the controller code, the skeleton of the RSpec tests, and wiring the two together. I make sure that the routes behave the way I think they should according to the API spec without worrying about returns, etc. The skeleton of the controller should implement all routes. The skeleton of the unit tests should cover the happy path and reasonable error conditions: the user passing the wrong ID for a post they are trying to comment on, an empty new tag to follow, and so on. I then go over to the external test application and code up the corresponding code there as well. With everything running I make sure that the endpoint is reachable from the outside (which it should be), but don’t worry about returns, processing, etc. If it’s possible to set up fake returns easily I do that; otherwise I just ensure the proper methods are called.

After all of that is coded and committed, it’s off to filling in the controller method by method. For each one coded up I complete the unit tests and the external test harness interactions as well. Once that’s all done I move on to the next one. In some cases, like Tag Followings, there needs to be refactoring elsewhere, which has implications for the above flow; I usually do those pieces before coding the controller. It is at design time that it becomes apparent whether I should be using common code with another controller that may not yet exist as a Service component. If I need to make changes in other code, I check that there are unit tests which properly cover the changes I am going to make (at least as best as I can tell), write those, and then make the changes. This should minimize the possibility of disruption.
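The skeleton phase described above looks roughly like this. The endpoint, routes, base class, and spec wording are all illustrative stand-ins, not Diaspora's actual code:

```ruby
# Phase 1 controller skeleton: every route present, no real logic yet.
module Api
  module V1
    class TagFollowingsController < Api::V1::BaseController
      def index;   head :ok end
      def create;  head :ok end
      def destroy; head :ok end
    end
  end
end

# Phase 1 spec skeleton: pending examples naming the happy path and the
# reasonable error conditions, to be filled in method by method.
describe Api::V1::TagFollowingsController do
  describe "#create" do
    it "follows a valid new tag"
    it "returns an appropriate error for an empty tag name"
  end

  describe "#destroy" do
    it "unfollows a followed tag"
    it "returns an appropriate error for a tag the user does not follow"
  end
end
```

RSpec treats an `it` with no block as a pending example, which is what makes this skeleton-first approach cheap: the test file documents the intended behavior before any of it exists.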
When interacting with Frank R. on the merge requests, one piece of feedback I got was that with everything compressed down to one commit it was hard to tell why I did certain things. As I code, all of that history is there, but I’ve been rebasing everything down to one commit per endpoint so that when it comes time to merge the API branch into the main develop branch the log will look something like: Post API endpoint complete, Comments API endpoint complete, etc. To get around this I’m trying a new flow. When I think something is ready to be merged I’m opening a Work in Progress (WIP) Pull Request (PR). That PR has the raw commit history and “WIP” at the start of the title. After a review and a thumbs up I’ll rebase it down to one commit and then submit the final one for integration. By the time the WIP is done the code is feature complete and should be ready to be merged, so I’m counting WIP PRs as the threshold for saying something is feature complete.
With all that said the three new endpoints that were feature complete as of yesterday are: Tag Followings, Aspects, and Reshares.
After a week of distractions I finally have a new update on the progress. We’ve successfully merged all the work done to date into the one main API branch and are now working on new features moving forward. The first feature completed with full tests and test harness interaction is the ability to manage and work with the user’s followed tags. So we have the full post lifecycle from before, and now tags are done, though not yet merged into the main branch.
The merging of the various side branches into the main branch is coming along. Because this isn’t being done as a primary job there is an expected delay between the pull request (PR) being generated and the branch being merged in. This is giving me the opportunity to work on other Diaspora features, though. The process is going along much faster than I expected, which is good. At this point we have merged the Likes, Comments, and Posts endpoints together. The PR on the Posts endpoint is now queued up; however, all of those changes exist in one branch. That means I was able to perform a full post lifecycle test using the test harness: an external application talking through the API and doing the following for a user:
- Creating a post
- Querying for the post and printing out its data
- Adding a comment to the post
- Liking the post
- Printing out the comments and who liked the post
- Deleting their comment on a post
- Unliking a post
- Deleting a post
This is a very important step. Follow additional progress on the API Progress Google Sheet.
It’s been a few days since I’ve been able to put real time into Diaspora development, but I’m back today. Being back home from travel also means I can finally get past the blockers on the other branches. I’ve actually gotten all of the branches I had been developing on to feature-complete status, with full tests and the test harness fully coded against them. That means that through the API one can complete the entire Post, Comment, Like, etc. lifecycle for posts with all data types (regular, photos, polls, location, etc.). Conversations are also feature complete with a full test harness. Streams are complete as well, although I haven’t tested with sufficient post volumes to exercise paging behavior. Now comes the trick of working off the tech debt of getting them merged together into the API branch. Hopefully that’ll happen in the next day or two. I’m going to spend some time doing other Diaspora work as I go through those pieces as well. As always, follow the progress on the API Progress Google Sheet. After the merge I’ll be moving on to the Tags endpoint, the first endpoint that is a full from-scratch development for me.
- Fully feature complete endpoints with full external test harness interaction completed are: Comments, Conversations, Likes, Posts, and Streams (except for paging behavior).
- Ready for merging of the side branches into the main API branch
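The paging behavior I still need to exercise boils down to the client repeatedly asking for the next page until the stream runs dry. Here is a minimal Ruby sketch of that loop against an in-memory stream; the `max_id`/`limit` parameter names are my assumption for illustration, and the draft spec’s actual paging scheme may differ:

```ruby
# Simulate one stream page: newest-first posts older than max_id, capped at limit.
def fetch_page(posts, max_id: nil, limit: 3)
  scope = max_id ? posts.select { |p| p[:id] < max_id } : posts
  scope.sort_by { |p| -p[:id] }.first(limit)
end

# Walk the whole stream page by page, the way a client would.
def fetch_all(posts, limit: 3)
  out, max_id = [], nil
  loop do
    page = fetch_page(posts, max_id: max_id, limit: limit)
    break if page.empty?
    out.concat(page)
    max_id = page.last[:id] # cursor for the next request
  end
  out
end
```

Testing with "sufficient post volumes" just means seeding enough posts that this loop has to run several iterations, which is exactly what I haven’t done yet.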
Even though it was another short day on the road it was a productive day. The Conversations Endpoint’s Messages method got completed shortly after I typed up the previous day’s status message this morning. I then jumped onto the Streams API.
I’m still on the road so my contributions aren’t as great as I’d like them to be, but I did manage to make some progress on the API development. At this point the Conversations Endpoint is done minus the message listing of a conversation itself (that’s up next). The test harness is coded up against Conversations such that it can create, read, and hide/ignore them. As I finish up the Conversations Endpoint work and wrap up the Posts Endpoint work when I get back home, I will soon be leaving the world of reviewing Frank’s existing implementation while augmenting the tests, writing test harnesses, and making changes to get all of the tests to pass. I will then be entering the world of from-scratch development on the rest of the API.
While I’m on the road I’ve been hoping to get some more work in on the API. Yesterday was a bust, and I knew it would be. Today looked like it was going to be a bust too, but I actually was able to get some time in tonight thanks to some plans that were cancelled at the last minute. As I sat down to start working I realized that I hadn’t been quite as prepared to develop on the road as I thought. Before leaving I made sure my development laptop’s Ruby VM was fully configured and could compile the main code and the Kotlin test harness. I was all good to go! Except I forgot to push my work up to GitHub and GitLab. Oops. Well, that derailed continuing work on the Posts API Endpoint, but with plenty more endpoints to go I started on the Conversations Endpoint, the next most filled-in one to start from.
I did make a good amount of progress fleshing out the unit tests and making code changes so the requests and returns on the Create method correspond to the specification. It was at that point I realized I hadn’t tested my setup thoroughly enough. I didn’t have a registered application in my OpenID setup on this dev instance. I also didn’t have the configurations I used when I set it up on my main development machine. After some fumbling around I did manage to get it registered so I could start testing the external test harness against the endpoint. After some final code tweaks I got that up and running and now have the test harness generating new conversations between two users! On to the rest of the Conversations API tomorrow!
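For a feel of what "generating new conversations between two users" involves, here is a hedged sketch of the kind of JSON body the harness sends to the Conversations create method. The field names follow the draft spec’s general style but should be treated as illustrative, not authoritative:

```ruby
require "json"

# Build a conversation-creation request body between two users.
# recipients/subject/body are assumed field names for illustration.
def conversation_payload(recipients:, subject:, body:)
  { recipients: recipients, subject: subject, body: body }
end

payload = conversation_payload(
  recipients: ["alice@pod.example", "bob@pod.example"], # placeholder handles
  subject:    "API test conversation",
  body:       "Created by the external test harness"
)
json = JSON.generate(payload)
```

The harness then POSTs that body to the Conversations endpoint with the user’s access token and reads the created conversation back to verify it.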
I’m still making good albeit slow progress on the Posts Endpoint. While the Posts Endpoint doesn’t have a lot of methods, the complexity of the send and return data is far greater than the other endpoints I’ve done so far. Posts have more than just text. They can have polls, geolocation data, mentions, aspects management, and photos. Yet posts are the core of the whole system. They are the digital elements we interact with the most, so progress on this endpoint is crucial. I’m pleased to say that at this point I’ve made enough progress with the unit tests and the test harness that I have been able to have an external program complete the full lifecycle of posting: create a post, read a post, comment on a post, and like a post. I’m pretty stoked about that! While I have the full complement of all post data available on the GET method tested, I still have to create the test harness methods around pushing posts with ancillary data (location, polls, mentions, photos), and need to write the unit tests for photos as well. The Photos endpoint for uploading photos during a real post creation process is a whole other matter, but we’ll get to it soon enough!
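To make the "complexity of the send data" concrete, here is a sketch of what a fully filled-in post creation body might look like: text plus the ancillary data mentioned above. Every field name here is an assumption for illustration purposes; the draft API spec is the real authority on the schema:

```ruby
require "json"

# A "kitchen sink" post body: text, visibility/aspects, location, a poll,
# and references to previously uploaded photos. Field names are illustrative.
post_body = {
  body:     "Hello from the API test harness!",
  public:   false,
  aspects:  ["friends"],                                  # aspect management
  location: { address: "Somewhere", lat: "0.0", lng: "0.0" },
  poll: {
    question:     "Does the endpoint work?",
    poll_answers: ["Yes", "Definitely"]
  },
  photos: ["photo-guid-1"]  # GUIDs of photos uploaded beforehand
}

json = JSON.generate(post_body)
```

Each of those optional sections (location, poll, photos) is a separate happy-path and failure-path case to cover in both the unit tests and the harness, which is a big part of why this endpoint is slow going.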
Today I didn’t get as much progress as I had hoped on the API, but important work was still done. Yesterday I discovered that something was probably off in the way the repository rebasing was done when I did it about a week ago. Today I confirmed it. Working with Benjamin Neff (SuperTux) I was able to figure out a path forward for correcting the problem. While the git commands are pretty straightforward, me being comfortable that I’ve done it correctly is another matter, so I did the process three times in a row. Each time I looked at the corresponding git log afterward and did a three-way diff of the API branch head before the new rebase, the API branch head after the rebase, and the main Diaspora develop branch. I may end up doing it a fourth time (or reconfirming this last time anyway) before doing a final push after talking with Frank about it.
After getting past that I spent the other half of the time making actual progress on development. Thanks to Dennis Schubert’s (DenSchub) efforts we were able to make some progress on some API questions I had. After that I made changes to the respective implementations to make them consistent. Then I went back to the Posts Endpoint testing. I completed the full GET happy-path testing for simple and fully filled-in posts (text, photos, polls, mentions, and location). I now have to add failure-path testing on the GET, plus the corresponding test harness methods, to complete that and move on to posting and deleting Posts.
Another day, another progress report on the state of the Diaspora API development. I had hoped by now that I’d be picking up a little more speed, but I always underestimate how painstaking working on high-coverage unit tests is. If I were doing a whack-it-together MVP startup-mode app I would still put automated tests around it for my own sanity, but since things would change, or maybe even get thrown away entirely, in relatively short order, there’d be no need to go gnat’s-ass down into the details. That’s not the case with the API. Yes, the API is technically in a draft mode, but it always looked like a really good draft. The more I code against it and use it the more I believe that’s true. Yes, my development speed is increasing as I become more familiar with all the technologies and get past some more technical hurdles, but it might take the better part of a man-month to finish this up (which is maybe a man-week more than I originally eyeballed).
The progress, though, has been steady. I had a hiccup late last night with my test harness. The Fuel HTTP library I’m using in Kotlin pushed a new release that requires the 1.30 version of Kotlin, which apparently is harder to come by than I thought. Manually setting the version fixed it all, but not until after I spent half an hour fumbling around with it before giving up. Today was the deep dive into the Comments endpoint. As was the case with the previous Likes endpoint, Frank’s previous work left a very solid base. Fleshing out the tests for some different errant behaviors, testing error messages as well as codes, and finding problems with the interactions once the test harness talks to it over HTTP were the usual gremlins to squash. Still, with only two more mostly fleshed-out endpoints left to work from in Frank’s code base, I have a feeling that the development pace will be slowing down. Maybe I’ll have gained sufficient efficiencies in my coding speed on all of these to make up some of that difference.
Along with the above gremlins now that it’s being interacted with I am seeing some potential small grained details that need to be discussed about the API. That’s all tracked in the issue tracker on the API documentation page though. Again, this is solid work by the team putting the API together and Frank’s initial code base that I’m starting from.
In summary progress for the day:
- Comments API Endpoint is finished and ready for pull request
- Test harness example of interacting with the Comments API is completed
- Some Issues were submitted to discuss minor changes to the status reporting back from the REST services on things like what happens when a Comment ID doesn’t match the Post ID that the REST endpoint was called with.
- Some small documentation touch ups to address navigation
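The status-reporting question in that third bullet is easy to sketch. What should the endpoint return when the comment ID in the URL doesn’t actually belong to the post ID in the URL? The 404-versus-success split below is illustrative of the kind of behavior being discussed in the issues, not the settled answer:

```ruby
# Minimal model of a comment tied to a post.
Comment = Struct.new(:id, :post_id)

# Decide the HTTP status for DELETE /posts/:post_id/comments/:comment_id.
# Returning 404 for a post/comment mismatch is one candidate behavior.
def delete_comment_status(comments, post_id:, comment_id:)
  comment = comments.find { |c| c.id == comment_id }
  return 404 unless comment                     # no such comment at all
  return 404 unless comment.post_id == post_id  # comment exists, wrong post
  204                                           # deleted; no response body
end
```

The subtlety is that "comment exists but under a different post" could arguably be a 404, a 409, or a validation error, and that’s exactly the sort of small-grained decision the submitted issues are meant to pin down.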
Being in the early phases of getting the implementation started, it was inevitable I would encounter a little extra inertia to overcome. Part of that is my own doing, but all of it is important for having confidence in what I’m developing. The easiest part was filling out the API Implementation Stoplight chart so everyone, including me, can track what is going on with the development. Then it was on to a fork in the road of sorts: do I want to start an external test harness now, or wait until more is implemented? I decided on the former.
While I made progress with a few hours of Diaspora API Dev yesterday it wasn’t until today that I finished my first code change towards the API: completing the Likes Endpoint.
Yep, two Diaspora API dev reports on one day. After taking a break for dinner and just watching some TV I got back to figuring out how to properly interface with the authentication and API from an external client. I was re-reading the OpenID spec, watching some videos, reading some presentations, et cetera. If I’m going to be working on the API this is something I definitely need to be deep diving into a lot more. My initial order of business however was just getting it working.
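The very first step of that OpenID Connect / OAuth2 interaction is the client building the authorization URL the user gets sent to in order to grant the application permissions. A minimal Ruby sketch of that step follows; the pod host, client ID, redirect URI, and authorization path are all placeholders of my own, not the pod’s actual configuration:

```ruby
require "uri"
require "securerandom"

# Build the OAuth2/OpenID Connect authorization request URL.
# The "/api/openid_connect/authorize" path is an assumed placeholder.
def authorization_url(pod:, client_id:, redirect_uri:, scopes:)
  params = {
    client_id:     client_id,
    redirect_uri:  redirect_uri,
    response_type: "code",               # authorization code flow
    scope:         scopes.join(" "),     # e.g. "openid profile"
    state:         SecureRandom.hex(16)  # CSRF protection, checked on return
  }
  "https://#{pod}/api/openid_connect/authorize?#{URI.encode_www_form(params)}"
end

url = authorization_url(
  pod: "pod.example", client_id: "abc123",
  redirect_uri: "http://localhost:8080/cb", scopes: %w[openid profile]
)
```

The user approves (or denies) the requested scopes at that URL, and the pod redirects back to `redirect_uri` with a code that the client exchanges for tokens. That exchange is the part I was deep-diving on.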
I’m only a few hours into getting fully going on the Diaspora API development project. I had been pre-flying that whole experience earlier last week by studying the existing code base, familiarizing myself with the discussion threads et cetera. Over the last couple of days I’ve been trying to focus more on moving the ball forward as well. Before really doing that though there is still a little ground work to do.
I can’t express how happy I am that I have the privilege of having a combination of time, ability, desire, and energy to contribute substantially to the Diaspora project right now. Ever since I started using it in the spring it’s something I’ve wanted to be able to help with. I certainly got my feet wet back then on some tweaks to the Twitter and Facebook interaction code, the latter of which is permanently broken thanks to Facebook’s new API spec. Now that I’ve gotten up to speed on Ruby, Rails, and the Diaspora code base, I’m looking forward to helping tackle a much larger and persistently requested piece of code: a Diaspora API.
I’ve mostly been “microblogging” updates on Diaspora recently. That’s a fancy way of saying I haven’t been doing any in-depth writing but instead just making quick ad hoc posts on social media. As I am now ramping up my development on open source projects, primarily Diaspora by the looks of it, I’m hoping to start posting here more frequently capturing new lessons learned, observations from my exploration of these newer languages and code bases, and just getting more writing in.
Over the summer I actually spent a good deal of time exploring different cross-platform development frameworks of the .NET and C++ variety. That was intended for work on a very niche open source project idea I had conjured up around my classic computing hobby. By the time I made enough progress to the point where I could potentially be productive, although I still want to explore wxWidgets a bit more, the bug to help on alternative social media platforms bit again.
Sorry for the absence. I hope to be a regular poster again for the half dozen of you that actually read this!