Some people just can’t leave well enough alone, I swear! When last I left the Spring Boot world everything was going great. Project bootstrapping was pretty straightforward. The documentation pretty much matched the actual behaviors, and the behaviors themselves were well laid out. Today I tried to create a project from scratch. Between fighting Java version hell in the online generator, fighting Gradle dependency hell both there and in IntelliJ, and then wrestling with some new fucked up syntax for something as simple as reading in the configuration file, I have wasted two hours and gotten absolutely fucking nowhere!
It seems like just a couple of years ago that Microsoft, the evil empire of the 1990s and early 2000s, embraced open source and moved the .NET ecosystem into the open. It was a shocking event which was met with some pessimism by a community that had been bitten far too many times by Microsoft’s old mantra of “embrace, extend, extinguish” (not that they were unique in that). It’s shocking that we are four years into this process, but more shocking is how well the .NET community is functioning. This is not “open source” as code for “you can see the code but we are the developers.” Microsoft, against all my expectations, has successfully built an open source community around open source .NET. Take a look at the pull request statistics. There is a substantial community element in most of the pieces (chart below; to read more, check out Matt Warren’s blog post on this):
If you look at the time series data Warren has created it looks even more promising. That’s not to say all is well for everyone in the .NET open source world.
As a person who tried to get back into it, to the point of polishing up SharpenNG to make it work in a post-Java 7 world, I have to say that even with the improvements over the last few years the non-Windows platforms are still not first class citizens. .NET development sings under Visual Studio, which of course only runs on Windows. The old Xamarin Studio, rebranded as Visual Studio for Mac, provides a decent experience but still nothing by comparison. People on Linux, on the other hand, are out in the cold. Yes, there are the command line tools and Visual Studio Code. Those work a lot better than I expected, but you can feel how clunky that development is in comparison, and MonoDevelop seems to get worse and worse as time goes on. The next time I think about dabbling with .NET I plan to try JetBrains’ Rider. Perhaps they’ve cracked the nut. One thing I refuse to do is jump to Windows.
Related to all of that is the other elephant in the room: Microsoft doesn’t support UI development on Linux, nor has any plans to. There are open source alternatives like Avalonia and Eto.Forms. I know that Michael Dominic’s development shop was able to turn out a live geospatial cross platform app, Gryphon, using Avalonia, so serious work can be done with this. Maybe, because of that, an official blessing from Microsoft isn’t needed, especially if Rider combined with the above fits the bill. Maybe that’s the community evolving beyond Microsoft too? Still, at this stage there is a second (or in the case of Linux, third) class citizenship feel about it. It’s orders of magnitude further along than I thought they would get though, which is a promising sign.
The Cambridge Analytica debacle from earlier this year and the subsequent #deletefacebook storm brought me to the alternative social media platform Diaspora. At the time, as I wrote here, I had hoped to leave the walled gardens forever. Initially I did just that, but practicalities eroded that isolation quite a bit. In some cases, like DDG, I’m still 99% using the open alternative. In others, like YouTube, I’m mostly using the old system because I just can’t get what I need out of the alternative yet (although I try more and more every week). For much of it, though, especially on the social media side, it’s more of a mix. I’m on Diaspora as much as I’m on Facebook. I’m on Mastodon more than I’m on Twitter, but Twitter was always a small platform for me compared to Facebook. The best way to think of this blend is that I try to make Diaspora and Mastodon my primary platforms and Facebook my secondary one, with Twitter a distant third.
What that means practically is that I’m logged into Diaspora, Mastodon, and Facebook pretty much continuously throughout the day. The first places I post to are Diaspora and Mastodon. The first places I check posts are Diaspora and Mastodon. Most of my new activity is on Diaspora and Mastodon, with manual cross-posting (thanks again, Facebook, for permanently screwing up your API to prevent external posting) when I want to share the same thing on Facebook as well. Because I have just over 1000 friends on Facebook, and almost all of them are people I’ve interacted with in real life (most mere acquaintances, or people I met once at a social function or something), there is simply a larger volume of relevant and more personally resonant posts from others there. So if one were to look at my activity feeds and notifications on a given morning, you’d usually see tons of activity on Facebook and a little on Diaspora and Mastodon. Today was different.
Today the equation was reversed. Today I had more interactions to wade through on Diaspora. I had more relevant interactions to wade through at that. I had more notifications to wade through. I even got comparable engagement on my cross-posted material from late last night on all three systems. That’s the first time that’s happened since I went back to having a foot in both worlds!
Is it that I crossed a tipping point in the number of people I’m connected to on these alternative social media systems? Is it that the influx of Google+ users has caused a spike in engagement across the systems in question? I don’t know why, and this will probably remain a noteworthy exception rather than the rule moving forward. However, it can’t be a bad sign, except in one way. In the span of writing this article, a free association lasting 15 minutes, I’ve already received almost ten notifications on Diaspora. I know that the notification controls are not as fine grained on Diaspora as they are on Facebook. Needing to tackle that sort of feature request in the near future would be a great problem to have :).
I’m a bit baffled by this Motherboard article on “screwing ourselves” with the pursuit of thinness in laptops. This is of course coming out of an Apple controversy where some bad software, and a lot of tipping the scales in favor of form over function, has led to a haute-looking laptop spec turning out some pretty bad real world performance numbers. As has often been the case, once the initial outrage over an Apple stumble starts growing old, people start looking around at other vendors, and the quiet problems of the industry are shown the light of day. Apple’s MacBook Pro thermal problem is more egregious than others, but it turns out they are all essentially hobbling performance in their ultrabooks and compact laptop chassis, making the listed specs meaningless in the real world. Why are we surprised by this though?
I’m now three weeks into using non-walled-garden social media systems instead of traditional ones, specifically Diaspora over Facebook and Twitter. It has mostly been a good experience, despite some major disagreements with some of their user experience decisions and other rough edges that I hope to help fix soon as a contributor. But the thing that sets social media apart from blogging or other static publishing ecosystems is sharing and interacting with other users. Because these massive digital halls are still pretty empty, I’m just not getting my fill of that.
This pro-Swift article came across my RSS feed recently. While I don’t want to do a direct comparison of Swift versus Kotlin, since I haven’t done any Swift coding, I did think it was interesting to point out how Kotlin achieves similar efficiencies in their simple example compared to other languages like Java, the language they picked on too.
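Since I can’t reproduce their example here, a minimal sketch of the kind of conciseness win I mean, using a toy `Point` type of my own: a Kotlin data class hands you the constructor, `equals`/`hashCode`, `toString`, and `copy` that an equivalent Java POJO would need dozens of hand-written lines for.

```kotlin
// A toy value type; the equivalent Java class would need a constructor,
// getters, equals(), hashCode(), and toString() written out by hand.
data class Point(val x: Int, val y: Int)

fun main() {
    val a = Point(1, 2)
    val b = Point(1, 2)
    println(a == b)        // structural equality for free: true
    println(a.copy(y = 5)) // non-destructive update: Point(x=1, y=5)
}
```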
Over the weekend I made a bunch of progress on migrating away from the walled garden systems, and I’m happy to report substantially more progress since. This will of course be an ongoing process of refinement and testing. However, I’m currently getting a substantial portion of my needs met in enough areas that I’m prepared to start pulling the plug on Facebook, the Google ecosystem, Twitter, and so on. When I wrote about this over the weekend I had completed my hypothetical replacement of several systems. I have some updates to those elements as well. My current replacement portfolio looks as follows (summary at the very end):
As I wrote earlier this week, after the Cambridge Analytica event came to light my nagging feeling that I needed to get off the Facebook, Google, etc. platforms crossed a threshold. It was no longer something that I thought I should do but something I was going to actively do. In one week I’ve made progress in pretty much every dimension (scroll down to the bottom if you just want my list of alternatives).
I’ve had my moments in the past where Facebook pissed me off and I tried Google+. That didn’t work out too well, so I went back to Facebook after they addressed some of those problems. I’ve had moments where I was concerned about the amount of tracking Google does in searches, so I went to DuckDuckGo. That’s still my main search engine, but sometimes I need results that come out better in Google, so I go there. I also use the Google platform for my e-mail, documents, etc. The concept of them selling my data in exchange for free service has bothered me to varying degrees over the years, but seeing how greedily it was manipulated recently is really amping that up for me. The amount of information available to the highest bidder has always been a known quantity to me, but these recent stories are turning it up to eleven. It’s not just the Cambridge Analytica story. There is also the story about Facebook and other companies forcing users to turn over their keys, so to speak, so they can look at any and all of their personal data as a condition of working for them. There is the way they exploited that data in difficult discussions.
I almost never wait in huge lines for anything. I camped out once for football tickets in college. Once. I also once waited six hours for an iPhone 4 when it first came out. It was my first smartphone and I had been putting off getting one for way too long. That was it though. Yet I know people who have waited in ever-decreasing lines for each iteration of the iPhone. The shrinking lines are definitely part of the sizzle wearing off and the iPhone becoming just another smartphone. Yet even at 8 pm last night there was a line for iPhones outside our local Apple store. It didn’t wrap around the mall like in the iPhone 4 days, but the end of the first day still having a line for an iPhone 8 was pretty telling to me.
It was just a few months ago that Ubuntu announced they were killing off Unity, their main desktop option. Many people wondered whether this was part of a larger pivot towards more profitable ventures that would leave the desktop behind. I too was filled with worry about that potential outcome, but calmed myself by remembering that I was no longer locked into one vendor for my OS. In the intervening months, however, it has become clear that Ubuntu is not killing off the desktop, far from it. In fact, the strides they are taking with Ubuntu 17.10 and Ubuntu 18.04 look like they are about to produce their strongest desktop offering to date. Not having to carry the weight of a phone platform, their own desktop environment, etc. has allowed their team to focus on contributing positively to GNOME proper. I’ve had the opportunity to play around with the Ubuntu 17.10 betas and have to say that I don’t think I’d be missing anything from my current Ubuntu experience. I look forward to upgrading to 18.04 when the time comes, and to no longer worrying about whether one of my desktop baselines is going away.
On one of my classic computing Facebook groups there was a post quoting Edsger Dijkstra: “It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration.” It’s actually part of a much larger document where he condemns pretty much every higher order language of the day, IBM, the cliquish nature of the computing industry, and so on. Despite most of it being the equivalent of a Twitter rant (in fact each line is almost made for tweet-sized bites), there are some legitimate gems in there, one relevant to this topic being, “The tools we use have a profound (and devious!) influence on our thinking habits, and, therefore, on our thinking abilities.” No, I don’t agree that starting with BASIC, or any other language, permanently breaks someone forever, but if the tools we use drive our thinking, they can leave us with bad habits to unlearn. Yet has anyone tried to actually write BASIC, as in the BASIC of the 60s, 70s, and early 80s, with actual design principles? Fortunately/unfortunately, I tried a while ago, with some interesting results.
While I’m obviously becoming quite enamored with Kotlin, this is like the early dating stage for me. Everything is great when you first start dating someone, but it’s only after you’ve been with them for a while and seen their warts, which everything and everyone has, that you finally decide whether it’s the right fit.
As I wrote here yesterday, I am taking my exploration of Kotlin to the next level by looking at performance metrics using the Computer Language Benchmarks Game. As of right now I’ve completed my first two steps: getting the benchmark building and running Kotlin code, and doing a straight port of the suite (follow along at the GitLab project). This was really 90% IntelliJ auto-converting, followed by me fixing compilation errors (and submitting a nasty conversion bug that came up in one test back to JetBrains). So now onto the results! Well, actually, not so fast on that one…
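The conversion bug I filed isn’t reproduced here, but as a made-up illustration of the kind of manual fix that follows an auto-convert: Java’s unannotated types come across as platform types, and you frequently have to make nullability explicit yourself before the Kotlin compiler (or the runtime) is happy.

```kotlin
// Hypothetical post-conversion cleanup (not the actual bug I filed):
// a Java method that could return null converts to a platform type,
// so the safe fix is to declare the value nullable and handle it.
fun firstToken(line: String?): String =
    line?.trim()?.substringBefore(' ') ?: ""

fun main() {
    println(firstToken("  real 123")) // prints "real"
    println(firstToken(null))         // prints "" instead of throwing an NPE
}
```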
Like many people I got into a software development stack rut, complaining about the things I hated and why I wanted to change. The Java stack had been treating me ever increasingly well, especially with the refactoring of Spring into what it is today, but the language itself and its stagnation bothered me and had me gazing at .NET now that it’s open source. I am now officially done with that exploration and will be sticking with the JVM-based stack for the time being. This is driven by three major things: Kotlin, JavaFX, and the fully open source nature of the pieces I use.
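As a small taste of the Kotlin side of that decision (my own toy example, nothing from a real project): compiler-enforced null safety and extension functions are exactly the kind of ergonomics that stagnant Java couldn’t give me.

```kotlin
// An extension function: a helper bolted onto String without subclassing.
fun String.initials(): String =
    split(' ').filter { it.isNotBlank() }.joinToString("") { it.first().uppercase() }

fun main() {
    val name: String? = "ada lovelace"
    // The compiler forces you to acknowledge the null case before dereferencing.
    println(name?.initials() ?: "??") // prints "AL"
}
```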
I am totally loving Ars Technica’s two part series on the history of the IBM PC (Part 1, Part 2). However, there are some glaring omissions around the MS-DOS part of the story that I think they should have addressed in at least an afterword. My write-up here is based substantially on other articles, but most importantly this one from the Computer History Museum.
There are certain things in life that you take for granted but didn’t know you did until you didn’t have them anymore. Swagger is definitely one of them.
As the whole “what happens to Unity” thing unfolds, I decided to redouble my efforts at trying different distros. I’m trying everything from trailing edge (latest Debian) to bleeding edge (Solus). As luck would have it, it was time for me to refresh one of my development VMs, so I decided to jump that one from Mint to Solus to give it a real world spin. My first impression is that it is a really interesting distro and one I’ll keep playing with, but there is one not-so-tiny problem that hopefully they will grow out of.
As much as I’ve never been a fan of Unity, I’ve learned not to hate it on my host OS (and even in some of my VMs). Sure, my go-to desktops of late are mostly MATE distros or Cinnamon, but Unity hasn’t been completely unacceptable. With Ubuntu’s recent announcement of the demise of Unity, and people openly pontificating on whether this means Ubuntu is abandoning the desktop or looking to sell to someone like Microsoft who would then kill the desktop effort, I started to analyze what this means to me as a Linux desktop user. Is this the end of the road for that journey, and therefore back to Mac or, god forbid, Windows?
Back in November I started trying to mess around with .NET again, with the twist that I refused to become Windows-bound to do it. After some time experimenting, the holidays got in the way, then work got in the way, and as usual life got in the way of hobbies. Today I needed to work out some standard C# code samples for interacting with REST services I had written in Java. I could have spent two hours installing Visual Studio in the virgin Windows 10 VM on my laptop, or I could fire up a new Linux VM and give cross platform .NET another try.
I’m being impatient, and it’s my own fault. I started the Linux Craptop experiment to see how much mileage I could get out of a decade-old laptop running a lean(ish) Linux. It actually became my only home laptop while my 6+ year old (I think) MacBook Air was getting its battery replaced. I was going to “suffer” through it for just a few days and then the MacBook would hold me over for at least another couple of years. At this point, however, I’m really champing at the bit to retire that Mac and go Linux full bore.
At the beginning of January I decided to try my hand at using a ten year old laptop running Linux Mint MATE as my daily at-home machine. While there is certainly some cruft associated with using such an old machine, for the most part the experience has been perfectly fine. In fact, I’m using it right now to write this article. I wouldn’t recommend running out and buying one solely for the purpose, but the fact remains that Linux Mint MATE, and probably Ubuntu MATE as well, provide a great experience for an average user’s workload on underpowered hardware.
I’ve been a huge convert to Linux Mint and Ubuntu for several years now. In the last year I went so far as to be running Linux as my bare metal OS on both my work laptop and home desktop. I’ve never had an update for Mint or Ubuntu get so borked up that the UI refused to function properly…until now.
Over on Slashdot there is an article about an IP saga of sorts between Wix and the makers of WordPress. While the Slashdot title accuses Wix of “stealing” code, not even WordPress’s Matt Mullenweg accused them of that in his original post. What happened is pretty simple. The Wix engineers decided to wrap a WordPress rich text control so it would work well with React Native. They released that wrapper under an MIT license and then dutifully used it in their proprietary iOS application. The WordPress control they wrapped, however, is licensed under the GPL, and that is where the problem lies.
With the release of the latest MacBook Pros, Apple has finally returned to some semblance of modernity in their laptop line. They have left their desktop line to languish for at least another six months though. That makes my recent purchase of a Hackintosh rig (which I admittedly still happily run Linux on, without even considering the need to go back to OS X) seem like an even better idea. Even with my embrace of Linux I still would have kicked myself if a dream iMac came forward, but thankfully nope! Which brings me back to the latest laptops. They are obviously a welcome upgrade to a laptop line that time forgot. They have some very neat features. They also have the usual Apple Tax: in this case about $400 over a comparably specced Dell, and about the same dollar price as a much better System 76 laptop. But Apple has far better battery life than either of those two machines ever would.
Is it a great upgrade? Yes. Is it worth the money? Probably/maybe/depends. Is it something I’m dying to buy? No. At this point none of the Apple laptop offerings are drawing me in. My MacBook Air mostly gets the job done, even if it’s starting to show its five years of age. But the processor isn’t the thing that’s killing me; it’s mostly the memory limit when I try to run VMs. So spending $1500-$1800 just to fix that problem seems outrageous. At this point I’m going to go with my original plan: play around with my seven year old Dell running Linux and then give a System 76 laptop a whirl.
I’ve been prepping for a potential jump from iPhone to Android for my personal phone. I’m getting sick of the declining quality of iOS and its apps. I’m getting sick of vendor lock-in. My problem with vendor lock-in has a lot to do with the feeling that I’m not in control of my data. Based on what I’ve been reading, including this article on TechCrunch, it seems the problem is even worse on Android with the Google platform. I could already see it with the latest service offerings Google has been pitching with the new Pixel. As I played around with the Google apps they seemed at least as wonky, if not more so, and on top of that they seem to be far more invasive about how they deal with your data. They also do a lot more of the insipid “opting out” versus “opting in” nonsense that I see on iOS. While I may be buying into a supposedly more open platform, would I be doing it at the expense of my own data control? Do I need to look beyond Android to Ubuntu phones or something? SMH.