I have a collection of solo and weird RPGs. One thing I learned is that, just like books, buying TTRPGs and playing them are totally separate hobbies. I have way more games than I can actually play, but here is a list of some of the ones I like the most at the moment.
A solo roleplaying game of loss, memory, and vampires.
In Thousand Year Old Vampire you chronicle the many centuries of a vampire’s existence, beginning with the loss of mortality and ending with inevitable destruction. Prompt-driven play and simple resource tracking provide easy rules for exploring your character’s human failings, villainous acts, and surprising victories. Expect gut-churning decisions and irreconcilable acts.
The game mechanics are simple and intuitive. Play progresses semi-randomly through a book of Prompts which let you explore your vampire’s wants and needs, resolve problems, and chart the decline into senescence. Play can happen entirely within the character sheet or can become a journaling activity; both work equally well.
That's straight from their page.
I love how you write your journal and then destroy pages of it as your own memories are lost through the ages.
A forest of beasts needs your help!
Play as a tiny animal from the Guild of Poulticepounders, making remedies for local beasts as you go on grand journeys with your Familiar sidekick.
Explore Scotland’s varied landscapes as seasons change, reagents bloom and fade, and the beasts of the Bristley Woods adapt to survive.
Go on adventures spanning all four seasons, across bogs, lochs, mountains, ancient titan ruins and devious behemoth barrows.
A quest game that gets you exploring woodlands, always conscious of your resources, the environment, and those you can help. It is a hard game but soooo cute.
Four Against Darkness is a dungeon delving game. You build four characters and put them through dungeons until you find the final boss or you’re forced to escape. The mechanics are simple and easy to learn and just reading the core book and maybe watching a couple videos on YouTube is enough to get you started. I really recommend checking the tutorial videos by Livi and the dungeon playthrough by the Tabletop Engineer.
I posted my first adventure here
The first solo game I played. Super easy: all you require is a pack of playing cards and some dice. It is a simple journaling game you can learn in ten minutes, and it is a delight to play. There are so many hacks built upon its simple premise; you can get a bundle of them called Alone On A Journey on itch. There is a version for two players called Together Among The Stars.
The play is basically journaling about your journey on strange planets and environments.
An immersive RPG played entirely via text message.
Alice is Missing is a silent role-playing game about the disappearance of Alice Briarwood, a high school junior in the small town of Silent Falls.
The game is played live and without verbal communication. Players inhabit their character for the entirety of the 90-minute play session, and instead of speaking, send text messages back and forth to the other characters in a group chat, as well as individually, as though they aren’t in the same place together.
Hauntingly beautiful, deeply personal, and highly innovative, Alice is Missing puts a strong focus on the emotional engagement between players, immersing them in a tense, dramatic mystery that unfolds organically through the text messages they send to one another.
This is a deep and hard game. One to be played with a group of friends you trust.
This is a rant prompted by the 404 Media article: Tumblr and Wordpress to Sell Users’ Data to Train AI Tools.
"Tumblr and Wordpress are preparing to sell user data to Midjourney and OpenAI, according to a source with internal knowledge about the deals and internal documentation referring to the deals." — from the linked article.
If true, this is a disgusting move by Automattic. Blogs are the last bastion of a user-driven Web. By now, we have all realised that social networks will both mine and influence users for a profit, and that the real product they sell is not convenience or connection, but user data and attention, handed to whoever is able to pay. Blogs, on the other hand, have been the opposite: they’ve always been about connections and relationships, fostering communities and grassroots content over algorithmic anxiety-inducing feeds. Many people flocked to Automattic-owned platforms such as Wordpress and Tumblr in an attempt to escape social networks and own their own content. Automattic promised a home on the Web for its users, and now it will apparently simply sell them out.
This is not an opt-in scheme, there is no revenue sharing with users, there is only greed.
I don't think we can trust these companies anymore. Clearly, all of them will simply sell your content to train A.I. models and to hell with your ownership.
It is crucial that we pay more attention to movements like the IndieWeb. As Web users, we need to own our own platforms. For a while, it seemed that hosting a blog on Wordpress was enough; now it is clear that we need more than that.
If you have a blog hosted on Wordpress dot com or Tumblr, maybe it is time you make yourself heard by moving your data elsewhere. IndieWeb's own Getting Started Guide will help you out. Move away from those leeches.
#blogging
In the mentioned post, the author ends up hosting a vCard file in a web-accessible Github repository and then creating a QR Code with the URL to the file. A simple and elegant solution.
I used that in the past too, but I noticed an edge case that hurt me a lot. Some conventions and meetups have horrible internet connections. It is not unusual for a conference centre, especially one full of developers carrying multiple connected devices, to have its own network go down. Or even worse, you’re so deep inside a building that you don’t have mobile reception.
The lack of a reliable connection during events is something that happened more often ten years ago, but it is still a problem. You don’t need a network connection to share contacts using QR Codes.
Yes, you read that right: you can use a different format instead of a URL to share contact details. That format is MeCard. It is similar to the vCard format but much more concise, wasting fewer bytes to make QR encoding easier.
So while a vCard might look like:
BEGIN:VCARD
VERSION:4.0
FN:Simon Perreault
N:Perreault;Simon;;;ing. jr,M.Sc.
BDAY:--0203
GENDER:M
EMAIL;TYPE=work:simon.perreault@viagenie.ca
END:VCARD
A MeCard looks like:
MECARD:N:Doe,John;TEL:13035551212;EMAIL:john.doe@example.com;;
Simply QR-encoding the MeCard string just works. The decoding app will see it as a valid contact card. There is no network request and no need to upload a file to a web server.
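To make that concrete, here is a minimal sketch of generating such a QR code in the browser. I’m assuming the node-qrcode package here; the package choice and the element id are my own illustrations, not from the original post:

const QRCode = require("qrcode")

// The whole contact lives inside the QR code payload itself, so scanning
// it works with no network connection at all.
const mecard = "MECARD:N:Doe,John;TEL:13035551212;EMAIL:john.doe@example.com;;"

// Render the MeCard string into a canvas; scanner apps will decode the
// payload directly as a contact card.
QRCode.toCanvas(document.getElementById("qr-target"), mecard, (err) => {
  if (err) console.error(err)
})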
I spent many years travelling around doing tech talks all over the world and eventually developed a throwaway app to help me engage with people after the talks. I rewrote that app in lots of different languages; it is one of the first things I reimplement when learning a new language or framework.
It had two main features. One was to allow me to quickly fire off an email with the topics I talked about and links to more reference material. That didn’t involve a network request either. It would simply load markdown files into a very long mailto: URL and open the default mail client pre-filled with the information. I would just need to press send, and the email would sit in my outbox until a stable connection was available. It also made exchanging emails with people easier.
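A sketch of that mailto: trick (the contents here are illustrative):

// Build a long mailto: URL from the talk notes; opening it hands a
// pre-filled email to the default mail client, no network needed yet.
const subject = "Topics and links from my talk"
const markdownNotes = "# My talk\n\n- topic one\n- https://example.com/slides"
window.location.href =
  "mailto:?subject=" + encodeURIComponent(subject) +
  "&body=" + encodeURIComponent(markdownNotes)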
The other feature was just contact sharing, which used a MeCard generated from data stored in localStorage.
This app was developed for FirefoxOS, so it would be installed as an offline-first app on the phone.
The source for that app is online. Be aware that this is something I threw together for my own personal usage. This is not production code such as you’d see on a proper job. It still does the job, though.
Less than a five-minute walk from Haymarket station, on Dalry Road. This is my local coffee shop. It has a cozy, down-to-earth vibe with comfy armchairs and a no-nonsense attitude to their coffee. Their blends of Arabica and Robusta pack three times more caffeine than the usual coffee you get elsewhere, hence the throat punch name. The staff is super friendly and lovable. There are two resident dogs, Bear and Buck, that are always at the shop and are the heart and soul of the community that formed inside it. Come for the coffee, stay for the dogs. They offer a small selection of pastries and brownies. Oh, and their t-shirts rock, I have a bunch of them. Closes at 15h00 or 15h30 depending on the day.
Good prices, good coffee, doggos. It is my favourite coffee shop :-)
Also on Dalry Road, just three doors down from Throatpunch Coffee. Kind of a bakery disguised as a coffee shop. They have amazing bread, pastries, and sandwiches. Seriously good stuff. Their coffee is good as well. The staff is super cool too. Closes at 16h00.
This is a vegan shop, also on Dalry Road, just a bit further in the direction of the Co-op. They have very good food and coffee. Similarly to the other two, the staff is great and the shop is cozy. I like how people go there to chat and read; it has a kind of social vibe. Love their granola bowl. Closes at 18h00.
My second favourite coffee shop in Edinburgh. I love the vibes there and the fact that it is in the Meadows. It is my favourite place to read and write when the weather allows it. They have really good food there. The staff and regulars are super friendly and kind. Closes at 17h00.
Located on Morrison Street, this small coffee shop has amazing food. I love their cakes and breakfast options. Staff is super friendly. Closes at 15h00.
Very cozy place. Lovely baked goods and wonderful tea. It is my favourite place to have a tea and read for a while. Closes at 18h00.
Up in Bruntsfield, this is a great shop with amazing coffee (love their light roasts). Cozy vibes all around. You can’t really get phone reception or wifi there, so it is a good place to disconnect, in my opinion. I’m quite partial to their pour-over coffee. Closes at 18h00.
A bit on the posh side, this Tollcross coffee shop has the best pour-over I have had in Edinburgh, and they have lots of different varieties to choose from. It is on the pricey side, but we deserve to treat ourselves every now and then, right? Seriously, go there and have a delicious pour-over, just do it. If it is a hot day, get their Cold Brew. You won’t regret it.
This very small coffee shop on Dundee Street is a little gem. Located just next to Grow Urban, which is also very good, it is easy to pass it by without noticing. Doing that is a disservice to your taste buds, for they have very good coffee and delicious treats as well, from a diverse selection of flapjack flavours to my favourite childhood treat, the Portuguese cream bun, which in Brazil we call sonho (dream). They don’t have seating space inside, it is more of a takeaway place, but the bench by their front door is very nice on a sunny day. Their staff is also super friendly. Closes at 15h00.
Another little gem in Tollcross, this cozy shop with bookish vibes and a real fireplace is wonderful. Not only do they serve good coffee, they have amazing food. Pay special attention to their toasties. Last time I was there, there were four cheeses to choose from. Really friendly staff, and it closes at 22h00. Yes, you read that right, it closes late in the evening! Really good place for an afternoon or night reading session.
I’m so happy right now. My interview with Angeline Trevena from the Unstoppable Authors podcast is up!
In this week’s episode, Angeline chats with non-fiction author Andre Garzia about his journey towards the publication of his fiction debut.
They discuss his very unusual route into publishing, the difficulties of writing in a second language, and his fearless approach to things that send other authors into a panic.
My party survived their first dungeon in the Four Against Darkness solo dungeon-delving game.
I almost thought they were all gonna die in the second chamber, where they faced a level 5 Minotaur (they are all level 1) and Sebastian miscast the sleep spell.
The City Watch Rejects guild is composed of Clara (warrior), Inah (cleric), Zix (rogue), and Sebastian (wizard) who are now licking their wounds and counting their loot.
Some random highlights from the play.
Sebastian is hoarding scrolls…
Four Against Darkness is a solo TTRPG. On the spectrum of solo games, it is less of a journaling/roleplay game and more of a roguelike/dungeon-delving mechanics game. You make four characters and, through the clever use of flowcharts and random tables, you move through a randomly-generated dungeon trying to survive the ordeal and loot the place.
It is a very easy game. The mechanics and workflow are really easy to learn. Surprisingly, what adds some friction is the layout of the corebook. Things that should be close together are separated by dozens of pages, and playing is an exercise in flipping back and forth. Luckily, you can photocopy or print most of the tables and have a better experience.
This was to be their first mission, their launchpad for a new life with purpose.
The Goblin Den was an adventure in the City Watch Rejects guild campaign, played using the rules from Four Against Darkness.
The dungeon entrance was a long antechamber with three doors. There were noises coming from the rightmost door, and that is where they went. Having surprised the four orcs on the other side, they moved through the room easily. This adventure was looking quite easy.
Until, in the fourth room, they encountered a minotaur and panicked.
—“Cast something!”, Clara shouted.
Sebastian was just too nervous and the spell fizzled. The rest of the party tried their best but they couldn’t hit the monster. It was after the minotaur hit Clara that they all realised this was for real, that either they got their shit together or they were not going to leave that room.
Surviving the minotaur took a lot of struggle. Clara and Inah attacked the beast using their coordinated moves while Zix flanked and attacked from the back. Sebastian mostly stayed out of the way but attempted an attack or two.
In retrospect, moving north after the minotaur was a mistake. The path led to a dead end filled with goblins and goblin swarmlings. Those vermin are a horrible nuisance for the city watch, and for inexperienced adventurers like our heroes, a very dangerous problem.
Backtracking to the minotaur lair, they decided to pursue another path east. Nothing prepares you to meet an angry fungi folk for the first time.
When they finally finished with the fungi, Sebastian just exhaled and decided to move to the next room.
—“Let’s get this done quickly.”
A dart flew by as soon as he stepped through the door. Luckily Sebastian is a very short mage and the dart was aimed at the height of someone who was not so vertically challenged.
—“Whatever happens, DO NOT TOUCH THAT STATUE!” Inah said when they moved to the next corridor. In the center of the corridor there was a huge statue of a fungi princess with her arms extended to the sky.
They carefully moved around it.
Guttural sounds and screams lay behind the door at the end of the corridor. It was a hard door to open; it required both Inah and Clara forcing it, as if it was barricaded.
Eight goblins were mounting a defensive position with both doors barred. Dead fungi folk were everywhere. Apparently those two don’t get along, who knew.
Fungi folk reinforcements came from the northern door.
—“I’ll never eat mushroom stew again” said Zix while flanking the fungi folk.
Fighting goblins and fungi was not a silent business and their screams attracted the attention of something big.
They heard it coming from the corridor in the east, a big ogre.
They were not skilled enough to fight an ogre. Clara held her sword so tight that her nails dug into her palm. Inah started praying. Zix looked for a way out. Sebastian read a scroll of sleep, and the ogre fell...
What they did to the sleeping ogre was not honourable or pretty, but it was better than dying at the hands of an awake ogre.
Backtracking through the dungeon was not easy, especially when they encountered zombie goblins (courtesy of the fungi priests, for sure), but they persevered and survived.
Pocketfold is a remix of Pocketmod, a foldable design that allows you to create cute booklets. Pocketfold is, in my opinion, especially suitable for creating character sheets, as you can design it in a way that makes accessing relevant information easy while still being pocket-friendly.
The front of the character sheet gives you basic information about the character. It makes it easy to flip between multiple sheets looking for the correct character, like a book cover. It lists name, background, and special notes.
Opening the first fold, you can see the most used stats and data. Most of the player actions will use only what is in the first fold and I expect these sheets to remain unfolded like that for most of the game.
The left-side flap gives access to the inventory. This photo is from an unfinished version, the current version has twelve lines in it since you’re only allowed to keep a maximum of twelve items.
The right-side flap has a rules summary that is handy for opposed and unopposed rolls. The Pocketfold design lets you see the summary while still being able to see the most important stats for dice roll actions.
Unfolding the whole sheet exposes its backside which is divided into two sections, one for campaign notes and another for drawing passport stamps from the spheres the player visits.
I haven’t used desktop-publishing software in ages, even though I have had a license for Affinity Publisher since it was released. I’ve toyed with it but never released anything. At the moment, I’m not confident in my Affinity Publisher skills, so for this character sheet I used a much simpler program that I actually know how to use: Swift Publisher 5.
I really like that software; it is very easy to use. It offered all the features I needed to make this character sheet, and even though there aren’t many users, I decided to share the project file as one of the downloadable assets on the character sheet’s Itch page in case someone wants to remix it.
It is hard for me to finish things. I’m very good at starting things but I guess I get terrified of releasing them. Sharing unfinished work is easy for me because it is unfinished, declaring something finished and shipping it is another story.
Even though I have six books published, I’m still afraid of reviews. People are very cruel and sometimes it escapes them that stuff you ship is usually the best you can do but not necessarily the best it could have been. Being on the other side of reviewers without empathy saps a ton of my energy.
Most things I’ve shipped in my life are related to software development. That is my comfort zone—even though those reviews and comments still hurt—and I want to branch out of it. I want to release more RPG and creative writing products, I just need to get into the habit of finishing them.
Sherlock Holmes, Poirot, and many other detective tales are mostly independent stories with very little in terms of consequences and causal chains carrying over from one story to another. Harry Potter books are complete stories, but they have a strong causal chain linking them with a lot of consequences carrying over from one book to the next.
For most episodic series, you can watch or read them out of order. Swapping one episode of Star Trek TOS with another will cause no harm to your enjoyment of the story, same thing with most police procedurals.
While I enjoy long stories told over several books—Stormlight Archives, I’m looking at you right now—I think that my favourite mode is when each episode or book is a complete story with just some worldbuilding and essential causal chain carrying over. Dresden Files and Alex Verus stories are a good example. There are a lot of consequences from one story manifesting in the books that follow it, but the books are not direct sequels continuing a single story from where the previous book stopped. Yes, you can pick a cause and effect chain from the very first paragraph from the first book all the way to the end of the series and claim it is a single story, but that is not the point.
I find this kind of serial a lot more friendly than long series. They are approachable because you don’t need to make a commitment to read them all; each book is its own story, and deciding to stop reading a series somewhere in the middle will not spoil the enjoyment of all you have read up to that moment. Trying to stop reading The Lord of The Rings or Stormlight Archives in the middle leaves your understanding of that story incomplete.
Binging and rapid release cycles changed how people engage with series. I noticed that fandoms today are very different from what they were for past series. There is nothing today that approaches the amount of obsession that people had with X-Files and Lost, and I think that binging is the problem.
One of the reasons I prefer serialised storytelling over movies or non-serialised books is that you have time to engage with the material between releases. People would dissect with forensic precision and theorise all the way into absurdity between Lost episodes. The time between episodes was fun and engaging.
We need time to connect with and process stories. That moment in your commute when you somehow end up worrying about what will happen to a fictional character next week: that is what turns people from consumers into fans.
Binging, whole-season dumps, and extremely fast release cycles are all akin to force-feeding to me. Companies are shovelling a ton of content into the audience hoping for some ratings spike that will look good in a shareholders’ report, and failing to create fans.
Consider the Witcher TV series. I find it wonderful. I never played the games, I haven’t read the books, all I know is from the Netflix series and watching some lore videos on Youtube. Even though I find it damn great, I don’t consider myself a fan. I watched the whole season in a weekend. There wasn’t enough time for me to connect with those characters before it was over and I was browsing for the next thing to watch.
I like the experience of slower releases. I don’t mean they can’t be weekly or even twice-a-week releases; what I want is for episodes to reveal themselves over time instead of all at once. Give the audience time to breathe and absorb the story, even if only subconsciously.
This post is not just some random musings; I’ve been deep into research about serialised storytelling because I’m looking for how to write my next project. I graduated from a film school, and my favourite subject there was drinking, I mean scriptwriting. I have some experience with the techniques and tools of storytelling: plot structures, character design, worldbuilding, etc. My main challenge is my English prose skills. English is not my native language, and while I am quite confident in my ability to understand said language, I’m less sure of my skill to write good prose.
I want to make a career out of writing, and the only way to improve my skills is to actually sit down and write. A novel or even a novella is too long a project for me to tackle without confidence. I’ve quit NaNoWriMo a couple of times because of that, which is of course the wrong decision.
What I was looking for was a way to iterate quickly. Creating content, shipping it, and getting feedback on it faster than what a novella would take. Short stories and flash fiction were the obvious options, but I suspect that my main strength lies in that thin causal chain between stories, that sprinkle of lore and worldbuilding that only becomes apparent over works longer than a short story.
Which is how I stumbled upon serialised episodic storytelling. I think I could be good at it. At the least, it would provide a mode fast enough for me to practice and get feedback on my work quickly.
My research on serialisation led me to the usual suspects: RR, Wattpad, Tapas, etc. I don’t have a horse in this race; I like them all. Still, Kindle Vella is the platform I’d like to be on. Even though it is as overwhelmed and dominated by the usual genres as the other platforms I mentioned, Kindle Vella has a clear path to monetisation. Remember, I said I want a career, which means I need to think about that stuff too. Unfortunately, the damn thing is only available in the U.S.
I’m trying to be a glass-half-full person and use my time to create a backlog of episodes so that when Kindle Vella launches in the UK, I’m ready to dive in. If they decide to scrap the thing, I’ll simply post it elsewhere.
Circling back to modes, I’ve seen many authors simply chunking their standard novels into Kindle Vella-friendly morsels and shipping those fragments as episodes. A good way to reuse your backlog, I suppose. Who am I to judge? They have a whole novel ready and are putting it to good use; that is better than me, who only has technical books out but no fiction.
I have a hunch that approaching a Kindle Vella story like an episodic TV series might be a better fit for that format. Let an episode be a complete story and lay a thin causal chain and lore between the episodes.
Browsing Web series and TV series with short run times, 20 minutes or less per episode, is a good way to check out how a series can be told in shorter episodes.
Right now, I’m brainstorming possible series. Murder mysteries are an obvious choice for this kind of serialisation; I just don’t think I could do a full murder mystery story with a ceiling of 5k words. I don’t have the skills, and it is too short for me to weave a compelling mystery and solve it.
I’m firmly in the Fantasy and Science Fiction side of fiction, so I’ve been thinking around classic series I enjoy. Taking their structures and creating something that fits that mould. Star Trek, Stargate, Warehouse 13, X-Files, and Supernatural are all series I really love and are all quite episodic for the most part.
Both Star Trek and Stargate are, at a surface level, about the exploration of unknown places and the adventures that happen there. Star Trek has a pillar that sustains all its series: how to approach moral dilemmas. “Can we make the right decision even when it is harder and it will be dangerous?” The whole space pew pew pew is just background for the characters to face a moral challenge, one that has two clear paths forward: an easy one, and the right one. Take that out and it is no longer Star Trek. Stargate is more about adventure. Of course it has challenging ethical conundrums every now and then, every series does, but that is not the pillar that sustains the series like it is with Star Trek.
I think it would be easier for me to write something more akin to Stargate, which is closer in terms of tone to Warehouse 13 and Supernatural. Adventurous and fun. All three series have a strong lore component that is unveiled slowly in the background. One can appreciate the stories without fully comprehending the lore; each episode is self-contained, but an understanding of the worldbuilding will reward fans more than the simple recognition of a lore fact in an episode. Also, in all those series, the effect of lore on the story grows as seasons pass because the audience is assumed to have become familiar with it. This is way better than requiring the audience to absorb a ton of lore just to start your series out (looking at you, all you TV series that start with a narration infodump).
I feel there is a lack of good episodic Sci Fi in the market these days, but I’m much more comfortable writing Fantasy. I’m really not sure; I might just try both, starting with cozy fantasy.
The Pocket Reform is glorious.
I love small computers. I am the target market for such machines. I adapt very easily to small keyboards and screens; actually, I’m beginning to understand that I might work better and be more focused with such machines than when using large multi-screen setups.
My love for small computers probably began with my Newton MP2000.
My Newton, probably 20 years ago or something
I used that machine a lot. I had the keyboard for it, and it was my favourite device to use for writing. I’d still be using it had the screen not been broken by someone accidentally sitting on top of it while it was under a magazine on the sofa. The Newton was my gateway drug into small computing.
After that I tried my hand at an iPaq, but Windows Mobile sucked. I went back to Newton OS with an eMate 300, a device much slower than my MP2000, but one with an amazing keyboard.
Look at that gorgeous design.
I still have an eMate with me. I changed countries, but I’ve still got a 24-year-old device tagging along with me.
Fun as they are, both Newton OS devices are two decades old, and it is impossible to do most modern workflows with them. They are still wonderful for writing, taking notes, and information management, but they can’t access the Web easily or run modern apps.
I tried a ton of devices in my quest to replace them with something. I was quite happy with an HP Touchpad running WebOS. It gave me the power of a UNIX-like system and a gorgeous UI. WebOS is still one of the best operating systems I’ve ever used. Its demise as a mobile operating system is one moment from our collective computing history that still fills me with sorrow.
Firefox OS, BB10, even Android. I used them all, but they were not the small computing joy I remembered from Newton OS and WebOS. It was quite ironic that the one device that brought all that joy back for me was actually a Windows machine.
My beloved 10-inch Microsoft Surface Go is probably one of my favourite devices. I got it in an emergency: my previous device broke and I didn’t have the budget to buy a Surface Pro. I got the Go as a temporary solution, but in the end I loved it so much that I used it as my main device up until my work changed and I needed a Mac.
Go, Go, Go!
I spoke about it in my one year with the Microsoft Surface Go blog post. I really love that device. Even though I’m typing this on an M1-based MacBook Air, which is a marvel of modern computing, it doesn’t bring me the same joy as using my slow-as-hell little Surface Go.
Small computers have their place. For me, they are about focus. They enable me to be a better writer. I can carry them anywhere. It is easy to place them on small coffee shop tables or balance them on my lap in parks. They’re often not the best machines to do development on, so I don’t feel compelled to context switch when using them and procrastinate on my writing by developing stuff. Small computers are useful.
I know network engineers who love their GPD Pockets full of ports. They are fantastic for troubleshooting networking problems.
A lot of people (me included) love their digital typewriters, such as those made by Freewrite, King Jim, and AlphaSmart, and wouldn’t trade them for full laptops.
Hacker-friendly devices like those made by Pine64 or unleashed by postmarketOS are a wonderful portal into experimenting with new interfaces and workflows.
Just because a pocket device benchmarks worse than your full-blown Snob-cooled Quad Xeon doesn’t mean it isn’t useful to a ton of people.
And for me, that is the beauty of the MNT Pocket Reform. It ticks all the boxes. It is small, and it will probably be exactly as underpowered as I want it to be, since I can choose a weaker CPU module to keep things slow. It is hacker friendly, so I know that I can tinker with it beyond simply changing the background on a Linux distro. IT HAS A FUCKING GREAT ORTHO KEYBOARD!!!!
Sorry, I’m too excited about that device, my bad.
Anyway. These days, I’m mostly a writer, or doing development to support writers. In that space there is some great software (e.g. Scrivener, Vellum) and some great writing hardware (I love my Freewrite Traveller and am curious about the new Freewrite Alpha). But there is nothing on the market that combines being open source with great hardware for writers (such as a proper keyboard) and good software.
The MNT Pocket Reform running Linux is already a writing powerhouse, but I want to get one and move it further. I want to craft a little writing-specific distro of my own. I don’t mean forking Debian and just preinstalling software; I mean crafting some software, not unlike Freewrite or Scrivener, that I can set to take over the machine. A machine dedicated to writing and blogging, one that makes those tasks easier. I think that would be a fun project to do in my spare time.
Growing up in the 80s and 90s was great. My days were filled with RPGs and arcades. It was a glorious time for tabletop RPGs, with so many companies experimenting with crazy ideas. I think it was only recently (in the last 10 years or so) that our ecosystem became richer and more willing to experiment than it was in the early 90s. We are surely living in a golden age of RPG innovation right now.
From those days all the way to right now, I’ve dreamed of being a writer. It was only recently (more precisely this very month) that I realised that I can actually call myself an author. Heck, I have six published books already, I’m probably an author. But those books are not RPG books.
I want to dive deeper into that ecosystem from a writer’s POV. Yes, I’ve published some small stuff on Itch, even some incomplete stuff because I was too busy, but I really want to make that kind of writing a more meaningful aspect of my life. That is why I decided it was time for me to take the jump and enrol in the Write Your First Adventure course from The Storytelling Collective.
Want to write epic content for your favourite tabletop roleplaying games, but don’t know where to start? Start here!
Write, produce, and publish your first one-shot tabletop roleplaying adventure in this one-month-long, self-paced workshop!
— Source: Write Your First Adventure | General Path | Summer 2022
It is a one-month-long program to write a small one-shot adventure. I’m so excited for it. Of the three possible paths, the Chaosium one felt like the right one for me right now. As a game developer, I’m usually more inclined to roll my own stuff, which would mean joining the General Path, but I also want to leverage an existing community of fans and be able to get feedback on my adventure from people outside my own platform. In the 90s and 00s, I was heavily invested in the Call of Cthulhu RPG. It is one of my favourite games, and RuneQuest (in its current Glorantha flavour) has been pulling me in for a while. All that made me opt for the Chaosium path.
Yesterday, I went to Leisure Games (my friendly brick-and-mortar store) and got myself both an up-to-date 7th edition Keeper Rulebook for CoC and a starter set for RuneQuest.
Both games feel so familiar to me, but so fresh. The last Keeper Rulebook I had was the 20th Anniversary one, I guess that is 5th or 6th edition. The 7th edition book looks great. RuneQuest is brand new to me, but it doesn’t seem that hard due to my BRP background.
I’m not sure if I should go with Call of Cthulhu or RuneQuest for the adventure. I want to do both, but that may be overreaching.
As the course moves on, I plan to post regularly about my progress. Subscribe to my feed if you want to follow along.
Every time Apple does anything related to the iPad, no matter what, people will write long treatises about how they can’t be a developer on an iPad. How they want to run Docker, VS Code, PostgreSQL, k8s, Linux, or whatever on their beloved tablets; and how, since they can’t, the iPad is useless.
Of course anyone has the right to voice their opinion. If you feel so strongly about these topics that you must vent, be my guest. I’ll make space for you and listen, trying to understand your point of view. I’m not here to say you’re wrong; I just want to present my understanding of things and why I think some people can’t see the solution to their problems even when it is right there.
Apple is an Experience Company. Much like Disney, what Apple sells is dreams. It sells you on an idea, on a curated experience of what computing should be in their vision. It doesn’t matter if your own personal vision clashes with that, what matters is that their business is based on this idea and everything they make is done in favour of supporting this vision.
When Apple makes a tablet, they don’t want it to be a laptop. They want it to be the truest Apple tablet it can be. Whatever that means, I’m not even sure they know, but that is their guiding star. Apple wants their devices to be true to themselves. That means that a tablet is a tablet, a phone is a phone, and a Mac is a Mac. Their experiences complement each other, but they are not necessarily interchangeable. Some workflows are present in all of them, but others are not.
Looking for experiences that Apple wants to keep in one device class while using another is a recipe for frustration. That is not how Apple rolls.
The iPad has been getting a lot smarter and more flexible since they decided to decouple iPadOS from iOS. My own beloved iPad Mini is getting a ton more usage than I thought it would, but an iPad is not a Mac. You can use Swift Playgrounds and have some development apps such as Working Copy and others. It is still not a Mac. That doesn’t mean it is not useful. Stop looking for the Mac in your iPad, and instead find the iPad in it.
Apple is firmly in the Two Boxes camp. Instead of a single convergent device, they want to build discrete, separate devices with different experiences, because that is what they sell. They sell experiences.
Now, most of those criticisms I mentioned above could be coalesced into a single category of people who want a Single Box convergent solution. They want their tablet to be their computer and more. They are looking at the wrong company for that.
The convergence company is not Apple, it is Microsoft.
Hear me out! Don’t close the tab! Keep reading, I’m going to unpack this.
If you’re a frequent reader here, you might have seen previous posts of mine in which I was using a Surface machine as my primary machine. I rocked a totally underpowered Microsoft Surface Go as my main development machine for more than a year. I had a ton of fun pre-COVID working from anywhere with a 4G-enabled Surface Pro X.
The Surface Pro is what these people are looking for. It is a tablet and a laptop. It can have keyboard, mouse, touch, and pen input. It can transition between those experiences on the fly. It can run both Windows and Linux at the same time. Heck, these days WSL can even run graphical Linux apps.
It can be your development laptop, your notetaking tablet, your media consumption gizmo. Surfaces are damn awesome. I only stopped using one because I wanted macOS development tools over Windows development tools (I mean for native apps and stuff), but I still think that in terms of form factor, hardware design, and flexibility, a Surface trumps everything else. It is a wonderful machine.
The Surface is a worse laptop than a Macbook Pro. It is also a worse tablet than an iPad, but it is a better laptop than an iPad and a better tablet than a Macbook. If you want a single device to be your tablet and laptop, go with Surface.
You can even remove Windows and go full Linux with many Surface models if that is your jam.
People keep asking why they can’t run VS Code, containers, RDBMS, Linux on their iPads and all I can think is why don’t you use a Surface? It can do all of that.
Cymera Festival has been the best convention I’ve been to so far. Everyone was so friendly and generous. An important aspect that plays a huge role in my enjoyment of the event is that, after going to three other cons during the year, I’m starting to know people and can finally arrive at a con and find friends in it. This for me is a game changer, because my first three years in London were very lonely in terms of new friendships, while these cons have been a fountain of new connections.
I spent most of the time just enjoying the sun (which was scorching by both Scottish and broken-Brazilian standards), having pints and wonderful conversations. I went to three workshops:
A small spread from my Goblin Market tarot.
I loved the workshops and took copious notes about everything.
I managed to get many books signed by the authors. Unfortunately, I forgot to get a signature on one of them, even though I spent a good solid hour talking to the author. We were just enjoying the sun; well, there will always be a next time.
I made so many friends. I met writers, poets, editors, publishers, developers, musicians. Wow. It feels so good to be among like-minded people. I had a great time.
The venue is a place called The Pleasance, and it was indeed quite pleasant. It is a U of Edinburgh Student Association venue that is used for the Fringe Festival as well. Don’t get me started on the cheap prices: it was probably the cheapest coffee shop I’ve been to in the UK. A cup of tea was £1.40 and a flat white was below £2. Here in central London, both cost at least £2.70.
Edinburgh is a great place for coffee. There are so many cozy coffee shops there, and the city is so gorgeous.
Anyway, I’m now part of the Edinburgh SFF Discord server. They meet once a month IRL. I will probably hop up to Edinburgh every now and then to join them.
It’s the friends you make along the way…
I was never able to transpile the SSB NPM modules successfully with any tool except browserify. WebPack, Vite, Rollup: they all barfed and ended up with non-working JS. It all boils down to node built-in module polyfills and how those transformations are handled.
There is a ceiling to the size of a JS bundle when you’re trying to ship a WebExtension to the Firefox Add-ons Store: 5MB. If you try to ship a larger file, the portal will just reject it. In theory splitting bundles is easy, right?
Not if you’re using browserify. Anyway, Patchfox was divided into two 4.4MB bundles. One covered the SSB low-level and high-level libraries; the other was all the UI (aka the Patchfox Packages). All that decoupling was done manually. Both parts of the add-on were compiled separately, and glue code connected them using the most awkward shortcuts. It served me well for many years, but now I can’t add more features to Patchfox without blowing past the limit. Patchfox is stuck.
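For the curious, this is roughly what that kind of manual split looks like with browserify’s require/external mechanism. The module and file names below are illustrative, not the actual Patchfox ones:

const fs = require("fs")
const browserify = require("browserify")

// Bundle 1: expose the SSB libraries under their module names so that
// code in the other bundle can require() them at runtime.
browserify()
  .require("ssb-client")
  .bundle()
  .pipe(fs.createWriteStream("ssb-bundle.js"))

// Bundle 2: the UI, with the SSB libraries marked as external so they
// are not compiled in a second time.
browserify("ui/index.js")
  .external("ssb-client")
  .bundle()
  .pipe(fs.createWriteStream("ui-bundle.js"))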
Patchfox has always been built as a companion, a lightweight client that piggybacks onto whatever server you’re running. This worked well for many years but the signs of trouble have been knocking on my door for some months already.
The new metafeeds and per-application feeds are a game changer for SSB, and Patchfox can’t participate in that without control over a server.
When Manyverse shipped without a WebSocket server endpoint, all Patchfox users who migrated to Manyverse lost the ability to use Patchfox. As the userbase of Manyverse grows, the userbase of classic Patchfox becomes more limited.
Lots of engineers inside the companies responsible for building browsers love WebExtensions, but the truth is that Web companies and browser vendors, as entities, treat them as hostile. They don’t really want to make WebExtensions a first-class citizen of the Web. They want just enough features.
The changes to manifest v3 proved to me that Google has complete control over the WebExtension ecosystem and will push it wherever they want whenever they feel they should. Mozilla will do a mea culpa and follow suit because they are not strong enough to resist that pressure at the moment. Mozilla can’t steer the WebExtension ecosystem in any direction that Google doesn’t want it to go. It might be able to steer it in a direction that Google feels is OK even if they don’t really care, but try to ship a feature that makes a dent in Google’s business and you’ll see how quickly this ecosystem fragments.
All of the above shows the lack of agency the Patchfox WebExtension has over its own future: it can’t get more features without being rejected, it can’t adopt cool new SSB stuff because it has no server, and it can’t trust its own ecosystem because it is a tug-of-war between browser vendors.
SOMETHING HAD TO CHANGE!
I struggled for a long time with this decision, but the only way forward for Patchfox is to escape the browser. Patchfox needs to become a desktop application with a server. That will solve all the challenges mentioned above.
What you’re seeing in that video is a work-in-progress sneak peek into the new Patchfox. It is an Electron-based application, because trying to use anything else would just lead to a lot of extra friction which I can’t handle right now.
It is early days; I’ve been working on it for three or four days, but it is shaping up really nicely.
The objective is to have all Patchfox features available. This is the future of Patchfox.
Oh, you are still reading? Ok, let me tell you some of the development details.
I’m reusing as much code from the WebExtension as I can, but I hit a wall getting Svelte to play nicely with true nodejs built-in modules. Of the three or four days I’ve been working on it, I lost half just trying to get the build system working.
See, Svelte is a compiler, not a library, and it really wants to be on the web. The combination of WebPack/Rollup/Vite and Svelte was not working. Svelte wanted new import calls and would convert them to the wrong require calls. If I passed requires on my own, it would complain even when the compiler was set to accept them.
The answers I found online for those challenges have been: “Svelte exposes its own compiler as svelte/compiler, you can fiddle with it at runtime to guarantee the transpilation works”, to which my answer was “fuck you.”
I’ve always hated fantasy js and build systems anyway, so I’m converting everything to Mithril. Fuck fantasy js, Patchfox is coded in real JS, the bugs I write are the ones that end up in the webview.
Patchfox is now developed without a build system. Mithril is used to power all that Svelte was doing before.
I’m slowly converting all the packages from Svelte to Mithril. It is a tedious and error-prone process.
I was going to develop all this in secret and surprise everyone, but that is not really how I roll, so you’re all getting a sneak peek. The code is being worked on in a branch called escape-the-browser, which I’ll push later tonight.
If you want to see more work happening on Patchfox, want it to be a kickass desktop client, and would like to support my work, you can do a one-time or recurring donation at my ko-fi.
Let a thousand clients bloom.
As can be seen in the video, it is quite early. I’ve implemented just some of the views. It is very fast, way faster than the WebExtension version of Patchfox.
The current code is a direct port of the Svelte codebase: I’m just reworking the Svelte templates into Mithril components. It is being done as idiomatic Mithril; at the moment the main objective is getting it all to work, then I’ll polish.
In the video, you’ve seen some single-line messages of type vote (aka likes) flying by, let me quote the full source-code responsible for rendering them so that you can get a feel for the new codebase:
const m = require("mithril")
const AvatarChip = require("../../core/components/AvatarChip.js")

// `ssb` and `patchfox` are app-wide globals provided by the runtime,
// which is why they aren't required here.
const VoteView = {
  oninit: (vnode) => {
    // Placeholder state until the blurb and avatar resolve asynchronously.
    vnode.state.loadingBlurb = true
    vnode.state.loadingAvatar = true
    vnode.state.label = vnode.attrs.msg.value.content.vote.link
    vnode.state.person = vnode.attrs.msg.value.author
  },
  view: (vnode) => {
    let msg = vnode.attrs.msg
    let expression =
      msg.value.content.vote.expression === "Like"
        ? ":heart:"
        : msg.value.content.vote.expression
    let msgid = msg.value.content.vote.link
    let encodedid = encodeURIComponent(msgid)
    // Lazily fetch a short excerpt of the voted message, then redraw.
    if (vnode.state.loadingBlurb) {
      ssb
        .blurbFromMsg(msgid, 50)
        .then((blurb) => {
          vnode.state.label = blurb
          vnode.state.loadingBlurb = false
          m.redraw()
        })
        .catch((n) => {
          console.log("error retrieving blurb for", msgid)
          console.error(n)
        })
    }
    // Resolve the author's name for the avatar chip.
    if (vnode.state.loadingAvatar) {
      ssb.avatar(msg.value.author).then((data) => {
        if (data?.name) {
          vnode.state.person = data.name
          vnode.state.loadingAvatar = false
          m.redraw()
        } else {
          console.log("odd", data)
        }
      })
    }
    // Navigate to the thread; ctrl+click opens it in a new window instead.
    const goThread = (ev) => {
      ev.stopPropagation()
      ev.preventDefault()
      if (typeof msgid === "undefined") {
        throw "Can't go to undefined message id"
      }
      if (ev.ctrlKey) {
        window.open(`?pkg=hub&view=thread&thread=${encodeURIComponent(msgid)}`)
      } else {
        patchfox.go("hub", "thread", { thread: msgid })
      }
    }
    const avatarClick = (ev) => {
      let feed = ev.detail.feed
      patchfox.go("contacts", "profile", { feed })
    }
    return m("p.m-2", [
      m(AvatarChip, {
        inline: true,
        glyph: expression,
        feed: msg.value.author,
        onclick: avatarClick,
      }),
      m(
        "a",
        {
          href: `?pkg=hub&view=thread&thread=${encodedid}`,
          onclick: goThread,
        },
        vnode.state.label
      ),
    ])
  },
}
module.exports = VoteView
It is a very old-school way of doing JS. Just plain-old objects keeping their own state. There is no TEA, no reducers, no actions. It is boring old technology. I like it that way.
Also, don’t you love having real menus? Like a proper desktop app? :D :D :D :D
When you have real menus on macOS, you get integrated help:
I hope you folks enjoy this path moving forward. I’m really excited to ship the new Patchfox as soon as possible.
In this video I’m talking about a creative writing exercise that I love: using Vision Cards from the Everway TTRPG to do daily flash fiction. You can buy Everway at the link below:
Link: Everway Game
For the sake of contextualisation, these are the laptop specs:
HP Dev One specifications:
8-core AMD Ryzen 7 Pro
16GB of DDR4 RAM
1TB of NVMe M.2 storage
14-inch FHD screen
Running Pop!_OS
Over the last couple of decades, we’ve gone from fighting to install Linux on a laptop to having multiple vendors focused on Linux laptops, such as System76, Tuxedo Computers, Framework, and many others. Larger vendors have been offering Linux laptops for developers for a while as well. You can buy a Linux machine from Dell, Lenovo, and soon HP.
This new announcement by HP should signal that the industry is kinda listening. That they’re accepting that at least the development audience wants good Linux machines, and are responding positively by launching specific SKUs. Still, what you see every time one of these laptops reaches the news is a gazillion complaints. I’m not saying that the complaints are not valid, heck, one is free to complain about whatever they want. What bugs me is that there appears to be no winning scenario. There is no endgame for such complaints; I’m starting to believe that there is no way to make this audience truly happy. Or maybe rants get more engagement and people are ranting mostly to feel good.
I’ll not enter into the discussion of price vs performance. There are a lot of variables that go into calculating the price of a laptop; it is not as simple as just summing the bill of materials. Some intangibles can’t be quantified well, things such as branding and so on. In the end it kinda becomes something more to the tune of what can we get away with than some equation you can put on paper. Pricing is hard. What I’ll say about that price is that developers in major tech cities are not usually a struggling class in terms of having a good salary and being able to come up with enough funds to pay $1000 for a machine like this. Of course there are thousands of other developers who are elsewhere and struggling, people in countries where the exchange rate makes a $1000 laptop impossible to acquire. There needs to be a different product for everyone; that is why there are many SKUs. This laptop is clearly not targeted at those situations.
For such a price, you’re getting decent specs. If the build quality is great and all the drivers work out of the box, I can see the value in that laptop. People underestimate how useful it is to have an it-just-works Linux laptop. Of course you can dive into forums and mailing lists, patch a driver or two, and get something more powerful and cheaper to work. That is the nature of Linux. But some people just want to buy something and have it work from the first hour. Paying $1000 for that is OK.
What bugs me in such threads is that there is a group of Linux aficionados who think that Linux is the pinnacle of computing and that everything under the sun will be better when running Linux. This is the main topic I want to address in this post. Don’t get me wrong, I definitely think that running Linux everywhere is a desirable goal; machines should be yours to tinker with. What I want to highlight is that for some people (including this author), running Linux on some machines makes them less useful than whatever they began with. This is what I want to unpack going forward, because a lot of the criticism in these threads is focused on macOS, M1 machines, Windows, etc.
Disclaimer
If your workflow requires you to run Linux, or better, if you *WANT* to run Linux, that is fine. You can do it on any machine you fancy; no one should tell you what you should do. This is not a post advocating you change your preference. This is a post trying to explain why preferences are not universal.
First is the idea that a machine without a top-of-the-line AMD or Intel CPU and at least 64GB of RAM is not useful. Some people need to understand that their particular workflow demanding such specs is not universal. Some developers are just fine with much less powerful machines. The HP Dev One specs might not be useful to you, but that doesn’t mean that other developers are not going to have a great time with it.
I’ve seen comments from people running all sorts of containers, from MongoDB and Redis to complex Machine Learning workflows, denouncing the specs on this machine as useless. This is a desktop-class machine, a simple workstation; it is enough to do desktop development tasks. It is not a server, nor is it spec’d for those workflows. Go grab a beefed-up Xeon desktop + NVidia gizmo and you’ll be happier.
Just because one has a desktop-class machine and is doing these demanding workflows on it doesn’t mean that all desktop-class machines should be expected to handle such workflows. That is a very specialised setup, where you need to run the equivalent of a company’s production environment inside your small laptop. There are laptops geared towards such workflows; they’re a different SKU.
The HN thread is full of rants about Apple and running Linux on M1 machines. I don’t think some people realise that other people actually prefer macOS over Linux. There is a very tight integration between Apple hardware, the operating system, and the official toolkit for developing desktop apps. When you’re running a native macOS application on M1, a lot of gears connect just right for you to have amazing performance and battery life. Running Linux on such machines will not have the same benefits. It will have other benefits, such as running a FOSS stack, but it won’t have the same performance/energy magic that makes it so impressive.
In my opinion, people who really want to run Linux should focus instead on getting laptops that are great with Linux. Heck, the Framework laptop is really well built. A friend of mine has one, and I got to see it for the first time last week; it is beautiful. There is a lot more bang for your buck when you pick a Linux-focused SKU than when you force Linux onto new hardware that was not built for it.
A good example is my beloved Microsoft Surface. When the Surface Pro 4 was current, there was a lot of activity around running Linux on it. People have been running Linux on Surfaces for a long time; if you’re curious about it, check out /r/SurfaceLinux on Reddit. At the time, running Linux on that machine had the following consequences (which are probably no longer true):
At that time, all I could think was: Why did you get a Surface then?! If you really want Linux, just grab a ThinkPad. The Surface and M1-based Macs are what they are because they were built to provide a specific experience.
The Surface is a unique form factor; it is more than a tablet plus a laptop, it is versatile in a way that no other device is. It is your portrait ebook reader, your tablet to consume media or draw on, a laptop for development, etc. The Surface Pro X running Windows 11 can run native ARM32 and ARM64 Windows and Linux applications, and emulated x86 and x64 Linux and Windows applications. It is a wonder.
The M1 Macs are a game changer for people like me who are always away from home/office and don’t want to sacrifice performance. I can edit my videos on battery and have DaVinci Resolve work the exact same way as if I was connected to the wall socket. I can do my programming from a park and compile things just fine. When I prioritise running native macOS apps over Web apps, I get 10 or 12 hours of battery. I’ve never run out of juice on that machine. Heck, one day I was coding at a library and saw that the battery was at 48%; I got really annoyed that the battery was draining so fast, only to realise that I’d been coding there for six hours already. That is how good they are when you use them the way they want to be used.
Some developers like me prefer these experiences over Linux. I ran Linux as my primary OS for years and was never happy with it. I’m a developer and I ship FOSS stuff; because I like open source, I felt I needed to run a FOSS system. I was mostly running Linux due to self-imposed expectations and peer pressure, and I was not happy with it at all. That was not because there is something wrong with Linux; it was because what I wanted was something different. It is OK to want something different.
I hate the arrogant, judgmental mindset that some FOSS advocates spew in every thread about Linux laptops. Some people genuinely think that running Linux makes them better people, which is very strange. As if it were a crime to run anything else and, gasp, be happy with it.
“The Macbook is not really yours!!!” they cry out. Well, it is mine in all the ways that count for me. I can run the OS I want, I can develop the apps I want, and I can write my books. Yes, there are firmware blobs and locked-down parts in it, but the same is true of your phone and probably of the laptop you’re using. If your laptop is blob-free, then tell me again how that makes your work better than mine, because I don’t see it.
These are just tools. What matters most is what you use them for. Want to use a Surface running Windows to create amazing art that will inspire people everywhere? Do it! Don’t let someone tell you that your art would somehow get a shinier aura if you were running Linux. Are you using macOS or anything else to write software, and you’re happy with it? Ignore the people saying that without Linux it is not a true development machine.
Heck, I don’t know how people can think that their subjective personal experience running Linux can simply be extrapolated to 100% of the developers in the world. There is no pinnacle of computing, no perfect machine for developers. Each person has a different idea of what they want and what is good for them; let them be.
Instead, why not cherish the fact that yet another major vendor is looking at Linux in a positive way? Why not be happy that Pop!_OS is being taken seriously by other companies? That acquiring a laptop with Linux no longer requires checking compatibility lists against every chip inside the machine?
Why can’t people be happy that a new Linux laptop exists, even if it doesn’t fit their workflow?
I guess people just enjoy complaining.
]]>Recently I had a ton of fun implementing the Mercury protocol (a subset of the Gemini protocol) on MacOS 9 (I think Gemini is a little gem), and that got me thinking: blogging should be an ideal activity for an older machine. By older, I don’t mean running Linux on a Core 2 Duo; that is just normal computing, even if the performance is a bit annoying. Working on a native app and rekindling my love for blogging got me back into the mood to implement blogging clients. I believe we should have both feed readers and blogging clients for all our machines.
I don’t believe the Web is the best solution for having feed readers and blogging clients everywhere. It is for sure the easiest: as long as the device you’re using has a modern web browser, you’re good to go. But I’ve never used a Web interface that was better than a good native interface. Of course there are crap native apps that compare poorly to well-crafted Web applications; I’m not saying all native apps are better than Web apps. I’m saying that given both a very good Web interface and a very good native interface, I usually prefer the native one.
Native applications have real advantages: they work offline, they integrate tightly with the operating system and file system, they tend to offer more power-user features, and they keep working even when the company behind them disappears.
There is a lot of effort and knowledge that has been poured into graphical user interface toolkits and libraries over the last fifty years; we’d be fools to simply throw it all away in favour of reinventing every wheel with Web interfaces for everything. There are many reasons to ship a Web interface. The most common are that it is easier to make it work on multiple platforms than developing natively, and that it has an easy value proposition that keeps the company that produced it relevant and able to extract profits from those who use their service. Most desktop applications don’t need their mothership once they’re running. I have many apps I love that work just fine even though their parent companies folded.
Another important observation: for the kind of workflows that would label one a power-user, native interfaces tend to provide more features than Web interfaces, which are usually targeted at a broader demographic.
In my opinion there are two key categories of blogging apps. One app can do both, much like the Mozilla Suite used to do Web browsing, Web development, Mail, and News, yet those remained distinct categories. Having them in the same app may make that app more attractive, but having them as independent apps also makes sense.
Reading blogs is the first of those key categories. We call apps in that category feed readers, and they used to be way more popular. Google came in, sucked all the activity into its own feed reader, and then killed it, basically destroying awareness of such apps among new Internet users. To bring back blogging, we need to bring back awareness that feed readers exist. They are not extinct; brand new feed readers pop up every month, they are just under the surface. The blogosphere needs to push feed readers more. I don’t mean push specific apps, but foster awareness of the concept of using a feed reader and curating your own feed.
Curating your own feed is one of the most important selling points of feed readers. Our current web is governed by large organisations who prioritise shareholder profit over everything else. Social networks are the main enemy of blogging. They are silos with algorithms designed to extract attention and value from their users by keeping them inside the platform for as long as possible while inciting emotional responses. Anything that makes a user react and stay inside the platform is good for the shareholders; it doesn’t matter if it is a flamefest, a conspiracy cesspool, or a hate crime, as long as it keeps people spending time inside the social network.
That is why social networks hate blogging. Blogs are by definition independent of each other; a user reading multiple blogs is hopping from web site to web site (or using a feed reader), getting away from what generates profit for the social networks. Their algorithms penalise blogs; they’ll push down anything that moves the user away from the platform. A good example is how Instagram doesn’t allow links in posts. You can’t create a post and link to your blog for further reading.
As a user, dear reader, I believe you want to be in control of your own feed. I don’t mean that in the sense of creating an echo chamber that exposes you only to things you already agree with; I mean that it is your choice to decide what you are exposed to. You should be the one curating a diverse feed, one that prioritises you and your values, not one built to generate money for someone else.
The first step towards that is using a feed reader. Going from blog to blog to see if they have a new post is a delicious time waster. I love doing that with my favourite blogs, but a feed reader lets me suck in content from hundreds of blogs in a fraction of the time it would take to open and check each of them by hand. Feed readers come in basically three varieties: hosted web services, self-hosted servers you run yourself, and native applications.
A feed reader allows one to be in control of their content consumption. In my opinion, native applications are the most liberating: they provide more power-user features and they don’t tie me into someone else’s SaaS.
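The core of a feed reader is also wonderfully small, which is part of why new ones keep appearing. Here is a minimal sketch in NodeJS of the fetch-and-list loop everything else builds on; the feed URL is a placeholder and the fast-xml-parser package is just one way to do the parsing:

```javascript
// Minimal core of a feed reader: fetch a feed, list the posts.
// Assumes Node 18+ (global fetch) and the fast-xml-parser package.
const { XMLParser } = require("fast-xml-parser");

async function listPosts(feedUrl) {
  const xml = await (await fetch(feedUrl)).text();
  const feed = new XMLParser().parse(xml);

  // RSS keeps posts under rss.channel.item; Atom uses feed.entry.
  const items = feed.rss?.channel?.item ?? feed.feed?.entry ?? [];
  for (const item of [].concat(items)) {
    console.log(`- ${item.title}`);
  }
}

// Placeholder feed URL, just for illustration.
listPosts("https://example.com/feed.xml").catch(console.error);
```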
The second category is blogging clients: the applications that allow one to post to their own blog. Most blogging engines come with a Web interface for posting. A good example is WordPress, the most popular blogging engine ever, which probably runs more than 40% of the whole Web (available as a service or self-hosted). Having your own blog allows you to participate in the blogosphere, to be part of the conversation. Instead of sitting in the audience just absorbing content, you can be on the stage helping the play. Blogging clients are tools for those who want more than a simple web interface. I, for example, love that I can compose blog posts in a rich native interface, save them on my machine to post later, and benefit from tight integration with the operating system and file system. All that without needing to be online. We’re so used to ubiquitous networking that many of us have forgotten how refreshing it is to use offline-first desktop applications.
Some of us post to multiple blogs or syndicate our content into multiple silos. A blogging client allows us to use a single application to do all of that. Right now, I’m composing this message in Mars Edit while offline. I can choose to post it either to my online blog or to the decentralised platform Secure Scuttlebutt, all from the same interface.
Leveraging feed readers and blogging clients, one can have their blogosphere experience outside a web browser. I don’t know about you, but my capacity for attention and focus has decreased by some orders of magnitude in recent years. When I’m browsing the web, I’m tempted to open a gazillion tabs and switch among them like a crazy squirrel that can’t choose between fourteen equally appetising acorns. It is also very easy to keep context switching between long-form text consumption (blogs), microblogging (Twitter, Mastodon), all the instant messaging and chat apps (work-related stuff), and more. When I close the browser and use my feed reader and blogging client, I’m in a zone of focus. It is a simple workflow: check things out, and compose responses or commentaries if something sparks my interest.
I can synchronise my feed reader, go outside without Internet, and read all the blog posts. In most cases it doesn’t matter if I’m connected or not; when a post has embedded videos or images, I can simply mark it as unread and come back to it once I’m connected. I can also compose my posts in an application made specifically for that, where I can’t context switch and end up on something unrelated. It is a blogging client; all I can do there is work with blog posts.
That doesn’t mean I think blogs should go independent of the Web; I just think a Web browser should be yet another tool of the blogosphere. Having more tools increases the choices available to each user, and that is a good thing. Casual bloggers might want a simple experience; power-users might want a ton of features in more complex software. To each their own.
A benefit of decoupling blogs from Web browsers is that it unlocks the opportunity for platforms that don’t have a modern web browser to participate in the blogosphere. I’m talking about retro computers, older PDAs, and whatever else you want to use. If the device you want to use can’t handle modern TCP/IP and encryption, there is always the option of running some kind of middleware on a modern device, such as a Raspberry Pi, and interfacing your limited device with it. A feed reader can be constructed so that it presents itself as a serial terminal to older retro computers. That sounds like a cool project.
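As a rough illustration of that serial-terminal idea (a sketch, not a finished project): a small NodeJS gateway on the Pi could fetch feeds and print them as plain text over a serial line, something any retro terminal program can display. The device path, baud rate, and feed URL below are placeholders:

```javascript
// Sketch of a Pi-hosted serial gateway: fetch a feed, print the post
// titles as a plain-text menu on the serial line.
const { SerialPort } = require("serialport");
const { XMLParser } = require("fast-xml-parser");

// Assumed device path and baud rate; adjust for your setup.
const port = new SerialPort({ path: "/dev/ttyUSB0", baudRate: 9600 });

async function fetchTitles(feedUrl) {
  const xml = await (await fetch(feedUrl)).text();
  const feed = new XMLParser().parse(xml);
  return [].concat(feed.rss?.channel?.item ?? []).map((i) => String(i.title));
}

port.on("open", async () => {
  const titles = await fetchTitles("https://example.com/feed.xml");
  port.write("\r\n=== Latest posts ===\r\n");
  titles.forEach((t, i) => port.write(`${i + 1}. ${t}\r\n`));
});
```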
Many computers with amazing keyboards (I’m looking at my eMate 300) can’t really participate on the Web. That doesn’t mean we should give up on them. I believe one could craft a small set of services running at home on a Raspberry Pi that would be both a feed reader and a blogging client for retro computers. It could even include a small blogging engine to generate the blog and upload it, making the whole thing completely self-contained.
Creating such a system is more a matter of connecting some plumbing on a Pi running Linux or a BSD than actually developing everything from scratch. The reason for doing all of this on a Raspberry Pi, and not a VPS or serverless platform, is that you can ditch some security measures inside your local home network, making it easier for older machines to communicate with the Pi. You don’t want insecure software accessible over the Web.
What?! Yeah, you’re now thinking I’m crazy, but let me unpack this for you. All the sections above build up to the idea that by leveraging native apps, we can have blog reading and writing outside Web browsers and even on older machines. If we consider that simply having OPML and RSS/Atom files is enough to build a feed reader experience, then these files could be served outside the Web. They could come to your feed reader over newer decentralisation protocols such as Hypercore and IPFS. Your older device can’t participate in those networks, but the Pi mentioned above can.
A developer could build a decentralisation-first blogging engine: one that generates static files and not only uploads them to a Web-accessible server but also publishes the content to the Hypercore and IPFS networks. In that scenario, you no longer need a Web server to be a blogger. Your blog could be hosted on a Pi at your home and served to readers over Hypercore or IPFS.
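The IPFS half of that really is mostly plumbing. A minimal sketch, assuming the go-ipfs/kubo CLI is installed and the generated site lives in ./public (both placeholders):

```javascript
// After the static site is generated, pin it to IPFS by shelling out
// to the ipfs CLI. `ipfs add -r -Q` adds the directory recursively and
// prints only the root content hash (CID).
const { execFileSync } = require("child_process");

function publishToIpfs(siteDir) {
  const cid = execFileSync("ipfs", ["add", "-r", "-Q", siteDir], {
    encoding: "utf8",
  }).trim();
  console.log(`Site published: readers can fetch it at /ipfs/${cid}`);
  return cid;
}

publishToIpfs("./public");
```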
Not needing a server or enrollment in a service to be a blogger opens up a lot of opportunities for people to participate in the blogosphere. It becomes a lot easier: just install an app and you’re part of it. There are of course many challenging caveats, such as what happens when you’re offline and someone wants to read your blog, but all of those can be solved. A multi-pathway solution like POSSE from the IndieWeb can be adopted, where a local native app running on your home network generates your blog and uploads it to both decentralisation networks and the Web. You’re in control of your platform and there is less gatekeeping.
A cutting-edge Raspberry Pi running all the clever decentralisation protocols, generating a blog that can be read on the Web and outside it, while still allowing the author to write posts on their beloved C64 or Mac Classic. We can have it all, if we just step outside social networks and stop thinking that Web browsers are the pinnacle of blogging.
Everything I said here is already available or requires some minimal plumbing to work. None of this makes the Web less useful, or makes a blog less available to Web readers. It is about unlocking agency and features for the blogosphere, not about damaging the Web.
I’m really excited about blogging, decentralisation, and I love my older machines. I just want to do all of this at the same time. Blogging can be more than just Wordpress and a Web Interface.
]]>At the time I was completely obsessed with the Racket programming language and its offspring Pollen, a very clever static site generator used to write amazing books such as Beautiful Racket. Pollen is great, but it is not geared towards blogging. To make it work, I collated code and techniques from other people who also use it to run their blogs into something I kinda have agency over, though not as much as I’d like. I understand everything that is going on until we reach the Pollen source code itself; then it is all magic. So, let’s talk about how this blog actually works.
This blog is a static site generated using Pollen. A folder hosts all the files that make up the source of the blog, and a build script goes over it using Pollen, assembling the HTML and other text files.
Screenshot of a Finder window showing the source folder for this blog.
The selected file in the screenshot above is the post I made yesterday. An HTML file is created next to the original source file once Pollen processes it. Only the HTML files are published to the website; the source files remain outside the web server’s root folder.
Assembling collection pages such as the index, the RSS feeds, and the tag collections would be very slow if the build had to traverse all the text files every time to find their tags and dates. A clever trick solves that: every time a post is rendered to HTML, metadata about it is inserted into a SQLite database. This database is not crucial and can be deleted at any time; it will simply be recreated from scratch the next time the build script runs. All pages that need to assemble collections of posts query that database instead of traversing the filesystem. It is a bit fragile, and it can fall out of sync with what is on disk if the pipeline ends up rendering a post after rendering a collection.
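The real implementation is Racket/Pollen, but the trick itself is tiny. Here is a sketch of the same idea in JavaScript with the better-sqlite3 package; the table and column names are made up for the example:

```javascript
// The metadata-cache trick: record post metadata at render time,
// query the cache when assembling collection pages.
const Database = require("better-sqlite3");
const db = new Database("blog-cache.db");

// The cache is disposable: recreate the schema if it's missing.
db.exec(`CREATE TABLE IF NOT EXISTS posts (
  path  TEXT PRIMARY KEY,
  title TEXT,
  date  TEXT,
  tags  TEXT
)`);

// Called every time a post is rendered to HTML.
function recordPost(meta) {
  db.prepare(
    "INSERT OR REPLACE INTO posts (path, title, date, tags) VALUES (?, ?, ?, ?)"
  ).run(meta.path, meta.title, meta.date, meta.tags.join(","));
}

// Collection pages (index, feeds, tag pages) query the cache instead
// of traversing every source file on disk.
function latestPosts(limit = 10) {
  return db
    .prepare("SELECT * FROM posts ORDER BY date DESC LIMIT ?")
    .all(limit);
}
```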
That made me create an overly complicated makefile to rebuild all the collection pages whenever any post page is touched, and so on. Still, mistakes creep in, especially if the rendering process crashes and has to be restarted. I don’t mind it much anymore; I’ve learned to embrace a bit of chaos. It is only by accepting a bit of disorder that a software developer can have an effective blog. Without that, you’ll keep reworking your CMS until it is perfect and never actually write any posts. Just ask the developers you know how many posts they made in the last two months and how many changes they made to their blog’s source code in the same period… the answer will be a revelation to both of you.
If you’re a returning reader of this blog, you’ll be aware that I’m a bit old-school. I’m not on the bandwagon of CI/CD deployments and having a cloud system spin up who knows how many containers and tests just to render my grammar mistakes for the world to see. For the longest part of this incarnation of my blog, it lived quite happily on my computer. I’d create posts using Sublime Text and build the blog by issuing commands in a terminal. That workflow served me well for a long time. It even followed me as I moved from a Surface Pro 4, to a Surface Go, to a Surface Pro X, and finally to the current Macbook Air. It was a refreshingly simple setup.
And soon, it was not enough. As I moved back into the Apple ecosystem, I ended up using more and more mobile devices. Suddenly I had an iPhone and an iPad, and oh my, I love the iPad so much (really unexpected; I thought it would just be a tool to carry my notes around). I found myself leaving the laptop home and writing on either the iPad or my Freewrite Traveller. How could I post then? The source code for the site was locked inside my home computer.
To solve that, I migrated the source to the VPS that hosts the blog. There is a copy of the source both on the VPS and on my home machine; each copy is self-contained and enough to regenerate the whole site. They are synchronised using Git, but I don’t do any clever Git stuff: the script just regenerates all the site files, adds them to Git, and sends them over. Once Racket and Pollen were working well on the server, I could create posts by logging in over SSH and doing exactly what I did on the Macbook: write a new text file and run a script. That is not my idea of fun.
I turned again towards the IndieWeb people and implemented a subset of Micropub; I wrote about that yesterday. It enabled me to post from my iPhone and iPad. Basically, all the Micropub server does is accept the h-entry creation request, write the corresponding text file, and run the same script I used to run by hand. Eventually I got tired of Micropub. I’m much more familiar with the metaWeblog API and switched to implementing a server for that protocol instead.
Now, the blog is still a statically generated site. It is assembled by Pollen with lots of bespoke code via a makefile and some shell scripts. There are two servers running: one accepting a subset of the Micropub protocol, the other supporting 99% of the metaWeblog API (I forgot to implement metaWeblog.getUsersBlogs). If I submit a post through either of those protocols, the server creates the corresponding text file in the correct location and invokes a massive shell script that amounts to “please, rebuild the whole site”, because any kind of incremental build script felt too fragile. To be fair, it skips assembling files that didn’t change, so it is incremental in a sense; it just attempts to build everything every time.
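For the curious, the whole accept-a-post-then-rebuild flow fits in very little code. This is not my actual server, just a compressed sketch using the xmlrpc NodeJS package; the file paths, extension, and rebuild script name are placeholders:

```javascript
// Sketch: accept metaWeblog.newPost over XML-RPC, write a source file,
// rebuild the whole site.
const xmlrpc = require("xmlrpc");
const fs = require("fs");
const { execFile } = require("child_process");

const server = xmlrpc.createServer({ host: "127.0.0.1", port: 9090 });

server.on("metaWeblog.newPost", (err, params, callback) => {
  // Spec order: blogid, username, password, struct, publish.
  const [, username, password, post] = params;
  // (A real server would authenticate username/password here.)

  const slug = post.title.toLowerCase().replace(/\W+/g, "-");
  fs.writeFileSync(`posts/${slug}.html.pm`, post.description);

  // "Please, rebuild the whole site."
  execFile("./rebuild-site.sh", (buildErr) => {
    if (buildErr) return callback(buildErr);
    callback(null, slug); // metaWeblog.newPost returns the new post id
  });
});
```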
And here we are. The stack running this blog: Racket and Pollen generating the static files, a makefile and shell scripts orchestrating the build, a SQLite database caching post metadata, two small NodeJS servers (one speaking a subset of Micropub, the other the metaWeblog API), and Git synchronising the source between my home machine and the VPS that serves the site.
If you have any questions, just AMA on Mastodon or Twitter.
Those among you who know Racket and Pollen and have been reading my posts might be wondering how I am posting to the blog using metaWeblog: how can that play well with Pollen markup?! Well, it kinda can’t. MetaWeblog really wants to work with HTML fragments, while Pollen has two markups, Pollen Markup and Pollen Markdown (which is not exactly Markdown). What I’m doing is posting either HTML fragments or plain text. In both cases, the file is saved in Pollen Markdown format and processed as such. That leaves all the basic HTML tags I use when editing rich text in Mars Edit in place, while still letting me switch to plain-text mode and write proper markup if needed. What I can’t do is edit a post as Pollen Markdown, mostly because Mars Edit breaks the newlines; so if I need to edit a post, I edit it as an HTML fragment. Initially a post is sent to the server as either an HTML fragment or Markdown text and is rendered into HTML. If I then edit it, what I edit is the rendered HTML fragment, which is saved back to the original file. It is not ideal, but as I said before, you either embrace the chaos or you’ll use 100% of your time fixing bugs in your CMS instead of writing.
]]>That means my implementation was half-baked. Not only was I working with Racket (a language in which I’m nothing but a beginner), but I also couldn’t implement all of the spec. That meant some crucial parts, like media upload, were missing. I could create new posts on my blog, but they could only consist of styled text. That is not bad, but I enjoy uploading images every now and then.
Enter the metaWeblog API, a blog posting API used by most of the old-school blogging systems (and also by some new ones), one that I know almost by heart, having implemented a client for it all the way back on MacOS 9.
iBlog public beta 7 running on MacOS 9 probably around 2001 or 2002
A client implementing the metaWeblog API uses XML-RPC to communicate with the blog server. I love XML-RPC; I think it is terribly underrated, and I’d rather use it in 2022 than add a Jupiter-sized truckload of complexity to my apps with GraphQL or the new shiny RPC mechanisms. I simply don’t need them; they’re overkill for my use cases, I’m not Facebook. Anyway, I got frustrated with Micropub and decided to implement metaWeblog API support for my static-site generator. It took a bit less than a week to get everything running. At this very moment, I’m typing this message in the wonderful Mars Edit (a brilliant blogging client for macOS) using a rich native interface. I have my fingers crossed that when I hit Send to Blog it will actually work.
A screenshot of Mars Edit showing this very post during my writing process.
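If you’ve never seen the protocol in action, this is roughly what a client like Mars Edit does under the hood when you hit Send to Blog, sketched with the xmlrpc NodeJS package; the endpoint and credentials are placeholders:

```javascript
// One metaWeblog.newPost call over XML-RPC.
const xmlrpc = require("xmlrpc");

const client = xmlrpc.createClient({
  host: "example.com",
  port: 80,
  path: "/xmlrpc",
});

const post = {
  title: "Hello from XML-RPC",
  description: "<p>The body of the post as an HTML fragment.</p>",
};

client.methodCall(
  "metaWeblog.newPost",
  ["blog-id", "username", "password", post, true], // true = publish now
  (err, postId) => {
    if (err) throw err;
    console.log(`Created post ${postId}`);
  }
);
```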
This time I implemented the server using NodeJS. It is much easier for me to craft small toys out of JS. It is not as fun as doing the same thing in Racket, but life is too short and I need this to work, because I want to use my blog and not just develop my blog. It is so refreshing to use a real native application to actually write on my blog. Programmer’s editors such as VSCode and Sublime Text are great when you’re doing development, but I got tired of using them to post to my blog. A richer interface reduces friction; posting becomes effortless.
For those of you reading, and coping with my various tests over the last couple of weeks: I hope you won’t discount XML-RPC or the metaWeblog API from your toolset. Choosing boring, well-understood technology does wonders for one’s sanity. I’m already more productive with my blog than I was with Micropub. Even media uploads are working.
Feel free to reach out and ask me anything about this whole process. I want more people having fun blogging.
]]>Somehow, I ended up working with HTML generators, blogging, and eventually backend programming. Slowly, I crawled from the invisible dungeons of server-side programming into the shiny fast realms of web development. For more than a decade, most of what I’ve done could be classified as web development.
When I started, we were discussing the merits of MooTools vs Dojo Toolkit vs Prototype. jQuery was new and awesome (now it is old and awesome). As the years went on, more and more of my work became either web development or mobile app development.
I noticed recently that I don’t enjoy either of them anymore. First, I avoid mobile apps (a bit of hypocrisy, since I’m writing this blog post on an iPad with iA Writer), so let me unpack that for a second: I use as few mobile apps as I can, and usually not on my iPhone. I make a lot of use of mobile apps on my iPad, but that is probably because they’re closer to desktop apps than to tiny cramped smartphone apps.
I love the web, but I don’t enjoy being a web developer anymore. I think that the ecosystem is moving too fast and in a direction that doesn’t please me at all.
My sweet spot was probably webOS with EnyoJS. Anything after that felt too complex for my subjective taste.
If you enjoy being a web developer that is great. This is not a post to try to discourage you, I’m really happy that you’ve found joy in your work, that is quite rare.
In the last three or four years, all the personal web projects I’ve started have followed the same pattern, one I only noticed recently: I coded them as 100% client-side apps running without backends.
They’re just small tools, such as Little Webby Press, which is an eBook and static-site generator. Originally I thought I was doing that to save the cost and effort of maintaining a secure backend. Now I know I’ve been doing it because I’m trying to distance myself from the web and go back to thinking of applications as desktop apps.
All my recent webapps could have been desktop applications. They would have behaved better and been easier to maintain had I developed them that way. Most of them work with files, manipulating and generating complex nested folders. Doing that on the client side sucks: I need to use fake in-memory filesystems, export them as zip files, and download those to the client machine, all because web browsers can’t freely access the filesystem. That’s for a good reason; I’m not advocating for giving browsers unrestricted access to the user’s disk, I understand the risks involved.
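For a flavour of what that dance looks like (a sketch of the general pattern, not Little Webby Press’s actual code), here is the in-memory-folder-to-zip trick using the JSZip library in the browser:

```javascript
// What "filesystem work" looks like when you're trapped in a browser:
// build a fake nested folder in memory with JSZip, then hand the user
// a zip file to download.
import JSZip from "jszip";

async function exportSite() {
  const zip = new JSZip();
  zip.file("book/index.html", "<h1>Hello</h1>");
  zip.file("book/chapters/one.html", "<p>Chapter one.</p>");

  // Generate the archive in memory and trigger a download.
  const blob = await zip.generateAsync({ type: "blob" });
  const link = document.createElement("a");
  link.href = URL.createObjectURL(blob);
  link.download = "book.zip";
  link.click();
}
```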
What I want to say is that maybe my mind was locked into web development, and that is the wrong paradigm for the apps I want to develop.
Another important trend that I noticed lately is that I’ve been using more desktop apps than web apps. I’m happy with NetNewsWire for my news, iA Writer to do my blog posting via Micropub, Apple Mail instead of GMail, etc.
I enjoy desktop apps, both as a user and as a developer.
I guess I should go back to developing for the desktop (and maybe the iPad).
An ecosystem I would like to explore more is desktop apps that leverage the new decentralisation protocols (such as SSB, Hypercore, and IPFS) to connect their users. Most of the desktop apps I see being developed in those ecosystems are actually web technology in disguise: usually Electron serving backend and frontend from the user’s machine. I’m OK with that; Electron is one of the easiest ways to ship cross-platform hybrid apps. But many of these apps are simply browsers, with feature sets similar to a web browser’s, that fetch their content from non-web peers. I’ve been wondering if there is a place in those ecosystems for decentralised native desktop apps that are not generic content browsers.
Yesterday I did a little experiment. As many people know, my favourite protocol and platform is Secure Scuttlebutt. It is a wonderful community to be a part of, but most of its tech stack is strongly tied to web technologies. So yesterday I decided to make a little toy to liberate it for desktop apps (at least on my machine).
I’m still running the SSB backend server needed to be an active peer in that network, but I coded a very small client that provides an RPC mechanism instead of a web or GUI interface. I chose boring technology for the RPC, something I think is way better for my use case than a lot of the trendy new protocols out there: I went with XML-RPC. SSB has its own RPC mechanism called MUXRPC. What I built is a small server that receives a method call via XML-RPC, forwards it to the running SSB backend via MUXRPC, and then translates the response back.
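A minimal sketch of that bridge, assuming a local sbot is running and using the ssb-client and xmlrpc packages; the exposed method name is made up for the example:

```javascript
// Bridge sketch: an XML-RPC server that forwards calls to the local
// SSB server (sbot) via MUXRPC and returns the result.
const xmlrpc = require("xmlrpc");
const ssbClient = require("ssb-client");

ssbClient((err, sbot) => {
  if (err) throw err;

  const server = xmlrpc.createServer({ host: "127.0.0.1", port: 8099 });

  // Callable from Python, Swift, anything with an XML-RPC library.
  server.on("ssb.publishPost", (rpcErr, params, callback) => {
    const [text] = params;
    sbot.publish({ type: "post", text }, (pubErr, msg) => {
      if (pubErr) return callback(pubErr);
      callback(null, msg.key); // return the new message id to the caller
    });
  });
});
```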
Now, anything on my desktop that has a working XML-RPC library can talk to SSB. I had a ton of fun making small toys in Python, Swift, etc., just for the sake of it. Developing for the desktop is so refreshing. I can even access the filesystem.
I guess I’ll do more desktop stuff going forward, maybe ship some small apps. That sounds way more appealing than doing web development to me.
Is there any kind of decentralised app that you would like to see? What small app do you wish you had available to you on the desktop?
]]>