The technology industry is meant to be inventing the future. That requires keeping an open mind: it's not a given that something new will work - or won't. Everything is an experiment, and sometimes the technologies and ideas that change the world are half a nudge away from something that didn't work at all. That means that neuroplasticity, a sense of play, and optimism are all key skills.
Unfortunately, it's really easy to let them slip, both on the business and the technology side. And the more we let go of them, the more we fall into the trap of thinking that the world is going to stay more or less the same.
On the business side, we've been thinking about funding technology in terms of startups for quite some time. Startups can be great: my job is to find and fund mission-driven ventures, and it's the most rewarding thing I've ever done. Of course, startups can also be harmful or deceptive (think Uber or Theranos); they're a tool that can be used for good or ill. The mechanisms we use to fund them are also tools: equity investment, convertible notes, and SAFEs.
It would be easy to think, this is how I need to fund my business, or this is just how it's done. And it's certainly true that there's a lot of funding out there - more than ever before, in fact - that follows these standard models. Each has roughly the same, simple mechanism at its heart: investors make money through an exit event (an acquisition by another company, or, less commonly, an IPO). The literature makes clear that this is the most established route, and it is.
But that doesn't mean it's the only route by any means, or that it will remain the dominant route in the future. For all its popularity, there are clear drawbacks in the venture investment model. Exit events are relatively rare, and for investors and founders to make a significant return, there is an implied incentive to grow quickly - sometimes unhealthily so. "Unicorns" - startups that quickly grow to be worth $1B or more - are not always supportive places to work, or beneficial to their surrounding communities.
I've seen a lot of interest in revenue sharing investment, as popularized by Indie.vc. Here, investors are paid back through a dividend based on real revenue made by the company, usually with a capped multiple on the original investment. The zebra movement - one of the most exciting things to have happened in startups for decades - advocates for models along these lines, and I strongly agree with a lot of their manifesto. When you dig into the details, there's a lot that still needs to be worked out in order to make the model truly viable - but I know from first-hand experience that it's possible to get there.
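To make the mechanics concrete, here's a minimal sketch of how a capped revenue-share deal pays back. All the numbers and the function name are invented for illustration; real term sheets vary widely.

```python
# Hypothetical illustration of a revenue-share deal: the investor puts in
# capital and receives a fixed share of revenue each year until a capped
# multiple of the original investment has been repaid. All figures invented.

def revenue_share_payback(investment, cap_multiple, revenue_share, yearly_revenues):
    """Return (years_to_cap, total_repaid); years_to_cap is None if the
    cap is never reached within the given revenue stream."""
    cap = investment * cap_multiple
    repaid = 0.0
    for year, revenue in enumerate(yearly_revenues, start=1):
        repaid += revenue * revenue_share
        if repaid >= cap:
            return year, cap  # payments stop once the cap is hit
    return None, repaid

# A $100k investment, 3x cap, 5% of revenue, revenue growing 50% a year:
years, total = revenue_share_payback(
    100_000, 3, 0.05, [500_000 * 1.5 ** n for n in range(10)]
)
```

The key property: the investor's return is bounded by the cap and funded by revenue, so there's no structural pressure toward an exit event.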
Another route is crowdfunding investment. The local news site Berkeleyside raised $1M through a Direct Public Offering - a type of crowdfunding that offers shares directly to the public. Matter's portfolio company RadioPublic has an open crowdfunding campaign right now using something called a crowd safe: an adaptation of a SAFE note that gives equity to a community. (The crowdfunding site Republic lists many such offerings.)
Another is, of course, an ICO. I've been personally skeptical of these - over half die within four months of raising money - but there have been significant success stories. Holo raised a little over 30,000 ETH, which at the time was valued at around $20M. The sector is rife with scammers and even more serious criminals, but if you're building a decentralized platform for the right reasons, it's possible to raise significant funds quickly.
It seems likely to me that we'll see more innovation in the space - and that some iteration on crowdfunding or ICOs (or both together) will eventually spread like wildfire. The point is, the investment tools that we commonly use are conventions, not hard-set rules, and conventions change. They should change. We should be experimenting, while remembering a core set of guiding principles:
1. Many (but not all) ventures require investment at multiple, different points in their lives.
2. Most startups fail, because they are experiments, and they need to be able to fail without recrimination.
3. Founders should retain control of their ventures, and no investment should put the founders or the venture in jeopardy.
4. Investors need to see a potential return on their investment in order to have motivation to invest, and usually have financial models that require them to return a multiple on their total investment dollars.
I'd also hazard to add a fifth, newer idea: startups should do no harm.
On the technology side, technoconservatism is rampant, and even easier to see. When you care about a platform enough, as many of us do about the web and the internet as a whole, it's easy to get trapped in a kind of nostalgia bubble. Rather than seeing the internet as an interconnected set of networks of people, the trap here is to see it as a set of protocols and technologies that must be preserved.
Falling into this trap opens the playing field for exploitation by bad actors - which is something I'll go into in my next post.
Since the ICO "sector" is full of scammers, I wouldn't recommend that anyone use ICOs for a real product. You'd instantly be associating yourself with scammers!
So wait. Where are you?
I guess I’m caught in my filter bubble again. After #DeleteFacebook, maybe you went back to your bicycle and your Polaroid camera. But maybe you’re out there still—can I still find you?
Hey, I have an idea! Just tell me where you’re at. A link to your personal blog or home page or whatever it is.
Not a business or a software project. Not a venture. Just your zany vaporwave page or your sobering, grief-stricken page or your low-key photos of old train cars page. Video, audio, telnet—whatever kind of link. Even a paragraph explaining why you hate links, so I know where you’re at mentally.
Just send a link:
- To my e-mail: email@example.com.
- Or to my Twitter: @kickscondor.
- Or by a Webmention to this page.
Then check back in two days and I am going to build a miniature directory out of these links. I will put a big link to it at the top of this page. I don’t have a predefined scheme in mind for organizing them. I’m going to sort that out once I see what comes in. (And I will also present a raw dump of them in chronological order—the order in which I received them.)
And if you don’t have a link. Well, I’m giving you two days! (Please. I beg of you: Make something with an image map.)
Ok, the backstory. I recently stumbled across a very interesting website called h0p3’s Wiki. It was on Hacker News—the author asks, “Am I allowed to plug my site?” After the audience grants permission, this fantastic link gets dropped. (I haven’t explored all of the writing on the site; I am mostly commenting on its look and uniqueness.)
But I thought: why is it such a taboo to drop a link? We’ve become very hard on self-promotion. I’m not going to just stumble on this person’s website. I NEED THEM TO SHOW ME WHERE IT IS!
I started wondering what else is out there. I don’t want to wait to be on the same subreddit as you. Or to live in the same neighborhood or have the same uncle or be fighting the same cause. In fact, it’s better if there’s no possible connection between us. It would be a rupture in the balloon.
But we could still link to each other. Maybe we don’t need an elaborate system beyond that.
Not entirely terrible but of course a lot of "what about OpenID Connect" and "OAuth2 is not secure enough"..
cool, I hope webmention receiving is coming!!
(i wrote this reply about the storage stuff)
lots of stuff is new, since i haven't posted a changelog since 1.9.4! let's focus on the important things i guess?
- all my html templates are jinja2 now instead of vanilla django - jinja2 is faster and also much more capable, since it supports pretty much arbitrary python expressions rather than a very strict specialised syntax
- lemoncurry now natively serves avatars through the libravatar protocol, which is basically like an open distributed version of gravatar? sadly, the main libravatar server later announced that it's shutting down in october :( my implementation will still work at least, since it's distributed, but i expect fewer services will actually support libravatar after the main server's gone :( :(
- micropub support is way better now - i have a micropub media endpoint, which lets you upload images and stuff for your post before you publish it, posts can be deleted over micropub, and additional properties work now too. neato
- i use messagepack now for caching stuff in redis, since it's faster as well as more space-efficient than the other serialisation formats available
- amp is no longer supported, because i decided amp sucks. you're welcome?
- changed the layout of entries so they now have way less vertical overhead. i did this to encourage me to use notes more, since they're meant to be little status updates like toots and making them Big discouraged me from Doing That
next, i think i might be planning to break backwards compatibility. yep :o
specifically, i store entries internally using fairly typical relational database modeling: fields for single-valued stuff like name and content, many-to-many associations for stuff like cats and syndications, etc. etc.? and i'm running into a mismatch between that structure and what i need the site to handle
specifically, while i can easily produce microformats2 html from that structure, micropub works by sending microformats2 items into the site, which means i need to convert back and forth between mf2 and my internal format. this ends up being a big hassle in some cases! what micropub really wants is for microformats2 to be the site's native format, since that eliminates the need for translation
so basically: i'm planning to change lemoncurry's internal entry format from traditional entity-relationship modeling to native microformats2 json, just like the json i send to its micropub endpoint right now. then i can natively exchange microformats2 items with my site, without the translation mismatch
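The mismatch above can be sketched in a few lines. This is not lemoncurry's actual code - the field names and association lists are invented - but it shows why native mf2 JSON kills the translation step: every property is already a list of values, exactly the shape micropub sends.

```python
# Hypothetical relational row plus many-to-many association tables,
# standing in for the entity-relationship model described above.
entry_row = {"name": "hello world", "content": "my first note"}
entry_categories = ["cats", "indieweb"]           # many-to-many: categories
entry_syndications = ["https://example.com/123"]  # many-to-many: syndication

def to_mf2(row, categories, syndications):
    """Convert the relational shape into a native microformats2 item.
    In mf2 JSON every property maps to a list of values, so single-valued
    fields get wrapped and the association tables drop in unchanged."""
    properties = {k: [v] for k, v in row.items() if v is not None}
    if categories:
        properties["category"] = categories
    if syndications:
        properties["syndication"] = syndications
    return {"type": ["h-entry"], "properties": properties}

mf2 = to_mf2(entry_row, entry_categories, entry_syndications)
```

Storing `mf2` directly means micropub items can be accepted and emitted as-is, with no back-and-forth conversion.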
that's a gigantic change, and i haven't even decided exactly how i wanna implement it. so i'm planning to make it a major version. that's right, lemoncurry 2.0 is on the roadmap!! :o
Awesome! I store everything as microformats2 json, even configuration and such. I kinda went overboard with the storage system though: all nested objects (like replies) are extracted into their own entries and querying has to reassemble the object, looking up any nested object by url :D This is not very fast...
- I’m looking for an outliner!
- Why I’m looking
- I’ve been creating outlines since high school. I started with pen and paper, then moved to text editors—first Word, then Google Docs, and finally, plaintext files. I’ve been creating plaintext outlines for about 5 years. I typically save my outlines as Markdown and export them using Pandoc if they need to be shared with other folks. I use outlines to take notes, plan projects, create documentation, and as todo lists. I’m a plaintext junkie. Whenever and wherever I can, I use plaintext. I edit plaintext with either Neovim, VimR, or nvALT depending on whether I’m in the command line or not. The major hiccup in my plaintext-outline-life is when it comes to editing and creating plaintext outlines on the go. Typically, I use Coda on iOS…which is very much like bringing a siege weapon and a concrete mixer to a Pokémon card tournament. Coda is a spectacular piece of software, but very much not meant for maintaining plaintext outlines, notes and todos. I need a new tool. I want to supercharge my outlining workflow. I’m on the hunt! I’m looking for more than an iOS plaintext editor, I’m looking for a full(ish) featured outliner.
- What I’m looking for
- Cross platform, both macOS and iOS (or web-based)
- Document syncing (Dropbox or SFTP preferable)
- Ability to reorder items in a list
- Preferably scriptable (e.g. I’d like to be able to set up reminders triggered off of certain markup so that I can use outlines as task lists…but this may be asking a lot)
- Seems to be the tried and true tool for this kind of thing
- OmniGroup seems to be leaning into the whole scriptability thing lately
- A wee bit expensive since I’d need to buy both the desktop and basic iOS clients
- Not 100% certain how interoperable their file format is
- “Desktop,” e.g. electron app
- I already have an account
- 250 free notes a month
- Weird to say, but it is just really ugly, aesthetically
- Plaintext 😁
- I love how minimal this app is. It brings almost all of the functionality I’m looking for to what is ostensibly just plaintext!
- Other Thoughts
- @jthingelstad recommended that I also check out mind mapping applications. I’ve been doing a bit of research there, too, but I don’t think they fill the role I’m after, mostly because they don’t mesh wicked well with my way of thinking. I like to think in nested parentheticals
- In writing this post, I’ve come to wonder if the real solution would be to separate concerns a bit?
- Find an app other than Coda for editing and creating plaintext on iOS (Drafts, perhaps?)
- Start to use a more feature-rich todo manager than SwiftoDo
- At this moment I’m leaning towards OmniOutliner or Taskpaper…but I’m incredibly indecisive when it comes to this sort of thing, so who knows.
- Input always welcome!
The inevitable happened: Intel confirms critical security issues with Intel ME: https://security-center.intel.com/advisory.aspx?intelid=INTEL-SA-00086&languageid=en-fr
"This includes scenarios where a successful attacker could: ... Load and execute arbitrary code outside the visibility of the user and operating system."
And of course it affects "6th, 7th & 8th Generation Intel® Core™ Processor Family", meaning pretty much all our desktop/laptop CPUs.
"All our CPUs" is inaccurate. Most people have older CPUs, and AMD Ryzen are the best selling CPUs this year…
It's interesting that you're already doing versions/releases! Sweetroll is 3.5 years old, still version 0.0.0 :D
Looking forward to your implementation of replies and stuff.
After a long time of using fish, and before that zsh, I've returned to being a full time bash user. There was no particular rhyme or reason to my using one shell over another, so in an effort to simplify my setup across my two computers and the heaps of servers I interact with on the daily, I've gone all in on bash.
wow, no particular reason? Not all the zsh features? I've been looking into new hipster shells (ion, xonsh, elvish…) but none of them can fully match the power of my zsh setup yet :D
Today I streamed for ten hours.
I was a little slow getting ready this morning but I managed to be on stream only fifteen minutes late. I managed to get up to the final card in Hand Of Fate, but after playing for six hours I couldn’t beat it, so now I have a dilemma: do I spend another stream trying to defeat that last card boss, or do I just start afresh with Hand Of Fate 2 for my next PC stream?
Afterwards I napped hard on the couch for a few hours and then managed to get the dishes done quickly enough to allow myself time to prep for Alas For The Awful Sea.
Our Alas game this evening was actually awesome. We built some excellent characters and I found my feet GMing it pretty quickly. I’m super keen to see how the game carries on over the next few sessions.
Tomorrow I’m working in the city but I am feeling confident about my ability to get things done and then in the evening I get to go hang out with Melody and maybe Sarah.
wow, 10 hours… I mean, I've seen a 24 hour stream of Sonic '06 :D but that's still a lot
i just implemented support for sending webmentions!
it’s not tested yet, because the source url basically needs to be publicly visible for a webmention from it to work? so this note is me testing it now :3 here’s a bunch of urls from webmention.rocks, which is a service for testing out your webmention sending - putting a url in my entry should cause it to get mentioned!
this test was performed using lemoncurry 1.6.1, a django-powered indieweb site codebase!
hi! do you receive webmentions? :)
I've seen cards with orange edges (in Russia) :D
CSS development isn’t programming in the traditional sense where you have loops, conditions and variables. CSS is going that direction to a degree and Sass paved the way. But the most needed skill in CSS is not syntax. It is to understand what interfaces you describe with it. And how to ensure that they are flexible enough that users can’t do things wrong and get locked out. You can avoid a lot of code when you understand HTML and use CSS to style it.
A lot of “CSS is not real programming” arguments stem from a basic misunderstanding of what CSS is there to achieve. If you want full control over an interface and strive for pixel perfection – don’t use it. If you want to build an interface for an inclusive and diverse web, CSS is a great tool. Writing CSS is describing interfaces, and it needs empathy with the users. It is not about turning a Photoshop file into a web interface. It requires a different skillset and attitude of the maintainer and initial programmer than a backend language would.
So much 🙌 for this.
Yeah, talking about "real" and "not real" programming is weird and kinda pointless. CSS is a domain-specific declarative programming language. You can't practically write general programs in CSS, but that doesn't make it "not real".
In a world before social media, a lot of online communities existed around blog comments. The particular community I was part of – web standards – was all built up around the personal websites of those involved.
As social media sites gained traction, those communities moved away from blog commenting systems. Instead of reacting to a post underneath the post, most people will now react with a URL someplace else. That might be a tweet, a Reddit post, a Facebook emission, basically anywhere that combines an audience with the ability to comment on a URL.
“Oh man, the memories of dynamic text replacement and the lengths we went to just to get some non-standard text.” — One Bright Light ☣️ (@onebrightlight), July 13, 2017 (https://t.co/f0whYW6hh1)
Whether you think that’s a good thing or not isn’t really worth debating – it’s just the way it is now, things change, no big deal. However, something valuable that has been lost is the ability to see others’ reactions when viewing a post. Comments from others can add so much to a post, and that overview is lost when the comments exist elsewhere.
This is what webmentions do
Webmention is a W3C Recommendation that solves a big part of this. It describes a system for one site to notify another when it links to it. It’s similar in concept to Pingback for those who remember that, just with all the lessons learned from Pingback informing the design.
The flow goes something like this.
- Frankie posts a blog entry.
- Alex has thoughts in response, so also posts a blog entry linking to Frankie’s.
- Alex’s publishing software finds the link and fetches Frankie’s post, finding the URL of Frankie’s Webmention endpoint in the document.
- Alex’s software sends a notification to the endpoint.
- Frankie’s software then fetches Alex’s post to verify that it really does link back, and then chooses how to display the reaction alongside Frankie’s post.
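Steps 3 and 4 above can be sketched briefly. This is not a full client - a spec-compliant sender also checks the HTTP `Link` header and handles a few more cases - but it shows the HTML half of endpoint discovery using only the standard library:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

# Minimal webmention endpoint discovery from an HTML document (step 3).
# The first <link> or <a> with rel="webmention" wins; its href is resolved
# relative to the page URL.
class EndpointFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.endpoint = None

    def handle_starttag(self, tag, attrs):
        if self.endpoint is not None or tag not in ("link", "a"):
            return
        a = dict(attrs)
        if "webmention" in (a.get("rel") or "").split() and "href" in a:
            self.endpoint = a["href"]

def discover_endpoint(page_url, html):
    finder = EndpointFinder()
    finder.feed(html)
    if finder.endpoint is None:
        return None
    return urljoin(page_url, finder.endpoint)

# The notification itself (step 4) is then just a form-encoded POST of
# `source` and `target` to the discovered endpoint.
endpoint = discover_endpoint(
    "https://frankie.example/post/1",
    '<html><head><link rel="webmention" href="/webmention"></head></html>',
)
```

The URLs here are invented example hosts, not real endpoints.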
The end result is that by being notified of the external reaction, the publisher is able to aggregate those reactions and collect them together with the original content.
The reactions can be comments, but also likes or reposts, which is quite a nice touch. For the nuts and bolts of how that works, Jeremy explains it better than I could.
Not two minutes ago, I was talking about reactions occurring in places other than blogs, so what about that, hotshot? It would be totally possible for services like Twitter and Facebook to implement Webmention themselves; in the meantime, there are services like Bridgy that can act as a proxy for you. They’ll monitor your social feed and then send corresponding webmentions as required. Nice, right?
I’ve been implementing Webmention for the Perch Blog add-on, which has by and large been straightforward. For sending webmentions, I was able to make use of Aaron Parecki’s PHP client, but the process for receiving mentions is very much implementation-specific so you’re on your own when it comes to how to actually deal with an incoming mention.
Keeping it asynchronous
In order for your mention endpoint not to be a vector for a DoS attack, the spec highly recommends that you make processing of incoming mentions asynchronous. I believe this was a lesson learned from Pingback.
In practice, that means doing as little work as possible when receiving the mention: minimally validating it and adding it to a job queue. Another worker then picks up and processes those jobs at a rate you control.
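A sketch of the synchronous half of such an endpoint, assuming an in-memory list as a stand-in for the real queue (in Perch's case, a database table). Only cheap checks happen here; the expensive verification - fetching the source and confirming it links to the target - is left to the worker.

```python
from urllib.parse import urlparse

job_queue = []  # stands in for a real queue, e.g. a database table

def receive_webmention(source, target, my_hosts=("example.com",)):
    """Cheap synchronous validation; return an HTTP-ish status code:
    202 if the mention was queued for async processing, 400 if rejected."""
    s, t = urlparse(source), urlparse(target)
    if s.scheme not in ("http", "https") or t.scheme not in ("http", "https"):
        return 400
    if source == target:
        return 400
    if t.hostname not in my_hosts:  # target must be a URL we actually host
        return 400
    job_queue.append({"source": source, "target": target})
    return 202

status = receive_webmention(
    "https://alex.example/reply", "https://example.com/post/1"
)
```

Returning `202 Accepted` rather than `200` signals that processing is deferred, which is the pattern the spec encourages.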
In Perch we have a central task scheduler, so that’s fine for this purpose. My job queue is a basic MySQL database table, and I have a scheduled task to pick up the next job and process it once a minute.
I work in publishing, dhaaaling
Another issue that popped up for me in Perch was that we didn’t have any sort of post-published event I could hook into for sending webmentions out to any URLs we link to. Blog posts have a publish status (draft or published in 99% of cases), but they also have a publish date which is dynamically filtered to make posts visible when the date is reached.
If we sent our outgoing webmentions as soon as a post was marked as published, it still might not be visible on the site due to the date filter, causing the process to fail.
The solution was to go back to the task scheduler and again run a task to find newly published posts and fire off a publish event. This is an API event that any other add-on can listen for, so it opens up options for us to do things like auto-tweeting blog posts in the future.
A massive improvement of webmentions over most commenting systems is the affordance in the spec for updating a reaction. If you change a post, your software will re-notify the URLs you link to, sending out more webmention notifications.
A naive implementation would then pull in duplicate content, so it’s important to understand this process and know how to deal with updating (or removing) a reaction when a duplicate notification comes along. For us, that meant also thinking carefully about the moderation logic to try to do the right thing around deciding which content should be re-moderated when it changes.
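One way to sketch the dedupe logic, with invented in-memory storage: treat the (source, target) pair as the key and upsert, so a re-notification updates the stored reaction instead of duplicating it. A deletion arrives the same way - the worker re-fetches the source, and if the link is gone, the reaction is removed.

```python
# Reactions keyed by (source, target); a real system would use a database
# with a unique constraint on the pair.
reactions = {}

def store_reaction(source, target, content):
    """Upsert a reaction. Re-notifications for the same pair overwrite the
    old content and flag it for re-moderation rather than creating a
    duplicate entry."""
    key = (source, target)
    is_update = key in reactions
    reactions[key] = {"content": content, "needs_moderation": True}
    return "updated" if is_update else "created"

first = store_reaction("https://alex.example/r", "https://example.com/p", "Nice post!")
second = store_reaction("https://alex.example/r", "https://example.com/p", "Nice post! (edited)")
```

The `needs_moderation` flag is one simple answer to the re-moderation question: any changed content goes back through the moderation queue.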
Finding the target
One interesting problem I hit in my endpoint code was trying to figure out which blog post was being reacted to when a mention was received. The mention includes a source URL (the thing linking to you) and a target URL (the URL on your site they link to) which in many cases should be enough.
For Perch, we don’t actually know what content you’re displaying on any given URL. It’s a completely flexible system where the CMS doesn’t try to impose a structure on your site – you build the pages you want and pull out the content you want onto those pages. From the URL alone, we can’t tell what content is being displayed.
This required going back to the spec and confirming two things:
- The endpoint advertised with a post is scoped to that one URL. i.e. this is the endpoint that should be used for reacting to content on this page. If it’s another page, you should check that page for its endpoint.
- If an endpoint URL has query string parameters, those must be preserved.
The combination of those two factors means that I can provide an endpoint URL that has the ID of the post built into it. When a mention comes in, I don’t need to look at the target but instead the endpoint URL itself.
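The trick can be shown in a few lines. The `post_id` parameter name and URLs are invented for illustration; the point is that because senders must preserve query parameters, the receiving code can recover the ID from the endpoint URL itself without mapping the target URL onto content.

```python
from urllib.parse import urlencode, urlparse, parse_qs

def endpoint_for_post(base, post_id):
    """Advertise a per-post endpoint with the post ID baked into the
    query string."""
    return f"{base}?{urlencode({'post_id': post_id})}"

def post_id_from_endpoint(url):
    """Read the ID back out when a mention arrives at that endpoint."""
    return parse_qs(urlparse(url).query)["post_id"][0]

endpoint = endpoint_for_post("https://example.com/webmention", "42")
recovered = post_id_from_endpoint(endpoint)
```

Each page advertises its own endpoint URL, so the scoping rule (an endpoint applies only to the page that advertised it) does the routing for free.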
It’s possible that Bridgy might not be compliant with the spec on this point, so it’s something I’m actively testing on this blog first.
With that, after about fifteen years of having them enabled, I’ve disabled comments on this blog. I’m still displaying all the old comments, of course, but for the moment at least I’m only accepting reactions via webmentions.
Welcome to the webmention world :D
It should be pretty much the same as any other interpreter… What was difficult in particular?
It works! Webmention, that is. I didn't even notice that the post is about Micropub, not Webmention :D
h-entry doesn't have
The size of my images changes fluidly with my responsive layout. Since the browser does not know their heights a priori, the space collapses while the images are still loading. Once the images load the entire page reflows and the rest of the content jumps around to make space for them. It would be much better if the space for the images was reserved from the start and, as a bonus, if some lower resolution version of the images displayed, while the images load. Here is how I do it.
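One common way to reserve that space is the intrinsic-ratio trick: a wrapper whose `padding-bottom` percentage encodes the image's aspect ratio, with the image absolutely positioned inside it. This is a generic sketch, not necessarily the author's exact markup - the class name and inline styles are invented.

```python
def placeholder_wrapper(src, width, height):
    """Emit an HTML wrapper that reserves vertical space for an image of
    known dimensions before it loads. padding-bottom as a percentage is
    computed relative to the element's width, so height/width * 100 gives
    the correct aspect-ratio box."""
    ratio = round(height / width * 100, 4)
    return (
        f'<div class="img-wrap" style="position:relative;'
        f'padding-bottom:{ratio}%">'
        f'<img src="{src}" style="position:absolute;top:0;left:0;'
        f'width:100%;height:100%"></div>'
    )

html = placeholder_wrapper("photo.jpg", 1600, 900)
```

A low-resolution preview can then be layered underneath the `img` (e.g. as an inline base64 background), as the comment below describes.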
Nice. I do that as well, but with WebP only (WebP's header is smaller than JPEG's; I remember that Facebook article about their app reconstructing the JPEG header…).
So WebP-supporting browsers get the tiny base64 preview and the full image loading over it, and JPEG-only browsers get a progressive JPEG (mozjpeg-optimized), with the image's dominant color as the background before the JPEG even starts loading.